Investigation of One-Go Evolution Strategy/Quasi-Newton Hybridizations

  • Thomas Bartz-Beielstein
  • Mike Preuss
  • Günter Rudolph
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4030)


It is common knowledge that hybrid approaches can improve the performance of search heuristics. A first phase, exploration, should detect regions of good solutions, whereas a second phase, exploitation, tunes these solutions locally. A combination (hybridization) of global and local optimization techniques is therefore recommended. Although plausible at first sight, it remains unclear how to implement the hybridization, e.g., how to distribute the available resources (number of function evaluations or CPU time) between the global and the local search algorithm. This budget allocation becomes important if the available resources are very limited. We present an approach to analyze hybridization in this case. An evolution strategy and a quasi-Newton method are combined and tested on standard test functions.
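The budget-splitting idea described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a (1+1)-evolution strategy with a 1/5-style success rule for the global phase and a textbook BFGS quasi-Newton method with numerical gradients for the local phase, run on the sphere function. The split parameter `global_fraction` is a hypothetical stand-in for the exogenous parameter that allocates the evaluation budget.

```python
import numpy as np

def sphere(x):
    # Simple quadratic test function; global minimum f(0) = 0.
    return float(np.dot(x, x))

def one_plus_one_es(f, x0, sigma, budget, rng):
    """(1+1)-ES with a simple 1/5-style success rule; spends `budget` evaluations."""
    x, fx = x0, f(x0)
    successes = 0
    for g in range(1, budget):
        y = x + sigma * rng.standard_normal(x.size)
        fy = f(y)
        if fy <= fx:
            x, fx = y, fy
            successes += 1
        if g % 10 == 0:  # adapt step size every 10 generations
            sigma *= 1.5 if successes > 2 else 0.8
            successes = 0
    return x, fx

def num_grad(f, x, h=1e-6):
    # Central-difference gradient approximation.
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def bfgs(f, x0, max_iter):
    """Textbook BFGS with backtracking (Armijo) line search, numerical gradients."""
    n = x0.size
    H = np.eye(n)              # inverse-Hessian approximation
    x, g = x0, num_grad(f, x0)
    for _ in range(max_iter):
        p = -H @ g             # quasi-Newton search direction
        t, fx = 1.0, f(x)
        while f(x + t * p) > fx + 1e-4 * t * (g @ p):
            t *= 0.5           # backtrack until sufficient decrease
            if t < 1e-12:
                return x, f(x)
        x_new = x + t * p
        g_new = num_grad(f, x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        if sy > 1e-12:         # BFGS update; skip if curvature condition fails
            rho = 1.0 / sy
            I = np.eye(n)
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
        if np.linalg.norm(g) < 1e-8:
            break
    return x, f(x)

rng = np.random.default_rng(0)
total_budget = 200
global_fraction = 0.5  # hypothetical exogenous parameter: ES share of the budget
x_es, f_es = one_plus_one_es(sphere, rng.uniform(-5, 5, 2), 1.0,
                             int(global_fraction * total_budget), rng)
x_hyb, f_hyb = bfgs(sphere, x_es, max_iter=20)
print(f"after ES:   f = {f_es:.3e}")
print(f"after BFGS: f = {f_hyb:.3e}")
```

Varying `global_fraction` under a fixed `total_budget` is the kind of allocation question the paper investigates: too little exploration may strand the local search far from good regions, while too much leaves the quasi-Newton phase without enough evaluations to converge.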


Keywords: Local Search, Memetic Algorithm, Exogenous Parameter, Local Search Technique
(These keywords were added by machine and not by the authors.)





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Thomas Bartz-Beielstein (1)
  • Mike Preuss (1)
  • Günter Rudolph (1)

  1. Dortmund University, Dortmund, Germany
