
Lipschitz Global Optimization


If you have a question about this talk, please contact Pat Wilson.

Global optimization is a thriving branch of applied mathematics, and an extensive literature is dedicated to it. This lecture considers the global optimization problem for a multidimensional function satisfying the Lipschitz condition over a hyperinterval with an unknown Lipschitz constant. The objective function may be a black box, multiextremal, and non-differentiable, and each evaluation of the objective function at a point is assumed to be a time-consuming operation.

Many algorithms for solving this problem have been discussed in the literature. They can be distinguished, for example, by how they obtain information about the Lipschitz constant and by their strategy for exploring the search domain. Different exploration techniques based on various adaptive partition strategies are analyzed, with the main attention devoted to two types of algorithms.

The first type is based on using space-filling curves in global optimization. A family of derivative-free numerical algorithms that apply space-filling curves to reduce the dimensionality of the global optimization problem is discussed. A number of unconventional ideas are presented, such as adaptive strategies for estimating the Lipschitz constant and balancing global and local information to accelerate the search.

Diagonal global optimization algorithms are the second type of methods under consideration. They have a number of attractive theoretical properties and have proved to be efficient in solving applied problems. In these algorithms, the search hyperinterval is adaptively partitioned into smaller hyperintervals, and the objective function is evaluated only at the two vertices corresponding to the main diagonal of each generated hyperinterval. It is demonstrated that traditional diagonal partition strategies fail to meet computational-efficiency requirements because they execute many redundant evaluations of the objective function.
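Both families of methods above rest on the one-dimensional Lipschitz lower-bound idea to which space-filling curves reduce the multidimensional problem. As background, here is a minimal sketch of the classical Piyavskii-Shubert scheme with a known Lipschitz constant; it is an illustration only, not the adaptive algorithms presented in the talk, and the test function and the constant L below are assumptions chosen for the example.

```python
import math

def piyavskii_shubert(f, a, b, L, n_iter=100):
    """Sketch of 1D Lipschitz global minimization (Piyavskii-Shubert).

    Over each subinterval [x1, x2] the Lipschitz condition gives a
    piecewise-linear minorant whose minimum (the "characteristic")
        R = (f1 + f2)/2 - L*(x2 - x1)/2
    is attained at
        t = (x1 + x2)/2 + (f1 - f2)/(2L).
    Each iteration evaluates f at the point with the smallest R.
    """
    pts = [(a, f(a)), (b, f(b))]
    for _ in range(n_iter):
        r_best, t_best = math.inf, None
        for (x1, f1), (x2, f2) in zip(pts, pts[1:]):
            r = 0.5 * (f1 + f2) - 0.5 * L * (x2 - x1)
            if r < r_best:
                r_best = r
                t_best = 0.5 * (x1 + x2) + (f1 - f2) / (2.0 * L)
        pts.append((t_best, f(t_best)))
        pts.sort(key=lambda p: p[0])
    # best sampled point and value
    return min(pts, key=lambda p: p[1])

# Classical multiextremal test function; L = 4.4 overestimates max|f'|
# on [2.7, 7.5], so the minorant is valid.
x_star, f_star = piyavskii_shubert(
    lambda x: math.sin(x) + math.sin(10.0 * x / 3.0), 2.7, 7.5, L=4.4)
```

With a valid overestimate of the Lipschitz constant the minorant never exceeds f, so the scheme cannot discard the global minimizer; the adaptive Lipschitz estimates mentioned above remove the need to know L in advance.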
A new adaptive diagonal partition strategy that avoids such computational redundancy is described, and several powerful multidimensional global optimization algorithms based on it are introduced. Extensive numerical experiments are performed with the GKLS generator, which is nowadays used in more than 40 countries to test numerical methods. The results demonstrate that the proposed methods outperform their competitors both in the number of trials of the objective function and in the quality of the search-domain exploration, characterized by the number of generated hyperintervals.

Selected references:

1. Ya.D. Sergeyev, R.G. Strongin, and D. Lera, Introduction to Global Optimization Exploiting Space-Filling Curves, Springer, New York, 2013.
2. R.G. Strongin and Ya.D. Sergeyev, Global Optimization with Non-Convex Constraints: Sequential and Parallel Algorithms, 3rd ed., Springer, New York, 2014.
3. Ya.D. Sergeyev and D.E. Kvasov, Deterministic Global Optimization: An Introduction to the Diagonal Approach, Springer, New York, 2017.
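To make the diagonal idea concrete, the following is a hedged sketch of a generic diagonal scheme: each hyperinterval is scored by a Lipschitz lower bound computed from its two main-diagonal vertices, the most promising box is bisected, and the parent's diagonal values are reused where corners coincide. The function names, the characteristic, and the Lipschitz estimate L are illustrative assumptions; this is not the partition strategy introduced in the talk.

```python
import math

def diagonal_minimize(f, lo, hi, L, n_iter=60):
    """Sketch of a diagonal global optimization scheme: f is sampled only at
    the two vertices of the main diagonal of each hyperinterval (box)."""
    # A box is (a, b, f(a), f(b)), where a and b are the diagonal vertices.
    boxes = [(tuple(lo), tuple(hi), f(tuple(lo)), f(tuple(hi)))]

    def characteristic(box):
        a, b, fa, fb = box
        # Lipschitz lower bound over the box via its diagonal length.
        return 0.5 * (fa + fb) - 0.5 * L * math.dist(a, b)

    for _ in range(n_iter):
        # Select the most promising box and bisect its longest side.
        i = min(range(len(boxes)), key=lambda k: characteristic(boxes[k]))
        a, b, fa, fb = boxes.pop(i)
        j = max(range(len(a)), key=lambda k: b[k] - a[k])
        m = 0.5 * (a[j] + b[j])
        b1 = tuple(m if k == j else b[k] for k in range(len(b)))
        a2 = tuple(m if k == j else a[k] for k in range(len(a)))
        # The parent values fa, fb are reused; only the two new diagonal
        # vertices are evaluated here.  Traditional diagonal partitions
        # repeat many such evaluations across neighbouring boxes, which is
        # the redundancy the strategy described above eliminates.
        boxes.append((a, b1, fa, f(b1)))
        boxes.append((a2, b, f(a2), fb))

    # Best sampled vertex and its value.
    return min(((p, v) for a, b, fa, fb in boxes for p, v in ((a, fa), (b, fb))),
               key=lambda t: t[1])

# Example: a quadratic on [-2, 2]^2; L = 8 overestimates the gradient norm.
best_pt, best_val = diagonal_minimize(
    lambda x: (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2,
    (-2.0, -2.0), (2.0, 2.0), L=8.0, n_iter=80)
```

Each iteration costs only two new function evaluations regardless of dimension, which is the practical appeal of diagonal methods when evaluations are expensive.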

This talk is part of the Machine Learning @ CUED series.


© 2006-2024 Talks.cam, University of Cambridge.