2020, issue 1, p. 5-14

Received 07.02.2020; Revised 15.02.2020; Accepted 10.03.2020

Published 31.03.2020; First Online 26.04.2020

https://doi.org/10.34229/2707-451X.20.1.1


 

MSC 65K05, 90C26, 90C15

A STOCHASTIC SMOOTHING METHOD FOR NONSMOOTH GLOBAL OPTIMIZATION

V.I. Norkin 1, 2 *

1 V.M. Glushkov Institute of Cybernetics, The National Academy of Sciences of Ukraine, Kyiv, Ukraine

2 National Technical University of Ukraine "Igor Sikorsky Kyiv Polytechnic Institute"

* Corresponding author

 

Abstract. The paper presents the results of testing the stochastic smoothing method for global optimization of a multiextremal function over a convex feasible subset of Euclidean space. First, the objective function is extended outside the admissible region so that its global minimum does not change and the function becomes coercive. The smoothing of a function at a point is carried out by averaging the values of the function over some neighborhood of this point; the size of the neighborhood is the smoothing parameter. Smoothing eliminates small local extrema of the original function, and for a sufficiently large value of the smoothing parameter the averaged function can have only one minimum. The smoothing method consists of replacing the original function with a sequence of smoothed approximations whose smoothing parameter vanishes to zero, and optimizing these functions by contemporary stochastic optimization methods. Passing from the minimum of one smoothed function to a nearby minimum of the next, one gradually approaches the region of the global minimum of the original function. The smoothing method is also applicable to the optimization of nonsmooth nonconvex functions. It is shown that the smoothing method reliably solves small-dimensional test problems of global optimization from the literature.
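The scheme described in the abstract can be illustrated with a minimal sketch (not the author's implementation): the gradient of the neighborhood-averaged function is estimated by Monte-Carlo two-point differences, and each smoothed approximation is minimized by projected stochastic gradient descent, warm-starting from the previous stage. The Rastrigin test function, the schedule of smoothing parameters, and the step sizes below are illustrative assumptions.

```python
import numpy as np

def rastrigin(x):
    # Classic multiextremal test function; global minimum 0 at the origin.
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def smoothed_grad(f, x, h, rng, n_samples=20):
    """Monte-Carlo estimate of the gradient of the averaged function
    f_h(x) = E[f(x + h*v)], v uniform in the unit ball, using the identity
    grad f_h(x) = (d / (2h)) * E[(f(x + h*u) - f(x - h*u)) * u],
    with u uniform on the unit sphere."""
    d = x.size
    g = np.zeros(d)
    for _ in range(n_samples):
        u = rng.normal(size=d)
        u /= np.linalg.norm(u)  # uniform random direction on the unit sphere
        g += (f(x + h * u) - f(x - h * u)) / (2 * h) * d * u
    return g / n_samples

def smoothing_method(f, x0, box, h_schedule, steps=1000, lr=0.01, seed=0):
    """Minimize a sequence of smoothed approximations of f with a decreasing
    smoothing parameter h, warm-starting each stage from the previous one."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for h in h_schedule:
        step = lr * h  # smaller steps as the smoothing parameter decreases
        for _ in range(steps):
            x = x - step * smoothed_grad(f, x, h, rng)
            x = np.clip(x, box[0], box[1])  # projection onto the feasible box
    return x

x0 = np.array([3.0, 2.5])  # start near a poor local minimum
x_star = smoothing_method(rastrigin, x0, (-5.12, 5.12),
                          h_schedule=[2.0, 1.0, 0.5, 0.2])
print(rastrigin(x0), rastrigin(x_star))
```

With a large initial smoothing parameter the averaged Rastrigin function is nearly quadratic, so the iterates are pulled toward the region of the global minimum before the fine structure of the landscape is restored at small h.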

 

Keywords: global optimization; Steklov smoothing; averaged functions; stochastic optimization; nonsmooth nonconvex optimization.

 

Cite as: Norkin V.I. A Stochastic Smoothing Method for Nonsmooth Global Optimization. Cybernetics and Computer Technologies. 2020. 1. 5–14. https://doi.org/10.34229/2707-451X.20.1.1

 

References

           1.     Steklov V.A. Sur les expressions asymptotiques de certaines fonctions définies par les équations différentielles du second ordre et leurs applications au problème du développement d'une fonction arbitraire en séries procédant suivant les diverses fonctions. Communications de la Société mathématique de Kharkow, Série 2. 1907. 10. 97–199. (in French)

           2.     Gupal A.M. A method for the minimization of almost-differentiable functions. Cybernetics. 1977. 13 (1). 115–117. https://doi.org/10.1007/BF01071397

           3.     Gupal A.M., Norkin V.I. Algorithm for the minimization of discontinuous functions. Cybernetics. 1977. 13 (2). 220–223. https://doi.org/10.1007/BF01073313

           4.     Norkin V.I. Two random search algorithms for the minimization of non-differentiable functions. In Matematicheskie metody issledovaniya operatsyi i teorii nadezhnosti; Ermoliev Y.M., Kovalenko I.N., Eds.; V.M. Glushkov Institute of Cybernetics of the National Academy of Sciences of Ukraine: Kyiv, 1978. 36–40.

           5.     Gupal A.M. Stochastic Methods for Solving Nonsmooth Extremal Problems. Naukova Dumka: Kyiv, 1979. (in Russian)

           6.     Mikhalevich V.S., Gupal A.M., Norkin V.I. Methods of Nonconvex Optimization. Nauka: Moscow, 1987. (in Russian)

           7.     Ermoliev Y.M., Norkin V.I., Wets R.J.-B. The minimization of semicontinuous functions: mollifier subgradients. SIAM J. Control Optim. 1995. 33 (1). 149–167. https://doi.org/10.1137/S0363012992238369

           8.     Yudin D.B. Quantitative Methods for the Analysis of Complex Systems. Izvestia AN SSSR. Tehnicheskaia kibernetika. 1965. 1. 3–13. (in Russian)

           9.     Norkin V.I. Two notes: on the smoothing method in multi-extreme optimization and on the finite-difference method in non-differentiable optimization. Abstracts of the III All-Union Seminar "Numerical Methods of Nonlinear Programming". Kharkiv State University: Kharkiv, 1979. (in Russian)

       10.     Arıkan O., Burachik R.S., Kaya C.Y. Steklov Regularization and Trajectory Methods for Univariate Global Optimization. J. Glob. Optim. 2020. 76 (1). 91–120. https://doi.org/10.1007/s10898-019-00837-3

       11.     Burachik R.S., Kaya C.Y. Steklov Convexification and a Trajectory Method for Global Optimization of Multivariate Quartic Polynomials. 2019. https://arxiv.org/abs/1912.00332v1

       12.     Clarke F.H. Optimization and Nonsmooth Analysis, 2nd ed.; Volume 5 of Classics in Applied Mathematics. SIAM: Philadelphia, PA, 1990. https://doi.org/10.1137/1.9781611971309

       13.     Ermoliev Y.M. Methods of Stochastic Programming. Moscow: Nauka, 1976. (in Russian)

       14.     Polyak B.T. Introduction to Optimization. Optimization Software. New York, 1987.

       15.     Ermoliev Y.M., Norkin V.I. Solution of nonconvex nonsmooth stochastic optimization problems. Cybern. Syst. Anal. 2003. 39 (5). 701–715. https://doi.org/10.1023/B:CASA.0000012091.84864.65

       16.     Bottou L., Curtis F.E., Nocedal J. Optimization Methods for Large-Scale Machine Learning. SIAM Review. 2018. 60 (2). 223–311. https://doi.org/10.1137/16m1080173

       17.     Törn A., Zilinskas A. Global Optimization. Springer: Berlin, 1989. https://doi.org/10.1007/3-540-50871-6

       18.     Qi L., Wan Z., Yang Y.-F. Global Minimization of Normal Quartic Polynomials Based on Global Descent Directions. SIAM J. Optim. 2004. 15 (1). 275–302. https://doi.org/10.1137/s1052623403420857

 

 

ISSN 2707-451X (Online)

ISSN 2707-4501 (Print)


 

 

 

Copyright © 2019-2020 V.M. Glushkov Institute of Cybernetics of the National Academy of Sciences of Ukraine.

All rights reserved.