Optimization problems in engineering very often involve nonlinear functions with multiple minima or discontinuities, or require simulating a system in order to determine its parameters. Global search methods can compute a set of points and provide alternative design answers to a problem, but they are computationally expensive. One answer to this computational cost is parallelization, which in turn requires controlling how the search space is sampled. This paper presents a parallel implementation of two derivative-free optimization methods (Nelder-Mead and Powell), combined with two restart strategies that globalize the search. The first is based on a probability density function, while the second uses a fast algorithm to sample the space uniformly. The implementation is suited to a faculty network, avoiding special hardware requirements, complex installation, and intricate coding details. © Springer Science+Business Media B.V. 2008.
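The core idea of globalizing a derivative-free local method with restarts can be sketched in a few lines. The snippet below is a minimal, serial illustration only: it wraps SciPy's Nelder-Mead (Powell works the same way via `method="Powell"`) in a multistart loop with plain uniform sampling of the search box. The paper's actual contributions, the parallel execution and the two specific restart strategies (PDF-based and fast uniform sampling), are not reproduced here; the test function `himmelblau` and the helper name `multistart` are illustrative choices, not from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def himmelblau(p):
    """A standard multimodal test function with four global minima of value 0."""
    x, y = p
    return (x**2 + y - 11)**2 + (x + y**2 - 7)**2

def multistart(fun, bounds, n_restarts=10, method="Nelder-Mead", seed=0):
    """Restart a derivative-free local optimizer from uniformly sampled points.

    bounds: sequence of (low, high) pairs, one per dimension.
    Returns the best local result found over all restarts.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    best = None
    for _ in range(n_restarts):
        x0 = rng.uniform(lo, hi)                 # sample a new starting point
        res = minimize(fun, x0, method=method)   # local derivative-free search
        if best is None or res.fun < best.fun:
            best = res
    return best

best = multistart(himmelblau, [(-6, 6), (-6, 6)])
print(best.x, best.fun)
```

Each restart is independent, which is what makes the scheme embarrassingly parallel: in the paper's setting, the loop body would be distributed across network workstations, with the restart strategy controlling which starting points each worker receives.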
CITATION STYLE
Koscianski, A., & Luersen, M. A. (2008). Globalization and parallelization of Nelder-Mead and Powell optimization methods. In Innovations and Advanced Techniques in Systems, Computing Sciences and Software Engineering (pp. 93–98). https://doi.org/10.1007/978-1-4020-8735-6_18