This paper focuses on the parallelization of Direct Search optimization methods, which belong to the family of derivative-free methods. These methods are known to be quite slow, but they are easily parallelizable and can achieve global convergence in some problems where standard Newton-like methods (based on derivatives) fail. The methods have been tested on the Inverse Additive Singular Value Problem, a difficult, highly nonlinear problem. The results are compared with those obtained with derivative-based methods, and the efficiency of the parallel versions is studied. © Springer-Verlag Berlin Heidelberg 2006.
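To illustrate the family of methods the abstract refers to, the following is a minimal sketch of a compass (coordinate) direct search, a classic derivative-free method: it polls trial points along the coordinate directions and contracts the step when no improvement is found. This is a generic textbook variant for illustration only, not the authors' parallel implementation; the function and starting point are made up for the example.

```python
# Generic compass direct search sketch (derivative-free); illustrative
# only -- not the parallel algorithm described in the paper.
import numpy as np

def compass_search(f, x0, step=1.0, tol=1e-6, max_iter=10000):
    """Minimize f by polling along the +/- coordinate directions."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    it = 0
    while step > tol and it < max_iter:
        improved = False
        for i in range(x.size):
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * step
                ft = f(trial)
                if ft < fx:          # accept an improving poll point
                    x, fx = trial, ft
                    improved = True
        if not improved:
            step *= 0.5              # contract the stencil on failure
        it += 1
    return x, fx

# Hypothetical usage: minimize a simple quadratic with minimum at (1, -2)
x_min, f_min = compass_search(lambda v: (v[0] - 1)**2 + (v[1] + 2)**2,
                              x0=[0.0, 0.0])
```

No gradients are evaluated anywhere, which is what makes such methods applicable when derivatives are unavailable or unreliable; the independent poll evaluations in the inner loop are also what makes the family easy to parallelize.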
CITATION STYLE
Trujillo Rasúa, R. A., Vidal, A. M., & García, V. M. (2006). Parallel optimization methods based on Direct Search. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 3991 LNCS-I, pp. 324–331). Springer Verlag. https://doi.org/10.1007/11758501_46