In this work we present a strategy for parallelizing gradient-based nonlinear programming techniques, which is particularly suitable for optimization problems whose objective functions and/or constraints are costly to evaluate. The algorithm was conceived to run on heterogeneous distributed computing environments. To reduce communication overhead and improve load balancing, we propose a hybrid computational task-scheduling model.
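The core idea — independent, costly evaluations handed out to workers of unequal speed — can be illustrated with a minimal sketch. The paper itself targets PVM on heterogeneous machines; the sketch below instead uses Python's process pool, where each partial derivative of a finite-difference gradient is an independent task dispatched on demand, so faster workers naturally pick up more tasks (a loose analogue of dynamic load balancing). The objective function and all names here are hypothetical stand-ins, not the authors' code.

```python
# Illustrative sketch only: parallel finite-difference gradient of a
# costly objective, with tasks handed to workers on demand.
from concurrent.futures import ProcessPoolExecutor
import math

def objective(x):
    # Hypothetical stand-in for an expensive objective function.
    return sum(xi * xi for xi in x) + math.sin(x[0])

def partial_derivative(args):
    # One independent task: central finite difference for component i.
    x, i, h = args
    xp = list(x); xp[i] += h
    xm = list(x); xm[i] -= h
    return (objective(xp) - objective(xm)) / (2.0 * h)

def parallel_gradient(x, h=1e-6, workers=4):
    tasks = [(x, i, h) for i in range(len(x))]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        # map() feeds tasks to worker processes as they become free,
        # so slower machines/cores simply complete fewer tasks.
        return list(pool.map(partial_derivative, tasks))

if __name__ == "__main__":
    g = parallel_gradient([1.0, 2.0, 3.0])
    # Analytic gradient is [2*x0 + cos(x0), 2*x1, 2*x2].
    print(g)
```

In a genuinely heterogeneous setting, a hybrid scheme would additionally assign a static baseline chunk of tasks per machine (sized by its relative speed) and only schedule the remainder dynamically, trading scheduling messages against idle time.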
Citation:
Vazquez, G. E., & Brignole, N. B. (1999). Parallel NLP strategies using PVM on heterogeneous distributed environments. In Lecture Notes in Computer Science (Vol. 1697, pp. 533–540). Springer-Verlag. https://doi.org/10.1007/3-540-48158-3_66