Sequential Knowledge Transfer Across Problems

Abstract

In this chapter, we build upon the foundations of Chap. 4 to develop a theoretically principled optimization algorithm in the image of an adaptive memetic automaton. For the most part, we retain the abstract interpretation of memes as computationally encoded probabilistic building-blocks of knowledge that can be learned from one task and spontaneously transmitted (for reuse) to another. Most importantly, we assume that the tasks faced by the memetic automaton are put forth sequentially, such that the transfer of memes occurs in a unidirectional manner, from the past to the present. One of the main challenges emerging in this regard is that, given a diverse pool of memes accumulated over time, an appropriate selection and integration of (source) memes must be carried out to induce a search bias that suits the ongoing target task of interest. To this end, we propose a mixture modeling approach capable of adaptive online integration of all available knowledge memes, driven entirely by the data generated during the course of the search. Our proposal is particularly well suited to black-box optimization problems, where task-specific datasets may not be available for offline assessment. We conclude the chapter by illustrating how the basic idea of online mixture modeling extends to computationally expensive problems as well.
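The online mixture idea described above can be sketched roughly as follows, assuming each knowledge meme is represented by a probabilistic source model exposing a density function, and that mixture coefficients over the source models (plus a model of the current task) are re-estimated from the search data by expectation-maximization. The names `fit_mixture_weights` and `Gaussian1D` are illustrative, not the chapter's implementation; real memetic automatons would use richer search-distribution models.

```python
import numpy as np

class Gaussian1D:
    """Toy univariate Gaussian standing in for a probabilistic knowledge meme."""
    def __init__(self, mu, sigma):
        self.mu, self.sigma = mu, sigma

    def pdf(self, x):
        z = (x - self.mu) / self.sigma
        return np.exp(-0.5 * z * z) / (self.sigma * np.sqrt(2.0 * np.pi))

def fit_mixture_weights(population, source_models, target_model, iters=100):
    """EM over mixture coefficients with fixed component densities.

    population    : (n,) or (n, d) array of data generated by the search
    source_models : list of past (source) models, each with a .pdf method
    target_model  : model built on the ongoing target task
    Returns mixture weights (one per model, sources first) summing to 1;
    a near-zero weight flags a source meme as irrelevant to the target.
    """
    models = list(source_models) + [target_model]
    # dens[k, i] = p_k(x_i); components stay fixed, only weights adapt online
    dens = np.array([m.pdf(population) for m in models])
    dens = np.maximum(dens, 1e-300)  # guard against zero densities
    w = np.full(len(models), 1.0 / len(models))
    for _ in range(iters):
        # E-step: posterior responsibility of each model for each sample
        weighted = w[:, None] * dens
        resp = weighted / weighted.sum(axis=0, keepdims=True)
        # M-step: new weight = mean responsibility across the population
        w = resp.mean(axis=1)
    return w

# Usage: search data concentrated near 0 should up-weight the matching source.
rng = np.random.default_rng(0)
population = rng.normal(0.0, 1.0, size=200)
weights = fit_mixture_weights(
    population,
    source_models=[Gaussian1D(0.0, 1.0), Gaussian1D(5.0, 1.0)],
    target_model=Gaussian1D(0.0, 10.0),
)
```

Because the weights are learned purely from samples produced during the search, this kind of integration needs no offline task-specific dataset, which is what makes the approach compatible with the black-box setting discussed in the abstract.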

Cite (APA)

Gupta, A., & Ong, Y. S. (2019). Sequential Knowledge Transfer Across Problems. In Adaptation, Learning, and Optimization (Vol. 21, pp. 63–82). Springer Verlag. https://doi.org/10.1007/978-3-030-02729-2_5
