Information-Theoretic Multi-task Learning Framework for Bayesian Optimisation


Abstract

Bayesian optimisation is a widely used technique for finding the optima of black-box functions in a sample-efficient way. When several optimisation tasks/functions run concurrently, it may be possible to transfer knowledge between them in a multi-task setting and improve efficiency further. Transferring knowledge requires estimating task similarity, which in turn requires good knowledge of the objective functions. However, in a multi-task Bayesian optimisation setting the number of observations for each function can be small, especially at the beginning, making reliable computation of task similarities difficult. In this paper, we propose a novel multi-task Bayesian optimisation method that uses an information-theoretic approach to transfer knowledge across tasks and handle the uncertainty of similarity measurements in a unified framework. Each optimisation task uses contributions from the other tasks via a mixture model over the location of the optima, constructed by appropriately combining the distributions over optimal locations of the individual tasks. The probability distribution of the optimal location for each task is available because the objective functions are modelled using Gaussian processes. The weights of the mixture distribution are computed from the similarities between pairs of these distributions (measured via KL divergence) and then appropriately weighted down by the uncertainty in that knowledge. That is, we encourage transfer of knowledge only when two tasks are confident about their high similarity measure and discourage it when they are not, even if the measured similarity is high. We evaluate our proposed method and demonstrate its effectiveness on both synthetic functions and a set of hyperparameter tuning tasks, comparing against state-of-the-art algorithms.
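
As a rough illustration of the weighting scheme described in the abstract, the sketch below computes mixture weights from similarities (via exp(-KL) between 1-D Gaussian approximations of each task's distribution over its optimum location), scaled by a per-task confidence factor so uncertain similarity estimates contribute less. This is a minimal sketch under those assumptions; the function names, the exp(-KL) similarity, and the confidence heuristic are hypothetical and not the authors' exact formulation.

    import numpy as np

    def kl_gaussian(mu1, sigma1, mu2, sigma2):
        # KL divergence KL( N(mu1, sigma1^2) || N(mu2, sigma2^2) ) for 1-D Gaussians.
        return (np.log(sigma2 / sigma1)
                + (sigma1**2 + (mu1 - mu2)**2) / (2.0 * sigma2**2)
                - 0.5)

    def mixture_weights(task_dists, target_idx, confidence):
        # Weight each task by its similarity to the target task (low KL => high
        # similarity), then down-weight by a confidence factor so tasks with few
        # observations (uncertain similarity estimates) transfer less knowledge.
        mu_t, sigma_t = task_dists[target_idx]
        sims = []
        for (mu_i, sigma_i), conf in zip(task_dists, confidence):
            kl = kl_gaussian(mu_t, sigma_t, mu_i, sigma_i)
            sims.append(np.exp(-kl) * conf)        # similarity in (0, 1], scaled by confidence
        sims = np.asarray(sims)
        return sims / sims.sum()                   # normalise into mixture weights

    # Toy usage: three tasks with Gaussian beliefs (mean, std) over the optimum's location.
    task_dists = [(0.20, 0.05), (0.25, 0.10), (0.80, 0.30)]
    confidence = [1.0, 0.8, 0.3]                   # higher = more observations for that task
    print(mixture_weights(task_dists, target_idx=0, confidence=confidence))

In the toy run, the second task is similar to the target and moderately confident, so it receives a sizeable weight, whereas the third task is both dissimilar and uncertain and is largely ignored.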

Citation (APA)

Ramachandran, A., Gupta, S., Rana, S., & Venkatesh, S. (2019). Information-Theoretic Multi-task Learning Framework for Bayesian Optimisation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11919 LNAI, pp. 497–509). Springer. https://doi.org/10.1007/978-3-030-35288-2_40
