Multi-Objective Bayesian Optimization with Active Preference Learning

Abstract

Many real-world black-box optimization problems require optimizing multiple criteria simultaneously. In a multi-objective optimization (MOO) problem, however, identifying the entire Pareto front incurs a prohibitive search cost, while in many practical scenarios the decision maker (DM) only needs a specific solution among the Pareto-optimal set. We propose a Bayesian optimization (BO) approach for identifying the most preferred solution in MOO with expensive objective functions, in which a Bayesian preference model of the DM is adaptively estimated in an interactive manner from two types of supervision, called pairwise preferences and improvement requests. To explore the most preferred solution, we define an acquisition function that incorporates the uncertainty in both the objective functions and the DM preference. Further, to minimize the interaction cost with the DM, we also propose an active learning strategy for preference estimation. We empirically demonstrate the effectiveness of the proposed method on benchmark function optimization and hyper-parameter optimization problems for machine learning models.
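
To make the workflow described in the abstract concrete, below is a minimal, hypothetical sketch of an interactive preference-based multi-objective BO loop. It is not the authors' algorithm: it assumes a linear DM utility, approximates the utility posterior by simple rejection sampling against recorded pairwise preferences, and uses Thompson-style sampling of Gaussian-process surrogates as the acquisition; improvement requests and the paper's active learning strategy are omitted. All names (objectives, dm_prefers, etc.) are illustrative.

```python
# Hypothetical sketch of an interactive preference-based MOBO loop.
# NOT the authors' method: linear utility, rejection sampling over utility
# weights, and Thompson-style acquisition are used as simple stand-ins.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def objectives(x):
    # Toy 2-objective black box on [0, 1] (both objectives to be maximized).
    return np.column_stack([np.sin(3 * x), np.cos(2 * x)])

def dm_prefers(f_a, f_b, true_w=np.array([0.7, 0.3])):
    # Simulated decision maker: pairwise preference from a hidden linear utility.
    return float(f_a @ true_w) > float(f_b @ true_w)

# Initial design and observations.
X = rng.uniform(0, 1, size=(5, 1))
F = objectives(X.ravel())
prefs = []  # list of (i, j) meaning "point i is preferred over point j"

for it in range(15):
    # 1) Fit one GP surrogate per expensive objective.
    gps = [GaussianProcessRegressor(kernel=RBF(0.2), normalize_y=True).fit(X, F[:, k])
           for k in range(F.shape[1])]

    # 2) Approximate the posterior over DM utility weights by keeping sampled
    #    weight vectors consistent with the recorded pairwise preferences.
    W = rng.dirichlet(np.ones(F.shape[1]), size=2000)
    ok = np.ones(len(W), dtype=bool)
    for i, j in prefs:
        ok &= (W @ F[i]) > (W @ F[j])
    W = W[ok] if ok.any() else W

    # 3) Acquisition: draw a Thompson-style sample of both the objective
    #    surrogates and a utility weight vector, then maximize the sampled utility.
    cand = rng.uniform(0, 1, size=(256, 1))
    f_samp = np.column_stack([
        gp.sample_y(cand, random_state=int(rng.integers(1_000_000))).ravel()
        for gp in gps
    ])
    w = W[rng.integers(len(W))]
    x_next = cand[np.argmax(f_samp @ w)]

    # 4) Evaluate the expensive objectives and query the DM against the incumbent.
    f_next = objectives(x_next).ravel()
    incumbent = int(np.argmax(F @ W.mean(axis=0)))
    X = np.vstack([X, x_next[None, :]])
    F = np.vstack([F, f_next])
    new_idx = len(F) - 1
    if dm_prefers(f_next, F[incumbent]):
        prefs.append((new_idx, incumbent))
    else:
        prefs.append((incumbent, new_idx))

best = int(np.argmax(F @ W.mean(axis=0)))
print("estimated most-preferred x:", X[best].ravel(), "objectives:", F[best])
```

Thompson-style sampling is used here only because it couples the two sources of uncertainty (the objective surrogates and the DM preference) in a few lines; the paper instead defines its own acquisition function for this purpose and additionally selects preference queries via active learning.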

Citation (APA)
Ozaki, R., Ishikawa, K., Kanzaki, Y., Takeno, S., Takeuchi, I., & Karasuyama, M. (2024). Multi-Objective Bayesian Optimization with Active Preference Learning. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, pp. 14490–14498). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v38i13.29364
