Implicit Bilevel Optimization: Differentiating through Bilevel Optimization Programming

Abstract

Bilevel Optimization Programming is used to model complex and conflicting interactions between agents, for example in Robust AI or Privacy-preserving AI. Integrating bilevel mathematical programming within deep learning is therefore an essential objective for the Machine Learning community. Previously proposed approaches only consider single-level programming. In this paper, we extend existing single-level optimization programming approaches and propose Differentiating through Bilevel Optimization Programming (BIGRAD) for end-to-end learning of models that use Bilevel Programming as a layer. BIGRAD has wide applicability, can be used in modern machine learning frameworks, and applies to both continuous and combinatorial bilevel optimization problems. For the combinatorial case, we describe a class of gradient estimators that reduces the computational complexity requirements; for the continuous case, the gradient computation takes advantage of the push-back approach (i.e., vector-Jacobian product) for an efficient implementation. Experiments show that BIGRAD successfully extends existing single-level approaches to Bilevel Programming.
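
The continuous-variable mechanism the abstract refers to, implicit differentiation through the inner problem using vector-Jacobian products, can be sketched as follows. This is a minimal illustration in JAX under standard assumptions (a strongly convex inner objective, so the implicit function theorem applies at the inner optimum), not the authors' BIGRAD implementation; the objectives and all function names here are hypothetical stand-ins.

```python
import jax
import jax.numpy as jnp

# Lower-level objective g(x, y); the inner solution is y*(x) = argmin_y g(x, y).
# A strongly convex quadratic keeps this toy example well-posed.
def inner_obj(x, y):
    return 0.5 * jnp.sum((y - x) ** 2) + 0.1 * jnp.sum(y ** 2)

# Generic iterative inner solver (stands in for any black-box optimizer).
def inner_solve(x, steps=200, lr=0.5):
    y = jnp.zeros_like(x)
    grad_y = jax.grad(inner_obj, argnums=1)
    for _ in range(steps):
        y = y - lr * grad_y(x, y)
    return y

# Upper-level objective f(x, y), evaluated at y = y*(x).
def outer_obj(x, y):
    return jnp.sum(y ** 2)

def hypergrad(x):
    """Implicit hypergradient df/dx = f_x - H_xy H_yy^{-1} f_y,
    derived from the inner optimality condition grad_y g(x, y*(x)) = 0."""
    y = inner_solve(x)
    f_y = jax.grad(outer_obj, argnums=1)(x, y)

    # Solve H_yy v = f_y with conjugate gradients; only Hessian-vector
    # products are needed, the full Hessian is never formed.
    def hvp(v):
        return jax.grad(
            lambda yy: jnp.vdot(jax.grad(inner_obj, argnums=1)(x, yy), v)
        )(y)

    v, _ = jax.scipy.sparse.linalg.cg(hvp, f_y)

    # Vector-Jacobian product with the mixed term H_yx (the push-back step).
    _, vjp_x = jax.vjp(lambda xx: jax.grad(inner_obj, argnums=1)(xx, y), x)
    (mixed,) = vjp_x(v)

    f_x = jax.grad(outer_obj, argnums=0)(x, y)  # zero here, kept for generality
    return f_x - mixed

x = jnp.array([1.0, -2.0])
print(hypergrad(x))  # analytic value is 2 * x / 1.44 for this toy problem
```

The VJP formulation is what makes this efficient: both the conjugate-gradient solve and the mixed-derivative term use only matrix-vector products obtained from automatic differentiation, so no Jacobian or Hessian is ever materialized.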

Citation

Alesiani, F. (2023). Implicit Bilevel Optimization: Differentiating through Bilevel Optimization Programming. In Proceedings of the 37th AAAI Conference on Artificial Intelligence, AAAI 2023 (Vol. 37, pp. 14683–14691). AAAI Press. https://doi.org/10.1609/aaai.v37i12.26716
