Meta-strategy for learning tuning parameters with guarantees


Abstract

Online learning methods, such as the online gradient algorithm (OGA) and exponentially weighted aggregation (EWA), often depend on tuning parameters that are difficult to set in practice. We consider an online meta-learning scenario and propose a meta-strategy to learn these parameters from past tasks. The strategy is based on the minimization of a regret bound. It allows us to learn, with guarantees, the initialization and the step size in OGA, as well as the prior or the learning rate in EWA. We provide a regret analysis of the strategy, which identifies settings where meta-learning indeed improves on learning each task in isolation.
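To make the setting concrete, here is a minimal sketch of OGA run on a toy sequence of quadratic losses, with the initialization as the tuning parameter. The function names (`oga`) and the task family (losses f_t(w) = ½‖w − c_t‖²) are illustrative assumptions, not the paper's experimental setup; the meta-update shown in the comment is one simple heuristic motivated by the distance-to-comparator term in the standard OGA regret bound, not the paper's bound-minimization strategy.

```python
import numpy as np

def oga(grads, w0, eta):
    """Online gradient algorithm: w_{t+1} = w_t - eta * g_t(w_t).
    `grads` is a list of gradient oracles, one per round."""
    w = np.array(w0, dtype=float)
    iterates = [w.copy()]
    for g in grads:
        w = w - eta * g(w)
        iterates.append(w.copy())
    return iterates

# Toy task: losses f_t(w) = 0.5 * ||w - c_t||^2, whose gradient is w - c_t.
centers = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.5, 0.5])]
grads = [(lambda w, c=c: w - c) for c in centers]

# Within-task run from a given initialization w0 and step size eta.
iterates = oga(grads, w0=[0.0, 0.0], eta=0.5)

# In a meta-learning loop one would update (w0, eta) across tasks, e.g. by
# moving w0 toward the tasks' solutions: this shrinks the ||w0 - w*||^2 term
# in the OGA regret bound, so later tasks start closer to their optimum.
```

Each call to `oga` is one task; the meta-strategy described in the abstract learns the shared `(w0, eta)` across many such tasks so that the per-task regret bound is as small as possible.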

Citation (APA)

Meunier, D., & Alquier, P. (2021). Meta-strategy for learning tuning parameters with guarantees. Entropy, 23(10). https://doi.org/10.3390/e23101257
