Model Reprogramming: Resource-Efficient Cross-Domain Machine Learning


Abstract

In data-rich domains such as vision, language, and speech, deep learning prevails in delivering high-performance task-specific models and can even learn general task-agnostic representations for efficient finetuning to downstream tasks. However, deep learning in resource-limited domains still faces multiple challenges, including (i) limited data, (ii) constrained model development cost, and (iii) a lack of adequate pre-trained models for effective finetuning. This paper provides an overview of model reprogramming to bridge this gap. Model reprogramming enables resource-efficient cross-domain machine learning by repurposing and reusing a well-developed pre-trained model from a source domain to solve tasks in a target domain without model finetuning, where the source and target domains can be vastly different. In many applications, model reprogramming outperforms transfer learning and training from scratch. This paper elucidates the methodology of model reprogramming, summarizes existing use cases, provides a theoretical explanation of the success of model reprogramming, and concludes with a discussion on open-ended research questions and opportunities.
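To make the idea concrete, below is a minimal PyTorch sketch of the reprogramming recipe the abstract alludes to: a frozen source-domain classifier is repurposed for a target task by learning only a trainable input transformation (here, an additive "program" around the target image) together with a fixed many-to-one mapping from source labels to target labels. The class name Reprogrammer, the border-perturbation design, and the mean-over-mapped-logits aggregation are illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn as nn


class Reprogrammer(nn.Module):
    """Repurpose a frozen source-domain classifier for a target task.

    Only the additive input 'program' (delta) is trainable; the source
    model's weights are never updated, unlike fine-tuning.
    """

    def __init__(self, source_model: nn.Module, source_size: int,
                 target_size: int, label_map: torch.Tensor):
        super().__init__()
        self.source_model = source_model.eval()          # frozen pre-trained model
        for p in self.source_model.parameters():
            p.requires_grad_(False)
        self.pad = (source_size - target_size) // 2      # center the target image
        # trainable universal perturbation over the full source-sized input
        self.delta = nn.Parameter(torch.zeros(1, 3, source_size, source_size))
        # mask = 1 on the border (where the program lives), 0 where the image sits
        mask = torch.ones(1, 1, source_size, source_size)
        mask[..., self.pad:self.pad + target_size,
             self.pad:self.pad + target_size] = 0
        self.register_buffer("mask", mask)
        # label_map[k] lists the source classes assigned to target class k
        self.register_buffer("label_map", label_map)

    def forward(self, x_target: torch.Tensor) -> torch.Tensor:
        # place the (3-channel) target image at the center of a source-sized canvas
        b, _, h, w = x_target.shape
        x = torch.zeros(b, 3, self.delta.size(2), self.delta.size(3),
                        device=x_target.device)
        x[..., self.pad:self.pad + h, self.pad:self.pad + w] = x_target
        # add the learned program only outside the image region
        x = x + torch.tanh(self.delta) * self.mask
        source_logits = self.source_model(x)             # (B, num_source_classes)
        # aggregate source logits into target logits via the fixed label mapping
        return torch.stack([source_logits[:, idx].mean(dim=1)
                            for idx in self.label_map], dim=1)
```

In this sketch, training would optimize only `delta` (e.g., `torch.optim.Adam([model.delta])`) with a standard cross-entropy loss on the mapped target logits; keeping the source model untouched is what distinguishes reprogramming from fine-tuning.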

Citation (APA)

Chen, P. Y. (2024). Model Reprogramming: Resource-Efficient Cross-Domain Machine Learning. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, pp. 22584–22591). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v38i20.30267
