Visual Tuning

  • Yu B
  • Chang J
  • Wang H
  • et al.
Citations: N/A
Readers: 35 (Mendeley users who have this article in their library)

Abstract

Fine-tuning visual models has been widely shown to yield promising performance on many downstream visual tasks. With the rapid development of pre-trained visual foundation models, visual tuning has moved beyond the standard modus operandi of fine-tuning the whole pre-trained model or only the fully connected layer. Instead, recent approaches can outperform full fine-tuning of all pre-trained parameters while updating far fewer of them, enabling edge devices and downstream applications to reuse the increasingly large foundation models deployed on the cloud. To help researchers gain a full picture of visual tuning and its future directions, this survey characterizes a large and thoughtful selection of recent works, providing a systematic and comprehensive overview of existing methods and models. Specifically, it presents a detailed background of visual tuning and categorizes recent visual tuning techniques into five groups: fine-tuning, prompt tuning, adapter tuning, parameter tuning, and remapping tuning. It also outlines promising research directions for prospective pre-training and various interactions in visual tuning.
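To make the parameter-efficient idea concrete, the sketch below shows one of the surveyed families, adapter tuning, in PyTorch: a pre-trained ViT backbone is frozen and only a small bottleneck adapter plus a new classification head are trained. This is a minimal illustration under assumed settings (torchvision's vit_b_16, a 64-dimensional bottleneck, 10 classes), not the specific method of any paper covered by the survey.

    # Minimal adapter-tuning sketch (illustrative; module names and sizes are assumptions).
    import torch
    import torch.nn as nn
    from torchvision.models import vit_b_16, ViT_B_16_Weights

    class Adapter(nn.Module):
        """Bottleneck adapter: down-project, non-linearity, up-project, residual."""
        def __init__(self, dim: int, bottleneck: int = 64):
            super().__init__()
            self.down = nn.Linear(dim, bottleneck)
            self.up = nn.Linear(bottleneck, dim)
            self.act = nn.GELU()

        def forward(self, x):
            return x + self.up(self.act(self.down(x)))

    class AdapterTunedViT(nn.Module):
        """Frozen ViT-B/16 backbone with an adapter on the pooled feature and a new head."""
        def __init__(self, num_classes: int = 10):
            super().__init__()
            self.backbone = vit_b_16(weights=ViT_B_16_Weights.DEFAULT)
            self.backbone.heads = nn.Identity()   # drop the original classifier
            for p in self.backbone.parameters():
                p.requires_grad = False           # freeze all pre-trained weights
            self.adapter = Adapter(dim=768)
            self.head = nn.Linear(768, num_classes)

        def forward(self, x):
            feats = self.backbone(x)              # (B, 768) pooled features
            return self.head(self.adapter(feats))

    model = AdapterTunedViT(num_classes=10)
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    print(f"trainable params: {trainable:,} / {total:,}")  # only adapter + head update

    # Only the small set of trainable parameters is handed to the optimizer.
    optimizer = torch.optim.AdamW(
        (p for p in model.parameters() if p.requires_grad), lr=1e-3
    )

The key point, common to prompt, adapter, parameter, and remapping tuning, is that the optimizer only ever sees a tiny fraction of the model's weights, so many downstream tasks can share one frozen copy of the foundation model.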

Citation (APA)

Yu, B. X. B., Chang, J., Wang, H., Liu, L., Wang, S., Wang, Z., … Chen, C. W. (2024). Visual Tuning. ACM Computing Surveys, 56(12), 1–38. https://doi.org/10.1145/3657632
