Learned Adapters Are Better Than Manually Designed Adapters

Abstract

Recently, a series of works has sought to further improve adapter-based tuning by manually designing better adapter architectures. Understandably, these manually designed solutions are sub-optimal. In this work, we propose the Learned Adapter framework, which automatically learns optimal adapter architectures for better task adaptation of pre-trained models (PTMs). First, we construct a unified search space for adapter architecture designs. As the optimization method over this search space, we propose a simple yet effective method, GDNAS, for better architecture optimization. Extensive experiments show that our Learned Adapter framework outperforms previous parameter-efficient tuning (PETuning) baselines while tuning a comparable or smaller number of parameters. Moreover, (a) the learned adapter architectures are explainable and transferable across tasks, and (b) we demonstrate that our architecture search space design is valid.
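The abstract does not detail the search space or the GDNAS procedure, so the sketch below is only an illustration of the general idea it describes: relaxing discrete adapter design choices (e.g., bottleneck width, non-linearity) into a differentiable mixture, in the spirit of gradient-based NAS. All identifiers, candidate dimensions, and activation choices here are assumptions for illustration, not the paper's actual design.

```python
import torch
import torch.nn as nn


class SearchableAdapter(nn.Module):
    """Illustrative adapter whose architecture choices are relaxed into a
    softmax-weighted mixture of candidates (a DARTS-style relaxation).
    Candidate sets and names are assumptions, not the paper's design."""

    def __init__(self, hidden_dim=768, bottleneck_dims=(8, 16, 32)):
        super().__init__()
        # One candidate bottleneck branch (down- and up-projection) per width.
        self.branches = nn.ModuleList(
            nn.Sequential(nn.Linear(hidden_dim, d), nn.Linear(d, hidden_dim))
            for d in bottleneck_dims
        )
        # Candidate non-linearities (an assumed, illustrative set).
        self.activations = nn.ModuleList([nn.ReLU(), nn.GELU(), nn.Tanh()])
        # Architecture parameters: a relaxed (continuous) choice per decision.
        self.alpha_dim = nn.Parameter(torch.zeros(len(bottleneck_dims)))
        self.alpha_act = nn.Parameter(torch.zeros(len(self.activations)))

    def forward(self, x):
        w_dim = torch.softmax(self.alpha_dim, dim=-1)
        w_act = torch.softmax(self.alpha_act, dim=-1)
        out = 0.0
        # Mix candidate branches and activations, keeping the usual
        # residual connection of bottleneck adapters.
        for wd, branch in zip(w_dim, self.branches):
            down, up = branch[0], branch[1]
            h = down(x)
            h = sum(wa * act(h) for wa, act in zip(w_act, self.activations))
            out = out + wd * up(h)
        return x + out
```

In differentiable architecture search of this kind, the architecture parameters (the alphas) are typically updated on held-out data while the adapter weights are trained on the task loss; after search, the highest-weight candidates are kept as the discrete learned architecture.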

Cite (APA)

Zhang, Y., Wang, P., Tan, M., & Zhu, W. (2023). Learned Adapters Are Better Than Manually Designed Adapters. In Findings of the Association for Computational Linguistics: ACL 2023 (pp. 7420–7437). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.findings-acl.468
