Recently, a series of works have looked into further improving adapter-based tuning by manually designing better adapter architectures. Understandably, these manually designed architectures are sub-optimal. In this work, we propose the Learned Adapter framework to automatically learn the optimal adapter architectures for better task adaptation of pre-trained models (PTMs). First, we construct a unified search space for adapter architecture designs. To optimize over this search space, we propose a simple yet effective method, GDNAS. Extensive experiments show that our Learned Adapter framework outperforms previous parameter-efficient tuning (PETuning) baselines while tuning a comparable or smaller number of parameters. Moreover, (a) the learned adapter architectures are explainable and transferable across tasks, and (b) we demonstrate that our architecture search space design is valid.
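For context only, the sketch below illustrates the two ideas the abstract mentions: a bottleneck adapter inserted alongside a frozen PTM layer, and a differentiable (DARTS-style softmax) relaxation over a small set of candidate operations so that an architectural choice can be optimized by gradient descent. All names and dimensions (SearchableAdapter, hidden_dim=768, bottleneck_dim=64, the candidate activation set) are illustrative assumptions; this is not the paper's actual search space or its GDNAS optimizer.

```python
# Minimal, hypothetical sketch (PyTorch): a bottleneck adapter whose activation
# is chosen via a DARTS-style differentiable relaxation. Illustrative only;
# NOT the paper's search space or GDNAS.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SearchableAdapter(nn.Module):
    """Bottleneck adapter: down-project, mixed activation, up-project, residual."""

    def __init__(self, hidden_dim: int = 768, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        # Candidate operations for one architectural decision (the activation).
        self.candidates = nn.ModuleList([nn.ReLU(), nn.GELU(), nn.Tanh(), nn.Identity()])
        # Architecture parameters: one logit per candidate, trained by gradient descent.
        self.arch_logits = nn.Parameter(torch.zeros(len(self.candidates)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.down(x)
        # Softmax relaxation: the output is a convex combination of all candidates,
        # so the discrete architectural choice becomes differentiable.
        weights = F.softmax(self.arch_logits, dim=-1)
        h = sum(w * op(h) for w, op in zip(weights, self.candidates))
        return x + self.up(h)  # residual connection preserves the PTM's representation


# Usage: insert the adapter after a frozen transformer sub-layer, train only the
# adapter weights and arch_logits, then keep the highest-weight candidate.
adapter = SearchableAdapter()
hidden_states = torch.randn(2, 16, 768)  # (batch, sequence, hidden)
out = adapter(hidden_states)
print(out.shape)  # torch.Size([2, 16, 768])
```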
Zhang, Y., Wang, P., Tan, M., & Zhu, W. (2023). Learned Adapters Are Better Than Manually Designed Adapters. In Findings of the Association for Computational Linguistics: ACL 2023 (pp. 7420–7437). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.findings-acl.468