In this paper, we move towards combining large parametric models with non-parametric prototypical networks. We propose prototypical fine-tuning, a novel prototypical framework for fine-tuning pretrained language models (LMs), which automatically learns a bias to improve predictive performance for varying data sizes, especially low-resource settings. Our prototypical fine-tuning approach can automatically adjust the model capacity according to the number of data points and the model's inherent attributes. Moreover, we propose four principles for effective prototypical fine-tuning towards the optimal solution. Experimental results across various datasets show that our work achieves significant performance improvements under various low-resource settings, as well as comparable or better performance in high-resource scenarios.
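For readers unfamiliar with the non-parametric side, the sketch below shows a standard prototypical-network prediction head placed over LM sentence embeddings. It is a minimal illustration of the general technique the abstract refers to, not the paper's exact construction: the function names, the mean-pooled prototypes, and the use of squared Euclidean distance are illustrative assumptions, and the paper's specific contributions (learned prototype bias and the four fine-tuning principles) are not shown.

```python
# Minimal sketch of a prototypical-network head on top of a pretrained LM encoder.
# Assumptions: `embeddings`/`queries` are sentence embeddings (e.g. [CLS] vectors)
# produced by any pretrained LM; this is not the paper's exact formulation.
import torch
import torch.nn.functional as F

def class_prototypes(embeddings: torch.Tensor, labels: torch.Tensor,
                     num_classes: int) -> torch.Tensor:
    """Prototype for class k = mean embedding of the examples labeled k."""
    dim = embeddings.size(-1)
    protos = torch.zeros(num_classes, dim, device=embeddings.device)
    for k in range(num_classes):
        mask = labels == k
        if mask.any():
            protos[k] = embeddings[mask].mean(dim=0)
    return protos

def prototype_logits(queries: torch.Tensor, prototypes: torch.Tensor) -> torch.Tensor:
    """Logits = negative squared Euclidean distance to each class prototype."""
    return -torch.cdist(queries, prototypes).pow(2)

# Fine-tuning would backpropagate a standard cross-entropy loss through both the
# encoder and (in a learnable-prototype variant) the prototypes themselves:
# loss = F.cross_entropy(prototype_logits(query_emb, protos), query_labels)
```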
Jin, Y., Wang, X., Hao, Y., Sun, Y., & Xie, X. (2023). Prototypical Fine-Tuning: Towards Robust Performance under Varying Data Sizes. In Proceedings of the 37th AAAI Conference on Artificial Intelligence, AAAI 2023 (Vol. 37, pp. 12968–12976). AAAI Press. https://doi.org/10.1609/aaai.v37i11.26524