Personalized LoRA for Human-Centered Text Understanding

Citations: 2
Readers (Mendeley): 11

Abstract

Effectively and efficiently adapting a pre-trained language model (PLM) for human-centered text understanding (HCTU) is challenging, since most personalized applications involve millions of user tokens, and these tokens carry no concrete explicit semantics. A standard parameter-efficient approach (e.g., LoRA) would require memorizing a separate suite of adapters for each user. In this work, we introduce personalized LoRA (PLoRA) with a plug-and-play (PnP) framework for the HCTU task. PLoRA is effective, parameter-efficient, and dynamically deployable in PLMs. Moreover, a personalized dropout and a mutual-information-maximization strategy are adopted, so the proposed PLoRA adapts well to few- and zero-shot learning scenarios and mitigates the cold-start issue. Experiments on four benchmark datasets show that the proposed method outperforms existing methods in full-, few-, and zero-shot learning scenarios for the HCTU task, despite having fewer trainable parameters. For reproducibility, the code for this paper is available at: https://github.com/yoyo-yun/PLoRA.
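The abstract describes PLoRA only at a high level. The sketch below is a minimal, illustrative reading of the idea, assuming the per-user signal is a small learned embedding that gates a shared low-rank update, and that "personalized dropout" randomly suppresses the user signal during training so a user-agnostic path remains for zero-shot users. All names here (`PLoRALinear`, `user_emb`, `p_drop`) are hypothetical; the paper's exact formulation, dropout scheme, and mutual-information objective are in the linked repository.

```python
import torch
import torch.nn as nn

class PLoRALinear(nn.Module):
    """Illustrative personalized-LoRA layer (not the paper's exact code).

    A frozen base weight is augmented with a shared low-rank update
    (standard LoRA); a per-user embedding gates the rank dimensions
    so each user gets a cheap, personalized variant of one adapter.
    """
    def __init__(self, in_dim, out_dim, num_users, rank=8, alpha=16.0,
                 p_drop=0.1):
        super().__init__()
        self.base = nn.Linear(in_dim, out_dim, bias=False)  # frozen PLM weight
        self.base.weight.requires_grad_(False)
        # Shared low-rank adapters (trainable), as in standard LoRA.
        self.lora_A = nn.Parameter(torch.randn(rank, in_dim) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_dim, rank))
        self.scaling = alpha / rank
        # One small personalization vector per user, gating the rank dims.
        self.user_emb = nn.Embedding(num_users, rank)
        nn.init.zeros_(self.user_emb.weight)
        # Assumed stand-in for "personalized dropout": occasionally drop
        # the user signal so a user-agnostic (zero-shot) path is learned.
        self.p_drop = p_drop

    def forward(self, x, user_ids):
        # x: (batch, seq, in_dim); user_ids: (batch,)
        gate = torch.sigmoid(self.user_emb(user_ids))        # (batch, rank)
        if self.training and self.p_drop > 0:
            keep = (torch.rand_like(gate[:, :1]) > self.p_drop).float()
            gate = gate * keep + 0.5 * (1 - keep)            # neutral gate when dropped
        low_rank = x @ self.lora_A.t()                       # (batch, seq, rank)
        low_rank = low_rank * gate.unsqueeze(1)              # personalize per user
        delta = low_rank @ self.lora_B.t() * self.scaling    # (batch, seq, out_dim)
        return self.base(x) + delta
```

Under these assumptions the only trainable state is the shared adapter pair plus one rank-sized vector per user, which is why such a scheme stays parameter-efficient at million-user scale and can serve unseen users by falling back to the neutral gate.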

Cite (APA)

Zhang, Y., Wang, J., Yu, L. C., Xu, D., & Zhang, X. (2024). Personalized LoRA for Human-Centered Text Understanding. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, pp. 19588–19596). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v38i17.29931
