Efficient Cross-Task Prompt Tuning for Few-Shot Conversational Emotion Recognition

Abstract

Emotion Recognition in Conversation (ERC) has been widely studied because of its importance in developing emotion-aware, empathetic machines. The rise of pre-trained language models (PLMs) has further pushed the limits of ERC performance. However, most recent ERC work based on PLMs is heavily data-driven and requires fine-tuning the entire PLM. To improve both sample and computational efficiency, we propose a derivative-free optimization method called Cross-Task Prompt Tuning (CTPT) for few-shot conversational emotion recognition. Unlike existing methods that learn independent knowledge from individual tasks, CTPT leverages sharable cross-task knowledge, exploiting external knowledge from other source tasks to improve learning performance in the few-shot setting. Moreover, CTPT only needs to optimize a low-dimensional vector, exploiting the low intrinsic dimensionality of the task, without computing gradients, which makes it highly training-efficient compared with existing approaches. Experiments on five contextual conversation datasets demonstrate that CTPT achieves superior results in both few-shot scenarios and zero-shot transfer.
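To make the gradient-free, low-intrinsic-dimensionality idea concrete, below is a minimal Python sketch of derivative-free soft-prompt tuning in the general spirit of this line of work (black-box prompt tuning). It is not the paper's actual algorithm: the prompt shape, intrinsic dimension, evolution-strategy hyperparameters, and the synthetic objective are all illustrative assumptions, and the real objective would query a frozen PLM on the few-shot ERC episode instead of a toy quadratic.

```python
import numpy as np

# Illustrative sketch: optimize a low-dimensional vector z without gradients;
# a fixed random projection A maps z to the full soft-prompt parameters.
# All sizes and the objective are assumptions, not the paper's settings.

rng = np.random.default_rng(0)

N_TOKENS, EMBED_DIM = 50, 1024          # assumed soft-prompt shape
D = N_TOKENS * EMBED_DIM                # full prompt parameter count
d = 16                                  # assumed low intrinsic dimensionality

# Fixed random projection: the d-dim vector z parameterizes the D-dim prompt.
A = rng.normal(0.0, 1.0 / np.sqrt(d), size=(D, d))

def few_shot_loss(prompt_flat: np.ndarray) -> float:
    """Stand-in for the black-box objective. In practice this would run the
    frozen PLM with the soft prompt on the few-shot data and return the loss;
    here a synthetic quadratic keeps the sketch self-contained and runnable."""
    target = np.full(D, 0.01)           # fabricated optimum for the toy problem
    return float(np.mean((prompt_flat - target) ** 2))

# Simple (1, lambda) evolution strategy: only loss queries, no gradients.
z = np.zeros(d)
sigma, n_candidates = 0.5, 20
for step in range(200):
    candidates = z + sigma * rng.normal(size=(n_candidates, d))
    losses = [few_shot_loss(A @ c) for c in candidates]
    z = candidates[int(np.argmin(losses))]   # keep the best perturbation
    sigma *= 0.99                            # slowly anneal the step size

print("final toy loss:", few_shot_loss(A @ z))
```

Because only `z` (16 numbers here) is optimized and the objective is queried as a black box, no backpropagation through the PLM is needed; in a CTPT-style setup, knowledge from source tasks would additionally inform the initialization or search space of `z` for the target task.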

Cite (APA)

Xu, Y., Zeng, Z., & Shen, Z. (2023). Efficient Cross-Task Prompt Tuning for Few-Shot Conversational Emotion Recognition. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 11654–11666). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-emnlp.780
