Friend-training: Learning from Models of Different but Related Tasks


Abstract

Current self-training methods, such as standard self-training, co-training, and tri-training, often focus on improving model performance on a single task, exploiting differences in input features, model architectures, and training processes. However, many tasks in natural language processing concern different but related aspects of language, and models trained for one task can be great teachers for other related tasks. In this work, we propose friend-training, a cross-task self-training framework in which models trained on different tasks are used in an iterative training, pseudo-labeling, and retraining process, helping each other select better pseudo-labels. Taking two dialogue understanding tasks, conversational semantic role labeling and dialogue rewriting, as a case study, we show that models trained with the friend-training framework outperform strong baselines.
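The iterative loop described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: `TaskModel`, `friend_training`, and the `agreement_score` callable that stands in for the paper's cross-task pseudo-label selection are all hypothetical placeholders.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple


@dataclass
class TaskModel:
    """Placeholder for a task-specific model (e.g., CSRL or dialogue rewriting)."""
    name: str
    labeled: List[Tuple[str, str]] = field(default_factory=list)

    def train(self) -> None:
        # Fit the model on its current labeled set (training details omitted).
        pass

    def pseudo_label(self, example: str) -> str:
        # Return the model's predicted label for an unlabeled example.
        return f"{self.name}-label"


def friend_training(
    model_a: TaskModel,
    model_b: TaskModel,
    unlabeled: List[str],
    agreement_score: Callable[[str, str], float],
    threshold: float = 0.8,
    rounds: int = 3,
) -> None:
    """Iteratively train, pseudo-label, select, and retrain two friend models."""
    for _ in range(rounds):
        model_a.train()
        model_b.train()
        for x in unlabeled:
            y_a = model_a.pseudo_label(x)
            y_b = model_b.pseudo_label(x)
            # Keep a pseudo-label only when the friend model's prediction on
            # the related task agrees with it strongly enough; the actual
            # cross-task selection criterion is abstracted into agreement_score.
            if agreement_score(y_a, y_b) >= threshold:
                model_a.labeled.append((x, y_a))
                model_b.labeled.append((x, y_b))
```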

Citation (APA)
Zhang, M., Jin, L., Song, L., Mi, H., Zhou, X., & Yu, D. (2023). Friend-training: Learning from Models of Different but Related Tasks. In EACL 2023 - 17th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference (pp. 232–247). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.eacl-main.18
