ConTinTin: Continual Learning from Task Instructions

19 citations · 59 Mendeley readers

Abstract

The mainstream machine learning paradigms for NLP often rest on two underlying assumptions. First, the target task is predefined and static; a system merely needs to learn to solve that one task. Second, the supervision for a task comes mainly from a set of labeled examples. This raises a question: how can we build a system that keeps learning new tasks from their instructions? This work defines a new learning paradigm, ConTinTin (Continual Learning from Task Instructions), in which a system learns a sequence of new tasks one by one, where each task is explained by a piece of textual instruction. The system is required to (i) generate the expected outputs of a new task by learning from its instruction, (ii) transfer the knowledge acquired from upstream tasks to help solve downstream tasks (i.e., forward-transfer), and (iii) retain or even improve performance on earlier tasks after learning new tasks (i.e., backward-transfer). We study this new problem on a stream of more than 60 tasks, each equipped with an instruction. Technically, our method InstructionSpeak contains two strategies that make full use of task instructions to improve forward-transfer and backward-transfer: one learns from negative outputs, the other re-visits the instructions of previous tasks. To our knowledge, this is the first study of ConTinTin in NLP. Beyond the problem formulation and our promising approach, this work also provides rich analyses to help the community better understand this novel learning problem.
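
To make the paradigm concrete, below is a minimal Python sketch of the training loop the abstract describes, under one plausible reading of the two InstructionSpeak strategies. All names (Task, Model, contintin_loop, revisit_every) are hypothetical stand-ins, not the authors' implementation; train_on is a placeholder for instruction-conditioned fine-tuning of a text-to-text model.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Task:
        name: str
        instruction: str                 # textual description of the task
        positive_examples: List[str] = field(default_factory=list)
        negative_examples: List[str] = field(default_factory=list)  # outputs to avoid

    class Model:
        """Placeholder for an instruction-following text-to-text model."""
        def train_on(self, instruction: str, outputs: List[str], negative: bool = False) -> None:
            # Strategy 1: besides fitting positive outputs, the model also
            # learns from negative outputs (negative=True), i.e., what *not*
            # to generate for this instruction.
            ...

    def contintin_loop(model: Model, task_stream: List[Task], revisit_every: int = 1) -> Model:
        """Learn a stream of instruction-defined tasks one by one."""
        seen: List[Task] = []
        for t, task in enumerate(task_stream):
            # Learn the new task from its instruction plus a few examples
            # (no large task-specific training set).
            model.train_on(task.instruction, task.positive_examples)
            model.train_on(task.instruction, task.negative_examples, negative=True)

            # Strategy 2: periodically re-visit instructions of earlier tasks
            # to mitigate forgetting (backward-transfer).
            if seen and t % revisit_every == 0:
                for prev in seen:
                    model.train_on(prev.instruction, prev.positive_examples)

            seen.append(task)
        return model

The key departure from standard fine-tuning is that each task contributes only an instruction and a handful of examples, and earlier instructions are replayed so that learning new tasks does not erase the old ones.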

Citation (APA)

Yin, W., Li, J., & Xiong, C. (2022). ConTinTin: Continual Learning from Task Instructions. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 3062–3072). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.acl-long.218

Readers' Seniority

PhD / Post grad / Masters / Doc: 15 (68%)
Researcher: 5 (23%)
Professor / Associate Prof.: 1 (5%)
Lecturer / Post doc: 1 (5%)

Readers' Discipline

Computer Science: 20 (80%)
Linguistics: 3 (12%)
Neuroscience: 1 (4%)
Physics and Astronomy: 1 (4%)
