Transforming task representations to perform novel tasks

Citations of this article: 14
Mendeley readers: 132

Abstract

An important aspect of intelligence is the ability to adapt to a novel task without any direct experience (zero-shot), based on its relationship to previous tasks. Humans can exhibit this cognitive flexibility. By contrast, models that achieve superhuman performance in specific tasks often fail to adapt to even slight task alterations. To address this, we propose a general computational framework for adapting to novel tasks based on their relationship to prior tasks. We begin by learning vector representations of tasks. To adapt to new tasks, we propose metamappings, higher-order tasks that transform basic task representations. We demonstrate the effectiveness of this framework across a wide variety of tasks and computational paradigms, ranging from regression to image classification and reinforcement learning. We compare with both human adaptability and language-based approaches to zero-shot learning. Across these domains, metamapping is successful, often achieving 80 to 90% performance, without any data, on a novel task, even when the new task directly contradicts prior experience. We further show that metamapping can not only generalize to new tasks via learned relationships, but can also generalize using novel relationships unseen during training. Finally, using metamapping as a starting point can dramatically accelerate later learning on a new task and reduce learning time and cumulative error substantially. Our results provide insight into a possible computational basis of intelligent adaptability and offer a possible framework for modeling cognitive flexibility and building more flexible artificial intelligence systems.
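To make the abstract's core idea concrete, here is a minimal, hypothetical sketch of metamapping: tasks are embedded as vectors, and a metamapping is a learned function from source-task embeddings to target-task embeddings, which can then be applied zero-shot to a held-out task. The paper describes a shared deep architecture; this sketch substitutes a simple linear map fit by least squares, and the toy tasks and embedding function below are illustrative assumptions, not the authors' setup.

```python
import numpy as np

# Toy tasks: "scale input by c", embedded (hypothetically) as z = [c, c**2].
def embed(c):
    return np.array([c, c**2])

# Metamapping to learn: "negate the task" (scale-by-c -> scale-by-(-c)),
# demonstrated on a few training task pairs.
train_cs = [1.0, 2.0, 3.0]
Z_src = np.stack([embed(c) for c in train_cs])    # source-task embeddings
Z_tgt = np.stack([embed(-c) for c in train_cs])   # target-task embeddings

# Fit a linear metamapping W by least squares: Z_src @ W ~= Z_tgt.
# (The paper's metamapping is a deep network; linear is a simplification.)
W, *_ = np.linalg.lstsq(Z_src, Z_tgt, rcond=None)

# Zero-shot adaptation: transform the embedding of an unseen task (c = 5.0)
# without any data from the negated task itself.
z_new = embed(5.0) @ W
print(np.round(z_new, 3))  # close to embed(-5.0) = [-5, 25]
```

The transformed embedding `z_new` would then condition a task-performing network, so the system executes the negated task without ever training on it, mirroring the zero-shot adaptation the abstract describes.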


Citation (APA)

Lampinen, A. K., & McClelland, J. L. (2020). Transforming task representations to perform novel tasks. Proceedings of the National Academy of Sciences of the United States of America, 117(52), 32970–32981. https://doi.org/10.1073/pnas.2008852117

Readers over time: (chart of annual Mendeley reader counts, 2020–2025)

Readers' Seniority

PhD / Postgrad / Masters / Doc: 52 (67%)
Researcher: 19 (24%)
Professor / Associate Prof.: 4 (5%)
Lecturer / Post doc: 3 (4%)

Readers' Discipline

Computer Science: 32 (52%)
Neuroscience: 14 (23%)
Psychology: 10 (16%)
Engineering: 5 (8%)

Article Metrics

Social media shares, likes & comments: 1
