CoLLIE: Continual Learning of Language Grounding from Language-Image Embeddings

Abstract

This paper presents CoLLIE: a simple, yet effective model for continual learning of how language is grounded in vision. Given a pre-trained multimodal embedding model, where language and images are projected in the same semantic space (in this case CLIP by OpenAI), CoLLIE learns a transformation function that adjusts the language embeddings when needed to accommodate new language use. This is done by predicting the difference vector that needs to be applied, as well as a scaling factor for this vector, so that the adjustment is only applied when needed. Unlike traditional few-shot learning, the model does not just learn new classes and labels, but can also generalize to similar language use and leverage semantic compositionality. We verify the model's performance on two different tasks of identifying the targets of referring expressions, where it has to learn new language use. The results show that the model can efficiently learn and generalize from only a few examples, with little interference with the model's original zero-shot performance.
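For intuition, below is a minimal sketch (in PyTorch) of the kind of transformation the abstract describes: a module that takes a frozen CLIP text embedding and predicts a difference vector plus a scalar gate, so the embedding is shifted only when an adjustment is needed. The layer sizes, hidden dimension, and the sigmoid gating are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class CoLLIETransform(nn.Module):
    """Sketch of a CoLLIE-style adjustment of frozen text embeddings.

    Predicts (1) a difference vector and (2) a scaling factor in [0, 1],
    and returns the embedding shifted by the gated difference vector.
    """

    def __init__(self, dim: int = 512, hidden: int = 1024):
        super().__init__()
        # Predicts the difference vector to apply to the text embedding.
        self.diff = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim)
        )
        # Predicts the scaling factor; values near 0 leave the embedding unchanged.
        self.gate = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, 1), nn.Sigmoid()
        )

    def forward(self, text_emb: torch.Tensor) -> torch.Tensor:
        # text_emb: (batch, dim) frozen CLIP text embeddings
        delta = self.diff(text_emb)   # difference vector
        scale = self.gate(text_emb)   # scaling factor, broadcast over dim
        return text_emb + scale * delta
```

Under this reading, the adjusted text embedding would then be compared against CLIP image embeddings (e.g. by cosine similarity) to pick out the target of a referring expression, with the gate keeping the original zero-shot behaviour largely intact when no new language use is involved.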

Cite (APA)

Skantze, G., & Willemsen, B. (2022). CoLLIE: Continual Learning of Language Grounding from Language-Image Embeddings. Journal of Artificial Intelligence Research, 74, 1201–1223. https://doi.org/10.1613/jair.1.13689
