Visual bilingual lexicon induction with transferred convnet features


Abstract

This paper is concerned with the task of bilingual lexicon induction using image-based features. By applying features from a convolutional neural network (CNN), we obtain state-of-the-art performance on a standard dataset, achieving a 79% relative improvement over previous work that uses bags of visual words based on SIFT features. The CNN image-based approach is also compared with state-of-the-art linguistic approaches to bilingual lexicon induction, even outperforming these for one of three language pairs on another standard dataset. Furthermore, we shed new light on the type of visual similarity metric to use for genuine similarity versus relatedness tasks, and experiment with using multiple layers from the same network in an attempt to improve performance.
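The abstract describes inducing a bilingual lexicon from visual data: each word is represented by aggregating CNN feature vectors of its associated images, and a source word is translated to the target word with the most similar visual representation. The following is a minimal sketch of that idea, assuming mean-pooled features and cosine similarity; the dimensionality, word names, and random feature vectors are illustrative placeholders, not the paper's actual setup.

```python
# Sketch of image-based bilingual lexicon induction (illustrative only):
# each word is represented by mean-pooling CNN feature vectors of its
# images, and translation is nearest-neighbour search under cosine
# similarity. Random vectors stand in for real CNN features.
import numpy as np

rng = np.random.default_rng(0)
DIM = 4096  # assumed feature size, e.g. an fc7-style convnet layer


def word_representation(image_features):
    """Aggregate a word's per-image CNN features by mean-pooling."""
    return np.mean(image_features, axis=0)


def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))


def induce_lexicon(source_reps, target_reps):
    """Map each source word to the most visually similar target word."""
    return {
        s_word: max(target_reps, key=lambda t: cosine(s_vec, target_reps[t]))
        for s_word, s_vec in source_reps.items()
    }


# Toy data: 20 placeholder "images" per word; word names are hypothetical.
source = {w: word_representation(rng.normal(size=(20, DIM)))
          for w in ["dog_en", "cat_en"]}
target = {w: word_representation(rng.normal(size=(20, DIM)))
          for w in ["hund_de", "katze_de"]}

print(induce_lexicon(source, target))
```

With real CNN features, visually grounded words (e.g. concrete nouns) cluster across languages, which is what makes this nearest-neighbour scheme viable.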

Citation (APA)

Kiela, D., Vulić, I., & Clark, S. (2015). Visual bilingual lexicon induction with transferred convnet features. In Conference Proceedings - EMNLP 2015: Conference on Empirical Methods in Natural Language Processing (pp. 148–158). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/d15-1015
