Vision and Feature Norms: Improving automatic feature norm learning through cross-modal maps

Citations: 17
Mendeley readers: 93

Abstract

Property norms have the potential to aid a wide range of semantic tasks, provided that they can be obtained for large numbers of concepts. Recent work has focused on text as the main source of information for automatic property extraction. In this paper we examine property norm prediction from visual, rather than textual, data, using cross-modal maps learnt between property norm and visual spaces. We also investigate the importance of having a complete feature norm dataset, for both training and testing. Finally, we evaluate how these datasets and cross-modal maps can be used in an image retrieval task.
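As a rough illustration of the cross-modal mapping idea described in the abstract, the sketch below fits a simple ridge-regression map from visual feature vectors to property-norm vectors. All names, dimensions, and data here are hypothetical stand-ins, and ridge regression is only one plausible choice of linear map; the paper's actual features, norm dataset, and training setup may differ.

```python
# Minimal sketch of a cross-modal map: visual space -> property-norm space.
# Hypothetical data and settings; not the paper's exact model or features.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins for real data: per-concept visual vectors (e.g., pooled CNN
# features) and property-norm vectors (one dimension per normed feature
# such as "is_round" or "has_wings"). Dimensions are illustrative only.
n_concepts, visual_dim, norm_dim = 500, 4096, 2000
X_visual = rng.normal(size=(n_concepts, visual_dim)).astype(np.float32)
Y_norms = rng.random(size=(n_concepts, norm_dim)).astype(np.float32)

X_train, X_test, Y_train, Y_test = train_test_split(
    X_visual, Y_norms, test_size=0.2, random_state=0
)

# Learn a linear cross-modal map from the visual space to the norm space.
cross_modal_map = Ridge(alpha=1.0)
cross_modal_map.fit(X_train, Y_train)

# Predict property-norm vectors for held-out concepts; the top-weighted
# dimensions of each prediction can be read off as candidate properties.
Y_pred = cross_modal_map.predict(X_test)
top_properties = np.argsort(-Y_pred, axis=1)[:, :10]
print(Y_pred.shape, top_properties.shape)
```

The same predicted norm vectors could, in principle, be compared against image representations for a retrieval-style evaluation as mentioned in the abstract, though the exact protocol is not specified here.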

Citation (APA)

Bulat, L., Kiela, D., & Clark, S. (2016). Vision and Feature Norms: Improving automatic feature norm learning through cross-modal maps. In 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT 2016 - Proceedings of the Conference (pp. 579–588). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/n16-1071

Readers' Seniority

PhD / Post grad / Masters / Doc: 37 (76%)
Researcher: 6 (12%)
Professor / Associate Prof.: 4 (8%)
Lecturer / Post doc: 2 (4%)

Readers' Discipline

Computer Science: 39 (74%)
Linguistics: 8 (15%)
Engineering: 4 (8%)
Neuroscience: 2 (4%)
