Property norms have the potential to aid a wide range of semantic tasks, provided that they can be obtained for large numbers of concepts. Recent work has focused on text as the main source of information for automatic property extraction. In this paper we examine property norm prediction from visual, rather than textual, data, using cross-modal maps learnt between property norm and visual spaces. We also investigate the importance of having a complete feature norm dataset, for both training and testing. Finally, we evaluate how these datasets and cross-modal maps can be used in an image retrieval task.
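The core technique the abstract names is a cross-modal map learnt between a visual space and a property-norm space. The sketch below is a minimal illustration of that general idea only, not the authors' actual model: it assumes a simple linear map trained with ridge regression (the paper may use a different regression method and real CNN image embeddings), and all array names, dimensions, and data are hypothetical placeholders.

```python
# Illustrative sketch: learn a linear cross-modal map from a visual
# feature space to a property-norm space with ridge regression.
# All data here is random placeholder data, not real embeddings or norms.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical sizes: 200 training concepts, 300-d visual vectors,
# 100 candidate properties in the norm space.
n_concepts, visual_dim, norm_dim = 200, 300, 100
X_visual = rng.standard_normal((n_concepts, visual_dim))  # stand-in for image embeddings
Y_norms = rng.random((n_concepts, norm_dim))              # stand-in for property-norm vectors

# Fit the cross-modal map f: visual space -> property-norm space.
# Ridge natively supports multi-output regression, so one model
# predicts the whole property vector at once.
cross_modal_map = Ridge(alpha=1.0)
cross_modal_map.fit(X_visual, Y_norms)

# Predict property norms for an unseen concept's visual vector and
# rank the candidate properties by predicted weight.
x_new = rng.standard_normal((1, visual_dim))
predicted_norms = cross_modal_map.predict(x_new)[0]
top_properties = np.argsort(predicted_norms)[::-1][:10]
```

The same learned map supports the image retrieval direction mentioned in the abstract: given a target property-norm vector, candidate images can be ranked by the similarity of their predicted norm vectors to the target.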
Bulat, L., Kiela, D., & Clark, S. (2016). Vision and Feature Norms: Improving automatic feature norm learning through cross-modal maps. In 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT 2016 - Proceedings of the Conference (pp. 579–588). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/n16-1071