Visual Classifier Prediction by Distributional Semantic Embedding of Text Descriptions


Abstract

One of the main challenges for scaling up object recognition systems is the lack of annotated images for real-world categories. It is estimated that humans can recognize and discriminate among about 30,000 categories (Biederman, 1987). Typically, few images are available for training classifiers for most of these categories. This is reflected in the number of images per category available for training in most object categorization datasets, which, as pointed out in (Salakhutdinov et al., 2011), follows a Zipf distribution.
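To make the Zipf claim concrete, here is an illustrative sketch (not from the paper; the parameters `top_count=1000` and exponent 1 are assumptions for illustration) of what a Zipf-like distribution of training images per category implies: the rank-k category gets roughly 1/k as many images as the most populous one, so the vast majority of the 30,000 categories end up with only a handful of training examples.

```python
# Hypothetical sketch: images per category under a Zipf-like law.
# The rank-1 category has `top_count` images; counts fall off as
# 1 / rank**exponent (exponent=1 is the classic Zipf case).

def zipf_counts(num_categories, top_count, exponent=1.0):
    """Return an assumed images-per-category list, one entry per rank."""
    return [max(1, int(top_count / (rank ** exponent)))
            for rank in range(1, num_categories + 1)]

counts = zipf_counts(num_categories=30000, top_count=1000)

# How many categories have at most 10 training images?
few_shot = sum(1 for c in counts if c <= 10)

print(counts[:5])               # head of the distribution: [1000, 500, 333, 250, 200]
print(few_shot / len(counts))   # ~0.997: almost all categories are data-poor
```

Under these assumed parameters, about 99.7% of categories would have ten or fewer training images, which is the scaling problem the paper targets.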

Citation (APA)

Elhoseiny, M., & Elgammal, A. (2015). Visual Classifier Prediction by Distributional Semantic Embedding of Text Descriptions. In A Workshop of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015 - Workshop on Vision and Language 2015, VL 2015: Vision and Language Meet Cognitive Systems - Proceedings (pp. 48–50). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w15-2809
