Discovering and distinguishing multiple visual senses for polysemous words

Abstract

To reduce the dependence on labeled data, there have been increasing research efforts on learning visual classifiers by exploiting web images. One issue that limits their performance is the problem of polysemy. To address this issue, we present a novel framework that handles polysemy by allowing sense-specific diversity in search results. Specifically, we first discover a list of possible semantic senses and retrieve sense-specific images. We then merge visually similar semantic senses and prune noisy ones using the retrieved images. Finally, we train a visual classifier for each selected semantic sense and use the learned sense-specific classifiers to distinguish multiple visual senses. Extensive experiments on classifying images into sense-specific categories and on re-ranking search results demonstrate the superiority of the proposed approach.
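The abstract outlines a three-stage pipeline: sense discovery, merging/pruning of senses based on retrieved images, and per-sense classifier training. The sketch below is only a minimal illustration of that pipeline, not the paper's actual method: the sense names, the synthetic feature vectors standing in for real image descriptors, the cosine-distance merge threshold, and the minimum-image pruning rule are all assumptions made for the example.

```python
# Hypothetical sketch of the three-stage pipeline described in the abstract:
# (1) start from candidate senses with images retrieved per sense,
# (2) merge visually similar senses and prune noisy ones,
# (3) train one classifier per surviving sense.
# Feature vectors are random stand-ins for real image descriptors (e.g. CNN
# features); actual sense discovery and web image retrieval are out of scope.

import numpy as np
from scipy.spatial.distance import cosine
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Assumed step-1 output: candidate senses for the polysemous word "mouse",
# each paired with features of images retrieved for that sense.
base_animal = rng.normal(3.0, 0.1, size=128)    # shared "animal-like" direction
base_device = rng.normal(-3.0, 0.1, size=128)   # distinct "device-like" direction
sense_images = {
    "mouse_animal": base_animal + rng.normal(0, 1, (50, 128)),
    "mouse_rodent": base_animal + rng.normal(0, 1, (50, 128)),  # visually near-duplicate sense
    "mouse_device": base_device + rng.normal(0, 1, (50, 128)),
    "mouse_misc":   rng.normal(0, 10, (8, 128)),                # too few, scattered images
}

# Step 2: merge senses whose mean image features are very close,
# and prune senses with too little image support (both thresholds assumed).
MERGE_THRESHOLD = 0.1   # cosine distance below which two senses are merged
MIN_IMAGES = 20

centroids = {s: X.mean(axis=0) for s, X in sense_images.items()}
kept = {}
for sense, X in sense_images.items():
    if len(X) < MIN_IMAGES:
        continue  # prune noisy sense
    merged = False
    for kept_sense in list(kept):
        if cosine(centroids[sense], centroids[kept_sense]) < MERGE_THRESHOLD:
            kept[kept_sense] = np.vstack([kept[kept_sense], X])  # merge into existing sense
            merged = True
            break
    if not merged:
        kept[sense] = X

# Step 3: train one linear classifier per surviving sense (one-vs-rest).
classifiers = {}
for sense, X_pos in kept.items():
    X_neg = np.vstack([X for s, X in kept.items() if s != sense])
    X = np.vstack([X_pos, X_neg])
    y = np.concatenate([np.ones(len(X_pos)), np.zeros(len(X_neg))])
    classifiers[sense] = LinearSVC().fit(X, y)

print("senses kept after merging/pruning:", list(kept))
```

On this toy data the two animal-like senses are merged, the scattered sense is pruned, and two sense-specific classifiers remain, which can then be used to assign each image of the polysemous word to one visual sense.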

Citation (APA)

Yao, Y., Zhang, J., Shen, F., Yang, W., Huang, P., & Tang, Z. (2018). Discovering and distinguishing multiple visual senses for polysemous words. In 32nd AAAI Conference on Artificial Intelligence, AAAI 2018 (pp. 523–530). AAAI Press. https://doi.org/10.1609/aaai.v32i1.11255
