Extracting visual knowledge from the internet: Making sense of image data

Abstract

Recent successes in visual recognition can be primarily attributed to feature representation, learning algorithms, and the ever-increasing size of labeled training data. Extensive research has been devoted to the first two, but much less attention has been paid to the third. Due to the high cost of manual data labeling, even recent efforts such as ImageNet remain relatively small with respect to everyday applications. In this work, we mainly focus on how to automatically generate image data for a given visual concept on a vast scale. With the generated image data, we can train a robust recognition model for the given concept. We evaluate the proposed webly supervised approach on the benchmark Pascal VOC 2007 dataset, and the results demonstrate the superiority of our method over many other state-of-the-art methods in image data collection.

Citation (APA)

Yao, Y., Zhang, J., Hua, X. S., Shen, F., & Tang, Z. (2016). Extracting visual knowledge from the internet: Making sense of image data. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9516, pp. 862–873). Springer Verlag. https://doi.org/10.1007/978-3-319-27671-7_72
