Most of us are not experts in specific fields, such as ornithology. Nonetheless, we do have general image and language understanding capabilities that we use to match what we see to expert resources. This allows us to expand our knowledge and perform novel tasks without ad-hoc external supervision. Machines, in contrast, have a much harder time consulting expert-curated knowledge bases unless trained specifically with that knowledge in mind. Thus, in this paper we consider a new problem: fine-grained image recognition without expert annotations, which we address by leveraging the vast knowledge available in web encyclopedias. First, we learn a model to describe the visual appearance of objects using non-expert image descriptions. We then train a fine-grained textual similarity model that matches image descriptions with documents at the sentence level. We evaluate the method on two datasets (CUB-200 and Oxford-102 Flowers) and compare with several strong baselines and the state of the art in cross-modal retrieval. Code is available at: https://github.com/subhc/clever.
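To make the two-stage pipeline concrete, here is a minimal sketch of the idea, not the authors' implementation (see the linked repository for that). It assumes a hypothetical `describe_image` captioner in place of the learned visual-description model, uses an off-the-shelf sentence encoder from the sentence-transformers library as a stand-in for the trained fine-grained similarity model, and takes class documents (e.g. encyclopedia entries per species) as given.

```python
# Sketch only: caption an image, then match the caption sentences against
# encyclopedia documents at the sentence level and pick the best class.
from sentence_transformers import SentenceTransformer, util

# Stand-in for the paper's trained similarity model.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

def describe_image(image_path: str) -> list[str]:
    """Hypothetical captioner: returns non-expert sentences about the image."""
    return ["A small bird with a bright red chest and a short black beak."]

def classify(image_path: str, documents: dict[str, list[str]]) -> str:
    """Match image-description sentences to class documents.

    `documents` maps a class name (e.g. a species) to the sentences of its
    encyclopedia entry. Each description sentence contributes the score of
    its best-matching document sentence; the highest-scoring class wins.
    """
    desc_emb = encoder.encode(describe_image(image_path), convert_to_tensor=True)
    best_class, best_score = None, float("-inf")
    for name, sentences in documents.items():
        doc_emb = encoder.encode(sentences, convert_to_tensor=True)
        # Sentence-level matching: max similarity per description sentence,
        # summed over all description sentences.
        score = util.cos_sim(desc_emb, doc_emb).max(dim=1).values.sum().item()
        if score > best_score:
            best_class, best_score = name, score
    return best_class
```

The sentence-level granularity is the key design choice: encyclopedia articles are long and mostly non-visual, so scoring whole documents against a short description would drown the few appearance-related sentences in irrelevant text.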
Choudhury, S., Laina, I., Rupprecht, C., & Vedaldi, A. (2024). The Curious Layperson: Fine-Grained Image Recognition Without Expert Labels. International Journal of Computer Vision, 132(2), 537–554. https://doi.org/10.1007/s11263-023-01885-9