Learning semantic representations of objects and their parts

Citations: 10 · Mendeley readers: 73

Abstract

Recently, large-scale image annotation datasets have been collected, with millions of images and thousands of possible annotations. Latent variable models, or embedding methods, that simultaneously learn semantic representations of object labels and image representations can provide tractable solutions for such tasks. In this work, we are interested in jointly learning representations both for the objects in an image and for the parts of those objects, because such deeper semantic representations could bring a leap forward in image retrieval and browsing. However, despite the size of these datasets, annotated data covering both objects and parts is costly to obtain and may not be available at all. In this paper, we propose to bypass this cost with a method that learns to jointly label objects and parts without requiring exhaustively labeled data. We design a model architecture that can be trained under a proxy supervision obtained by combining standard image annotations (from ImageNet) with semantic part-whole relations between labels (from WordNet). The model scores both image-to-object-label similarities and object-label-to-part-label similarities in a single joint system. Experiments conducted on our combined data and on a precisely annotated evaluation set demonstrate the usefulness of our approach. © 2013 The Author(s).
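To make the joint-scoring idea concrete, here is a minimal sketch of a shared embedding space in which both kinds of similarity are computed. This is not the authors' implementation; the dimensions, variable names, and the margin loss are illustrative assumptions in the spirit of WSABIE-style ranking embeddings:

```python
import numpy as np

# Minimal sketch of the joint scoring scheme described in the abstract.
# All names and dimensions are illustrative assumptions, not the paper's
# code: images and labels are mapped into one shared embedding space, so
# the same machinery scores image-to-object-label similarity and
# object-label-to-part-label similarity.

rng = np.random.default_rng(0)
d_img, d_emb, n_labels = 1000, 100, 5000  # image feature dim, embedding dim, label vocabulary

V = rng.normal(scale=0.01, size=(d_emb, d_img))     # maps image features into the embedding space
W = rng.normal(scale=0.01, size=(n_labels, d_emb))  # one embedding vector per label

def score_image_label(x, label):
    """Similarity between image features x and an object label."""
    return float(W[label] @ (V @ x))

def score_label_part(obj, part):
    """Similarity between an object label and a candidate part label."""
    return float(W[obj] @ W[part])

def margin_loss(pos_score, neg_score, margin=1.0):
    """Hinge loss pushing a correct pair above an incorrect one;
    a ranking sampler over negatives would sit on top of this."""
    return max(0.0, margin - pos_score + neg_score)

# Toy usage: rank all labels for one image, then rank parts for the top label.
x = rng.normal(size=d_img)
obj = int(np.argmax(W @ (V @ x)))
top_parts = np.argsort(-(W @ W[obj]))[:5]
```

Under these assumptions, training would alternate margin updates over (image, object-label) pairs from ImageNet annotations and (object-label, part-label) pairs derived from WordNet part-whole relations, which is the proxy supervision the abstract refers to.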

References (via Scopus)

Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., & Fei-Fei, L. (2009). ImageNet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B, 58(1), 267–288.

Miller, G. A. (1995). WordNet: A lexical database for English. Communications of the ACM, 38(11), 39–41.

Cited by (via Scopus)

Scene classification based on single-layer SAE and SVM

A new strategy for bridging the semantic gap in image retrieval

Anticipating next goal for robot plan prediction

Citation (APA)

Mesnil, G., Bordes, A., Weston, J., Chechik, G., & Bengio, Y. (2014). Learning semantic representations of objects and their parts. Machine Learning, 94(2), 281–301. https://doi.org/10.1007/s10994-013-5336-9

Readers' Seniority

PhD / Post grad / Masters / Doc   37   57%
Researcher                        17   26%
Professor / Associate Prof.        9   14%
Lecturer / Post doc                2    3%

Readers' Discipline

Computer Science                  55   87%
Engineering                        6   10%
Materials Science                  1    2%
Neuroscience                       1    2%
