Can visual recognition benefit from auxiliary information in training?


Abstract

We examine an under-explored visual recognition problem in which the training data contain both a main view and an auxiliary view of visual information, but only the main view is available at test time. To effectively leverage the auxiliary view to train a stronger classifier, we propose a collaborative auxiliary learning framework based on a new discriminative canonical correlation analysis. This framework reveals a common semantic space shared across both views by enforcing a series of nonlinear projections. These projections automatically embed the discriminative cues hidden in both views into the common space, so better visual recognition is achieved on test data drawn from the main view alone. The efficacy of our proposed auxiliary learning approach is demonstrated on three challenging visual recognition tasks with different kinds of auxiliary information.
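To make the idea concrete, the sketch below implements plain linear CCA (not the paper's discriminative, nonlinear variant) in NumPy: two training views are projected into a shared space where their correlation is maximized, and at test time only the main view is projected. All variable names and the toy data are illustrative assumptions, not from the paper.

```python
import numpy as np

def cca(X, Y, k=2, reg=1e-6):
    """Plain linear CCA: learn projections Wx, Wy that map the two
    views into a common space where their correlation is maximal.
    `reg` adds a small ridge for numerical stability (an assumption,
    not part of the paper's method)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Syy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n

    def inv_sqrt(S):
        # Inverse matrix square root via eigendecomposition
        vals, vecs = np.linalg.eigh(S)
        return vecs @ np.diag(vals ** -0.5) @ vecs.T

    Sxx_i, Syy_i = inv_sqrt(Sxx), inv_sqrt(Syy)
    # SVD of the whitened cross-covariance yields the canonical directions
    U, s, Vt = np.linalg.svd(Sxx_i @ Sxy @ Syy_i)
    Wx = Sxx_i @ U[:, :k]
    Wy = Syy_i @ Vt.T[:, :k]
    return Wx, Wy, s[:k]

# Toy data: two noisy views of the same latent signal (training set)
rng = np.random.default_rng(0)
z = rng.normal(size=(500, 2))                       # shared latent factors
X = z @ rng.normal(size=(2, 5)) + 0.1 * rng.normal(size=(500, 5))  # main view
Y = z @ rng.normal(size=(2, 4)) + 0.1 * rng.normal(size=(500, 4))  # auxiliary view
Wx, Wy, corrs = cca(X, Y, k=2)

# At test time only the main view is available; project it alone
X_common = (X - X.mean(axis=0)) @ Wx
```

A classifier trained on `X_common` then benefits from the auxiliary view indirectly, since `Wx` was shaped by both views during training; the paper's contribution is to make such projections discriminative and nonlinear rather than the purely correlational, linear ones shown here.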

Citation

Zhang, Q., Hua, G., Liu, W., Liu, Z., & Zhang, Z. (2015). Can visual recognition benefit from auxiliary information in training? In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9003, pp. 65–80). Springer Verlag. https://doi.org/10.1007/978-3-319-16865-4_5
