Crafting a multi-task CNN for viewpoint estimation


Abstract

Convolutional Neural Networks (CNNs) were recently shown to provide state-of-the-art results for object category viewpoint estimation. However, different ways of formulating this problem have been proposed, and the competing approaches have been explored with very different design choices. This paper presents a comparison of these approaches in a unified setting, as well as a detailed analysis of the key factors that impact performance. We then present a new method for joint training with the detection task and demonstrate its benefit. We also highlight the superiority of classification approaches over regression approaches, quantify the benefits of deeper architectures and extended training data, and demonstrate that synthetic data is beneficial even when using ImageNet training data. By combining all these elements, we demonstrate an improvement of approximately 5% mAVP over previous state-of-the-art results on the Pascal3D+ dataset [29]. In particular, for the most challenging 24-view classification task, we improve the results from 31.1% to 36.1% mAVP.
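The abstract's key design points, a shared CNN trained jointly for detection and viewpoint, with viewpoint posed as classification over discrete angle bins rather than regression, can be illustrated with a minimal sketch. The backbone, layer sizes, class count, and loss weighting below are placeholder assumptions, not the paper's actual architecture; the sketch only shows the multi-task formulation with per-class viewpoint bins (e.g. 24, matching the hardest Pascal3D+ setting).

import torch
import torch.nn as nn

class MultiTaskViewpointCNN(nn.Module):
    """Sketch: shared backbone with a category head and a viewpoint head
    posed as classification over discrete azimuth bins (not regression)."""

    def __init__(self, num_classes=12, num_view_bins=24, feat_dim=4096):
        super().__init__()
        # Placeholder backbone; the paper compares deeper architectures,
        # which this toy stack does not reproduce.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(7), nn.Flatten(),
            nn.Linear(64 * 7 * 7, feat_dim), nn.ReLU(inplace=True),
        )
        # Category scores for the detection-related task.
        self.cls_head = nn.Linear(feat_dim, num_classes)
        # One set of azimuth-bin scores per object class (an assumption
        # made here to illustrate per-class viewpoint classification).
        self.view_head = nn.Linear(feat_dim, num_classes * num_view_bins)
        self.num_classes = num_classes
        self.num_view_bins = num_view_bins

    def forward(self, x):
        f = self.backbone(x)
        cls_logits = self.cls_head(f)
        view_logits = self.view_head(f).view(
            -1, self.num_classes, self.num_view_bins)
        return cls_logits, view_logits

def multitask_loss(cls_logits, view_logits, labels, view_bins, view_weight=1.0):
    """Joint objective: cross-entropy on the category plus cross-entropy on
    the discretized viewpoint of the ground-truth category. Assumes every
    training crop contains a foreground object; the weighting is arbitrary."""
    ce = nn.CrossEntropyLoss()
    cls_loss = ce(cls_logits, labels)
    idx = torch.arange(labels.size(0), device=view_logits.device)
    view_loss = ce(view_logits[idx, labels], view_bins)
    return cls_loss + view_weight * view_loss

Predicting a separate set of viewpoint bins per category is one plausible way to match the per-class AVP evaluation of Pascal3D+; a regression alternative would replace the viewpoint cross-entropy with an angular loss, which the paper's comparison finds inferior.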

Citation (APA)

Massa, F., Marlet, R., & Aubry, M. (2016). Crafting a multi-task CNN for viewpoint estimation. In British Machine Vision Conference 2016, BMVC 2016 (Vol. 2016-September, pp. 91.1-91.12). British Machine Vision Conference, BMVC. https://doi.org/10.5244/C.30.91
