Using an Ensemble of Incrementally Fine-Tuned CNNs for Cross-Domain Object Category Recognition

8 citations · 12 Mendeley readers

This article is free to access.

Abstract

When training data is inadequate, it is difficult to train a deep Convolutional Neural Network (CNN) from scratch with randomly initialized weights. Instead, it is common to first train a source CNN model on a very large data set and then use the learned source model to initialize the training of a target CNN model; in deep learning, this procedure is called fine-tuning a CNN. This paper presents an experimental study of how to combine a collection of incrementally fine-tuned CNN models for cross-domain, multi-class object category recognition tasks. A group of fine-tuned CNN models is trained on the target data set by incrementally transferring parameters from a source CNN model that was initially trained on a large data set. The last two fully-connected (FC) layers of the source CNN model are removed, and two new FC layers are added so that the resulting model fits the target task. Experimental results on the Caltech-101 and Office data sets demonstrate the effectiveness and good performance of the proposed method, which is particularly suitable for object recognition when target training data is scarce.
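The transfer procedure the abstract describes can be sketched in miniature. The following NumPy-only sketch stands in for the paper's CNN pipeline: a stack of fully-connected layers plays the role of the source model's FC head, the last two FC layers are dropped and replaced with freshly initialized ones sized for the target task, and several such fine-tuned models are ensembled by averaging their class probabilities. All layer sizes and the `build_target`/`forward` helpers are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(n_in, n_out):
    # A fully-connected layer as a (weights, bias) pair, He-initialized.
    return rng.standard_normal((n_in, n_out)) * np.sqrt(2 / n_in), np.zeros(n_out)

# Hypothetical "source" model trained on a large data set (1000 classes),
# standing in for the pre-trained CNN's fully-connected head.
source = [dense(256, 128), dense(128, 64), dense(64, 1000)]

def build_target(source, n_target_classes):
    # Transfer all but the last two FC layers from the source model, then
    # append two freshly initialized FC layers for the target task, as the
    # abstract describes.
    kept = source[:-2]
    new_fc1 = dense(kept[-1][0].shape[1], 64)
    new_fc2 = dense(64, n_target_classes)
    return kept + [new_fc1, new_fc2]

def forward(model, x):
    for w, b in model[:-1]:
        x = np.maximum(x @ w + b, 0.0)   # ReLU on hidden layers
    w, b = model[-1]
    return x @ w + b                     # class logits

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

x = rng.standard_normal((4, 256))                    # a batch of 4 feature vectors
target = build_target(source, n_target_classes=31)   # the Office data set has 31 classes
logits = forward(target, x)
print(logits.shape)  # (4, 31)

# Ensemble: average the class probabilities of several fine-tuned models,
# each with its own randomly re-initialized head.
ensemble = [build_target(source, 31) for _ in range(3)]
probs = np.mean([softmax(forward(m, x)) for m in ensemble], axis=0)
pred = probs.argmax(axis=1)
```

In the paper the transferred layers would be further trained (incrementally fine-tuned) on the target data rather than frozen; the sketch only shows the parameter-transfer and head-replacement step.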

Citation (APA)

Zhang, X., Yan, F., Zhuang, Y., Hu, H., & Bu, C. (2019). Using an Ensemble of Incrementally Fine-Tuned CNNs for Cross-Domain Object Category Recognition. IEEE Access, 7, 33822–33833. https://doi.org/10.1109/ACCESS.2019.2903550
