Data-driven two-layer visual dictionary structure learning

  • Yu X
  • Yu Z
  • Wu L
  • Pang W
  • Lin C

Abstract

© 2019 SPIE and IS&T. An important issue in statistical modeling is determining the complexity of a model from the scale of the data, so that overfitting can be mitigated effectively even without big data. We adopt a data-driven approach that automatically determines the number of components in the model. To extract more robust features, we propose a framework for data-driven two-layer structure visual dictionary learning (DTSVDL). It divides visual dictionary structure learning into two layers: an attribute layer and a detail layer. In the attribute layer, the attributes of the image dataset are learned by a data-driven Bayesian nonparametric model. In the detail layer, the detailed information over the attributes is further explored and refined, and each attribute is weighted by the number of effective observations associated with it. Our approach has three main advantages: (1) the two-layer structure makes the learned visual dictionary more expressive; (2) the number of components in the attribute layer is determined automatically from the data; (3) because the components are determined by the scale of the visual words, the model mitigates overfitting well. In addition, compared with stacked autoencoders, stacked denoising autoencoders, LeNet-5, speeded-up robust features, and the pretrained deep learning model ImageNet-VGG-F, our approach achieves satisfactory image categorization results on two benchmark datasets; specifically, it achieves higher categorization performance than these classical approaches on the 15 scene categories and action datasets. We conclude that the resulting DTSVDL possesses good generality, derived from the attribute information, as well as excellent discriminability, derived from the detailed information. In other words, the visual dictionary learned by our algorithm is more expressive and discriminative.
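The abstract does not specify which Bayesian nonparametric model the attribute layer uses, but a standard way to let the data determine the number of components is a Dirichlet-process mixture. The sketch below (plain NumPy; all function names and the thresholding rule are hypothetical, not the authors' implementation) illustrates the stick-breaking construction of mixture weights and how a data-driven "effective component" count can be read off by weighting each component by its expected number of observations:

```python
import numpy as np

def stick_breaking_weights(alpha, truncation, rng):
    # Stick-breaking construction of Dirichlet-process mixture weights:
    # beta_k ~ Beta(1, alpha); w_k = beta_k * prod_{j<k} (1 - beta_j).
    # A smaller concentration alpha puts more mass on fewer components.
    betas = rng.beta(1.0, alpha, size=truncation)
    remaining_stick = np.concatenate([[1.0], np.cumprod(1.0 - betas)[:-1]])
    return betas * remaining_stick

def effective_components(weights, n_obs, min_count=1.0):
    # Count components whose expected number of associated observations
    # (weight * dataset size) reaches min_count -- a toy analogue of
    # weighting attributes by their effective observation counts.
    return int(np.sum(weights * n_obs >= min_count))

rng = np.random.default_rng(0)
w = stick_breaking_weights(alpha=2.0, truncation=50, rng=rng)
# More data supports more distinguishable components; the count is
# chosen by the data scale rather than fixed in advance.
k_small = effective_components(w, n_obs=100)
k_large = effective_components(w, n_obs=10000)
```

Note the design point this illustrates: the truncation level only caps the representation, while the number of components actually used grows with the amount of data, which is what lets such models adapt complexity to dataset scale and avoid overfitting small datasets.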

Citation (APA)

Yu, X., Yu, Z., Wu, L., Pang, W., & Lin, C. (2019). Data-driven two-layer visual dictionary structure learning. Journal of Electronic Imaging, 28(2), 023006. https://doi.org/10.1117/1.jei.28.2.023006
