Classifying textures with only 10 visual-words using hidden Markov models with Dirichlet mixtures

Abstract

This work presents what we believe to be the first application of Dirichlet-based Hidden Markov Models (HMM) to real-world data. Initially developed in [5], this model had previously been tested only on controlled synthetic data, where it showed promising results for classification tasks. Its capabilities on proportional data are investigated and leveraged for texture classification. A comparison with HMMs with Gaussian mixtures and with nearest-neighbor classifiers is conducted, and a generalized Bhattacharyya distance for series of histograms is proposed. We show that the HMM with Dirichlet mixtures outperforms the other tested classifiers. Using the popular bag-of-words approach, the Dirichlet-based HMM proves able to discriminate well between 25 textures from challenging data sets using a global dictionary of only 10 words. This appears to be the smallest dictionary ever used for this purpose, and it raises the question of whether the hundreds-of-words dictionaries most often used in the literature are needed for the data sets we have tested. © 2014 Springer International Publishing Switzerland.
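
For readers unfamiliar with the setup, the sketch below shows a minimal 10-word bag-of-visual-words pipeline and one plausible way to generalize the Bhattacharyya distance to series of histograms (averaging per-step distances over aligned sequences). The abstract does not describe the paper's exact construction, so the KMeans codebook, the function names, and the averaging scheme are illustrative assumptions, not the authors' method.

```python
# Minimal sketch, under stated assumptions: a 10-word visual vocabulary built
# with k-means, per-image word histograms, and a simple sequence-level
# Bhattacharyya distance. Purely illustrative; not the paper's implementation.
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(descriptors, n_words=10, seed=0):
    """Cluster local descriptors (e.g. patch features) into a small visual vocabulary."""
    return KMeans(n_clusters=n_words, n_init=10, random_state=seed).fit(descriptors)

def to_histogram(codebook, descriptors):
    """Map one image's descriptors to a normalized 10-bin word histogram."""
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

def bhattacharyya(p, q, eps=1e-12):
    """Standard Bhattacharyya distance between two discrete distributions."""
    bc = np.sum(np.sqrt((p + eps) * (q + eps)))
    return -np.log(bc)

def series_distance(series_p, series_q):
    """One simple generalization to series of histograms (assumption):
    average the per-step Bhattacharyya distances over aligned sequences."""
    return float(np.mean([bhattacharyya(p, q) for p, q in zip(series_p, series_q)]))
```

As a usage note, each texture image would be cut into an ordered sequence of patches, each patch converted to a 10-bin histogram with to_histogram, and the resulting histogram series either fed to the HMM classifiers or compared directly with series_distance for a nearest-neighbor baseline.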

Citation (APA)

Epaillard, E., Bouguila, N., & Ziou, D. (2014). Classifying textures with only 10 visual-words using hidden Markov models with Dirichlet mixtures. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8779 LNAI, pp. 20–28). Springer Verlag. https://doi.org/10.1007/978-3-319-11298-5_3
