An exploration of Mixture Models to maximize between class scatter for object classification in large image datasets


Abstract

This paper presents a method for determining the significant features of an image within a maximum likelihood framework, substantially reducing the semantic gap between high-level and low-level features. To this end, we propose FLD-Mixture Models and analyze the effect of different distance metrics on an image retrieval system. First, the Expectation Maximization (EM) algorithm is applied to learn a mixture of Gaussian distributions, yielding the best possible maximum-likelihood clusters; Gaussian Mixture Models are used to cluster the data in an unsupervised setting. Next, Fisher's Linear Discriminant Analysis (FLDA) is applied to K = 4 mixtures to preserve useful discriminatory information in a reduced feature space. Finally, six different distance measures are used for classification to obtain an average classification rate. We evaluated the proposed model on the Caltech-101, Caltech-256 and Corel-1k datasets and achieved state-of-the-art classification rates compared with several well-known benchmark techniques on the same datasets. © Springer International Publishing Switzerland 2014.
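The three-stage pipeline described in the abstract (EM-learned Gaussian mixture clustering, an FLDA projection that maximizes between-class scatter, then nearest-neighbour classification under several distance metrics) can be sketched roughly as follows. This is a minimal illustration using scikit-learn on synthetic data, not the authors' implementation; all parameter values, the toy feature vectors, and the choice of a 1-NN classifier are assumptions for demonstration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Toy stand-in for low-level image feature vectors (illustrative only).
X = rng.normal(size=(200, 16))
y = rng.integers(0, 4, size=200)  # four object classes, matching K = 4 mixtures

# Stage 1: EM learns a mixture of Gaussians for unsupervised clustering.
gmm = GaussianMixture(n_components=4, covariance_type="diag", random_state=0)
cluster_ids = gmm.fit_predict(X)

# Stage 2: FLDA projects features onto directions that maximize
# between-class scatter relative to within-class scatter.
flda = LinearDiscriminantAnalysis(n_components=3)
X_lda = flda.fit_transform(X, y)

# Stage 3: classify in the reduced space under a distance metric; the
# paper averages over six metrics, two of which are shown here.
for metric in ("euclidean", "manhattan"):
    clf = KNeighborsClassifier(n_neighbors=1, metric=metric)
    clf.fit(X_lda, y)
    print(metric, clf.score(X_lda, y))
```

The per-metric scores would then be averaged to obtain an overall classification rate, in the spirit of the averaging step the abstract describes.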

Citation (APA)

Mahantesh, K., Manjunath Aradhya, V. N., & Naveena, C. (2014). An exploration of Mixture Models to maximize between class scatter for object classification in large image datasets. In Advances in Intelligent Systems and Computing (Vol. 264, pp. 451–461). Springer Verlag. https://doi.org/10.1007/978-3-319-04960-1_40
