This paper presents a method for determining the significant features of an image within a maximum likelihood framework, substantially reducing the semantic gap between high-level and low-level features. To this end, we propose an FLD-Mixture Model and analyze the effect of different distance metrics in an image retrieval system. In this method, the Expectation Maximization (EM) algorithm is first applied to learn a mixture of Gaussian distributions, yielding the best possible maximum-likelihood clusters; Gaussian Mixture Models are used to cluster the data in an unsupervised setting. Fisher's Linear Discriminant Analysis (FLDA) is then applied to K = 4 mixtures to preserve useful discriminatory information in a reduced feature space. Finally, six different distance measures are used for classification to obtain an average classification rate. We evaluated the proposed model on the Caltech-101, Caltech-256, and Corel-1k datasets and achieved state-of-the-art classification rates compared with several well-known benchmark techniques on the same datasets. © Springer International Publishing Switzerland 2014.
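The pipeline the abstract describes (EM-fitted Gaussian mixture, FLDA projection, distance-based classification) can be sketched as follows. This is a minimal illustration assuming scikit-learn; the toy data, feature dimensions, and parameters are placeholders, not those used in the paper.

```python
# Hedged sketch of the abstract's pipeline: GMM via EM, then Fisher's LDA,
# then distance-based classification. All data here is synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Toy 8-D feature vectors for 3 classes (stand-ins for image features).
X = np.vstack([rng.normal(loc=c * 3.0, scale=1.0, size=(50, 8)) for c in range(3)])
y = np.repeat(np.arange(3), 50)

# Step 1: EM fits a K = 4 Gaussian mixture (unsupervised clustering).
gmm = GaussianMixture(n_components=4, random_state=0).fit(X)
posteriors = gmm.predict_proba(X)  # soft cluster assignments, shape (150, 4)

# Step 2: Fisher's LDA projects features onto a discriminative subspace
# (at most n_classes - 1 = 2 dimensions here).
lda = LinearDiscriminantAnalysis(n_components=2)
X_lda = lda.fit_transform(X, y)

# Step 3: classify in the reduced space with a chosen distance metric.
# The paper averages over six metrics; Euclidean is shown as one example.
clf = KNeighborsClassifier(n_neighbors=1, metric="euclidean").fit(X_lda, y)
print(clf.score(X_lda, y))
```

Swapping the `metric` argument (e.g. `"manhattan"`, `"chebyshev"`) is one simple way to compare distance measures, in the spirit of the paper's evaluation across six metrics.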
CITATION STYLE
Mahantesh, K., Manjunath Aradhya, V. N., & Naveena, C. (2014). An exploration of Mixture Models to maximize between class scatter for object classification in large image datasets. In Advances in Intelligent Systems and Computing (Vol. 264, pp. 451–461). Springer Verlag. https://doi.org/10.1007/978-3-319-04960-1_40