Selecting the optimal number of components in a Gaussian mixture model (GMM) has interested researchers for the last few decades. Most current approaches are based on information criteria, introduced by Akaike (1974) and since modified by many other researchers. The standard approach uses the EM algorithm to fit model parameters to training data and computes the log-likelihood for an increasing number of components; penalized forms of the log-likelihood function are then used to select the number of components. The search for new or modified penalty functions is an ongoing effort to improve these methods, or make them robust, for various types of data distributions. Our new technique for selecting the optimal number of GMM components is based on Multiple Random Subsampling of training data with Initialization of the EM algorithm (MuRSI). Results of many performed experiments demonstrate the advantages of this method.
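As context for the baseline the abstract describes (not the MuRSI method itself), the following sketch illustrates standard penalized-likelihood model selection: EM is run for an increasing number of components and the count minimizing Akaike's criterion, AIC = 2p − 2·log-likelihood, is chosen. The 1-D EM implementation and the quantile-based initialization are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def fit_gmm_em(x, k, n_iter=200):
    """Fit a 1-D Gaussian mixture with k components via EM; return the final log-likelihood."""
    n = x.size
    # Deterministic illustrative init: means at evenly spaced sample quantiles,
    # shared variance, uniform weights (the paper's initialization differs).
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: per-point, per-component weighted Gaussian densities and responsibilities.
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances from responsibilities.
        nk = np.maximum(r.sum(axis=0), 1e-10)   # guard against empty components
        w = nk / n
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = np.maximum((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk, 1e-3)
    dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    return np.log(dens.sum(axis=1)).sum()

def select_k_aic(x, k_max=5):
    """Pick the component count minimizing AIC = 2p - 2*log-likelihood."""
    best_k, best_aic = None, np.inf
    for k in range(1, k_max + 1):
        ll = fit_gmm_em(x, k)
        p = 3 * k - 1               # k means + k variances + (k - 1) free weights
        aic = 2 * p - 2 * ll
        if aic < best_aic:
            best_k, best_aic = k, aic
    return best_k

# Toy data: two well-separated Gaussian modes.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-4.0, 1.0, 400), rng.normal(4.0, 1.0, 400)])
print(select_k_aic(x))
```

The penalty term 2p is what distinguishes AIC from raw likelihood maximization; swapping it for p·log(n) gives BIC, one of the many modified criteria the abstract alludes to.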
Psutka, J. V. (2015). Gaussian mixture model selection using multiple random subsampling with initialization. In Lecture Notes in Computer Science (Vol. 9256, pp. 678–689). Springer. https://doi.org/10.1007/978-3-319-23192-1_57