Data-driven penalty calibration: A case study for Gaussian mixture model selection

Abstract

In the companion paper [C. Maugis and B. Michel, A non asymptotic penalized criterion for Gaussian mixture model selection. ESAIM: P&S 15 (2011) 41-68], a penalized likelihood criterion is proposed to select a Gaussian mixture model among a specific model collection. This criterion depends on unknown constants which have to be calibrated in practical situations. A "slope heuristics" method is described and tested experimentally to address this practical problem. In a model-based clustering context, the specific form of the considered Gaussian mixtures allows the noisy variables to be detected, which improves the data clustering and its interpretation. The behavior of our data-driven criterion is illustrated on simulated datasets, a curve clustering example and a genomics application. © 2011 EDP Sciences, SMAI.
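To make the calibration step concrete, the following is a minimal sketch of the generic slope-heuristics idea, under the common assumption that the penalty is proportional to the model dimension: the slope of the maximized log-likelihood against dimension is estimated on the most complex models, and twice that slope is used as the penalty constant. The function names slope_heuristic_penalty and select_model, and the choice of fitting on the upper half of the model collection, are illustrative assumptions and not the paper's exact procedure.

import numpy as np

def slope_heuristic_penalty(dimensions, max_log_likelihoods, n_fit=None):
    # dimensions          : model dimensions D_m for each candidate model
    # max_log_likelihoods : maximized log-likelihood of each candidate model
    # n_fit               : number of largest models used to fit the slope
    #                       (assumed linear regime); defaults to the upper half
    dims = np.asarray(dimensions, dtype=float)
    loglik = np.asarray(max_log_likelihoods, dtype=float)
    if n_fit is None:
        n_fit = len(dims) // 2

    # For the most complex models, the maximized log-likelihood is expected to
    # grow roughly linearly in the dimension; the slope estimates the minimal
    # penalty constant kappa_min.
    order = np.argsort(dims)
    d_big, l_big = dims[order][-n_fit:], loglik[order][-n_fit:]
    kappa_hat, _ = np.polyfit(d_big, l_big, 1)

    # The slope heuristic takes twice the minimal constant: pen(m) = 2*kappa*D_m.
    return 2.0 * kappa_hat * dims

def select_model(dimensions, max_log_likelihoods):
    # Return the index of the model maximizing the penalized log-likelihood.
    penalty = slope_heuristic_penalty(dimensions, max_log_likelihoods)
    criterion = np.asarray(max_log_likelihoods) - penalty
    return int(np.argmax(criterion))

In practice one would compute the maximized log-likelihood of each Gaussian mixture model in the collection (for example via EM), pass the resulting dimensions and log-likelihoods to select_model, and retain the selected mixture for clustering and noisy-variable detection.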

Cite

Maugis, C., & Michel, B. (2011). Data-driven penalty calibration: A case study for Gaussian mixture model selection. ESAIM - Probability and Statistics, 15, 320–339. https://doi.org/10.1051/ps/2010002
