Big Learning Expectation Maximization


Abstract

Mixture models are a fundamental tool with versatile applications. However, their training techniques, such as the popular Expectation Maximization (EM) algorithm, are notoriously sensitive to parameter initialization and often suffer from bad local optima that can be arbitrarily worse than the optimum. To address this long-standing bad-local-optima challenge, we draw inspiration from the recent ground-breaking foundation models and propose to leverage their underlying big learning principle to upgrade EM. Specifically, we present the Big Learning EM (BigLearn-EM), an EM upgrade that simultaneously performs joint, marginal, and orthogonally transformed marginal matchings between data and model distributions. Through simulated experiments, we empirically show that BigLearn-EM is capable of delivering the optimal solution with high probability; comparisons on benchmark clustering datasets further demonstrate its effectiveness and advantages over existing techniques. The code is available at https://github.com/YulaiCong/Big-Learning-Expectation-Maximization.
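Based only on the abstract's high-level description, below is a minimal, hypothetical sketch of how an EM iteration for a Gaussian mixture could alternate between the joint view of the data and randomly chosen, orthogonally transformed marginal views. The function names, the mixing probability p_joint, and the way marginal responsibilities are reused in the M-step are illustrative assumptions, not the authors' algorithm; the actual implementation is in the linked repository.

```python
# Hypothetical sketch of alternating joint / (rotated) marginal EM updates for a
# Gaussian mixture. NOT the authors' BigLearn-EM implementation; see
# https://github.com/YulaiCong/Big-Learning-Expectation-Maximization for that.
import numpy as np
from scipy.stats import multivariate_normal, ortho_group


def em_step_on_view(X, weights, means, covs, dims):
    """One EM step whose E-step uses only the feature subset `dims` of X."""
    K = len(weights)
    Xv = X[:, dims]
    # E-step: responsibilities under the marginal mixture restricted to `dims`.
    log_r = np.stack([
        np.log(weights[k]) + multivariate_normal.logpdf(
            Xv, means[k][dims], covs[k][np.ix_(dims, dims)], allow_singular=True)
        for k in range(K)
    ], axis=1)
    log_r -= log_r.max(axis=1, keepdims=True)
    r = np.exp(log_r)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: refresh the full parameters with the responsibilities from this view.
    Nk = r.sum(axis=0) + 1e-10
    weights = Nk / len(X)
    means = (r.T @ X) / Nk[:, None]
    covs = np.stack([
        ((X - means[k]).T * r[:, k]) @ (X - means[k]) / Nk[k]
        + 1e-6 * np.eye(X.shape[1])
        for k in range(K)
    ])
    return weights, means, covs


def biglearn_em_sketch(X, K=3, n_iter=200, p_joint=0.5, seed=0):
    """Illustrative alternating-view EM; assumes X has at least 2 feature dims."""
    rng = np.random.default_rng(seed)
    N, D = X.shape
    weights = np.full(K, 1.0 / K)
    means = X[rng.choice(N, K, replace=False)]
    covs = np.stack([np.cov(X.T) + 1e-6 * np.eye(D) for _ in range(K)])
    for _ in range(n_iter):
        if rng.random() < p_joint:
            # Joint matching: a standard EM step on all dimensions.
            weights, means, covs = em_step_on_view(X, weights, means, covs, np.arange(D))
        else:
            # Orthogonally transformed marginal matching: rotate data and current
            # parameters, run an EM step on a random subset of rotated dimensions,
            # then rotate the updated parameters back to the original coordinates.
            Q = ortho_group.rvs(D, random_state=rng)
            dims = rng.choice(D, size=rng.integers(1, D + 1), replace=False)
            means_v = means @ Q.T
            covs_v = np.einsum('ij,kjl,ml->kim', Q, covs, Q)
            w, m, c = em_step_on_view(X @ Q.T, weights, means_v, covs_v, dims)
            weights = w
            means = m @ Q
            covs = np.einsum('ij,kjl,ml->kim', Q.T, c, Q.T)
    return weights, means, covs
```

One possible reading of "marginal matching" in this sketch is that responsibilities computed on a marginal (or rotated-marginal) view still drive a full-parameter M-step; the paper may define the matchings differently, so treat this purely as an illustration of the alternating-view idea.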

Cite

APA: Cong, Y., & Li, S. (2024). Big Learning Expectation Maximization. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, pp. 11669–11677). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v38i10.29050
