Sample-efficient learning of mixtures


Abstract

We consider PAC learning of probability distributions (a.k.a. density estimation), where we are given an i.i.d. sample generated from an unknown target distribution and want to output a distribution that is close to the target in total variation distance. Let F be an arbitrary class of probability distributions, and let F_k denote the class of k-mixtures of elements of F. Assuming the existence of a method for learning F with sample complexity m_F(ε), we provide a method for learning F_k with sample complexity O(k log k · m_F(ε) / ε^2). Our mixture learning algorithm has the property that, if the F-learner is proper and agnostic, then the F_k-learner is proper and agnostic as well. This general result enables us to improve the best known sample complexity upper bounds for a variety of important mixture classes. First, we show that the class of mixtures of k axis-aligned Gaussians in R^d is PAC-learnable in the agnostic setting with O(kd/ε^4) samples, which is tight in k and d up to logarithmic factors. Second, we show that the class of mixtures of k Gaussians in R^d is PAC-learnable in the agnostic setting with sample complexity O(kd^2/ε^4), which improves the previously known bounds of O(k^3 d^2/ε^4) and O(k^4 d^4/ε^2) in their dependence on k and d. Finally, we show that the class of mixtures of k log-concave distributions over R^d is PAC-learnable using O(d^((d+5)/2) · ε^(-(d+9)/2) · k) samples.
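The main result turns any sample-complexity bound m_F(ε) for a base class F into a bound of O(k log k · m_F(ε) / ε^2) for k-mixtures. A toy sketch of how the bound composes, where the helper name and the constant C (hidden by the O(·) notation) are hypothetical, not from the paper:

```python
import math

def mixture_sample_bound(m_f, k, eps, C=1.0):
    """Hypothetical evaluation of the paper's upper bound
    O(k log k * m_F(eps) / eps^2) for learning k-mixtures of F.

    m_f : callable giving the base class's sample complexity m_F(eps)
    C   : stand-in constant hidden by the O(.) notation (assumption)
    """
    return C * k * math.log(k) * m_f(eps) / eps ** 2

# Example: axis-aligned Gaussians in R^d can be learned with
# m_F(eps) = O(d / eps^2) samples, so plugging that in recovers the
# O(k d / eps^4) bound for their k-mixtures (up to log factors).
d = 10
bound = mixture_sample_bound(lambda eps: d / eps ** 2, k=5, eps=0.1)
```

This only evaluates the asymptotic formula with a placeholder constant; it is not the learning algorithm itself, which runs the F-learner on candidate components and selects among the resulting mixtures.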

APA

Ashtiani, H., Ben-David, S., & Mehrabian, A. (2018). Sample-efficient learning of mixtures. In 32nd AAAI Conference on Artificial Intelligence, AAAI 2018 (pp. 2679–2686). AAAI press. https://doi.org/10.1609/aaai.v32i1.11627
