Distributed algorithms for computing very large thresholded covariance matrices

Abstract

Computation of covariance matrices from observed data is an important problem, as such matrices are used in applications such as principal component analysis (PCA), linear discriminant analysis (LDA), and increasingly in the learning and application of probabilistic graphical models. However, computing an empirical covariance matrix is not always easy. There are two key difficulties in computing such a matrix from a very high-dimensional dataset. The first is over-fitting. For a p-dimensional covariance matrix, there are p(p - 1)/2 unique, off-diagonal entries in the empirical covariance matrix Ŝ; for large p (say, p > 10⁵), the size n of the dataset is often much smaller than the number of covariances to compute. Over-fitting is a concern in any situation in which the number of parameters learned can greatly exceed the size of the dataset. Thus, there are strong theoretical reasons to expect that for high-dimensional data - even Gaussian data - the empirical covariance matrix is not a good estimate of the true covariance matrix underlying the generative process. The second difficulty is computational: computing a covariance matrix takes O(np²) time, which is debilitating for large p (greater than 10,000) and n much greater than p. In this article, we consider how both of these difficulties can be handled simultaneously. Specifically, a key regularization technique for high-dimensional covariance estimation is thresholding, in which the smallest or least significant entries in the covariance matrix are simply dropped and replaced with the value 0. This suggests an obvious way to address the computational difficulty as well: first, compute the identities of the K entries in the covariance matrix that are actually important, in the sense that they will not be removed during thresholding, and then, in a second step, compute the values of those entries. This can be done in O(Kn) time. If K ≪ p² and the identities of the important entries can be computed in reasonable time, then this is a big win. The key technical contribution of this article is the design and implementation of two different distributed algorithms that use sampling to quickly approximate the identities of the important entries. We have implemented these methods and tested them on an 800-core compute cluster, running experiments on real datasets with millions of data points and up to 40,000 dimensions. These experiments show that the proposed methods are both accurate and efficient.
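The two-phase strategy sketched in the abstract - first identify the K entries that survive thresholding, then compute only those entries exactly in O(Kn) time - can be illustrated with a minimal single-machine NumPy sketch. This is not the article's distributed algorithm: the candidate-selection phase here simply thresholds the covariance of a small row sample, and the function name, `tau`, and `sample_size` are illustrative choices, not parameters from the paper.

```python
import numpy as np

def thresholded_covariance(X, tau, sample_size=200, seed=0):
    """Two-phase thresholded covariance estimate (illustrative sketch).

    Phase 1: approximate the covariance on a small row sample and keep
    the entries whose magnitude is at least tau (the 'important' entries).
    Phase 2: compute only those K entries exactly on the full data,
    costing O(Kn) rather than O(n p^2).
    """
    n, p = X.shape
    rng = np.random.default_rng(seed)
    idx = rng.choice(n, size=min(sample_size, n), replace=False)

    # Phase 1: covariance of the row sample (cheap when sample_size << n).
    S_hat = np.cov(X[idx], rowvar=False)
    rows, cols = np.nonzero(np.triu(np.abs(S_hat) >= tau))

    # Phase 2: exact covariance on the full data, only for the kept entries;
    # everything dropped by thresholding stays at 0.
    Xc = X - X.mean(axis=0)
    S = np.zeros((p, p))
    for i, j in zip(rows, cols):
        v = Xc[:, i] @ Xc[:, j] / (n - 1)
        S[i, j] = S[j, i] = v
    return S
```

The sample-based phase 1 can of course misclassify entries near the threshold; the article's contribution is doing this candidate identification accurately and in parallel at much larger scale.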

CITATION STYLE

APA

Gao, Z. J., & Jermaine, C. (2016). Distributed algorithms for computing very large thresholded covariance matrices. ACM Transactions on Knowledge Discovery from Data, 11(2). https://doi.org/10.1145/2935750
