Sparse inverse covariance estimation for graph representation of feature structure

Abstract

The access to more information provided by modern high-throughput measurement systems has made it possible to investigate finer details of complex systems. However, it has also increased the number of features, and thereby the dimensionality of the data, to be processed in data analysis. Higher dimensionality makes understanding complex systems particularly challenging, because it blows up the number of possible configurations of features we need to consider. Structure learning with the Gaussian Markov random field can provide a remedy, by identifying the conditional independence structure of features in a form that is easy to visualize and understand. The learning is based on a convex optimization problem, called sparse inverse covariance estimation, for which many efficient algorithms have been developed in the past few years. When the number of dimensions is much larger than the sample size, structure learning must also account for statistical stability; here, connections to data mining arise in terms of discovering common or rare subgraphs as patterns. The outcome of structure learning can be visualized as a graph, annotated with additional information if required, providing a perceivable way to investigate complex feature spaces.
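The estimation problem the abstract refers to can be illustrated with a small sketch (not the author's code): the graphical lasso fits a sparse precision (inverse covariance) matrix, and the zero pattern of that matrix defines the edges of the Gaussian Markov random field. The sketch below uses scikit-learn's `GraphicalLasso` estimator on synthetic data drawn from a known chain-structured model; the regularization strength `alpha=0.05` and the sample size are illustrative choices, not values from the paper.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# Ground-truth precision matrix for a chain graph 0 - 1 - 2 - 3:
# only adjacent features are conditionally dependent.
prec = np.array([[2.0, 0.6, 0.0, 0.0],
                 [0.6, 2.0, 0.6, 0.0],
                 [0.0, 0.6, 2.0, 0.6],
                 [0.0, 0.0, 0.6, 2.0]])
cov = np.linalg.inv(prec)
X = rng.multivariate_normal(np.zeros(4), cov, size=500)

# Sparse inverse covariance estimation via the graphical lasso.
model = GraphicalLasso(alpha=0.05).fit(X)

# Edges of the learned graph: nonzero off-diagonal entries of the
# estimated precision matrix (zeros encode conditional independence).
edges = [(i, j) for i in range(4) for j in range(i + 1, 4)
         if abs(model.precision_[i, j]) > 1e-6]
print(edges)
```

With enough samples the recovered edge set should include the chain edges (0, 1), (1, 2), (2, 3); larger `alpha` values yield sparser graphs, which is the usual knob for trading off stability against detail.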

Citation (APA)
Lee, S. (2014). Sparse inverse covariance estimation for graph representation of feature structure. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 8401, 227–240. https://doi.org/10.1007/978-3-662-43968-5_13
