Efficient data representations that preserve information

Abstract

A fundamental issue in computational learning theory, as well as in biological information processing, is the best possible relationship between a model's representation complexity and its prediction accuracy. Clearly, we expect more complex models, which require longer data representations, to be more accurate. Can one provide a quantitative, yet general, formulation of this trade-off? In this talk I will discuss this question from the perspective of Shannon's information theory. I will argue that this trade-off can be traced back to the basic duality between source and channel coding, and that it is also related to the notion of "coding with side information". I will review some of the theoretical achievability results for such relevant data representations and discuss our algorithms for extracting them. I will then demonstrate the application of these ideas to the analysis of natural language corpora and speculate on possibly universal aspects of human language that they reveal. Based on joint work with Ran Bacharach, Gal Chechik, Amir Globerson, Amir Navot, and Noam Slonim.
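As a rough sketch of the kind of quantitative formulation the abstract alludes to, one standard statement of the complexity-accuracy trade-off from the information bottleneck literature (the variables X, Y, T and the multiplier β below are notational assumptions for illustration, not taken from the talk) casts it as a variational problem:

\[
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta\, I(T;Y),
\]

where X is the observed source, Y is the relevance (prediction) variable, T is the compressed representation obtained through the stochastic map p(t|x), I(·;·) denotes mutual information, and β ≥ 0 sets the exchange rate between representation complexity I(X;T) and preserved predictive information I(T;Y).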

Cite (APA)

Tishby, N. (2003). Efficient data representations that preserve information. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2842, p. 16). Springer Verlag. https://doi.org/10.1007/978-3-540-39644-4_4
