Information theoretic learning and kernel methods

Abstract

In this chapter, we discuss important connections between two different approaches to machine learning, namely Rényi entropy-based information theoretic learning and Mercer kernel methods. We show that Parzen windowing for the estimation of probability density functions reveals these connections, enabling the information theoretic criteria to be expressed in terms of mean vectors in a Mercer kernel feature space or, equivalently, in terms of kernel matrices. From this we learn not only that these two hitherto separate paradigms in machine learning are related; it also enables us to interpret and understand methods developed in one paradigm in terms of the other, and to develop new, more sophisticated machine learning algorithms based on both approaches. © 2009 Springer US.
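The central link the abstract describes can be made concrete with a small sketch: plugging a Gaussian Parzen window estimate of the density into the Rényi quadratic entropy turns the integral into a double sum over a kernel matrix, which is exactly the squared norm of the mean vector in the kernel feature space. The function below is an illustrative assumption-laden sketch (function name, bandwidth handling, and normalization are my own choices, not taken from the chapter):

```python
import numpy as np

def renyi_quadratic_entropy(X, sigma=1.0):
    """Parzen-window estimate of the Renyi quadratic entropy.

    H2 = -log( (1/N^2) * sum_ij k(x_i, x_j) ), where k is a Gaussian
    kernel of variance 2*sigma^2 (the convolution of two Parzen
    windows of width sigma). The double sum (1/N^2) * 1'K1 equals the
    squared norm of the mean vector of the data in the kernel
    feature space, which is the connection discussed in the chapter.
    """
    N, d = X.shape
    s2 = 2.0 * sigma ** 2  # variance of the convolved Gaussian kernel
    sq = np.sum(X ** 2, axis=1)
    # Pairwise squared Euclidean distances via broadcasting.
    D2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    K = np.exp(-np.maximum(D2, 0.0) / (2.0 * s2)) / (2.0 * np.pi * s2) ** (d / 2.0)
    info_potential = K.mean()  # "information potential" = ||mean feature vector||^2
    return -np.log(info_potential)
```

As a sanity check on the estimator's behavior, tightly clustered samples yield a larger information potential and hence a lower entropy estimate than widely spread samples.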

Citation (APA)
Jenssen, R. (2009). Information theoretic learning and kernel methods. In Information Theory and Statistical Learning (pp. 209–230). Springer US. https://doi.org/10.1007/978-0-387-84816-7_9
