Unifying divergence minimization and statistical inference via convex duality


Abstract

In this paper we unify divergence minimization and statistical inference by means of convex duality. In the process, we prove that, as a special case, the dual of approximate maximum entropy estimation is maximum a posteriori estimation. Moreover, our treatment leads to stability and convergence bounds for many statistical learning problems. Finally, we show how an algorithm by Zhang can be used to solve this class of optimization problems efficiently. © Springer-Verlag Berlin Heidelberg 2006.
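The duality claimed in the abstract can be illustrated concretely. The following is a minimal sketch (not the paper's implementation, and all names here are illustrative): for a discrete exponential family, relaxing the moment-matching constraints of maximum entropy estimation with a squared-norm penalty yields a dual that is exactly L2-regularized maximum likelihood, i.e. MAP estimation with a Gaussian prior on the natural parameters.

```python
# Sketch: dual of approximate maximum entropy = MAP estimation
# (Gaussian prior), for a finite-state exponential family.
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

rng = np.random.default_rng(0)
n_states, n_feats = 6, 3
Phi = rng.normal(size=(n_states, n_feats))          # feature map phi(x)
samples = rng.integers(0, n_states, size=50)        # toy data
mu_hat = Phi[samples].mean(axis=0)                  # empirical moments
eps = 0.1                                           # relaxation width / prior scale

def dual(lam):
    # log-partition - <lam, empirical moments> + Gaussian-prior term:
    # this is the negative MAP objective of the exponential family.
    return logsumexp(Phi @ lam) - lam @ mu_hat + 0.5 * eps * lam @ lam

lam_star = minimize(dual, np.zeros(n_feats), method="BFGS").x
p = np.exp(Phi @ lam_star - logsumexp(Phi @ lam_star))  # primal solution

# KKT condition of the relaxed maxent problem: model moments match
# the empirical ones up to the slack eps * lam_star.
moment_gap = Phi.T @ p - (mu_hat - eps * lam_star)
```

At the optimum the gradient of `dual` vanishes, so `E_p[phi] = mu_hat - eps * lam_star`; as `eps → 0` this recovers exact moment matching (plain maximum entropy), while `eps > 0` corresponds to a Gaussian prior of variance `1/eps`.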

APA

Altun, Y., & Smola, A. (2006). Unifying divergence minimization and statistical inference via convex duality. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4005 LNAI, pp. 139–153). Springer Verlag. https://doi.org/10.1007/11776420_13
