In this paper we unify divergence minimization and statistical inference by means of convex duality. In the process, we prove that maximum a posteriori estimation arises as a special case of the dual of approximate maximum entropy estimation. Moreover, our treatment leads to stability and convergence bounds for many statistical learning problems. Finally, we show how an algorithm due to Zhang can be used to solve this class of optimization problems efficiently. © Springer-Verlag Berlin Heidelberg 2006.
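The maximum-entropy/MAP duality claimed above can be sketched as follows; this is a standard form of the result, not the paper's exact derivation, and the symbols (feature map $\phi$, empirical moments $\tilde{\mu}$, relaxation radius $\epsilon$) are assumptions for illustration. Approximate maximum entropy estimation relaxes the moment-matching constraint to a norm ball:

```latex
% Primal: approximate maximum entropy (KL divergence to a reference p_0,
% moments matched only up to tolerance epsilon)
\min_{p}\; D(p \,\|\, p_0)
\quad \text{s.t.} \quad
\bigl\| \mathbb{E}_{p}[\phi(x)] - \tilde{\mu} \bigr\| \le \epsilon .

% Lagrangian duality over exponential-family densities
% p_\theta(x) \propto p_0(x)\exp(\langle \phi(x), \theta \rangle)
% yields the penalized log-likelihood
\max_{\theta}\; \langle \tilde{\mu}, \theta \rangle - \log Z(\theta)
  - \epsilon \,\| \theta \|_{*} ,

% which is MAP estimation under the prior
% \pi(\theta) \propto \exp(-\epsilon \|\theta\|_{*});
% e.g. a squared-\ell_2 relaxation corresponds to a Gaussian prior.
```

Here $\|\cdot\|_{*}$ denotes the dual norm of the one used in the constraint; the tolerance $\epsilon$ reappears as the strength of the regularizer, which is what makes the dual a MAP problem rather than plain maximum likelihood.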
CITATION STYLE
Altun, Y., & Smola, A. (2006). Unifying divergence minimization and statistical inference via convex duality. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4005 LNAI, pp. 139–153). Springer Verlag. https://doi.org/10.1007/11776420_13