An optimization approach of deriving bounds between entropy and error from joint distribution: Case study for binary classifications

Abstract

In this work, we propose a new approach to deriving bounds between entropy and error from a joint distribution by means of optimization. A specific case study is given for binary classification. Two basic types of classification error are investigated, namely Bayesian and non-Bayesian errors. Non-Bayesian errors are considered because most classifiers yield non-Bayesian solutions. For both types of error, we derive closed-form relations between each bound and the error components. When Fano's lower bound in a diagram of "error probability vs. conditional entropy" is realized by this approach, its interpretation is broadened to include non-Bayesian errors and the two situations associated with independence of the variables. A new upper bound on the Bayesian error is derived with respect to the minimum prior probability; it is generally tighter than Kovalevskij's upper bound.
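For orientation, the two classical bounds that the abstract takes as reference points can be stated in their standard binary-case form (with entropy measured in bits; the notation Y for the class label, X for the feature, and P_e for the error probability is assumed here and is not taken from the paper, nor is the paper's refined bound reproduced). Fano's inequality gives a lower bound on the error via

    H(Y | X) <= H_b(P_e),   where   H_b(p) = -p \log_2 p - (1-p) \log_2 (1-p),

so that P_e >= H_b^{-1}(H(Y | X)) on the branch P_e in [0, 1/2]. Kovalevskij's inequality bounds the Bayesian error from above,

    P_e <= (1/2) H(Y | X).

The new result described in the abstract replaces this upper bound with one that also depends on the minimum prior probability and is generally tighter.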

Cite

APA

Hu, B. G., & Xing, H. J. (2016). An optimization approach of deriving bounds between entropy and error from joint distribution: Case study for binary classifications. Entropy, 18(2). https://doi.org/10.3390/e18020059
