Constraining type II error: Building intentionally biased classifiers

Abstract

In many applications, false positives (type I error) and false negatives (type II error) have different impact. In medicine, falsely diagnosing a healthy person as sick (a false positive) is not considered as harmful as diagnosing a sick person as healthy (a false negative). Even so, we must accept some rate of false negatives in order to make the classification task feasible at all. Where the line is drawn is subjective and prone to controversy. Usually, this compromise is encoded in a cost matrix that defines an exchange rate between the two kinds of error. For many reasons, however, it might not be natural to think of this trade-off in terms of relative costs. We explore novel learning paradigms where the trade-off is instead given as the rate of false negatives we are willing to tolerate. The classifier then tries to minimize false positives while keeping false negatives within the acceptable bound. Here we consider classifiers based on kernel density estimation, gradient descent modifications, and thresholding of classification and ranking scores.
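The thresholding variant mentioned in the abstract can be illustrated with a short sketch: pick, on held-out data, the highest score threshold whose false-negative rate stays within the tolerated bound, so that false positives are minimized subject to that constraint. The following is a minimal illustration, not the authors' implementation; the helper name `pick_threshold`, the bound of 5%, and the synthetic data are all assumptions for the example.

```python
# Illustrative sketch: constrain the false-negative rate by choosing a
# decision threshold on held-out scores, then minimize false positives.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def pick_threshold(scores, y_true, max_fnr=0.05):
    """Return the highest threshold whose false-negative rate on
    (scores, y_true) is at most max_fnr.

    Since the FNR only decreases as the threshold is lowered, the first
    admissible threshold scanning from high to low is the one with the
    fewest false positives."""
    candidates = np.sort(scores[y_true == 1])[::-1]  # high to low
    n_pos = (y_true == 1).sum()
    for t in candidates:
        pred = (scores >= t).astype(int)
        fn = ((pred == 0) & (y_true == 1)).sum()
        if fn / n_pos <= max_fnr:
            return t
    return -np.inf  # degenerate case: predict everything positive

# Hypothetical usage on synthetic, imbalanced data.
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, stratify=y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
t = pick_threshold(clf.predict_proba(X_val)[:, 1], y_val, max_fnr=0.05)
print(f"chosen threshold = {t:.3f}")
```

The same idea applies to any scoring or ranking model: only the threshold-selection step enforces the type II error constraint, while the underlying scorer is trained as usual.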

Cite (APA)

Cruz, R., Fernandes, K., Pinto Costa, J. F., & Cardoso, J. S. (2017). Constraining type II error: Building intentionally biased classifiers. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10306 LNCS, pp. 549–560). Springer Verlag. https://doi.org/10.1007/978-3-319-59147-6_47
