Generalized mean based back-propagation of errors for ambiguity resolution

Abstract

Ambiguity in a dataset, characterized by data points having multiple target labels, may occur in many supervised learning applications. Such ambiguity may arise naturally, or from misinterpretation, faulty encoding, or incomplete data. However, most applications demand that a data point be assigned a single label, so the supervised learner must resolve the ambiguity. To resolve such ambiguity effectively, we propose a new variant of the popular Multi-Layer Perceptron model, called the Generalized Mean Multi-Layer Perceptron (GMMLP). In GMMLP, a novel differentiable error function guides the back-propagation algorithm towards the nearest (minimum-distance) target for each data point. We evaluate the performance of the proposed algorithm against three alternative ambiguity resolvers on 20 new artificial datasets containing ambiguous data points. To further test scalability and to compare with multi-label classifiers, we also evaluate the new approach on 18 real datasets.

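The abstract suggests that the core idea is to replace the non-differentiable minimum over per-target errors with a generalized (power) mean driven by a negative exponent, so that back-propagation smoothly favours the nearest of a sample's candidate targets. The sketch below illustrates that idea in plain NumPy on a single ambiguous sample in output space; the exponent, normalization, and names such as `generalized_mean_loss` are illustrative assumptions, not the exact GMMLP formulation, and in the actual model this loss would be back-propagated through the MLP weights.

```python
import numpy as np

def generalized_mean_loss(y_hat, targets, p=-6.0, eps=1e-12):
    """Smooth stand-in for the minimum per-target squared error.

    y_hat   : (d,) predicted output vector
    targets : (k, d) candidate target vectors for one ambiguous sample
    p       : power-mean exponent; as p -> -inf the mean tends to the minimum
    Returns the loss value and its gradient w.r.t. y_hat.
    """
    diffs = y_hat - targets                      # (k, d)
    errs = np.sum(diffs ** 2, axis=1) + eps      # per-target squared errors, (k,)
    k = errs.shape[0]
    mean_p = np.mean(errs ** p) ** (1.0 / p)     # generalized (power) mean
    # dM/de_i = (1/k) * M^(1-p) * e_i^(p-1);  de_i/dy_hat = 2 * (y_hat - t_i)
    w = (mean_p ** (1.0 - p)) * (errs ** (p - 1.0)) / k   # per-target weights, (k,)
    grad = 2.0 * np.sum(w[:, None] * diffs, axis=0)       # (d,)
    return mean_p, grad

# Toy demo: an ambiguous sample with two candidate one-hot targets.
targets = np.array([[1.0, 0.0],    # label A
                    [0.0, 1.0]])   # label B
y_hat = np.array([0.55, 0.30])     # current output, slightly closer to label A

for _ in range(200):
    loss, grad = generalized_mean_loss(y_hat, targets, p=-6.0)
    y_hat -= 0.1 * grad            # plain gradient step in output space

print(np.round(y_hat, 3))          # converges towards [1, 0], the nearer target
```

With p tending to minus infinity the power mean converges to the minimum per-target error, so a moderately negative p yields a differentiable surrogate whose gradient is dominated by the nearest target while the remaining targets contribute only weakly.
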
Cite

Datta, S., Mullick, S. S., & Das, S. (2017). Generalized mean based back-propagation of errors for ambiguity resolution. Pattern Recognition Letters, 94, 22–29. https://doi.org/10.1016/j.patrec.2017.04.019
