Deep-PUMR: Deep positive and unlabeled learning with manifold regularization

Abstract

Training a binary classifier from only positive and unlabeled examples (PU learning) is an important yet challenging problem that arises whenever negative examples are difficult to obtain. Existing methods often perform unsatisfactorily because they ignore the relations between positive and unlabeled examples and are confined to shallow learning frameworks. This work therefore proposes a new approach, Deep Positive and Unlabeled learning with Manifold Regularization (Deep-PUMR), which integrates manifold regularization with deep neural networks to address these issues in classic PU learning. Deep-PUMR offers two major advantages: (i) it exploits the manifold structure of the data distribution to capture the relationship between positive and unlabeled examples; (ii) the adopted deep network gives Deep-PUMR strong learning ability, especially on large-scale datasets. Extensive experiments on five diverse datasets demonstrate that Deep-PUMR achieves state-of-the-art performance compared with classic PU learning algorithms and risk estimators.
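
The abstract does not give the Deep-PUMR objective itself, but the general recipe it describes (a deep binary classifier trained on positive and unlabeled data, plus a manifold, i.e., graph-Laplacian, regularizer that couples predictions of nearby examples) can be sketched roughly as follows. This is an assumption-laden illustration in PyTorch, not the authors' implementation: it substitutes the standard non-negative PU risk estimator (Kiryo et al., 2017) for the paper's risk, and the class_prior, laplacian_weight, k-NN graph construction, and network architecture are all illustrative choices.

import torch
import torch.nn as nn

class MLP(nn.Module):
    # Simple feed-forward scorer g(x); the paper's actual architecture is not given here.
    def __init__(self, in_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)  # raw scores g(x)

def sigmoid_loss(scores, target_sign):
    # Surrogate loss l(z, y) = sigmoid(-y * z), commonly used in PU learning.
    return torch.sigmoid(-target_sign * scores)

def nn_pu_risk(scores_p, scores_u, class_prior):
    # Non-negative PU risk: pi * R_p^+ + max(0, R_u^- - pi * R_p^-).
    # Assumed stand-in; not necessarily the risk used by Deep-PUMR.
    risk_p_pos = sigmoid_loss(scores_p, +1.0).mean()
    risk_p_neg = sigmoid_loss(scores_p, -1.0).mean()
    risk_u_neg = sigmoid_loss(scores_u, -1.0).mean()
    neg_part = risk_u_neg - class_prior * risk_p_neg
    return class_prior * risk_p_pos + torch.clamp(neg_part, min=0.0)

def manifold_penalty(scores, x, k=5):
    # Graph-Laplacian smoothness: sum_ij W_ij (g(x_i) - g(x_j))^2,
    # with W a symmetrized k-NN affinity matrix from Euclidean distances.
    with torch.no_grad():
        dist = torch.cdist(x, x)
        knn = dist.topk(k + 1, largest=False).indices[:, 1:]  # drop self-neighbor
        W = torch.zeros_like(dist)
        W.scatter_(1, knn, 1.0)
        W = torch.maximum(W, W.T)
    diff = scores.unsqueeze(1) - scores.unsqueeze(0)
    return (W * diff.pow(2)).sum() / W.sum().clamp(min=1.0)

def deep_pumr_step(model, opt, x_p, x_u, class_prior=0.3, laplacian_weight=1e-2):
    # One training step: PU risk on (P, U) batches plus the manifold term
    # over the combined batch. Hyperparameter values are illustrative.
    opt.zero_grad()
    scores_p, scores_u = model(x_p), model(x_u)
    x_all = torch.cat([x_p, x_u], dim=0)
    scores_all = torch.cat([scores_p, scores_u], dim=0)
    loss = nn_pu_risk(scores_p, scores_u, class_prior) \
         + laplacian_weight * manifold_penalty(scores_all, x_all)
    loss.backward()
    opt.step()
    return loss.item()

if __name__ == "__main__":
    # Toy usage with synthetic data, purely to show the training loop shape.
    torch.manual_seed(0)
    x_p, x_u = torch.randn(64, 10) + 1.0, torch.randn(256, 10)
    model = MLP(in_dim=10)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(100):
        deep_pumr_step(model, opt, x_p, x_u)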

Citation (APA)

Chen, X., Liu, F., Tu, E., Cao, L., & Yang, J. (2018). Deep-PUMR: Deep positive and unlabeled learning with manifold regularization. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11301 LNCS, pp. 12–20). Springer Verlag. https://doi.org/10.1007/978-3-030-04167-0_2
