Maximum margin partial label learning


Abstract

Partial label learning aims to learn from training examples each associated with a set of candidate labels, among which only one is the valid label for the example. The basic strategy for learning from partial label examples is disambiguation, i.e., recovering the ground-truth labeling information from the candidate label set. Maximum margin techniques, one of the popular machine learning paradigms, have been employed to solve the partial label learning problem. Existing attempts perform disambiguation by optimizing the margin between the maximum modeling output over candidate labels and that over non-candidate labels. However, this formulation ignores the margin between the ground-truth label and the other candidate labels. In this paper, a new maximum margin formulation for partial label learning is proposed which directly optimizes the margin between the ground-truth label and all other labels. Specifically, the predictive model is learned via an alternating optimization procedure which iteratively coordinates the tasks of ground-truth label identification and margin maximization. Extensive experiments on artificial as well as real-world datasets show that the proposed approach is highly competitive with other well-established partial label learning approaches.
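The alternating procedure described above can be sketched in a few lines of numpy. This is an illustrative toy version under simplifying assumptions (linear models, a basic multiclass hinge update against the single strongest competitor), not the paper's exact algorithm: step 1 identifies the ground-truth label as the top-scoring candidate under the current model, and step 2 takes a subgradient step that enlarges the margin between the identified label and all other labels.

```python
import numpy as np

def alternating_pll(X, cand, n_labels, iters=50, lr=0.1, lam=0.01):
    """Toy sketch of margin-based partial label learning via alternating
    optimization (illustrative only, not the paper's exact method).

    X        : (n, d) feature matrix
    cand     : list of n arrays, each holding that example's candidate labels
    n_labels : total number of class labels
    """
    n, d = X.shape
    W = np.zeros((n_labels, d))          # one linear model per label
    for _ in range(iters):
        scores = X @ W.T                 # (n, n_labels) modeling outputs
        # Step 1: identify the ground-truth label as the best-scoring candidate
        y = np.array([c[np.argmax(scores[i, c])] for i, c in enumerate(cand)])
        # Step 2: subgradient step on a multiclass hinge loss enforcing a
        # unit margin between the identified label and all other labels
        grad = lam * W                   # L2 regularization term
        for i in range(n):
            s = scores[i].copy()
            s[y[i]] = -np.inf
            r = np.argmax(s)             # strongest competing label
            if scores[i, y[i]] - scores[i, r] < 1.0:   # margin violated
                grad[y[i]] -= X[i]
                grad[r] += X[i]
        W -= lr * grad / n
    return W
```

A usage sketch: train on examples whose candidate sets contain the true label plus a random distractor, then predict with `np.argmax(X @ W.T, axis=1)`. When every candidate set is a singleton, the procedure degenerates to ordinary supervised multiclass margin maximization.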

Citation (APA)

Yu, F., & Zhang, M. L. (2017). Maximum margin partial label learning. Machine Learning, 106(4), 573–593. https://doi.org/10.1007/s10994-016-5606-4
