A brief introduction to weakly supervised learning

1.3k citations · 1.6k Mendeley readers

This article is free to access.

Abstract

Supervised learning techniques construct predictive models by learning from a large number of training examples, where each training example has a label indicating its ground-truth output. Though current techniques have achieved great success, it is noteworthy that in many tasks it is difficult to get strong supervision information like fully ground-truth labels due to the high cost of the data-labeling process. Thus, it is desirable for machine-learning techniques to work with weak supervision. This article reviews some research progress of weakly supervised learning, focusing on three typical types of weak supervision: incomplete supervision, where only a subset of training data is given with labels; inexact supervision, where the training data are given with only coarse-grained labels; and inaccurate supervision, where the given labels are not always ground-truth.
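The three settings differ mainly in what label information is attached to each training example. As a toy illustration only (the variable names, the simulated data and the use of scikit-learn's LabelPropagation are assumptions of this sketch, not methods from the article), the snippet below represents each kind of weak supervision and runs a standard semi-supervised learner on the incomplete-supervision case.

```python
# Toy illustration of the three weak-supervision settings described in the
# abstract. Library choice (NumPy, scikit-learn) and all names are
# illustrative assumptions, not part of the reviewed article.
import numpy as np
from sklearn.semi_supervised import LabelPropagation

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y_true = (X[:, 0] + X[:, 1] > 0).astype(int)   # ground-truth labels

# 1) Incomplete supervision: only a subset of examples is labeled.
#    scikit-learn's convention marks unlabeled examples with -1.
y_incomplete = y_true.copy()
y_incomplete[rng.random(100) > 0.2] = -1        # keep roughly 20% of the labels

# A standard semi-supervised learner (label propagation) can exploit the
# unlabeled examples alongside the labeled ones.
model = LabelPropagation().fit(X, y_incomplete)
print("accuracy with ~20% labels:", (model.transduction_ == y_true).mean())

# 2) Inexact supervision: labels are only coarse-grained, e.g. one label per
#    *bag* of instances (as in multi-instance learning), not per instance.
bags = [X[i:i + 5] for i in range(0, 100, 5)]
bag_labels = [int(y_true[i:i + 5].any()) for i in range(0, 100, 5)]

# 3) Inaccurate supervision: every example has a label, but some are wrong.
y_noisy = y_true.copy()
flip = rng.random(100) < 0.2                    # ~20% label noise
y_noisy[flip] = 1 - y_noisy[flip]
```

The sketch only shows how the label information is structured in each setting; the article itself surveys the learning techniques (e.g. semi-supervised learning, multi-instance learning, and label-noise learning) that address them.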

Citation (APA)

Zhou, Z.-H. (2018). A brief introduction to weakly supervised learning. National Science Review, 5(1), 44–53. https://doi.org/10.1093/nsr/nwx106
