The paradigms of data programming, which uses weak supervision in the form of rules or labelling functions, and semi-supervised learning, which augments a small amount of labelled data with a large unlabelled dataset, have shown great promise in several text classification scenarios. In this work, we argue that by not using any labelled data, data programming based approaches can yield sub-optimal performance, particularly when the labelling functions are noisy. Our first contribution is SPEAR, a semi-supervised data programming framework that learns a joint model combining the rules/labelling functions with semi-supervised loss functions on the feature space. We then study SPEAR-SS, which additionally performs subset selection on top of the joint semi-supervised data programming objective to select the set of examples SPEAR uses as its labelled set. The goal of SPEAR-SS is to ensure that the labelled data complements the labelling functions, thereby benefiting both from data programming and from appropriately selected data for human labelling. We demonstrate that by effectively combining the semi-supervision, data-programming, and subset-selection paradigms, we significantly outperform the current state of the art on seven publicly available datasets.
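To make the weak-supervision idea concrete, the sketch below shows hypothetical labelling functions that each vote a class label or abstain, aggregated by a simple majority vote. All names and rules here are illustrative assumptions; this is not the SPEAR joint model, which additionally learns from semi-supervised losses on the feature space.

```python
# Minimal, hypothetical sketch of data programming with labelling
# functions (LFs): each LF returns a class label or ABSTAIN.
# Illustrative only -- not the SPEAR model from the paper.

ABSTAIN = -1
SPAM, HAM = 1, 0

def lf_contains_offer(text):
    # Weak rule: promotional wording suggests spam.
    return SPAM if "offer" in text.lower() else ABSTAIN

def lf_contains_meeting(text):
    # Weak rule: scheduling wording suggests a legitimate email.
    return HAM if "meeting" in text.lower() else ABSTAIN

LFS = [lf_contains_offer, lf_contains_meeting]

def majority_vote(text):
    """Aggregate LF votes by majority; abstain when no LF fires."""
    votes = [vote for lf in LFS if (vote := lf(text)) != ABSTAIN]
    if not votes:
        return ABSTAIN
    return max(set(votes), key=votes.count)
```

In practice such noisy votes are combined by a learned label model rather than a plain majority vote, which is one motivation for the joint objective described above.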
Maheshwari, A., Chatterjee, O., Killamsetty, K., Ramakrishnan, G., & Iyer, R. (2021). Semi-Supervised Data Programming with Subset Selection. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (pp. 4640–4651). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-acl.408