WeaNF: Weak Supervision with Normalizing Flows

Abstract

A popular approach to reducing the need for costly manual annotation of large data sets is weak supervision, which introduces problems of noisy labels, limited coverage, and bias. Methods for overcoming these problems have relied either on discriminative models, trained with cost functions specific to weak supervision, or, more recently, on generative models that try to model the output of the automatic annotation process. In this work, we explore a novel direction of generative modeling for weak supervision: instead of modeling the output of the annotation process (the labeling function matches), we generatively model the input-side data distributions (the feature space) covered by labeling functions. Specifically, we estimate a density for each weak labeling source, or labeling function, using normalizing flows. An integral part of our method is the flow-based modeling of multiple simultaneously matching labeling functions, which captures phenomena such as labeling function overlap and correlation. We analyze the effectiveness and modeling capabilities of our method on several commonly used weak supervision data sets, and show that weakly supervised normalizing flows compare favorably to standard weak supervision baselines.
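The basic recipe described in the abstract lends itself to a short sketch: fit one normalizing-flow density on the feature vectors of the instances each labeling function matches, then label a new instance by the flow that assigns it the highest likelihood. The minimal PyTorch illustration below is not the authors' code; the RealNVP-style coupling layers, the names (AffineCoupling, FlowDensity, fit_flow), and the uniform prior over labeling functions are assumptions made for the example, and the paper's joint modeling of simultaneously matching labeling functions is omitted.

```python
# Minimal sketch, assuming RealNVP-style affine coupling flows; all names
# here are illustrative, not the paper's API.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Affine coupling layer: transforms the unmasked feature dims
    conditioned on the masked ones (RealNVP-style)."""
    def __init__(self, dim, hidden=64, parity=0):
        super().__init__()
        mask = torch.zeros(dim)
        mask[parity::2] = 1.0  # alternate which dims condition vs. transform
        self.register_buffer("mask", mask)
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * dim),
        )

    def forward(self, x):
        xm = x * self.mask                       # conditioning dims, kept fixed
        s, t = self.net(xm).chunk(2, dim=1)
        s = torch.tanh(s) * (1 - self.mask)      # bounded scales; masked dims untouched
        t = t * (1 - self.mask)
        y = xm + (1 - self.mask) * (x * torch.exp(s) + t)
        return y, s.sum(dim=1)                   # log|det Jacobian| of the transform

class FlowDensity(nn.Module):
    """Stack of coupling layers mapping x to a standard-normal base space."""
    def __init__(self, dim, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            [AffineCoupling(dim, parity=i % 2) for i in range(n_layers)]
        )
        self.base = torch.distributions.MultivariateNormal(
            torch.zeros(dim), torch.eye(dim)
        )

    def log_prob(self, x):
        log_det = torch.zeros(x.shape[0])
        for layer in self.layers:
            x, ld = layer(x)
            log_det = log_det + ld
        # Change of variables: log p(x) = log p_base(f(x)) + log|det J|
        return self.base.log_prob(x) + log_det

def fit_flow(x_matched, epochs=300, lr=1e-3):
    """Fit one flow to the feature vectors a labeling function matched."""
    flow = FlowDensity(x_matched.shape[1])
    opt = torch.optim.Adam(flow.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = -flow.log_prob(x_matched).mean()  # maximum likelihood
        loss.backward()
        opt.step()
    return flow

# Toy usage: two labeling functions covering different regions of a 2-D space.
torch.manual_seed(0)
x_lf0 = torch.randn(256, 2) + torch.tensor([2.0, 0.0])   # matches of LF 0
x_lf1 = torch.randn(256, 2) - torch.tensor([2.0, 0.0])   # matches of LF 1
flows = [fit_flow(x) for x in (x_lf0, x_lf1)]

# Label new points by the flow with the highest density
# (a uniform prior over labeling functions is assumed here).
x_new = torch.tensor([[1.5, 0.3], [-2.2, -0.1]])
scores = torch.stack([f.log_prob(x_new) for f in flows], dim=1)
print(scores.argmax(dim=1))  # expected: tensor([0, 1])
```

Modeling the feature space directly, as this sketch does, is what lets density overlap between flows stand in for labeling function overlap and correlation, rather than modeling the match matrix itself.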

Cite (APA)

Stephan, A., & Roth, B. (2022). WeaNF: Weak Supervision with Normalizing Flows. In Proceedings of the 7th Workshop on Representation Learning for NLP (RepL4NLP-2022) (pp. 269–279). Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.repl4nlp-1.27