Abstract
This tutorial targets researchers and practitioners who are interested in ML technologies for NLP from indirect supervision. In particular, we will present a diverse thread of indirect supervision studies that try to answer the following questions: (i) when and how can we provide supervision for a target task T, if all we have is data that corresponds to a "related" task T′? (ii) humans do not use exhaustive supervision; they rely on occasional feedback and learn from incidental signals from various sources; how can we effectively incorporate such supervision in machine learning? (iii) how can we leverage multi-modal supervision to help NLP? To this end, we will discuss several lines of research that address those challenges, including (i) indirect supervision from T′ that handles T with outputs spanning from a moderate size to an open space, (ii) the use of sparsely occurring and incidental signals, such as partial labels, noisy labels, knowledge-based constraints, and cross-domain or cross-task annotations, all of which have statistical associations with the task, (iii) principled ways to measure and understand why these incidental signals can contribute to our target tasks, and (iv) indirect supervision from vision-language signals. We will conclude the tutorial by outlining directions for further investigation.
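As a concrete illustration of question (i), a common form of indirect supervision casts the target task T as natural language inference (the "related" task T′): each candidate label becomes an entailment hypothesis, so a model trained only on NLI data can classify text it was never directly supervised for. Below is a minimal sketch, assuming the HuggingFace `transformers` library and its zero-shot classification pipeline; the checkpoint, input sentence, and labels are illustrative choices, not taken from the tutorial itself.

```python
# Indirect supervision sketch: an NLI-trained model (task T') performs
# topic classification (task T) zero-shot, by scoring whether the input
# entails a hypothesis built from each candidate label.
from transformers import pipeline

# facebook/bart-large-mnli was trained on NLI only, not on topic labels.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

result = classifier(
    "The team clinched the championship with a last-minute goal.",
    candidate_labels=["sports", "politics", "technology"],
    hypothesis_template="This text is about {}.",  # turns each label into an NLI hypothesis
)
print(result["labels"][0])  # highest-scoring label, e.g. "sports"
```

The design point is that supervision never comes from task T: the statistical association between entailment judgments and label correctness is what transfers, which is the pattern the tutorial's first thread generalizes to larger and open output spaces.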
Citation
Yin, W., Chen, M., Zhou, B., Ning, Q., Chang, K. W., & Roth, D. (2023). Indirectly Supervised Natural Language Processing. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 6, pp. 32–40). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-tutorials.5