The Case for Scalable, Data-Driven Theory: A Paradigm for Scientific Progress in NLP


Abstract

I propose a paradigm for scientific progress in NLP centered around developing scalable, data-driven theories of linguistic structure. The idea is to collect data in tightly scoped, carefully defined ways which allow for exhaustive annotation of behavioral phenomena of interest, and then use machine learning to construct explanatory theories of these phenomena which can form building blocks for intelligible AI systems. After laying some conceptual groundwork, I describe several investigations into data-driven theories of shallow semantic structure using Question-Answer driven Semantic Role Labeling (QA-SRL), a schema for annotating verbal predicate–argument relations using highly constrained question-answer pairs. While this only scratches the surface of the complex language behaviors of interest in AI, I outline principles for data collection and theoretical modeling which can inform future scientific progress. This note summarizes and draws heavily on my PhD thesis (Michael, 2023).

Citation (APA)
Michael, J. (2023). The Case for Scalable, Data-Driven Theory: A Paradigm for Scientific Progress in NLP. In BigPicture 2023 - Big Picture Workshop, Proceedings (pp. 40–52). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.bigpicture-1.4
