Data convergence in syntactic theory and the role of sentence pairs


Abstract

Most acceptability judgments reported in the syntactic literature are obtained by linguists acting as their own informants. For well-represented languages like English, this method of data collection is best described as a process of community agreement, given that linguists typically discuss their judgments with colleagues. However, the process itself is comparatively opaque, and the reliability of its output has been questioned. Recent studies addressing this criticism have shown that judgments reported in the literature for English can be replicated in quantitative experiments to a near-perfect degree. However, the focus of those studies has been on testing sentence pairs. We argue that replicating only contrasts is not sufficient, because theory building necessarily includes comparisons across pairs and across papers. Thus, we test items at large, i.e. independently of their counterparts. We created a corpus of grammaticality judgments on sequences of American English reported in articles published in Linguistic Inquiry and then collected experimental ratings for a random subset of them. Overall, expert ratings and experimental ratings converge to a good degree, but there are numerous instances in which ratings do not converge. Based on this, we argue that for theory-critical data, the process of community agreement should be accompanied by quantitative methods whenever possible.

Citation (APA)

Juzek, T. S., & Häussler, J. (2020). Data convergence in syntactic theory and the role of sentence pairs. Zeitschrift für Sprachwissenschaft, 39(2), 109–147. https://doi.org/10.1515/zfs-2020-2008
