Not all claims are created equal: Choosing the right statistical approach to assess hypotheses

14 citations · 107 Mendeley readers

Abstract

Empirical research in Natural Language Processing (NLP) has adopted a narrow set of principles for assessing hypotheses, relying mainly on p-value computation, which suffers from several known issues. While alternative proposals have been well-debated and adopted in other fields, they remain rarely discussed or used within the NLP community. We address this gap by contrasting various hypothesis assessment techniques, especially those not commonly used in the field (such as evaluations based on Bayesian inference). Since these statistical techniques differ in the hypotheses they can support, we argue that practitioners should first decide their target hypothesis before choosing an assessment method. This is crucial because common fallacies, misconceptions, and misinterpretations surrounding hypothesis assessment methods often stem from a discrepancy between what one would like to claim versus what the method used actually assesses. Our survey reveals that these issues are omnipresent in the NLP research community. As a step forward, we provide best practices and guidelines tailored towards NLP research, as well as an easy-to-use package called HyBayes for Bayesian assessment of hypotheses, complementing existing tools.
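To make the contrast concrete, the sketch below illustrates the kind of Bayesian hypothesis assessment the abstract refers to. It is not the HyBayes API; it is a minimal, self-contained example using a standard Beta-Bernoulli conjugate model, with hypothetical accuracy counts for two NLP systems, and it reports a posterior probability that one system outperforms the other rather than a p-value.

```python
import numpy as np

# Illustrative sketch (not the HyBayes API): Bayesian comparison of two
# classifiers evaluated on the same test set, using Beta-Bernoulli
# conjugate posteriors. All counts below are hypothetical.
rng = np.random.default_rng(0)

n = 1000           # test-set size (hypothetical)
correct_a = 830    # system A: number of correct predictions (hypothetical)
correct_b = 810    # system B: number of correct predictions (hypothetical)

# With a uniform Beta(1, 1) prior, the posterior over each system's true
# accuracy is Beta(correct + 1, errors + 1); draw Monte Carlo samples.
post_a = rng.beta(correct_a + 1, n - correct_a + 1, size=100_000)
post_b = rng.beta(correct_b + 1, n - correct_b + 1, size=100_000)

# Posterior probability that A's true accuracy exceeds B's -- a direct
# statement about the hypothesis of interest, which a p-value does not give.
p_a_better = float((post_a > post_b).mean())
print(f"P(acc_A > acc_B | data) = {p_a_better:.3f}")
```

A p-value from a significance test on the same counts would instead quantify how surprising the observed gap is under a null hypothesis of no difference; the two quantities answer different questions, which is the paper's central point about matching the method to the target claim.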

Citation (APA)

Azer, E. S., Khashabi, D., Sabharwal, A., & Roth, D. (2020). Not all claims are created equal: Choosing the right statistical approach to assess hypotheses. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 5715–5725). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.acl-main.506
