Detecting Argumentative Fallacies in the Wild: Problems and Limitations of Large Language Models


Abstract

Previous work on the automatic identification of fallacies in natural language text has typically approached the problem in constrained experimental setups that make it difficult to understand the applicability and usefulness of the proposals in the real world. In this paper, we present the first analysis of the limitations that these data-driven approaches may exhibit in real situations. For that purpose, we first create a validation corpus consisting of natural language argumentation schemes. Second, we provide new empirical results for the emerging task of identifying fallacies in natural language text. Third, we analyse the errors observed outside of the testing data domains using the new validation corpus. Finally, we point out some important limitations observed in our analysis that should be taken into account in future research on this topic, specifically if we want to deploy these systems in the wild.

Citation (APA)

Ruiz-Dolz, R., & Lawrence, J. (2023). Detecting Argumentative Fallacies in the Wild: Problems and Limitations of Large Language Models. In EMNLP 2023 - 10th Workshop on Argument Mining, ArgMining 2023 - Proceedings (pp. 1–10). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.argmining-1.1
