Cross-Domain Argument Quality Estimation

Abstract

Argumentation is one of society's foundational pillars, and, sparked by advances in NLP and the vast availability of text data, automated mining of arguments receives increasing attention. A decisive property of arguments is their strength or quality. While there are works on the automated estimation of argument strength, their scope is narrow: they focus on isolated datasets and neglect the interactions with related argument-mining tasks, such as argument identification and evidence detection. In this work, we close this gap by approaching argument quality estimation from multiple angles: grounded in thorough empirical evaluations, we assess the generalization capabilities of argument quality estimation across diverse domains and its interplay with related argument-mining tasks. We find that generalization depends on a sufficient representation of the different domains in the training data. In zero-shot transfer and multi-task experiments, we show that argument quality estimation is among the more challenging tasks but can improve others. We publish our code at https://github.com/fromm-m/acl-cross-domain-aq.
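
The central experimental protocol described in the abstract is cross-domain evaluation: train a quality estimator on arguments from several topical domains and test it zero-shot on a held-out domain. The sketch below illustrates that protocol only and is not the authors' code; the encoder choice (bert-base-uncased), the toy data, the single-output regression head trained with MSE, and all hyperparameters are assumptions made for illustration. The official implementation is in the linked repository.

```python
"""Illustrative sketch of leave-one-domain-out (zero-shot) evaluation for
argument quality regression.  Model, data format, and hyperparameters are
assumptions, not the paper's setup."""

import torch
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Toy data: (argument text, quality score in [0, 1], source domain).
DATA = [
    ("School uniforms reduce bullying by removing visible status markers.", 0.8, "education"),
    ("Uniforms are just clothes.", 0.2, "education"),
    ("A carbon tax internalises the social cost of emissions.", 0.9, "climate"),
    ("The climate has always changed.", 0.3, "climate"),
    ("Vaccination prevents outbreaks through herd immunity.", 0.9, "health"),
    ("I do not like needles.", 0.1, "health"),
]

MODEL_NAME = "bert-base-uncased"  # assumption; any encoder with a regression head works


def encode(batch, tokenizer):
    """Tokenize a list of (text, score) pairs and attach float labels."""
    texts, scores = zip(*batch)
    enc = tokenizer(list(texts), padding=True, truncation=True, return_tensors="pt")
    enc["labels"] = torch.tensor(scores, dtype=torch.float)
    return enc


def run_fold(held_out_domain):
    """Train on all domains except `held_out_domain`, evaluate zero-shot on it."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    # num_labels=1 yields a single regression output trained with MSE loss.
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=1)
    train = [(t, s) for t, s, d in DATA if d != held_out_domain]
    test = [(t, s) for t, s, d in DATA if d == held_out_domain]

    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
    model.train()
    for _ in range(2):  # a couple of epochs suffices for the toy example
        for batch in DataLoader(train, batch_size=2, shuffle=True, collate_fn=list):
            inputs = encode(batch, tokenizer)
            loss = model(**inputs).loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()

    model.eval()
    with torch.no_grad():
        inputs = encode(test, tokenizer)
        preds = model(**inputs).logits.squeeze(-1)
    mse = torch.mean((preds - inputs["labels"]) ** 2).item()
    print(f"held-out domain={held_out_domain}  zero-shot MSE={mse:.3f}")


if __name__ == "__main__":
    for domain in {d for _, _, d in DATA}:
        run_fold(domain)
```

Looping run_fold over every domain gives the leave-one-domain-out picture the abstract refers to; swapping the toy list for a real argument-quality corpus and reporting a correlation metric instead of MSE would bring the sketch closer to standard practice.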

Citation (APA)

Fromm, M., Berrendorf, M., Faerman, E., & Seidl, T. (2023). Cross-Domain Argument Quality Estimation. In Findings of the Association for Computational Linguistics: ACL 2023 (pp. 13435–13448). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.findings-acl.848
