Multi-task Learning in Argument Mining for Persuasive Online Discussions


Abstract

We utilize multi-task learning to improve argument mining in persuasive online discussions, in which both micro-level and macro-level argumentation must be taken into consideration. Our models learn to identify argument components and the relations between them simultaneously. We also tackle the low precision that arises from imbalanced relation data by experimenting with SMOTE and XGBoost. Our approaches improve over baselines that use the same pre-trained language model but process the argument component task and two relation tasks separately. Furthermore, our results suggest that the choice of tasks to incorporate into multi-task learning matters, as using all relevant tasks does not always yield the best performance.
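The abstract mentions SMOTE as one remedy for imbalanced relation data. As a rough illustration of the technique (not the authors' pipeline; the function name, parameters, and NumPy implementation below are our own minimal sketch), SMOTE creates synthetic minority-class samples by interpolating between a minority sample and one of its nearest minority-class neighbors:

```python
import numpy as np

def smote_oversample(X_min, n_new, k=5, rng=None):
    """Generate n_new synthetic minority samples by SMOTE-style
    interpolation between each chosen sample and one of its k
    nearest minority-class neighbors."""
    rng = np.random.default_rng(rng)
    n = len(X_min)
    # pairwise Euclidean distances within the minority class
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # a point is not its own neighbor
    # indices of the k nearest neighbors of each minority sample
    nn = np.argsort(d, axis=1)[:, :k]
    # pick a base sample and one of its neighbors for each new point
    base = rng.integers(0, n, size=n_new)
    neigh = nn[base, rng.integers(0, min(k, n - 1), size=n_new)]
    # interpolate at a random position along the line segment
    gap = rng.random((n_new, 1))
    return X_min[base] + gap * (X_min[neigh] - X_min[base])
```

Because each synthetic point is a convex combination of two existing minority samples, the oversampled set stays inside the minority class's convex hull, which is the property that distinguishes SMOTE from simple duplication.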

References

- XGBoost: A scalable tree boosting system
- SMOTE: Synthetic minority over-sampling technique
- An experimental comparison of classification algorithms for imbalanced credit scoring data sets


Citation (APA)

Tran, N., & Litman, D. (2021). Multi-task Learning in Argument Mining for Persuasive Online Discussions. In 8th Workshop on Argument Mining, ArgMining 2021 - Proceedings (pp. 148–153). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.argmining-1.15

Readers' Seniority

- PhD / Post grad / Masters / Doc: 11 (69%)
- Researcher: 3 (19%)
- Professor / Associate Prof.: 1 (6%)
- Lecturer / Post doc: 1 (6%)

Readers' Discipline

- Computer Science: 12 (67%)
- Linguistics: 4 (22%)
- Neuroscience: 1 (6%)
- Social Sciences: 1 (6%)
