We utilize multi-task learning to improve argument mining in persuasive online discussions, where both micro-level and macro-level argumentation must be taken into account. Our models learn to identify argument components and the relations between them simultaneously. We also tackle the low precision that arises from imbalanced relation data by experimenting with SMOTE and XGBoost. Our approaches improve over baselines that use the same pre-trained language model but treat the argument component task and the two relation tasks separately. Furthermore, our results suggest that the set of tasks incorporated into multi-task learning should be chosen carefully, as using all relevant tasks does not always yield the best performance.
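To make the multi-task setup concrete, below is a minimal sketch of a shared pre-trained encoder with task-specific heads for component and relation classification. This is an illustration of the general technique, not the paper's exact architecture; the model name, label counts, and use of the [CLS] representation are assumptions.

```python
# Minimal multi-task sketch: one shared pre-trained encoder, separate
# classification heads for argument components and argument relations.
# Hyperparameters and task framing here are hypothetical.
import torch.nn as nn
from transformers import AutoModel

class MultiTaskArgMiner(nn.Module):
    def __init__(self, model_name="bert-base-uncased",
                 num_component_labels=3, num_relation_labels=2):
        super().__init__()
        # Shared encoder: gradients from both tasks update the same weights.
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        # Task-specific heads on top of the shared representation.
        self.component_head = nn.Linear(hidden, num_component_labels)
        self.relation_head = nn.Linear(hidden, num_relation_labels)

    def forward(self, input_ids, attention_mask, task):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # [CLS] token representation
        if task == "component":
            return self.component_head(cls)
        return self.relation_head(cls)
```

Training would alternate or mix batches from the component and relation tasks, so the shared encoder learns from micro-level and macro-level supervision jointly.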
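Likewise, a hedged sketch of the imbalance-handling step: SMOTE oversamples the minority relation class in feature space before an XGBoost classifier is fit. The feature matrix and label distribution below are synthetic stand-ins; the paper's actual features and data are not reproduced here.

```python
# Illustrative handling of imbalanced relation data with SMOTE + XGBoost.
import numpy as np
from imblearn.over_sampling import SMOTE
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
# Stand-ins for encoder-derived features of candidate relation pairs.
X = rng.normal(size=(1000, 768))
y = np.array([0] * 950 + [1] * 50)  # heavily imbalanced relation labels

# Synthesize minority-class examples to rebalance the training set.
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)

clf = XGBClassifier(n_estimators=100, eval_metric="logloss")
clf.fit(X_res, y_res)
```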
CITATION STYLE
Tran, N., & Litman, D. (2021). Multi-task Learning in Argument Mining for Persuasive Online Discussions. In 8th Workshop on Argument Mining, ArgMining 2021 - Proceedings (pp. 148–153). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.argmining-1.15