John-Arthur at SemEval-2023 Task 4: Fine-Tuning Large Language Models for Arguments Classification


Abstract

This paper presents the system submissions of the John-Arthur team to SemEval-2023 Task 4, “ValueEval: Identification of Human Values behind Arguments”. The team’s best system ranked 3rd, and the team ranked 2nd overall (the first-placed team fielded the two best systems). The John-Arthur team models the ValueEval problem as a multi-class, multi-label text classification problem. The solutions leverage recently proposed large language models that are fine-tuned on the provided datasets. To boost performance, we apply several best practices, whose impact on model performance we evaluate here. The code is publicly available on GitHub and the model on the Hugging Face hub.
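The multi-class, multi-label framing means each argument can express several human values at once, so labels are scored independently rather than competing in a softmax. A minimal sketch of the prediction step, assuming the fine-tuned model emits one logit per value category (the function name and threshold are illustrative, not the team's actual code):

```python
import numpy as np

def multilabel_predict(logits, threshold=0.5):
    """Sigmoid each logit independently and threshold it.

    Labels are not mutually exclusive: an argument may express several
    human values, so each label gets its own yes/no decision instead of
    a single softmax argmax.
    """
    probs = 1.0 / (1.0 + np.exp(-np.asarray(logits, dtype=float)))
    return (probs >= threshold).astype(int)

# e.g. logits for three of the value categories of one argument
print(multilabel_predict([2.1, -1.3, 0.4]))  # → [1 0 1]
```

With this framing, training typically uses a per-label binary cross-entropy loss rather than categorical cross-entropy, matching the independent sigmoid decisions at inference time.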

Citation (APA)

Balikas, G. (2023). John-Arthur at SemEval-2023 Task 4: Fine-Tuning Large Language Models for Arguments Classification. In 17th International Workshop on Semantic Evaluation, SemEval 2023 - Proceedings of the Workshop (pp. 1428–1432). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.semeval-1.197
