JBNU at SemEval-2020 Task 4: BERT and UniLM for Commonsense Validation and Explanation


Abstract

This paper presents our contributions to SemEval-2020 Task 4, Commonsense Validation and Explanation (ComVE), and reports experimental results for Subtasks B and C. Our systems rely on pre-trained language models, namely BERT (including its variants) and UniLM, and rank 10th of 27 systems on Subtask B and 7th of 17 systems on Subtask C. We analyze the commonsense ability of existing pre-trained language models by testing them on the SemEval-2020 Task 4 ComVE dataset, specifically on Subtasks B and C, the explanation subtasks with multiple-choice selection and sentence generation, respectively.

Citation (APA)

Lee, J. H., & Na, S. H. (2020). JBNU at SemEval-2020 Task 4: BERT and UniLM for Commonsense Validation and Explanation. In 14th International Workshops on Semantic Evaluation, SemEval 2020 - co-located 28th International Conference on Computational Linguistics, COLING 2020, Proceedings (pp. 527–534). International Committee for Computational Linguistics. https://doi.org/10.18653/v1/2020.semeval-1.65
