CSUI at SemEval-2020 Task 4: Commonsense Validation and Explanation by Exploiting Contradiction

Abstract

This paper describes our submissions to the ComVE challenge, SemEval-2020 Task 4. The task consists of three subtasks that test commonsense comprehension: identifying sentences that do not make sense and explaining why they do not. In subtask A, we use RoBERTa to identify which of two sentences does not make sense. In subtask B, besides using BERT, we also experiment with training on MNLI instead of the task dataset when selecting, from the provided options, the best explanation of why the given sentence does not make sense. In subtask C, we use the MNLI model from subtask B to evaluate explanations generated by RoBERTa and GPT-2, exploiting the contradiction between a sentence and its explanation. Our submitted systems score 88.2% on subtask A, 80.5% on subtask B, and BLEU 5.5 on subtask C.
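The abstract's subtask C idea, that a good explanation should contradict the nonsensical sentence, can be sketched with an off-the-shelf MNLI checkpoint. The snippet below is a minimal illustration, not the authors' implementation: the Hugging Face `roberta-large-mnli` checkpoint, the example sentence, and the candidate explanations are all assumptions made for the demo; the paper's own MNLI model may differ.

```python
# Hedged sketch: score how strongly an explanation contradicts the
# nonsensical sentence, using a publicly available MNLI-finetuned model.
# The checkpoint choice (roberta-large-mnli) is an assumption for illustration.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")
model.eval()

def contradiction_score(sentence: str, explanation: str) -> float:
    """Return the MNLI contradiction probability for the (sentence, explanation) pair."""
    inputs = tokenizer(sentence, explanation, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1).squeeze(0)
    # roberta-large-mnli label order: 0 = contradiction, 1 = neutral, 2 = entailment
    return probs[0].item()

# Hypothetical example: rank candidate explanations by how strongly each
# contradicts the nonsensical sentence, and pick the strongest.
sentence = "He put an elephant into the fridge."
candidates = [
    "An elephant is much bigger than a fridge.",
    "Elephants are grey.",
    "Fridges keep food cold.",
]
best = max(candidates, key=lambda e: contradiction_score(sentence, e))
print(best)
```

Under this framing, a generated explanation that truly captures why the sentence is nonsensical should receive a high contradiction probability, which is the intuition the paper exploits when reusing the subtask B MNLI model as an evaluator in subtask C.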

Citation (APA)

Doxolodeo, K., & Mahendra, R. (2020). CSUI at SemEval-2020 Task 4: Commonsense Validation and Explanation by Exploiting Contradiction. In Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval 2020), co-located with the 28th International Conference on Computational Linguistics (COLING 2020) (pp. 614–619). International Committee for Computational Linguistics. https://doi.org/10.18653/v1/2020.semeval-1.78
