Abstract
In this paper, we present our submission for SemEval-2020 Task 4: Commonsense Validation and Explanation (ComVE). The objective of this task was to develop a system that can differentiate statements that make sense from those that do not. ComVE comprises three subtasks that test a system's grasp of commonsense knowledge along different dimensions. Commonsense reasoning is a challenging problem in natural language understanding, and systems augmented with it can improve performance on related tasks such as reading comprehension and inference. We have developed a system that leverages the commonsense knowledge captured by pretrained language models such as RoBERTa and GPT-2, which are trained on massive corpora. Our proposed system validates the plausibility of a given statement against the commonsense knowledge acquired by these models and generates a logical reason to support its decision. Our system ranked 2nd in subtask C, by far the most challenging subtask since it required systems to generate the rationale behind the choice of the unreasonable statement, with a BLEU score of 19.3. In subtasks A and B, we achieved 96% and 94% accuracy respectively, placing 4th in both subtasks.
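To make the general idea concrete, the following is a minimal sketch of how a pretrained language model can be exploited for validation (subtask A): of two candidate statements, the one the model finds less surprising, i.e. the one with lower perplexity, is taken as the sensible one. This is an illustrative assumption based on the technique named above, not the exact architecture of the ranked system; the gpt2 checkpoint and the perplexity-comparison heuristic are stand-ins.

    # Sketch: perplexity-based commonsense validation with GPT-2.
    # Assumption: the statement a pretrained LM assigns lower perplexity
    # is the one that "makes sense". Illustrative only, not the paper's
    # exact system.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(sentence: str) -> float:
        """Perplexity of the sentence under GPT-2."""
        ids = tokenizer(sentence, return_tensors="pt").input_ids
        with torch.no_grad():
            # Passing labels=ids makes the model return the LM
            # cross-entropy loss over the sentence's tokens.
            loss = model(ids, labels=ids).loss
        return torch.exp(loss).item()

    def pick_sensible(s1: str, s2: str) -> str:
        """Return whichever statement the LM deems more plausible."""
        return s1 if perplexity(s1) < perplexity(s2) else s2

    print(pick_sensible("He put a turkey into the fridge.",
                        "He put an elephant into the fridge."))

In practice, a zero-shot scoring heuristic like this serves as a baseline; fine-tuning a classifier such as RoBERTa on the task's training data is the usual way to reach the accuracy levels reported above.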
Citation
Srivastava, V., Sahoo, S. K., Kim, Y. H., Rohit, R. R., Raj, M., & Jaiswal, A. (2020). Team Solomon at SemEval-2020 Task 4: Be Reasonable: Exploiting large-scale language models for commonsense reasoning. In 14th International Workshops on Semantic Evaluation, SemEval 2020 - co-located 28th International Conference on Computational Linguistics, COLING 2020, Proceedings (pp. 585–593). International Committee for Computational Linguistics. https://doi.org/10.18653/v1/2020.semeval-1.74