TR at SemEval-2020 Task 4: Exploring the Limits of Language-model-based Common Sense Validation


Abstract

In this paper, we present our submission for subtask A of the Common Sense Validation and Explanation (ComVE) shared task. We examine the ability of large-scale pre-trained language models to distinguish commonsense from non-commonsense statements. We also explore the utility of external resources that aim to supplement the world knowledge inherent in such language models, including commonsense knowledge graph embedding models, word concreteness ratings, and text-to-image generation models. We find that such resources provide insignificant gains in the performance of fine-tuned language models. We also provide a qualitative analysis of the limitations of the language model fine-tuned on this task.
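The sketch below illustrates the core idea described in the abstract: a pre-trained language model is fine-tuned to judge which of two statements is against common sense (ComVE subtask A). It is a minimal illustration, not the authors' released code; the model name, hyperparameters, and example sentence pairs are assumptions made for the example.

```python
# Minimal sketch (not the authors' code): fine-tune a pre-trained LM to pick
# the against-common-sense statement from a pair. Model name, learning rate,
# and example sentences are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "roberta-base"  # assumption: any large pre-trained LM could be used
tokenizer = AutoTokenizer.from_pretrained(model_name)
# One regression-style logit per sentence, interpreted as a plausibility score.
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)


def pair_scores(sent_a: str, sent_b: str) -> torch.Tensor:
    """Plausibility scores for a statement pair, shape (2,)."""
    enc = tokenizer([sent_a, sent_b], padding=True, return_tensors="pt")
    return model(**enc).logits.squeeze(-1)


# One illustrative fine-tuning step: the label is the index of the
# non-commonsense statement, trained with cross-entropy over the negated scores.
scores = pair_scores("He put an elephant into the fridge.",
                     "He put a turkey into the fridge.")
loss = torch.nn.functional.cross_entropy((-scores).unsqueeze(0), torch.tensor([0]))
loss.backward()
optimizer.step()

# At inference time, the statement with the lower plausibility score is
# predicted to be the one that does not make sense.
with torch.no_grad():
    prediction = int(torch.argmin(pair_scores(
        "He drinks apple.", "He drinks milk.")).item())
```

Scoring each statement independently and comparing the scores is one simple formulation of the task; encoding the pair jointly in a multiple-choice head is a common alternative.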

Citation (APA)

Teo, D. (2020). TR at SemEval-2020 Task 4: Exploring the Limits of Language-model-based Common Sense Validation. In Proceedings of the Fourteenth Workshop on Semantic Evaluation (SemEval 2020), co-located with the 28th International Conference on Computational Linguistics (COLING 2020) (pp. 601–608). International Committee for Computational Linguistics. https://doi.org/10.18653/v1/2020.semeval-1.76
