Semantic Representation Using Sub-Symbolic Knowledge in Commonsense Reasoning †

Citations: 1
Readers: 8 (Mendeley users who have this article in their library)

Abstract

A commonsense question answering (CSQA) system predicts the correct answer based on a comprehensive understanding of the question. Previous research has developed models that take QA pairs, the corresponding evidence, or a knowledge graph as input. Each method executes QA tasks using representations from pre-trained language models. However, whether pre-trained language models fully comprehend questions remains debatable. In this study, adversarial attack experiments were conducted on question understanding. We examined the limitations of the question-reasoning process in pre-trained language models, and then demonstrated the need for models to exploit the logical structure of abstract meaning representations (AMRs). Additionally, the experimental results showed that the method performed best when the AMR graph was extended with ConceptNet. With this extension, our proposed method outperformed the baseline on diverse commonsense-reasoning QA tasks.
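The abstract's core idea, extending an AMR graph with ConceptNet edges, can be illustrated with a minimal sketch. The AMR triples and ConceptNet edges below are hand-written for an invented example question and do not come from the paper, an AMR parser, or the ConceptNet API; only the overall pattern (merging commonsense edges into a semantic graph) reflects the described method.

```python
# Hypothetical sketch: extending an AMR-style triple graph with ConceptNet edges.
# Triples are hand-written for an example question like "Where would you put a book?"

# AMR graph as (source, relation, target) triples (PENMAN-like, simplified)
amr_triples = [
    ("p", "instance", "put-01"),
    ("p", "ARG1", "b"),
    ("b", "instance", "book"),
    ("p", "ARG2", "l"),
    ("l", "instance", "location"),
]

# Assumed ConceptNet edges for concepts appearing in the AMR graph
# (relation names follow ConceptNet's schema; the specific edges are illustrative)
conceptnet_edges = {
    "book": [("AtLocation", "shelf"), ("AtLocation", "library")],
}

def extend_with_conceptnet(triples, kb):
    """Append knowledge-base edges for every concept node found in the AMR triples."""
    extended = list(triples)
    concepts = {tgt for _, rel, tgt in triples if rel == "instance"}
    for concept in concepts:
        for relation, neighbor in kb.get(concept, []):
            extended.append((concept, relation, neighbor))
    return extended

graph = extend_with_conceptnet(amr_triples, conceptnet_edges)
# The extended graph now links "book" to plausible answer concepts such as "shelf".
```

In the paper's setting, such extended graphs would then be encoded alongside the pre-trained language model's representations; this sketch only shows the graph-merging step.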

Citation (APA)

Oh, D., Lim, J., Park, K., & Lim, H. (2022). Semantic Representation Using Sub-Symbolic Knowledge in Commonsense Reasoning †. Applied Sciences (Switzerland), 12(18). https://doi.org/10.3390/app12189202
