Unsupervised deep structured semantic models for commonsense reasoning

Citations: 11 · Mendeley readers: 129

Abstract

Commonsense reasoning is fundamental to natural language understanding. While traditional methods rely heavily on human-crafted features and knowledge bases, we explore learning commonsense knowledge from a large amount of raw text via unsupervised learning. We propose two neural network models based on the Deep Structured Semantic Models (DSSM) framework to tackle two classic commonsense reasoning tasks, Winograd Schema challenges (WSC) and Pronoun Disambiguation (PDP). Evaluation shows that the proposed models effectively capture contextual information in the sentence and coreference information between pronouns and nouns, and achieve significant improvement over previous state-of-the-art approaches.
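The matching idea behind a DSSM-style model can be illustrated with a toy two-tower sketch: embed the pronoun's context and each candidate antecedent, then rank candidates by cosine similarity. This is only an illustration of the scoring mechanics, not the authors' trained model — the "encoder" below is deterministic random word vectors standing in for the neural towers, so the resulting ranking carries no real signal.

```python
import math
import random
import zlib

def embed(text, dim=64):
    """Toy sentence encoder: mean of deterministic per-word random
    vectors. In a real DSSM this role is played by a trained tower."""
    vecs = []
    for word in text.lower().split():
        # crc32 gives a stable seed per word across runs.
        rng = random.Random(zlib.crc32(word.encode()))
        vecs.append([rng.gauss(0, 1) for _ in range(dim)])
    return [sum(col) / len(vecs) for col in zip(*vecs)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def rank_antecedents(context, candidates):
    """Score each candidate antecedent against the pronoun's context
    and return (score, candidate) pairs, best first."""
    c = embed(context)
    return sorted(((cosine(c, embed(x)), x) for x in candidates),
                  reverse=True)

# Winograd-style query: "The trophy doesn't fit in the suitcase
# because it is too big." -- which noun does "it" refer to?
ranking = rank_antecedents("does not fit because it is too big",
                           ["the trophy", "the suitcase"])
```

In the paper's setting, the encoders are trained on raw text so that the similarity score reflects contextual and coreference information rather than the arbitrary geometry of random vectors used here.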

References

- WordNet: A Lexical Database for English
- A large annotated corpus for learning natural language inference
- Learning deep structured semantic models for web search using clickthrough data

Cited by

- Contrastive self-supervised learning for commonsense reasoning
- Learning disentangled representation for cross-modal retrieval with deep mutual information estimation
- The defeat of the Winograd Schema Challenge

Citation (APA)

Wang, S., Zhang, S., Shen, Y., Liu, X., Liu, J., Gao, J., & Jiang, J. (2019). Unsupervised deep structured semantic models for commonsense reasoning. In NAACL HLT 2019 - 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies - Proceedings of the Conference (Vol. 1, pp. 882–891). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/n19-1094

Readers' Seniority

PhD / Postgrad / Masters / Doc: 43 (72%)
Researcher: 12 (20%)
Lecturer / Postdoc: 3 (5%)
Professor / Associate Prof.: 2 (3%)

Readers' Discipline

Computer Science: 58 (79%)
Engineering: 7 (10%)
Linguistics: 7 (10%)
Neuroscience: 1 (1%)
