Learning Directional Sentence-Pair Embedding for Natural Language Reasoning


Abstract

Equipping models with the ability to reason and draw inferences over text is one of the core missions of natural language understanding. Although deep learning models have shown strong performance on various cross-sentence inference benchmarks, recent work has shown that they leverage spurious statistical cues rather than capturing the deeper implied relations between pairs of sentences. In this paper, we show that state-of-the-art language encoding models are especially bad at modeling directional relations between sentences by proposing a new evaluation task: Cause-and-Effect relation prediction. Backed by our curated Cause-and-Effect Relation dataset (CER), we also demonstrate that a mutual attention mechanism, when added to existing transformer-based models, can guide the model to focus on capturing directional relations between sentences. Experimental results show that the proposed approach improves performance on downstream applications, such as the abductive reasoning task.
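The abstract does not spell out the paper's exact architecture, but the core idea of mutual (cross) attention between a sentence pair can be sketched in plain Python. The function names and the dot-product scoring below are illustrative assumptions, not the authors' implementation; the point is that each direction of attention is computed separately, so swapping the two sentences changes the result, which is exactly the asymmetry a directional (cause-to-effect) relation requires.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def attend(queries, contexts):
    """One direction of mutual attention: each query token takes a
    softmax-weighted mixture of the other sentence's token vectors."""
    out = []
    for q in queries:
        weights = softmax([dot(q, c) for c in contexts])
        mixed = [sum(w * c[i] for w, c in zip(weights, contexts))
                 for i in range(len(q))]
        out.append(mixed)
    return out

def mutual_attention(A, B):
    """Cross-attend sentence A over B and sentence B over A.
    A: list of token vectors for the first sentence (e.g. the cause).
    B: list of token vectors for the second sentence (e.g. the effect).
    Returns (B-aware A representations, A-aware B representations)."""
    return attend(A, B), attend(B, A)
```

In a transformer-based model, the two outputs would typically be pooled and fed to a classifier head; because `attend(A, B)` and `attend(B, A)` are distinct computations, the classifier can distinguish "A causes B" from "B causes A".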

Citation (APA)

Jiang, Y., Xiao, Z., & Chang, K. W. (2020). Learning Directional Sentence-Pair Embedding for Natural Language Reasoning. In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 13825–13826). AAAI press.
