Human understanding of narrative texts requires making commonsense inferences beyond what is stated explicitly in the text. A recent model, COMET, can generate such implicit commonsense inferences along several dimensions, such as pre- and post-conditions, motivations, and mental states of the participants. However, COMET was trained on commonsense inferences of short phrases and is therefore discourse-agnostic: when presented with each sentence of a multi-sentence narrative, it might generate inferences that are inconsistent with the rest of the narrative. We present the task of discourse-aware commonsense inference: given a sentence within a narrative, the goal is to generate commonsense inferences along predefined dimensions while maintaining coherence with the rest of the narrative. Because large-scale paragraph-level annotation is costly and hard to obtain, we use available sentence-level annotations to automatically and efficiently construct a distantly supervised corpus. Using this corpus, we train PARA-COMET, a discourse-aware model that incorporates paragraph-level information to generate coherent commonsense inferences from narratives. PARA-COMET captures both semantic knowledge pertaining to prior world knowledge and episodic knowledge involving how current events relate to prior and future events in a narrative. Our results show that PARA-COMET outperforms sentence-level baselines, particularly in generating inferences that are both coherent and novel.
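To make the distant-supervision step concrete, the sketch below illustrates the idea in Python: query a sentence-level model for inferences on each sentence of a narrative, then keep only the inferences that remain coherent with the full story. This is a minimal sketch under stated assumptions, not the paper's implementation: `comet_generate`, `coherence_score`, and the exact dimension list are hypothetical placeholders.

```python
from typing import Dict, List

# Placeholder for a sentence-level COMET model: given one sentence and a
# relation dimension, return candidate commonsense inferences. The real
# model is a trained transformer; this stub is an assumption for illustration.
def comet_generate(sentence: str, dimension: str, k: int = 3) -> List[str]:
    return [f"placeholder {dimension} inference {j} for: {sentence}" for j in range(k)]

# Placeholder coherence scorer: how consistent an inference is with the
# full narrative. The paper filters distant labels for narrative coherence;
# the concrete scoring function used here is an assumption.
def coherence_score(inference: str, narrative: List[str]) -> float:
    return 1.0  # replace with, e.g., an NLI- or LM-based consistency score

# ATOMIC-style inference dimensions (a representative subset).
DIMENSIONS = ["xIntent", "xNeed", "xEffect", "xReact", "xWant"]

def build_distant_corpus(narrative: List[str], threshold: float = 0.5) -> List[Dict]:
    """Label each sentence of a narrative with sentence-level inferences,
    keeping only those that stay coherent with the surrounding story."""
    examples = []
    for i, sentence in enumerate(narrative):
        for dim in DIMENSIONS:
            for inference in comet_generate(sentence, dim):
                if coherence_score(inference, narrative) >= threshold:
                    examples.append({
                        "sentence_index": i,
                        "dimension": dim,
                        "inference": inference,
                    })
    return examples

if __name__ == "__main__":
    story = ["Maya packed her bags.", "She drove to the airport.",
             "Her flight was delayed for hours."]
    print(len(build_distant_corpus(story)), "distantly supervised examples")
```

The key design point, as described in the abstract, is that the labels come cheaply from an existing sentence-level model, while the coherence filter injects the paragraph-level signal that the sentence-level model lacks.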
Gabriel, S., Bhagavatula, C., Shwartz, V., Le Bras, R., Forbes, M., & Choi, Y. (2021). Paragraph-level Commonsense Transformers with Recurrent Memory. In 35th AAAI Conference on Artificial Intelligence, AAAI 2021 (Vol. 14B, pp. 12857–12865). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v35i14.17521