Sketching a Linguistically-Driven Reasoning Dialog Model for Social Talk

Abstract

Holding social talk (or casual conversation) and making sense of conversational content requires context-sensitive natural language understanding and reasoning, which current popular open-domain dialog systems and chatbots cannot handle well. Relying heavily on corpus-based machine learning techniques to encode and decode context-sensitive meanings, these systems focus on fitting a particular training dataset rather than tracking what is actually happening in a conversation, and therefore easily derail in a new context. This work sketches out a more linguistically informed architecture for handling social talk in English, in which corpus-based methods form the backbone of the relatively context-insensitive components (e.g., part-of-speech tagging, approximation of lexical meaning, and constituent chunking), while symbolic modeling is used to reason out the context-sensitive components, which have no consistent mapping to linguistic forms. All components are fitted into a Bayesian game-theoretic model to address the interactive and rational aspects of conversation.
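
To make the division of labor concrete, below is a minimal, hypothetical Python sketch of such a hybrid pipeline: an off-the-shelf corpus-based toolkit (NLTK is used here purely for illustration) handles the relatively context-insensitive layers (tokenization, part-of-speech tagging, shallow constituent chunking), while a small symbolic Bayesian update reasons about context-sensitive speaker intent. The intent inventory, priors, and likelihoods are illustrative assumptions, not the paper's actual model.

import nltk

# Corpus-based resources for the context-insensitive layers
# (first run assumes network access to fetch the NLTK models).
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

def analyze_utterance(text):
    """Corpus-based backbone: tokenize, POS-tag, and chunk noun phrases."""
    tokens = nltk.word_tokenize(text)
    tagged = nltk.pos_tag(tokens)
    # Shallow constituent chunking with a simple noun-phrase grammar.
    chunker = nltk.RegexpParser("NP: {<DT>?<JJ>*<NN.*>+}")
    return tagged, chunker.parse(tagged)

def update_intent_beliefs(prior, likelihood, observed_act):
    """Symbolic/Bayesian layer: P(intent | act) is proportional to
    P(act | intent) * P(intent), normalized over the intent inventory."""
    unnorm = {i: prior[i] * likelihood[i].get(observed_act, 1e-6) for i in prior}
    z = sum(unnorm.values())
    return {i: p / z for i, p in unnorm.items()}

# Illustrative (assumed) intent inventory and dialog-act likelihoods.
prior = {"greet": 0.4, "request_info": 0.3, "small_talk": 0.3}
likelihood = {
    "greet":        {"greeting": 0.8, "question": 0.1},
    "request_info": {"greeting": 0.1, "question": 0.7},
    "small_talk":   {"greeting": 0.3, "question": 0.2},
}

tagged, chunks = analyze_utterance("Nice weather today, isn't it?")
posterior = update_intent_beliefs(prior, likelihood, observed_act="question")
print(tagged)
print(posterior)

The point of the sketch is only the split itself: the tagging and chunking calls could be swapped for any corpus-trained components, while the belief update stands in for the symbolic, game-theoretic reasoning over what the interlocutor is doing in context.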

Cite (APA)

Luu, A. (2022). Sketching a Linguistically-Driven Reasoning Dialog Model for Social Talk. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 153–170). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.acl-srw.14
