Holding social talk (casual conversation) and making sense of conversational content require context-sensitive natural language understanding and reasoning, which current popular open-domain dialog systems and chatbots cannot handle efficiently. Because these systems rely heavily on corpus-based machine learning techniques to encode and decode context-sensitive meanings, they focus on fitting a particular training dataset rather than tracking what is actually happening in the conversation, and therefore easily derail in new contexts. This work sketches out a more linguistically informed architecture for handling social talk in English, in which corpus-based methods form the backbone of the relatively context-insensitive components (e.g., part-of-speech tagging, approximation of lexical meaning, and constituent chunking), while symbolic modeling is used to reason out the context-sensitive components, which have no consistent mapping to linguistic forms. All components are fitted into a Bayesian game-theoretic model to address the interactive and rational aspects of conversation.
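To make the proposed division of labor concrete, the following minimal Python sketch illustrates one possible reading of the architecture: a corpus-based front end (stubbed here with a toy tagger standing in for a trained tagger/chunker), a symbolic layer mapping surface patterns to candidate dialog moves, and a Bayesian update over those candidates. All names, rules, and probabilities are illustrative assumptions, not the paper's implementation, and the full game-theoretic treatment of speaker and hearer as rational agents is beyond this sketch.

```python
# Hypothetical sketch of the hybrid pipeline: corpus-based tagging,
# symbolic mapping to dialog moves, and a Bayesian posterior over moves.
from collections import defaultdict

# --- corpus-based stand-in: a trained POS tagger/chunker would go here ---
TOY_LEXICON = {
    "hi": "INTJ", "hello": "INTJ", "how": "ADV", "are": "VERB",
    "you": "PRON", "nice": "ADJ", "weather": "NOUN", "today": "NOUN",
}

def tag(tokens):
    """Toy part-of-speech tagging; a statistical model would replace this."""
    return [(t, TOY_LEXICON.get(t, "X")) for t in tokens]

# --- symbolic layer: hand-written rules from tag patterns to dialog moves ---
def candidate_moves(tagged):
    """Score candidate dialog moves with simple, hand-written rules."""
    scores = defaultdict(float)
    tags = [t for _, t in tagged]
    if tags and tags[0] == "INTJ":
        scores["greeting"] += 1.0
    if "ADV" in tags and "PRON" in tags:
        scores["wellbeing_question"] += 1.0
    if "NOUN" in tags and "ADJ" in tags:
        scores["small_talk_statement"] += 1.0
    return scores or {"other": 1.0}

# --- Bayesian step: combine a context-dependent prior with the rule scores ---
def posterior(scores, prior):
    """P(move | utterance) is proportional to P(utterance | move) * P(move),
    approximated here by rule scores times a context-dependent prior."""
    joint = {m: scores.get(m, 1e-6) * prior.get(m, 1e-6) for m in prior}
    z = sum(joint.values())
    return {m: v / z for m, v in joint.items()}

if __name__ == "__main__":
    utterance = "hi how are you".split()
    prior = {"greeting": 0.4, "wellbeing_question": 0.3,
             "small_talk_statement": 0.2, "other": 0.1}
    print(posterior(candidate_moves(tag(utterance)), prior))
```

In this toy run, the corpus-based layer supplies relatively context-insensitive annotations, the symbolic rules encode interpretations that depend on conversational context, and the Bayesian step weighs those interpretations against a prior that the dialog state would supply.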
Luu, A. (2022). Sketching a Linguistically-Driven Reasoning Dialog Model for Social Talk. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop (pp. 153–170). Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.acl-srw.14