Aligning Predictive Uncertainty with Clarification Questions in Grounded Dialog

Abstract

Asking for clarification is fundamental to effective collaboration. An interactive artificial agent must know when to ask a human instructor for more information in order to ascertain their goals. Previous work bases the timing of questions on supervised models learned from interactions between humans. Instead of a supervised classification task, we wish to ground the need for questions in the acting agent's predictive uncertainty. In this work, we investigate if ambiguous linguistic instructions can be aligned with uncertainty in neural models. We train an agent using the T5 encoder-decoder architecture to solve the Minecraft Collaborative Building Task and identify uncertainty metrics that achieve better distributional separation between clear and ambiguous instructions. We further show that well-calibrated prediction probabilities benefit the detection of ambiguous instructions. Lastly, we provide a novel empirical analysis on the relationship between uncertainty and dialog history length and highlight an important property that poses a difficulty for detection.
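The abstract does not specify which uncertainty metrics are used, but a minimal sketch of the general idea is shown below: score a candidate action sequence for an instruction under a T5 model and derive two commonly used predictive uncertainty measures (length-normalized log-likelihood and mean token entropy), which could then be thresholded to decide whether to ask a clarification question. The model name and the instruction/action strings are illustrative placeholders, not the authors' setup.

```python
# Hedged sketch: deriving predictive uncertainty from a T5 seq2seq model.
# Not the authors' released code; metric choices and strings are assumptions.
import torch
import torch.nn.functional as F
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
model.eval()

instruction = "place a red block on top of the blue tower"  # dialog context / instruction (placeholder)
action_seq = "put red 2 3 4"                                 # candidate building action sequence (placeholder)

enc = tokenizer(instruction, return_tensors="pt")
dec = tokenizer(action_seq, return_tensors="pt")

with torch.no_grad():
    out = model(input_ids=enc.input_ids,
                attention_mask=enc.attention_mask,
                labels=dec.input_ids)

# Logits are aligned with the target tokens: logits[:, i] predicts labels[:, i].
log_probs = F.log_softmax(out.logits, dim=-1)            # (1, target_len, vocab)
token_logp = log_probs.gather(-1, dec.input_ids.unsqueeze(-1)).squeeze(-1)

# Length-normalized sequence log-likelihood: lower values indicate the model
# is less confident in the action sequence, e.g. under an ambiguous instruction.
norm_logp = token_logp.mean().item()

# Mean per-token entropy of the predictive distribution: higher values mean
# probability mass is spread over many alternative tokens.
entropy = -(log_probs.exp() * log_probs).sum(-1).mean().item()

print(f"normalized log-likelihood: {norm_logp:.3f}")
print(f"mean token entropy:        {entropy:.3f}")
```

In a detection setup along these lines, instructions whose uncertainty exceeds a threshold would trigger a clarification question; calibrating the prediction probabilities (e.g. via temperature scaling) can make such thresholds more reliable, in line with the abstract's finding that well-calibrated probabilities benefit detection.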

Cite

APA

Naszádi, K., Manggala, P., & Monz, C. (2023). Aligning Predictive Uncertainty with Clarification Questions in Grounded Dialog. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 14988–14998). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-emnlp.999
