Mitigating Topic Bias when Detecting Decisions in Dialogue


Abstract

This work revisits the task of detecting decision-related utterances in multi-party dialogue. We compare a traditional approach with a deep learning approach based on transformer language models, with the latter providing modest improvements. We then analyze topic bias in the models using topic information obtained by manual annotation. We find that, when detecting some types of decisions in our data, the models rely more on topic-specific words that the decisions are about than on words that more generally indicate decision making. We explore this further by removing topic information from the training data, and show that this mitigates the bias to an extent and, surprisingly, sometimes even boosts performance.
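
The abstract describes removing topic information from the training data so the classifier cannot lean on topic-specific vocabulary. The paper's exact procedure is not given here; below is a minimal sketch, assuming the manually annotated topic words are available as a plain word set (the words, placeholder token, and example utterances are hypothetical):

```python
# Minimal sketch of removing topic information from training utterances.
# Assumptions (not from the paper): topic words come from manual annotation
# as a set of lowercase strings, and each occurrence is replaced by a
# generic placeholder before the classifier is trained.
import re

TOPIC_WORDS = {"remote", "control", "battery", "design"}  # hypothetical annotations
PLACEHOLDER = "thing"  # generic substitute token

def mask_topic_words(utterance: str) -> str:
    """Replace annotated topic words with a neutral placeholder."""
    def sub(match: re.Match) -> str:
        word = match.group(0)
        return PLACEHOLDER if word.lower() in TOPIC_WORDS else word
    return re.sub(r"[A-Za-z']+", sub, utterance)

# Toy labeled data: 1 = decision-related utterance, 0 = not.
train_data = [
    ("so we decided to go with the rubber buttons", 1),
    ("the battery lives in the back of the remote", 0),
]

masked = [(mask_topic_words(text), label) for text, label in train_data]
for text, label in masked:
    print(label, text)
# 1 so we decided to go with the rubber buttons
# 0 the thing lives in the back of the thing
```

The masked utterances would then be used for training, the idea being to push the model toward general decision-indicating cues (e.g. "decided") rather than the topic vocabulary of a particular meeting.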

Cite

APA

Karan, M., Khare, P., Healey, P., & Purver, M. (2021). Mitigating Topic Bias when Detecting Decisions in Dialogue. In SIGDIAL 2021 - 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue, Proceedings of the Conference (pp. 542–547). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.sigdial-1.56
