Learning improvised chatbots from adversarial modifications of natural language feedback

Citations: 4
Readers: 97 (Mendeley users who have this article in their library)

Abstract

The ubiquitous nature of chatbots and their interaction with users generate an enormous amount of data. Can we improve chatbots using this data? A self-feeding chatbot improves itself by asking for natural language feedback when a user is dissatisfied with its response and uses this feedback as an additional training sample. However, user feedback in most cases contains extraneous sequences that hinder its usefulness as a training sample. In this work, we propose a generative adversarial model that converts noisy feedback into a plausible natural response in a conversation. The generator’s goal is to convert the feedback into a response that answers the user’s previous utterance and fools the discriminator, which distinguishes feedback from natural responses. We show that augmenting the original training data with these modified feedback responses improves the original chatbot’s performance from 69.94% to 75.96% in ranking correct responses on the PERSONACHAT dataset, a large improvement given that the original model is already trained on 131k samples.
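The adversarial setup described above follows the standard GAN recipe: a generator rewrites noisy feedback into a response, while a discriminator learns to tell generated responses apart from natural ones. Below is a minimal PyTorch sketch of one such training step, for illustration only; the module architectures, the embedding size EMB, and the train_step helper are assumptions made for this sketch, not the authors' implementation.

# Illustrative sketch (not the authors' code): one adversarial training step
# of the kind described in the abstract. Shapes and names are assumptions.
import torch
import torch.nn as nn

EMB = 256  # hypothetical embedding size for encoded utterances

class Generator(nn.Module):
    # Maps an encoded (context, noisy feedback) pair to a response embedding.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * EMB, EMB), nn.Tanh(),
                                 nn.Linear(EMB, EMB))

    def forward(self, context, feedback):
        return self.net(torch.cat([context, feedback], dim=-1))

class Discriminator(nn.Module):
    # Scores whether an embedding looks like a natural response (1) or raw feedback (0).
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(EMB, EMB), nn.ReLU(),
                                 nn.Linear(EMB, 1))

    def forward(self, response):
        return self.net(response)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(context, feedback, natural_response):
    # All inputs are batches of precomputed embeddings, shape (B, EMB).
    # 1) Discriminator: natural responses are "real", generated ones are "fake".
    fake = G(context, feedback).detach()
    d_loss = bce(D(natural_response), torch.ones(len(context), 1)) + \
             bce(D(fake), torch.zeros(len(context), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Generator: rewrite the feedback so the discriminator scores it as natural.
    g_loss = bce(D(G(context, feedback)), torch.ones(len(context), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

In this framing, the generated responses that successfully "fool" the discriminator are the ones added back to the chatbot's training data as extra samples.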

Cite

APA

Sreedhar, M. N., Ni, K., & Reddy, S. (2020). Learning improvised chatbots from adversarial modifications of natural language feedback. In Findings of the Association for Computational Linguistics: EMNLP 2020 (pp. 2445–2453). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.findings-emnlp.221
