Adversarial over-sensitivity and over-stability strategies for dialogue models

64 citations · 145 Mendeley readers

Abstract

We present two categories of model-agnostic adversarial strategies that reveal the weaknesses of several generative, task-oriented dialogue models: Should-Not-Change strategies, which evaluate over-sensitivity to small, semantics-preserving edits, and Should-Change strategies, which test whether a model is over-stable against subtle yet semantics-changing modifications. We next perform adversarial training with each strategy, employing a max-margin approach for negative generative examples. This not only makes the target dialogue model more robust to the adversarial inputs but also helps it perform significantly better on the original inputs. Moreover, training on all strategies combined yields further improvements, achieving new state-of-the-art performance on the original task (also verified via human evaluation). In addition to adversarial training, we address the robustness task at the model level by feeding the model subword units as both inputs and outputs, and show that the resulting model is equally competitive, requires only a quarter of the original vocabulary size, and is robust to one of the adversarial strategies (to which the original model is vulnerable) even without adversarial training.
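To make the max-margin component of the abstract concrete, here is a minimal PyTorch sketch, not the authors' code: it assumes a seq2seq decoder that exposes per-token logits for both the gold response and a negative response produced by a Should-Change edit, and the helper names (`sequence_log_prob`, `max_margin_loss`) and the margin value are hypothetical. The hinge term pushes the gold response's sequence log-likelihood above the adversarial negative's by at least the margin.

```python
import torch
import torch.nn.functional as F


def sequence_log_prob(logits, targets, pad_id=0):
    """Summed per-token log-probability of `targets` under `logits`.

    logits:  (batch, seq_len, vocab_size) decoder scores
    targets: (batch, seq_len) gold or negative token ids
    """
    log_probs = F.log_softmax(logits, dim=-1)
    # Pick out the log-probability assigned to each target token.
    token_lp = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    mask = (targets != pad_id).float()  # ignore padding positions
    return (token_lp * mask).sum(dim=-1)  # shape: (batch,)


def max_margin_loss(pos_logits, pos_targets, neg_logits, neg_targets,
                    margin=1.0):
    """Hinge loss: the gold response must out-score the negative
    (Should-Change) response by at least `margin` in log-likelihood."""
    pos_lp = sequence_log_prob(pos_logits, pos_targets)
    neg_lp = sequence_log_prob(neg_logits, neg_targets)
    return torch.clamp(margin - pos_lp + neg_lp, min=0.0).mean()
```

In this reading, Should-Not-Change examples (paraphrase-like edits) are trained with the ordinary likelihood objective on the original gold response, while Should-Change examples (e.g., inserted negations) supply the negative sequences for the hinge term above; the paper's exact formulation may differ in detail.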

Cite

APA

Niu, T., & Bansal, M. (2018). Adversarial over-sensitivity and over-stability strategies for dialogue models. In Proceedings of the 22nd Conference on Computational Natural Language Learning (CoNLL 2018) (pp. 486–496). Association for Computational Linguistics. https://doi.org/10.18653/v1/k18-1047
