On the Safety of Conversational Models: Taxonomy, Dataset, and Benchmark

Abstract

Dialogue safety problems severely limit the real-world deployment of neural conversational models and have recently attracted great research interest. However, dialogue safety remains under-defined, and corresponding datasets are scarce. We propose a taxonomy for dialogue safety specifically designed to capture unsafe behaviors in human-bot dialogue settings, with a focus on context-sensitive unsafety, which is under-explored in prior work. To spur research in this direction, we compile DIASAFETY, a dataset rich in context-sensitive unsafe examples. Experiments show that existing safety-guarding tools fail severely on our dataset. As a remedy, we train a dialogue safety classifier that provides a strong baseline for context-sensitive dialogue unsafety detection. With this classifier, we perform safety evaluations of popular conversational models and show that existing dialogue systems still exhibit concerning context-sensitive safety problems.
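
The key point of the classifier baseline is that it scores a response *given its context*, since an utterance that is harmless in isolation can be unsafe in context. Below is a minimal sketch of that pairwise setup using the Hugging Face transformers library; the checkpoint name and the label-index convention are placeholders I am assuming for illustration, not the authors' released artifacts.

```python
# Minimal sketch of context-sensitive dialogue safety classification:
# the model scores a (context, response) pair, not the response alone.
# MODEL_NAME is a hypothetical placeholder -- substitute a RoBERTa-style
# checkpoint fine-tuned on DIASAFETY or comparable labeled dialogue data.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "your-org/dialogue-safety-roberta"  # placeholder checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def unsafe_probability(context: str, response: str) -> float:
    # Encode the pair jointly so the model can condition on the context.
    inputs = tokenizer(context, response, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    # Assumes label index 1 means "unsafe" in this hypothetical head.
    return torch.softmax(logits, dim=-1)[0, 1].item()

# Example: the response is benign on its own but, in this context,
# endorses self-harm -- exactly the case a context-free filter misses.
print(unsafe_probability("I want to hurt myself.", "Sounds like a great plan!"))
```

A context-free safety filter sees only the response string, so pairwise encoding like this is what makes the context-sensitive categories in the taxonomy detectable at all.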

Citation (APA)

Sun, H., Xu, G., Deng, J., Cheng, J., Zheng, C., Zhou, H., … Huang, M. (2022). On the Safety of Conversational Models: Taxonomy, Dataset, and Benchmark. In Findings of the Association for Computational Linguistics: ACL 2022 (pp. 3906–3923). Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.findings-acl.308
