Scalable and Safe Remediation of Defective Actions in Self-Learning Conversational Systems

Abstract

Off-policy reinforcement learning has been a driving force behind state-of-the-art conversational AI, leading to more natural human-agent interactions and improved user satisfaction for goal-oriented agents. However, in large-scale commercial settings, it is often challenging to balance policy improvements against experience continuity across the broad spectrum of applications handled by such a system. In the literature, off-policy evaluation and guard-railing on aggregate statistics have commonly been used to address this problem. In this paper, we propose a method for curating and leveraging high-precision samples sourced from historical regression incident reports to validate, safeguard, and improve policies prior to online deployment. We conducted extensive experiments using data from a real-world conversational system and actual regression incidents. The proposed method is currently deployed in our production system to protect customers against broken experiences and to enable long-term policy improvements.
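
The pre-deployment safeguarding described in the abstract can be pictured as a simple gate: a candidate policy is promoted only if it avoids the known-defective actions recorded in curated regression samples. The sketch below is a minimal illustration of that idea, not the authors' implementation; the sample fields, policy signature, and violation threshold are assumptions made for the example.

    # Minimal sketch (illustrative assumption, not the paper's implementation):
    # gate a candidate policy on curated regression samples before deployment.
    from dataclasses import dataclass
    from typing import Callable, Iterable

    @dataclass
    class RegressionSample:
        """A high-precision sample curated from a historical regression incident."""
        context: dict          # features describing the user/request state
        defective_action: str  # action known to have caused a broken experience
        preferred_action: str  # action that remediated the incident

    def safe_to_deploy(policy: Callable[[dict], str],
                       samples: Iterable[RegressionSample],
                       max_violation_rate: float = 0.0) -> bool:
        """Return True only if the candidate policy avoids the known-defective
        action on (almost) all curated regression samples."""
        samples = list(samples)
        if not samples:
            return True
        violations = sum(policy(s.context) == s.defective_action for s in samples)
        return violations <= max_violation_rate * len(samples)

In this reading, the curated samples act as high-precision unit tests for the policy: failing the gate blocks the online rollout, while passing it allows the usual off-policy evaluation on aggregate statistics to proceed.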

Citation (APA)

Ahuja, S., Kachuee, M., Sheikholeslami, F., Liu, W., & Do, J. (2023). Scalable and Safe Remediation of Defective Actions in Self-Learning Conversational Systems. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 5, pp. 361–367). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-industry.35
