Moral Transparency as a Mitigator of Moral Bias in Conversational User Interfaces

Abstract

From straightforward interactions to full-fledged open-ended dialogues, Conversational User Interfaces (CUIs) are designed to support end-user goals and follow their requests. As CUIs become more capable, investigating how to restrict or limit their ability to carry out user requests becomes increasingly critical. Currently, such intentionally constrained user interactions are accompanied by a generic explanation (e.g., "I'm sorry, but as an AI language model, I cannot say..."). We describe the role of moral bias in such user restrictions as a potential source of conflict between the autonomy of CUI users and the system characterisation generated by CUI designers. Just as the users of CUIs have diverging moral viewpoints, so do CUI designers, which intentionally or unintentionally affects how CUIs communicate. Mitigating user moral biases and making the moral viewpoints of CUI designers apparent is a critical path forward in CUI design. We describe how moral transparency in CUIs can support this goal, as exemplified through intelligent disobedience. Finally, we discuss the risks and rewards of moral transparency in CUIs and outline research opportunities to inform the design of future CUIs.

Cite


Wester, J., Lee, M., & Van Berkel, N. (2023). Moral transparency as a mitigator of moral bias in conversational user interfaces. In Proceedings of the 5th International Conference on Conversational User Interfaces (CUI 2023). Association for Computing Machinery. https://doi.org/10.1145/3571884.3603752
