The design of current natural language-oriented robot architectures enables certain architectural components to circumvent moral reasoning capabilities. One example is the reflexive generation of clarification requests as soon as referential ambiguity is detected in a human utterance. As prior research has shown, this can lead robots to (1) miscommunicate their moral dispositions and (2) weaken human perception or application of moral norms within their current context. We present a solution to these problems: performing moral reasoning on each potential disambiguation of an ambiguous human utterance and responding accordingly, rather than immediately and naively requesting clarification. We implement our solution in the Distributed Integrated Affect Reflection Cognition (DIARC) robot architecture, which, to our knowledge, is the only current robot architecture with both moral reasoning and clarification request generation capabilities. We then evaluate our method with a human subjects experiment, the results of which indicate that our approach successfully ameliorates the two identified concerns.
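The core idea in the abstract can be illustrated with a minimal sketch. This is not the authors' DIARC implementation; every name here (`is_permissible`, `respond`, the example norm) is hypothetical, standing in for the architecture's actual moral reasoner and dialogue components. The point it demonstrates is the control flow: each candidate interpretation is morally vetted first, and a clarification request is only issued when more than one permissible reading remains.

```python
# Illustrative sketch only (not the DIARC implementation): vet every
# candidate interpretation of an ambiguous utterance before deciding
# whether a clarification request is appropriate.

def is_permissible(interpretation: str) -> bool:
    """Stand-in moral reasoner: rejects interpretations that violate a norm.
    A real system would consult an explicit normative knowledge base."""
    forbidden = {"shred the confidential files"}  # hypothetical norm violation
    return interpretation not in forbidden

def respond(candidates: list[str]) -> str:
    """Choose a response given all disambiguations of an ambiguous utterance."""
    permissible = [c for c in candidates if is_permissible(c)]
    if not permissible:
        # Every reading is impermissible: reject outright rather than
        # implicitly condoning the request by asking which one was meant.
        return "I won't do that: every reading of your request violates a norm."
    if len(permissible) == 1:
        # Exactly one acceptable reading: act on (or confirm) it directly.
        return f"I'll interpret that as: {permissible[0]}"
    # Multiple permissible readings remain, so asking for clarification
    # no longer risks miscommunicating the robot's moral dispositions.
    return "Which did you mean: " + " or ".join(permissible) + "?"
```

A naive architecture would jump straight to the final branch whenever ambiguity is detected; the sketch shows how inserting the moral check upstream prevents the robot from appearing willing to carry out an impermissible reading.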
Jackson, R. B., & Williams, T. (2022). Enabling Morally Sensitive Robotic Clarification Requests. ACM Transactions on Human-Robot Interaction, 11(2). https://doi.org/10.1145/3503795