For social robots to succeed in human environments, they must respond in effective yet appropriate ways when humans violate social and moral norms, e.g., when humans give them unethical commands. Humans expect robots to be competent and proportional in their norm violation responses, and there is a wide range of strategies robots could use to tune the politeness of their utterances to achieve effective yet appropriate responses. Yet it is not obvious whether all such strategies are suitable for robots to use. In this work, we assess a robot's use of human-like Face-Theoretic linguistic politeness strategies. Our results show that while people expect robots to modulate the politeness of their responses, they do not expect them to strictly mimic human linguistic behaviors. Specifically, linguistic politeness strategies that use direct, formal language are perceived as more effective and more appropriate than strategies that use indirect, informal language.
CITATION
Mott, T., Fanganello, A., & Williams, T. (2024). What a Thing to Say! Which Linguistic Politeness Strategies Should Robots Use in Noncompliance Interactions? In ACM/IEEE International Conference on Human-Robot Interaction (pp. 501–510). IEEE Computer Society. https://doi.org/10.1145/3610977.3634943