The altruistic robot: Do what I want, not just what I say


Abstract

As autonomous robots expand their application beyond research labs and production lines, they must work in more flexible and less well-defined environments. To escape the requirement for exhaustive instruction and a stipulated preference ordering, a robot's operation must involve choices between alternative actions, guided by goals. We describe a robot that learns these goals from humans by treating the timeliness and context of instructions and rewards as evidence of the contours and gradients of an unknown human utility function, which in turn underlies a rational preference relation in the sense of choice theory. We examine how the timing of requests, and the contexts in which they arise, can lead to actions that pre-empt requests, using methods we term contemporaneous entropy learning and context-sensitive learning. We present experiments on these two methods to demonstrate their usefulness in guiding a robot's actions.
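As a rough illustration of the idea in the abstract, the sketch below treats timed, contextual instructions as evidence that reweights a belief over candidate goals, and has the robot choose actions by expected utility. This is a generic Bayesian reading of the abstract, not the authors' contemporaneous entropy learning or context-sensitive learning algorithms; the class, the likelihood function, and the update rule are all illustrative assumptions.

import math

# Hypothetical sketch: instructions arriving in a given context act as
# evidence about an unknown human utility function, here proxied by a
# belief over a discrete set of candidate goals. Not the paper's algorithm.
class GoalLearner:
    def __init__(self, goals):
        # Uniform prior over candidate goals.
        self.belief = {g: 1.0 / len(goals) for g in goals}

    def entropy(self):
        # Shannon entropy of the belief; high entropy means the robot is
        # still uncertain about what the human wants.
        return -sum(p * math.log(p) for p in self.belief.values() if p > 0)

    def observe(self, instruction, context, likelihood):
        # likelihood(goal, instruction, context) -> P(instruction | goal, context).
        # An instruction issued in a particular context reweights the belief.
        for g in self.belief:
            self.belief[g] *= likelihood(g, instruction, context)
        z = sum(self.belief.values())
        if z > 0:
            self.belief = {g: p / z for g, p in self.belief.items()}

    def act(self, actions, utility):
        # Pick the action with highest expected utility under the belief,
        # which lets the robot pre-empt requests it now considers likely.
        return max(actions, key=lambda a: sum(p * utility(g, a)
                                              for g, p in self.belief.items()))

# Example with invented goals and actions: a request made in a "kitchen"
# context shifts belief toward fetching coffee, so the robot brews pre-emptively.
learner = GoalLearner(["fetch_coffee", "tidy_desk"])
learner.observe("coffee please", "kitchen",
                lambda g, i, c: 0.9 if g == "fetch_coffee" else 0.2)
action = learner.act(["brew", "wait"],
                     lambda g, a: 1.0 if g == "fetch_coffee" and a == "brew" else 0.1)

One plausible reading of how timing enters the problem is that high posterior entropy signals the robot to defer action and wait for further instruction, while low entropy licenses pre-emptive action; the entropy() method above is included with that use in mind.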

Citation (APA)

Billingsley, R., Billingsley, J., Gärdenfors, P., Peppas, P., Prade, H., Skillicorn, D., & Williams, M. A. (2017). The altruistic robot: Do what I want, not just what I say. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10564 LNAI, pp. 149–162). Springer Verlag. https://doi.org/10.1007/978-3-319-67582-4_11
