This article presents ASAMI (autonomous sensor and actuator model induction), a novel methodology that enables a robot to autonomously induce models of its actions and sensors. While previous approaches to model learning rely on an independent source of training data, we show how a robot can induce action and sensor models without any well-calibrated feedback. Specifically, the only inputs to the ASAMI learning process are the data the robot naturally has access to: its raw sensations and knowledge of its own action selections. From the perspective of developmental robotics, our robot's goal is to obtain self-consistent internal models, rather than to perform any externally defined tasks. Furthermore, the target function of each model-learning process comes from within the system, namely the most current version of another internal system model. Concretely realizing this model-learning methodology presents a number of challenges, and we introduce a broad class of settings in which solutions to these challenges are presented. ASAMI is fully implemented and tested, and empirical results validate our approach in a robotic testbed domain using a Sony Aibo ERS-7 robot.
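To make the bootstrapping idea concrete, the sketch below shows one way two models can serve as each other's training targets. It is a minimal illustration, not the paper's implementation: the synthetic data, the simple polynomial model class, and the batch alternation scheme are all assumptions introduced here for a self-contained example.

```python
import numpy as np

# Illustrative sketch of bootstrapped action/sensor model learning.
# All names, data shapes, and the choice of polynomial regression are
# assumptions for this example, not the paper's exact formulation.

rng = np.random.default_rng(0)
T, dt = 500, 0.1

def fit_poly(x, y, deg):
    """Least-squares polynomial fit used as a stand-in model class."""
    return np.polynomial.polynomial.polyfit(x, y, deg)

def eval_poly(coeffs, x):
    return np.polynomial.polynomial.polyval(x, coeffs)

# Hypothetical logged experience: commanded actions and raw sensor readings.
# The "true" state is only used to generate synthetic data; the learner
# never sees it.
actions = rng.uniform(-1.0, 1.0, T)
true_state = np.zeros(T)
for t in range(T - 1):
    true_state[t + 1] = true_state[t] + 0.8 * actions[t] * dt
readings = 2.0 * true_state + 0.3

# Rough initial guess for the sensor model (reading -> state estimate).
sensor_coeffs = np.array([0.0, 1.0, 0.0])

for iteration in range(10):
    # 1. Label states with the current sensor model, then fit the action
    #    model: commanded action -> rate of change of the estimated state.
    est_state = eval_poly(sensor_coeffs, readings)
    rates = np.diff(est_state) / dt
    action_coeffs = fit_poly(actions[:-1], rates, deg=1)

    # 2. Integrate the action model's predicted rates into predicted states,
    #    then refit the sensor model against those predictions.
    pred_rates = eval_poly(action_coeffs, actions)
    pred_state = np.concatenate(([0.0], np.cumsum(pred_rates[:-1] * dt)))
    sensor_coeffs = fit_poly(readings, pred_state, deg=2)

print("learned sensor model coefficients:", sensor_coeffs)
print("learned action model coefficients:", action_coeffs)
```

The key point of the sketch is that neither fitting step ever consults the true state: each model is fitted only against the other model's current predictions. The resulting pair settles on a shared internal scale fixed by the initial guess, so the two models agree with each other without ever being calibrated against external ground truth, mirroring the self-consistency goal stated above.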
Stronger, D., & Stone, P. (2006, June 1). Towards autonomous sensor and actuator model induction on a mobile robot. Connection Science. https://doi.org/10.1080/09540090600768690