Tool-body assimilation model considering grasping motion through deep learning

We propose a tool-body assimilation model that considers grasping during motor babbling for tool use. A robot with tool-use skills can be valuable in human–robot symbiosis because such skills expand the range of tasks the robot can perform. Previous tool-body assimilation studies focused mainly on acquiring the functions of tools, and the robot began its motions with a tool pre-attached to its body. As a result, the robot could not decide whether, or where, to grasp the tool. In real-world environments, a robot needs to consider the possible tool-grasping positions and then grasp the tool itself. To address these issues, the robot performs motor babbling both with and without grasping the tools, learning its body model and the tools' functions. In addition, the robot grasps various parts of each tool to learn the different functions that arise from different grasping positions. These motion experiences are learned using deep learning. In the model evaluation, the robot performs an object manipulation task without a tool and with several tools of different shapes. Given the initial state and a target image, the robot decides whether and where to grasp a tool and generates the corresponding motion. The results show that the robot can make the correct grasping decision and generate the correct motion when provided with only the initial state and a target image.
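As a rough illustration of the generation step described above — mapping an initial state and a target image to a grasp decision and a motor command — the following is a minimal sketch. All names, feature sizes, and the feedforward architecture here are illustrative assumptions, not the paper's actual model (which learns from motor-babbling experiences via deep learning):

```python
import numpy as np

# Hypothetical sizes: image-feature dimension (e.g., from a learned
# encoder) and number of joint commands. These are assumptions.
FEAT = 16
HIDDEN = 32
MOTOR = 7

rng = np.random.default_rng(0)
W1 = rng.normal(0, 0.1, (2 * FEAT, HIDDEN))
W2 = rng.normal(0, 0.1, (HIDDEN, MOTOR + 1))  # extra output: grasp logit

def plan(initial_feat, target_feat):
    """Map (initial state, target image) features to
    (grasp probability, motor command) with one hidden layer."""
    x = np.concatenate([initial_feat, target_feat])
    h = np.tanh(x @ W1)
    out = h @ W2
    grasp_prob = 1.0 / (1.0 + np.exp(-out[0]))  # sigmoid over grasp logit
    return grasp_prob, out[1:]

grasp_prob, command = plan(rng.normal(size=FEAT), rng.normal(size=FEAT))
```

In this sketch the grasp decision is a single probability thresholded by the caller; the paper's model instead learns whether and where to grasp from the robot's own babbling experiences.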

Takahashi, K., Kim, K., Ogata, T., & Sugano, S. (2017). Tool-body assimilation model considering grasping motion through deep learning. Robotics and Autonomous Systems, 91, 115–127.
