A teleoperated robot is a promising means of exploring a disaster environment quickly while avoiding secondary disasters. Previous studies have shown that an image captured from behind a teleoperated robot is useful information for its operator. In this research, a teleoperation method is proposed in which an autonomous robot provides the view from behind. This method allows a single image to include both the teleoperated robot itself and its surrounding environment. The key issue is an algorithm that navigates the autonomous robot to a position from which such a camera image can be obtained, so that the operability of the teleoperated robot in front is enhanced. This paper describes a method for determining the movement position of the autonomous robot in consideration of each robot's position and the obstacles around the robots. A method for switching among three modes (Follow/Back/Wait) according to the situation is also described. Finally, the movement positions of the autonomous robot and the images actually provided by each robot are shown in experiments with real mobile robots to verify the usefulness of the proposed method.
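The abstract only outlines the viewpoint decision and the three-mode switching without giving their details. As a rough, non-authoritative illustration of the kind of logic involved, the Python sketch below uses assumed geometric rules (a standoff distance behind the teleoperated robot, an obstacle clearance radius, and distance thresholds for the mode switch); these names and values are assumptions for illustration, not the authors' algorithm.

```python
import math

# Illustrative sketch only: the abstract does not specify the actual criteria,
# so the thresholds, names, and switching rules below are assumed.

FOLLOW, BACK, WAIT = "Follow", "Back", "Wait"


def select_viewpoint(teleop_pose, obstacles, standoff=1.5, clearance=0.5):
    """Pick a candidate camera position behind the teleoperated robot.

    teleop_pose: (x, y, heading) of the teleoperated robot.
    obstacles:   list of (x, y) obstacle points around the robots.
    standoff:    assumed distance behind the teleoperated robot (hypothetical).
    clearance:   assumed minimum obstacle clearance for the viewpoint.
    """
    x, y, th = teleop_pose
    # Nominal viewpoint directly behind the teleoperated robot.
    vx = x - standoff * math.cos(th)
    vy = y - standoff * math.sin(th)
    # Reject the candidate if any obstacle is too close to it (assumed rule).
    if any(math.hypot(vx - ox, vy - oy) < clearance for ox, oy in obstacles):
        return None
    return (vx, vy)


def select_mode(autonomous_pose, teleop_pose, viewpoint, near=0.3):
    """Switch among Follow/Back/Wait from the relative geometry (assumed rules)."""
    ax, ay, _ = autonomous_pose
    tx, ty, _ = teleop_pose
    gap = math.hypot(ax - tx, ay - ty)
    if viewpoint is None:
        return WAIT    # no safe viewpoint found: hold position
    if gap < near:
        return BACK    # too close to the teleoperated robot: retreat
    return FOLLOW      # otherwise move toward the selected viewpoint
```

Under these assumptions, the autonomous robot would recompute the viewpoint each cycle and switch modes from the result; the paper's actual decision rules should be taken from the full text.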
Maeyama, S., Okuno, T., & Watanabe, K. (2016). View Point Decision Algorithm for an Autonomous Robot to Provide Support Images in the Operability of a Teleoperated Robot. SICE Journal of Control, Measurement, and System Integration, 9(1), 33–41. https://doi.org/10.9746/jcmsi.9.33