Motion planning of a mobile robot using reinforcement learning

Citations: 2 · Readers (Mendeley): 5

Abstract

In a previous paper, we proposed a solution to the navigation problem of a mobile robot. In our approach, we formulated the following two problems at each time step as discrete optimization problems: 1) estimation of the position and direction of the robot, and 2) action decision. While the results of our simulation showed the effectiveness of our approach, the values of the weights in the objective functions were given by a heuristic method. This paper presents a theoretical method, based on reinforcement learning, for adjusting the weight parameters in the objective function that encodes heuristic knowledge about the action decision. In our reinforcement learning, the expectation of the reward given to a robot's trajectory is defined as the value function to be maximized. The robot's trajectories are generated stochastically because we used a probabilistic policy for determining the robot's actions in order to search for the globally optimal trajectory. However, this decision process is not a Markov decision process because the objective function includes the action taken at the previous time step. Thus Q-learning, a conventional reinforcement learning method, cannot be applied to this problem. In this paper, we applied Williams's episodic REINFORCE approach to the action decision and derived a learning rule for the weight parameters of the objective function. Moreover, we applied the stochastic hill-climbing method to maximizing the value function to reduce computation time. The learning rule was verified by our experiment.
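The abstract does not give the exact form of the objective function or of the probabilistic policy, so the following is only a minimal sketch of the general idea, assuming a Boltzmann (softmax) policy over a weighted sum of heuristic terms and a scalar reward per trajectory. The function names (`objective`, `boltzmann_policy`, `reinforce_update`), the temperature parameter, and the feature representation are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

# Sketch (not the paper's exact formulation): each candidate action is scored
# by a weighted sum of heuristic terms f_k(state, action, previous_action),
# and the action is drawn from a Boltzmann (softmax) distribution over those
# scores.  Episodic REINFORCE then adjusts the weights w_k from the return of
# a whole trajectory, which remains valid even though the dependence on the
# previous action makes the process non-Markovian.

def objective(w, features):
    """Weighted objective E(a) = sum_k w_k * f_k(a) for each candidate action.

    `features` has shape (n_actions, n_weights); each row holds the heuristic
    terms f_k evaluated for one candidate action (they may depend on the
    estimated pose and on the previous action).
    """
    return features @ w


def boltzmann_policy(w, features, temperature=1.0, rng=np.random):
    """Sample an action index with probability proportional to exp(E(a)/T)."""
    scores = objective(w, features) / temperature
    scores -= scores.max()                # numerical stability
    probs = np.exp(scores)
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs), probs


def reinforce_update(w, episode, reward, learning_rate=0.01, temperature=1.0):
    """Episodic REINFORCE update for the weight parameters.

    `episode` is a list of (features, chosen_action, probs) tuples collected
    along one trajectory; `reward` is the scalar return given to that
    trajectory.  The eligibility of w_k is the sum over the episode of
    d log pi(a_t) / d w_k, which for the softmax form above equals
    (f_k(a_t) - E_pi[f_k]) / T.
    """
    eligibility = np.zeros_like(w)
    for features, action, probs in episode:
        expected_f = probs @ features     # E_pi[f_k] under the current policy
        eligibility += (features[action] - expected_f) / temperature
    return w + learning_rate * reward * eligibility
```

Under this assumed policy form, the eligibility of each weight reduces to the gap between the heuristic term of the chosen action and its expectation under the current policy, which is what keeps the episodic REINFORCE update tractable even though the previous action breaks the Markov property. The stochastic hill-climbing step mentioned in the abstract for reducing computation time is not reproduced here; the explicit enumeration of candidate actions in `boltzmann_policy` is only for illustration.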

Citation (APA)

Igarashi, H. (2001). Motion planning of a mobile robot using reinforcement learning. Transactions of the Japanese Society for Artificial Intelligence, 16(6), 501–509. https://doi.org/10.1527/tjsai.16.501
