Jianjun Ni, Xinyun Li, Mingang Hua and Simon X. Yang
Keywords: Path planning, Q-learning, bioinspired neural network, mobile robot
Mobile robot path planning is a key technology in the field of robotic research and applications. The Q-learning algorithm, a type of reinforcement learning method, is one of the most effective approaches to path planning in unknown environments. However, the general Q-learning algorithm has two main problems in robot path planning: slow convergence speed, and the balance between exploration and exploitation. To solve these problems, an improved Q-learning algorithm based on a bioinspired neural network (BNN) is proposed for robot path planning in this paper. In the proposed approach, a novel BNN model is used as the reward function of the Q-learning algorithm, which reduces the effect of the reward function on convergence speed. To ensure that an optimal path can be obtained for the robot, a joint action selection strategy based on tabu search and simulated annealing is proposed. Furthermore, a dynamic learning rate is introduced into the Q-learning-based path planning method, which makes the proposed approach adapt effectively to various environments. Finally, simulation experiments are conducted, and the results show that the proposed approach can deal efficiently with the path planning problem for robots in unknown environments.