REINFORCEMENT LEARNING AND EGA-BASED TRAJECTORY PLANNING FOR DUAL ROBOTS

Yi Liu, Ming Cong, Hang Dong, and Dong Liu

Keywords

Reinforcement learning, trajectory planning, Markov decision process, dual-robot cooperation, elitist genetic algorithm

Abstract

In robot drilling processes, generating a smooth drilling trajectory is an important issue in guaranteeing drilling performance. This paper proposes a Markov reinforcement learning model and an improved genetic algorithm optimization model to solve such problems. Compared with several common global optimization algorithms, the proposed Markov decision process (MDP) surrogate for the greedy policy is more effective and accurate in dealing with such sequential small-scale decision-making problems under uncertainty. The proposed MDP model is used to generate the drilling trajectory in Cartesian space, where quintic splines are applied to motion planning of the tool centre point. Inverse kinematics in the joint space is then applied to generate a highly smooth trajectory, and the damped reciprocals method is used to avoid the singularities that arise during motion. Minimum-time motion planning is discussed based on the combination of an elitist genetic algorithm (EGA) and inverse kinematics; at the same time, the kinematic constraints of the axes are enforced during the motion of the robot manipulators. Simulation results for the 6-DOF serial robots also demonstrate good motion performance and the effectiveness of the EGA.
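The quintic splines mentioned for tool-centre-point motion planning are typically computed by matching position, velocity, and acceleration at both endpoints of a segment, which yields six boundary conditions and a unique degree-five polynomial. The sketch below illustrates this standard construction; it is not the authors' implementation, and the function name and default zero boundary rates are illustrative assumptions.

```python
import numpy as np

def quintic_coeffs(q0, qf, v0=0.0, vf=0.0, a0=0.0, af=0.0, T=1.0):
    """Coefficients c0..c5 of q(t) = c0 + c1*t + ... + c5*t^5 on [0, T],
    matching position (q), velocity (v), and acceleration (a) at both ends.
    (Illustrative sketch; boundary values default to rest-to-rest motion.)"""
    # Six boundary conditions -> 6x6 linear system for the six coefficients.
    A = np.array([
        [1, 0, 0,    0,      0,        0],        # q(0)  = q0
        [0, 1, 0,    0,      0,        0],        # q'(0) = v0
        [0, 0, 2,    0,      0,        0],        # q''(0) = a0
        [1, T, T**2, T**3,   T**4,     T**5],     # q(T)  = qf
        [0, 1, 2*T,  3*T**2, 4*T**3,   5*T**4],   # q'(T) = vf
        [0, 0, 2,    6*T,    12*T**2,  20*T**3],  # q''(T) = af
    ], dtype=float)
    b = np.array([q0, v0, a0, qf, vf, af], dtype=float)
    return np.linalg.solve(A, b)
```

With zero boundary velocities and accelerations this gives the familiar rest-to-rest quintic, whose velocity and acceleration profiles are continuous — the property the abstract relies on for smooth drilling trajectories.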
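The damped reciprocals approach to singularity avoidance is commonly realized as a damped least-squares (pseudo)inverse of the manipulator Jacobian: the damping term keeps joint velocities bounded as the Jacobian loses rank. A minimal sketch of that idea, assuming a fixed damping factor `lam` (the paper may adapt it near singularities):

```python
import numpy as np

def damped_pinv(J, lam=0.01):
    """Damped least-squares inverse J^T (J J^T + lam^2 I)^-1.
    Near a singularity the lam^2 term regularizes the inversion,
    trading a small tracking error for bounded joint velocities.
    (Illustrative sketch; the damping factor here is a fixed assumption.)"""
    JJt = J @ J.T
    return J.T @ np.linalg.inv(JJt + lam**2 * np.eye(JJt.shape[0]))

def joint_velocity(J, x_dot, lam=0.01):
    """Map a Cartesian velocity command x_dot to joint rates q_dot."""
    return damped_pinv(J, lam) @ x_dot
```

For a well-conditioned Jacobian and small `lam`, this reduces to the ordinary pseudoinverse; the benefit appears only as the smallest singular value of `J` approaches zero.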
