DATA-EFFICIENT MODEL-BASED REINFORCEMENT LEARNING FOR ROBOT CONTROL

Ming Sun, Yue Gao, Wei Liu, and Shaoyuan Li

Keywords

Model-based reinforcement learning, sparse regression, system identification

Abstract

Reinforcement learning (RL) methods train an agent to accomplish a wide variety of tasks automatically via trial and error, but they usually require long-term interaction with the environment, making them impractical to apply to robots directly. This paper applies a system identification method to model-based reinforcement learning, which can be implemented directly on real robots without training in a simulation environment. The model of a robot control system is usually a physical system governed by explicit equations, and our model-based RL method uses sparse regression to identify this model. A policy network is then trained by interacting with the learned model. Several experiments are conducted in simulation and on a real robot. Compared with other model-based RL methods, our method offers interpretability and adaptability and can be applied directly to real robots with high data efficiency.
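The abstract does not specify the sparse regression algorithm used; as a minimal illustrative sketch, the sequentially thresholded least-squares approach (as in SINDy-style system identification) fits a library of candidate terms to observed state derivatives and prunes small coefficients. The function name `stlsq`, the candidate library, and the toy system below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def stlsq(theta, dxdt, threshold=0.1, n_iter=10):
    """Sequentially thresholded least squares (illustrative sketch).

    Solves dxdt ~= theta @ xi, repeatedly zeroing coefficients whose
    magnitude falls below `threshold` to promote a sparse model.
    """
    xi = np.linalg.lstsq(theta, dxdt, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        # Refit only the surviving (large) terms for each state dimension.
        for k in range(dxdt.shape[1]):
            big = ~small[:, k]
            if big.any():
                xi[big, k] = np.linalg.lstsq(
                    theta[:, big], dxdt[:, k], rcond=None
                )[0]
    return xi

# Toy example: identify the linear system dx/dt = -2x from noisy samples.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=(200, 1))
dxdt = -2.0 * x + 0.01 * rng.standard_normal((200, 1))
theta = np.hstack([np.ones_like(x), x, x**2])  # candidate library [1, x, x^2]
xi = stlsq(theta, dxdt)
# xi recovers a sparse model: only the x-term carries a coefficient near -2.
```

Because the recovered model is a short list of named terms with coefficients, it is directly inspectable, which is the interpretability benefit the abstract claims for this approach over black-box learned dynamics models.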
