M. Huber (USA)
Hierarchical Learning, Reinforcement Learning, Robot Control
Autonomous systems operating in the real world must be able to learn new tasks and to adapt to changing environmental conditions. In many situations, this has to occur without the intervention of an operator or outside teacher. While reinforcement learning is a good formalism for achieving this, its need for large numbers of exploratory actions often makes it impractical for on-line learning on complex systems. The control approach presented in this paper addresses this problem by utilizing closed-loop policies as actions, which permit temporal and state-space abstractions in the Markov Decision Process (MDP) underlying the learning algorithm. This dramatically reduces the complexity of the reinforcement learning task. Furthermore, it permits learned control policies to be treated as simple control actions, and therefore facilitates the transfer of skills across tasks and the hierarchical construction of increasingly complex control policies from previously acquired skills. To demonstrate the potential of this approach and of skill transfer, it is used here to learn incrementally more complex locomotion gaits on a four-legged robot platform.
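Learning over closed-loop policies as actions is commonly formalized as semi-Markov decision process (SMDP) Q-learning, where each "action" runs for several time steps before the value update is applied. The sketch below illustrates that update rule only; the function name, the toy state/option sets, and all parameter values are illustrative assumptions, not the paper's implementation.

```python
def smdp_q_update(Q, s, o, reward, gamma_k, s_next, options, alpha=0.1):
    """One SMDP-style Q-learning update for a temporally extended action o.

    reward  : cumulative discounted reward collected while o executed
    gamma_k : gamma ** k, where k is the number of primitive steps o ran
    """
    # Bootstrap from the best available option in the resulting state.
    best_next = max(Q[(s_next, o2)] for o2 in options)
    Q[(s, o)] += alpha * (reward + gamma_k * best_next - Q[(s, o)])
    return Q[(s, o)]

# Toy example (hypothetical): two abstract states, two learned skills.
options = ["walk", "turn"]
Q = {(s, o): 0.0 for s in (0, 1) for o in options}
# Suppose skill "walk" ran for 3 steps, earned reward 1.0, and led to state 1.
new_q = smdp_q_update(Q, s=0, o="walk", reward=1.0,
                      gamma_k=0.9 ** 3, s_next=1, options=options)
```

Because each update covers a whole skill execution rather than a single time step, far fewer updates (and exploratory actions) are needed, which is the complexity reduction the abstract refers to.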