J. Li, B. Wang (PR China), and T. Duckett (UK)
Data collection, Robot learning, Teleoperation, Visual-guided demonstration
Robot learning from demonstration (LfD) requires data collection for mapping sensory states to motor actions, which plays a significant role in learning efficiency and effectiveness. In this paper we present a data collection framework that allows a human demonstrator to teleoperate or to visually guide a mobile robot through the required behaviors, while the sensory-motor examples are simultaneously gathered. In the teleoperation mode, the human demonstrator teleoperates the robot through a GUI that consists of velocity control and sensory-motor recording commands, with monitoring windows for sonar, laser, and camera images. In the visual-guided mode, the human demonstrator uses a green can as a command stick, which is tracked by a pan-tilt-zoom (PTZ) camera. The framework is implemented on a Peoplebot robot. Experiments show that both demonstration modes of the framework provide a user-friendly interface for data collection for the robot's subsequent learning process.
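The abstract does not describe the tracking method in detail; as an illustration only, the following Python sketch shows one common way the visual-guided mode could be realized: colour-thresholding the green can with OpenCV and converting its image offset into pan/tilt corrections for the PTZ camera. The HSV bounds, the gain constant, and the camera source are assumptions for this sketch, not the authors' implementation.

import cv2
import numpy as np

# Illustrative HSV range for the green can; the bounds would need tuning
# for the actual can, lighting, and camera (assumption, not from the paper).
LOWER_GREEN = np.array([40, 70, 70])
UPPER_GREEN = np.array([80, 255, 255])

def track_green_can(frame):
    """Return the (cx, cy) centroid of the green can in the frame, or None."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_GREEN, UPPER_GREEN)
    # Remove small speckles before computing the centroid.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    m = cv2.moments(mask)
    if m["m00"] < 1e3:          # too few green pixels: can not visible
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

def pan_tilt_correction(centroid, frame_shape, gain=0.05):
    """Map the centroid's offset from the image centre to pan/tilt deltas
    in degrees; the gain is an illustrative constant."""
    h, w = frame_shape[:2]
    cx, cy = centroid
    d_pan = gain * (cx - w / 2)    # positive: can is to the right of centre
    d_tilt = -gain * (cy - h / 2)  # positive: can is above centre
    return d_pan, d_tilt

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)      # stand-in for the PTZ camera stream
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        c = track_green_can(frame)
        if c is not None:
            d_pan, d_tilt = pan_tilt_correction(c, frame.shape)
            # In the real framework these corrections would be sent to the
            # PTZ unit and paired with the robot's sensory readings for
            # logging as sensory-motor examples.
            print(f"pan {d_pan:+.1f} deg, tilt {d_tilt:+.1f} deg")
        if cv2.waitKey(30) & 0xFF == 27:   # Esc to quit
            break
    cap.release()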