GENERATION OF OBSTACLE AVOIDANCE BASED ON IMAGE FEATURES AND EMBODIMENT

Yuichi Kobayashi, Taichi Okamoto, and Masaki Onishi

Keywords

Image feature extraction, embodiment, robot motion learning, obstacle avoidance

Abstract

It is important for robots that operate in human-centered environments to build up image processing in a bottom-up manner. The acquisition of information from images has been actively investigated in developmental robotics, but its extension to motion generation has not been sufficiently discussed. This paper proposes a method by which a robot autonomously achieves image feature extraction suitable for motion generation while moving in an unknown environment. Obstacle avoidance is taken as an exemplar task, in which the robot autonomously discovers “what is the body” and “what is related to collision” in the image solely through its own experience. Image features related to the robot’s body are acquired by clustering scale-invariant feature transform (SIFT) features based on the synchrony between motion in the image and the motor command. Using the extracted body-relevant features, a state-transition model is generated in the form of an image Jacobian. Based on a learning model that adaptively adds state-transition models, collision-relevant features are detected and accumulated: features that emerge when the robot cannot move are acquired as collision-relevant. The proposed framework is evaluated with real images of a manipulator and an obstacle during obstacle avoidance.
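As a rough illustration of the two core steps summarized above, the sketch below is not the authors' implementation; the array shapes, the synchrony threshold, and the function names are assumptions. It labels tracked SIFT keypoints as body-relevant when their image motion co-occurs with nonzero motor commands, and then fits an image Jacobian that maps joint velocities to the displacement of those body-relevant features by least squares.

    import numpy as np

    def body_relevant_mask(feature_displacements, motor_commands, threshold=0.8):
        """Mark keypoints whose image motion is synchronous with the motor command.

        feature_displacements: (T, K, 2) per-frame image displacement of K tracked
            SIFT keypoints.
        motor_commands: (T, M) joint velocity commands issued at each frame.
        Returns a boolean mask of length K (True = body-relevant).
        """
        cmd_active = np.linalg.norm(motor_commands, axis=1) > 1e-6          # robot commanded to move
        feat_active = np.linalg.norm(feature_displacements, axis=2) > 1e-3  # keypoint actually moved
        # Synchrony score: fraction of frames in which feature motion and
        # commanded motion co-occur (both active or both inactive).
        synchrony = (feat_active == cmd_active[:, None]).mean(axis=0)
        return synchrony > threshold

    def estimate_image_jacobian(body_displacements, motor_commands):
        """Least-squares fit of J in dx ~ J dq, stacking the 2-D displacements
        of all body-relevant keypoints into one vector per frame."""
        T = body_displacements.shape[0]
        dx = body_displacements.reshape(T, -1)                 # (T, 2*K_body)
        J_T, *_ = np.linalg.lstsq(motor_commands, dx, rcond=None)
        return J_T.T                                           # (2*K_body, M)

    if __name__ == "__main__":
        # Synthetic check: 2 of 3 keypoints follow the commands, 1 is static background.
        rng = np.random.default_rng(0)
        dq = rng.normal(size=(200, 2))                         # (T, M) joint commands
        J_true = rng.normal(size=(4, 2))                       # Jacobian of the 2 body keypoints
        dx_body = (dq @ J_true.T).reshape(200, 2, 2)
        dx = np.concatenate([dx_body, np.zeros((200, 1, 2))], axis=1)
        mask = body_relevant_mask(dx, dq)
        print("body-relevant:", mask)                          # -> [ True  True False]
        J_hat = estimate_image_jacobian(dx[:, mask], dq)
        print("Jacobian error:", np.abs(J_hat - J_true).max())

The collision-relevant features described in the abstract would, under this reading, be detected when the commanded motion predicts feature displacement through the fitted Jacobian but no motion is observed; that step is not shown here.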
