Visual Servoing in Virtualised Environments Based on Optical Flow Learning and Constrained Optimisation, pp. 1–10.

Takuya Iwasaki, Solvi Arnold, and Kimitoshi Yamazaki

Keywords

Visual servoing, neural network, virtual environment

Abstract

In this paper, we describe a visual servoing method for object picking. We propose a new architecture for generating robotic manipulator motions that approach a target object for grasping. The architecture consists of two convolutional neural networks (CNNs), one generating goal-directed motion and the other collision-avoidance motion. The networks' outputs are combined, along with additional constraints such as the motion ranges of the joints, by means of quadratic programming (QP). One issue with learning-based approaches is that large amounts of training data are required. We devise an operation strategy that reduces the amount of training data required by using a physics simulator. This method enables visual servoing that is unaffected by texture and colour variation in real environments. We show the effectiveness of the proposed method in experiments using simple shapes as target objects.
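To illustrate the kind of combination the abstract describes, the sketch below poses a heavily simplified version of the problem: blending two motion commands (one goal-directed, one collision-avoiding) subject to per-joint velocity limits. With a diagonal quadratic objective and box constraints, this toy QP has a closed-form solution (clipping the weighted combination). All function names, weights, and limits here are hypothetical; the paper's actual formulation and constraints are not reproduced.

```python
import numpy as np

# Toy QP: minimise ||qdot - (w_goal*v_goal + w_avoid*v_avoid)||^2
# subject to qdot_min <= qdot <= qdot_max (per-joint velocity limits).
# With this diagonal objective and box constraints, the optimum is the
# element-wise clipped weighted combination of the two commands.

def combine_motions(v_goal, v_avoid, qdot_min, qdot_max,
                    w_goal=0.7, w_avoid=0.3):
    """Blend two candidate joint-velocity commands under box constraints.

    v_goal, v_avoid : candidate joint velocities (e.g. CNN outputs)
    qdot_min/max    : per-joint velocity limits
    """
    target = w_goal * np.asarray(v_goal) + w_avoid * np.asarray(v_avoid)
    return np.clip(target, qdot_min, qdot_max)

# Example: a 3-DoF arm with +/-0.5 rad/s limits on every joint.
qdot = combine_motions([0.4, -0.9, 0.2], [-0.1, 0.3, 0.0],
                       qdot_min=-0.5, qdot_max=0.5)
print(qdot)  # every element lies within [-0.5, 0.5]
```

A full implementation would use a general QP solver, since realistic constraints (joint-range limits expressed in position, coupling between joints) are not simple boxes on velocity; the clipping shortcut only holds for this simplified case.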
