Identification of High-Level Object Manipulation Operations from Multimodal Input

A. Barchunova, M. Franzius, M. Pardowitz, and H. Ritter (Germany)

Keywords

Signal Processing, Sensor Multimodality, Recognition of Interaction

Abstract

Object manipulation constitutes a large part of our daily hand movements. Recognition of such movements by a robot in an interactive scenario is an issue that is rapidly gaining attention. In this paper we present an approach to the identification of a class of high-level manual object manipulations. Experiments have shown that a naive approach based on the classification of low-level sensor data yields poor performance. We therefore introduce a two-stage procedure that considerably improves identification performance. In the first stage we estimate an intermediate representation by applying a linear preprocessor to the multimodal low-level sensor data; this mapping computes shape, orientation, and weight estimators for the interaction object. In the second stage we train a classifier to identify high-level object manipulations from this intermediate representation. The devices used in our procedure are an Immersion CyberGlove II enhanced with five tactile sensors on the fingertips (TouchGlove), nine tactile sensors that measure changes in the object's weight, and a VICON multi-camera system for trajectory recording. For 3600 data samples representing a sequence of manual object manipulations, we achieved the following recognition rates: 100% correct labelling of “holding”, 97% for “pouring”, 81% for “squeezing”, and 65% for “tilting”.
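To make the two-stage procedure concrete, the following is a minimal sketch of the pipeline the abstract describes, not the authors' implementation: a linear map (fit here by least squares on hypothetical calibration data) turns raw multimodal sensor frames into shape, orientation, and weight estimates, and a classifier is then trained on that intermediate representation. All names, dimensions, and the choice of classifier (the abstract does not name one; a random forest stands in) are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# --- Stage 1: linear preprocessor (assumed setup) ---
# Hypothetical calibration data: raw glove/tactile/VICON frames paired with
# known shape, orientation, and weight values of the interaction object.
n_train, n_sensors, n_features = 500, 36, 5
X_raw = rng.normal(size=(n_train, n_sensors))   # low-level sensor frames
Z_true = rng.normal(size=(n_train, n_features)) # shape/orientation/weight targets

# Least-squares fit of the linear map W: raw sensors -> intermediate features.
W, *_ = np.linalg.lstsq(X_raw, Z_true, rcond=None)

def intermediate(X):
    """Map raw multimodal frames to shape/orientation/weight estimates."""
    return X @ W

# --- Stage 2: classifier on the intermediate representation ---
# Labels 0..3 stand for holding / pouring / squeezing / tilting.
labels = rng.integers(0, 4, size=n_train)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(intermediate(X_raw), labels)

# At run time, each incoming sensor frame is mapped and then labelled.
x_new = rng.normal(size=(1, n_sensors))
print(clf.predict(intermediate(x_new)))
```

The point of the intermediate step is that the classifier sees a handful of physically meaningful quantities rather than dozens of raw sensor channels, which is what the paper credits for the improvement over classifying low-level data directly.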
