SA-SEMNET: A NETWORK FUSED TACTILE INFORMATION AND SEMANTIC ATTRIBUTES FOR OBJECT RECOGNITION

Shengjie Qiu, Baojiang Li, Haiyan Ye, and Haiyan Wang

Keywords

Haptic information, semantic information, deep learning, attention mechanism, object recognition

Abstract

Tactile sensation plays a crucial role in how robots understand and interact with their surrounding environment. Most robots currently collect tactile information with small, high-resolution sensors, but this approach requires multiple touches to gather enough information about an object. A tactile sensor array covering the entire hand avoids this limitation, and fusing multimodal information can further improve on the classification ability of tactile information alone. Building on these ideas, this study proposes SA-SEMNet, a neural network framework that combines spatial attention with semantic information, enabling robots equipped with low-resolution tactile sensor arrays to recognise everyday objects. First, semantic features are assigned to each object's adjectives and fed, together with the object's tactile information, into a convolutional neural network that incorporates an attention mechanism. The network then outputs category scores and semantic attribute scores, and the final classification score is obtained through a weighted fusion of the two corresponding loss functions. Evaluated on the Stretchable Tactile Glove dataset, the algorithm achieves an average recognition accuracy of 95.32%. It can be applied in fields such as tactile recognition and biomimetic prosthetic hands.
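The weighted fusion of the two loss functions described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: it assumes cross-entropy for the category head, binary cross-entropy for the semantic-attribute head, and a fusion weight `alpha` whose value is hypothetical (the paper does not state it here).

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over category logits
    e = np.exp(z - z.max())
    return e / e.sum()

def category_loss(logits, label):
    # cross-entropy over the network's object-category scores
    return -np.log(softmax(logits)[label])

def attribute_loss(scores, attrs):
    # binary cross-entropy over the semantic-attribute scores,
    # where attrs is a 0/1 vector of ground-truth adjectives
    p = 1.0 / (1.0 + np.exp(-scores))
    return -np.mean(attrs * np.log(p) + (1 - attrs) * np.log(1 - p))

def fused_loss(cat_logits, label, attr_scores, attrs, alpha=0.7):
    # weighted fusion of the two losses; alpha is an assumed
    # trade-off weight, not a value taken from the paper
    return (alpha * category_loss(cat_logits, label)
            + (1 - alpha) * attribute_loss(attr_scores, attrs))

# toy example: 3 object categories, 3 semantic attributes
cat_logits = np.array([5.0, 0.0, 0.0])   # confident in category 0
attr_scores = np.array([4.0, -4.0, 4.0]) # predicts attributes 0 and 2
attrs = np.array([1.0, 0.0, 1.0])        # ground-truth adjectives
loss = fused_loss(cat_logits, label=0, attr_scores=attr_scores, attrs=attrs)
```

A correct, confident prediction on both heads yields a small fused loss, while mislabelling either the category or the attributes increases it.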
