ROBOT GRASPING AND MANIPULATION COMBINING VISION AND TOUCH

Zihao Ding, Guodong Chen, Zhenhua Wang, and Lining Sun

Keywords

Robot grasping, neural network, tactile prior knowledge, visual-tactile fusion, optimal self-search, force/position control

Abstract

Humans instinctively integrate information from multiple senses, such as vision and touch, to make dynamic adjustments when grasping and manipulating objects. Current robot manipulation tasks, by contrast, depend largely on visual guidance and therefore cannot cope with contact-rich activities that demand precise control. In recent fusion methods, vision and touch remain separate processes, far from how the human brain operates, and such fusion also fails to prevent damage to objects during initial contact. Therefore, this paper proposes a pre-grasp network based on the fusion of visual detection and tactile prior knowledge, which combines visual images with tactile experience to achieve fast pre-grasping for robots. Then, using an optimal self-search over the time step, a tactile network is built that automatically adjusts the time step and outputs the grasp hardness and grasping state for each object. Finally, the dexterous robot hand is continuously controlled for steady grasping using a force/position control algorithm. Experiments show that this method enables the robot to achieve stable grasping and manipulation of different objects.
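Since the abstract gives no implementation details, the following is a minimal sketch of how the described three-stage pipeline (visual-tactile pre-grasp network, tactile network with time-step self-search, force/position control) could be structured, assuming a PyTorch implementation. Every module name, dimension, the candidate time-step set, and the proportional force/position update are illustrative assumptions, not the authors' published method.

# Hypothetical sketch of the three-stage pipeline described in the abstract.
# All module names, dimensions, and the control law are illustrative
# assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class PreGraspNet(nn.Module):
    """Fuses a visual feature vector with a tactile-prior embedding to
    predict a pre-grasp pose (assumed 6-DoF) and an initial grip force."""
    def __init__(self, vis_dim=512, tac_dim=64):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(vis_dim + tac_dim, 256), nn.ReLU(),
            nn.Linear(256, 7),  # 6 pose parameters + 1 initial force
        )

    def forward(self, vis_feat, tac_prior):
        x = torch.cat([vis_feat, tac_prior], dim=-1)
        out = self.fuse(x)
        return out[..., :6], out[..., 6]  # pre-grasp pose, initial force

class TactileNet(nn.Module):
    """Recurrent network over a tactile sequence that outputs an object
    hardness estimate and a grasping-state classification."""
    def __init__(self, tac_dim=64, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(tac_dim, hidden, batch_first=True)
        self.hardness_head = nn.Linear(hidden, 1)  # grasp hardness
        self.state_head = nn.Linear(hidden, 3)     # e.g. slip/stable/crush

    def forward(self, tac_seq):
        _, h = self.rnn(tac_seq)           # h: (num_layers, batch, hidden)
        h = h[-1]
        return self.hardness_head(h), self.state_head(h)

def self_search_time_step(net, tac_seq, candidates=(5, 10, 20, 40)):
    """Pick the tactile window length whose grasp-state prediction is most
    confident -- one plausible reading of 'optimal self-search of the
    time step'; the paper's actual criterion may differ."""
    best_T, best_conf, best_out = None, -1.0, None
    for T in candidates:
        if T > tac_seq.shape[1]:
            break
        hardness, state_logits = net(tac_seq[:, :T])
        conf = torch.softmax(state_logits, dim=-1).max().item()
        if conf > best_conf:
            best_T, best_conf, best_out = T, conf, (hardness, state_logits)
    return best_T, best_out

def force_position_step(f_desired, f_measured, q, kp=0.05):
    """One iteration of a simple hybrid force/position update: tighten or
    relax the finger joint command q in proportion to the force error."""
    return q + kp * (f_desired - f_measured)

In this sketch, the hardness estimate from TactileNet would set f_desired, and force_position_step would then run in the inner control loop to hold the grasp steady without crushing the object.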
