GRASPING TASK PLANNING ALGORITHM FOR DEXTEROUS HAND BASED ON SCENE UNDERSTANDING AND SEMANTIC INFORMATION

Zhekai Zhang, Baojiang Li, Bin Wang, Liang Li, Haiyan Wang, Chenhan Zhang

Keywords

Dexterous hand, grasping, task planning algorithm, scene understanding, semantic information

Abstract

Dexterous grasping is a key capability that enables intelligent agents to cope with complex and changing environments. However, most existing dexterous grasping methods are poorly suited to unstructured environments with irregularly arranged objects, complex environmental conditions, and diverse task requirements, which greatly limits the wide application of intelligent agents in daily life and industrial production. To address these problems, this paper proposes a new approach called scene semantic dexterous grasping (SS-DG), which consists of three modules: environment perception, task planning, and motion execution. The environment-perception module scans the objects in the current scene and determines their positions and poses; the task-planning module extracts keywords from the user's natural-language commands, queries the coordinates of the objects, and decomposes the commands into a sequence of executable sub-tasks; and the motion-execution module matches the planned sub-tasks to predefined motions and executes them. SS-DG can grasp objects of different shapes and sizes in everyday environments and performs well on both simple and complex tasks. Experiments show that SS-DG achieves a task success rate of over 80% in everyday unstructured environments and maintains an average success rate of 70% even under significant changes in factors such as scene layout and ambient lighting.
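
To make the three-module pipeline concrete, the sketch below shows one way the perception, planning, and execution stages described above could fit together. It is a minimal illustration only: the class names, object representation, keyword-matching rule, and motion primitives are assumptions for demonstration and are not taken from the paper.

```python
# Illustrative sketch of an SS-DG-style pipeline: perception -> planning -> execution.
# All names and data structures here are hypothetical placeholders.
from dataclasses import dataclass


@dataclass
class ObjectState:
    """Position and pose of a detected object (hypothetical representation)."""
    name: str
    position: tuple  # (x, y, z) in metres
    pose: tuple      # orientation as (roll, pitch, yaw)


class EnvironmentPerception:
    """Scans the current scene and reports object positions and poses."""
    def scan(self) -> dict:
        # Placeholder: a real system would run detection and pose estimation here.
        return {
            "cup": ObjectState("cup", (0.42, 0.10, 0.05), (0.0, 0.0, 1.57)),
            "bottle": ObjectState("bottle", (0.55, -0.08, 0.07), (0.0, 0.0, 0.0)),
        }


class TaskPlanner:
    """Extracts the target object from a natural-language command and
    decomposes the command into executable sub-tasks."""
    def plan(self, command: str, scene: dict) -> list:
        target = next((name for name in scene if name in command.lower()), None)
        if target is None:
            raise ValueError(f"No known object mentioned in: {command!r}")
        obj = scene[target]
        # A simple fixed decomposition: approach -> grasp -> lift.
        return [
            {"motion": "approach", "target": obj},
            {"motion": "grasp", "target": obj},
            {"motion": "lift", "target": obj},
        ]


class MotionExecutor:
    """Matches planned sub-tasks to predefined motions and executes them."""
    PRIMITIVES = {"approach", "grasp", "lift"}

    def execute(self, subtasks: list) -> None:
        for step in subtasks:
            motion = step["motion"]
            if motion not in self.PRIMITIVES:
                raise ValueError(f"No predefined motion for {motion!r}")
            obj = step["target"]
            print(f"Executing {motion} on {obj.name} at {obj.position}")


if __name__ == "__main__":
    scene = EnvironmentPerception().scan()
    plan = TaskPlanner().plan("please pick up the cup", scene)
    MotionExecutor().execute(plan)
```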
