Robotic Grasping Based on 6D Pose Estimation from a Single View
Abstract
In real-world scenarios, the diversity of object types and their random placement make object recognition difficult for intelligent robots, resulting in a low grasping success rate. To address this issue, a robot grasping method is proposed for complex situations such as occlusion, multiple targets of the same type, and stacking. A single-view 6D pose estimation network with an encoder-decoder structure is designed based on the ECA channel attention mechanism and the ResNet residual network. A training dataset for 6D pose estimation and grasping is generated using a synthetic dataset production method. The robot grasping control module controls a UR5 robot to perform intelligent grasping based on the output of the 6D pose estimation network and the results of hand-eye calibration. Experimental results on Linemod, YCB-Video, and the synthesized dataset show that the average grasping success rate of our method reaches 95%.
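For reference, the sketch below illustrates an ECA-style channel attention block of the kind named in the abstract, implemented in PyTorch. The module name `ECALayer` and the kernel size are illustrative assumptions; the paper's exact network configuration is not reproduced here.

```python
import torch
import torch.nn as nn

class ECALayer(nn.Module):
    """Efficient Channel Attention: global average pooling, a 1D convolution
    across channels, and a sigmoid gate applied to the input feature map."""

    def __init__(self, kernel_size: int = 3):  # kernel_size is an assumed setting
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=kernel_size,
                              padding=(kernel_size - 1) // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map from a ResNet-style encoder stage
        y = self.avg_pool(x)                            # (B, C, 1, 1)
        y = self.conv(y.squeeze(-1).transpose(-1, -2))  # (B, 1, C)
        y = y.transpose(-1, -2).unsqueeze(-1)           # (B, C, 1, 1)
        return x * self.sigmoid(y)                      # reweight channels

# Usage example (assumed shapes): attach after an encoder feature map.
features = torch.randn(2, 256, 32, 32)
attended = ECALayer(kernel_size=3)(features)  # same shape, channel-reweighted
```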