Citation: LI Zhengming, ZHANG Jinlong. Detection and Positioning of Grab Target Based on Deep Learning[J]. INFORMATION AND CONTROL, 2020, 49(2): 147-153. DOI: 10.13976/j.cnki.xk.2020.9212

Detection and Positioning of Grab Target Based on Deep Learning

  • Abstract: High-accuracy attitude detection and location of a grab target by a robot is still an open problem. We propose a convolutional-neural-network-based method, built on the Faster R-CNN Inception-V2 network model, for fast attitude detection and accurate location of the grab target. In the network, the attitude angle of the grab target is output as a classification label, and the position coordinates are obtained by regression. The Cornell public dataset is relabeled, and an end-to-end model is trained on it. The model achieves accuracies of 96.18% and 96.32% on the instance-wise and object-wise detection test sets, respectively, and the processing time for each image is less than 0.06 s. The experimental results show that the model can perform fast attitude detection and location of single or multiple grab targets in an image in real time, with high accuracy and strong robustness and stability.
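The abstract states the core idea: the grasp angle is predicted as a classification label while the box coordinates are regressed, with the Cornell grasp rectangles relabeled accordingly. The paper's exact label encoding is not given here, so the Python sketch below only illustrates one plausible conversion, assuming Cornell-style four-corner grasp rectangles and a hypothetical 18-bin angle quantization; the function rect_to_label and the bin count are illustrative choices, not taken from the paper.

    import math
    import numpy as np

    def rect_to_label(corners, num_angle_bins=18):
        """Convert a grasp rectangle (4 corner points) into an axis-aligned
        box for coordinate regression plus a discretized orientation class.

        corners: (4, 2) array-like of (x, y) image coordinates; the first
        edge (corners[0] -> corners[1]) is assumed to carry the grasp
        orientation, which is an assumption about the annotation order.
        Returns (x_min, y_min, x_max, y_max, angle_class).
        """
        corners = np.asarray(corners, dtype=np.float32)

        # Axis-aligned bounding box used as the regression target.
        x_min, y_min = corners.min(axis=0)
        x_max, y_max = corners.max(axis=0)

        # Grasp orientation from the first rectangle edge; a parallel-jaw
        # grasp is symmetric under 180 degrees, so fold the angle into [0, pi).
        dx, dy = corners[1] - corners[0]
        theta = math.atan2(dy, dx) % math.pi

        # Discretize into equal-width bins to obtain a classification label.
        angle_class = int(theta / (math.pi / num_angle_bins)) % num_angle_bins
        return float(x_min), float(y_min), float(x_max), float(y_max), angle_class

    if __name__ == "__main__":
        # Toy rectangle rotated roughly 30 degrees about its centre.
        demo = [(100, 50), (160, 85), (140, 120), (80, 85)]
        print(rect_to_label(demo))

With this kind of encoding, the detector's box-regression head handles the position while an ordinary softmax classification head handles the orientation bin, which matches the classification-plus-regression split described in the abstract.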

     
