Dynamic Feature Point Elimination Method for Visual Odometry Combined with Semantic Information

  • Abstract: To address the problem that the visual odometry of a visual SLAM (simultaneous localization and mapping) system is disturbed by dynamic objects in dynamic scenes, causing feature point mismatches between frames and thus large camera pose estimation errors, low localization accuracy, and poor robustness, this paper proposes a dynamic feature point elimination method for visual odometry that combines semantic information. An improved YOLOv5 object detection network provides the visual odometry with semantic information about objects in the scene; a motion consistency detection algorithm based on the epipolar geometry constraint then identifies the dynamic objects inside the detection bounding boxes, so that dynamic feature points can be effectively eliminated; finally, only static features are used for pose estimation and localization. Comparative experiments on the TUM dataset show that, relative to ORB-SLAM2, the root mean square error (RMSE) of the absolute trajectory error (ATE) and of the translational and rotational relative pose error (RPE) is reduced by 97.71%, 95.10%, and 91.97%, respectively, verifying that the proposed method significantly reduces pose estimation error in dynamic scenes and improves the accuracy and robustness of the system.
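The abstract gives only a high-level description of the motion consistency check. As a minimal illustrative sketch (not the authors' implementation; the function name, the OpenCV-based fundamental matrix estimation, and the 1-pixel threshold are assumptions), the epipolar-constraint test can be written as: a matched point is flagged as dynamic when its distance to the epipolar line induced by the fundamental matrix exceeds a threshold.

```python
import numpy as np
import cv2

def motion_consistency_check(pts_prev, pts_curr, dist_thresh=1.0):
    """Flag matched feature points that violate the epipolar constraint.

    pts_prev, pts_curr: (N, 2) arrays of matched pixel coordinates in the
    previous and current frame. Returns a boolean mask, True = dynamic.
    Threshold value is an assumption, not taken from the paper.
    """
    pts_prev = np.asarray(pts_prev, dtype=np.float64)
    pts_curr = np.asarray(pts_curr, dtype=np.float64)

    # Estimate the fundamental matrix with RANSAC, which tolerates a
    # moderate fraction of dynamic (outlier) correspondences.
    F, _ = cv2.findFundamentalMat(pts_prev, pts_curr, cv2.FM_RANSAC, 1.0, 0.99)
    if F is None or F.shape != (3, 3):
        return np.zeros(len(pts_prev), dtype=bool)

    ones = np.ones((len(pts_prev), 1))
    p1 = np.hstack([pts_prev, ones])  # homogeneous coordinates, frame k-1
    p2 = np.hstack([pts_curr, ones])  # homogeneous coordinates, frame k

    # Epipolar line l = F p1 in the current image; a static point should
    # satisfy p2^T F p1 ≈ 0, i.e. lie on (or near) its epipolar line.
    lines = p1 @ F.T                  # rows are (A, B, C) of each line
    num = np.abs(np.sum(lines * p2, axis=1))
    den = np.hypot(lines[:, 0], lines[:, 1])
    dist = num / np.maximum(den, 1e-12)

    # Points far from their epipolar line move inconsistently with the
    # camera motion and are treated as dynamic.
    return dist > dist_thresh
```

In the pipeline described above, this check would be combined with the YOLOv5 bounding boxes: an object whose box contains a large share of flagged points is treated as dynamic, and all feature points inside that box are discarded before pose estimation.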

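For reference, the quoted 97.71%, 95.10%, and 91.97% figures are relative RMSE reductions against ORB-SLAM2. The sketch below (assuming timestamp-associated, already-aligned trajectories in the style of the standard TUM evaluation tools; this is not the authors' evaluation script) shows how the ATE RMSE and the percentage reduction are computed.

```python
import numpy as np

def ate_rmse(est_xyz, gt_xyz):
    """RMSE of the absolute trajectory error over aligned positions.

    est_xyz, gt_xyz: (N, 3) arrays of estimated and ground-truth camera
    positions, assumed already associated by timestamp and aligned
    (e.g. by a Horn/Umeyama similarity transform, as in the TUM tools).
    """
    err = np.linalg.norm(est_xyz - gt_xyz, axis=1)  # per-frame error
    return np.sqrt(np.mean(err ** 2))

def reduction_percent(baseline_rmse, ours_rmse):
    """Relative RMSE improvement over a baseline such as ORB-SLAM2."""
    return 100.0 * (baseline_rmse - ours_rmse) / baseline_rmse
```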