Dynamic Feature Point Elimination Method for Visual Odometry Combined with Semantic Information
Abstract
Aiming at the problem that the visual odometry of visual SLAM (Simultaneous Localization and Mapping) algorithms is disturbed by dynamic objects in dynamic environments, which leads to mismatched feature points between frames, large camera pose estimation errors, low localization accuracy, and poor robustness, this paper proposes a dynamic feature point elimination method for visual odometry that combines semantic information. An improved YOLOv5 object detection network provides object-level semantic information to the visual odometry, and dynamic objects within the detection bounding boxes are then identified by combining a motion consistency detection algorithm with the epipolar geometry constraint, so that dynamic feature points are effectively eliminated and only static features are used for pose estimation and localization. Comparative results on the TUM dataset show that, relative to ORB-SLAM2, the RMSE of the absolute trajectory error (ATE) and of the translational and rotational relative pose errors (RPE) is reduced by 97.71%, 95.10%, and 91.97%, respectively. This verifies that the proposed method significantly reduces pose estimation errors in dynamic environments and improves the accuracy and robustness of the system.
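
To illustrate the core idea, the following is a minimal sketch (not the authors' implementation) of epipolar-constraint-based filtering of feature matches that fall inside detected object boxes. It assumes OpenCV for fundamental-matrix estimation; the bounding boxes stand in for the (improved) YOLOv5 detections, and the function names and threshold value are illustrative assumptions, not taken from the paper.

    # Hypothetical sketch: remove matches inside detection boxes that violate
    # the epipolar constraint, keeping only static features for pose estimation.
    import numpy as np
    import cv2

    def point_to_epiline_distance(pts1, pts2, F):
        """Distance of each point in pts2 to the epipolar line induced by pts1 and F."""
        pts1_h = cv2.convertPointsToHomogeneous(pts1).reshape(-1, 3)
        pts2_h = cv2.convertPointsToHomogeneous(pts2).reshape(-1, 3)
        lines = (F @ pts1_h.T).T                       # epipolar lines in image 2
        num = np.abs(np.sum(lines * pts2_h, axis=1))
        den = np.sqrt(lines[:, 0] ** 2 + lines[:, 1] ** 2) + 1e-12
        return num / den

    def filter_dynamic_matches(pts1, pts2, boxes, dist_thresh=1.0):
        """Keep matches that satisfy the epipolar constraint; matches inside a
        detection box that violate it are treated as dynamic and removed."""
        F, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
        if F is None:
            return np.ones(len(pts1), dtype=bool)      # fall back: keep all matches
        dist = point_to_epiline_distance(pts1, pts2, F)

        def in_any_box(p):
            return any(x1 <= p[0] <= x2 and y1 <= p[1] <= y2
                       for (x1, y1, x2, y2) in boxes)

        keep = np.ones(len(pts1), dtype=bool)
        for i, (p, d) in enumerate(zip(pts2, dist)):
            if in_any_box(p) and d > dist_thresh:      # dynamic candidate in a detection box
                keep[i] = False
        return keep

In this sketch the fundamental matrix is estimated with RANSAC over all matches, and only the matches inside detection boxes that deviate from their epipolar lines beyond the threshold are discarded; feature points on static background inside a box are thereby retained.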