Abstract:
To address the problem that fast-moving cameras can cause optical-flow tracking failures due to motion blur and low-textured scenes, we design a monocular visual odometry algorithm based on point, line, and edge features, whereas traditional algorithms rely on point or line features only. First, we extend the popular semi-direct approach to monocular visual odometry, point-line semi-direct monocular visual odometry (PLSVO), to include edge segments, thereby obtaining a more robust system capable of handling both low-textured and low-structured environments. Second, we adopt a keyframe extraction strategy to improve localization accuracy and an initialization optimization to accelerate the convergence of the map points. Lastly, we optimize the pose and map points by solving four sub-problems: image alignment of edge feature points, individual feature alignment, pose refinement, and structure refinement. We thoroughly evaluate our method on the EuRoC and TUM datasets. The experimental results show that the proposed algorithm outperforms PLSVO in terms of both tracking accuracy and robustness.