Abstract:
In this study, we present a visual navigation algorithm for mobile robots that treats visual features as environmental observations. 2D visual features are recognized using the gray-value variance method, and their coordinates are transformed from the image-plane frame to the world frame based on the mapping relation between 2D and 3D space. This procedure yields a measurement model, which is integrated into a Bayesian data-fusion framework. To reduce the error introduced by linearization, we propose an iterative observation-updating strategy: by iteratively rectifying the initial state of the filtering update routine, we improve the accuracy of the estimated joint posterior and, consequently, the quality of the robot-pose and environmental-primitive estimates. Furthermore, we carried out a field test covering a 505 m trajectory in a practical environment, using a mobile robot platform equipped with an on-board vision system, and demonstrated that the proposed algorithm outperforms the traditional method.
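
As context for the iterative observation-updating strategy summarized above, the sketch below shows a generic iterated EKF measurement update, which relinearizes a nonlinear measurement model around successive state iterates rather than around the prior mean alone. This is a minimal illustration assuming a Gaussian prior and a differentiable measurement function `h`; the function and parameter names (`iterated_ekf_update`, `jac_h`, `n_iter`) are hypothetical and do not come from the paper itself.

```python
import numpy as np

def iterated_ekf_update(x_prior, P_prior, z, h, jac_h, R, n_iter=5, tol=1e-6):
    """Generic iterated EKF measurement update (Gauss-Newton on the MAP objective).

    x_prior : prior state mean (e.g., robot pose plus feature coordinates)
    P_prior : prior covariance
    z       : measurement (e.g., image-plane feature coordinates)
    h       : measurement function, state -> predicted measurement
    jac_h   : Jacobian of h, evaluated at a given state
    R       : measurement noise covariance
    """
    x_i = x_prior.copy()
    for _ in range(n_iter):
        H = jac_h(x_i)                           # relinearize at the current iterate
        S = H @ P_prior @ H.T + R                # innovation covariance
        K = P_prior @ H.T @ np.linalg.inv(S)     # Kalman gain
        # IEKF iterate: innovation is corrected for the offset between
        # the prior mean and the current linearization point.
        x_next = x_prior + K @ (z - h(x_i) - H @ (x_prior - x_i))
        if np.linalg.norm(x_next - x_i) < tol:
            x_i = x_next
            break
        x_i = x_next
    # Covariance update at the final linearization point.
    H = jac_h(x_i)
    K = P_prior @ H.T @ np.linalg.inv(H @ P_prior @ H.T + R)
    P_post = (np.eye(len(x_prior)) - K @ H) @ P_prior
    return x_i, P_post
```

The key design point matching the abstract's claim is the repeated relinearization: each pass rectifies the state at which the Jacobian is evaluated, so the posterior estimate suffers less from first-order linearization error than a single-pass EKF update.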