Abstract:
We introduce a novel method for point cloud registration that estimates the rigid transformation between two partially overlapping point clouds. By adopting correspondence learning techniques and enforcing the spatial consistency of superpoints, the method substantially improves the accuracy of coarse matching. We further introduce saliency scores and apply them to correspondence extraction, improving the accuracy of transformation estimation. In addition, we design a combination of loss functions, comprising a coarse matching loss, a fine matching loss, and a saliency score loss, to optimize the model and enhance registration accuracy. Experimental results demonstrate strong performance on point cloud registration tasks, with consistent gains in inlier ratio, feature matching recall, and registration recall. The method remains robust under various levels of noise and retains its advantage across different overlap ratios. Ablation studies further confirm the critical roles of geometric consistency and saliency scores in improving registration precision.