We propose a framework that integrates a ground-to-satellite (G2S) cross-view registration method with visual SLAM for autonomous driving. Loop closure, which is crucial for reducing accumulated localization error, is not always available in autonomous driving scenarios. Conversely, G2S registration does not accumulate error over time, but its individual registration results can be unreliable. Our goal is to combine the advantages of both to achieve accurate vision-based vehicle localization.
Our approach is straightforward. After obtaining the trajectory from visual SLAM, we apply a two-stage process to select valid G2S results: a coarse check using spatial correlation bounds, followed by a fine check using visual odometry consistency. The validated G2S results are then fused with the SLAM visual odometry by solving a scaled pose graph.
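To make the pipeline concrete, below is a minimal sketch of the two-stage validation and the scaled pose-graph fusion, simplified to 2D poses (x, y, yaw). All function names, thresholds, and the least-squares formulation are illustrative assumptions, not the implementation used in the paper.

```python
import numpy as np
from scipy.optimize import least_squares


def rot(theta):
    """2D rotation matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])


def coarse_check(g2s_xy, slam_xy, search_radius=20.0):
    """Stage 1 (coarse): keep a G2S estimate only if it lies inside the
    spatial-correlation search bound around the SLAM position.
    The radius is an illustrative assumption."""
    return np.linalg.norm(np.asarray(g2s_xy) - np.asarray(slam_xy)) <= search_radius


def fine_check(g2s_rel, vo_rel, trans_tol=2.0, rot_tol=np.deg2rad(5.0)):
    """Stage 2 (fine): keep a pair of G2S estimates only if their relative
    motion (x, y, yaw) agrees with the visual-odometry relative motion.
    Tolerances are illustrative assumptions."""
    dt = np.linalg.norm(np.asarray(g2s_rel[:2]) - np.asarray(vo_rel[:2]))
    dr = abs((g2s_rel[2] - vo_rel[2] + np.pi) % (2.0 * np.pi) - np.pi)
    return dt <= trans_tol and dr <= rot_tol


def fuse_scaled_pose_graph(vo_poses, g2s_obs):
    """Fuse VO edges (known only up to a global scale s) with validated G2S
    position priors by nonlinear least squares over 2D poses (x, y, yaw).

    vo_poses: (N, 3) array of SLAM/VO poses.
    g2s_obs:  dict {frame index: (x, y)} of validated G2S positions.
    """
    vo_poses = np.asarray(vo_poses, dtype=float)
    n = len(vo_poses)

    def residuals(params):
        s = params[0]                      # global metric scale of the VO trajectory
        poses = params[1:].reshape(n, 3)   # optimized (x, y, yaw) per frame
        res = []
        # Odometry edges: relative motion expressed in the local frame of pose i.
        for i in range(n - 1):
            dxy_vo = rot(vo_poses[i, 2]).T @ (vo_poses[i + 1, :2] - vo_poses[i, :2])
            dth_vo = vo_poses[i + 1, 2] - vo_poses[i, 2]
            dxy = rot(poses[i, 2]).T @ (poses[i + 1, :2] - poses[i, :2])
            dth = poses[i + 1, 2] - poses[i, 2]
            res.extend(dxy - s * dxy_vo)
            res.append(dth - dth_vo)
        # Unary G2S position priors on the validated frames.
        for i, xy in g2s_obs.items():
            res.extend(poses[i, :2] - np.asarray(xy))
        return np.asarray(res)

    x0 = np.concatenate([[1.0], vo_poses.ravel()])
    sol = least_squares(residuals, x0)
    return sol.x[0], sol.x[1:].reshape(n, 3)  # recovered scale and fused trajectory
```

In this sketch, frames that pass both checks are the ones inserted into `g2s_obs`; the solver then returns the recovered metric scale together with the refined trajectory.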
Experimental results demonstrate that our proposed method achieves high trajectory accuracy even without loop closure.
If you find this work useful, please cite:

@article{zhang2024increasing,
  title={Increasing SLAM Pose Accuracy by Ground-to-Satellite Image Registration},
  author={Zhang, Yanhao and Shi, Yujiao and Wang, Shan and Vora, Ankit and Perincherry, Akhil and Chen, Yongbo and Li, Hongdong},
  journal={arXiv preprint arXiv:2404.09169},
  year={2024}
}
Please do not hesitate to contact me at yanhaozhang1991@gmail.com or yanhao.zhang@uts.edu.au with any questions.