Sparse Feature Matching Algorithm for UAV Visual Localization
WANG Chenji¹, JIN Xin², HAN Linsen², HUANG Wenjun¹, LUO Hangzai¹
1. Northwest University, Xi’an 710127, China;
2. AVIC Manufacturing Technology Institute, Beijing 100024, China
Abstract: Unmanned aerial vehicles (UAVs) currently rely primarily on the global navigation satellite system (GNSS) for navigation and localization. In scenarios where satellite signals are weak or jammed, however, mission completion is severely affected and the UAV's flight safety may even be jeopardized. To address this issue, this paper proposes a visual localization algorithm that enables safe, long-duration UAV flight in GNSS-denied environments. The algorithm computes the UAV's geographic coordinates by matching aerial images captured by the UAV against geotagged satellite maps. First, a satellite map preprocessing method is designed to reduce the computational load during flight. Second, the learned perceptual image patch similarity (LPIPS) metric is used for initial retrieval. Finally, image matching and offset estimation are performed by combining the deep-learning-based SuperPoint sparse feature extraction algorithm with the LightGlue feature matching algorithm, yielding the UAV's visual localization result. The proposed method is evaluated on the ALTO dataset, where it improves the R@1 metric by 17.2% over current state-of-the-art methods, demonstrating its feasibility and superiority.
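For orientation, the retrieve-then-match pipeline summarized in the abstract can be sketched with publicly available implementations: the `lpips` package for the LPIPS metric and the official LightGlue package (github.com/cvg/LightGlue) for SuperPoint extraction and LightGlue matching. The sketch below is an illustration under those assumptions, not the authors' released code; the tile paths, image resolutions, and keypoint budget are placeholders.

```python
# Minimal sketch of the two-stage pipeline from the abstract: LPIPS-based
# retrieval over pre-tiled satellite imagery, then SuperPoint + LightGlue
# sparse matching. Uses the public `lpips` package and the official LightGlue
# package (github.com/cvg/LightGlue); paths and parameters are placeholders.
import torch
import torch.nn.functional as F
import lpips
from lightglue import LightGlue, SuperPoint
from lightglue.utils import load_image, rbd

device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Stage 1: rank geotagged satellite tiles by LPIPS distance to the UAV frame.
lpips_fn = lpips.LPIPS(net='alex').to(device)    # lower distance = more similar
query = load_image('uav_frame.jpg').to(device)   # CHW tensor scaled to [0, 1]
tile_paths = ['tiles/tile_000.jpg', 'tiles/tile_001.jpg']  # placeholder tiles

def to_lpips_input(img, size):
    # LPIPS expects NCHW tensors in [-1, 1] at a common resolution.
    img = F.interpolate(img[None], size=size, mode='bilinear', align_corners=False)
    return img * 2.0 - 1.0

with torch.no_grad():
    q = to_lpips_input(query, query.shape[-2:])
    dists = [
        lpips_fn(q, to_lpips_input(load_image(p).to(device), query.shape[-2:])).item()
        for p in tile_paths
    ]
best_tile = load_image(tile_paths[dists.index(min(dists))]).to(device)

# Stage 2: sparse matching between the UAV frame and the best-ranked tile.
extractor = SuperPoint(max_num_keypoints=2048).eval().to(device)
matcher = LightGlue(features='superpoint').eval().to(device)
with torch.no_grad():
    feats0 = extractor.extract(query)
    feats1 = extractor.extract(best_tile)
    matches01 = matcher({'image0': feats0, 'image1': feats1})
feats0, feats1, matches01 = [rbd(x) for x in (feats0, feats1, matches01)]
m = matches01['matches']                # (K, 2) keypoint index pairs
pts_uav = feats0['keypoints'][m[:, 0]]  # matched pixels in the UAV frame
pts_map = feats1['keypoints'][m[:, 1]]  # matched pixels in the satellite tile
# Mapping the pixel offset between pts_uav and pts_map through the tile's
# geotag yields the UAV's geographic coordinates (the offset estimation step).
```

Computing LPIPS against every tile is shown here only for clarity; the satellite map preprocessing step described in the abstract exists precisely to shrink this candidate set before flight.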
CLC number: TP751