An Improved Binocular Visual-Inertial Navigation and Positioning Algorithm Based on Point–Line Fusion
DING Xiao1, YAN Yuxiang2, ZHANG Yongjian1, LAN Weiqi2, BAI Xiaoliang2
1. AVIC Chengdu Aircraft Industrial (Group) Co., Ltd., Chengdu 610092, China;
2. Northwestern Polytechnical University, Xi'an 710072, China
|
|
Abstract In augmented reality (AR) assisted assembly of complex products, localizing the AR device is the key to fusing virtual guidance information with the assembly site in real time. Traditional positioning methods based on fiducial markers or a pre-built offline assembly matrix model suffer from poor compatibility between the assembly task and visual positioning, and a purely visual approach lacks robustness and stability. Fusing vision with an inertial measurement unit (IMU) improves positioning accuracy and robustness, and thereby the assembly quality and efficiency of complex products. This paper proposes a positioning algorithm that fuses binocular vision and IMU measurements. The binocular camera and the IMU are jointly calibrated, and image features are extracted and matched using ORB point features and LSD line segments with endpoint fusion. A pose-error fusion model based on point and line features is then established by tightly coupling the visual and inertial measurements. Comparative experiments against VINS-Fusion and PL-VIO show that the improved IPL-VIO algorithm proposed in this paper achieves smaller absolute translation and rotation errors in structured scenes, where line information is abundant. The algorithm is therefore applicable to AR assembly in weakly textured environments and provides stable, reliable pose data for an augmented reality assembly platform.
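The point–line residual structure summarized in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: it only shows, under a standard pinhole model with made-up calibration and feature values, how point reprojection errors and point-to-line distance errors of projected line-segment endpoints (the usual PL-VIO-style line residual) can be stacked into one residual vector for tightly coupled optimization.

```python
import numpy as np

def project(K, R, t, Pw):
    """Project Nx3 world points into pixel coordinates with a pinhole camera."""
    Pc = (R @ Pw.T).T + t            # world -> camera frame
    uv = (K @ Pc.T).T                # camera -> homogeneous pixels
    return uv[:, :2] / uv[:, 2:3]    # perspective division

def point_residuals(K, R, t, Pw, uv_obs):
    """Reprojection residuals of ORB-style point features (flattened)."""
    return (project(K, R, t, Pw) - uv_obs).ravel()

def line_residuals(K, R, t, endpoints_w, lines_obs):
    """Line residuals: signed distance of each projected 3D endpoint to the
    observed 2D line (a, b, c) with a^2 + b^2 = 1, as in endpoint-based
    point-to-line error models."""
    res = []
    for (P1, P2), l in zip(endpoints_w, lines_obs):
        for P in (P1, P2):
            u, v = project(K, R, t, P[None, :])[0]
            res.append(l[0] * u + l[1] * v + l[2])
    return np.array(res)

def stacked_residuals(K, R, t, Pw, uv_obs, endpoints_w, lines_obs):
    """Tightly coupled visual cost: point and line residuals in one vector
    (IMU preintegration terms would be appended here in a full system)."""
    return np.concatenate([
        point_residuals(K, R, t, Pw, uv_obs),
        line_residuals(K, R, t, endpoints_w, lines_obs),
    ])

if __name__ == "__main__":
    # Illustrative values only: identity pose, two points 5 m ahead.
    K = np.array([[500.0, 0.0, 320.0],
                  [0.0, 500.0, 240.0],
                  [0.0,   0.0,   1.0]])
    R, t = np.eye(3), np.zeros(3)
    Pw = np.array([[0.0, 0.0, 5.0], [1.0, 0.0, 5.0]])
    uv_obs = project(K, R, t, Pw)                      # perfect observations
    endpoints_w = [(Pw[0], Pw[1])]
    lines_obs = [np.array([0.0, 1.0, -240.0])]         # line v = 240
    print(stacked_residuals(K, R, t, Pw, uv_obs, endpoints_w, lines_obs))
```

With exact observations the stacked residual vector is zero; in the real estimator these residuals (weighted and augmented with IMU preintegration error terms) drive the nonlinear pose optimization.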
|