In augmented-reality-assisted assembly of complex products, localization of the augmented reality device is central to registering virtual guidance information with the assembly site in real time. Traditional positioning methods based on markers or a pre-built offline assembly-matrix model suffer from poor compatibility between the assembly task and visual positioning, and from the weak robustness and stability of purely visual positioning. Fusing vision with an inertial measurement unit (IMU) improves positioning accuracy and robustness, and thereby the assembly quality and efficiency of complex products. This paper proposes a positioning algorithm based on the fusion of binocular vision and an IMU. The binocular camera and IMU are jointly calibrated, and image features are extracted and matched as both points and lines, using ORB for point features and LSD with line-endpoint fusion for line features. A visual-inertial pose-error fusion model based on point and line features is then established through tight coupling of vision and the IMU. Comparative experiments against VINS-Fusion and PL-VIO show that the improved IPL-VIO algorithm achieves smaller absolute translation and rotation errors in structured scenes, where line information is abundant. The algorithm is therefore applicable to weakly textured AR assembly scenarios and provides stable, reliable pose data for an augmented reality assembly platform.
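The paper does not give its residual formulation in the abstract, but tightly coupled point-line VIO systems such as PL-VIO conventionally stack a point reprojection error with a point-to-line distance error for the projected line endpoints. The sketch below illustrates those two standard residuals with numpy; the camera intrinsics `K`, pose `T_cw`, and all variable names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def project(K, T_cw, p_w):
    """Project a 3D world point into the image using a 4x4 camera-from-world
    pose T_cw and a 3x3 intrinsic matrix K (pinhole model)."""
    p_c = T_cw[:3, :3] @ p_w + T_cw[:3, 3]   # transform into the camera frame
    uv = K @ (p_c / p_c[2])                  # perspective division + intrinsics
    return uv[:2]

def point_residual(K, T_cw, p_w, uv_obs):
    """Point-feature reprojection error: observed pixel minus predicted pixel."""
    return uv_obs - project(K, T_cw, p_w)

def line_residual(K, T_cw, p_w_a, p_w_b, line_obs):
    """Line-feature error: signed distances of the two projected 3D endpoints
    to the observed 2D line a*u + b*v + c = 0, with (a, b) normalized so the
    algebraic value equals the Euclidean point-to-line distance."""
    a, b, c = line_obs
    residuals = []
    for p_w in (p_w_a, p_w_b):
        u, v = project(K, T_cw, p_w)
        residuals.append(a * u + b * v + c)
    return np.array(residuals)
```

In a tightly coupled estimator, these visual residuals would be stacked with IMU pre-integration residuals and minimized jointly over the pose; here they only show the geometric form of the point and line terms.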
DING Xiao, YAN Yuxiang, ZHANG Yongjian, LAN Weiqi, BAI Xiaoliang. An Improved Binocular Visual-Inertial Navigation and Positioning Algorithm Based on Point-Line Fusion[J]. Aeronautical Manufacturing Technology, 2023, 66(10): 85-92.