Journal of Astronautics ›› 2015, Vol. 36 ›› Issue (10): 1178-1186. DOI: 10.3873/j.issn.1000-1328.2015.10.012


Relative Pose Estimation of Noncooperative Target Based on Fusion of Monocular Vision and Scannerless 3D LIDAR

HAO Gangtao, DU Xiaoping, ZHAO Jiguang, SONG Jianjun

1. Graduate School, Academy of Equipment, Beijing 101416, China;
2. Department of Aerospace Command, Academy of Equipment, Beijing 101416, China;
3. Department of Scientific Research, Academy of Equipment, Beijing 101416, China;
4. Troops 95806 of PLA, Beijing 100076, China
Received: 2014-10-30; Revised: 2015-03-23; Online: 2015-10-15; Published: 2015-10-25

Abstract:

A single vision sensor, as used in traditional approaches, is inadequate for navigation in complicated noncooperative operations. To address this problem, a scale-ambiguous relative pose estimation method based on the fusion of monocular vision and scannerless 3D LIDAR is proposed. First, the imaging geometry of the two sensors is used to map the scannerless range measurements onto the monocular texture image. Second, after establishing a SLAM (Simultaneous Localization and Mapping) Bayesian filter estimation model, a scale-ambiguous relative pose estimation algorithm based on a combined EKF (Extended Kalman Filter)-UKF (Unscented Kalman Filter)-PF (Particle Filter) is presented. Third, a global scale factor estimation algorithm based on the fused image is proposed; the scale factor can be recovered with a simple linear filter. Simulations based on 2D/3D images generated in OpenGL demonstrate the accuracy and robustness of the proposed approach and show that the position estimation error is approximately proportional to the scale estimation error.
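To make the first fusion step concrete, here is a minimal Python sketch of mapping scannerless range measurements onto the monocular texture image through the imaging geometry of the two sensors. It assumes pinhole models for both; the intrinsics K_lidar and K_cam, the LIDAR-to-camera extrinsics R and t, and the function name fuse_range_with_texture are hypothetical placeholders for illustration, not the paper's implementation.

```python
# Hedged sketch: back-project LIDAR range pixels to 3D, transform them into
# the camera frame, and reproject onto the texture image. All sensor
# parameters here are assumed placeholders, not values from the paper.
import numpy as np

def fuse_range_with_texture(range_img, K_lidar, K_cam, R, t, img_shape):
    """Return a sparse depth map aligned with the monocular texture image."""
    K_lidar_inv = np.linalg.inv(K_lidar)
    depth_on_image = np.zeros(img_shape, dtype=np.float64)

    vs, us = np.nonzero(range_img > 0)            # pixels with valid returns
    pix = np.stack([us, vs, np.ones_like(us)])    # homogeneous pixels, (3, N)
    rays = K_lidar_inv @ pix                      # viewing rays in LIDAR frame
    rays /= np.linalg.norm(rays, axis=0)          # unit rays, so range scales them
    pts_lidar = rays * range_img[vs, us]          # 3D points in LIDAR frame
    pts_cam = R @ pts_lidar + t[:, None]          # transform to camera frame

    proj = K_cam @ pts_cam                        # project with camera intrinsics
    z = proj[2]
    front = z > 0                                 # keep points in front of camera
    u = np.round(proj[0, front] / z[front]).astype(int)
    v = np.round(proj[1, front] / z[front]).astype(int)
    ok = (0 <= u) & (u < img_shape[1]) & (0 <= v) & (v < img_shape[0])
    depth_on_image[v[ok], u[ok]] = z[front][ok]   # metric depth on texture image
    return depth_on_image
```

The metric depths recovered this way are presumably what the third step compares against the scale-ambiguous monocular estimates, which would explain why a simple linear filter suffices to estimate the global scale factor.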

Key words: Noncooperative target, Relative pose, Monocular vision, Scannerless 3D LIDAR, Image fusion
