Journal of Astronautics ›› 2020, Vol. 41 ›› Issue (5): 560-568. doi: 10.3873/j.issn.1000-1328.2020.05.006

• Guidance, Navigation, Control and Electronics •

A Non-Cooperative Target Attitude Measurement Method Based on Convolutional Neural Network

XU Yunfei (徐云飞), ZHANG Duzhou (张笃周), WANG Li (王立), HUA Baocheng (华宝成), SHI Yongqiang (石永强), HE Yingbo (贺盈波)

  1. Laboratory of Space Photoelectric Measurement and Perception, Beijing Institute of Control Engineering, Beijing 100190, China
  • Received: 2019-12-13  Revised: 2020-02-20  Online: 2020-05-15  Published: 2020-05-25
  • Supported by: 13th Five-Year Plan Equipment Pre-Research Common Technology and Domain Fund (41412070204)

Abstract: Aiming at the attitude measurement of a space non-cooperative target, a vision measurement method based on a convolutional neural network is proposed. First, a feature extraction network is designed and pre-trained on a public dataset, and a small number of real target images are then used for transfer learning, so that high-level abstract features of the non-cooperative target images are extracted automatically. Second, a regression-based attitude mapping network is designed to establish the nonlinear relationship between the high-level image features and the three-axis attitude angles, thereby realizing attitude measurement of the non-cooperative target. Experiments verify the measurement accuracy and parameter counts of two kinds of feature extraction networks; the measurement accuracy reaches 0.711° (1σ), demonstrating the feasibility of the "monocular camera + convolutional neural network" approach.
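
The abstract outlines a two-stage pipeline: a pre-trained feature extraction network adapted to the target by transfer learning, followed by a regression-style attitude mapping network that maps high-level features to three attitude angles. The sketch below only illustrates that general pattern; the backbone (a torchvision ResNet-18 pretrained on ImageNet), the layer sizes, the input resolution and the MSE loss are assumptions made for illustration, not the networks or training setup evaluated in the paper.

    # Hypothetical illustration of the two-stage design described in the abstract:
    # a pretrained CNN backbone as the feature extraction network (adapted by
    # transfer learning) and a small regression head as the attitude mapping
    # network. Backbone, sizes and loss are assumptions, not the paper's networks.
    import torch
    import torch.nn as nn
    from torchvision import models

    class AttitudePoseNet(nn.Module):
        def __init__(self, freeze_backbone: bool = True):
            super().__init__()
            # Feature extraction network: ImageNet-pretrained ResNet-18 stands in
            # for the pre-trained backbone described in the abstract.
            backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
            self.features = nn.Sequential(*list(backbone.children())[:-1])  # keep layers up to global avg-pool
            if freeze_backbone:
                for p in self.features.parameters():
                    p.requires_grad = False  # transfer learning: reuse pretrained features, train only the head
            # Attitude mapping network: a regression head modelling the nonlinear
            # relation between high-level features and the three attitude angles.
            self.regressor = nn.Sequential(
                nn.Flatten(),
                nn.Linear(512, 256),
                nn.ReLU(),
                nn.Linear(256, 3),  # roll, pitch, yaw (degrees)
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.regressor(self.features(x))

    # One regression training step on hypothetical data (a small batch of
    # monocular images with known three-axis attitude angles).
    model = AttitudePoseNet()
    images = torch.randn(8, 3, 224, 224)
    angles = torch.randn(8, 3)
    optimizer = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-4)
    loss = nn.functional.mse_loss(model(images), angles)
    loss.backward()
    optimizer.step()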

Key words: Convolutional neural network, Space non-cooperative target, Monocular vision, Attitude measurement
