<p>We propose a learning-based, anchor-driven real-time pose estimation
method for autolanding fixed-wing unmanned aerial vehicles (UAVs). The
proposed method enables online tracking of both position and attitude by a
ground stereo vision system in Global Navigation Satellite System
(GNSS)-denied environments. A pipeline of convolutional neural network
(CNN)-based UAV anchor detection followed by anchor-driven UAV pose
estimation is employed. To realize robust and accurate anchor detection, we
design and implement a Block-CNN architecture that reduces the impact of
outliers. Based on the detected anchors, monocular and stereo vision-based
filters are established to update the UAV position and attitude. To expand
the training dataset without additional outdoor experiments, we develop a
parallel system comprising outdoor and simulated subsystems with the same
configuration. Simulated and outdoor experiments demonstrate a substantial
improvement in pose estimation accuracy over the conventional
Perspective-n-Point (PnP) solution. The experiments also validate that the
proposed architecture and algorithm meet the accuracy and real-time
requirements of fixed-wing autolanding UAVs.</p>