A closed-loop, ACT-R approach to modeling approach and landing with and without synthetic vision system (SVS) technology

2004 ◽  
Author(s):  
Michael D. Byrne ◽  
Alex Kirlik ◽  
Michael D. Fleetwood ◽  
David G. Huss ◽  
Alex Kosorukoff ◽  
...  

Author(s):  
Miguel Lozano ◽  
Rafael Lucia ◽  
Fernando Barber ◽  
Fran Grimaldo ◽  
Antonio Lucas ◽  
...  

2002 ◽  
Author(s):  
Norah K. Link ◽  
Ronald V. Kruk ◽  
David McKay ◽  
Sion A. Jennings ◽  
Greg Craig

1999 ◽  
Author(s):  
Andrew K. Barrows ◽  
Keith W. Alter ◽  
Chad W. Jennings ◽  
J. D. Powell

2011 ◽  
Author(s):  
Hiroka Tsuda ◽  
Kohei Funabiki ◽  
Tomoko Iijima ◽  
Kazuho Tawada ◽  
Takashi Yoshida

Sensors ◽  
2019 ◽  
Vol 19 (17) ◽  
pp. 3802 ◽  
Author(s):  
Ahmed F. Fadhil ◽  
Raghuveer Kanneganti ◽  
Lalit Gupta ◽  
Henry Eberle ◽  
Ravi Vaidyanathan

Networked operation of unmanned air vehicles (UAVs) demands fusion of information from disparate sources for accurate flight control. In this investigation, a novel sensor fusion architecture for detecting aircraft runways and horizons, as well as enhancing awareness of surrounding terrain, is introduced based on fusion of enhanced vision system (EVS) and synthetic vision system (SVS) images. EVS and SVS image fusion has yet to be implemented in real-world situations due to signal misalignment, which we address through a registration step that aligns the EVS and SVS images. Four fusion rules combining discrete wavelet transform (DWT) sub-bands are formulated, implemented, and evaluated. The resulting procedure is tested on real EVS-SVS image pairs and on pairs containing simulated turbulence. Evaluations reveal that runways and horizons can be detected accurately even in poor visibility. Furthermore, it is demonstrated that different aspects of the EVS and SVS images can be emphasized by using different DWT fusion rules. The procedure is autonomous throughout landing, irrespective of weather. The fusion architecture developed in this study holds promise for incorporation into manned heads-up displays (HUDs) and UAV remote displays to assist pilots landing aircraft in poor lighting and varying weather. The algorithm also provides a basis for rule selection in other signal fusion applications.
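The core idea — transform both registered images into wavelet sub-bands, combine corresponding sub-bands with a rule, then invert — can be sketched as follows. This is a minimal illustration, not the paper's implementation: a single-level Haar transform stands in for the DWT, and the rule shown (average the approximation band, keep the max-magnitude detail coefficients) is one plausible fusion rule, not necessarily one of the paper's four. All function names here are hypothetical.

```python
def haar_dwt2(img):
    """Single-level 2D Haar DWT of a 2D list with even dimensions.
    Returns (LL, LH, HL, HH) sub-bands, each half the input size."""
    h, w = len(img), len(img[0])
    LL = [[0.0] * (w // 2) for _ in range(h // 2)]
    LH = [[0.0] * (w // 2) for _ in range(h // 2)]
    HL = [[0.0] * (w // 2) for _ in range(h // 2)]
    HH = [[0.0] * (w // 2) for _ in range(h // 2)]
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            LL[i // 2][j // 2] = (a + b + c + d) / 4.0  # approximation
            LH[i // 2][j // 2] = (a - b + c - d) / 4.0  # horizontal detail
            HL[i // 2][j // 2] = (a + b - c - d) / 4.0  # vertical detail
            HH[i // 2][j // 2] = (a - b - c + d) / 4.0  # diagonal detail
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Inverse of haar_dwt2: reassemble the full-size image."""
    img = [[0.0] * (len(LL[0]) * 2) for _ in range(len(LL) * 2)]
    for i in range(len(LL)):
        for j in range(len(LL[0])):
            ll, lh, hl, hh = LL[i][j], LH[i][j], HL[i][j], HH[i][j]
            img[2 * i][2 * j] = ll + lh + hl + hh
            img[2 * i][2 * j + 1] = ll - lh + hl - hh
            img[2 * i + 1][2 * j] = ll + lh - hl - hh
            img[2 * i + 1][2 * j + 1] = ll - lh - hl + hh
    return img

def fuse_dwt(evs, svs):
    """Fuse two registered, same-size images in the wavelet domain:
    average the LL (approximation) bands, and take the max-magnitude
    coefficient per position in each detail band."""
    e, s = haar_dwt2(evs), haar_dwt2(svs)

    def avg(A, B):
        return [[(A[i][j] + B[i][j]) / 2.0 for j in range(len(A[0]))]
                for i in range(len(A))]

    def maxabs(A, B):
        return [[A[i][j] if abs(A[i][j]) >= abs(B[i][j]) else B[i][j]
                 for j in range(len(A[0]))] for i in range(len(A))]

    LL = avg(e[0], s[0])
    details = [maxabs(e[k], s[k]) for k in (1, 2, 3)]
    return haar_idwt2(LL, *details)
```

Fusing an image with itself reproduces the image (the transform is perfectly invertible), while fusing distinct EVS and SVS inputs blends their low-frequency content and retains the stronger edges from either source — which is why, as the abstract notes, different rules emphasize different aspects of the two sensors.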

