Use of an Enhanced Flight Vision System (EFVS) for Taxiing in Low-Visibility Environments

Author(s): Dennis B. Beringer, Andrea Sparko, Joseph M. Jaworski
2015 · Vol 3 · pp. 2373-2380
Author(s): Lynda J. Kramer, Randall E. Bailey, Kyle K. Ellis

Author(s): Dennis B. Beringer, Kelene A. Fercho

Twelve commercial-carrier crews conducting Part 121 operations completed low-visibility takeoffs at Memphis International Airport using an Enhanced Flight Vision System (EFVS). The study used a 2x2x2x3 factorial design crossing runway visual range (RVR; 500 and 700 feet), runway edge lighting (high or medium intensity), and EFVS display configuration (captain's head-up display only, or with an additional first officer's head-down repeater), along with supplemental sample points and several baseline trials representing current-authorization conditions. Tasks included normal takeoffs, EFVS failure (both continue and reject trials), and engine failure (reject). There were no significant main effects of display or infrastructure in the main design (500 and 700 ft RVR), and pilot performance in the experimental EFVS trials did not differ markedly from the baseline (current-authorization) trials. All crews stopped the aircraft successfully on the runway during rejected takeoffs. Pilots uniformly believed they could complete or reject takeoffs at lower visibilities with EFVS than with the head-up display alone, a belief supported by observed performance.
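For readers who want to see the shape of the experimental matrix, the short Python sketch below enumerates the factorial cells described above. The first three factors (RVR, edge lighting, EFVS configuration) come from the abstract; treating the three task types as the three-level factor of the 2x2x2x3 design is an assumption made here for illustration, and all identifiers are hypothetical.

```python
# Hypothetical sketch of the trial matrix implied by the 2x2x2x3 factorial
# design. The first three factors are stated in the abstract; using the three
# task types as the fourth factor is an assumption for illustration.
from itertools import product

rvr_ft = (500, 700)                              # runway visual range, feet
edge_lighting = ("high", "medium")               # runway edge light intensity
efvs_config = ("captain HUD only",
               "captain HUD + FO head-down repeater")
task = ("normal takeoff",                        # assumed three-level factor
        "EFVS failure", "engine failure (reject)")

conditions = list(product(rvr_ft, edge_lighting, efvs_config, task))
print(len(conditions))  # 2 x 2 x 2 x 3 = 24 experimental cells
```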


Author(s): Charles J.C. Lloyd, William F. Reinhart

Head-up displays (HUDs) represent the leading candidate display technology for inclusion in an enhanced or synthetic vision system (EVS or SVS) for commercial transport aircraft. One common EVS concept assumes the raster display of raw or processed sensor (radar or IR) data. However, experience with raster rather than stroke display modes has been largely limited to the presentation of images captured by IR-sensitive and image-intensified cameras during night flying, when the luminance of the forward scene over which the image is superimposed is much lower than in daytime. The objective of this work is to generate a specification for minimum HUD raster image modulation assuming real-world luminance values typical of low-visibility daylight flight. Six Honeywell pilots rated the image quality and utility of flight video presented through a military-style HUD in a transport cockpit mockup. The flight video came from daylight FLIR and daylight CCD cameras, and the luminance of the forward scene against which the HUD image was superimposed was varied across nine levels ranging from 5 fL to 10,000 fL. The results indicate that HUD raster luminance must be approximately 50% of the external scene luminance to promote good pilot awareness of general terrain. To maintain good utility and visibility of standard high-contrast runway markings, the runway centerline, and runway edges, HUD raster luminance must be approximately 15% of the forward scene luminance.
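As a quick worked example of the reported thresholds, the sketch below computes the minimum HUD raster luminance implied by the 50% (terrain) and 15% (runway markings) ratios across the 5 to 10,000 fL scene range the study evaluated. The function and constant names are illustrative, not from the paper.

```python
# Minimal sketch of the luminance requirement reported above: HUD raster
# luminance of roughly 50% of the forward scene keeps general terrain
# visible; roughly 15% suffices for high-contrast runway markings.
# Names and the sampled scene luminances are illustrative assumptions.

TERRAIN_RATIO = 0.50   # raster-to-scene ratio for terrain awareness
MARKINGS_RATIO = 0.15  # ratio for high-contrast runway markings

def required_raster_luminance(scene_fl: float, ratio: float) -> float:
    """Minimum HUD raster luminance (fL) for a given forward-scene
    luminance (fL) and required raster-to-scene ratio."""
    return ratio * scene_fl

# Sample the 5-10,000 fL range evaluated in the study.
for scene in (5, 100, 1_000, 10_000):
    terrain = required_raster_luminance(scene, TERRAIN_RATIO)
    markings = required_raster_luminance(scene, MARKINGS_RATIO)
    print(f"scene {scene:>6} fL -> terrain {terrain:>7.1f} fL, "
          f"markings {markings:>7.1f} fL")
```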


2007
Author(s): Carlo Tiana, Robert Hennessy, Keith Alter, Chad Jennings

Author(s): Lynda J. Kramer, Kyle K. E. Ellis, Randall E. Bailey, Steven P. Williams, Kurt Severance, ...

2004
Author(s): Michael D. Byrne, Alex Kirlik, Michael D. Fleetwood, David G. Huss, Alex Kosorukoff, ...

2020 · pp. 1-12
Author(s): Changxin Sun, Di Ma

In research on intelligent sports vision systems, the stability and accuracy of target recognition, the effectiveness of task assignment, and the quality of path planning are the key factors determining whether the vision system can perform its tasks successfully. To address target-recognition errors caused by uneven brightness and abrupt brightness changes during sports competition, a dynamic template mechanism is proposed. The target-recognition algorithm fully accounts for the correlation among changes in data features and introduces a time-control factor into the SVM classification. At the same time, an unsupervised clustering method is used to design a classification strategy that achieves rapid target discrimination when environmental brightness changes, improving recognition accuracy. In addition, AdaBoost is selected as the machine-learning method, and the algorithm is optimized through fast feature selection and a double-threshold decision, which substantially shortens classifier training time. Finally, to handle complex human poses and partially occluded human targets, this paper proposes representing the whole body as a combination of multiple parts. Experimental results show that the method can detect athletes with varied poses and partial occlusion against complex backgrounds, providing an effective technical means for detecting sports-competition action characteristics in such settings.
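As an illustration of one technique named above, the following is a minimal sketch of a double-threshold decision stage for a boosted classifier. The thresholds, score model, and function names are assumptions for illustration; the abstract does not give the authors' actual implementation.

```python
# Hedged sketch of a double-threshold decision stage like the one the
# abstract attributes to its optimized AdaBoost pipeline. The operating
# points and names below are illustrative assumptions: a normalized
# strong-classifier score is accepted outright above T_HIGH, rejected
# below T_LOW, and deferred to a costlier verification step in between.

T_LOW, T_HIGH = 0.35, 0.75  # assumed operating points

def double_threshold_decide(score: float) -> str:
    """Map a normalized boosted-classifier score in [0, 1] to a decision."""
    if score >= T_HIGH:
        return "accept"   # confidently a player target
    if score <= T_LOW:
        return "reject"   # confidently background
    return "verify"       # ambiguous: defer to a slower check

for s in (0.9, 0.5, 0.1):
    print(s, "->", double_threshold_decide(s))
```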


2018 · Vol 1 (2) · pp. 17-23
Author(s): Takialddin Al Smadi

This survey outlines the use of computer vision for image and video processing in multidisciplinary applications, in both academia and industry. The paper covers theoretical and practical aspects of image and video processing, in addition to computer vision, from fundamental research to the evolution of applications. The subjects demonstrated span the evolution of mobile augmented reality (MAR) applications; augmented reality for 3D modeling and real-time depth imaging; video-processing algorithms for higher depth-video compression; an automatic computer-vision system for citrus fruit implemented on a mobile platform; Bayesian classification with boundary growing for detecting text in video scenes; and the usability of a hand-based interactive method for a portable projector based on augmented reality. © 2018 JASET, International Scholars and Researchers Association

