Real-time Accurate Runway Detection based on Airborne Multi-sensors Fusion

2017 ◽  
Vol 67 (5) ◽  
pp. 542 ◽  
Author(s):  
Lei Zhang ◽  
Yue Cheng ◽  
Zhengjun Zhai

Existing runway detection methods focus mainly on computer-vision-based processing of remote sensing images. However, these algorithms are too complicated and time-consuming to meet the demands of real-time airborne applications. This paper proposes a novel runway detection method based on airborne multi-sensor data fusion, which works in a coarse-to-fine hierarchical architecture. At the coarse layer, a vision projection model from the world coordinate system to the image coordinate system is built by fusing airborne navigation data and forward-looking sensor images; a runway region of interest (ROI) is then extracted from the whole image using this model. At the fine layer, EDLines, a real-time line segment detector, is applied to extract straight line segments from the ROI, and the fragmented segments produced by EDLines are linked into two long runway lines. Finally, unique runway features (e.g. the vanishing point and runway direction) are used to recognise the airport runway. The proposed method is tested on an image dataset provided by a flight simulation system. The experimental results show that the method has advantages in terms of speed, recognition rate and false alarm rate.
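A minimal sketch of the coarse-layer projection step, assuming a standard pinhole camera model: the intrinsic matrix K and the pose (R, t) would come from the navigation data, and the runway corner coordinates below are purely illustrative values, not the paper's data.

```python
import numpy as np

def project_world_to_image(points_w, K, R, t):
    """Project 3-D world points (N x 3) to pixel coordinates (N x 2)
    with a pinhole model: p ~ K [R | t] X_w."""
    pts_cam = R @ points_w.T + t.reshape(3, 1)   # world frame -> camera frame
    pts_img = K @ pts_cam                        # camera frame -> image plane
    return (pts_img[:2] / pts_img[2]).T          # perspective divide

# Illustrative values only: intrinsics, pose and runway corners are assumed.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 50.0])
corners_w = np.array([[-22.5, 0.0, 300.0], [22.5, 0.0, 300.0],
                      [-22.5, 0.0, 3300.0], [22.5, 0.0, 3300.0]])
corners_px = project_world_to_image(corners_w, K, R, t)
x0, y0 = corners_px.min(axis=0)
x1, y1 = corners_px.max(axis=0)
print("coarse runway ROI:", (x0, y0, x1, y1))   # bounding box handed to the fine layer
```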

Author(s):  
Sunghyun Kim ◽  
Won-hyung Lee

The Kinect is a device that has been widely used in many areas since its release in 2010. The Kinect SDK was announced in 2011 and has been applied well beyond its original purpose as a gaming controller. In particular, it has been adopted by many artists in digital media art because it is inexpensive and offers a fast recognition rate. However, there is a problem: the Kinect builds 3D coordinates from a single 2D RGB image for the x and y values and a single depth image for the z value. This imposes a significant limitation on installations for interactive media art. Because a Cartesian XY coordinate system and a spherical Z (depth) coordinate system are combined, a distance-dependent depth error arises, which makes real-time rotation recognition and coordinate correction difficult in this coordinate system. This paper proposes a real-time calibration method that expands the Kinect's recognition range for practical use in digital media art. The proposed method recognises the viewer accurately by calibrating coordinates in any direction in front of the viewer. In the experiment, 3,400 samples were acquired in five stances (1 m attention, 1 m hands-up, 2 m attention, 2 m hands-up, and 2 m hands-half-up), recorded every 0.5 s. The experimental results showed that the accuracy rate improved by about 11.5% compared with front-facing measurement data from the reference Kinect installation.
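A minimal sketch of the kind of coordinate handling the paper addresses: back-projecting a depth pixel to a 3-D point and rotating it into a front-facing reference frame. The intrinsic values are rough Kinect v1 figures assumed for illustration, and the simple yaw rotation stands in for the paper's calibration procedure rather than reproducing it.

```python
import numpy as np

# Rough Kinect v1 depth-camera intrinsics (assumed values for illustration only).
FX, FY = 594.2, 591.0
CX, CY = 339.5, 242.7

def depth_pixel_to_point(u, v, depth_mm):
    """Back-project a depth pixel (u, v) with depth in millimetres to a
    3-D point (metres) in the sensor's Cartesian frame."""
    z = depth_mm / 1000.0
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.array([x, y, z])

def rotate_to_reference(point, yaw_deg):
    """Rotate a point measured by a Kinect mounted at yaw_deg about the
    vertical axis back into the front-facing reference coordinate system."""
    a = np.radians(yaw_deg)
    R = np.array([[np.cos(a), 0.0, np.sin(a)],
                  [0.0, 1.0, 0.0],
                  [-np.sin(a), 0.0, np.cos(a)]])
    return R @ point

p = depth_pixel_to_point(400, 250, 2000)        # a joint seen at roughly 2 m
p_ref = rotate_to_reference(p, yaw_deg=45.0)    # corrected into the front view
```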


2009 ◽  
Vol 29 (5) ◽  
pp. 1359-1361
Author(s):  
Tong ZHANG ◽  
Zhao LIU ◽  
Ning OUYANG

2008 ◽  
Vol 3 (1) ◽  
pp. 106-115 ◽  
Author(s):  
Ting Zhang ◽  
Yuanxin Ouyang ◽  
Yang He

RFID is not only a feasible, novel, and cost-effective candidate for everyday object identification but is also considered a significant tool for providing traceable visibility across the stages of the aviation supply chain. In air baggage handling, RFID tags are used to enhance baggage tracking, dispatching and conveyance, improving management efficiency and user satisfaction. We survey related work and introduce the IATA RP1740c protocol, the standard used to recognise baggage tags. A distributed, traceable aviation baggage application is designed based on RFID networks. We describe an RFID-based baggage tracking experiment at BCIA (Beijing Capital International Airport), in which the tags are sealed inside printed baggage labels and the RFID readers are fixed at selected positions of interest in the baggage handling system (BHS) of Terminal 2. We measure the recognition rate and monitor each bag's real-time status on screen. Through analysis of two months of measurements, we emphasise the advantage of adopting RFID tags in this highly noisy BHS environment. The economic benefits achieved by extensive deployment of RFID in the baggage handling system are also outlined.
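As a small illustration of the kind of metric reported above, the following sketch computes a per-reader recognition rate from logged tag reads; the reader and tag identifiers are hypothetical, and the real BHS logging format is not described in the abstract.

```python
from collections import defaultdict

def recognition_rate(expected_tags, read_events):
    """Fraction of expected baggage tags seen at least once per reader.
    read_events is a list of (reader_id, tag_id) tuples from the read log."""
    seen = defaultdict(set)
    for reader_id, tag_id in read_events:
        seen[reader_id].add(tag_id)
    expected = set(expected_tags)
    return {reader: len(tags & expected) / len(expected)
            for reader, tags in seen.items()}

# Hypothetical example: reader "gate_A" saw 2 of 3 expected bags, "gate_B" saw 1.
reads = [("gate_A", "TAG001"), ("gate_A", "TAG002"), ("gate_B", "TAG001")]
print(recognition_rate(["TAG001", "TAG002", "TAG003"], reads))
```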


Author(s):  
Qing E Wu ◽  
Zhiwu Chen ◽  
Ruijie Han ◽  
Cunxiang Yang ◽  
Yuhao Du ◽  
...  

To achieve effective palmprint recognition, this paper presents a region-of-interest (ROI) segmentation algorithm, extracts the ROI from a palmprint image, and studies the features that compose a palmprint. A coordinate system is constructed from characteristic points on the palm's geometric contour, the ROI extraction algorithm is improved, and an ROI positioning method is provided. Moreover, the wavelet transform is used to decompose the ROI, wavelet energy features are extracted, and a matching and recognition approach is given to improve the correctness and efficiency of the main existing recognition approaches, against which it is compared experimentally. The experimental results show that the proposed approach achieves a better recognition effect, faster matching speed, and a recognition rate that is on average 2.69% higher than those of the main recognition approaches.
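A minimal sketch of the wavelet energy feature step, assuming the PyWavelets library and a Daubechies wavelet with a three-level decomposition; the specific wavelet, level count and distance measure used in the paper are not stated in the abstract, so these are illustrative choices.

```python
import numpy as np
import pywt

def wavelet_energy_features(roi, wavelet="db4", levels=3):
    """Decompose a palmprint ROI with a 2-D discrete wavelet transform and
    return the energy of each sub-band as a normalised feature vector."""
    coeffs = pywt.wavedec2(roi.astype(float), wavelet, level=levels)
    features = [np.sum(coeffs[0] ** 2)]                    # approximation energy
    for cH, cV, cD in coeffs[1:]:                          # detail sub-bands
        features.extend([np.sum(cH ** 2), np.sum(cV ** 2), np.sum(cD ** 2)])
    features = np.asarray(features)
    return features / np.linalg.norm(features)

def match_score(f1, f2):
    """Euclidean distance between two feature vectors; smaller is a closer match."""
    return np.linalg.norm(f1 - f2)
```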


2014 ◽  
Vol 59 (4) ◽  
pp. 1-18 ◽  
Author(s):  
Ioannis Goulos ◽  
Vassilios Pachidis ◽  
Pericles Pilidis

This paper presents a mathematical model for the simulation of rotor blade flexibility in real-time helicopter flight dynamics applications that also employs sufficient modeling fidelity for prediction of structural blade loads. A matrix/vector-based formulation is developed for the treatment of elastic blade kinematics in the time domain. A novel, second-order-accurate, finite-difference scheme is employed for the approximation of the blade motion derivatives. The proposed method is coupled with a finite-state induced-flow model, a dynamic wake distortion model, and an unsteady blade element aerodynamics model. The integrated approach is deployed to investigate trim controls, stability and control derivatives, nonlinear control response characteristics, and structural blade loads for a hingeless rotor helicopter. It is shown that the developed methodology exhibits modeling accuracy comparable to that of non-real-time comprehensive rotorcraft codes. The proposed method is suitable for real-time flight simulation, with sufficient fidelity for simultaneous prediction of oscillatory blade loads.
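The abstract does not spell out the finite-difference scheme, but standard second-order-accurate backward differences for a blade degree of freedom q(t), which suit a real-time time-marching setting, take the following form (the paper's specific scheme may differ):

```latex
\dot{q}^{\,n}  \approx \frac{3\,q^{n} - 4\,q^{n-1} + q^{n-2}}{2\,\Delta t},
\qquad
\ddot{q}^{\,n} \approx \frac{2\,q^{n} - 5\,q^{n-1} + 4\,q^{n-2} - q^{n-3}}{\Delta t^{2}}
```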


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 555
Author(s):  
Jui-Sheng Chou ◽  
Chia-Hsuan Liu

Sand theft and illegal mining in river dredging areas have been a problem in recent decades. For this reason, increasing the use of artificial intelligence in dredging areas, building automated monitoring systems, and reducing human involvement can effectively deter crime and lighten the workload of security guards. In this investigation, a smart dredging construction site system was developed using automated techniques suited to various areas. The aim in the initial period of the smart dredging construction was to automate the audit work at the control point, which manages trucks in river dredging areas. Images of dump trucks entering the control point were captured using monitoring equipment in the construction area. The captured images and the deep learning technique YOLOv3 were used to detect the positions of the vehicle license plates. The framed license plate images were then used as input to an image classification model, C-CNN-L3, to identify the number of characters on the plate. Based on the classification results, the plate images were passed to a text recognition model, R-CNN-L3, which recognised the characters of the license plate. Finally, the models of each stage were integrated into a real-time truck license plate recognition (TLPR) system; the single-character recognition rate was 97.59%, the overall recognition rate was 93.73%, and the speed was 0.3271 s/image. The TLPR system reduces the labor and time spent identifying license plates, effectively reducing the probability of crime and increasing the transparency, automation, and efficiency of frontline personnel's work. The TLPR is the first step toward an automated operation to manage trucks at the control point, and the ongoing development of system functions can advance dredging operations toward the goal of a smart construction site. By providing a vehicle LPR system intended to support intelligent and highly efficient management in dredging-related departments, this paper contributes to the current body of knowledge by presenting an objective approach to the TLPR system.
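A hedged sketch of the three-stage pipeline described above; the detector, length classifier and character recogniser are placeholders standing in for the paper's YOLOv3, C-CNN-L3 and R-CNN-L3 models, whose interfaces are not given in the abstract.

```python
import time

def recognize_truck_plate(frame, detector, length_classifier, char_recognizer):
    """Run the three TLPR stages on one camera frame and time the result.
    The three callables are placeholder models (assumed interfaces)."""
    t0 = time.time()
    plate_box = detector(frame)                       # stage 1: locate the plate
    if plate_box is None:
        return None, time.time() - t0
    x0, y0, x1, y1 = plate_box
    plate_img = frame[y0:y1, x0:x1]                   # crop the plate region
    n_chars = length_classifier(plate_img)            # stage 2: character count
    plate_text = char_recognizer(plate_img, n_chars)  # stage 3: read the characters
    return plate_text, time.time() - t0
```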


Author(s):  
Zachary Baum

Purpose: Augmented reality overlay systems can be used to project a CT image directly onto a patient during procedures. They have been actively trialed for computer-guided procedures; however, they have not become commonplace in practice due to the restrictions of previous systems, which were not handheld and had complicated calibration procedures. We put forward a handheld, tablet-based system for assisting with needle interventions. Methods: The system consists of a tablet display and a 3D-printed, reusable, and customizable frame. A simple and accurate calibration method was designed to align the patient to the projected image. The entire system is tracked via camera with respect to the patient, and the projected image is updated in real time as the system is moved around the region of interest. Results: The resulting system allowed a mean position error of 0.99 mm in the plane of the image and 0.61 mm out of the plane of the image. This accuracy was considered clinically acceptable for computer-guided tool use in several procedures involving musculoskeletal needle placement. Conclusion: Our calibration method was developed and tested using the designed handheld system. Our results illustrate the potential of handheld augmented reality systems for computer-guided needle procedures.
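A small sketch showing how the reported in-plane and out-of-plane position errors can be separated, assuming the image plane normal is known from the tracked tablet pose; the function and variable names are illustrative, not the paper's.

```python
import numpy as np

def split_position_error(measured, reference, plane_normal):
    """Decompose a 3-D position error into the component lying in the image
    plane and the component along the plane normal (out of plane)."""
    n = plane_normal / np.linalg.norm(plane_normal)
    err = measured - reference
    out_of_plane = abs(err @ n)                     # distance along the normal
    in_plane = np.linalg.norm(err - (err @ n) * n)  # residual within the plane
    return in_plane, out_of_plane

# Illustrative values in millimetres.
in_p, out_p = split_position_error(np.array([10.2, 5.1, 0.4]),
                                   np.array([10.0, 5.0, 0.0]),
                                   np.array([0.0, 0.0, 1.0]))
```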


2018 ◽  
Vol 10 (10) ◽  
pp. 1544 ◽  
Author(s):  
Changjiang Liu ◽  
Irene Cheng ◽  
Anup Basu

We present a new method for real-time runway detection that combines synthetic vision with a region-of-interest (ROI) based level set method. A virtual runway from synthetic vision provides a rough region of the infrared runway. A three-threshold segmentation, following Otsu's binarization method, is proposed to extract a runway subset from this region, which is used to construct an initial level set function. The virtual runway also gives a reference area for the actual runway in the infrared image, which helps us design a stopping criterion for the level set method. To meet real-time processing requirements, an ROI-based level set evolution framework is implemented. Experimental results show that the proposed algorithm is efficient and accurate.
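A minimal sketch of initialising a level set function from the virtual-runway ROI using plain Otsu thresholding (via scikit-image); the paper's three-threshold variant and its stopping criterion are not reproduced here.

```python
import numpy as np
from skimage.filters import threshold_otsu

def initial_level_set_from_roi(ir_roi):
    """Threshold the infrared ROI with Otsu's method and build a signed
    initial level set: positive inside the candidate runway, negative outside."""
    t = threshold_otsu(ir_roi)
    mask = ir_roi > t
    phi0 = np.where(mask, 1.0, -1.0)
    return phi0, mask
```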


Author(s):  
Kevin Lesniak ◽  
Conrad S. Tucker

The method presented in this work reduces the frequency with which virtual objects incorrectly occlude real-world objects in Augmented Reality (AR) applications. Current AR rendering methods cannot properly represent occlusion between real and virtual objects because the objects are not represented in a common coordinate system. These occlusion errors can give users an incorrect perception of their environment when using an AR application: not knowing a real-world object is present because a virtual object incorrectly occludes it, and misperceiving depth or distance due to incorrect occlusions. The authors of this paper present a method that brings both real-world and virtual objects into a common coordinate system so that distant virtual objects do not obscure nearby real-world objects in an AR application. This method captures and processes RGB-D data in real time, allowing it to be used in a variety of environments and scenarios. A case study shows the effectiveness and usability of the proposed method in correctly occluding real-world and virtual objects and providing a more realistic representation of the combined real and virtual environments in an AR application. The results of the case study show that the proposed method can detect at least 20 real-world objects with the potential to be incorrectly occluded while processing and fixing occlusion errors at least 5 times per second.
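A minimal sketch of the per-pixel occlusion test made possible once real and virtual depths share one camera coordinate system; the array names and the handling of missing depth samples are assumptions for illustration.

```python
import numpy as np

def occlusion_mask(real_depth, virtual_depth):
    """Per-pixel test: draw a virtual fragment only where it is closer to
    the camera than the real surface measured by the RGB-D sensor."""
    valid = real_depth > 0                                  # ignore missing depth
    return np.where(valid, virtual_depth < real_depth, True)
```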

