Automatic Detection of Wrecked Airplanes from UAV Images

Author(s):  
Anhar Risnumawan ◽  
Muhammad Ilham Perdana ◽  
Alif Habib Hidayatulloh ◽  
A. Khoirul Rizal ◽  
Indra Adji Sulistijono ◽  
...  

Searching for the accident site of a missing airplane is the primary step taken by the search and rescue team before rescuing the victims. However, the vast exploration area, lack of technology, absence of access roads, and rough terrain make the search process nontrivial and thus cause considerable delay in reaching the victims. Therefore, this paper aims to develop an automatic wrecked airplane detection system using visual information from aerial images, such as those taken by a UAV camera. A new deep network is proposed to robustly distinguish wrecked airplanes, which exhibit high pose, scale, and color variation and strong deformation. The network leverages its last layers, which capture more abstract and semantic information, for robust wrecked airplane detection, and it is extended by adding extra layers connected at its end. To reduce missed detections, which are critical in wrecked airplane search, each image is decomposed into five patches that are fed forward through the network in a convolutional manner. Experiments show that the proposed method reaches AP = 91.87%, and we believe it could bring many benefits to search and rescue teams by accelerating the search for wrecked airplanes and thus reducing the number of victims.
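A minimal sketch of how such a five-patch decomposition might look, assuming a four-corners-plus-center crop layout, since the abstract does not specify the exact scheme; the patch scale and overlap are illustrative assumptions:

```python
import numpy as np

def five_patches(image: np.ndarray, scale: float = 0.6) -> list:
    """Decompose an image into five overlapping patches (four
    corners plus center). The patch size (a fraction of the full
    frame) and the crop layout are illustrative assumptions."""
    h, w = image.shape[:2]
    ph, pw = int(h * scale), int(w * scale)
    return [
        image[:ph, :pw],          # top-left
        image[:ph, w - pw:],      # top-right
        image[h - ph:, :pw],      # bottom-left
        image[h - ph:, w - pw:],  # bottom-right
        image[(h - ph) // 2:(h + ph) // 2,
              (w - pw) // 2:(w + pw) // 2],  # center
    ]

# Each patch is then fed forward through the detector; the resulting
# boxes are mapped back to full-image coordinates and merged, e.g.,
# with non-maximum suppression.
```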

2021 ◽  
Vol 13 (23) ◽  
pp. 4903
Author(s):  
Tomasz Niedzielski ◽  
Mirosława Jurecka ◽  
Bartłomiej Miziński ◽  
Wojciech Pawul ◽  
Tomasz Motyl

Recent advances in search and rescue methods include the use of unmanned aerial vehicles (UAVs) to carry out aerial monitoring of terrain to spot lost individuals. To date, such searches have been conducted by human observers who view UAV-acquired videos or images. Alternatively, lost persons may be detected by automated algorithms. Although some algorithms are implemented in software to support search and rescue activities, no successful rescue case using automated human detectors has been reported thus far in the scientific literature. This paper presents a report from a search and rescue mission carried out by the Bieszczady Mountain Rescue Service near the village of Cergowa in SE Poland, where a 65-year-old man was rescued after being detected with SARUAV software. This software uses convolutional neural networks to automatically locate people in close-range nadir aerial images. The missing man, who suffered from Alzheimer's disease (as well as a stroke the previous day), spent more than 24 h in open terrain. SARUAV software was deployed to support the search, and its task was to process 782 nadir and near-nadir JPG images collected during four photogrammetric flights. After 4 h 31 min of analysis, the system successfully detected the missing person and provided his coordinates (uploading the 121 photos from the flight over the lost person, processing the images, and verifying the hits lasted 5 min 48 s). The presented case study proves that the use of a UAV assisted by SARUAV software may accelerate a search mission.


2021 ◽  
pp. 1-18
Author(s):  
R.S. Rampriya ◽  
Sabarinathan ◽  
R. Suganya

In the near future, the combination of UAVs (Unmanned Aerial Vehicles) and computer vision will play a vital role in periodically monitoring the condition of railroads to ensure passenger safety. The most significant module in railroad visual processing is obstacle detection, where the concern is an obstacle fallen near the track, inside or outside the gage. This makes it important to detect and segment the railroad into three key regions: gage inside, rails, and background. Traditional railroad segmentation methods depend on either manual feature selection or expensive dedicated devices such as Lidar, which is typically less reliable for railroad semantic segmentation. Moreover, cameras mounted on moving vehicles such as drones produce high-resolution images, and segmenting precise pixel information from these aerial images is challenging due to the chaotic railroad surroundings. RSNet is a multi-level feature fusion algorithm for segmenting railroad aerial images captured by UAV. It comprises an attention-based, robust, and computationally efficient convolutional encoder for feature extraction, and a modified residual decoder for segmentation that considers only essential features, producing less overhead and higher performance even on real-time railroad drone imagery. The network is trained and tested on a railroad scenic view segmentation dataset (RSSD), which we built from real UAV images, and achieves a Dice coefficient of 0.973 and a Jaccard index of 0.94 on test data, exhibiting better results than existing approaches such as the residual unit and residual squeeze net.
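For reference, the two reported metrics follow the standard definitions for binary masks; a generic sketch, not code from the paper:

```python
import numpy as np

def dice_and_jaccard(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7):
    """Standard Dice coefficient and Jaccard index for a binary
    segmentation mask; per-class masks (gage inside, rails,
    background) can be scored the same way."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
    jaccard = (inter + eps) / (union + eps)
    return dice, jaccard
```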


Author(s):  
Cai Luo ◽  
Andre Possani Espinosa ◽  
Danu Pranantha ◽  
Alessandro De Gloria

Author(s):  
S. Mikrut

UAV technology seems highly future-oriented due to its low cost compared with traditional aerial images taken from classical photogrammetric aircraft. The AGH University of Science and Technology in Cracow - Department of Geoinformation, Photogrammetry and Environmental Remote Sensing focuses mainly on the geometry and radiometry of recorded images. Various scientific research centres all over the world have been conducting relevant research for years. The paper presents selected aspects of processing digital images made with UAV technology. Using a practical example, it compares a digital image taken from an airborne (classical) height with one made from UAV level. In this research the author tries to answer the question: to what extent does UAV technology diverge today from classical photogrammetry, and what are the advantages and disadvantages of both methods? The flight plan was made over the Tokarnia Village Museum (more than 0.5 km²) for two separate flights: the first was made by a UAV, the System FT-03A built by FlyTech Solution Ltd.; the second with a classical photogrammetric Cessna aircraft equipped with an airborne photogrammetric camera (UltraCam Eagle). Both sets of photographs were taken with a pixel size of about 3 cm, in order to have reliable data for comparing the two systems. Aerotriangulation was carried out independently for the two flights, the DTM was generated automatically, and the last step was the generation of an orthophoto. The geometry of the images was checked during the aerotriangulation process; to compare the accuracy of the two flights, control and check points were used and RMSEs were calculated. The radiometry was checked visually and with the author's own feature-extraction algorithm (defining edges with subpixel accuracy). After initial pre-processing, the images were put together and shown side by side. Buildings and strips on the road were selected from the whole dataset for the comparison of edges and details. The details in the UAV images were no worse than those in the classical photogrammetric ones, and one might suppose that they were also geometrically correct; the aerotriangulation results confirm this. Final aerotriangulation results were at the level of RMS = 1 pixel (about 3 cm). In general, photographs from UAVs are no worse than classical ones. In the author's opinion, geometric and radiometric quality are at a similar level for this kind of area (a small village). This is a very significant result as regards mapping: it means that UAV data can be used in map production.
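A brief sketch of the check-point accuracy assessment mentioned above; the planimetric RMSE is the standard formula, while the array layout is an assumption of this illustration:

```python
import numpy as np

def check_point_rmse(adjusted_xy: np.ndarray, reference_xy: np.ndarray) -> float:
    """Planimetric RMSE over check points: adjusted_xy and
    reference_xy are (N, 2) arrays of coordinates from the block
    adjustment and from independent measurement, respectively."""
    residuals = adjusted_xy - reference_xy
    return float(np.sqrt(np.mean(np.sum(residuals**2, axis=1))))

# With a ground sampling distance of ~3 cm, an RMSE of ~3 cm
# corresponds to the 1-pixel level reported for both flights.
```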


Author(s):  
L. Madhuanand ◽  
F. Nex ◽  
M. Y. Yang

Abstract. Depth is an essential component for various scene understanding tasks and for reconstructing the 3D geometry of a scene. Estimating depth from stereo images requires multiple views of the same scene, which is often not possible when exploring new environments with a UAV. To overcome this, monocular depth estimation has become a topic of interest alongside recent advancements in computer vision and deep learning techniques. Research so far has largely focused on indoor scenes or outdoor scenes captured at ground level; single-image depth estimation from aerial images has been limited due to additional complexities arising from increased camera distance and wider area coverage with many occlusions. A new aerial image dataset is prepared specifically for this purpose, combining Unmanned Aerial Vehicle (UAV) images covering different regions, features, and points of view. The single-image depth estimation is based on image reconstruction techniques that use stereo images to learn to estimate depth from single images. Among the various available models for ground-level single-image depth estimation, two models, 1) a Convolutional Neural Network (CNN) and 2) a Generative Adversarial Network (GAN), are used to learn depth from UAV aerial images. These models generate pixel-wise disparity images that can be converted into depth information. The generated disparity maps are evaluated for internal quality using various error metrics. The results show higher disparity ranges and smoother images from the CNN model, and sharper images with a smaller disparity range from the GAN model. The produced disparity images are converted to depth information and compared with point clouds obtained using Pix4D. The CNN model is found to perform better than the GAN and to produce depth similar to that of Pix4D. This comparison helps in streamlining the efforts to produce depth from a single aerial image.
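The disparity-to-depth conversion mentioned above follows the standard stereo relation depth = focal length × baseline / disparity; a hedged sketch, with the default calibration values as placeholder assumptions rather than the paper's parameters:

```python
import numpy as np

def disparity_to_depth(disparity_px: np.ndarray,
                       focal_px: float = 3500.0,
                       baseline_m: float = 1.2) -> np.ndarray:
    """Standard stereo relation: depth = f * B / d. The default
    focal length (pixels) and baseline (metres) are placeholder
    assumptions, not the paper's calibration values."""
    d = np.where(disparity_px > 0, disparity_px, np.nan)  # mask invalid pixels
    return focal_px * baseline_m / d
```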


2021 ◽  
Vol 13 (18) ◽  
pp. 3787
Author(s):  
Carlo Iapige De Gaetani ◽  
Francesco Ioli ◽  
Livio Pinto

Alpine glaciers are strongly suffering the consequences of rising temperatures, and monitoring them over long periods is of particular interest for tracking climate change. A wide range of techniques can be successfully applied to survey and monitor glaciers at different spatial and temporal resolutions. However, going back in time to retrace the evolution of a glacier is still a challenging task. Historical aerial images, e.g., those acquired for regional cartographic purposes, are extremely valuable resources for studying the past evolution and movement of a glacier. This work analyzed the evolution of the Belvedere Glacier by means of structure-from-motion techniques applied to digitized historical aerial images combined with more recent digital surveys, either from aerial platforms or UAVs. This allowed an Alpine glacier to be monitored with high resolution and geometric accuracy over a long time span, covering the period 1977-2019. In this context, digital surface models of the area at different epochs were computed and jointly analyzed, retrieving the morphological dynamics of the Belvedere Glacier. Integrating datasets from earlier times with surveys carried out using more modern technologies exploits to its full potential information that might at first glance be considered obsolete, proving that historical photogrammetric datasets are a remarkable heritage for glaciological studies.
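A minimal sketch of the kind of DSM differencing used to retrieve morphological dynamics, assuming two co-registered elevation rasters on the same grid (variable names and the nodata convention are illustrative):

```python
import numpy as np

def elevation_change(dsm_t0: np.ndarray, dsm_t1: np.ndarray,
                     nodata: float = -9999.0) -> np.ndarray:
    """Per-cell elevation difference between two co-registered DSMs
    sharing the same grid; positive values indicate surface gain,
    negative values surface loss (e.g., ice melt or downwasting)."""
    valid = (dsm_t0 != nodata) & (dsm_t1 != nodata)
    return np.where(valid, dsm_t1 - dsm_t0, np.nan)

# Summing the valid differences times the cell area yields a
# volume-change estimate between the two survey epochs.
```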


2019 ◽  
Vol 11 (13) ◽  
pp. 1584 ◽  
Author(s):  
Yang Chen ◽  
Won Suk Lee ◽  
Hao Gan ◽  
Natalia Peres ◽  
Clyde Fraisse ◽  
...  

Strawberry growers in Florida suffer from a lack of efficient and accurate yield forecasts for strawberries, which would allow them to allocate optimal labor and equipment, as well as other resources, for harvesting, transportation, and marketing. Accurate estimation of the number of strawberry flowers and their distribution in a strawberry field is therefore imperative for predicting the coming strawberry yield. Usually, the number of flowers and their distribution are estimated manually, which is time-consuming, labor-intensive, and subjective. In this paper, we develop an automatic strawberry flower detection system for yield prediction with minimal labor and time costs. The system used a small unmanned aerial vehicle (UAV) (DJI Technology Co., Ltd., Shenzhen, China) equipped with an RGB (red, green, blue) camera to capture near-ground images of two varieties (Sensation and Radiance) at two different heights (2 m and 3 m) and built orthoimages of a 402 m² strawberry field. The orthoimages were automatically processed using the Pix4D software and split into sequential pieces for deep learning detection. A faster region-based convolutional neural network (R-CNN), a state-of-the-art deep neural network model, was chosen for the detection and counting of the number of flowers, mature strawberries, and immature strawberries. The mean average precision (mAP) was 0.83 for all detected objects at the 2 m height and 0.72 for all detected objects at the 3 m height. We adopted this model to count strawberry flowers in November and December from 2 m aerial images and compared the results with a manual count. The average deep learning counting accuracy was 84.1% with an average occlusion of 13.5%. Using this system could provide accurate counts of strawberry flowers, which can be used to forecast future yields and build distribution maps to help farmers observe the growth cycle of strawberry fields.
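The abstract does not define the counting-accuracy formula; one plausible reading, sketched here purely as an assumption, is one minus the relative error against the manual reference count:

```python
def counting_accuracy(detected: int, manual: int) -> float:
    """Counting accuracy as one minus the relative error against the
    manual reference count, in percent. The paper's exact definition
    is not given in the abstract; this formula is an assumption."""
    return (1.0 - abs(detected - manual) / manual) * 100.0

# e.g., 84 flowers detected vs. 100 counted manually -> 84.0
print(counting_accuracy(84, 100))
```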


2010 ◽  
Vol 19 (01) ◽  
pp. 173-189
Author(s):  
SEUNG-HUN YOO ◽  
CHANG-SUNG JEONG

The graphics processing unit (GPU) has emerged as a high-quality platform for computer vision systems. In this paper, we propose a straightforward system consisting of a registration method and a fusion method on the GPU, which generates good results at high speed compared with non-GPU-based systems. Our GPU-accelerated system adapts existing methods by porting them to the GPU platform. The registration method uses point correspondences to find a registering transformation, estimated with incremental parameters in a coarse-to-fine manner, while the fusion algorithm uses multi-scale methods to fuse the results from the registration stage. We evaluate performance by executing the same methods in both a CPU-only and a GPU-equipped environment. The experimental results present convincing evidence of the efficiency of our system, which is tested on pairs of aerial images taken by electro-optical and infrared sensors to provide visual information of a scene for environmental observatories.
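A generic sketch of multi-scale fusion in the spirit described above, using Laplacian pyramids with a max-magnitude rule; this is a common textbook scheme, not necessarily the paper's exact fusion rule, and OpenCV on the CPU stands in for the GPU implementation:

```python
import cv2
import numpy as np

def laplacian_fuse(a: np.ndarray, b: np.ndarray, levels: int = 4) -> np.ndarray:
    """Fuse two registered grayscale images (e.g., EO and IR) with
    Laplacian pyramids: keep the larger-magnitude detail coefficient
    at each level and average the coarsest residual level."""
    def lap_pyr(img):
        g = [img.astype(np.float32)]
        for _ in range(levels):
            g.append(cv2.pyrDown(g[-1]))
        # Detail levels plus the residual low-pass level.
        return [g[i] - cv2.pyrUp(g[i + 1], dstsize=g[i].shape[1::-1])
                for i in range(levels)] + [g[-1]]

    pa, pb = lap_pyr(a), lap_pyr(b)
    fused = [np.where(np.abs(la) > np.abs(lb), la, lb)
             for la, lb in zip(pa[:-1], pb[:-1])]
    fused.append(0.5 * (pa[-1] + pb[-1]))  # average the base level
    out = fused[-1]
    for detail in reversed(fused[:-1]):  # collapse the pyramid
        out = cv2.pyrUp(out, dstsize=detail.shape[1::-1]) + detail
    return np.clip(out, 0, 255).astype(np.uint8)
```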


Author(s):  
Marco Alésio Figueiredo Pereira ◽  
Bruno Lippo Barbieiro ◽  
Marciano Carneiro ◽  
Masato Kobiyama

The junction angles in fluvial channels are determined by complex erosion and deposition processes resulting from river-flow dynamics, bed and bank morphology, and so on. Knowledge of these angles is important for better understanding the existing conditions in a basin. The objective of the present study was therefore to determine the junction angles of fluvial channels, called α, β, and γ, by applying the law of cosines. Georeferenced Google Earth Pro images and UAV images were used, and the values calculated from the georeferenced aerial images were compared with the values calculated from the minimum-energy principle. To visualize and understand the obtained angles, the Junction Angles Diagram was used. The results show that the methodology using georeferenced aerial images performs well for determining junction angles in fluvial channels.
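The law-of-cosines computation is straightforward to reproduce; a minimal sketch, where the choice of points along each channel arm is an assumption of this illustration:

```python
import math

def junction_angle(junction, p_channel_1, p_channel_2):
    """Angle at a channel junction via the law of cosines:
    cos(alpha) = (b^2 + c^2 - a^2) / (2*b*c), where b and c are the
    distances from the junction to a point on each channel arm and
    a is the distance between those two points. Coordinates are
    assumed to be in a projected (metric) system, e.g., digitized
    from georeferenced imagery."""
    b = math.dist(junction, p_channel_1)
    c = math.dist(junction, p_channel_2)
    a = math.dist(p_channel_1, p_channel_2)
    return math.degrees(math.acos((b**2 + c**2 - a**2) / (2 * b * c)))

# e.g., a tributary meeting the main stem:
# junction_angle((0, 0), (0, 100), (80, 60))  # ~53.13 degrees
```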

