Deep Learning-Based Object Detection for Unmanned Aerial Systems (UASs)-Based Inspections of Construction Stormwater Practices

Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2834
Author(s):  
Billur Kazaz ◽  
Subhadipto Poddar ◽  
Saeed Arabi ◽  
Michael A. Perez ◽  
Anuj Sharma ◽  
...  

Construction activities typically create large amounts of ground disturbance, which can lead to increased rates of soil erosion. Construction stormwater practices are used on active jobsites to protect downstream waterbodies from offsite sediment transport. Federal and state regulations require routine pollution prevention inspections to ensure that temporary stormwater practices are in place and performing as intended. This study addresses the existing challenges and limitations of construction stormwater inspections and presents a unique approach for performing unmanned aerial system (UAS)-based inspections. Deep learning-based object detection principles were applied to identify and locate practices installed on active construction sites, and the system integrates a post-processing stage that clusters the detection results. The developed framework consists of data preparation with aerial inspections, model training, model validation, and accuracy testing. The model was trained on 800 aerial images and used to detect four different types of construction stormwater practices, achieving 100% mean average precision (mAP) with minimal false positive detections. Results indicate that object detection on UAS-acquired imagery offers a novel approach to construction stormwater inspections and provides accurate results for site plan comparisons by rapidly detecting the quantity and location of field-installed stormwater practices.
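The clustering post-processing step mentioned above could look something like the following minimal sketch, which assumes detections have already been projected into site coordinates and uses DBSCAN purely as an illustrative stand-in for the authors' clustering; the function and parameter names are hypothetical.

```python
# Hypothetical post-processing sketch: cluster per-image detections of the same
# practice type so overlapping UAS frames do not inflate the installed count.
# Assumes detections were already projected into site coordinates (metres).
from collections import Counter
import numpy as np
from sklearn.cluster import DBSCAN

def count_practices(detections, radius_m=3.0):
    """detections: list of (easting, northing, practice_label, score)."""
    counts = Counter()
    labels = {label for _, _, label, _ in detections}
    for label in labels:
        pts = np.array([[e, n] for e, n, l, _ in detections if l == label])
        clustering = DBSCAN(eps=radius_m, min_samples=1).fit(pts)
        counts[label] = len(set(clustering.labels_))
    return counts

# Example: two overlapping frames both detect the same silt fence ~1 m apart,
# so it is counted once rather than twice.
dets = [(100.0, 200.0, "silt_fence", 0.91), (100.8, 200.3, "silt_fence", 0.88),
        (150.0, 240.0, "inlet_protection", 0.95)]
print(count_practices(dets))
```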

AI ◽  
2020 ◽  
Vol 1 (2) ◽  
pp. 166-179 ◽  
Author(s):  
Ziyang Tang ◽  
Xiang Liu ◽  
Hanlin Chen ◽  
Joseph Hupy ◽  
Baijian Yang

Unmanned Aerial Systems, hereafter referred to as UAS, are of great use in hazard events such as wildfire due to their ability to provide high-resolution video imagery over areas deemed too dangerous for manned aircraft and ground crews. This aerial perspective allows for identification of ground-based hazards such as spot fires and fire lines, and for communicating this information to firefighting crews. Current technology relies on visual interpretation of UAS imagery, with little to no computer-assisted automatic detection. With the help of big labeled data and the significant increase of computing power, deep learning has seen great success in detecting objects with fixed patterns, such as people and vehicles. However, little has been done for objects, such as spot fires, with amorphous and irregular shapes. Additional challenges arise when data are collected via UAS as high-resolution aerial images or videos; a viable solution must provide reasonable accuracy with low delays. In this paper, we examined 4K (3840 × 2160) videos collected by UAS from a controlled burn and created a set of labeled video sets to be shared for public use. We introduce a coarse-to-fine framework to auto-detect wildfires that are sparse, small, and irregularly shaped. The coarse detector adaptively selects the sub-regions that are likely to contain the objects of interest, while the fine detector processes only those sub-regions, rather than the entire 4K frame, for further scrutiny. The proposed two-phase learning therefore greatly reduces time overhead and is capable of maintaining high accuracy. Compared against the real-time one-stage YoloV3 object detection backbone, the proposed method improved the mean average precision (mAP) from 0.29 to 0.67, with an average inference speed of 7.44 frames per second. Limitations and future work are discussed with regard to the design and the experiment results.
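The coarse-to-fine idea, a cheap coarse pass that screens sub-regions of the 4K frame followed by a fine detector over only the promising tiles, can be sketched as below; `coarse_score` and `fine_detector` are placeholder callables standing in for the two networks, not the authors' implementations.

```python
# Illustrative coarse-to-fine tiling for 4K frames (3840x2160): the coarse stage
# scores tiles, and only promising tiles are passed to the heavier fine
# detector. Tile size and threshold are assumed values for illustration.
import numpy as np

def coarse_to_fine(frame, coarse_score, fine_detector,
                   tile=(540, 960), threshold=0.5):
    h, w = frame.shape[:2]
    th, tw = tile
    detections = []
    for y in range(0, h, th):
        for x in range(0, w, tw):
            crop = frame[y:y + th, x:x + tw]
            if coarse_score(crop) < threshold:
                continue                      # skip background-only tiles
            for (bx, by, bw, bh, conf) in fine_detector(crop):
                # shift tile-local boxes back into full-frame coordinates
                detections.append((bx + x, by + y, bw, bh, conf))
    return detections
```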


2021 ◽  
Author(s):  
Sujata Butte ◽  
Aleksandar Vakanski ◽  
Kasia Duellman ◽  
Haotian Wang ◽  
Amin Mirkouei

Electronics ◽  
2020 ◽  
Vol 9 (4) ◽  
pp. 583 ◽  
Author(s):  
Khang Nguyen ◽  
Nhut T. Huynh ◽  
Phat C. Nguyen ◽  
Khanh-Duy Nguyen ◽  
Nguyen D. Vo ◽  
...  

Unmanned aircraft systems, or drones, can capture many scenes from a bird's-eye view and have been rapidly deployed across a wide range of practical domains, e.g., agriculture, aerial photography, fast delivery, and surveillance. Object detection is one of the core steps in understanding videos collected from drones. However, this task is very challenging due to the unconstrained viewpoints and low resolution of captured videos. While modern deep-learning object detectors have recently achieved great success on general benchmarks, e.g., PASCAL-VOC and MS-COCO, the robustness of these detectors on aerial images captured by drones is not well studied. In this paper, we present an evaluation of state-of-the-art deep-learning detectors, including Faster R-CNN (Faster Regional CNN), RFCN (Region-based Fully Convolutional Networks), SNIPER (Scale Normalization for Image Pyramids with Efficient Resampling), Single-Shot Detector (SSD), YOLO (You Only Look Once), RetinaNet, and CenterNet, for object detection in videos captured by drones. We conduct experiments on the VisDrone2019 dataset, which contains 96 videos with 39,988 annotated frames, and provide insights into efficient object detectors for aerial images.
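As a rough illustration of the inference loop such a benchmark involves, the sketch below runs a COCO-pretrained Faster R-CNN from torchvision over individual drone frames; in practice the evaluated detectors would be trained or fine-tuned on the VisDrone2019 categories, so this is only an assumed stand-in, not the paper's setup.

```python
# Minimal inference-loop sketch for benchmarking a detector on drone frames.
# The COCO-pretrained torchvision model is a placeholder for the detectors
# compared in the paper.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True).eval()

@torch.no_grad()
def detect(image_path, score_thresh=0.5):
    img = to_tensor(Image.open(image_path).convert("RGB"))
    out = model([img])[0]                     # dict with boxes, labels, scores
    keep = out["scores"] > score_thresh
    return out["boxes"][keep], out["labels"][keep], out["scores"][keep]
```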


Author(s):  
Jiajia Liao ◽  
Yujun Liu ◽  
Yingchao Piao ◽  
Jinhe Su ◽  
Guorong Cai ◽  
...  

Recent advances in camera-equipped drone applications have increased the demand for deep learning-based visual object detection algorithms for aerial images. A single deep learning model has limited accuracy. Inspired by the observation that ensemble learning can significantly improve a model's generalization ability, we introduce a novel integration strategy that combines the inference results of two different methods without non-maximum suppression. In this paper, a global and local ensemble network (GLE-Net) is proposed to increase the quality of predictions by considering global weights for different models and adjusting local weights for bounding boxes. Specifically, the global module assigns different weights to models. In the local module, bounding boxes corresponding to the same object are grouped into a cluster; each cluster generates a final predicted box, whose score is the highest score in the cluster. Experiments on the VisDrone2019 benchmark show the promising performance of GLE-Net compared with the baseline network.
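A minimal sketch of the local fusion idea, grouping overlapping boxes from different detectors into clusters and keeping one representative per cluster scored by the cluster maximum, might look as follows; the greedy IoU grouping and the threshold are assumptions for illustration, not the GLE-Net implementation.

```python
# Illustrative box fusion without non-maximum suppression: overlapping boxes
# from multiple detectors are clustered by IoU, and each cluster keeps the box
# of its highest-scoring member.
def iou(a, b):
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def fuse(boxes_scores, iou_thresh=0.55):
    """boxes_scores: list of ((x1, y1, x2, y2), score) pooled from all models."""
    clusters = []
    for box, score in sorted(boxes_scores, key=lambda bs: -bs[1]):
        for cluster in clusters:
            if iou(box, cluster[0][0]) >= iou_thresh:
                cluster.append((box, score))
                break
        else:
            clusters.append([(box, score)])
    # one representative per cluster: the highest-scoring member's box and score
    return [max(c, key=lambda bs: bs[1]) for c in clusters]
```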


2021 ◽  
Author(s):  
Mirela Beloiu ◽  
Dimitris Poursanidis ◽  
Samuel Hoffmann ◽  
Nektarios Chrysoulakis ◽  
Carl Beierkuhnlein

Recent advances in deep learning techniques for object detection and the availability of high-resolution images facilitate the analysis of both temporal and spatial vegetation patterns in remote areas. High-resolution satellite imagery has been used successfully to detect trees in small areas with homogeneous rather than heterogeneous forests, in which single tree species have a strong contrast compared to their neighbors and landscape. However, no research to date has detected trees at the treeline in the remote and complex heterogeneous landscape of Greece using deep learning methods. We integrated high-resolution aerial images, climate data, and topographical characteristics to study treeline dynamics over 70 years in the Samaria National Park on the Mediterranean island of Crete, Greece. We combined mapping techniques with deep learning approaches to detect and analyze spatio-temporal dynamics in treeline position and tree density. We used visual image interpretation to detect single trees on high-resolution aerial imagery from 1945, 2008, and 2015. Using the RGB aerial images from 2008 and 2015, we tested a Convolutional Neural Network (CNN)-based object detection approach (SSD) and a CNN-based segmentation technique (U-Net). Based on the mapping and deep learning approach, we did not detect a shift in treeline elevation over the last 70 years, despite warming, although tree density has increased. However, we show that the CNN approach accurately detects and maps tree position and density at the treeline. We also reveal that treeline elevation on Crete varies with topography, decreasing from the southern to the northern study sites. We explain these differences between study sites by the long-term interaction between topographical characteristics and meteorological factors. The study highlights the feasibility of using deep learning and high-resolution imagery as a promising technique for monitoring forests in remote areas.
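As a hedged illustration only, a segmentation-based tree count on tiled aerial imagery could be organized as below; `segment` stands in for a trained U-Net and is an assumption, not the authors' model or workflow.

```python
# Illustrative crown counting from a CNN segmentation mask on tiled imagery:
# tile the scene, threshold the per-tile mask, then count connected components.
import numpy as np
from scipy import ndimage

def count_trees(image, segment, patch=512, threshold=0.5):
    h, w = image.shape[:2]
    mask = np.zeros((h, w), dtype=bool)
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            tile = image[y:y + patch, x:x + patch]
            mask[y:y + patch, x:x + patch] = segment(tile) > threshold
    labeled, n_crowns = ndimage.label(mask)   # connected components = crowns
    return n_crowns
```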


2017 ◽  
Author(s):  
Lars W. Sommer ◽  
Tobias Schuchert ◽  
Jürgen Beyerer

2021 ◽  
Vol 35 (2) ◽  
pp. 04020061
Author(s):  
Tanzim Nasiruddin Khilji ◽  
Luana Lopes Amaral Loures ◽  
Ehsan Rezazadeh Azar

Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 604
Author(s):  
Carlos A. M. Correia ◽  
Fabio A. A. Andrade ◽  
Agnar Sivertsen ◽  
Ihannah Pinto Guedes ◽  
Milena Faria Pinto ◽  
...  

Optical image sensors are the most common remote sensing data acquisition devices present in Unmanned Aerial Systems (UAS). In this context, assigning a location in a geographic frame of reference to the acquired image is a necessary task in the majority of applications. This process is called direct georeferencing when ground control points are not used. Although it relies on simple mathematical fundamentals, the complete direct georeferencing process involves a great deal of information, such as camera sensor characteristics, mounting measurements, and the attitude and position of the UAS, among others. In addition, there are many rotations and translations between the different reference frames, among many other details, which makes the whole process a considerably complex operation. Another problem is that manufacturers and software tools may use different reference frames, posing additional difficulty when implementing direct georeferencing. As this information is spread among many sources, researchers may face difficulties in gaining a complete view of the method; in fact, no paper in the literature explains this process in a comprehensive way. To address this gap, this paper presents a comprehensive method for direct georeferencing of aerial images acquired by cameras mounted on UAS, where all required information, mathematical operations, and implementation steps are explained in detail. Finally, to show the practical use of the method and to demonstrate its accuracy, both simulated and real flights were performed, in which objects in the acquired images were georeferenced.
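A simplified worked example of the projection chain, under a flat-ground assumption and with illustrative conventions (pinhole intrinsics, Z-Y-X Euler angles, local NED coordinates), is sketched below; lever arms, gimbal angles, and manufacturer-specific frames are omitted, so this is not the paper's full method.

```python
# Simplified direct-georeferencing sketch under a flat-ground assumption:
# project an image pixel to local NED ground coordinates using the pinhole
# model and the UAS attitude/position. Axis conventions vary between
# manufacturers; treat this only as an illustration of the rotation chain.
import numpy as np

def rot_zyx(yaw, pitch, roll):
    """Body-to-NED rotation from Euler angles (radians), Z-Y-X convention."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def georeference(pixel, K, attitude, position_ned):
    """pixel: (u, v); K: 3x3 camera intrinsics; attitude: (yaw, pitch, roll)
    in radians; position_ned: (north, east, down) of the camera, with down < 0
    above the ground plane. Returns the (north, east) ground intersection."""
    u, v = pixel
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray in camera frame
    # assumed mapping from camera axes (x right, y down, z forward)
    # to body axes (x forward, y right, z down)
    cam_to_body = np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0]], dtype=float)
    ray_ned = rot_zyx(*attitude) @ cam_to_body @ ray_cam
    n0, e0, d0 = position_ned
    t = -d0 / ray_ned[2]          # scale so the ray reaches ground (down = 0)
    return n0 + t * ray_ned[0], e0 + t * ray_ned[1]
```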

