Deep Learning Based Wildfire Event Object Detection from 4K Aerial Images Acquired by UAS

AI ◽  
2020 ◽  
Vol 1 (2) ◽  
pp. 166-179 ◽  
Author(s):  
Ziyang Tang ◽  
Xiang Liu ◽  
Hanlin Chen ◽  
Joseph Hupy ◽  
Baijian Yang

Unmanned Aerial Systems, hereafter referred to as UAS, are of great use in hazard events such as wildfires due to their ability to provide high-resolution video imagery over areas deemed too dangerous for manned aircraft and ground crews. This aerial perspective allows ground-based hazards such as spot fires and fire lines to be identified and communicated to firefighting crews. Current technology relies on visual interpretation of UAS imagery, with little to no computer-assisted automatic detection. With the help of large labeled datasets and a significant increase in computing power, deep learning has seen great success in detecting objects with fixed patterns, such as people and vehicles. However, little has been done for objects with amorphous and irregular shapes, such as spot fires. Additional challenges arise when data are collected via UAS as high-resolution aerial images or videos; a viable solution must provide reasonable accuracy with low latency. In this paper, we examined 4K (3840 × 2160) videos collected by UAS from a controlled burn and created a set of labeled videos to be shared for public use. We introduce a coarse-to-fine framework to automatically detect wildfires that are sparse, small, and irregularly shaped. The coarse detector adaptively selects the sub-regions that are likely to contain the objects of interest, while the fine detector examines only those sub-regions in detail, rather than the entire 4K frame. The proposed two-phase learning therefore greatly reduces time overhead while maintaining high accuracy. Compared against the real-time one-stage YOLOv3 backbone, the proposed method improved the mean average precision (mAP) from 0.29 to 0.67, with an average inference speed of 7.44 frames per second. Limitations and future work are discussed with regard to the design and the experimental results.
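The coarse-to-fine idea described above can be sketched as a two-stage pass over tiles of a 4K frame: a cheap coarse scorer decides which tiles are worth inspecting, and the expensive fine detector runs only on those. This is a minimal illustration with hypothetical `coarse_model` and `fine_model` callables, not the authors' implementation.

```python
import numpy as np

def coarse_to_fine_detect(frame, coarse_model, fine_model, tile=640, score_thr=0.5):
    """Run a cheap coarse scorer per tile, then run the fine detector
    only on tiles flagged as likely to contain fire."""
    h, w = frame.shape[:2]
    detections = []
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = frame[y:y + tile, x:x + tile]
            # Coarse stage: a fast classifier scoring "fire likely" per tile.
            if coarse_model(patch) < score_thr:
                continue
            # Fine stage: full detector only on the selected sub-region;
            # box coordinates are shifted back into frame coordinates.
            for (bx, by, bw, bh, s) in fine_model(patch):
                detections.append((x + bx, y + by, bw, bh, s))
    return detections
```

Because most tiles of a sparse wildfire scene are rejected by the coarse stage, the fine detector sees only a fraction of the 4K pixels, which is where the reported speed-up comes from.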

Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2834
Author(s):  
Billur Kazaz ◽  
Subhadipto Poddar ◽  
Saeed Arabi ◽  
Michael A. Perez ◽  
Anuj Sharma ◽  
...  

Construction activities typically create large amounts of ground disturbance, which can lead to increased rates of soil erosion. Construction stormwater practices are used on active jobsites to protect downstream waterbodies from offsite sediment transport. Federal and state regulations require routine pollution prevention inspections to ensure that temporary stormwater practices are in place and performing as intended. This study addresses the existing challenges and limitations in construction stormwater inspections and presents a unique approach for performing unmanned aerial system (UAS)-based inspections. Deep learning-based object detection principles were applied to identify and locate practices installed on active construction sites. The system integrates a post-processing stage that clusters the detection results. The developed framework consists of data preparation with aerial inspections, model training, validation of the model, and testing for accuracy. The developed model was created from 800 aerial images and detected four different types of construction stormwater practices at 100% mean average precision (mAP) with minimal false positive detections. Results indicate that object detection could be implemented on UAS-acquired imagery as a novel approach to construction stormwater inspections and provide accurate results for site plan comparisons by rapidly detecting the quantity and location of field-installed stormwater practices.
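The clustering post-processing mentioned above can be illustrated by merging detections of the same stormwater practice seen in overlapping aerial images: nearby detection centres are grouped and reduced to one representative location. The greedy grouping and the distance threshold below are assumptions for illustration, not the paper's exact method.

```python
def cluster_detections(points, max_dist=5.0):
    """Greedy clustering of (x, y) detection centres: a point joins the
    first cluster containing a neighbour within max_dist."""
    clusters = []
    for p in points:
        for c in clusters:
            if any((p[0] - q[0])**2 + (p[1] - q[1])**2 <= max_dist**2 for q in c):
                c.append(p)
                break
        else:
            clusters.append([p])
    # One representative location per practice: the cluster centroid.
    return [(sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
            for c in clusters]
```

Collapsing duplicate detections this way is what makes the detected quantity of practices comparable against the site plan.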


2021 ◽  
Author(s):  
Mirela Beloiu ◽  
Dimitris Poursanidis ◽  
Samuel Hoffmann ◽  
Nektarios Chrysoulakis ◽  
Carl Beierkuhnlein

Recent advances in deep learning techniques for object detection and the availability of high-resolution images facilitate the analysis of both temporal and spatial vegetation patterns in remote areas. High-resolution satellite imagery has been used successfully to detect trees in small areas with homogeneous rather than heterogeneous forests, in which single tree species have a strong contrast compared to their neighbors and landscape. However, no research to date has detected trees at the treeline in the remote and complex heterogeneous landscape of Greece using deep learning methods. We integrated high-resolution aerial images, climate data, and topographical characteristics to study treeline dynamics over 70 years in the Samaria National Park on the Mediterranean island of Crete, Greece. We combined mapping techniques with deep learning approaches to detect and analyze spatio-temporal dynamics in treeline position and tree density. We used visual image interpretation to detect single trees on high-resolution aerial imagery from 1945, 2008, and 2015. Using the RGB aerial images from 2008 and 2015, we tested a Convolutional Neural Network (CNN) object detection approach (SSD) and a CNN-based segmentation technique (U-Net). Based on the mapping and deep learning approach, we did not detect a shift in treeline elevation over the last 70 years, despite warming, although tree density has increased. However, we show that the CNN approach accurately detects and maps tree position and density at the treeline. We also reveal that the treeline elevation on Crete varies with topography, decreasing from the southern to the northern study sites. We explain these differences between study sites by the long-term interaction between topographical characteristics and meteorological factors. The study highlights the feasibility of using deep learning and high-resolution imagery as a promising technique for monitoring forests in remote areas.


1989 ◽  
Author(s):  
Mohan M. Trivedi ◽  
Amol G. Bokil ◽  
Mourad B. Takla ◽  
George B. Maksymonko ◽  
J. Thomas Broach

2021 ◽  
Author(s):  
Sujata Butte ◽  
Aleksandar Vakanski ◽  
Kasia Duellman ◽  
Haotian Wang ◽  
Amin Mirkouei

Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3341 ◽  
Author(s):  
Hilal Tayara ◽  
Kil Chong

Object detection in very high-resolution (VHR) aerial images is an essential step for a wide range of applications such as military applications, urban planning, and environmental management. Still, it is a challenging task due to the different scales and appearances of the objects. On the other hand, object detection in VHR aerial images has improved remarkably in recent years due to advances in convolutional neural networks (CNNs). Most of the proposed methods depend on a two-stage approach, namely a region proposal stage and a classification stage, such as Faster R-CNN. Even though two-stage approaches outperform the traditional methods, their optimization is not easy and they are not suitable for real-time applications. In this paper, a uniform one-stage model for object detection in VHR aerial images is proposed. In order to tackle the challenge of different scales, a densely connected feature pyramid network is proposed, by which high-level multi-scale semantic feature maps with high-quality information are prepared for object detection. This work has been evaluated on two publicly available datasets and outperformed the current state-of-the-art results on both in terms of mean average precision (mAP) and computation time.
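The densely connected pyramid idea can be sketched as follows: each fine pyramid level fuses upsampled features from every coarser level, not just the adjacent one. This is an illustrative NumPy sketch of the dense-connection pattern, not the paper's exact architecture (which learns the fusion with convolutions).

```python
import numpy as np

def upsample2x(fmap):
    """Nearest-neighbour 2x upsampling of an (H, W, C) feature map."""
    return fmap.repeat(2, axis=0).repeat(2, axis=1)

def dense_pyramid(features):
    """features: list of (H, W, C) maps, finest first, each level half the
    resolution of the previous. Returns fused maps at every level, where
    level i receives dense skip connections from all coarser levels j > i."""
    fused = []
    for i, f in enumerate(features):
        acc = f.copy()
        for j in range(i + 1, len(features)):
            g = features[j]
            for _ in range(j - i):      # upsample coarser map to level i's size
                g = upsample2x(g)
            acc = acc + g               # dense skip connection
        fused.append(acc)
    return fused
```

The dense connections are what let every detection scale see high-level semantics from all coarser resolutions, which is how the method addresses objects of very different sizes.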


Electronics ◽  
2020 ◽  
Vol 9 (4) ◽  
pp. 583 ◽  
Author(s):  
Khang Nguyen ◽  
Nhut T. Huynh ◽  
Phat C. Nguyen ◽  
Khanh-Duy Nguyen ◽  
Nguyen D. Vo ◽  
...  

Unmanned aircraft systems, or drones, enable us to record or capture many scenes from a bird's-eye view, and they have been rapidly deployed in a wide range of practical domains, e.g., agriculture, aerial photography, fast delivery, and surveillance. Object detection is one of the core steps in understanding videos collected from drones. However, this task is very challenging due to the unconstrained viewpoints and low resolution of captured videos. While modern deep-learning object detectors have recently achieved great success on general benchmarks, e.g., PASCAL VOC and MS COCO, the robustness of these detectors on aerial images captured by drones is not well studied. In this paper, we present an evaluation of state-of-the-art deep-learning detectors including Faster R-CNN (Faster Regional CNN), R-FCN (Region-based Fully Convolutional Networks), SNIPER (Scale Normalization for Image Pyramids with Efficient Resampling), Single-Shot Detector (SSD), YOLO (You Only Look Once), RetinaNet, and CenterNet for object detection in videos captured by drones. We conduct experiments on the VisDrone2019 dataset, which contains 96 videos with 39,988 annotated frames, and provide insights into efficient object detectors for aerial images.
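At the core of any such detector benchmark is matching predictions to ground-truth boxes by intersection-over-union (IoU), from which precision/recall and mAP are derived. A minimal sketch of the IoU computation (a generic formulation, not tied to any one of the evaluated frameworks):

```python
def iou(box_a, box_b):
    """Boxes as (x1, y1, x2, y2) with x2 > x1, y2 > y1.
    Returns intersection area divided by union area."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0
```

A prediction typically counts as a true positive when its IoU with an unmatched ground-truth box exceeds a threshold such as 0.5, which is the convention behind the mAP figures these benchmarks report.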


2015 ◽  
Vol 3 (2) ◽  
pp. 58-67 ◽  
Author(s):  
Jan Rudolf Karl Lehmann ◽  
Keturah Zoe Smithson ◽  
Torsten Prinz

Remote sensing techniques have become an increasingly important tool for surveying archaeological sites. However, budgeting issues in archaeological research often limit the application of satellite or airborne imagery. Unmanned aerial systems (UAS) provide a flexible, quick, and more economical alternative to commonly used remote sensing techniques. In this study, the buried features of the archaeological site of the Kleinburlo monastery, near Münster, Germany, were identified using high-resolution color-infrared (CIR) images collected from a UAS platform. Based on these CIR images, a modified normalized difference vegetation index (NDVIblue) was calculated, showing reflectance spectra of vegetation anomalies caused by water stress. In the presented study, the vegetation growing on top of the buried walls was better nourished than the surrounding plants, because very wet conditions in the days prior to data collection caused higher levels of water stress in the surrounding water-drenched land. This difference in water stress was a good indicator for detecting archaeological remains.
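As a sketch of the index described above: standard NDVI contrasts near-infrared (NIR) against red reflectance, (NIR − Red)/(NIR + Red); the NDVIblue variant is assumed here to substitute the blue band, which the abstract does not spell out, so the exact formulation is an assumption.

```python
import numpy as np

def ndvi_blue(nir, blue, eps=1e-9):
    """Per-pixel modified NDVI from NIR and blue reflectance arrays:
    (NIR - Blue) / (NIR + Blue). eps guards against division by zero."""
    nir = nir.astype(float)
    blue = blue.astype(float)
    return (nir - blue) / (nir + blue + eps)
```

Water-stressed vegetation reflects less NIR, so pixels over the better-nourished plants above the buried walls would stand out with higher index values than their stressed surroundings.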


2020 ◽  
Vol 12 (10) ◽  
pp. 1549 ◽  
Author(s):  
Jamie L. Dyer ◽  
Robert J. Moorhead ◽  
Lee Hathcock

The need for accurate and spatially detailed hydrologic information is critical due to the microscale influences on the severity and distribution of flooding, and new and/or updated approaches to observing river systems are required that are in line with the current push towards microscale numerical simulations. In response, the aim of this project is to define and illustrate the hydrologic response of river flooding relative to microscale surface properties by using an unmanned aerial system (UAS) with dedicated imaging, sensor, and communication packages for data collection. As part of a larger project focused on increasing situational awareness during flood events, a fixed-wing UAS was used to overfly areas near Greenwood, MS before and during a flood event in February 2019 to provide high-resolution visible and infrared imagery for analysis of hydrologic features. The imagery obtained from these missions provides direct examples of fine-scale surface features that can alter water level and discharge, such as built structures (i.e., levees and bridges), natural storage features (low-lying agricultural fields), and areas of natural resistance (inundated forests). This type of information is critical in defining where and how to incorporate high-resolution information into hydrologic models and also provides an invaluable dataset for eventual verification of hydrologic simulations through inundation mapping.


Sensors ◽  
2018 ◽  
Vol 18 (8) ◽  
pp. 2484 ◽  
Author(s):  
Weixing Zhang ◽  
Chandi Witharana ◽  
Weidong Li ◽  
Chuanrong Zhang ◽  
Xiaojiang Li ◽  
...  

Traditional methods of detecting and mapping utility poles are inefficient and costly because of the demand for visual interpretation with quality data sources or intense field inspection. The advent of deep learning for object detection provides an opportunity for detecting utility poles from side-view optical images. In this study, we proposed a deep learning-based method for automatically mapping roadside utility poles with crossarms (UPCs) from Google Street View (GSV) images. The method combines a state-of-the-art deep learning object detection algorithm (RetinaNet) with a modified brute-force-based line-of-bearing (LOB) measurement method to estimate the locations of roadside UPCs detected in GSV; a LOB is the ray from the sensor location (the GSV mobile platform) toward the location of the target (here, a UPC). Experimental results indicate that: (1) both the average precision (AP) and the overall accuracy (OA) are around 0.78 when the intersection-over-union (IoU) threshold is greater than 0.3, based on the testing of 500 GSV images with a total of 937 objects; and (2) around 2.6%, 47%, and 79% of estimated utility pole locations are within 1 m, 5 m, and 10 m buffer zones, respectively, around the referenced locations of the utility poles. In general, this study indicates that even against a complex background, most utility poles can be detected with the use of deep learning, and the LOB measurement method can estimate the locations of most UPCs.
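The geometric core of the line-of-bearing idea can be sketched as intersecting two rays: each capture gives a ray from a known camera position toward the detected pole, and two rays from different positions fix the pole's location. This is a generic two-ray intersection on a flat plane, not the paper's exact brute-force procedure.

```python
import math

def lob_intersection(p1, bearing1, p2, bearing2):
    """p1, p2: (x, y) sensor positions; bearings in radians (0 = +x axis).
    Returns the intersection point of the two rays, or None if the
    bearings are parallel and no fix is possible."""
    d1 = (math.cos(bearing1), math.sin(bearing1))
    d2 = (math.cos(bearing2), math.sin(bearing2))
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-12:
        return None  # parallel lines of bearing: no intersection
    # Solve p1 + t*d1 = p2 + s*d2 for t by Cramer's rule.
    t = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])
```

With more than two bearings per pole, the pairwise intersections can be averaged (or the closest point to all rays found by least squares), which is presumably where the reported 1 m / 5 m / 10 m positional error bands come from.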

