aerial image
Recently Published Documents


TOTAL DOCUMENTS: 1076 (five years: 375)

H-INDEX: 30 (five years: 8)

Drones, 2022, Vol. 6 (1), pp. 19
Author(s): Mirela Kundid Vasić, Vladan Papić

Recent results in person detection using deep learning methods applied to aerial images gathered by Unmanned Aerial Vehicles (UAVs) have demonstrated the applicability of this approach in scenarios such as Search and Rescue (SAR) operations. In this paper, the continuation of our previous research is presented. The main goal is to further improve detection results, especially by reducing the number of false positive detections and consequently increasing the precision value. We present a new approach that uses sequences of consecutive images, instead of a single static image, as input to the multimodel neural network architecture. Since successive images overlap, the same object of interest should be detected in more than one image. The correlation between successive images was calculated, and detected regions in one image were translated to the other images based on the displacement vector. The assumption is that an object detected in more than one image has a higher probability of being a true positive detection, because it is unlikely that the detection model will produce the same false positive detections in multiple images. Based on this information, three different algorithms for rejecting detections, and for adding detections from one image to the other images in the sequence, are proposed. All of them achieved a precision of about 80%, an increase of almost 20% over current state-of-the-art methods.
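The sequence-based filtering described in this abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: it assumes phase correlation for the inter-frame displacement estimate and a simple IoU match for confirming that a detection reappears in the next frame.

```python
import numpy as np

def estimate_shift(img_a, img_b):
    """Estimate the (dy, dx) translation from frame A to frame B via phase correlation."""
    f_a = np.fft.fft2(img_a)
    f_b = np.fft.fft2(img_b)
    cross = np.conj(f_a) * f_b
    cross /= np.abs(cross) + 1e-9          # normalize to keep only phase
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the image size to negative offsets
    if dy > img_a.shape[0] // 2:
        dy -= img_a.shape[0]
    if dx > img_a.shape[1] // 2:
        dx -= img_a.shape[1]
    return int(dy), int(dx)

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def confirm_detections(dets_a, dets_b, shift, iou_thr=0.5):
    """Keep detections in frame A that reappear in frame B after applying the shift."""
    dy, dx = shift
    confirmed = []
    for box in dets_a:
        moved = (box[0] + dx, box[1] + dy, box[2] + dx, box[3] + dy)
        if any(iou(moved, other) >= iou_thr for other in dets_b):
            confirmed.append(box)
    return confirmed
```

Rejecting any detection that is not confirmed in an overlapping neighbor image is the simplest of the voting schemes the paper alludes to; the inverse operation (copying a confirmed box into a frame where it was missed) follows the same displacement arithmetic.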


Sensors, 2022, Vol. 22 (2), pp. 464
Author(s): Upesh Nepal, Hossein Eslamiat

In-flight system failure is one of the major safety concerns in the operation of unmanned aerial vehicles (UAVs) in urban environments. To address this concern, a safety framework consisting of the following three main tasks can be utilized: (1) monitoring the health of the UAV and detecting failures, (2) finding potential safe landing spots if a critical failure is detected in step 1, and (3) steering the UAV to a safe landing spot found in step 2. In this paper, we focus on the second task and investigate the feasibility of using object detection methods to find safe landing spots in case the UAV suffers an in-flight failure. In particular, we investigate different versions of the YOLO object detection method and compare their performance for the specific application of detecting a safe landing location for a UAV that has suffered an in-flight failure. We compare the performance of YOLOv3, YOLOv4, and YOLOv5l, training them on the large aerial image dataset DOTA on both a personal computer (PC) and a companion computer (CC). We plan to run the chosen algorithm on a CC that can be attached to a UAV, and the PC is used to verify the trends that we see between the algorithms on the CC. We confirm the feasibility of using these algorithms for effective emergency landing spot detection and report their accuracy and speed for this specific application. Our investigation also shows that YOLOv5l outperforms YOLOv4 and YOLOv3 in detection accuracy while maintaining a slightly slower inference speed.
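A comparison of this kind implies a harness that scores each detector for both accuracy and speed. The sketch below is purely illustrative of that harness, not the paper's experiments (which ran trained YOLO networks on DOTA): `detect` is a stand-in callable, and accuracy is measured as recall at a fixed IoU threshold.

```python
import time

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def evaluate_detector(detect, dataset, iou_thr=0.5):
    """Score a detector on a list of (image, ground_truth_boxes) pairs.
    Returns (recall at the IoU threshold, mean inference seconds per image)."""
    hits, total, elapsed = 0, 0, 0.0
    for image, gt_boxes in dataset:
        t0 = time.perf_counter()
        preds = detect(image)
        elapsed += time.perf_counter() - t0
        for gt in gt_boxes:
            total += 1
            if any(iou(gt, p) >= iou_thr for p in preds):
                hits += 1
    return hits / max(total, 1), elapsed / max(len(dataset), 1)
```

Running the same harness on the PC and on the CC, as the paper does, is what exposes the accuracy-versus-inference-speed trade-off between the YOLO versions.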


2021, Vol. 12
Author(s): Rebecca E. Rhodes, Hannah P. Cowley, Jay G. Huang, William Gray-Roncal, Brock A. Wester, et al.

Aerial images are frequently used in geospatial analysis to inform responses to crises and disasters but can pose unique challenges for visual search due to their low resolution, degraded color information, and small object sizes. Aerial image analysis is often performed by humans, but machine learning approaches are being developed to complement manual analysis. To date, however, relatively little work has explored how humans perform visual search on these tasks, and understanding this could ultimately help enable human-machine teaming. We designed a set of studies to understand what features of an aerial image make visual search difficult for humans and what strategies humans use when performing these tasks. Across two experiments, we tested human performance on a counting task with a series of aerial images and examined the influence of features such as target size, location, color, clarity, and number of targets on accuracy and search strategies. Both experiments presented trials consisting of an aerial satellite image; participants were asked to find all instances of a search template in the image. Target size was consistently a significant predictor of performance, influencing not only the accuracy of selections but also the order in which participants selected target instances in the trial. Experiment 2 demonstrated that the clarity of the target instance and the match between the color of the search template and the color of the target instance also predicted accuracy. Furthermore, color also predicted the order of selecting instances in the trial. These experiments not only establish a benchmark of typical human performance on visual search of aerial images but also identify several features that can influence the task difficulty level for humans. These results have implications for understanding human visual search on real-world tasks and for determining when humans may benefit from automated approaches.


Author(s): Dongsheng Liu, Ling Han

Extraction of agricultural parcels from high-resolution satellite imagery is an important task in precision agriculture. Here, we present a semi-automatic approach for agricultural parcel detection that achieves high accuracy and efficiency. Unlike the techniques presented in the previous literature, this method is pixel based, and it exploits the properties of the spectral angle mapper (SAM) to develop customized operators that accurately derive the parcels. The main steps of the method are sample selection, textural analysis, spectral homogenization, SAM, thresholding, and region growth. We systematically evaluated the proposed algorithm on a variety of images from Gaofen-1 wide field of view (GF-1 WFV), Resource 1-02C (ZY1-02C), and Gaofen-2 (GF-2) satellites, as well as aerial images; the accuracies are 99.09% for GF-1 WFV, 84.42% for ZY1-02C, 96.51% and 92.18% for GF-2, and close to 100% for aerial images. These results demonstrate the method's accuracy and robustness.
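The core of SAM is the angle between a pixel's spectrum and a reference spectrum taken from the selected samples. A minimal pure-Python sketch of the SAM, thresholding, and region-growth steps follows; the paper's customized operators, textural analysis, and spectral homogenization are omitted, so this is only the skeleton of the pipeline, not the authors' implementation.

```python
import math

def spectral_angle(pixel, reference):
    """Spectral angle (radians) between a pixel spectrum and a reference spectrum."""
    dot = sum(p * r for p, r in zip(pixel, reference))
    norm_p = math.sqrt(sum(p * p for p in pixel))
    norm_r = math.sqrt(sum(r * r for r in reference))
    cos_a = dot / (norm_p * norm_r)
    return math.acos(max(-1.0, min(1.0, cos_a)))  # clamp against rounding error

def sam_threshold(image, reference, threshold):
    """Binary mask: True where the spectral angle to the reference is below threshold."""
    return [[spectral_angle(px, reference) < threshold for px in row] for row in image]

def grow_region(mask, seed):
    """4-connected region growth over the thresholded mask, starting from a seed pixel."""
    h, w = len(mask), len(mask[0])
    stack, region = [seed], set()
    while stack:
        y, x = stack.pop()
        if (y, x) in region or not (0 <= y < h and 0 <= x < w) or not mask[y][x]:
            continue
        region.add((y, x))
        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return region
```

Because the angle ignores vector magnitude, SAM is insensitive to uniform illumination scaling: a pixel whose spectrum is a scalar multiple of the reference has an angle of zero, which is part of what makes it attractive for parcel delineation across scenes.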


Author(s): Junjie Chen, Donghai Liu

Foreign objects (e.g., livestock, rafting, and vehicles) intruding into inter-basin channels pose threats to water quality and water supply safety. Timely detection of foreign objects and acquisition of relevant information (e.g., quantities, geometry, and types) is a prerequisite for enforcing proactive measures to control potential loss. Large-scale water channels usually span long distances and are therefore difficult to cover efficiently by manual inspection. Unmanned aerial vehicles can instead provide time-sensitive aerial images from which intrusion incidents can be visually pinpointed. To automate the processing of such aerial images, this paper proposes a computer vision method to detect, extract, and classify foreign objects in water channels. The proposed approach comprises four steps: aerial image preprocessing, abnormal region detection, instance extraction, and foreign object classification. Experiments demonstrate the efficacy of the approach, which recognizes three typical foreign objects (livestock, rafting, and vehicles) with robust performance. The proposed approach can raise early awareness of intrusion incidents in water channels for water quality assurance.
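The abnormal-region-detection and instance-extraction steps can be sketched together as below. The abstract does not specify how abnormal regions are found, so this sketch assumes a simple per-pixel difference against a background model of the channel, followed by connected-component grouping; it is one possible realization of those two steps, not the paper's method.

```python
def detect_abnormal_regions(frame, background, diff_thr=30, min_pixels=4):
    """Flag pixels that differ from the channel background, then group them
    into 4-connected components as candidate foreign-object instances.
    `frame` and `background` are equal-sized 2D grids of intensities."""
    h, w = len(frame), len(frame[0])
    mask = [[abs(frame[y][x] - background[y][x]) > diff_thr for x in range(w)]
            for y in range(h)]
    seen, instances = set(), []
    for y in range(h):
        for x in range(w):
            if not mask[y][x] or (y, x) in seen:
                continue
            stack, comp = [(y, x)], []
            while stack:
                cy, cx = stack.pop()
                if ((cy, cx) in seen or not (0 <= cy < h and 0 <= cx < w)
                        or not mask[cy][cx]):
                    continue
                seen.add((cy, cx))
                comp.append((cy, cx))
                stack += [(cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)]
            if len(comp) >= min_pixels:   # drop speckle too small to be an object
                instances.append(comp)
    return instances
```

Each returned component is a candidate instance whose pixel coordinates give the quantities and geometry the abstract mentions; classification into livestock, rafting, or vehicle would then run on the cropped instance.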

