uav images
Recently Published Documents

TOTAL DOCUMENTS: 604 (five years: 360)
H-INDEX: 24 (five years: 9)

2022, Vol. 14 (2), pp. 382
Author(s): Yafei Jing, Yuhuan Ren, Yalan Liu, Dacheng Wang, Linjun Yu

Efficiently and automatically acquiring earthquake-damage information through remote sensing has posed great challenges, because classical methods for detecting houses damaged by destructive earthquakes are often both time-consuming and low in accuracy. A series of deep-learning-based techniques have been developed, and recent studies have demonstrated their effectiveness for automatic target extraction from natural and remote sensing images. For the detection of small artificial targets, current studies show that You Only Look Once (YOLO) performs well on aerial and Unmanned Aerial Vehicle (UAV) images; however, less work has addressed the extraction of damaged houses. In this study, we propose a YOLOv5s-ViT-BiFPN-based neural network for the detection of damaged rural houses. Specifically, to enhance the feature information of damaged houses with the global information of the feature map, we introduce the Vision Transformer into the feature extraction network. Furthermore, because the scale of damaged houses in UAV images varies with flying height, we apply the Bi-Directional Feature Pyramid Network (BiFPN) for multi-scale feature fusion to aggregate features with different resolutions, and test the model. Taking the 2021 Yangbi earthquake in Yunnan, China (surface wave magnitude Ms 6.4) as an example, the proposed model performs better, with the average precision (AP) increased by 9.31% and 1.23% over YOLOv3 and YOLOv5s, respectively, and a detection speed of 80 FPS, 2.96 times faster than YOLOv3. In addition, a transferability test in five other areas showed an average accuracy of 91.23% and a total processing time of 4 min, versus the 100 min needed by professional visual interpreters.
The experimental results demonstrate that the YOLOv5s-ViT-BiFPN model can automatically detect rural houses damaged by destructive earthquakes in UAV images with good accuracy and timeliness, and that it is robust and transferable.
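The BiFPN fusion step described above can be sketched with BiFPN's fast normalized weighted fusion rule, O = Σᵢ(wᵢ·Iᵢ)/(Σⱼwⱼ + ε). This is a minimal numpy sketch, not the paper's implementation; the feature maps and the weights 0.6/0.4 are illustrative values.

```python
import numpy as np

def fast_normalized_fusion(features, weights, eps=1e-4):
    """BiFPN-style fusion: O = sum_i(w_i * I_i) / (sum_j w_j + eps).

    The original BiFPN keeps weights non-negative via ReLU; we clamp
    them here for the same effect.
    """
    w = np.maximum(np.asarray(weights, dtype=float), 0.0)
    features = [np.asarray(f, dtype=float) for f in features]
    num = sum(wi * fi for wi, fi in zip(w, features))
    return num / (w.sum() + eps)

# Two same-resolution feature maps (e.g., one upsampled from a coarser
# pyramid level) fused with illustrative weights 0.6 and 0.4.
a = np.ones((4, 4))
b = np.zeros((4, 4))
fused = fast_normalized_fusion([a, b], [0.6, 0.4])
```

In a real network the weights are learned per fusion node, so the model can emphasize whichever resolution carries more signal for a given scale of damaged house.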


Plant Methods, 2022, Vol. 18 (1)
Author(s): Lili Li, Jiangwei Qiao, Jian Yao, Jie Li, Li Li

Abstract
Background: Freezing injury is a devastating yet common form of damage to winter rapeseed during the overwintering period; it directly reduces yield and causes heavy economic losses. It is therefore an important and urgent task for crop breeders to identify freezing-tolerant rapeseed materials during breeding. Existing large-scale methods for recognizing freezing-tolerant rapeseed materials rely mainly on field investigation conducted by agricultural experts using professional equipment. These methods are time-consuming, inefficient, and laborious, and their accuracy depends heavily on the knowledge and experience of the experts.
Methods: To solve these problems, we propose a low-cost approach for recognizing freezing-tolerant rapeseed materials using deep learning and unmanned aerial vehicle (UAV) images captured by a consumer UAV. We formulate freezing-tolerant material recognition as a binary classification problem, which can be solved well with deep learning. The proposed method automatically and efficiently recognizes freezing-tolerant rapeseed materials from a large number of crop candidates. To train the deep learning networks, we first manually construct a real dataset from UAV images of rapeseed materials captured by a DJI Phantom 4 Pro V2.0. Then, five classic deep learning networks (AlexNet, VGGNet16, ResNet18, ResNet50, and GoogLeNet) are used to perform the freezing-tolerant rapeseed material recognition.
Results and conclusions: All five deep learning networks achieve an accuracy above 92%, with ResNet50 providing the best accuracy (93.33%) on this task. We also compare the deep learning networks with traditional machine learning methods; the comparison shows that the deep-learning-based methods significantly outperform the traditional machine-learning-based methods on our task.
The experimental results show that it is feasible to recognize freezing-tolerant rapeseed using UAV images and deep learning.
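The binary-classification formulation above can be sketched independently of the chosen backbone: the network emits two logits per image (not tolerant vs. freezing-tolerant), and accuracy is the fraction of argmax predictions matching the labels. The logits and labels below are made-up toy values, not data from the paper.

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax over class logits."""
    z = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def binary_accuracy(logits, labels):
    """Fraction of images whose argmax class matches the label
    (0 = not freezing-tolerant, 1 = freezing-tolerant)."""
    preds = softmax(logits).argmax(axis=1)
    return float((preds == np.asarray(labels)).mean())

# Illustrative logits for 5 rapeseed images from any of the backbones
# (AlexNet, ResNet50, ...); 4 of the 5 predictions match the labels.
logits = np.array([[2.0, 0.1], [0.3, 1.5], [1.2, 0.8],
                   [0.2, 2.2], [1.9, 0.4]])
labels = [0, 1, 1, 1, 0]
acc = binary_accuracy(logits, labels)  # 0.8 on this toy batch
```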


Author(s): Moxuan Ren, Jianan Li, Liqiang Song, Hui Li, Tingfa Xu

2022, pp. 1-1
Author(s): Chaofeng Ren, Haixing Shang, Zhengdong Zha, Fuqiang Zhang, Yuchi Pu

Agronomy, 2021, Vol. 12 (1), pp. 102
Author(s): José A. Martínez-Casasnovas, Leire Sandonís-Pozo, Alexandre Escolà, Jaume Arnó, Jordi Llorens

One of the challenges in orchard management, particularly of hedgerow tree plantations, is the delineation of management zones on the basis of high-precision data. Along this line, the present study analyses the applicability of vegetation indices derived from UAV images for estimating key structural and geometric canopy parameters of an almond orchard. In addition, classes created from the vegetation indices were assessed for delineating potential management zones. The structural and geometric orchard parameters (width, height, cross-sectional area, and porosity) were characterized by means of a LiDAR sensor, and the vegetation indices were derived from a UAV-acquired multispectral image. Both datasets were summarized every 0.5 m along the almond tree rows and used to interpolate continuous representations of the variables by means of geostatistical analysis. Linear and canonical correlation analyses were carried out to select the best-performing vegetation index for estimating the structural and geometric orchard parameters in each cross-section of the tree rows. The results showed that the NDVI averaged over each cross-section and normalized by its projected area achieved the highest correlations and served to define potential management zones. These findings expand the possibilities of using multispectral images in orchard management, particularly in hedgerow plantations.
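The best-performing index above, mean NDVI per cross-section normalized by projected area, can be sketched as follows. The band arrays, section masks, and area values are illustrative assumptions; in the study the sections come from 0.5 m steps along the tree rows.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Per-pixel NDVI = (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

def section_ndvi_per_area(nir, red, section_masks, areas):
    """Mean NDVI within each cross-section mask, divided by that
    section's projected canopy area (m^2)."""
    index = ndvi(nir, red)
    return [index[m].mean() / a for m, a in zip(section_masks, areas)]

# Toy 2x4 bands: the left half is one cross-section, the right half
# another, each with an assumed projected area of 0.5 m^2.
nir = np.array([[0.8, 0.8, 0.6, 0.6], [0.8, 0.8, 0.6, 0.6]])
red = np.full((2, 4), 0.2)
left = np.zeros(nir.shape, dtype=bool)
left[:, :2] = True
vals = section_ndvi_per_area(nir, red, [left, ~left], [0.5, 0.5])
```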


2021, Vol. 14 (1), pp. 150
Author(s): Jie You, Ruirui Zhang, Joonwhoan Lee

Pine wilt is a devastating disease that typically kills affected pine trees within a few months. In this paper, we confront the problem of detecting pine wilt disease. The image samples previously used for pine wilt disease detection are highly ambiguous owing to poor image resolution and the presence of “disease-like” objects. We therefore created a new dataset using large orthophotographs collected from 32 cities, 167 regions, and 6121 pine wilt disease hotspots in South Korea. In our system, pine wilt disease is detected in two stages: in the first stage, the disease and hard negative samples are collected using a convolutional neural network. Because the diseased areas vary in size and color, and because the disease manifests differently from the early stage to the late stage, the hard negative samples are further categorized into six classes to simplify the complexity of the dataset. In the second stage, an object detection model localizes the disease and the “disease-like” hard negative samples. We used several image augmentation methods to boost system performance and avoid overfitting. The test process was divided into two phases: a patch-based test and a real-world test. In the patch-based test, we used test-time augmentation to obtain the average prediction of our system across multiple augmented samples, and the predictions achieved a mean average precision of 89.44% in five-fold cross-validation, an increase of around 5% over the alternative system. In the real-world test, we collected 10 orthophotographs of various resolutions and areas, and our system successfully detected 711 out of 730 potential disease spots.
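The test-time augmentation step used in the patch-based test can be sketched in a backbone-agnostic way: score each flipped copy of a patch and average the results. The `score_fn` below is a stand-in for the paper's detector, and the flip set is an illustrative choice of augmentations.

```python
import numpy as np

def tta_score(patch, score_fn):
    """Average a scoring function over simple flip augmentations of a
    patch, as in test-time augmentation."""
    augments = [
        patch,
        np.fliplr(patch),              # horizontal flip
        np.flipud(patch),              # vertical flip
        np.fliplr(np.flipud(patch)),   # 180-degree rotation
    ]
    return float(np.mean([score_fn(p) for p in augments]))

# Illustrative "model": normalized brightness of the top-left pixel.
# The four flips move each corner of the patch into that position,
# so the TTA score averages over the four corners.
patch = np.arange(16.0).reshape(4, 4)
score = tta_score(patch, lambda p: p[0, 0] / 15.0)
```

Averaging over augmented views reduces the variance of individual predictions, which is what lifts the mean average precision in the five-fold cross-validation reported above.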


Agriculture, 2021, Vol. 12 (1), pp. 26
Author(s): Di Zhang, Feng Pan, Qi Diao, Xiaoxue Feng, Weixing Li, et al.

With the development of unmanned aerial vehicles (UAVs), obtaining high-resolution aerial images has become easier, and identifying and locating specific crops in aerial images is a valuable task: the location and quantity of crops are important for agricultural insurance businesses. In this paper, the problem of locating chili seedling crops in large-field UAV images is addressed. Two difficulties arise in the location process: the number of samples is small, and objects in UAV images are similar at small scales, which increases the location difficulty. A detection framework based on a prototypical network is proposed to detect crops in UAV aerial images. In particular, a method of subcategory slicing is applied to handle the small-scale similarity between objects in aerial images. The detection framework is divided into two parts: training and detection. In the training process, crop images are sliced into subcategories, and then these subcategory patch images and background-category images are used to train the prototypical network. In the detection process, a simple linear iterative clustering (SLIC) superpixel segmentation method is used to generate candidate regions in the UAV image, and the location method uses the prototypical network to recognize nine patch images extracted simultaneously. To train and evaluate the proposed method, we constructed an evaluation dataset by collecting images of chilies at the seedling stage with a UAV and achieved a location accuracy of 96.46%. This study proposes a seedling crop detection framework based on few-shot learning that does not require labeled boxes; it reduces the workload of manual annotation and meets the location needs of seedling crops.
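The step in which nine patch images are recognized simultaneously can be sketched as extracting a 3x3 grid of patches around a candidate centre. The patch size, grid spacing, and toy image below are illustrative assumptions; the paper takes its candidate centres from SLIC superpixels.

```python
import numpy as np

def nine_patches(image, cy, cx, size):
    """Extract a 3x3 grid of size x size patches whose centres are
    offset by one patch step around the candidate centre (cy, cx)."""
    half = size // 2
    patches = []
    for dy in (-size, 0, size):
        for dx in (-size, 0, size):
            y, x = cy + dy, cx + dx
            patches.append(image[y - half:y + half + 1,
                                 x - half:x + half + 1])
    return patches

# Toy 15x15 "image"; a candidate centre at (7, 7) yields nine 3x3
# patches, the middle one (index 4) centred on the candidate itself.
img = np.arange(15 * 15).reshape(15, 15)
patches = nine_patches(img, cy=7, cx=7, size=3)
```

Classifying the centre patch together with its eight neighbours lets the prototypical network use surrounding context to separate seedlings from small look-alike objects.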


Information, 2021, Vol. 13 (1), pp. 2
Author(s): Danilo Avola, Luigi Cinque, Angelo Di Mambro, Anxhelo Diko, Alessio Fagioli, et al.

In recent years, small-scale Unmanned Aerial Vehicles (UAVs) have been used in many video surveillance applications, such as vehicle tracking, border control, and dangerous object detection. Anomaly detection can be a prerequisite of many of these applications thanks to its ability to identify areas and/or objects of interest without knowing them a priori. In this paper, a One-Class Support Vector Machine (OC-SVM) anomaly detector based on customized Haralick textural features for low-altitude aerial video surveillance is presented. The use of a One-Class SVM, which is notoriously a lightweight and fast classifier, enables real-time systems even when they are embedded in low-computational small-scale UAVs. At the same time, the use of textural features allows a vision-based system to detect the micro and macro structures of an analyzed surface, thus allowing the identification of small and large anomalies, respectively. The latter aspect plays a key role in aerial video surveillance at low altitude, i.e., 6 to 15 m, where the detection of common items, e.g., cars, is as important as the detection of small and undefined objects, e.g., Improvised Explosive Devices (IEDs). Experiments on the UAV Mosaicking and Change Detection (UMCD) dataset show the effectiveness of the proposed system in terms of accuracy, precision, recall, and F1-score: the model achieves 100% precision, i.e., every reported anomaly is a true anomaly, at the expense of a reasonable trade-off in recall, which still reaches 71.23%. Moreover, compared to classical Haralick textural features, the model obtains significantly higher performance, i.e., ≈20% on all metrics, further demonstrating the effectiveness of the approach.
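The textural front-end described above can be sketched with one classic Haralick feature, contrast, computed from a gray-level co-occurrence matrix (GLCM). The tiny patches, the two gray levels, and the single (0, 1) offset are illustrative assumptions; the paper customizes the feature set and feeds it to a One-Class SVM rather than thresholding a single feature.

```python
import numpy as np

def glcm(image, levels, offset=(0, 1)):
    """Normalized gray-level co-occurrence matrix for one pixel offset.

    Counts how often gray level i occurs at distance `offset` from
    gray level j, then normalizes the counts into probabilities.
    """
    dy, dx = offset
    h, w = image.shape
    m = np.zeros((levels, levels))
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            m[image[y, x], image[y + dy, x + dx]] += 1
    return m / m.sum()

def haralick_contrast(p):
    """Haralick contrast: sum_{i,j} (i - j)^2 * p(i, j)."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

# A uniform patch has zero contrast; a striped patch does not, which
# is the kind of micro-structure difference the detector exploits.
flat = np.zeros((4, 4), dtype=int)
stripes = np.tile(np.array([0, 1, 0, 1]), (4, 1))
c_flat = haralick_contrast(glcm(flat, levels=2))
c_stripes = haralick_contrast(glcm(stripes, levels=2))
```

An OC-SVM trained only on feature vectors from "normal" terrain then flags patches whose textural signature falls outside the learned boundary.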

