Xiaolangdi reservoir with UAV and satellite multispectral images

Author(s):  
Honglei Zhu ◽  
Yanwei Huang ◽  
Yingchen Li ◽  
Fei Yu ◽  
Guoyuan Zhang ◽  
...  
Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 1994
Author(s):  
Qian Ma ◽  
Wenting Han ◽  
Shenjin Huang ◽  
Shide Dong ◽  
Guang Li ◽  
...  

This study explores the classification potential of a multispectral classification model for farmland with planting structures of differing complexity. Unmanned aerial vehicle (UAV) remote sensing is used to obtain multispectral images of three study areas with low-, medium-, and high-complexity planting structures, containing three, five, and eight crop types, respectively. A feature subset for each study area is selected by recursive feature elimination (RFE). Object-oriented random forest (OB-RF) and object-oriented support vector machine (OB-SVM) classification models are then established for the three study areas. After training the models on the feature subsets, the classification results are evaluated using a confusion matrix. The OB-RF and OB-SVM models achieve classification accuracies of 97.09% and 99.13%, respectively, for the low-complexity planting structure; the equivalent values are 92.61% and 99.08% for the medium-complexity structure and 88.99% and 97.21% for the high-complexity structure. As planting structure complexity increased from low to high, the overall accuracy of both models decreased: the OB-RF model's accuracy dropped by 8.1%, whereas the OB-SVM model's dropped by only 1.92%. For farmland with fragmentary plots and a high-complexity planting structure, OB-SVM achieves an overall classification accuracy of 97.21% and a single-crop extraction accuracy of at least 85.65%. UAV multispectral remote sensing can therefore be used for classification in highly complex planting structures.
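The RFE-then-classify pipeline described above can be sketched with scikit-learn. This is a minimal illustration on synthetic per-object features, not the paper's UAV data or exact model configuration; the feature counts, sample sizes, and hyperparameters below are assumptions for demonstration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for per-object spectral/texture features
# (8 crop classes, 20 candidate features) -- NOT the paper's data.
X, y = make_classification(n_samples=600, n_features=20, n_informative=10,
                           n_classes=8, n_clusters_per_class=1, random_state=0)

# Step 1: recursive feature elimination with an RF-based ranker.
selector = RFE(RandomForestClassifier(n_estimators=100, random_state=0),
               n_features_to_select=8).fit(X, y)
X_sel = selector.transform(X)

X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.3,
                                          random_state=0, stratify=y)

# Step 2: train both classifiers on the selected feature subset.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
svm = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)

# Step 3: evaluate each model with a confusion matrix / overall accuracy.
for name, model in [("OB-RF", rf), ("OB-SVM", svm)]:
    pred = model.predict(X_te)
    cm = confusion_matrix(y_te, pred)
    print(name, "overall accuracy:", accuracy_score(y_te, pred))
```

In an object-oriented workflow the rows of `X` would be image segments (objects) rather than pixels, with features aggregated per segment before classification.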


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2648
Author(s):  
Muhammad Aamir ◽  
Tariq Ali ◽  
Muhammad Irfan ◽  
Ahmad Shaf ◽  
Muhammad Zeeshan Azam ◽  
...  

Natural disasters not only disturb the human ecological system but also destroy property and critical infrastructure, and can even cause permanent change in the ecosystem. Disasters can be caused by naturally occurring events such as earthquakes, cyclones, floods, and wildfires. Many deep learning techniques have been applied by various researchers to detect and classify natural disasters and reduce ecosystem losses, but detection still faces challenges due to the complex and imbalanced structure of disaster imagery. To tackle this problem, we propose a multilayered deep convolutional neural network. The proposed model works in two blocks: the Block-I convolutional neural network (B-I CNN) detects the occurrence of a disaster, and the Block-II convolutional neural network (B-II CNN) classifies the natural disaster intensity type, using different filters and parameters. The model is tested on 4428 natural images, and performance is reported as the following statistics: sensitivity (SE), 97.54%; specificity (SP), 98.22%; accuracy rate (AR), 99.92%; precision (PRE), 97.79%; and F1-score (F1), 97.97%. The overall accuracy of the whole model is 99.92%, which is competitive with state-of-the-art algorithms.
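The five statistics reported above all derive from the binary confusion-matrix counts. A small sketch of their standard definitions (the counts passed in are hypothetical, not the paper's):

```python
def detection_metrics(tp, tn, fp, fn):
    """Standard binary-detection statistics, returned as percentages."""
    se = tp / (tp + fn)                    # sensitivity (recall)
    sp = tn / (tn + fp)                    # specificity
    ar = (tp + tn) / (tp + tn + fp + fn)   # accuracy rate
    pre = tp / (tp + fp)                   # precision
    f1 = 2 * pre * se / (pre + se)         # F1-score
    return {k: round(100 * v, 2) for k, v in
            dict(SE=se, SP=sp, AR=ar, PRE=pre, F1=f1).items()}

# Hypothetical counts for illustration only:
print(detection_metrics(tp=970, tn=3420, fp=22, fn=16))
```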


Water ◽  
2021 ◽  
Vol 13 (10) ◽  
pp. 1333
Author(s):  
Giuseppe Francesco Cesare Lama ◽  
Mariano Crimaldi ◽  
Vittorio Pasquino ◽  
Roberta Padulano ◽  
Giovanni Battista Chirico

Estimating the main hydrodynamic features of real vegetated water bodies is crucial to assure a balance between their hydraulic conveyance and environmental quality. Riparian vegetation stands have a high impact on vegetated channels. The present work aims to integrate riparian vegetation's reflectance indices with the hydrodynamics of real vegetated water flows, in order to assess the impact of riparian vegetation morphometry on the distribution of bulk drag coefficients along an abandoned vegetated drainage channel fully covered by 9–10 m high Arundo donax (commonly known as giant reed) stands, starting from average flow velocity measurements at 30 cross-sections identified along the channel. A map of riparian vegetation cover was obtained through digital processing of Unmanned Aerial Vehicle (UAV)-acquired multispectral images, which represent a fast way to observe riparian plants' traits in hardly accessible areas such as vegetated water bodies in natural conditions. In this study, the portion of riparian plants effectively interacting with the flow was expressed in terms of ground-based Leaf Area Index (LAI) measurements, which are easily related to the UAV-based Normalized Difference Vegetation Index (NDVI). The comparative analysis between the NDVI and LAI maps of the Arundo donax stands enabled assessment of the impact of UAV-acquired multispectral imagery on bulk drag predictions along the vegetated drainage channel.
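NDVI, the index linking the UAV imagery to ground-based LAI here, is computed per pixel from the near-infrared and red reflectance bands. A minimal sketch on toy rasters (the band values below are invented, and the LAI–NDVI relationship itself is typically fitted empirically per site):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)  # eps guards bare/no-data pixels

# Toy 2x2 reflectance rasters standing in for UAV band images.
nir = np.array([[0.60, 0.55], [0.10, 0.50]])
red = np.array([[0.10, 0.08], [0.09, 0.12]])
print(ndvi(nir, red))  # dense canopy -> values approaching 1
```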


2021 ◽  
Vol 13 (5) ◽  
pp. 956
Author(s):  
Florian Mouret ◽  
Mohanad Albughdadi ◽  
Sylvie Duthoit ◽  
Denis Kouamé ◽  
Guillaume Rieu ◽  
...  

This paper studies the detection of anomalous crop development at the parcel level based on an unsupervised outlier detection technique. The experimental validation is conducted on rapeseed and wheat parcels located in Beauce (France). The proposed methodology consists of four sequential steps: (1) preprocessing of synthetic aperture radar (SAR) and multispectral images acquired by the Sentinel-1 and Sentinel-2 satellites, (2) extraction of SAR and multispectral pixel-level features, (3) computation of parcel-level features using zonal statistics and (4) outlier detection. The different types of anomalies that can affect the studied crops are analyzed and described. The different factors that can influence the outlier detection results are investigated, with particular attention devoted to the synergy between Sentinel-1 and Sentinel-2 data. Overall, the best performance is obtained when jointly using a selection of Sentinel-1 and Sentinel-2 features with the isolation forest algorithm. The selected features are the co-polarized (VV) and cross-polarized (VH) backscattering coefficients for Sentinel-1 and five vegetation indices for Sentinel-2 (among them, the Normalized Difference Vegetation Index and two variants of the Normalized Difference Water Index). When using these features with an outlier ratio of 10%, the percentage of detected true positives (i.e., crop anomalies) is 94.1% for rapeseed parcels and 95.5% for wheat parcels.
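Step (4) with a 10% outlier ratio maps directly onto scikit-learn's isolation forest via its `contamination` parameter. A minimal sketch on synthetic parcel features (the feature values and anomaly distribution below are invented stand-ins for the zonal statistics described above):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic parcel-level features standing in for zonal statistics of
# Sentinel-1 backscatter (VV, VH) and Sentinel-2 vegetation indices.
normal_parcels = rng.normal(0.0, 1.0, size=(180, 7))
anomalous_parcels = rng.normal(4.0, 1.0, size=(20, 7))
X = np.vstack([normal_parcels, anomalous_parcels])

# Outlier ratio of 10%, matching the paper's reported configuration.
iso = IsolationForest(contamination=0.10, random_state=0).fit(X)
labels = iso.predict(X)  # +1 = inlier, -1 = outlier
print("flagged parcels:", int((labels == -1).sum()))
```

In practice each row would carry one parcel's temporal statistics, and the flagged parcels would then be inspected against the anomaly types the paper catalogues.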


Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4520
Author(s):  
Luis Lopes Chambino ◽  
José Silvestre Silva ◽  
Alexandre Bernardino

Facial recognition is a method of identifying or authenticating the identity of people through their faces. Nowadays, facial recognition systems that use multispectral images achieve better results than those that use only visible spectral band images. In this work, a novel architecture for facial recognition that uses multiple deep convolutional neural networks and multispectral images is proposed. A domain-specific transfer-learning methodology applied to a deep neural network pre-trained on RGB images is shown to generalize well to the multispectral domain. We also propose a skin detector module for forgery detection. Several experiments were planned to assess the performance of our methods. First, we evaluate the performance of the forgery detection module using face masks and coverings of different materials. A second study was carried out with the objective of tuning the parameters of our domain-specific transfer-learning methodology, in particular which layers of the pre-trained network should be retrained to obtain good adaptation to multispectral images. A third study was conducted to evaluate the performance of support vector machine (SVM) and k-nearest neighbor classifiers using the embeddings obtained from the trained neural network. Finally, we compare the proposed method with other state-of-the-art approaches. The experimental results show performance improvements on the Tufts and CASIA NIR-VIS 2.0 multispectral databases, with rank-1 scores of 99.7% and 99.8%, respectively.
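The third study, classifying identities from network embeddings with SVM and k-nearest neighbor classifiers, can be sketched as follows. The embeddings here are random draws around synthetic identity centers, standing in for the outputs of the paper's trained multispectral CNN; the dimensionality and classifier settings are assumptions.

```python
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Toy 128-D "embeddings" for 10 identities, 20 samples each; in the
# paper these would come from the trained network, not random draws.
centers = rng.normal(0.0, 1.0, size=(10, 128))
X = np.repeat(centers, 20, axis=0) + rng.normal(0.0, 0.3, size=(200, 128))
y = np.repeat(np.arange(10), 20)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0, stratify=y)

# Both classifiers operate directly on the embedding vectors.
svm = SVC(kernel="linear").fit(X_tr, y_tr)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("SVM accuracy:", accuracy_score(y_te, svm.predict(X_te)))
print("kNN accuracy:", accuracy_score(y_te, knn.predict(X_te)))
```

A rank-1 score corresponds to the top-1 accuracy of such a classifier over the gallery identities.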

