Asbestos–Cement Roofing Identification Using Remote Sensing and Convolutional Neural Networks (CNNs)

2020 ◽  
Vol 12 (3) ◽  
pp. 408
Author(s):  
Małgorzata Krówczyńska ◽  
Edwin Raczko ◽  
Natalia Staniszewska ◽  
Ewa Wilk

Due to the pathogenic nature of asbestos, a statutory ban on asbestos-containing products has been in place in Poland since 1997. In order to protect human health and the environment, it is crucial to estimate the quantity of asbestos–cement products in use. It has been estimated that about 90% of these are roof coverings. Different methods are used to estimate the amount of asbestos–cement products, such as the use of indicators, field inventory, remote sensing data, and multi- and hyperspectral images; the latter are used for relatively small areas. Other methods are sought for the reliable estimation of the quantity of asbestos-containing products, as well as their spatial distribution. The objective of this paper is to present the use of convolutional neural networks for the identification of asbestos–cement roofing on aerial photographs in natural color (RGB) and color infrared (CIR) compositions. The study was conducted for the Chęciny commune. Aerial photographs with a spatial resolution of 25 cm in RGB and CIR compositions were used, and field studies were conducted to verify data and to develop a database for Convolutional Neural Network (CNN) training. Network training was carried out using the TensorFlow and R-Keras libraries in the R programming environment. The classification was carried out using a convolutional neural network consisting of two convolutional blocks, a spatial dropout layer, and two blocks of fully connected perceptrons. Asbestos–cement roofing products were classified with a producer's accuracy of 89% and overall accuracies of 87% and 89%, depending on the image composition used. Previous attempts at identifying asbestos–cement roofing have focused primarily on hyperspectral data and multispectral imagery, usually employing the following classification algorithms: Spectral Angle Mapper, Support Vector Machine, object classification, Spectral Feature Fitting, and decision trees.
Previous studies undertaken by other researchers showed that low spectral resolution only allowed for a rough classification of roofing materials. The use of one coherent method would allow data comparison between regions. Determining the amount of asbestos–cement products in use is important for assessing environmental exposure to asbestos fibres, determining patterns of disease, and ultimately modelling potential solutions to counteract threats.
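The architecture described above (two convolutional blocks, a spatial dropout layer, and two fully connected blocks) can be made concrete in a short Keras sketch. The paper worked in R with R-Keras; the Python equivalent below is a minimal illustration, and the patch size, filter counts, and binary class set (asbestos–cement roofing vs. other) are assumptions, not the authors' exact settings.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

PATCH = 32      # hypothetical patch size, not specified in the abstract
N_CLASSES = 2   # asbestos-cement roofing vs. other roofing

model = keras.Sequential([
    layers.Input(shape=(PATCH, PATCH, 3)),  # RGB or CIR composition
    # Convolutional block 1
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    # Convolutional block 2
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    # Spatial dropout drops whole feature maps, regularising conv features
    layers.SpatialDropout2D(0.3),
    layers.Flatten(),
    # Two fully connected (perceptron) blocks
    layers.Dense(128, activation="relu"),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# A forward pass on one dummy patch yields a class-probability vector
probs = model.predict(np.zeros((1, PATCH, PATCH, 3)), verbose=0)
print(probs.shape)  # (1, 2)
```

Training would then proceed with `model.fit` on labelled roof patches from the field-verified database.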

2021 ◽  
Author(s):  
Dario Spiller ◽  
Luigi Ansalone ◽  
Nicolas Longépé ◽  
James Wheeler ◽  
Pierre Philippe Mathieu

Over the last few years, wildfires have become more severe and destructive, with extreme consequences for local and global ecosystems. Fire detection and accurate monitoring of risk areas are becoming increasingly important. Satellite remote sensing offers unique opportunities for mapping, monitoring, and analysing the evolution of wildfires, providing helpful contributions to counteract dangerous situations.

Among the different remote sensing technologies, hyperspectral (HS) imagery presents unparalleled features in support of fire detection. In this study, HS images from the Italian satellite PRISMA (PRecursore IperSpettrale della Missione Applicativa) will be used. The PRISMA satellite, launched on 22 March 2019, carries a hyperspectral and panchromatic payload able to acquire images with worldwide coverage. The hyperspectral camera works in the spectral range of 0.4–2.5 µm, with 66 and 173 channels in the VNIR (Visible and Near InfraRed) and SWIR (Short-Wave InfraRed) regions, respectively. The average spectral resolution is less than 10 nm over the entire range, with an accuracy of ±0.1 nm, while the ground sampling distance of PRISMA images is about 5 m for the panchromatic camera and 30 m for the hyperspectral camera.

This work will investigate how PRISMA HS images can be used to support fire detection and related crisis management. To this aim, deep learning methodologies will be investigated, such as 1D convolutional neural networks to perform spectral analysis of the data, or 3D convolutional neural networks to perform spatial and spectral analyses at the same time. Semantic segmentation of input HS data will be discussed, where an output image with metadata is associated with each pixel of the input image.
The overall goal of this work is to highlight how PRISMA hyperspectral data can contribute to remote sensing and Earth-observation data analysis for natural hazard and risk studies, focusing especially on wildfires, also considering the benefits with respect to standard multispectral imagery or previous hyperspectral sensors such as Hyperion.

The contributions of this work to the state of the art are the following:

- Demonstrating the advantages of using PRISMA HS data over multispectral data.
- Discussing the potential of deep learning methodologies based on 1D and 3D convolutional neural networks to capture spectral (and, in the 3D case, spatial) dependencies, which is crucial when dealing with HS images.
- Discussing the possibility and benefits of integrating an HS-based approach into future monitoring systems for wildfire alerts and disasters.
- Discussing the opportunity to design and develop future HS remote sensing missions dedicated to fire detection with on-board analysis.

To conclude, this work will raise awareness of the potential of using PRISMA HS data for disaster monitoring, with a special focus on wildfires.
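A 1D CNN of the kind mentioned above treats each pixel's PRISMA spectrum as a sequence over wavelength. The sketch below is a minimal illustration of that idea in Keras; the band count follows PRISMA's 66 VNIR + 173 SWIR channels, while the filter sizes and the binary fire/no-fire class set are assumptions for demonstration only.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

N_BANDS = 239   # ~66 VNIR + 173 SWIR PRISMA channels
N_CLASSES = 2   # hypothetical per-pixel labels: fire / no fire

# 1D convolutions slide along the wavelength axis, learning local
# spectral patterns (e.g. absorption features) independent of position
model = keras.Sequential([
    layers.Input(shape=(N_BANDS, 1)),
    layers.Conv1D(32, 7, activation="relu", padding="same"),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, 5, activation="relu", padding="same"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(N_CLASSES, activation="softmax"),
])

# One dummy pixel spectrum in, one class-probability vector out
out = model.predict(np.zeros((1, N_BANDS, 1)), verbose=0)
print(out.shape)  # (1, 2)
```

A 3D CNN variant would instead take small spatial patches with the full spectral depth as input, capturing spatial and spectral context jointly.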


Author(s):  
M. Brandmeier ◽  
Y. Chen

Abstract. Deep learning has been used successfully in computer vision problems, e.g. image classification, target detection and many more. We use deep learning in conjunction with ArcGIS to implement a model with advanced convolutional neural networks (CNNs) for lithological mapping in the Mount Isa region (Australia). The area is ideal for spectral remote sensing, as there is only sparse vegetation and, besides freely available Sentinel-2 and ASTER data, several geophysical datasets are available from exploration campaigns. By fusing the data, and thus covering a wide spectral range as well as capturing geophysical properties of rocks, we aim at improving classification accuracies and supporting geological mapping. We also evaluate the performance of the sensors on their own compared to their joint use, as the Sentinel-2 satellites are relatively new and only a few studies of geological applications exist so far. We developed an end-to-end deep learning model using Keras and TensorFlow that consists of several convolutional, pooling and deconvolutional layers. Our model was inspired by the family of U-Net architectures, in which low-level feature maps (encoders) are concatenated with high-level ones (decoders), enabling precise localization. This type of network architecture was especially designed to solve pixel-wise classification problems effectively, which is appropriate for lithological classification. We spatially resampled and fused the multi-sensor remote sensing data with different bands and geophysical data into image cubes as input for our model. Pre-processing was done in ArcGIS, and the final, fine-tuned model was imported into a toolbox to be used on further scenes directly in the GIS environment. The tool classifies each pixel of the multiband imagery into different types of rocks according to a defined probability threshold.
Results highlight the power of using Sentinel-2 in conjunction with ASTER data, with accuracies of 75% compared to only 70% and 73% for ASTER or Sentinel-2 data alone. These overall accuracies are similar, but examining the individual classes shows significant improvements for classes such as dolerite or carbonate sediments that are not widely distributed in the area. Adding geophysical datasets reduced accuracies to 60%, probably due to an order-of-magnitude difference in spatial resolution. In comparison, Random Forest (RF) and Support Vector Machines (SVMs) trained on the same data only achieve accuracies of 46% and 36%, respectively. Most uncertainty is due to labelling errors and labels with mixed lithologies. However, the results show that the U-Net model is a powerful alternative to other classifiers for medium-resolution multispectral data.
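The key U-Net ingredient named above (concatenating low-level encoder feature maps with upsampled decoder maps for precise pixel-wise localization) can be sketched compactly in Keras. The tile size, band count, and class count below are illustrative stand-ins, not the paper's configuration, and the network is reduced to a single encoder/decoder level for clarity.

```python
from tensorflow import keras
from tensorflow.keras import layers

def mini_unet(size=64, bands=10, n_classes=5):
    """Minimal U-Net-style encoder/decoder with one skip connection.
    Sizes and class count are illustrative assumptions."""
    inp = keras.Input(shape=(size, size, bands))   # fused image cube
    # Encoder: low-level feature maps at full resolution
    e1 = layers.Conv2D(32, 3, activation="relu", padding="same")(inp)
    p1 = layers.MaxPooling2D()(e1)
    # Bottleneck at half resolution
    b = layers.Conv2D(64, 3, activation="relu", padding="same")(p1)
    # Decoder: upsample, then concatenate with matching encoder features
    u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(b)
    c1 = layers.Concatenate()([u1, e1])            # the skip connection
    d1 = layers.Conv2D(32, 3, activation="relu", padding="same")(c1)
    # 1x1 convolution gives per-pixel class probabilities
    out = layers.Conv2D(n_classes, 1, activation="softmax")(d1)
    return keras.Model(inp, out)

model = mini_unet()
print(model.output_shape)  # (None, 64, 64, 5)
```

Applying a probability threshold to the softmax output, as the ArcGIS tool does, then turns the per-pixel probabilities into a lithological class map.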


Author(s):  
R. Vidhya ◽  
D. Vijayasekaran ◽  
M. Ahamed Farook ◽  
S. Jai ◽  
M. Rohini ◽  
...  

The mangrove ecosystem plays a crucial role in coastal conservation and provides livelihood support to humans. It is seriously affected by various climatic and anthropogenically induced changes, and continuous monitoring is imperative to protect this fragile ecosystem. In this study, mangrove area and health status were extracted from hyperspectral remote sensing data (EO-1 Hyperion) using support vector machine (SVM) classification. The principal component transformation (PCT) technique was used to perform band reduction on the hyperspectral data, and the soil-adjusted vegetation index (SAVI) was used as an additional parameter. The mangroves were classified into three classes: degraded, healthy, and sparse. The SVM classification generated an overall accuracy of 73% and a kappa of 0.62, and its results were compared with those of spectral angle mapper (SAM) classification. When SAVI was also included in the SVM classification, accuracy improved to 82%, and the sparse and degraded mangrove classes were well separated. The results indicate that mapping of mangrove health is more accurate when a machine learning classifier such as SVM is combined with indices derived from hyperspectral remote sensing data.
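The pipeline described (PCT band reduction, SAVI as an extra feature, SVM classification) can be sketched with scikit-learn. The data below are synthetic stand-ins for Hyperion pixels, and the band positions for red and NIR, the number of components, and the toy labels are all assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in: 200 pixels x 150 spectral bands
X = rng.random((200, 150))
red, nir = X[:, 30], X[:, 50]      # hypothetical red/NIR band positions
y = (nir > red).astype(int)        # toy labels: "healthy" vs "degraded"

# 1) Band reduction via principal component transformation
pcs = PCA(n_components=10).fit_transform(X)

# 2) Soil-adjusted vegetation index; L = 0.5 is the usual soil factor
L = 0.5
savi = (nir - red) / (nir + red + L) * (1 + L)

# 3) SVM on principal components plus SAVI as an additional feature
features = np.column_stack([pcs, savi])
clf = SVC(kernel="rbf").fit(features, y)
print(features.shape, clf.score(features, y))
```

On real Hyperion data the labels would come from field reference polygons, and accuracy would be assessed on held-out pixels rather than the training set.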


2020 ◽  
Vol 13 (1) ◽  
pp. 65
Author(s):  
Jingtao Li ◽  
Yonglin Shen ◽  
Chao Yang

Due to the increasing demand for the monitoring of crop conditions and food production, it is a challenging and meaningful task to identify crops from remote sensing images. State-of-the-art crop classification models are mostly built on supervised classifiers such as support vector machines (SVM), convolutional neural networks (CNN), and long short-term memory networks (LSTM). Meanwhile, as an unsupervised generative model, the generative adversarial network (GAN) is rarely used for classification tasks in agricultural applications. In this work, we propose a new method that combines GAN, CNN, and LSTM models to classify corn and soybean crops from remote sensing time-series images, in which the GAN's discriminator was used as the final classifier. The method remains feasible when training samples are few, and it takes full advantage of the spectral, spatial, and phenological features of crops in satellite data. The classification experiments were conducted on corn, soybeans, and other crops. To verify the effectiveness of the proposed method, comparisons with SVM, SegNet, CNN, LSTM, and different combinations of these models were also conducted. The results show that our method achieved the best classification results, with a Kappa coefficient of 0.7933 and an overall accuracy of 0.86. Experiments in other study areas also demonstrate the extensibility of the proposed method.
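One plausible way to combine the three components named above is a discriminator that applies a CNN to each date of the image time series and an LSTM across dates, with one extra output class for generator-produced fakes, as in semi-supervised GANs. The sketch below is an assumption about the architecture, not the authors' exact design; all dimensions are illustrative.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Time series of image patches: (dates, height, width, bands); sizes are
# illustrative stand-ins, not taken from the paper.
T, H, W, B = 6, 16, 16, 4
N_CLASSES = 3          # corn, soybeans, other

inp = keras.Input(shape=(T, H, W, B))
# CNN applied per date extracts spatial/spectral features
x = layers.TimeDistributed(layers.Conv2D(16, 3, activation="relu"))(inp)
x = layers.TimeDistributed(layers.GlobalAveragePooling2D())(x)
# LSTM across dates models crop phenology through the season
x = layers.LSTM(32)(x)
# K real crop classes + 1 "fake" class for generator samples
out = layers.Dense(N_CLASSES + 1, activation="softmax")(x)
discriminator = keras.Model(inp, out)
print(discriminator.output_shape)  # (None, 4)
```

At inference time the fake class is discarded and the argmax over the real crop classes gives the final label, which is how the discriminator can double as the classifier.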


2021 ◽  
Vol 5 (2) ◽  
Author(s):  
Alexander Knyshov ◽  
Samantha Hoang ◽  
Christiane Weirauch

Abstract Automated insect identification systems have been explored for more than two decades but have only recently started to take advantage of powerful and versatile convolutional neural networks (CNNs). While typical CNN applications still require large training image datasets with hundreds of images per taxon, pretrained CNNs recently have been shown to be highly accurate, while being trained on much smaller datasets. We here evaluate the performance of CNN-based machine learning approaches in identifying three curated species-level dorsal habitus datasets for Miridae, the plant bugs. Miridae are of economic importance, but species-level identifications are challenging and typically rely on information other than dorsal habitus (e.g., host plants, locality, genitalic structures). Each dataset contained 2–6 species and 126–246 images in total, with a mean of only 32 images per species for the most difficult dataset. We find that closely related species of plant bugs can be identified with 80–90% accuracy based on their dorsal habitus alone. The pretrained CNN performed 10–20% better than a taxon expert who had access to the same dorsal habitus images. We find that feature extraction protocols (selection and combination of blocks of CNN layers) impact identification accuracy much more than the classifying mechanism (support vector machine and deep neural network classifiers). While our network has much lower accuracy on photographs of live insects (62%), overall results confirm that a pretrained CNN can be straightforwardly adapted to collection-based images for a new taxonomic group and successfully extract relevant features to classify insect species.
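The protocol the study emphasises (extracting features from blocks of a pretrained CNN, then classifying with an SVM) can be sketched briefly. The backbone choice (VGG16), input size, and labels below are illustrative assumptions; `weights=None` keeps the sketch self-contained, whereas the pretrained features the paper relies on would come from `weights="imagenet"`.

```python
import numpy as np
from tensorflow import keras
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# CNN backbone as a fixed feature extractor; pooling="avg" collapses the
# final convolutional block to one 512-d vector per image.
base = keras.applications.VGG16(weights=None,          # "imagenet" in practice
                                include_top=False,
                                input_shape=(64, 64, 3),
                                pooling="avg")

images = rng.random((8, 64, 64, 3))      # dummy dorsal habitus photographs
feats = base.predict(images, verbose=0)
labels = np.array([0, 1] * 4)            # two hypothetical species

# SVM classifier on the extracted CNN features
clf = SVC(kernel="linear").fit(feats, labels)
print(feats.shape)  # (8, 512)
```

Swapping which convolutional blocks feed the feature vector, as the study did, changes only the extractor stage; the SVM (or a small dense network) on top stays the same.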

