Combining radiative transfer calculations and a neural network for the remote sensing of volcanic ash using MSG/SEVIRI

2021 ◽  
Author(s):  
Luca Bugliaro ◽  
Dennis Piontek ◽  
Stephan Kox ◽  
Marius Schmidl ◽  
Bernhard Mayer ◽  
...  

Abstract. After volcanic eruptions anywhere in the world, monitoring the dispersion of ash in the atmosphere is an important task for satellite remote sensing, since ash represents a threat to air traffic. In this work we present a novel method that uses thermal observations of the SEVIRI imager aboard the geostationary Meteosat Second Generation satellite to detect ash clouds and determine their mass column concentration and top height during day and night. This approach requires the compilation of an extensive data set of synthetic SEVIRI observations to train an artificial neural network. This is done by means of the RTSIM tool, which combines atmospheric, surface and ash properties and automatically runs a large number of radiative transfer calculations for the entire SEVIRI disk. The resulting algorithm, called VADUGS (Volcanic Ash Detection Using Geostationary Satellites), has been evaluated against independent radiative transfer simulations. VADUGS detects ash-contaminated pixels with a probability of detection of 0.84 and a false alarm rate of 0.05. Ash column concentrations are provided by VADUGS with correlations up to 0.5, a scatter up to 0.6 g m−2 for concentrations smaller than 2.0 g m−2, and small overestimations in the range 5–50 % for moderate viewing zenith angles of 35–65°, but up to 300 % for satellite viewing zenith angles close to 90° or 0°. Ash top heights are mainly underestimated, with the smallest underestimation of −9 % for viewing zenith angles between 40° and 50°. Absolute errors are smaller than 70 %, with high correlation coefficients up to 0.7 for ash clouds with high mass column concentrations. A comparison against spaceborne lidar observations by CALIPSO/CALIOP confirms these results. VADUGS is run operationally at the German Weather Service, and this application is presented as well.
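To illustrate the training idea only (a regressor fitted to synthetic observations), the following minimal Python sketch fits a small neural network to made-up brightness temperatures. The channel set, network size, target relation and data are assumptions for illustration, not the actual VADUGS architecture or the RTSIM training set described in the paper.

```python
# Illustrative sketch only: a small regressor trained on synthetic thermal
# brightness temperatures, in the spirit of training on radiative transfer
# output. Channels, architecture and data below are assumptions, not the
# actual VADUGS configuration.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical training table: brightness temperatures (K) in four thermal
# channels plus viewing zenith angle, with ash mass column (g m-2) as target.
n = 20000
bt = rng.uniform(210.0, 300.0, size=(n, 4))   # e.g. 8.7, 10.8, 12.0, 13.4 um (assumed)
vza = rng.uniform(0.0, 80.0, size=(n, 1))     # viewing zenith angle (deg)
X = np.hstack([bt, vza])
# Placeholder target: in practice this comes from the radiative transfer setup.
y = np.clip(0.05 * (bt[:, 2] - bt[:, 1]) + 0.001 * vza[:, 0], 0.0, None)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out synthetic data:", model.score(X_test, y_test))
```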

2020 ◽  
Vol 38 (4A) ◽  
pp. 510-514
Author(s):  
Tay H. Shihab ◽  
Amjed N. Al-Hameedawi ◽  
Ammar M. Hamza

In this paper, to make use of complementary potential in LULC mapping, spatial data were acquired from Landsat 8 OLI sensor images taken in 2019. They have been rectified, enhanced and then classified according to Random Forest (RF) and artificial neural network (ANN) methods. Optical remote sensing images have been used to obtain information on the status of the LULC classification and extraction details. The classification of both satellite image types is used to extract features and to analyse the LULC of the study area. The results of the classification showed that the artificial neural network method outperforms the random forest method. The required image processing for the optical remote sensing data used in LULC mapping, including geometric correction and image enhancement, has been carried out. The overall accuracy when using the ANN method was 0.91 and the kappa coefficient was found to be 0.89 for the training data set, while the overall accuracy and the kappa coefficient of the test data set were found to be 0.89 and 0.87, respectively.
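As a hedged sketch of the comparison described above, the snippet below trains a Random Forest and an ANN (MLP) classifier on labelled pixel spectra and reports overall accuracy and kappa. The feature array and labels are placeholders standing in for the actual Landsat 8 OLI training samples; the classifier settings are illustrative, not the paper's.

```python
# Sketch: compare RF and ANN classifiers on labelled pixel spectra and
# report overall accuracy and kappa, as in the abstract. X and y are
# placeholders for the real Landsat 8 OLI samples.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(1)
X = rng.random((5000, 7))              # 7 OLI reflectance bands (placeholder)
y = rng.integers(0, 5, size=5000)      # 5 LULC classes (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

for name, clf in [("RF", RandomForestClassifier(n_estimators=200, random_state=1)),
                  ("ANN", MLPClassifier(hidden_layer_sizes=(32, 32),
                                        max_iter=400, random_state=1))]:
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    print(name,
          "overall accuracy:", round(accuracy_score(y_te, pred), 3),
          "kappa:", round(cohen_kappa_score(y_te, pred), 3))
```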


2021 ◽  
Vol 13 (23) ◽  
pp. 4743
Author(s):  
Wei Yuan ◽  
Wenbo Xu

The segmentation of remote sensing images by deep learning technology is the main method for remote sensing image interpretation. However, segmentation models based on convolutional neural networks cannot capture global features very well. A transformer, whose self-attention mechanism can supply each pixel with a global feature, makes up for this deficiency of the convolutional neural network. Therefore, a multi-scale adaptive segmentation network model (MSST-Net) based on a Swin Transformer is proposed in this paper. Firstly, a Swin Transformer is used as the backbone to encode the input image. Then, the feature maps of different levels are decoded separately. Thirdly, convolution is used for fusion, so that the network can automatically learn the weight of the decoding results of each level. Finally, we adjust the channels to obtain the final prediction map by using a convolution with a kernel of 1 × 1. Compared with other segmentation network models on the WHU building data set, the evaluation metrics mIoU, F1-score and accuracy are all improved. The network model proposed in this paper is a multi-scale adaptive network model that pays more attention to global features for remote sensing segmentation.
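The sketch below illustrates only the fusion step described above: per-level decoding, upsampling to a common resolution, and a 1 × 1 convolution that learns how to weight the decoded levels. The channel sizes and the plain convolutional decoders are assumptions standing in for the Swin Transformer backbone and the actual MSST-Net design.

```python
# Minimal PyTorch sketch of multi-scale fusion: decode each level, upsample,
# concatenate, and fuse with a learned 1x1 convolution. Channel sizes and
# decoders are placeholders, not the actual MSST-Net configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleFusionHead(nn.Module):
    def __init__(self, in_channels=(96, 192, 384, 768), num_classes=2):
        super().__init__()
        # One lightweight decoder per backbone level.
        self.decoders = nn.ModuleList(
            nn.Conv2d(c, 64, kernel_size=3, padding=1) for c in in_channels
        )
        # 1x1 convolution fuses the concatenated decoded levels.
        self.fuse = nn.Conv2d(64 * len(in_channels), num_classes, kernel_size=1)

    def forward(self, features, out_size):
        decoded = [
            F.interpolate(F.relu(dec(f)), size=out_size,
                          mode="bilinear", align_corners=False)
            for dec, f in zip(self.decoders, features)
        ]
        return self.fuse(torch.cat(decoded, dim=1))

# Toy usage with random feature maps standing in for backbone outputs.
feats = [torch.randn(1, c, s, s) for c, s in zip((96, 192, 384, 768), (64, 32, 16, 8))]
logits = MultiScaleFusionHead()(feats, out_size=(256, 256))
print(logits.shape)  # torch.Size([1, 2, 256, 256])
```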


Author(s):  
K. H. Lee ◽  
K. T. Lee

The paper presents a method currently under development for volcanic ash detection and retrieval for the Geostationary Korea Multi-Purpose Satellite (GK-2A). With the launch of GK-2A, aerosol remote sensing, including dust and smoke, will begin a new era of geostationary remote sensing. The Advanced Meteorological Imager (AMI) onboard GK-2A will offer capabilities for volcanic ash remote sensing similar to those currently provided by the Moderate Resolution Imaging Spectroradiometer (MODIS). The algorithm is based on the physical principles used for current polar-orbiting and geostationary imagers, modified for AMI. Volcanic ash is detected from visible and infrared channel radiances and from the comparison of satellite-observed radiances with those calculated by a radiative transfer model. The volcanic ash retrievals are performed operationally every 15 min for pixel sizes of 2 km. The algorithm currently under development uses a multichannel approach to estimate the effective radius and aerosol optical depth (AOD) simultaneously, both over water and land. The algorithm has been tested with proxy data generated from existing satellite observations and forward radiative transfer simulations. Operational assessment of the algorithm will be made after the launch of GK-2A, scheduled in 2018.
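A common first step in infrared ash detection from imagers is the reverse-absorption (split-window) brightness temperature difference between the ~11 and ~12 μm channels. The sketch below shows that generic idea only; it is not the GK-2A/AMI algorithm itself, and the threshold value is an assumption.

```python
# Generic illustration of split-window ash flagging (reverse absorption):
# silicate ash tends to make T(11 um) - T(12 um) negative, whereas water/ice
# clouds tend to make it positive. This is a textbook first step, not the
# GK-2A/AMI algorithm; the threshold below is an assumption.
import numpy as np

def flag_ash(bt11, bt12, threshold=-0.5):
    """Return a boolean mask where the 11-12 um BTD falls below a threshold (K)."""
    btd = np.asarray(bt11) - np.asarray(bt12)
    return btd < threshold

# Toy example with two pixels: one ash-like, one water-cloud-like.
print(flag_ash([265.0, 255.0], [267.0, 253.0]))  # [ True False]
```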


2021 ◽  
Vol 87 (8) ◽  
pp. 577-591
Author(s):  
Fengpeng Li ◽  
Jiabao Li ◽  
Wei Han ◽  
Ruyi Feng ◽  
Lizhe Wang

Inspired by the outstanding achievements of deep learning, supervised deep learning representation methods for high-spatial-resolution remote sensing image scene classification have obtained state-of-the-art performance. However, supervised deep learning representation methods need a considerable amount of labeled data to capture class-specific features, which limits the application of deep learning-based methods when only a few labeled training samples are available. An unsupervised deep learning representation method for high-resolution remote sensing image scene classification is proposed in this work to address this issue. The proposed method, based on contrastive learning, narrows the distance between positive view pairs (color channels belonging to the same image) and widens the gap between negative view pairs (color channels from different images) to obtain class-specific representations of the input data without any supervised information. The classifier uses features extracted by the convolutional neural network (CNN)-based feature extractor, together with the label information of the training data, to define the space of each category, and then makes predictions in the testing procedure using linear regression. Compared with existing unsupervised deep learning representation methods for high-resolution remote sensing image scene classification, the contrastive learning CNN achieves state-of-the-art performance on three benchmark data sets of different scales: the small-scale RSSCN7 data set, the mid-scale aerial image data set, and the large-scale NWPU-RESISC45 data set.
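The following is a minimal sketch of an NT-Xent-style contrastive loss in which the two "views" of each image are embeddings computed from different color channels. It illustrates the idea of pulling same-image views together and pushing different images apart; it is not the paper's exact loss, encoder, or training setup.

```python
# Minimal sketch of a contrastive (NT-Xent-style) loss where the two views
# of each image are embeddings of different colour channels. Illustrative
# only; not the paper's exact formulation.
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, temperature=0.5):
    """z1, z2: (N, D) embeddings of two views of the same N images."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)          # (2N, D)
    sim = z @ z.t() / temperature                                # cosine similarities
    n = z1.shape[0]
    mask = torch.eye(2 * n, dtype=torch.bool)
    sim = sim.masked_fill(mask, float("-inf"))                   # remove self-pairs
    # The positive for sample i is its other view: i+n (or i-n).
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)

# Toy usage with random embeddings standing in for CNN features of two channels.
z_red, z_green = torch.randn(8, 128), torch.randn(8, 128)
print(contrastive_loss(z_red, z_green).item())
```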


2016 ◽  
Vol 9 (5) ◽  
pp. 2335-2344 ◽  
Author(s):  
Gutemberg Borges França ◽  
Manoel Valdonel de Almeida ◽  
Alessana C. Rosette

Abstract. This paper presents a novel model, based on neural network techniques, to produce short-term and location-specific forecasts of significant instability for flights in the terminal area of Galeão Airport, Rio de Janeiro, Brazil. Twelve years of data were used for neural network training, validation, and testing. The data originate from four sources: (1) hourly meteorological observations from surface meteorological stations at five airports distributed around the study area; (2) atmospheric profiles collected twice a day at the meteorological station at Galeão Airport; (3) rain rate data collected from a network of 29 rain gauges in the study area; and (4) lightning data regularly collected by national detection networks. An investigation was undertaken regarding the capability of a neural network to produce early warning signs – or to serve as a nowcasting tool – for significant instability events in the study area. The automated nowcasting model was tested using results from five categorical statistics, indicated in parentheses for forecasts of the first, second, and third hours, respectively, namely proportion correct (0.99, 0.97, and 0.94), BIAS (1.10, 1.42, and 2.31), probability of detection (0.79, 0.78, and 0.67), false-alarm ratio (0.28, 0.45, and 0.73), and threat score (0.61, 0.47, and 0.25). Possible sources of error related to the test procedure are presented and discussed. The test showed that the proposed model (or neural network) can capture the physical content within the data set, and its performance is quite encouraging for the first and second hours for nowcasting significant instability events in the study area.
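The five categorical scores quoted above are standard measures derived from a 2 × 2 contingency table (hits, false alarms, misses, correct negatives). The sketch below shows those standard formulas; the example counts are placeholders, not the study's verification data.

```python
# Sketch of the standard categorical scores used in the abstract, computed
# from a 2x2 contingency table: hits (a), false alarms (b), misses (c) and
# correct negatives (d). The example counts are placeholders.
def categorical_scores(a, b, c, d):
    total = a + b + c + d
    return {
        "proportion_correct": (a + d) / total,
        "bias": (a + b) / (a + c),            # frequency bias
        "pod": a / (a + c),                   # probability of detection
        "far": b / (a + b),                   # false-alarm ratio
        "threat_score": a / (a + b + c),      # critical success index
    }

# Placeholder counts for illustration only.
print(categorical_scores(a=80, b=30, c=20, d=870))
```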


2021 ◽  
Vol 13 (2) ◽  
pp. 294
Author(s):  
Meng Chen ◽  
Jianjun Wu ◽  
Leizhen Liu ◽  
Wenhui Zhao ◽  
Feng Tian ◽  
...  

At present, convolutional neural networks (CNNs) have been widely used for building extraction from remote sensing imagery (RSI), but some bottlenecks remain. On the one hand, previous networks with complex structures contain a very large number of parameters, which occupy a lot of memory and consume much time during the training process. On the other hand, the low-level features extracted by shallow layers and the abstract features extracted by deep layers of an artificial neural network cannot be fully fused, which leads to inaccurate building extraction from RSI. To alleviate these disadvantages, a dense residual neural network (DR-Net) is proposed in this paper. DR-Net uses a DeepLabv3+ encoder/decoder backbone in combination with densely connected convolutional network (DCNN) and residual network (ResNet) structures. Compared with DeepLabv3+ (containing about 41 million parameters) and BRRNet (containing about 17 million parameters), DR-Net contains only about 9 million parameters, so the number of parameters is reduced considerably. Experimental results on both the WHU Building Dataset and the Massachusetts Building Dataset show that DR-Net performs better in building extraction than the other two state-of-the-art methods. On the WHU Building Dataset, the Intersection over Union (IoU) increased by 2.4% and the F1 score by 1.4%; on the Massachusetts Building Dataset, the IoU increased by 3.8% and the F1 score by 2.9%.
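The IoU and F1 metrics reported above are computed pixel-wise from binary prediction and ground-truth masks. The short sketch below shows those definitions; the toy masks are placeholders standing in for real model output.

```python
# Sketch of pixel-wise IoU and F1 for building extraction, computed from
# binary prediction and ground-truth masks. The toy masks are placeholders.
import numpy as np

def iou_f1(pred, truth):
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    tp = np.sum(pred & truth)            # true positives
    fp = np.sum(pred & ~truth)           # false positives
    fn = np.sum(~pred & truth)           # false negatives
    iou = tp / (tp + fp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return iou, f1

pred  = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
print(iou_f1(pred, truth))  # (0.5, 0.666...)
```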


Author(s):  
Chippy Babu

Remote sensing image retrieval (RSIR) is a fundamental task in remote sensing. Most content-based remote sensing image retrieval (CBRSIR) approaches take a simple distance as the similarity criterion. A retrieval method based on a weighted distance and basic features of a Convolutional Neural Network (CNN) is proposed in this letter. The method contains two stages. First, in the offline stage, the pretrained CNN is fine-tuned with some labelled images from the target data set, then used to extract CNN features and to label the images in the retrieval data set. Second, in the online stage, we extract features of the query image using the fine-tuned CNN model, calculate the weight of each image class, and apply these weights to the distance between the query image and the retrieved images. Experiments were conducted on two remote sensing image retrieval data sets. Compared with the state-of-the-art methods, the proposed method significantly improves retrieval performance.
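The sketch below illustrates the general idea of class-weighted distance retrieval: Euclidean distances between the query feature and database features are scaled by per-class weights derived from the query's class probabilities, so likely classes are favoured. The specific weighting rule shown is an illustrative assumption, not necessarily the letter's exact formula.

```python
# Sketch of class-weighted distance retrieval: distances from the query to
# database images are scaled by weights derived from the query's class
# probabilities. The weighting rule here is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(2)
db_feats = rng.random((100, 256))            # CNN features of the retrieval set
db_labels = rng.integers(0, 10, size=100)    # class labels assigned offline
query_feat = rng.random(256)                 # CNN feature of the query image
query_probs = rng.dirichlet(np.ones(10))     # class probabilities for the query

# Classes the query likely belongs to get smaller effective distances.
class_weights = 1.0 - query_probs            # assumption: weight = 1 - p(class)
dists = np.linalg.norm(db_feats - query_feat, axis=1) * class_weights[db_labels]

top10 = np.argsort(dists)[:10]               # indices of the retrieved images
print(top10)
```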


2016 ◽  
Author(s):  
C. J. Cox ◽  
P. M. Rowe ◽  
S. P. Neshyba ◽  
V. P. Walden

Abstract. Retrievals of cloud microphysical and macrophysical properties from ground-based and satellite-based infrared remote sensing instruments are critical for understanding clouds. However, retrieval uncertainties are difficult to quantify without a standard for comparison. This is particularly true over the polar regions where surface-based data for a cloud climatology are sparse, yet clouds represent a major source of uncertainty in weather and climate models. We describe a synthetic high-spectral resolution infrared data set that is designed to facilitate validation and development of cloud retrieval algorithms for surface- and satellite-based remote sensing instruments. Since the data set is calculated using pre-defined cloudy atmospheres, the properties of the cloud and atmospheric state are known a priori. The atmospheric state used for the simulations is drawn from radiosonde measurements made at the North Slope of Alaska (NSA) Atmospheric Radiation Measurement (ARM) site at Barrow, Alaska (71.325° N, 156.615° W), a location that is generally representative of the western Arctic. The cloud properties for each simulation are selected from statistical distributions derived from past field measurements. Upwelling (at 60 km) and downwelling (at the surface) infrared spectra are simulated for 222 cloudy cases from 50–3000 cm−1 (3.3 to 200 μm) at monochromatic (line-by-line) resolution at a spacing of ~ 0.01 cm−1 using the Line-by-line Radiative Transfer Model (LBLRTM) and the discrete-ordinate-method radiative transfer code (DISORT). These spectra are freely available for interested researchers from the ACADIS data repository (doi:10.5065/D61J97TT).


Symmetry ◽  
2021 ◽  
Vol 13 (8) ◽  
pp. 1467
Author(s):  
Yuyao Huang ◽  
Yizhou Li ◽  
Yuan Liu ◽  
Runyu Jing ◽  
Menglong Li

Single-cell ATAC-seq (scATAC-seq), as the single-cell extension of ATAC-seq, provides a novel method for probing open chromatin sites. Currently, research on scATAC-seq faces the problems of high dimensionality and the inherent sparsity of the generated data. Recently, several works proposed the use of autoencoder (encoder–decoder) architectures, a type of symmetric neural network, as well as non-negative matrix factorization methods to characterize the high-dimensional data. To evaluate the performance of multiple methods, in this work we performed a multiple comparison of approaches for characterizing scATAC-seq data, based on four kinds of autoencoders, known as symmetric neural networks, and two kinds of matrix factorization methods. Different sizes of latent features were used to generate the UMAP plots and for further K-means clustering. Using a gold-standard data set, we explored the performance of the methods and the number of latent features in a comprehensive way. Finally, we briefly discuss the underlying difficulties and future directions for scATAC-seq characterization. As a result, the method designed for handling sparsity outperforms the other tools on the generated dataset.
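The sketch below shows the downstream evaluation step described above: latent features from any of the compared methods are embedded with UMAP, clustered with K-means, and scored against gold-standard labels (the adjusted Rand index is used here as one common choice). The latent matrix and labels are placeholders, and the umap-learn package is assumed to be installed.

```python
# Sketch of the evaluation pipeline: latent features -> UMAP embedding ->
# K-means clustering, scored against gold-standard cell labels. The latent
# matrix and labels are placeholders; umap-learn is assumed installed.
import numpy as np
import umap
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(3)
latent = rng.random((500, 32))               # latent features from any method
true_labels = rng.integers(0, 8, size=500)   # gold-standard cell types (placeholder)

embedding = umap.UMAP(n_components=2, random_state=3).fit_transform(latent)
clusters = KMeans(n_clusters=8, n_init=10, random_state=3).fit_predict(embedding)

print("ARI vs gold standard:", adjusted_rand_score(true_labels, clusters))
```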

