A scSE-LinkNet Deep Learning Model for Daytime Sea Fog Detection

2021 ◽  
Vol 13 (24) ◽  
pp. 5163
Author(s):  
Xiaofei Guo ◽  
Jianhua Wan ◽  
Shanwei Liu ◽  
Mingming Xu ◽  
Hui Sheng ◽  
...  

Sea fog is a hazardous weather phenomenon that endangers maritime transportation. The accuracy of threshold-based sea fog detection is limited by time and region. In comparison, deep learning methods learn the features of objects through successive network layers and can therefore extract fog accurately while being less affected by temporal and spatial factors. This study proposes a scSE-LinkNet model for daytime sea fog detection that leverages residual blocks to encode feature maps and an attention module to learn the features of sea fog data by considering both spectral and spatial information. With the help of satellite lidar data from the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP), a ground-truth sample database was extracted from Moderate Resolution Imaging Spectroradiometer (MODIS) L1B data. The scSE-LinkNet was trained on the training set, and quantitative evaluation was performed on the test set. The probability of detection (POD), false alarm rate (FAR), critical success index (CSI), and Heidke skill score (HSS) were 0.924, 0.143, 0.800, and 0.864, respectively. Compared with other neural networks (FCN, U-Net, and LinkNet), the CSI of scSE-LinkNet improved by up to nearly 8%. Moreover, the sea fog detection results were consistent with the measured data and CALIOP products.
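The reported scores follow the standard 2 × 2 contingency-table definitions. The sketch below computes them from binary fog masks; the boolean-mask inputs and variable names are illustrative, not the authors' code.

```python
import numpy as np

def fog_detection_scores(pred, truth):
    """Contingency-table skill scores for a binary fog mask.

    pred, truth: boolean arrays of the same shape (True = fog).
    A minimal sketch; degenerate cases (e.g. no fog at all) would
    need guards against division by zero.
    """
    h = np.sum(pred & truth)    # hits
    f = np.sum(pred & ~truth)   # false alarms
    m = np.sum(~pred & truth)   # misses
    c = np.sum(~pred & ~truth)  # correct negatives

    pod = h / (h + m)           # probability of detection
    far = f / (h + f)           # false alarm rate
    csi = h / (h + f + m)       # critical success index
    hss = 2 * (h * c - f * m) / (
        (h + m) * (m + c) + (h + f) * (f + c)
    )                           # Heidke skill score
    return pod, far, csi, hss
```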

2020 ◽  
Vol 12 (9) ◽  
pp. 1521
Author(s):  
Han-Sol Ryu ◽  
Sungwook Hong

Many previous studies have attempted to distinguish fog from clouds using low-orbit and geostationary satellite observations from visible (VIS) to longwave infrared (LWIR) bands. However, clouds and fog have often been misidentified because of their similar spectral features. Recently, advanced meteorological geostationary satellites with improved spectral, spatial, and temporal resolutions, including Himawari-8/9, GOES-16/17, and GeoKompsat-2A, have become operational. Accordingly, this study presents an improved algorithm for detecting daytime sea fog using one VIS and one near-infrared (NIR) band of the Advanced Himawari Imager (AHI) of the Himawari-8 satellite. We propose a regression-based relationship for sea fog detection using a combination of the Normalized Difference Snow Index (NDSI) and reflectance at the green band of the AHI. Several case studies were performed for various foggy and cloudy weather conditions in the Yellow Sea over three years (2017–2019). The results of our algorithm showed successful detection of sea fog without any cloud mask information. The pixel-level comparison against sea fog detection based on the shortwave infrared (SWIR) band (3.9 μm) and the brightness temperature difference between the SWIR and LWIR bands of the AHI showed high statistical scores for probability of detection (POD), post agreement (PAG), critical success index (CSI), and Heidke skill score (HSS). Consequently, the proposed algorithm for daytime sea fog detection can be effective in daytime, particularly twilight, conditions for many satellites equipped with VIS and NIR bands.
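As a rough illustration of the index-plus-reflectance approach, the sketch below combines an NDSI-style index computed from the green and NIR bands with the green-band reflectance in a linear score. The band pairing, the coefficients a and b, and the threshold are hypothetical placeholders; the paper's fitted regression is not reproduced in the abstract.

```python
import numpy as np

def ndsi(r_green, r_nir):
    # NDSI-style index from green (VIS) and NIR reflectances;
    # the exact band pairing used in the paper is an assumption here
    return (r_green - r_nir) / (r_green + r_nir + 1e-6)

def sea_fog_mask(r_green, r_nir, a=1.0, b=-1.0, thresh=0.2):
    # hypothetical linear relationship: a, b, and thresh are
    # placeholders for illustration, not the published coefficients
    score = a * r_green + b * ndsi(r_green, r_nir)
    return score > thresh
```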


2021 ◽  
Author(s):  
Chao Lu ◽  
Fansheng Chen ◽  
Xiaofeng Su ◽  
Dan Zeng

Abstract Infrared technology is widely used in precision guidance and mine detection, since it captures the heat radiated by the target object. We use infrared (IR) thermography to obtain infrared images of buried objects. Compared with visible images, infrared images have poor resolution, low contrast, and a fuzzy visual effect, which makes it difficult to segment the target object, especially against complex backgrounds. Under these conditions, traditional segmentation methods perform poorly on infrared images, since they are easily disturbed by noise and non-target objects. With the advance of deep convolutional neural networks (CNNs), deep learning-based methods have made significant progress on semantic segmentation tasks. However, few of them address infrared image semantic segmentation, which is a more challenging scenario than visible imagery. Moreover, the lack of infrared image datasets is a problem for current deep learning-based methods. To solve these problems, we propose a multi-scale attentional feature fusion (MS-AFF) module for infrared image semantic segmentation. Specifically, we integrate a series of feature maps from different levels through an atrous spatial pyramid structure, which gives the model rich representations of infrared images. Besides, a global spatial information attention module is employed to let the model focus on the target region and reduce background disturbance in infrared images. In addition, we propose an infrared segmentation dataset based on an infrared thermal imaging system. Extensive experiments on the infrared image segmentation dataset show the superiority of our method.
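The abstract names two ingredients: an atrous spatial pyramid for multi-scale fusion and a spatial attention gate. A minimal PyTorch sketch combining the two follows; the channel widths, dilation rates, and kernel sizes are assumptions, since the paper's exact MS-AFF design is not given in the abstract.

```python
import torch
import torch.nn as nn

class MultiScaleAttentionFusion(nn.Module):
    """ASPP-style multi-scale fusion gated by a spatial attention map.

    A sketch under assumed layer sizes, not the published MS-AFF module.
    """
    def __init__(self, in_ch, out_ch, rates=(1, 2, 4, 8)):
        super().__init__()
        # parallel dilated 3x3 branches see increasingly large contexts
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, 1)
        # a single-channel sigmoid map gates the fused features spatially
        self.attn = nn.Sequential(
            nn.Conv2d(out_ch, 1, 7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        fused = self.fuse(feats)
        return fused * self.attn(fused)
```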


2021 ◽  
Vol 13 (7) ◽  
pp. 1246
Author(s):  
Kyle B. Larson ◽  
Aaron R. Tuor

Cheatgrass (Bromus tectorum) invasion is driving an emerging cycle of increased fire frequency and irreversible loss of wildlife habitat in the western US. Yet detailed spatial information about its occurrence is still lacking for much of its presumably invaded range. Deep learning (DL) has demonstrated success for remote sensing applications but is less tested on more challenging tasks such as identifying biological invasions from sub-pixel phenomena. We compare two DL architectures and the more conventional Random Forest and Logistic Regression methods to improve upon a previous effort to map cheatgrass occurrence at >2% canopy cover. High-dimensional sets of biophysical, MODIS, and Landsat-7 ETM+ predictor variables are also compared to evaluate different multi-modal data strategies. All model configurations improved results relative to the case study, and accuracy generally improved when data from both sensors were combined with biophysical data. Cheatgrass occurrence is mapped at 30 m ground sample distance (GSD) with an estimated 78.1% accuracy, compared with 250 m GSD and 71% map accuracy in the case study. Furthermore, DL is shown to be competitive with well-established machine learning methods in a limited data regime, suggesting it can be an effective tool for mapping biological invasions and, more broadly, for multi-modal remote sensing applications.
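For the conventional baselines, a comparison along these lines takes only a few lines of scikit-learn. The snippet below uses synthetic placeholder arrays in place of the stacked biophysical/MODIS/ETM+ predictors, and the hyperparameters are illustrative defaults rather than the study's tuned settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# placeholders standing in for the stacked predictor variables per
# 30 m pixel (X) and binary cheatgrass occurrence at >2% cover (y)
rng = np.random.default_rng(0)
X = rng.random((1000, 40))
y = rng.integers(0, 2, 1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for name, model in [
    ("random forest", RandomForestClassifier(n_estimators=500, random_state=0)),
    ("logistic regression", LogisticRegression(max_iter=1000)),
]:
    model.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, model.predict(X_te)))
```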


Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5312
Author(s):  
Yanni Zhang ◽  
Yiming Liu ◽  
Qiang Li ◽  
Jianzhong Wang ◽  
Miao Qi ◽  
...  

Recently, deep learning-based image deblurring and deraining have been well developed. However, most of these methods fail to distill useful features. Moreover, exploiting detailed image features in a deep learning framework usually requires a massive number of parameters, which inevitably imposes a high computational burden on the network. To solve these problems, we propose a lightweight fusion distillation network (LFDN) for image deblurring and deraining. The proposed LFDN is designed as an encoder–decoder architecture. In the encoding stage, the image features are reduced to various small-scale spaces for multi-scale information extraction and fusion without much information loss. A feature distillation normalization block is then placed at the beginning of the decoding stage, which enables the network to continuously distill and screen valuable channel information from the feature maps. In addition, an attention mechanism carries out an information fusion strategy between distillation modules and feature channels. By fusing these different sources of information, our network achieves state-of-the-art image deblurring and deraining results with fewer parameters and outperforms existing methods in model complexity.
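The channel screening described here is commonly realized with a squeeze-and-excitation-style gate. The sketch below shows one such gate as a stand-in; the reduction ratio and layer layout are assumptions, not the LFDN's published block.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel gate: global pooling
    summarizes each channel, a small bottleneck scores it, and a
    sigmoid reweights the feature map. A sketch standing in for the
    paper's distillation/attention fusion, whose exact design is not
    given in the abstract."""
    def __init__(self, ch, reduction=8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),             # (B, ch, 1, 1) summary
            nn.Conv2d(ch, ch // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)  # emphasize informative channels
```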


Author(s):  
Qiang Yu ◽  
Feiqiang Liu ◽  
Long Xiao ◽  
Zitao Liu ◽  
Xiaomin Yang

Deep-learning (DL)-based methods are of growing importance in the field of single image super-resolution (SISR). However, the practical application of these DL-based models remains a problem due to their heavy computation and storage requirements. The powerful feature maps of hidden layers in convolutional neural networks (CNNs) help the model learn useful information, but there exists redundancy among feature maps that can be further exploited. To address these issues, this paper proposes a lightweight efficient feature generating network (EFGN) for SISR, built from efficient feature generating blocks (EFGBs). Specifically, the EFGB applies inexpensive operations to the original features to produce additional feature maps with only a slight increase in parameters. With the help of these extra feature maps, the network can extract more useful information from low-resolution (LR) images to reconstruct the desired high-resolution (HR) images. Experiments conducted on the benchmark datasets demonstrate that the proposed EFGN outperforms other deep-learning-based methods in most cases while possessing relatively low model complexity. Additionally, running-time measurements indicate the feasibility of real-time monitoring.
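Generating extra maps from cheap operations is reminiscent of ghost-module designs: a small standard convolution produces primary maps, and inexpensive depthwise convolutions derive the rest. The sketch below follows that pattern as an assumption about what the "plain operations" could look like; it is not the paper's EFGB.

```python
import torch
import torch.nn as nn

class EfficientFeatureBlock(nn.Module):
    """Cheap feature generation sketch (assumes out_ch is even):
    half the output channels come from a standard conv, the other
    half from a depthwise conv applied to those primary maps, so the
    extra maps cost very few parameters."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        primary = out_ch // 2
        self.primary = nn.Conv2d(in_ch, primary, 3, padding=1)
        # one cheap depthwise filter per primary map
        self.cheap = nn.Conv2d(primary, out_ch - primary, 3,
                               padding=1, groups=primary)

    def forward(self, x):
        p = self.primary(x)
        return torch.cat([p, self.cheap(p)], dim=1)
```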


2021 ◽  
Vol 13 (8) ◽  
pp. 1602
Author(s):  
Qiaoqiao Sun ◽  
Xuefeng Liu ◽  
Salah Bourennane

Deep learning models have strong feature-learning abilities and have been successfully applied to hyperspectral images (HSIs). However, training most deep learning models requires labeled samples, and collecting labeled samples for HSI is labor-intensive. In addition, single-level features from a single layer are usually considered, which may result in the loss of important information. Using multiple networks to obtain multi-level features is one solution, but at the cost of longer training time and higher computational complexity. To solve these problems, this paper proposes a novel unsupervised multi-level feature extraction framework based on a three-dimensional convolutional autoencoder (3D-CAE). The designed 3D-CAE is stacked from fully 3D convolutional and 3D deconvolutional layers, which allows the spectral-spatial information of targets to be mined simultaneously. Moreover, the 3D-CAE can be trained in an unsupervised way without labeled samples, and the multi-level features are obtained directly from encoded layers at different scales and resolutions, which is more efficient than using multiple networks. The effectiveness of the proposed multi-level features is verified on two hyperspectral data sets. The results demonstrate that the proposed method holds great promise for unsupervised feature learning and can further improve hyperspectral classification compared with single-level features.
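A minimal version of such an autoencoder, trained purely by reconstruction so that no labels are needed, might look like the PyTorch sketch below. The depth, channel counts, and patch size are illustrative assumptions; the paper's 3D-CAE is not specified at this level in the abstract.

```python
import torch
import torch.nn as nn

class CAE3D(nn.Module):
    """Minimal 3D convolutional autoencoder for an HSI patch shaped
    (batch, 1, bands, height, width). Two encoder stages downsample
    by 2 each; the decoder mirrors them with transposed convs."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(32, 16, 3, stride=2, padding=1,
                               output_padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(16, 1, 3, stride=2, padding=1,
                               output_padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)     # multi-level features can be tapped here
        return self.decoder(z)  # reconstruction target is the input itself

# unsupervised training signal: reconstruct the spectral-spatial cube
model = CAE3D()
x = torch.randn(2, 1, 32, 32, 32)  # toy patch: 32 bands, 32x32 pixels
loss = nn.functional.mse_loss(model(x), x)
```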


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Maiki Higa ◽  
Shinya Tanahara ◽  
Yoshitaka Adachi ◽  
Natsumi Ishiki ◽  
Shin Nakama ◽  
...  

Abstract In this report, we propose a deep learning technique for high-accuracy estimation of the intensity class of a typhoon from a single satellite image, by incorporating meteorological domain knowledge. By using the Visual Geometry Group's model, VGG-16, with images preprocessed with fisheye distortion, which enhances a typhoon's eye, eyewall, and cloud distribution, we achieved much higher classification accuracy than that of a previous study, even with sequential-split validation. Through comparison of t-distributed stochastic neighbor embedding (t-SNE) plots for the feature maps of VGG with the original satellite images, we also verified that the fisheye preprocessing facilitated cluster formation, suggesting that our model could successfully extract image features related to the typhoon intensity class. Moreover, gradient-weighted class activation mapping (Grad-CAM) was applied to highlight the eye and the cloud distributions surrounding the eye, which are important regions for intensity classification; the results suggest that our model qualitatively gained a viewpoint similar to that of domain experts. A series of analyses revealed that a data-driven approach using only deep learning has limitations, and that integrating domain knowledge could bring new breakthroughs.
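The fisheye preprocessing amounts to a radial remap that magnifies the image centre, where the typhoon eye sits in a centred crop. The sketch below implements one plausible distortion of that kind with SciPy; the exact distortion model and strength used in the paper are assumptions here.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def fisheye(img, strength=0.5):
    """Radial 'fisheye' remap magnifying the centre of a 2D image.

    Output pixels at radius r sample from a smaller source radius
    r_src = rmax * (r / rmax)**(1 + strength), so central structure
    (eye, eyewall) is spread over more pixels. The power-law model
    and default strength are illustrative assumptions.
    """
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    cy, cx = (h - 1) / 2, (w - 1) / 2
    dy, dx = y - cy, x - cx
    r = np.sqrt(dx**2 + dy**2)
    rmax = r.max()
    r_src = rmax * (r / rmax) ** (1 + strength)
    scale = np.divide(r_src, r, out=np.ones_like(r), where=r > 0)
    coords = np.array([cy + dy * scale, cx + dx * scale])
    return map_coordinates(img, coords, order=1)  # bilinear resample
```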


2010 ◽  
Vol 27 (3) ◽  
pp. 409-427 ◽  
Author(s):  
Kun Tao ◽  
Ana P. Barros

Abstract The objective of spatial downscaling strategies is to increase the information content of coarse datasets at smaller scales. In the case of quantitative precipitation estimation (QPE) for hydrological applications, the goal is to close the scale gap between the spatial resolution of coarse datasets (e.g., gridded satellite precipitation products at resolution L × L) and the high resolution (l × l; L ≫ l) necessary to capture the spatial features that determine spatial variability of water flows and water stores in the landscape. In essence, the downscaling process consists of weaving subgrid-scale heterogeneity over a desired range of wavelengths in the original field. The defining question is, which properties, statistical and otherwise, of the target field (the known observable at the desired spatial resolution) should be matched, with the caveat that downscaling methods be as general as possible and therefore ideally without case-specific constraints and/or calibration requirements? Here, the attention is focused on two simple fractal downscaling methods using iterated function systems (IFS) and fractal Brownian surfaces (FBS) that meet this requirement. The two methods were applied to disaggregate spatially 27 summertime convective storms in the central United States during 2007 at three consecutive times (1800, 2100, and 0000 UTC, thus 81 fields overall) from the Tropical Rainfall Measuring Mission (TRMM) version 6 (V6) 3B42 precipitation product (∼25-km grid spacing) to the same resolution as the NCEP stage IV products (∼4-km grid spacing). Results from bilinear interpolation are used as the control. A fundamental distinction between IFS and FBS is that the latter implies a distribution of downscaled fields and thus an ensemble solution, whereas the former provides a single solution. The downscaling effectiveness is assessed using fractal measures (the spectral exponent β, fractal dimension D, Hurst coefficient H, and roughness amplitude R) and traditional operational skill scores [false alarm rate (FR), probability of detection (PD), threat score (TS), and Heidke skill score (HSS)], as well as bias and the root-mean-square error (RMSE). The results show that both IFS and FBS fractal interpolation perform well with regard to operational skill scores, and they meet the additional requirement of generating structurally consistent fields. Furthermore, confidence intervals can be directly generated from the FBS ensemble. The results were used to diagnose errors relevant for hydrometeorological applications, in particular a spatial displacement with characteristic length of at least 50 km (2500 km2) in the location of peak rainfall intensities for the cases studied.
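Of the two generators, the fractal Brownian surface is the one that yields an ensemble. A common way to synthesize an FBS with a prescribed Hurst coefficient is spectral synthesis, sketched below; the conditioning on the coarse TRMM field that the actual disaggregation requires is omitted.

```python
import numpy as np

def fbm_surface(n, hurst, seed=None):
    """Fractional Brownian surface via spectral synthesis: random
    phases filtered by a power-law amplitude spectrum |k|^-(H+1),
    i.e. power spectrum ~ |k|^-(2H+2). One common FBS construction,
    not the paper's full disaggregation scheme."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(n)[:, None]
    ky = np.fft.fftfreq(n)[None, :]
    k = np.sqrt(kx**2 + ky**2)
    k[0, 0] = 1.0                        # avoid division by zero at DC
    amp = k ** (-(hurst + 1.0))          # power-law amplitude spectrum
    phase = np.exp(2j * np.pi * rng.random((n, n)))
    field = np.fft.ifft2(amp * phase).real
    return (field - field.mean()) / field.std()

# each seed yields a new member of a downscaling ensemble
member = fbm_surface(256, hurst=0.7, seed=1)
```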


2013 ◽  
Vol 6 (1) ◽  
pp. 1269-1310 ◽  
Author(s):  
T. Zinner ◽  
C. Forster ◽  
E. de Coning ◽  
H.-D. Betz

Abstract. In this manuscript, recent changes to the DLR METEOSAT thunderstorm TRacking And Monitoring algorithm (Cb-TRAM) are presented, together with a validation of Cb-TRAM against the European ground-based LIghtning NETwork (LINET) data of Nowcast GmbH and Lightning Detection Network (LDN) data of the South African Weather Service (SAWS). The validation is conducted with the well-known skill scores probability of detection (POD) and false alarm ratio (FAR), on the basis of METEOSAT/SEVIRI pixels as well as of thunderstorm objects. The values obtained demonstrate the limits of Cb-TRAM specifically, as well as the limits of satellite methods in general that are based on thermal emission and solar reflectivity information from thunderstorm tops. Although the climatic conditions and the occurrence of thunderstorms are quite different for Europe and South Africa, the quality score values are similar. Our conclusion is that Cb-TRAM provides robust results of well-defined quality for very different climatic regimes. The POD for a thunderstorm with intense lightning is about 80% during the day. The FAR for a Cb-TRAM-detected thunderstorm which is not at least close to intense lightning activity is about 50%; if the proximity to any lightning activity is evaluated, the FAR is much lower at about 15%. Pixel-based analysis shows that the detected thunderstorm object size is not indiscriminately large, but well within the physical limitations of the method. Nighttime POD and FAR are somewhat worse, as the detection scheme cannot use high-resolution visible information. Nowcasting scores show useful values up to approximately 30 min.
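The object-based part of such a validation reduces to matching detected storm objects against lightning locations. The sketch below does this with centroids and a fixed proximity radius; Cb-TRAM's actual matching works with object outlines, so the centroid-plus-radius rule and the 15 km default are simplifying assumptions.

```python
import numpy as np

def object_scores(storm_xy, strike_xy, radius_km=15.0):
    """Object-based POD/FAR sketch: a detected storm counts as a hit
    if any lightning strike falls within radius_km of its centroid,
    and a strike counts as detected if a storm centroid lies nearby.
    Inputs are (n, 2) arrays of x/y coordinates in km."""
    d = np.linalg.norm(storm_xy[:, None, :] - strike_xy[None, :, :], axis=2)
    storm_hit = (d <= radius_km).any(axis=1)   # storm near lightning?
    strike_hit = (d <= radius_km).any(axis=0)  # lightning near a storm?
    far = 1.0 - storm_hit.mean()               # detections without lightning
    pod = strike_hit.mean()                    # lightning that was detected
    return pod, far

storms = np.array([[10.0, 20.0], [50.0, 60.0]])   # detected centroids (km)
strikes = np.array([[12.0, 21.0], [80.0, 80.0]])  # lightning strokes (km)
pod, far = object_scores(storms, strikes)
```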

