Complex Approach of High-Resolution Multispectral Data Engineering for Deep Neural Network Processing

Author(s):  
Volodymyr Hnatushenko ◽  
Vadym Zhernovyi
2018 ◽  
Vol 4 (8) ◽  
pp. eaat5218 ◽  
Author(s):  
Steven G. Worswick ◽  
James A. Spencer ◽  
Gunnar Jeschke ◽  
Ilya Kuprov

2018 ◽  
Vol 468 ◽  
pp. 142-154 ◽  
Author(s):  
Hui Liu ◽  
Jun Xu ◽  
Yan Wu ◽  
Qiang Guo ◽  
Bulat Ibragimov ◽  
...  

2020 ◽  
Vol 12 (2) ◽  
pp. 316
Author(s):  
Vesta Afzali Gorooh ◽  
Subodh Kalia ◽  
Phu Nguyen ◽  
Kuo-lin Hsu ◽  
Soroosh Sorooshian ◽  
...  

Satellite remote sensing plays a pivotal role in characterizing hydrometeorological components including cloud types and their associated precipitation. The Cloud Profiling Radar (CPR) on the polar-orbiting CloudSat satellite has provided a unique dataset to characterize cloud types. However, data from this nadir-looking radar offers limited capability for estimating precipitation because of the narrow satellite swath coverage and low temporal frequency. We use these high-quality observations to build a Deep Neural Network Cloud-Type Classification (DeepCTC) model to estimate cloud types from multispectral data from the Advanced Baseline Imager (ABI) onboard the GOES-16 platform. The DeepCTC model is trained and tested using coincident data from both CloudSat and ABI over the CONUS region. Evaluations of DeepCTC indicate that the model performs well for a variety of cloud types including Altostratus, Altocumulus, Cumulus, Nimbostratus, Deep Convective and High clouds. However, capturing low-level clouds remains a challenge for the model. Results from simulated GOES-16 ABI imagery of the Hurricane Harvey event show that large-scale, rapid, and consistent cloud-type monitoring is possible using the DeepCTC model. Additionally, assessments using half-hourly Multi-Radar/Multi-Sensor (MRMS) precipitation rate data (for Hurricane Harvey as a case study) show the ability of DeepCTC to identify rainy clouds, including Deep Convective and Nimbostratus clouds, and their precipitation potential. We also use DeepCTC to evaluate the performance of the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS) product over different cloud types with respect to the MRMS reference at a half-hourly time scale for July 2018.
Our analysis suggests that DeepCTC provides supplementary insights into the variability of cloud types to diagnose the weaknesses and strengths of near real-time GEO-based precipitation retrievals. With additional training and testing, we believe DeepCTC has the potential to augment the widely used PERSIANN-CCS algorithm for estimating precipitation.
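The core idea of a per-pixel cloud-type classifier mapping multispectral ABI channel values to a class label can be sketched as a small feed-forward network. The architecture, channel count, and class list below are illustrative assumptions, not the actual DeepCTC design:

```python
import numpy as np

# Illustrative cloud-type labels (assumed, not the exact DeepCTC class set).
CLOUD_TYPES = ["Clear", "High", "Altostratus", "Altocumulus",
               "Cumulus", "Nimbostratus", "Deep Convective", "Stratocumulus"]

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

class TinyCloudClassifier:
    """Toy per-pixel classifier: ABI channel vector -> cloud-type probabilities."""

    def __init__(self, n_channels=16, hidden=32, n_classes=len(CLOUD_TYPES), seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_channels, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (hidden, n_classes))
        self.b2 = np.zeros(n_classes)

    def predict_proba(self, x):
        h = np.maximum(0.0, x @ self.W1 + self.b1)  # ReLU hidden layer
        return softmax(h @ self.W2 + self.b2)

    def predict(self, x):
        return [CLOUD_TYPES[i] for i in self.predict_proba(x).argmax(axis=-1)]

# A batch of 4 fake pixels, each with 16 ABI channel values.
pixels = np.random.default_rng(1).random((4, 16))
model = TinyCloudClassifier()
probs = model.predict_proba(pixels)
labels = model.predict(pixels)
```

In practice such a model would be trained on the coincident CloudSat/ABI pairs described above; this untrained sketch only shows the input/output shape of the problem.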


2021 ◽  
Author(s):  
Christopher B Fritz

We hypothesize that deep networks are superior to linear decoders at recovering visual stimuli from neural activity. Using high-resolution, multielectrode Neuropixels recordings, we verify that this is the case for a simple feed-forward deep neural network with just seven layers. These results suggest that such feed-forward networks, and perhaps more complex deep architectures, will give superior performance in a visual brain-machine interface.
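The linear-versus-nonlinear decoder comparison can be illustrated on synthetic data. Everything below is a toy assumption (1-D "activity", an x² "stimulus", a one-hidden-layer network rather than seven layers); it only shows why a nonlinear decoder can recover structure that a linear readout cannot:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "neural activity" (1-D) encoding a nonlinear "stimulus".
X = rng.uniform(-1.0, 1.0, (500, 1))
y = X[:, 0] ** 2                      # a linear readout cannot represent x^2

# Linear decoder: ordinary least squares with a bias term.
A = np.hstack([X, np.ones((500, 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)
lin_mse = float(np.mean((A @ w - y) ** 2))

# One-hidden-layer ReLU decoder trained by full-batch gradient descent.
H, lr = 16, 0.3
W1 = rng.normal(0.0, 0.5, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.5, (H, 1)); b2 = np.zeros(1)
for _ in range(5000):
    h = np.maximum(0.0, X @ W1 + b1)          # forward pass
    pred = (h @ W2 + b2)[:, 0]
    err = (pred - y)[:, None] / len(y)        # gradient of MSE w.r.t. pred
    gW2 = h.T @ err; gb2 = err.sum(0)         # backprop through output layer
    dh = (err @ W2.T) * (h > 0)               # backprop through ReLU
    gW1 = X.T @ dh; gb1 = dh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
mlp_mse = float(np.mean(((np.maximum(0.0, X @ W1 + b1) @ W2 + b2)[:, 0] - y) ** 2))
```

The best a linear decoder can do on this target is roughly a constant at the mean of y, so its error stays near the variance of x², while the small network fits the curvature.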


2012 ◽  
Vol 13 (3) ◽  
pp. 913-931 ◽  
Author(s):  
Robin Chadwick ◽  
David Grimes

Multispectral Spinning Enhanced Visible and Infrared Imager (SEVIRI) data, calibrated with daily rain gauge estimates, were used to produce daily high-resolution rainfall estimates over Africa. An artificial neural network (ANN) approach was used, producing an output of satellite pixel–scale daily rainfall totals. This product, known as the Rainfall Intensity Artificial Neural Network African Algorithm (RIANNAA), was calibrated and validated using gauge data from the highland Oromiya region of Ethiopia. Validation was performed at a variety of spatial and temporal scales, and results were also compared against Tropical Applications of Meteorology Using Satellite Data (TAMSAT) single-channel IR-based rainfall estimates. Several versions of RIANNAA, with different combinations of SEVIRI channels as inputs, were developed. RIANNAA was an improvement over TAMSAT at all validation scales, for all versions of RIANNAA. However, the addition of multispectral data to RIANNAA only provided a statistically significant improvement over the single-channel RIANNAA at the highest spatial- and temporal-resolution validation scale. It appears that multispectral data add more value to rainfall estimates at high-resolution scales than at averaged time scales, where the cloud microphysical information that they provide may be less important for determining rainfall totals than larger-scale processes such as total moisture advection aloft.
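The finding that multispectral inputs matter most at the highest validation resolution can be mimicked with a toy error model: if the extra channels mainly reduce random per-pixel error, temporal aggregation averages that error away for both estimators and shrinks the relative advantage. The rainfall distribution, error magnitudes, and aggregation scheme below are invented for illustration, not taken from RIANNAA:

```python
import numpy as np

rng = np.random.default_rng(42)
days, pixels = 30, 200
truth = rng.gamma(2.0, 5.0, (days, pixels))   # synthetic daily rainfall (mm)

# Two synthetic estimators: a single-channel one with larger random error and
# a multispectral one with smaller random error (both unbiased, for simplicity).
single = truth + rng.normal(0.0, 6.0, truth.shape)
multi = truth + rng.normal(0.0, 3.0, truth.shape)

def rel_rmse(est, ref):
    """RMSE normalized by the mean of the reference field."""
    return float(np.sqrt(np.mean((est - ref) ** 2)) / ref.mean())

# Skill gap at the daily, pixel scale.
daily_gap = rel_rmse(single, truth) - rel_rmse(multi, truth)

# Skill gap for 10-day (dekadal) totals: random errors partly cancel in the
# sum, so the relative advantage of the lower-noise estimator shrinks.
dekad = lambda a: a.reshape(3, 10, pixels).sum(axis=1)
dekadal_gap = rel_rmse(dekad(single), dekad(truth)) - rel_rmse(dekad(multi), dekad(truth))
```

Under these assumptions the daily relative-RMSE gap is noticeably larger than the dekadal one, echoing the paper's result that the multispectral benefit is only significant at the finest scale.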


2018 ◽  
Vol 9 (12) ◽  
pp. 5997 ◽  
Author(s):  
Yanyu Zhao ◽  
Matthew B. Applegate ◽  
Raeef Istfan ◽  
Ashvin Pande ◽  
Darren Roblyer

2018 ◽  
Vol 10 (10) ◽  
pp. 1602 ◽  
Author(s):  
Rudong Xu ◽  
Yiting Tao ◽  
Zhongyuan Lu ◽  
Yanfei Zhong

A deep neural network is suitable for remote sensing image pixel-wise classification because it effectively extracts features from the raw data. However, remote sensing images with higher spatial resolution exhibit smaller inter-class differences and greater intra-class differences; thus, feature extraction becomes more difficult. The attention mechanism, as a method that simulates the manner in which humans comprehend and perceive images, is useful for the quick and accurate acquisition of key features. In this study, we propose a novel neural network that incorporates two kinds of attention mechanisms in its mask and trunk branches; i.e., control gate (soft) and feedback attention mechanisms, respectively, based on the branches’ primary roles. Thus, a deep neural network can be equipped with an attention mechanism to perform pixel-wise classification for very high-resolution remote sensing (VHRRS) images. The control gate attention mechanism in the mask branch is utilized to build pixel-wise masks for feature maps, to assign different priorities to different locations on different channels for feature extraction recalibration, to emphasize effective features, and to weaken the influence of less useful features. The feedback attention mechanism in the trunk branch allows for the retrieval of high-level semantic features. Hence, additional guidance is provided to lower layers to re-weight the focus and to re-update higher-level feature extraction in a target-oriented manner. These two attention mechanisms are fused to form a neural network module. By stacking various modules with different-scale mask branches, the network utilizes different attention-aware features under different local spatial structures. The proposed method is tested on VHRRS images from the BJ-02, GF-02, GeoEye, and QuickBird satellites, and the influence of the network structure and the rationality of the network design are discussed.
Compared with other state-of-the-art methods, our proposed method achieves competitive accuracy, demonstrating its effectiveness.
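The control-gate (soft) attention in the mask branch amounts to a learned per-location, per-channel gate that recalibrates trunk features. The residual (1 + M) · T form below is an assumption borrowed from residual-attention designs, used here because it keeps useful trunk features from being attenuated to zero; the paper's exact formulation may differ:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def soft_attention_recalibrate(trunk, mask_logits):
    """Control-gate (soft) attention: a sigmoid mask in (0, 1) re-weights
    trunk features per location and per channel; the residual (1 + M) * T
    form scales each feature by a factor between 1 and 2 instead of
    suppressing it entirely."""
    mask = sigmoid(mask_logits)        # pixel-wise, channel-wise gate
    return (1.0 + mask) * trunk

# Fake feature maps: (height, width, channels).
rng = np.random.default_rng(0)
trunk = rng.normal(size=(8, 8, 4))         # trunk-branch features
mask_logits = rng.normal(size=(8, 8, 4))   # mask-branch output before the gate
out = soft_attention_recalibrate(trunk, mask_logits)
```

Stacking several such modules with mask branches operating at different spatial scales gives the multi-scale, attention-aware features the abstract describes.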

