Conditional Generative Adversarial Networks (cGANs) for Near Real-Time Precipitation Estimation from Multispectral GOES-16 Satellite Imageries—PERSIANN-cGAN

2019, Vol. 11(19), pp. 2193
Author(s): Negin Hayatbini, Bailey Kong, Kuo-lin Hsu, Phu Nguyen, Soroosh Sorooshian, et al.

In this paper, we present a state-of-the-art precipitation estimation framework which leverages advances in satellite remote sensing as well as Deep Learning (DL). The framework takes advantage of the improvements in spatial, spectral, and temporal resolution of the Advanced Baseline Imager (ABI) onboard the GOES-16 platform, along with elevation information, to improve precipitation estimates. The procedure first derives a Rain/No Rain (R/NR) binary mask by classifying the pixels and then applies regression to estimate the rainfall amount for rainy pixels. A Fully Convolutional Network is used as the regressor. The network is trained with non-saturating conditional Generative Adversarial Network (cGAN) and Mean Squared Error (MSE) loss terms so that it better learns the complex distribution of precipitation in the observed data. Common verification metrics such as Probability Of Detection (POD), False Alarm Ratio (FAR), Critical Success Index (CSI), bias, correlation, and MSE are used to evaluate the accuracy of both the R/NR classification and the real-valued precipitation estimates. Statistics and visualizations of the evaluation measures show improvements in precipitation retrieval accuracy for the proposed framework compared to baseline models trained with conventional MSE loss terms alone. This framework is proposed as an augmentation for the PERSIANN-CCS (Precipitation Estimation from Remotely Sensed Information using Artificial Neural Network–Cloud Classification System) algorithm for estimating global precipitation.
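To make the composite objective concrete, the following is a minimal PyTorch sketch of a fully convolutional regressor trained with a non-saturating cGAN term plus an MSE term. The toy layer sizes, the nine-channel input, and the weight `lambda_mse` are illustrative assumptions, not the PERSIANN-cGAN implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FCNGenerator(nn.Module):
    """Toy fully convolutional regressor: multispectral patches -> rain-rate map."""
    def __init__(self, in_ch=9):  # channel count is an assumption
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.ReLU(),  # ReLU keeps rain rates non-negative
        )

    def forward(self, x):
        return self.net(x)

class PatchDiscriminator(nn.Module):
    """Conditional discriminator: sees the satellite input alongside a rain map."""
    def __init__(self, in_ch=9 + 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),  # patch-wise real/fake logits
        )

    def forward(self, cond, rain):
        return self.net(torch.cat([cond, rain], dim=1))

def generator_loss(disc, cond, fake_rain, real_rain, lambda_mse=100.0):
    """Non-saturating adversarial term plus a pixel-wise MSE regression term."""
    logits = disc(cond, fake_rain)
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    return adv + lambda_mse * F.mse_loss(fake_rain, real_rain)
```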

Atmosphere, 2021, Vol. 12(2), pp. 134
Author(s): Xiaoyu Li, Sheng Chen, Zhenqing Liang, Chaoying Huang, Zhi Li, et al.

This paper evaluates the latest version 6.0 Global Satellite Mapping of Precipitation (GSMaP) and version 6.0 Integrated Multi-satellitE Retrievals for Global Precipitation Measurement (IMERG) products during Typhoon Mangkhut (2018) in China. The reference data are rain gauge datasets from the gauge-calibrated Climate Prediction Center (CPC) Morphing Technique (CMORPHGC). The products compared include the GSMaP near-real-time, microwave-IR merged, and gauge-calibrated products (GSMaP_NRT, GSMaP_MVK, and GSMaP_Gauge) and the IMERG Early uncalibrated, Final uncalibrated, and Final gauge-calibrated products (IMERG_ERUncal, IMERG_FRUncal, and IMERG_FRCal). The results show that (1) both GSMaP_Gauge and IMERG_FRCal considerably reduce the bias of their satellite-only counterparts. GSMaP_Gauge outperforms IMERG_FRCal, with higher Correlation Coefficient (CC) values of about 0.85, 0.78, and 0.50; lower Fractional Standard Error (FSE) values of about 18.00, 18.85, and 29.30; and lower Root-Mean-Square Error (RMSE) values of about 12.12, 33.35, and 32.99 mm in the rainfall centers over mainland China, southern China, and eastern China, respectively. (2) GSMaP products perform better than IMERG products, with higher Probability of Detection (POD) and Critical Success Index (CSI) and lower False Alarm Ratio (FAR) in detecting rainfall occurrence, especially at high rainfall rates. (3) For area-mean rainfall, IMERG performs worse than GSMaP in the rainfall centers over mainland China and southern China but better in the rainfall center over eastern China. GSMaP_Gauge and IMERG_FRCal perform well in all three regions, with high CC (0.79 vs. 0.94, 0.81 vs. 0.96, and 0.95 vs. 0.97) and low RMSE (0.04 vs. 0.06, 0.40 vs. 0.59, and 0.19 vs. 0.34 mm). These findings will help algorithm developers and data users better understand the performance of GSMaP and IMERG products during typhoon precipitation events.
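For reference, the continuous and categorical scores used here can be computed with a few lines of NumPy. The rain/no-rain threshold and the FSE normalization below are common conventions, not necessarily the exact definitions used in this study.

```python
import numpy as np

def continuous_stats(sat, ref):
    """CC, RMSE, and FSE for matched rain-rate series. FSE is given here as
    RMSE normalized by the mean reference rainfall (one common definition)."""
    cc = np.corrcoef(sat, ref)[0, 1]
    rmse = np.sqrt(np.mean((sat - ref) ** 2))
    fse = 100.0 * rmse / np.mean(ref)
    return cc, rmse, fse

def categorical_stats(sat, ref, threshold=0.1):
    """POD, FAR, and CSI from a 2x2 contingency table at a rain/no-rain
    threshold (0.1 mm/h is an illustrative choice)."""
    hits = np.sum((sat >= threshold) & (ref >= threshold))
    misses = np.sum((sat < threshold) & (ref >= threshold))
    false_alarms = np.sum((sat >= threshold) & (ref < threshold))
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    csi = hits / (hits + misses + false_alarms)
    return pod, far, csi
```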


2017
Author(s): Benjamin Sanchez-Lengeling, Carlos Outeiral, Gabriel L. Guimaraes, Alan Aspuru-Guzik

Molecular discovery seeks to generate chemical species tailored to very specific needs. In this paper, we present ORGANIC, a framework based on Objective-Reinforced Generative Adversarial Networks (ORGAN), capable of producing a distribution over molecular space that matches a given set of desirable metrics. The methodology combines two successful techniques from the machine learning community: a Generative Adversarial Network (GAN), to create non-repetitive, sensible molecular species, and Reinforcement Learning (RL), to bias this generative distribution towards certain attributes. We explore several applications, from the optimization of random physicochemical properties to the generation of candidates for drug discovery and organic photovoltaic material design.
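A sketch of the reward blending at the heart of this GAN/RL combination, assuming the commonly cited ORGAN form R(x) = λ·D(x) + (1 − λ)·O(x); the scores and λ below are hypothetical.

```python
import numpy as np

def organ_reward(d_score, obj_score, lam=0.5):
    """Blend the discriminator's realism signal D(x) with a domain objective
    O(x): R(x) = lam * D(x) + (1 - lam) * O(x). lam = 0.5 is arbitrary here."""
    return lam * d_score + (1.0 - lam) * obj_score

# Hypothetical batch of molecules: realism scores and normalized property scores
d_scores = np.array([0.8, 0.3, 0.6])
obj_scores = np.array([0.2, 0.9, 0.7])
print(organ_reward(d_scores, obj_scores))  # rewards used to update the generator via RL
```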


2021, Vol. 11(15), pp. 7034
Author(s): Hee-Deok Yang

Artificial intelligence technologies and vision systems are used in various devices, such as automotive navigation systems, object-tracking systems, and intelligent closed-circuit televisions. In particular, outdoor vision systems have been applied across numerous fields of analysis. Despite their widespread use, current systems work well only under good weather conditions; they cannot account for inclement conditions such as rain, fog, mist, and snow. Images captured under inclement conditions degrade the performance of vision systems. To boost the performance of image-processing algorithms, vision systems need to detect, recognize, and remove the noise caused by rain, snow, and mist. Several studies have targeted the removal of noise resulting from inclement conditions. We focused on eliminating the effects of raindrops on images captured with outdoor vision systems in which the camera was exposed to rain. An attentive generative adversarial network (ATTGAN) was used to remove raindrops from the images. This network is composed of two parts: an attentive-recurrent network and a contextual autoencoder. The ATTGAN generates an attention map to detect raindrops, and a de-rained image is then generated from it. We increased the number of attentive-recurrent network layers to prevent gradient sparsity, making generation more stable while still allowing the network to converge. The experimental results confirmed that the extended ATTGAN can effectively remove various types of raindrops from images.
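A minimal PyTorch sketch of the attentive-recurrent idea: an attention map is refined over several passes on the rainy image. The plain convolution standing in for a ConvGRU/LSTM cell and all layer sizes are assumptions for illustration, not the ATTGAN architecture.

```python
import torch
import torch.nn as nn

class AttentiveRecurrentBlock(nn.Module):
    """Refine a raindrop attention map over several time steps (illustrative sizes)."""
    def __init__(self, ch=32):
        super().__init__()
        self.conv_in = nn.Conv2d(3 + 1, ch, 3, padding=1)  # RGB image + previous map
        self.cell = nn.Conv2d(ch, ch, 3, padding=1)        # stand-in for a ConvGRU/LSTM cell
        self.to_mask = nn.Conv2d(ch, 1, 3, padding=1)

    def forward(self, image, steps=4):
        n, _, h, w = image.shape
        mask = torch.zeros(n, 1, h, w, device=image.device)
        for _ in range(steps):  # each pass sharpens the attention map
            feat = torch.relu(self.conv_in(torch.cat([image, mask], dim=1)))
            feat = torch.relu(self.cell(feat))
            mask = torch.sigmoid(self.to_mask(feat))
        return mask  # per-pixel probability that a raindrop covers the pixel
```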


2021, Vol. 22(1)
Author(s): Yingxi Yang, Hui Wang, Wen Li, Xiaobo Wang, Shizhao Wei, et al.

Abstract Background Protein post-translational modification (PTM) is a key issue in investigating the mechanisms of protein function. With the rapid development of proteomics technology, a large amount of protein sequence data has been generated, which highlights the importance of the in-depth study and analysis of PTMs in proteins. Method We propose a new multi-classification machine learning pipeline, MultiLyGAN, to identify seven types of lysine-modified sites. Using eight different sequential and five structural construction methods, 1497 valid features remained after filtering by Pearson correlation coefficient. To address the data imbalance problem, two influential deep generative methods, the Conditional Generative Adversarial Network (CGAN) and the Conditional Wasserstein Generative Adversarial Network (CWGAN), were leveraged and compared to generate new samples for the types with fewer samples. Finally, a random forest algorithm was used to predict the seven categories. Results In tenfold cross-validation, the accuracy (Acc) and Matthews correlation coefficient (MCC) were 0.8589 and 0.8376, respectively. In the independent test, Acc and MCC were 0.8549 and 0.8330, respectively. The results indicate that CWGAN better resolved the existing data imbalance and stabilized the training error. Additionally, an accumulated feature-importance analysis showed that CKSAAP, PWM, and structural features were the three most important feature-encoding schemes. MultiLyGAN can be found at https://github.com/Lab-Xu/MultiLyGAN. Conclusions The CWGAN greatly improved predictive performance in all experiments. Features derived from the CKSAAP, PWM, and structure schemes were the most informative and contributed most to the prediction of PTMs.
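A small NumPy sketch of the Pearson-correlation filtering step: one feature from every highly correlated pair is dropped. The 0.9 cutoff is an illustrative assumption; the abstract does not state the threshold used.

```python
import numpy as np

def filter_by_pearson(X, threshold=0.9):
    """Return indices of features to keep, dropping one feature from every
    pair whose absolute Pearson correlation exceeds `threshold`.
    X has shape (n_samples, n_features)."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    n = corr.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if not keep[i]:
            continue
        for j in range(i + 1, n):
            if keep[j] and corr[i, j] > threshold:
                keep[j] = False  # j is redundant with the retained feature i
    return np.where(keep)[0]

# Hypothetical encoded feature matrix: 100 samples, 12 features
X = np.random.rand(100, 12)
kept_indices = filter_by_pearson(X)
```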


2021, Vol. 8(1)
Author(s): Mojtaba Sadeghi, Phu Nguyen, Matin Rahnamay Naeini, Kuolin Hsu, Dan Braithwaite, et al.

Abstract Accurate long-term global precipitation estimates, especially for heavy precipitation rates, at fine spatial and temporal resolution are vital for a wide variety of climatological studies. Most of the available operational precipitation estimation datasets provide either high spatial resolution with short-duration estimates or lower spatial resolution with long-duration estimates. Furthermore, previous research has stressed that most of the available satellite-based precipitation products show poor performance in capturing extreme events at high temporal resolution. There is therefore a need for a precipitation product that reliably detects heavy precipitation rates at fine spatiotemporal resolution over a long period of record. Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks–Cloud Classification System–Climate Data Record (PERSIANN-CCS-CDR) is designed to address these limitations. This dataset provides precipitation estimates at 0.04° spatial and 3-hourly temporal resolution from 1983 to the present over the global domain of 60°S to 60°N. Evaluations of PERSIANN-CCS-CDR and PERSIANN-CDR against gauge and radar observations show the better performance of PERSIANN-CCS-CDR in representing the spatiotemporal resolution, magnitude, and spatial distribution patterns of precipitation, especially for extreme events.
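As a usage illustration, mapping a coordinate onto such a 0.04° grid spanning 60°S to 60°N is straightforward; the row/column origin and orientation below are assumptions for illustration, so the dataset documentation should be consulted.

```python
def grid_index(lat, lon, resolution=0.04, lat_min=-60.0, lon_min=-180.0):
    """Map a (lat, lon) pair to (row, col) on a 0.04-degree grid covering
    60S-60N. Origin and row orientation are illustrative assumptions."""
    if not (lat_min <= lat < 60.0):
        raise ValueError("latitude outside the 60S-60N domain")
    row = int((lat - lat_min) / resolution)
    col = int((lon - lon_min) / resolution)
    return row, col

# The full domain is 3000 x 9000 cells: 120 deg / 0.04 rows, 360 deg / 0.04 columns.
print(grid_index(25.0, 121.5))
```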


Electronics, 2021, Vol. 10(11), pp. 1349
Author(s): Stefan Lattner, Javier Nistal

Lossy audio codecs compress (and decompress) digital audio streams by removing information that tends to be inaudible to human perception. Under high compression rates, such codecs may introduce a variety of impairments into the audio signal. Many works have tackled the problem of audio enhancement and compression-artifact removal using deep-learning techniques, but only a few address the restoration of heavily compressed audio signals in the musical domain. In such a scenario, there is no unique solution for the restoration of the original signal. Therefore, in this study, we test the stochastic generator of a Generative Adversarial Network (GAN) architecture for this task. Such a stochastic generator, conditioned on highly compressed musical audio signals, could one day generate outputs indistinguishable from high-quality releases. The present study may therefore yield insights into more efficient musical data storage and transmission. We train stochastic and deterministic generators on MP3-compressed audio signals at 16, 32, and 64 kbit/s. We perform an extensive evaluation of the different experiments using objective metrics and listening tests. We find that the models can improve the quality of the audio signals over the MP3 versions at 16 and 32 kbit/s, and that the stochastic generators are capable of generating outputs closer to the original signals than those of the deterministic generators.
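A sketch of the stochastic/deterministic distinction: both generators are conditioned on the compressed signal, and only the stochastic one also receives a noise vector, so repeated calls yield different plausible restorations. The 1-D toy architecture is a stand-in assumption, not the authors' model.

```python
import torch
import torch.nn as nn

class RestorationGenerator(nn.Module):
    """Waveform-to-waveform generator conditioned on compressed audio;
    concatenates per-sample noise channels only in the stochastic variant."""
    def __init__(self, noise_dim=64, stochastic=True):
        super().__init__()
        self.stochastic = stochastic
        self.noise_dim = noise_dim if stochastic else 0
        self.net = nn.Sequential(
            nn.Conv1d(1 + self.noise_dim, 32, 9, padding=4), nn.ReLU(),
            nn.Conv1d(32, 1, 9, padding=4), nn.Tanh(),
        )

    def forward(self, compressed):  # compressed: (batch, 1, samples)
        x = compressed
        if self.stochastic:
            n, _, t = compressed.shape
            z = torch.randn(n, self.noise_dim, t, device=compressed.device)
            x = torch.cat([compressed, z], dim=1)  # different z -> different restorations
        return self.net(x)
```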


Sensors, 2021, Vol. 21(14), pp. 4867
Author(s): Lu Chen, Hongjun Wang, Xianghao Meng

With the development of science and technology, neural networks have become an effective tool in image processing and play an increasingly important role in remote-sensing image processing. However, training neural networks requires a large sample database; expanding datasets with limited samples has therefore become a research hotspot. The emergence of the generative adversarial network (GAN) provides new ideas for data expansion. Traditional GANs either require large amounts of input data or lack detail in the generated images. In this paper, we modify a shuffle attention network and introduce it into a GAN to generate higher-quality images from limited inputs. In addition, we improve on existing resize methods and propose an equal stretch resize method to solve the problem of image distortion caused by different input sizes. In the experiments, we also embed the recently proposed coordinate attention (CA) module into the backbone network as a control test. Qualitative indices and six quantitative evaluation indices were used to assess the experimental results, which show that, compared with other GANs used for image generation, the modified Shuffle Attention GAN proposed in this paper can generate more refined, diverse, high-quality aircraft images with more detailed object features from limited datasets.
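The abstract does not detail the equal stretch resize method; one common way to avoid distortion from differing input sizes, sketched below, is to scale both axes by the same factor and pad to the target shape. This is an interpretation for illustration, not the paper's algorithm.

```python
from PIL import Image

def resize_without_distortion(img, target=256):
    """Scale both axes by the same factor, then zero-pad to a square canvas,
    so the aspect ratio of the object is preserved."""
    w, h = img.size
    scale = target / max(w, h)
    resized = img.resize((round(w * scale), round(h * scale)), Image.BILINEAR)
    canvas = Image.new(img.mode, (target, target))  # black padding
    canvas.paste(resized, ((target - resized.width) // 2,
                           (target - resized.height) // 2))
    return canvas
```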


Author(s): Lingyu Yan, Jiarun Fu, Chunzhi Wang, Zhiwei Ye, Hongwei Chen, et al.

Abstract With the development of image recognition technology, the face, body shape, and other factors have been widely used as identification labels, providing considerable convenience in our daily life. However, image recognition places much higher requirements on imaging conditions than traditional identification methods such as passwords. Image enhancement therefore plays an important role in the analysis of noisy images, among which low-light images are the focus of our research. In this paper, a low-light image enhancement method based on a Generative Adversarial Network (GAN) optimized by an enhancement network module is proposed. The proposed method first applies the enhancement network, feeding the image into the generator to produce a similar image in a new space; it then constructs and minimizes a loss function to train the discriminator, which compares the image produced by the generator with the real image. We implemented the proposed method on two image datasets (DPED, LOL) and compared it with both traditional image enhancement methods and a deep learning approach. Experiments showed that images enhanced by the proposed network have higher PSNR and SSIM and relatively good overall perceptual quality, demonstrating the effectiveness of the method for low-illumination image enhancement.
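The two reported quality scores can be reproduced with scikit-image (version 0.19+ for the `channel_axis` argument); the inputs here are assumed to be `uint8` RGB arrays of the same shape.

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_enhancement(enhanced, reference):
    """PSNR and SSIM between an enhanced low-light image and its reference.
    Both inputs are HxWx3 uint8 arrays."""
    psnr = peak_signal_noise_ratio(reference, enhanced)
    ssim = structural_similarity(reference, enhanced, channel_axis=-1)
    return psnr, ssim
```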


2021, Vol. 13(2), pp. 254
Author(s): Jie Hsu, Wan-Ru Huang, Pin-Yi Liu, Xiuzhen Li

The Climate Hazards Group InfraRed Precipitation with Station data (CHIRPS), which incorporates satellite imagery and in situ station information, is a new high-resolution, long-term precipitation dataset available since 1981. This study aims to assess how well the latest version of CHIRPS depicts precipitation variation over Taiwan on multiple timescales. The analysis focuses on examining whether CHIRPS is better than another satellite precipitation product, the Integrated Multi-satellitE Retrievals for Global Precipitation Mission (GPM) final run (hereafter IMERG), which is known to capture the precipitation variation over Taiwan effectively. We carried out evaluations of the annual cycle, seasonal cycle, interannual variation, and daily variation during 2001–2019. Our results show that IMERG is slightly better than CHIRPS for most of the features examined; however, CHIRPS performs better than IMERG in representing (1) the magnitude of the annual cycle of the monthly precipitation climatology, (2) the spatial distribution of the seasonal mean precipitation for all four seasons, (3) the quantitative precipitation estimation of the interannual variation of area-averaged winter precipitation in Taiwan, and (4) the occurrence frequency of non-rainy grids in winter. Notably, even though CHIRPS is not better than IMERG for many examined features, CHIRPS can depict the temporal variation in precipitation over Taiwan on annual, seasonal, and interannual timescales at the 95% significance level. This highlights the potential of CHIRPS for studying multi-timescale precipitation variation over Taiwan during 1981–2000, for which no data are available in the IMERG database.
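As an illustration of one examined feature, the annual cycle of the monthly precipitation climatology can be computed from a daily series as below; the gamma-distributed values are synthetic stand-ins, not CHIRPS or IMERG data.

```python
import numpy as np
import pandas as pd

def annual_cycle(daily_precip):
    """Monthly precipitation climatology (the annual cycle) from a daily
    series. `daily_precip` is a pandas Series of mm/day indexed by date."""
    monthly_totals = daily_precip.resample("MS").sum()  # mm per calendar month
    return monthly_totals.groupby(monthly_totals.index.month).mean()

dates = pd.date_range("2001-01-01", "2019-12-31", freq="D")
series = pd.Series(np.random.gamma(0.5, 8.0, len(dates)), index=dates)
climatology = annual_cycle(series)  # 12 values, one per calendar month
```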


2021, Vol. 11(4), pp. 1380
Author(s): Yingbo Zhou, Pengcheng Zhao, Weiqin Tong, Yongxin Zhu

While Generative Adversarial Networks (GANs) have shown promising performance in image generation, they suffer from numerous issues such as mode collapse and training instability. To stabilize GAN training and improve image synthesis quality and diversity, we propose a simple yet effective approach, the Contrastive Distance Learning GAN (CDL-GAN). Specifically, we add Consistent Contrastive Distance (CoCD) and Characteristic Contrastive Distance (ChCD) terms to a principled framework to improve GAN performance. CoCD explicitly maximizes the ratio of the distance between generated images to the increment between noise vectors, strengthening image feature learning for the generator. ChCD measures the sampling distance of the encoded images in Euler space to boost feature representations for the discriminator. We implement the framework by employing a Siamese network as a module in the GAN, without any modification to the backbone. Both qualitative and quantitative experiments conducted on three public datasets demonstrate the effectiveness of our method.
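A PyTorch sketch of CoCD as described in this abstract: the generator is pushed to map noise increments to proportionally large image changes. This follows the one-line description above; the paper's exact formulation may differ.

```python
import torch

def cocd_loss(G, z1, z2, eps=1e-8):
    """Consistent Contrastive Distance sketch: maximize the ratio
    ||G(z1) - G(z2)|| / ||z1 - z2|| so distinct noise vectors yield
    distinct images. z1, z2: (batch, noise_dim); G returns image batches."""
    img_dist = (G(z1) - G(z2)).flatten(1).norm(dim=1)
    noise_dist = (z1 - z2).norm(dim=1) + eps  # eps guards against division by zero
    return -(img_dist / noise_dist).mean()    # minimizing this maximizes the ratio
```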

