MSG-GAN-SD: A Multi-Scale Gradients GAN for Statistical Downscaling of 2-Meter Temperature over the EURO-CORDEX Domain

AI, 2021, Vol 2 (4), pp. 600–620
Author(s): Gabriele Accarino, Marco Chiarelli, Francesco Immorlano, Valeria Aloisi, Andrea Gatto, et al.

One of the most important open challenges in climate science is downscaling, a procedure that produces predictions at local scales starting from climatic field information available at large scale. Recent advances in deep learning provide new insights and modeling solutions for downscaling-related tasks by automatically learning the coarse-to-fine resolution mapping. In particular, deep learning models designed for super-resolution problems in computer vision can be exploited because of the similarity between images and climatic field maps. For this reason, a new architecture tailored for statistical downscaling (SD), named MSG-GAN-SD, was developed; it offers interpretability and good stability during training thanks to multi-scale gradient information. The proposed architecture, based on a Generative Adversarial Network (GAN), was applied to downscale ERA-Interim 2-m temperature fields from 83.25 to 13.87 km resolution, covering the EURO-CORDEX domain over the 1979–2018 period. The training process involves seasonal and monthly dataset arrangements, in addition to different training strategies, leading to several models. Furthermore, a model selection framework is introduced to select the best models during training on a quantitative basis. The selected models were then tested on the 2015–2018 period using several metrics to identify the best training strategy and dataset arrangement, finally producing several evaluation maps. This work is the first attempt to use the MSG-GAN architecture for statistical downscaling. The results show that models trained on seasonal datasets performed better than those trained on monthly datasets. This study presents an accurate and cost-effective solution for downscaling 2-m temperature climatic maps.
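The multi-scale-gradient idea can be sketched as follows: the generator's output is coarsened into a pyramid, and the discriminator scores every level, so gradient information reaches the generator at all resolutions at once. This is a minimal NumPy illustration with a hypothetical patch size, not the authors' implementation or the actual EURO-CORDEX grid dimensions.

```python
import numpy as np

def avg_pool2d(field, k=2):
    """Coarsen a 2D field by factor k with average pooling."""
    h, w = field.shape
    h, w = h - h % k, w - w % k
    return field[:h, :w].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def multi_scale_pyramid(field, levels):
    """Finest-to-coarsest pyramid of a field; in an MSG-GAN the
    discriminator receives the generator's output at every such scale,
    so gradients flow back to the generator at all resolutions."""
    pyramid = [field]
    for _ in range(levels - 1):
        pyramid.append(avg_pool2d(pyramid[-1]))
    return pyramid

# Hypothetical 96x96 2-m temperature patch (illustrative size only).
t2m = np.random.default_rng(0).random((96, 96))
scales = multi_scale_pyramid(t2m, levels=4)
print([s.shape for s in scales])  # [(96, 96), (48, 48), (24, 24), (12, 12)]
```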

2018, Vol 7 (10), pp. 389
Author(s): Wei He, Naoto Yokoya

In this paper, we present optical image simulation from synthetic aperture radar (SAR) data using deep-learning-based methods. Two models, i.e., optical image simulation directly from SAR data and from multi-temporal SAR-optical data, are proposed to test the possibilities. The deep-learning-based methods chosen to realize the models are a convolutional neural network (CNN) with a residual architecture and a conditional generative adversarial network (cGAN). We validate our models using the Sentinel-1 and -2 datasets. The experiments demonstrate that the model with multi-temporal SAR-optical data can successfully simulate the optical image, whereas the state-of-the-art model with SAR data alone as input fails. The optical image simulation results indicate the potential of SAR-optical information blending for subsequent applications such as large-scale cloud removal and temporal super-resolution of optical data. We also investigate the sensitivity of the proposed models to the training samples and point out possible future directions.
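The conditional-GAN objective behind such SAR-to-optical simulation can be sketched numerically: the discriminator always sees the SAR input alongside either the real optical image or the generated one. The scores below are hypothetical placeholder values, not outputs of any trained network.

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Binary cross-entropy on discriminator probabilities."""
    pred = np.clip(pred, eps, 1 - eps)
    return -(target * np.log(pred) + (1 - target) * np.log(1 - pred)).mean()

# Hypothetical discriminator scores D(sar, optical) for a small batch.
d_real = np.array([0.9, 0.8, 0.85])   # D(sar, real optical)
d_fake = np.array([0.2, 0.3, 0.25])   # D(sar, G(sar)), the simulated optical

# Discriminator: push real pairs toward 1, simulated pairs toward 0.
d_loss = bce(d_real, np.ones_like(d_real)) + bce(d_fake, np.zeros_like(d_fake))
# Generator: make the simulated pairs look real to the discriminator.
g_loss = bce(d_fake, np.ones_like(d_fake))
print(d_loss, g_loss)
```

The large generator loss here simply reflects that the placeholder fakes are still easy to spot; training alternates the two updates until the scores converge.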


Micromachines, 2021, Vol 12 (6), pp. 670
Author(s): Mingzheng Hou, Song Liu, Jiliu Zhou, Yi Zhang, Ziliang Feng

Activity recognition is a fundamental and crucial task in computer vision. Impressive results have been achieved for activity recognition in high-resolution videos, but for extreme low-resolution videos, which capture action information at a distance and are vital for preserving privacy, the performance of activity recognition algorithms is far from satisfactory. The reason is that extreme low-resolution (e.g., 12 × 16 pixels) images lack the scene and appearance information needed for effective recognition. To address this problem, we propose a super-resolution-driven generative adversarial network for activity recognition. To take full advantage of the latent information in low-resolution images, a powerful network module is employed to super-resolve the extremely low-resolution images with a large scale factor. A general activity recognition network is then applied to analyze the super-resolved video clips. Extensive experiments on two public benchmarks were conducted to evaluate the effectiveness of the proposed method. The results demonstrate that our method outperforms several state-of-the-art low-resolution activity recognition approaches.
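The two-stage structure described above (super-resolve first, then recognize) can be sketched with stand-ins: nearest-neighbour upscaling replaces the learned GAN-based SR module, and a trivial scoring function replaces the recognition network. Both are hypothetical placeholders for illustration only.

```python
import numpy as np

def upscale_nearest(frame, scale):
    """Placeholder SR step (nearest-neighbour replication); in the paper
    a learned GAN-based super-resolution module plays this role."""
    return frame.repeat(scale, axis=0).repeat(scale, axis=1)

def recognize(clip):
    """Stand-in recognition network: reduces the clip to one scalar
    'score' purely for illustration."""
    return clip.mean(axis=(1, 2)).mean()

# 8 frames of 12x16 extreme low-resolution video, upscaled 4x before
# being handed to the recognizer.
clip = np.random.default_rng(1).random((8, 12, 16))
sr_clip = np.stack([upscale_nearest(f, 4) for f in clip])
print(sr_clip.shape)  # (8, 48, 64)
score = recognize(sr_clip)
```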


Generative Adversarial Networks have gained prominence in a short span of time, as they can synthesize images from latent noise by minimizing an adversarial cost function. New variants of GANs have been developed to perform specific tasks with state-of-the-art GAN models, such as image translation, single-image super-resolution, segmentation, classification, and style transfer. However, combining two GANs to perform two different tasks in one model has been sparsely explored. Hence, this paper concatenates two GANs, performing image translation on bird images with a CycleGAN model and improving their resolution with SRGAN. During an extensive survey, it was observed that most deep learning databases on Aves were built from New World species (i.e., species found in North America). To bridge this gap, a new avian database, 'Common Birds of North-Western India' (CBNWI-50), is also proposed in this work.
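Concatenating the two GANs amounts to composing their generators: the translation generator runs first and the SR generator second. The sketch below uses trivial stand-ins (intensity inversion for translation, nearest-neighbour upscaling for SR) purely to show the composition; neither is the paper's actual network.

```python
import numpy as np

def cycle_translate(img):
    """Stand-in for the CycleGAN generator (style translation); here a
    simple intensity inversion for illustration."""
    return 1.0 - img

def sr_generate(img, scale=2):
    """Stand-in for the SRGAN generator: nearest-neighbour upscaling."""
    return img.repeat(scale, axis=0).repeat(scale, axis=1)

def concatenated_pipeline(img):
    """Translation first, super-resolution second."""
    return sr_generate(cycle_translate(img))

bird = np.random.default_rng(2).random((32, 32))
out = concatenated_pipeline(bird)
print(out.shape)  # (64, 64)
```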


2021
Author(s): Jiaoyue Li, Weifeng Liu, Kai Zhang, Baodi Liu

Remote sensing image super-resolution (SR) plays an essential role in many remote sensing applications. Recently, remote sensing image super-resolution methods based on deep learning have shown remarkable performance. However, directly applied deep learning methods struggle to recover remote sensing images containing a large number of complex objects or scenes. We therefore propose an edge-based dense-connection generative adversarial network (SREDGAN), which minimizes the edge differences between the generated image and its corresponding ground truth. Experimental results on the NWPU-VHR-10 and UCAS-AOD datasets demonstrate that our method improves PSNR by 1.92 dB and SSIM by 0.045 compared with SRGAN.
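An edge-difference penalty of the kind described can be sketched as follows: extract an edge map from both the generated image and the ground truth, then take the mean absolute difference. The finite-difference edge extractor below is a simple stand-in; the paper's exact operator may differ.

```python
import numpy as np

def edge_map(img):
    """Gradient-magnitude edge map via finite differences (a simple
    stand-in for the paper's edge extractor)."""
    gx = np.abs(np.diff(img, axis=1))[:-1, :]   # horizontal gradients
    gy = np.abs(np.diff(img, axis=0))[:, :-1]   # vertical gradients
    return gx + gy

def edge_loss(generated, target):
    """Mean absolute difference between edge maps: the extra term that
    penalizes blurred or misplaced edges in the SR output."""
    return np.abs(edge_map(generated) - edge_map(target)).mean()

gt = np.zeros((8, 8)); gt[:, 4:] = 1.0          # sharp vertical edge
blurry = np.full((8, 8), 0.5)                   # edge lost entirely
print(edge_loss(blurry, gt))                    # positive: edges differ
print(edge_loss(gt, gt))                        # 0.0: identical edges
```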


2021, Vol 35 (5), pp. 395–401
Author(s): Mohan Mahanty, Debnath Bhattacharyya, Divya Midhunchakkaravarthy

Colon cancer is considered the third most frequently diagnosed cancer after breast and lung cancer. Most colon cancers are adenocarcinomas that develop from adenomatous polyps growing on the inner lining of the colon. The standard procedure for polyp detection is colonoscopy, whose success depends on the colonoscopist's experience and other environmental factors. Nonetheless, during colonoscopy procedures a considerable share (8–37%) of polyps are missed due to human error, and these missed polyps are a potential cause of colorectal cancer. In recent years, many research groups have developed deep-learning-based computer-aided (CAD) systems offering techniques for automated polyp detection, localization, and segmentation. Still, more accurate polyp detection and segmentation are required to minimize polyp miss rates. This paper proposes a Super-Resolution Generative Adversarial Network (SRGAN)-assisted encoder-decoder network for fully automated colon polyp segmentation from colonoscopic images. The proposed deep learning model incorporates the SRGAN in the up-sampling process to achieve more accurate polyp segmentation. We evaluated our model on the publicly available benchmark datasets CVC-ColonDB and Warwick-QU. The model achieved a Dice score of 0.948 on the CVC-ColonDB dataset, surpassing recent state-of-the-art (SOTA) techniques. On the Warwick-QU dataset it attains a Dice score of 0.936 on Part A and 0.895 on Part B. Our model showed more accurate results for sessile and smaller-sized polyps.


2021
Author(s): Tianyu Liu, Yuge Wang, Hong-yu Zhao

With the advancement of technology, we can generate and access large-scale, high-dimensional, and diverse genomics data, especially through single-cell RNA sequencing (scRNA-seq). However, integrative downstream analysis of multiple scRNA-seq datasets remains challenging due to batch effects. In this paper, we focus on scRNA-seq data integration and propose a new deep learning framework based on the Wasserstein Generative Adversarial Network (WGAN), combined with an attention mechanism, to reduce the differences among batches. We also discuss the limitations of existing methods and demonstrate the advantages of our new model from both theoretical and practical perspectives, advocating the use of deep learning in genomics research.
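The Wasserstein objective behind such a framework can be sketched numerically: the critic outputs unbounded scores, and the gap between its mean score on the two batches estimates the distance the alignment network tries to close. The scores below are hypothetical placeholder values, not outputs of any trained model.

```python
import numpy as np

def critic_loss(scores_real, scores_fake):
    """The critic maximizes E[f(real)] - E[f(fake)], so its training
    loss is the negative of that gap."""
    return scores_fake.mean() - scores_real.mean()

def generator_loss(scores_fake):
    """The generator (here, the batch-alignment network) maximizes the
    critic's score on its corrected output."""
    return -scores_fake.mean()

# Hypothetical critic scores on cells from a reference batch vs. a
# batch-corrected batch.
real = np.array([1.2, 0.8, 1.0])
fake = np.array([-0.5, 0.1, -0.2])
print(critic_loss(real, fake), generator_loss(fake))
```

A large negative critic loss indicates the critic still separates the two batches easily; as alignment improves, the gap (and with it the estimated Wasserstein distance) shrinks toward zero.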


2020, Vol 34 (07), pp. 11157–11164
Author(s): Sheng Jin, Shangchen Zhou, Yao Liu, Chao Chen, Xiaoshuai Sun, et al.

Deep hashing methods have proved effective and efficient for large-scale Web media search. The success of these data-driven methods largely depends on collecting sufficient labeled data, which is usually a crucial limitation in practice. Current solutions to this issue use a Generative Adversarial Network (GAN) to augment data in semi-supervised learning. However, existing GAN-based methods treat image generation and hashing learning as two isolated processes, leading to ineffective generation. Besides, most works fail to exploit the semantic information in unlabeled data. In this paper, we propose a novel Semi-supervised Self-paced Adversarial Hashing method, named SSAH, to solve the above problems in a unified framework. The SSAH method consists of an adversarial network (A-Net) and a hashing network (H-Net). To improve the quality of generated images, the A-Net first learns hard samples with multi-scale occlusions and multi-angle rotated deformations, which compete against the learning of accurate hashing codes. Second, we design a novel self-paced hard-generation policy to gradually increase the hashing difficulty of generated samples. To make use of the semantic information in unlabeled data, we propose a semi-supervised consistency loss. The experimental results show that our method significantly improves on state-of-the-art models on both widely used hashing datasets and fine-grained datasets.
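The self-paced element can be sketched in its classic binary form: only samples whose loss falls below an "age" threshold contribute to the current update, and raising the threshold over training admits progressively harder samples. This is a generic self-paced-learning sketch with made-up loss values, not the paper's exact policy.

```python
import numpy as np

def self_paced_weights(losses, age):
    """Binary self-paced weights: a sample contributes only if its loss
    is below the current age parameter; raising `age` over training
    gradually admits harder samples into the objective."""
    return (losses <= age).astype(float)

losses = np.array([0.2, 0.9, 0.5, 1.4])   # hypothetical per-sample losses
for age in (0.3, 0.6, 1.5):               # curriculum: easy -> hard
    print(age, self_paced_weights(losses, age))
```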


Sensors, 2021, Vol 21 (6), pp. 2158
Author(s): Juan Du, Kuanhong Cheng, Yue Yu, Dabao Wang, Huixin Zhou

Panchromatic (PAN) images contain abundant spatial information that is useful for earth observation, but they always suffer from low resolution (LR) due to sensor limitations and the large-scale view field. Current super-resolution (SR) methods based on traditional attention mechanisms have shown remarkable advantages but remain imperfect at reconstructing the edge details of SR images. To address this problem, an improved SR model involving a self-attention augmented Wasserstein generative adversarial network (SAA-WGAN) is designed to mine the reference information among multiple features for detail enhancement. We use an encoder-decoder network followed by a fully convolutional network (FCN) as the backbone to extract multi-scale features and reconstruct the high-resolution (HR) results. To exploit the relevance between multi-layer feature maps, we first integrate a convolutional block attention module (CBAM) into each skip connection of the encoder-decoder subnet, generating weighted maps that automatically enhance both channel-wise and spatial-wise feature representation. Besides, considering that the HR results and LR inputs are highly similar in structure, yet this similarity cannot be fully exploited by traditional attention mechanisms, we designed a self-augmented attention (SAA) module, in which attention weights are produced dynamically via a similarity function between hidden features; this design allows the network to flexibly adjust the relative relevance among multi-layer features and to keep long-range interdependence information, which helps preserve details. In addition, the pixel-wise loss is combined with perceptual and gradient losses to achieve comprehensive supervision. Experiments on benchmark datasets demonstrate that the proposed method outperforms other SR methods in terms of both objective evaluation and visual effect.
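The channel-attention half of a CBAM-style module can be sketched as follows: squeeze each channel with global average- and max-pooling, combine, and gate the channels with a sigmoid. The real CBAM passes the pooled vectors through a shared MLP before the sigmoid; that step is omitted here for brevity, so this is an illustrative simplification, not the module as published.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    """Simplified CBAM-style channel gate on a (C, H, W) feature map:
    pool each channel globally, then rescale channels by a sigmoid
    weight in (0, 1). (CBAM's shared MLP is omitted for brevity.)"""
    avg = feat.mean(axis=(1, 2))       # (C,) average-pooled descriptor
    mx = feat.max(axis=(1, 2))         # (C,) max-pooled descriptor
    gate = sigmoid(avg + mx)           # per-channel attention weight
    return feat * gate[:, None, None]  # broadcast over H and W

feat = np.random.default_rng(3).random((4, 8, 8))  # C=4 feature maps
out = channel_attention(feat)
print(out.shape)  # (4, 8, 8): same shape, channels rescaled
```

In the paper's architecture a module of this kind sits in each skip connection, so the decoder receives re-weighted rather than raw encoder features.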

