Downscaling of Precipitation Forecasts Based on Single Image Super-Resolution

Author(s):  
Yan Ji ◽  
Xiefei Zhi ◽  
Ye Tian ◽  
Ting Peng ◽  
Ziqiang Huo ◽  
...  

High-spatial-resolution weather forecasts that capture regional-scale dynamics are important for natural-hazard prevention, especially in regions characterized by complex topography and distinct local climates. While deep convolutional neural networks have made great progress in single image super-resolution (SR), which learns the mapping between low- and high-resolution images, limited effort has been made to explore the potential of this approach for downscaling. In this study, three advanced SR deep learning frameworks, the Super-Resolution Convolutional Neural Network (SRCNN), Super-Resolution Generative Adversarial Network (SRGAN), and Enhanced Deep residual networks for Super-Resolution (EDSR), are proposed for downscaling forecasts of daily precipitation in southeast China (100°E–130°E, 15°N–35°N). The SR frameworks are designed to improve the horizontal resolution of daily precipitation forecasts from the raw 1/2 degree (~50 km) to 1/4 degree (~25 km) and 1/8 degree (~12.5 km), respectively. For comparison, Bias Correction Spatial Disaggregation (BCSD), a traditional statistical downscaling (SD) method, is performed under the same framework. The precipitation forecasts used in our work are obtained from the ECMWF, NCEP, and JMA Ensemble Prediction Systems (EPSs) provided by the TIGGE datasets. A group of metrics is applied to assess the performance of the three SR models: Root Mean Square Error (RMSE), Anomaly Correlation Coefficient (ACC), and Equitable Threat Score (ETS). Results show that the three SR models can effectively capture the detailed spatial information of local precipitation that is missed by global NWP models. Among the three SR models, EDSR obtains the best results, with lower RMSE and higher ACC, indicating better downscaling skill. Furthermore, the SR downscaling methods can be extended to the statistical downscaling of other predictors as well.
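SRCNN-style downscaling pipelines typically begin by interpolating the coarse field onto the target grid and then let the network refine the interpolated values. The following numpy sketch shows that first step for a precipitation grid; the function name and the grid sizes are illustrative, not from the paper, and a trained SR model would be applied to the result.

```python
import numpy as np

def bilinear_upsample_2x(grid):
    """Double the resolution of a 2-D field by bilinear interpolation.
    SRCNN-style pipelines interpolate the coarse field to the target
    grid first, then let the network refine the interpolated values."""
    h, w = grid.shape
    # Target-pixel centers mapped back into source coordinates.
    ys = np.clip((np.arange(2 * h) + 0.5) / 2 - 0.5, 0, h - 1)
    xs = np.clip((np.arange(2 * w) + 0.5) / 2 - 0.5, 0, w - 1)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = grid[np.ix_(y0, x0)] * (1 - wx) + grid[np.ix_(y0, x1)] * wx
    bot = grid[np.ix_(y1, x0)] * (1 - wx) + grid[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

coarse = np.random.rand(40, 60)      # toy ~1/2-degree daily precipitation field
fine = bilinear_upsample_2x(coarse)  # ~1/4-degree target grid
print(fine.shape)                    # (80, 120)
```

Applying the same function again would take the field to the ~1/8-degree grid; BCSD would instead disaggregate bias-corrected values with observed climatological patterns.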

2021 ◽  
Vol 12 (6) ◽  
pp. 1-20
Author(s):  
Fayaz Ali Dharejo ◽  
Farah Deeba ◽  
Yuanchun Zhou ◽  
Bhagwan Das ◽  
Munsif Ali Jatoi ◽  
...  

Single image super-resolution (SISR) produces a high-resolution image with fine spatial detail from a remotely sensed image of low spatial resolution. Recently, deep learning and generative adversarial networks (GANs) have made breakthroughs on this challenging task. However, the generated images still suffer from undesirable artifacts such as missing texture-feature representation and high-frequency information. We propose a frequency-domain-based spatio-temporal remote sensing SISR technique that reconstructs the HR image with generative adversarial networks operating on separate frequency bands (TWIST-GAN). The method combines Wavelet Transform (WT) characteristics with a transferred generative adversarial network: the LR image is split into frequency bands by the WT, the transferred GAN predicts the high-frequency components through the proposed architecture, and the inverse wavelet transform finally produces the super-resolved image. The model is first trained on the external DIV2K dataset and validated on the UC Merced Landsat remote sensing dataset and Set14, with each image sized 256 × 256. Transferred GANs are then used to process spatio-temporal remote sensing images to minimize differences in computation cost and improve texture information. The findings are compared qualitatively and quantitatively with current state-of-the-art approaches. In addition, by eliminating batch normalization layers, our simplified version saves about 43% of GPU memory during training and executes faster.
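The WT stage splits an image into one low-frequency band and three high-frequency detail bands. A minimal stand-in is the one-level 2-D Haar transform below (real pipelines typically use a wavelet library such as PyWavelets); it is exactly invertible, which is what lets the inverse transform reassemble the super-resolved image from the predicted bands.

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2-D Haar wavelet transform.
    Returns the low-frequency approximation (LL) and the three
    high-frequency detail bands (LH, HL, HH)."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4
    lh = (a - b + c - d) / 4   # horizontal detail
    hl = (a + b - c - d) / 4   # vertical detail
    hh = (a - b - c + d) / 4   # diagonal detail
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse transform: perfectly reconstructs the original image."""
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll - lh + hl - hh
    out[1::2, 0::2] = ll + lh - hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

img = np.random.rand(256, 256)        # toy stand-in for a 256 x 256 input
ll, lh, hl, hh = haar_dwt2(img)
print(ll.shape)                       # (128, 128)
```

In the TWIST-GAN setting the generator would replace the LH/HL/HH bands with predicted high-frequency content before `haar_idwt2` is applied.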


2019 ◽  
Vol 11 (15) ◽  
pp. 1817 ◽  
Author(s):  
Jun Gu ◽  
Xian Sun ◽  
Yue Zhang ◽  
Kun Fu ◽  
Lei Wang

Recently, deep convolutional neural networks (DCNNs) have obtained promising results in single image super-resolution (SISR) of remote sensing images. Due to the high complexity of remote sensing image distributions, most existing methods still fall short for remote sensing image super-resolution, and enhancing the representation ability of the network is one of the critical factors for improving performance. To address this problem, we propose a new SISR algorithm called the Deep Residual Squeeze and Excitation Network (DRSEN). Specifically, we propose a residual squeeze and excitation block (RSEB) as the building block of DRSEN. The RSEB fuses the input with the internal features of the current block and models the interdependencies between channels to enhance representation power. At the same time, we improve the up-sampling module and the global residual pathway to reduce the number of network parameters. Experiments on two public remote sensing datasets (UC Merced and NWPU-RESISC45) show that DRSEN achieves better accuracy and visual quality than most state-of-the-art methods, contributing to progress in remote sensing image super-resolution.
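The channel-attention part of an RSEB follows the standard squeeze-and-excitation recipe: globally pool each channel, pass the pooled vector through a small bottleneck, and use a sigmoid gate to rescale the channels. A simplified numpy sketch, with placeholder random weights standing in for the learned fully connected layers:

```python
import numpy as np

def squeeze_excitation(feat, w1, w2):
    """Squeeze-and-excitation channel attention (illustrative sketch;
    w1 and w2 are placeholders for learned FC-layer weights).
    feat: (C, H, W) feature map."""
    z = feat.mean(axis=(1, 2))          # squeeze: global average pool -> (C,)
    s = np.maximum(w1 @ z, 0)           # excitation: FC + ReLU -> (C // r,)
    s = 1 / (1 + np.exp(-(w2 @ s)))     # FC + sigmoid -> per-channel gates in (0, 1)
    return feat * s[:, None, None]      # recalibrate each channel

rng = np.random.default_rng(0)
feat = rng.standard_normal((16, 8, 8))
w1 = rng.standard_normal((4, 16)) * 0.1   # reduction ratio r = 4
w2 = rng.standard_normal((16, 4)) * 0.1
out = squeeze_excitation(feat, w1, w2)
print(out.shape)                          # (16, 8, 8)
```

In the full RSEB this recalibration is applied to the block's internal features before they are fused with the block input through the residual connection.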


2019 ◽  
Vol 9 (15) ◽  
pp. 2992 ◽  
Author(s):  
Xi Cheng ◽  
Xiang Li ◽  
Jian Yang

Single-image super-resolution is an important low-level computer-vision task. Recent approaches based on deep convolutional neural networks have achieved impressive performance, but existing architectures are limited by less sophisticated structures and weaker representational power. In this work, to significantly enhance feature representation, we propose the triple-attention mixed-link network (TAN), which consists of (1) attention mechanisms over three different aspects (kernel, spatial, and channel) and (2) a fusion of powerful residual and dense connections (i.e., mixed link). Specifically, the multi-kernel network learns multi-hierarchical representations under different receptive fields. The features are recalibrated by the kernel and channel attention, which filters the information and enables the network to learn more powerful representations. The features finally pass through the spatial attention in the reconstruction network, which fuses local and global information, lets the network restore more details, and improves reconstruction quality. The proposed structure reduces the parameter growth rate by 50% compared with previous approaches, and the three attention mechanisms provide gains of 0.49 dB, 0.58 dB, and 0.32 dB when evaluated on Set5, Set14, and BSD100. Thanks to the diverse feature recalibrations and the advanced information-flow topology, the proposed model is competitive with state-of-the-art methods on benchmark evaluations.
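Where channel attention reweights whole feature channels, spatial attention produces one gate per pixel so the reconstruction stage can emphasize detail-rich locations. A common simplified form (not the paper's exact module) pools across channels and squashes the result through a sigmoid:

```python
import numpy as np

def spatial_attention(feat):
    """Spatial attention in the spirit of TAN's reconstruction stage
    (simplified sketch): pool across channels, form a per-pixel gate,
    and reweight every spatial location of the feature map.
    feat: (C, H, W) feature map."""
    avg = feat.mean(axis=0)               # (H, W) channel-average pool
    mx = feat.max(axis=0)                 # (H, W) channel-max pool
    gate = 1 / (1 + np.exp(-(avg + mx)))  # sigmoid gate per spatial position
    return feat * gate[None, :, :]        # same gate applied to all channels

feat = np.random.rand(8, 16, 16)
out = spatial_attention(feat)
print(out.shape)                          # (8, 16, 16)
```

A learned variant would compute the gate with a small convolution over the pooled maps instead of the plain sum used here.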

