A Method of Weather Radar Echo Extrapolation Based on Convolutional Neural Networks

Author(s):  
En Shi ◽  
Qian Li ◽  
Daquan Gu ◽  
Zhangming Zhao
2019 ◽  
Vol 10 (11) ◽  
pp. 1908-1922 ◽  

2021 ◽  
Author(s):  
Matej Choma ◽  
Jakub Bartel ◽  
Petr Šimánek ◽  
Vojtěch Rybář

The standard for weather radar nowcasting in the Central European region is the COTREC extrapolation method. We propose a recurrent neural network based on the PredRNN architecture that outperforms the COTREC 60-minute predictions by a significant margin.

Nowcasting, as a complement to numerical weather prediction, is a well-known concept. However, the increasing speed of information flow in today's society creates an opportunity for its effective deployment. The methods currently used for these predictions are primarily based on optical flow and struggle to predict the development of echo shape and intensity.

In this work, we take a data-driven approach and build on advances in the capabilities of neural networks for computer vision. We define the prediction task as the extrapolation of sequences of the latest weather radar echo measurements. To correctly capture the spatiotemporal behaviour of rainfall and storms, we propose a recurrent neural network combining long short-term memory (LSTM) techniques with convolutional neural networks (CNNs). Our approach is applicable to any geographical area, radar network resolution and refresh rate.

We conducted experiments comparing predictions 10 to 60 minutes into the future using the Critical Success Index (CSI), which evaluates the spatial accuracy of the predicted echo, and the Mean Squared Error (MSE). Our neural network model was trained on three years of rainfall data captured by weather radars over the Czech Republic. Results for our bordered testing domain show that our method achieves comparable or better scores than both COTREC and the optical flow extrapolation methods available in the open-source pySTEPS and rainymotion libraries.

With this work, we aim to contribute to nowcasting research in general and to create another source of short-term predictions for both experts and the general public.
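The LSTM-plus-CNN combination described in the abstract can be sketched with a single ConvLSTM cell rolled over a frame sequence. This is a minimal NumPy illustration of the general ConvLSTM idea, not the authors' PredRNN implementation; all shapes, kernel sizes and the 1x1 readout are hypothetical choices for the sketch.

```python
import numpy as np

def conv2d_same(x, k):
    """Naive 2-D convolution with 'same' padding, stride 1.
    x: (C_in, H, W), k: (C_out, C_in, kh, kw)."""
    c_out, c_in, kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((0, 0), (ph, ph), (pw, pw)))
    H, W = x.shape[1], x.shape[2]
    out = np.zeros((c_out, H, W))
    for o in range(c_out):
        for i in range(H):
            for j in range(W):
                out[o, i, j] = np.sum(xp[:, i:i + kh, j:j + kw] * k[o])
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ConvLSTMCell:
    """Minimal ConvLSTM cell: the usual LSTM gates are computed with 2-D
    convolutions over [x; h], so the hidden state keeps its spatial layout."""
    def __init__(self, in_ch, hid_ch, ksize=3, seed=0):
        rng = np.random.default_rng(seed)
        # one kernel bank producing all four gates (i, f, o, g) at once
        self.w = 0.1 * rng.standard_normal((4 * hid_ch, in_ch + hid_ch, ksize, ksize))

    def step(self, x, h, c):
        z = conv2d_same(np.concatenate([x, h], axis=0), self.w)
        i, f, o, g = np.split(z, 4, axis=0)
        i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
        c_new = f * c + i * g          # convolutional cell-state update
        h_new = o * np.tanh(c_new)
        return h_new, c_new

# Roll the cell over a short sequence of (hypothetical) radar echo frames,
# then map the hidden state back to one echo channel with a 1x1 readout.
rng = np.random.default_rng(1)
cell = ConvLSTMCell(in_ch=1, hid_ch=4)
h = np.zeros((4, 8, 8))
c = np.zeros((4, 8, 8))
frames = rng.random((5, 1, 8, 8))      # 5 past echo frames, 8x8 pixels
for x in frames:
    h, c = cell.step(x, h, c)
w_out = 0.1 * rng.standard_normal((1, 4, 1, 1))
pred = conv2d_same(h, w_out)           # extrapolated next frame
print(pred.shape)                      # (1, 8, 8)
```

In practice such cells are stacked and trained end-to-end; the sketch only shows why convolutional gates let the recurrence track spatial echo structure.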

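The Critical Success Index used in the evaluation above is straightforward to compute from binary exceedance masks of the predicted and observed echo; a minimal sketch, where the threshold value is an assumption for illustration:

```python
import numpy as np

def critical_success_index(pred, obs, threshold=0.5):
    """CSI = hits / (hits + misses + false alarms), computed on the
    masks where predicted / observed echo exceed a threshold."""
    p = pred >= threshold
    o = obs >= threshold
    hits = np.sum(p & o)
    misses = np.sum(~p & o)
    false_alarms = np.sum(p & ~o)
    denom = hits + misses + false_alarms
    return hits / denom if denom else np.nan

pred = np.array([[0.8, 0.2], [0.6, 0.1]])
obs = np.array([[0.9, 0.7], [0.4, 0.05]])
print(critical_success_index(pred, obs))  # 1 hit, 1 miss, 1 false alarm -> 1/3
```

Unlike MSE, the CSI ignores pixels where neither field exceeds the threshold, which is why it is the standard spatial-accuracy score for precipitation fields dominated by dry area.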

2020 ◽  
Vol 2020 (10) ◽  
pp. 28-1-28-7 ◽  
Author(s):  
Kazuki Endo ◽  
Masayuki Tanaka ◽  
Masatoshi Okutomi

Classification of degraded images is important in practice because images are often degraded by compression, noise, blurring, etc. Nevertheless, most image classification research focuses only on clean images without any degradation. Some papers have proposed deep convolutional neural networks that combine an image restoration network with a classification network to classify degraded images. This paper proposes an alternative approach that uses the degraded image together with an additional degradation parameter for classification. The proposed classification network takes two inputs: the degraded image and the degradation parameter. An estimation network for the degradation parameter is also incorporated when the degradation parameters of degraded images are unknown. Experimental results show that the proposed method outperforms a straightforward approach in which the classification network is trained with degraded images only.
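The two-input idea can be illustrated with a toy network; this is a minimal NumPy sketch, not the paper's architecture — the noise-level parameter `sigma`, the layer sizes and the weight initialisation are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def two_input_classifier(image, sigma, w1, w2):
    """Toy two-input classifier: the degraded image and its degradation
    parameter (here a scalar noise level) are fed jointly to the network."""
    feat = image.ravel()                       # flattened image features
    x = np.concatenate([feat, [sigma]])        # append the degradation parameter
    hidden = np.maximum(0.0, w1 @ x)           # ReLU layer
    logits = w2 @ hidden
    e = np.exp(logits - logits.max())          # stable softmax
    return e / e.sum()                         # class probabilities

img = rng.random((8, 8))                       # a (hypothetical) degraded image
sigma = 0.3                                    # known degradation parameter
w1 = 0.1 * rng.standard_normal((16, 8 * 8 + 1))
w2 = 0.1 * rng.standard_normal((10, 16))
probs = two_input_classifier(img, sigma, w1, w2)
print(probs.shape)                             # (10,)
```

When `sigma` is unknown, the paper's estimation network would supply it; in this sketch it would simply be another function producing the scalar that is concatenated onto the feature vector.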

