Deep Learning Reconstruction Method of Meteorological Radar Echo Data based on Satellite Data

Author(s):  
Su Jiang ◽  
Yunxin Huang ◽  
Fuhan Zhang
2010 ◽  
Vol 69 (17) ◽  
pp. 1517-1527 ◽  
Author(s):  
Ye. N. Belov ◽  
O. A. Voytovich ◽  
T. A. Makulina ◽  
G. A. Rudnev ◽  
G. I. Khlopov ◽  
...  

2019 ◽  
Vol 9 (22) ◽  
pp. 4749
Author(s):  
Lingyun Jiang ◽  
Kai Qiao ◽  
Linyuan Wang ◽  
Chi Zhang ◽  
Jian Chen ◽  
...  

Decoding human brain activities, especially reconstructing human visual stimuli via functional magnetic resonance imaging (fMRI), has gained increasing attention in recent years. However, the high dimensionality and small quantity of fMRI data restrict satisfactory reconstruction, especially for deep learning methods that require large amounts of labelled samples. Unlike such methods, humans can recognize a new image because the human visual system naturally extracts features from any object and compares them. Inspired by this visual mechanism, we introduced a comparison mechanism into the deep learning method to achieve better visual reconstruction, making full use of each sample and of the relationship within each sample pair by learning to compare. On this basis, we proposed a Siamese reconstruction network (SRN) method. Using the SRN, we obtained satisfactory results on two fMRI recording datasets: 72.5% accuracy on the digit dataset and 44.6% accuracy on the character dataset. Essentially, this approach increases the training data from n samples to about 2n sample pairs, taking full advantage of the limited quantity of training samples. The SRN learns to draw together sample pairs of the same class and disperse sample pairs of different classes in feature space.
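A minimal sketch (PyTorch) of the pairing idea described above: two inputs pass through one shared encoder, and a contrastive loss pulls same-class pairs together and pushes different-class pairs apart in feature space. The layer sizes, input dimensionality, and names are illustrative assumptions, not the authors' actual SRN architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseEncoder(nn.Module):
    def __init__(self, in_dim=3092, feat_dim=128):   # in_dim: fMRI voxel count (assumed)
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, feat_dim),
        )

    def forward(self, x1, x2):
        # Both branches share the same weights: one encoder applied to both inputs.
        return self.net(x1), self.net(x2)

def contrastive_loss(f1, f2, same_class, margin=1.0):
    # same_class: 1.0 if the pair comes from the same stimulus class, else 0.0.
    d = F.pairwise_distance(f1, f2)
    return torch.mean(same_class * d.pow(2) +
                      (1 - same_class) * F.relu(margin - d).pow(2))

# Usage: every pair of training samples yields one comparison, which is how
# a small set of samples turns into a much larger set of training pairs.
enc = SiameseEncoder()
x1, x2 = torch.randn(8, 3092), torch.randn(8, 3092)
labels = torch.randint(0, 2, (8,)).float()
loss = contrastive_loss(*enc(x1, x2), labels)
loss.backward()
```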


Author(s):  
Ryan Lagerquist ◽  
Jebb Q. Stewart ◽  
Imme Ebert-Uphoff ◽  
Christina Kumler

Predicting the timing and location of thunderstorms (“convection”) allows for preventive actions that can save both lives and property. We have applied U-nets, a deep-learning-based type of neural network, to forecast convection on a grid at lead times up to 120 minutes. The goal is to make skillful forecasts with only present and past satellite data as predictors. Specifically, predictors are multispectral brightness-temperature images from the Himawari-8 satellite, while targets (ground truth) are provided by weather radars in Taiwan. U-nets are becoming popular in atmospheric science due to their advantages for gridded prediction. Furthermore, we use three novel approaches to advance U-nets in atmospheric science. First, we compare three architectures (vanilla, temporal, and U-net++) and find that vanilla U-nets are best for this task. Second, we train U-nets with the fractions skill score, which is spatially aware, as the loss function. Third, because we do not have adequate ground truth over the full Himawari-8 domain, we train the U-nets with small radar-centered patches, then apply trained U-nets to the full domain. Also, we find that the best predictions are given by U-nets trained with satellite data from multiple lag times, not only the present. We evaluate U-nets in detail (by time of day, month, and geographic location) and compare to persistence models. The U-nets outperform persistence at lead times ≥ 60 minutes, and at all lead times the U-nets provide a more realistic climatology than persistence. Our code is available publicly.
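A minimal sketch of a fractions-skill-score (FSS) loss for gridded convection forecasts, assuming predictions and targets are probability grids of shape (batch, 1, H, W). Using average pooling to form neighborhood event fractions follows the standard FSS definition; the window size is an assumption and this is not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def fss_loss(pred, target, window=9):
    # Neighborhood fractions: mean event coverage in a window x window box.
    pad = window // 2
    pf = F.avg_pool2d(pred, window, stride=1, padding=pad)
    po = F.avg_pool2d(target, window, stride=1, padding=pad)
    mse = torch.mean((pf - po) ** 2)
    ref = torch.mean(pf ** 2) + torch.mean(po ** 2)
    fss = 1.0 - mse / (ref + 1e-8)
    return 1.0 - fss   # minimizing the loss maximizes the FSS

# Usage on a dummy batch of forecast/observation grids:
pred = torch.sigmoid(torch.randn(2, 1, 64, 64, requires_grad=True))
target = (torch.rand(2, 1, 64, 64) > 0.9).float()
loss = fss_loss(pred, target)
loss.backward()
```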


2020 ◽  
Vol 1550 ◽  
pp. 032051
Author(s):  
Yun-peng Liu ◽  
Xing-peng Yan ◽  
Ning Wang ◽  
Xin Zhang ◽  
Zhe Li

2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Jie Shen ◽  
Mengxi Xu ◽  
Xinyu Du ◽  
Yunbo Xiong

Video surveillance is an important data source for urban computing and intelligence. The low resolution of many existing video surveillance devices affects the efficiency of urban computing and intelligence, so improving the resolution of video surveillance is one of its important tasks. In this paper, the resolution of video is improved by superresolution reconstruction based on a learning method. Unlike superresolution reconstruction of static images, superresolution reconstruction of video is characterized by the use of motion information, yet there have been few studies in this area so far. Aiming to fully exploit motion information to improve video superresolution, this paper proposes a superresolution reconstruction method based on an efficient subpixel convolutional neural network, where optical flow is introduced into the deep learning network. Fusing optical-flow features between successive frames compensates for missing inter-frame information and generates high-quality superresolution results. In addition, to further improve the superresolution, a subpixel convolution layer is added after the deep convolutional network. Finally, experimental evaluations demonstrate that our method performs satisfactorily compared with previous methods and other deep learning networks, and is more efficient.
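A minimal sketch of the subpixel (pixel-shuffle) upscaling idea used in ESPCN-style video superresolution, assuming single-channel frames and a 4x scale factor. The optical-flow fusion is indicated only as a placeholder concatenation of a flow-warped previous frame; the paper's full network is not reproduced here, and the layer choices are illustrative.

```python
import torch
import torch.nn as nn

class SubpixelSR(nn.Module):
    def __init__(self, scale=4, in_ch=2):   # in_ch=2: current frame + flow-warped previous frame
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 64, 5, padding=2), nn.Tanh(),
            nn.Conv2d(64, 32, 3, padding=1), nn.Tanh(),
            # The final conv outputs scale^2 channels, which PixelShuffle
            # rearranges into a (scale*H, scale*W) single-channel image.
            nn.Conv2d(32, scale ** 2, 3, padding=1),
        )
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, frame, warped_prev):
        x = torch.cat([frame, warped_prev], dim=1)
        return self.shuffle(self.features(x))

# Usage: low-resolution 64x64 inputs become 256x256 outputs.
net = SubpixelSR()
lr, prev = torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64)
print(net(lr, prev).shape)   # torch.Size([1, 1, 256, 256])
```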


2020 ◽  
Vol 10 (7) ◽  
pp. 2279
Author(s):  
Vanshika Gupta ◽  
Sharad Kumar Gupta ◽  
Jungrack Kim

Machine learning (ML) algorithmic developments and improvements in Earth and planetary science are expected to bring enormous benefits for areas such as geospatial database construction, automated geological feature reconstruction, and surface dating. In this study, we aim to develop a deep learning (DL) approach to reconstruct discontinuities in the subsurface environment of Mars employing the echoes of the Shallow Radar (SHARAD), a subsurface sounding radar aboard the Mars Reconnaissance Orbiter (MRO). Although SHARAD has produced highly valuable information about the Martian subsurface, interpreting its radar echoes is a challenging task given the vast volume of data and the noisy signals. Therefore, we introduced a 3D subsurface mapping strategy consisting of radar echo pre-processors and a DL algorithm to automatically detect subsurface discontinuities. The developed components of the DL algorithm were synthesized into a subsurface mapping scheme and applied over a few target areas, such as mid-latitude lobate debris aprons (LDAs), polar deposits, and shallow icy bodies around the Phoenix landing site. The outcomes of the subsurface discontinuity detection scheme were rigorously validated by computing several quality metrics, such as accuracy, recall, and the Jaccard index. Based on this ongoing development and its output, we expect to automatically trace the shapes of Martian subsurface icy structures with further improvements to the DL algorithm.
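A minimal sketch of the validation metrics named above (accuracy, recall, Jaccard index) for binary discontinuity detections, assuming the detections and the reference interpretation are boolean masks on the same radargram grid. Variable names are illustrative.

```python
import numpy as np

def detection_metrics(pred, truth):
    # Confusion-matrix counts over the whole mask.
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    jaccard = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0
    return {"accuracy": accuracy, "recall": recall, "jaccard": jaccard}

# Usage on a toy 2x2 mask:
pred = np.array([[1, 0], [1, 1]])
truth = np.array([[1, 0], [0, 1]])
print(detection_metrics(pred, truth))  # accuracy 0.75, recall 1.0, jaccard ~0.67
```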

