Weather Radar Image Superresolution Using a Nonlocal Residual Network

2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Haoxuan Yuan ◽  
Qiangyu Zeng ◽  
Jianxin He

Accurate, high-resolution weather radar images that capture the detailed structure of the radar echo are vital for the analysis and forecasting of extreme weather. Resolution enhancement is typically performed with interpolation schemes, which estimate each value from only a few neighboring data points and ignore the large-scale context of weather radar images. Inspired by the striking performance of convolutional neural networks (CNNs) in feature extraction and by the nonlocal self-similarity of weather radar images, we propose a nonlocal residual network (NLRN) built on a CNN. The network mainly consists of several nonlocal residual blocks (NLRBs), which combine short skip connections (SSCs) with a nonlocal operation to ease training of the deep network and capture large-scale context information. In addition, a long skip connection (LSC) lets the network bypass low-frequency information and focus on high-level features. Extensive experiments on ×2 and ×4 super-resolution reconstruction demonstrate that NLRN achieves superior performance in both quantitative evaluation metrics and visual quality, especially in reconstructing the edges and detailed structure of the weather radar echo.
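
A minimal PyTorch sketch of one nonlocal residual block (NLRB) as described above: an embedded-Gaussian self-attention (nonlocal) operation followed by two convolutions, wrapped in a short skip connection. Channel counts, kernel sizes, and the exact block layout are illustrative assumptions, not the authors' configuration; the long skip connection would wrap a stack of such blocks.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonLocalResidualBlock(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        inter = channels // 2
        # 1x1 convs produce query/key/value embeddings for the nonlocal operation
        self.theta = nn.Conv2d(channels, inter, 1)
        self.phi = nn.Conv2d(channels, inter, 1)
        self.g = nn.Conv2d(channels, inter, 1)
        self.out = nn.Conv2d(inter, channels, 1)
        # local feature extraction after the nonlocal aggregation
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        b, c, h, w = x.shape
        # pairwise similarity over all spatial positions (large-scale context)
        q = self.theta(x).flatten(2).transpose(1, 2)         # B x HW x C'
        k = self.phi(x).flatten(2)                            # B x C' x HW
        v = self.g(x).flatten(2).transpose(1, 2)              # B x HW x C'
        attn = F.softmax(q @ k, dim=-1)                       # B x HW x HW
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)   # B x C' x H x W
        y = x + self.out(y)                                   # nonlocal residual
        y = self.conv2(F.relu(self.conv1(y)))
        return x + y                                          # short skip connection (SSC)
```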

2022 ◽  
Author(s):  
Haoxuan Yuan ◽  
Rahat Ihsan

Abstract. Accurate, high-resolution weather radar data that capture the detailed structure of the radar echo play an important role in the analysis and forecasting of extreme weather. Resolution enhancement is typically done with interpolation schemes, which estimate each value from only a few neighboring data points, resulting in the loss of intense-echo information. To address this limitation, a super-resolution reconstruction algorithm for weather radar data based on adaptive sparse domain selection (ASDS) is proposed in this article. First, the ASDS algorithm learns a compact dictionary from pre-collected weather radar echo patches. Second, the most relevant sub-dictionary is adaptively selected for each low-resolution echo patch during sparse coding, guided by a decision support system. Third, two adaptive regularization terms are introduced to further improve the reconstruction of the edges and intense-echo information of the radar echo. Experimental results show that the ASDS algorithm substantially outperforms interpolation methods for ×2 and ×4 reconstruction in terms of both visual quality and quantitative evaluation metrics.
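
A minimal NumPy sketch of the adaptive sparse domain selection idea described above: sub-dictionaries are assumed to have been learned offline (e.g., by clustering example echo patches), the most correlated sub-dictionary is selected for each low-resolution patch, and the patch is then sparse-coded over it. The nearest-centroid selection rule and the tiny matching-pursuit coder are illustrative assumptions, not the exact algorithm from the paper.

```python
import numpy as np

def select_subdictionary(patch, centroids):
    """Pick the cluster whose centroid is closest to the (normalized) patch."""
    p = patch / (np.linalg.norm(patch) + 1e-12)
    dists = np.linalg.norm(centroids - p, axis=1)
    return int(np.argmin(dists))

def sparse_code(patch, D, n_nonzero=4):
    """Small orthogonal-matching-pursuit-style coder over dictionary D (d x m)."""
    residual = patch.copy()
    support, coeffs = [], None
    for _ in range(n_nonzero):
        idx = int(np.argmax(np.abs(D.T @ residual)))   # best-matching atom
        if idx not in support:
            support.append(idx)
        coeffs, *_ = np.linalg.lstsq(D[:, support], patch, rcond=None)
        residual = patch - D[:, support] @ coeffs
    x = np.zeros(D.shape[1])
    x[support] = coeffs
    return x

# usage sketch: reconstruct one patch with its adaptively selected sub-dictionary
# (centroids: K x d array; sub_dicts: list of d x m dictionaries, learned offline)
# k = select_subdictionary(lr_patch, centroids)
# alpha = sparse_code(lr_patch, sub_dicts[k])
# hr_patch = sub_dicts[k] @ alpha
```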


2012 ◽  
Vol 16 (11) ◽  
pp. 4101-4117 ◽  
Author(s):  
A. Wagner ◽  
J. Seltmann ◽  
H. Kunstmann

Abstract. First results of radar-derived climatology have emerged over recent years, as datasets of appropriate extent have become available. Usually, these statistics are based on time series lasting up to ten years, as continuous storage of radar data was often not achieved before. This kind of climatology demands a high level of data quality. Small deviations or minor systematic under- or overestimations in single radar images become a major source of error in statistical analysis. Extensive corrections of radar data are a crucial prerequisite for radar-derived climatology. We present a new statistical post-correction scheme based on a climatological analysis of seven years of radar data from the Munich weather radar (2000–2006) operated by DWD (German Weather Service). Original radar products are used, subject only to corrections within the signal processor, without any further corrections of single radar images. The aim of this statistical correction is to make up for the average systematic errors caused by clutter, propagation, or measuring effects while conserving small-scale natural variations in space. The statistical correction is based on a thorough analysis of the different causes of possible errors for the Munich weather radar. This analysis revealed the following basic effects: the decrease of rain amount as a function of height and distance from the radar; clutter effects such as clutter remnants after filtering, holes caused by eliminated clutter, or shading effects from obstacles near the radar, visible as spokes; as well as the influence of the bright band. The correction algorithm is correspondingly based on these results. It consists of three modules. The first is an altitude correction that minimises measuring effects. The second module corrects clutter effects and disturbances, and the third realises a mean adjustment to selected rain gauges. Two different sets of radar products are used. The statistical analysis as well as modules 1 and 2 of the correction algorithm are based on frequencies of the six reflectivity levels within the so-called PX product. For correction module 3 and for the validation of the correction algorithm, rain amounts are calculated from the 8-bit so-called DX product. The correction algorithm is designed to post-correct climatological or statistical analyses of radar data with a temporal resolution longer than one year. Because it operates on frequencies of occurrence of radar reflectivities, it can be applied even to radar products such as DWD's cell-tracking product CONRAD. Application (2004–2006) and validation (2007–2009) of this correction algorithm against rain gauges show increased conformity for radar climatology after the statistical correction. In the years 2004 to 2006, the root-mean-square error (RMSE) between mean annual rain amounts of rain gauges and the corresponding radar pixels decreases from 262 mm to 118 mm, excluding those pairs of values where the rain gauges are situated in areas of obviously corrupted radar data. The results for the validation period 2007 to 2009 are based on all pairs of values and show a decline of the RMSE from 322 mm to 174 mm.
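
A short sketch of the validation metric quoted above: the root-mean-square error between mean annual rain amounts at the gauges and the corresponding radar pixels, optionally excluding gauges that lie in areas of known corrupted radar data. Variable names are illustrative.

```python
import numpy as np

def rmse(gauge_mm, radar_mm, valid=None):
    """RMSE between gauge totals and co-located radar totals (both in mm)."""
    gauge_mm = np.asarray(gauge_mm, dtype=float)
    radar_mm = np.asarray(radar_mm, dtype=float)
    if valid is not None:                      # mask out corrupted radar pixels
        gauge_mm, radar_mm = gauge_mm[valid], radar_mm[valid]
    return float(np.sqrt(np.mean((radar_mm - gauge_mm) ** 2)))

# e.g. rmse(annual_gauge_totals, annual_radar_totals, valid=~corrupted_mask)
```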


Author(s):  
S. Hosseinyalamdary ◽  
A. Yilmaz

Laser scanner point clouds have become increasingly important in photogrammetry and computer vision for high-level tasks such as object tracking, object recognition, and scene understanding. However, low-cost laser scanners are noisy, sparse, and prone to systematic errors. This paper proposes a novel 3D super-resolution approach to reconstruct the surfaces of objects in the scene. The method works on sparse, unorganized point clouds and outperforms other surface recovery approaches. Because the proposed approach uses the anisotropic diffusion equation, it does not degrade object boundaries and it preserves the topology of the object.
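
The edge-preserving behavior claimed above comes from the anisotropic diffusion equation. The sketch below illustrates that principle on a 2-D grid using the classic Perona-Malik explicit scheme; it is only a conceptual analogue, not the paper's 3-D point-cloud formulation, and the parameter values are illustrative.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, dt=0.2):
    """Perona-Malik diffusion: smooths flat regions while preserving edges."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # forward differences to the four neighbours
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # diffusivity decays across strong gradients, so edges are preserved
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```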


2019 ◽  
Author(s):  
Edward Y Sheffield

It is usually believed that the low-frequency part of a signal's Fourier spectrum represents its profile, while the high-frequency part represents its details. Conventional light microscopes filter out the high-frequency parts of image signals, so the details of the samples (the objects being imaged) cannot be seen in the blurred images. However, we find that under a certain condition (isolated lighting, also called separated lighting), the low-frequency and high-frequency parts of a signal do not merely represent the profile and details, respectively: either part alone contains the full information (both profile and details) of the sample's structure. Therefore, for samples with spatial frequencies beyond the diffraction limit, even if the high-frequency part of the image is filtered out by the microscope, it is still possible to extract the full information from the low-frequency part. Based on these findings, we propose the technique of Deconvolution Super-resolution (DeSu-re), comprising two methods. One method extracts the full information of the sample's structure directly from the diffraction-blurred image, while the other extracts it directly from part of the observed image's spectrum, e.g., the low-frequency part. Both theoretical analysis and simulation experiments support the above findings and verify the effectiveness of the proposed methods.
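
For orientation, the sketch below shows a generic frequency-domain (Wiener-style) deconvolution of an image blurred by a known point-spread function. It is not the authors' DeSu-re algorithm, whose recovery from the low-frequency part of the spectrum under isolated lighting is more specific; the PSF, SNR value, and overall setup are assumptions used only to illustrate deconvolution itself.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, snr=100.0):
    """Deconvolve `blurred` by the point-spread function `psf` (same shape)."""
    H = np.fft.fft2(np.fft.ifftshift(psf))          # transfer function of the blur
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)    # Wiener filter
    return np.real(np.fft.ifft2(W * G))
```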


Atmosphere ◽  
2019 ◽  
Vol 10 (9) ◽  
pp. 555 ◽  
Author(s):  
Chen ◽  
Zhang ◽  
Liu ◽  
Zeng

Improving the resolution of degraded radar echo images from weather radar systems can aid severe weather forecasting and disaster prevention. Previous approaches to this problem include classical super-resolution (SR) algorithms such as iterative back-projection (IBP) and a recent nonlocal self-similarity sparse representation (NSSR) that exploits the redundancy of radar echo data. However, because radar echoes tend to have rich edge information and contour textures, the textural detail in echoes reconstructed by traditional approaches is typically absent. Inspired by recent advances in faster and deeper neural networks, especially generative adversarial networks (GANs), which are capable of pushing SR solutions toward the natural image manifold, we propose using a GAN to tackle weather radar echo super-resolution and achieve better reconstruction performance, measured by the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM). Using authentic weather radar echo data, we present experimental results and compare the reconstruction performance of the proposed method with the above-mentioned methods. The experimental results show that the GAN-based method generates perceptually superior solutions while achieving higher PSNR/SSIM results.
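
A small sketch of the two quality metrics quoted above, PSNR and SSIM, computed between a reconstructed echo image and its high-resolution reference. PSNR is written out directly; SSIM uses scikit-image. The data range of 1.0 assumes images normalized to [0, 1].

```python
import numpy as np
from skimage.metrics import structural_similarity

def psnr(ref, est, data_range=1.0):
    mse = np.mean((ref.astype(float) - est.astype(float)) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

def ssim(ref, est, data_range=1.0):
    return float(structural_similarity(ref, est, data_range=data_range))
```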


2020 ◽  
Vol 12 (16) ◽  
pp. 2535
Author(s):  
Xiaoxu Ren ◽  
Liangfu Lu ◽  
Jocelyn Chanussot

In recent years, fusing hyperspectral images (HSIs) and multispectral images (MSIs) to acquire super-resolution images (SRIs) has gained tremendous attention. However, some current methods, such as those based on low-rank matrix decomposition, face notable challenges. These algorithms matricize the original image tensor, which loses the structural information of the original image. In addition, owing to the non-uniqueness of matrix decomposition, there is no corresponding theory proving that such an algorithm can guarantee accurate restoration of the fused image. Moreover, the degradation operators are usually unknown or difficult to estimate in practical applications. In this paper, an image fusion method based on joint tensor decomposition (JTF) is proposed, which is more effective and more applicable when the degradation operators are unknown or hard to estimate. Specifically, in the proposed JTF method, we treat the SRI as a three-dimensional tensor and recast the fusion problem as a joint tensor decomposition problem. We then formulate the JTF algorithm, and the experimental results confirm the superior performance of the proposed method in comparison with current popular schemes.
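
A small NumPy sketch of the kind of tensor model that underlies joint-tensor fusion: the SRI is represented by a rank-R CP model, the observed HSI results from degrading its spatial factors, and the observed MSI from degrading its spectral factor. The averaging matrices P1/P2 (spatial downsampling) and P3 (spectral response), as well as the CP form itself, are illustrative assumptions for exposition, not the paper's JTF algorithm or its optimization.

```python
import numpy as np

def cp_reconstruct(A, B, C):
    """Rebuild a 3-way tensor from CP factors A (I x R), B (J x R), C (K x R)."""
    return np.einsum('ir,jr,kr->ijk', A, B, C)

# Forward model: SRI = [[A, B, C]]; HSI = [[P1 A, P2 B, C]]; MSI = [[A, B, P3 C]]
I, J, K, R = 60, 60, 100, 8
A, B, C = np.random.rand(I, R), np.random.rand(J, R), np.random.rand(K, R)
P1 = np.kron(np.eye(I // 4), np.full((1, 4), 0.25))   # 4x spatial downsampling
P2 = np.kron(np.eye(J // 4), np.full((1, 4), 0.25))
P3 = np.random.rand(6, K) / K                          # 6-band spectral response

sri = cp_reconstruct(A, B, C)            # (60, 60, 100) target to be recovered
hsi = cp_reconstruct(P1 @ A, P2 @ B, C)  # (15, 15, 100) low spatial resolution
msi = cp_reconstruct(A, B, P3 @ C)       # (60, 60, 6) low spectral resolution
```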


Complexity ◽  
2019 ◽  
Vol 2019 ◽  
pp. 1-14
Author(s):  
Zhen Hua ◽  
Haicheng Zhang ◽  
Jinjiang Li

Fractal coding techniques are an effective tool for describing image textures. Existing image super-resolution (SR) methods suffer from poor reconstruction at large scale factors and incomplete texture details. In this paper, we propose an SR method based on error compensation and fractal coding. First, quadtree coding is performed on the image, and the similarity between range blocks and domain blocks is established to determine the fractal code. Then, through this similarity relationship, the attractor is reconstructed by super-resolution fractal decoding to obtain an interpolated image. Finally, the fractal error of the fractal code is estimated by a deep residual network, and the estimated error image is added as an error compensation term to the interpolated image to obtain the final reconstruction. The network is trained jointly with a deep branch and a shallow branch, and residual learning is introduced to greatly improve the convergence speed and reconstruction accuracy. Experiments against other state-of-the-art methods on the benchmark datasets Set5, Set14, B100, and Urban100 show that our algorithm achieves competitive performance both quantitatively and qualitatively, with sharp edges and vivid textures. Images at large scale factors are also reconstructed better.
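
A minimal sketch of the fractal-coding step described above: for each range block, a spatially contracted domain block is searched for, and a contrast scale s and brightness offset o are fit by least squares so that s·D + o ≈ R. The fixed block size, exhaustive search, and absence of isometries are simplifying assumptions; the paper additionally uses quadtree partitioning and a residual network for error compensation.

```python
import numpy as np

def encode_range_block(R, image, r=4):
    """Find the best domain block (2r x 2r, averaged to r x r) for range block R."""
    best = None
    H, W = image.shape
    for y in range(0, H - 2 * r + 1, r):
        for x in range(0, W - 2 * r + 1, r):
            D = image[y:y + 2 * r, x:x + 2 * r]
            D = D.reshape(r, 2, r, 2).mean(axis=(1, 3))   # spatial contraction
            # least-squares fit of s, o in  s * D + o ≈ R
            d, rr = D.ravel(), R.ravel()
            s, o = np.polyfit(d, rr, 1)
            err = np.sum((s * d + o - rr) ** 2)
            if best is None or err < best[0]:
                best = (err, (y, x), float(s), float(o))
    return best   # (error, domain position, scale, offset) = the fractal code
```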


2021 ◽  
Vol 11 (22) ◽  
pp. 10803
Author(s):  
Jiagang Song ◽  
Yunwu Lin ◽  
Jiayu Song ◽  
Weiren Yu ◽  
Leyuan Zhang

Massive amounts of multimedia data with geographical information (geo-multimedia) are collected and stored on the Internet due to the wide application of location-based services (LBS). Finding the high-level semantic relationships among geo-multimedia data and constructing an efficient index are crucial for large-scale geo-multimedia retrieval. To address this challenge, this paper proposes a deep cross-modal hashing framework for geo-multimedia retrieval, termed Triplet-based Deep Cross-Modal Retrieval (TDCMR), which utilizes a deep neural network and an enhanced triplet constraint to capture high-level semantics. In addition, a novel hybrid index, called TH-Quadtree, is developed by combining cross-modal binary hash codes with a quadtree to support high-performance search. Extensive experiments are conducted on three commonly used benchmarks, and the results show the superior performance of the proposed method.
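
A minimal PyTorch sketch of the triplet idea named above: two modality encoders map image and text features to hash-like codes, and a triplet margin loss pulls matching cross-modal pairs together while pushing non-matching negatives apart. The network sizes, the tanh relaxation of the binary codes, and the plain margin loss are illustrative assumptions, not the exact enhanced-triplet formulation of TDCMR.

```python
import torch
import torch.nn as nn

class HashEncoder(nn.Module):
    def __init__(self, in_dim, code_bits=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, code_bits), nn.Tanh(),   # relaxed binary code in (-1, 1)
        )

    def forward(self, x):
        return self.net(x)

image_enc = HashEncoder(in_dim=2048)    # e.g. CNN image features (assumed)
text_enc = HashEncoder(in_dim=300)      # e.g. text embedding features (assumed)
loss_fn = nn.TripletMarginLoss(margin=1.0)

def training_step(img_feat, pos_txt_feat, neg_txt_feat):
    anchor = image_enc(img_feat)          # anchor from the image modality
    positive = text_enc(pos_txt_feat)     # matching text (same semantics)
    negative = text_enc(neg_txt_feat)     # non-matching text
    return loss_fn(anchor, positive, negative)

# At retrieval time the codes are binarized, e.g. torch.sign(image_enc(x)),
# and stored in the hybrid spatial index (quadtree leaves holding hash codes).
```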


2012 ◽  
Vol 9 (4) ◽  
pp. 4703-4746
Author(s):  
A. Wagner ◽  
J. Seltmann ◽  
H. Kunstmann

Abstract. Extensive corrections of radar data are a crucial prerequisite for radar-derived climatology. This kind of climatology demands a high level of data quality: small deviations or minor systematic underestimations or overestimations in single radar images become a major source of error in statistical analysis. First results of radar-derived climatology have emerged over recent years, as data sets of appropriate extent have become available. Usually, these statistics are based on time series lasting up to ten years, as storage of radar data was not achieved before. We present a new statistical post-correction scheme based on seven years of radar data from the Munich weather radar (2000–2006), operated by DWD (German Weather Service). The typical correction algorithms for single radar images, such as clutter corrections, are used. An additional statistical post-correction based on the results of a climatological analysis of radar images then follows. The aim of this statistical correction is to correct systematic errors caused by clutter or measuring effects while conserving small-scale natural variations in space. The statistical correction is based on a thorough analysis of the different causes of possible errors for the Munich weather radar. This analysis revealed the following basic effects: the decrease of rain rate with height and distance from the radar; clutter effects such as remaining clutter, eliminated clutter, or shading effects from obstacles near the radar, visible as spokes; as well as the influence of the bright band. The correction algorithm is correspondingly based on these results. It consists of three modules. The first is an altitude correction, which minimizes measuring effects. The second module corrects clutter effects, and the third realizes a mean adjustment to selected rain gauges. Two different radar products are used. The statistical analysis as well as modules 1 and 2 of the correction algorithm are based on frequencies of occurrence of the so-called PX product with six reflectivity levels. For correction module 3 and for the validation of the correction algorithm, rain rates are calculated from the 8-bit so-called DX product. An application (2004–2006) and a validation (2007–2009) of this correction algorithm against rain gauges show much higher conformity for radar climatology after the statistical correction. In the years 2004 to 2006, the root-mean-square error (RMSE) decreases from 262 mm to 118 mm, excluding those pairs of values where the rain gauges are situated in areas of obviously corrupted radar data. The results for the validation period 2007 to 2009 are based on all pairs of values and show a decline of the RMSE from 322 mm to 174 mm.
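
A short sketch of the kind of mean adjustment performed in module 3 above: a single multiplicative factor, the ratio of the mean gauge rain amount to the mean of the corresponding radar pixels, is applied to the radar field. The use of one global factor and the variable names are illustrative simplifications, not the module's exact procedure.

```python
import numpy as np

def mean_adjust(radar_field, gauge_values, radar_at_gauges):
    """Scale the radar field so its mean at gauge locations matches the gauges."""
    factor = np.mean(gauge_values) / np.mean(radar_at_gauges)
    return radar_field * factor, factor

# e.g. adjusted, f = mean_adjust(annual_radar_mm, gauge_mm, radar_mm_at_gauges)
```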


2017 ◽  
Author(s):  
Tino Pleiner ◽  
Mark Bates ◽  
Dirk Görlich

Abstract. Polyclonal anti-IgG secondary antibodies are essential tools for many molecular biology techniques and diagnostic tests. Their animal-based production is, however, a major ethical problem. Here, we introduce a sustainable alternative, namely nanobodies against all mouse IgG subclasses and rabbit IgG. They can be produced at large scale in E. coli and could thus make secondary-antibody production in animals obsolete. Their recombinant nature allows fusion with affinity tags or reporter enzymes as well as efficient maleimide chemistry for fluorophore coupling. We demonstrate their superior performance in Western blotting, in both peroxidase- and fluorophore-linked form. Their site-specific labeling with multiple fluorophores creates bright imaging reagents for confocal and super-resolution microscopy with much smaller label displacement than traditional secondary antibodies. They also enable simpler and faster immunostaining protocols and even allow multi-target localization with primary IgGs from the same species and of the same class.

