Deep-learning based climate downscaling using the super-resolution method: a case study over the western US

2020 ◽  
Author(s):  
Xingying Huang

Abstract. Demand for high-resolution climate information is growing rapidly to fulfill the needs of both scientists and stakeholders. However, deriving high-quality fine-resolution information is still challenging due to either the complexity of a dynamical climate model or the uncertainty of an empirical statistical model. In this work, a new downscaling framework is developed using the deep-learning based super-resolution method to generate very high-resolution output from coarse-resolution input. The modeling framework has been trained, tested, and validated for generating high-resolution (here, 4 km) climate data, focusing on temperature and precipitation at the daily scale from 1981 to 2010. This newly designed downscaling framework is composed of multiple convolutional layers involving batch normalization, rectified linear units (ReLU), and skip-connection strategies, with different loss functions explored. The overall logic of this modeling framework is to learn optimal parameters from the training data for subsequent prediction applications. This new method and framework are found to largely reduce the time and computational cost (~23 milliseconds for one-day inference) of climate downscaling compared to current downscaling strategies. The strengths and limitations of this deep-learning based downscaling have been investigated and evaluated using both fine-scale gridded observations and dynamical downscaling data from regional climate models. The performance of this deep-learning framework is found to be competitive in both generating spatial details and maintaining temporal evolution at a very fine grid scale. It is promising that this deep-learning based downscaling method can be a powerful and effective way to retrieve fine-scale climate information from other coarse-resolution climate data.
When seeking an efficient and affordable way to perform intensive climate downscaling, an optimized convolutional neural network framework like the one explored here could be an alternative option and be applied to a broad range of related applications.
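The building block the abstract describes (convolution, batch normalization, ReLU, and a skip connection) can be sketched in plain NumPy. This is an illustrative forward pass only, with hypothetical shapes and weights, not the authors' actual network or training code:

```python
import numpy as np

def conv2d(x, w):
    """Naive 3x3 'same' convolution: x is (H, W, Cin), w is (3, 3, Cin, Cout)."""
    H, W, _ = x.shape
    xp = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros((H, W, w.shape[-1]))
    for i in range(3):
        for j in range(3):
            out += np.einsum('hwc,co->hwo', xp[i:i + H, j:j + W, :], w[i, j])
    return out

def batch_norm(x, eps=1e-5):
    """Normalize each channel to zero mean and unit variance."""
    mu = x.mean(axis=(0, 1), keepdims=True)
    var = x.var(axis=(0, 1), keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def residual_block(x, w1, w2):
    """conv -> batch norm -> ReLU -> conv, plus the skip connection."""
    h = np.maximum(batch_norm(conv2d(x, w1)), 0.0)  # ReLU activation
    return x + conv2d(h, w2)                        # skip connection
```

A stack of such blocks, ending in an upsampling layer, is the usual shape of a super-resolution CNN; the skip connection lets each block learn only a residual correction to its input.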

2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Huanyu Liu ◽  
Jiaqi Liu ◽  
Junbao Li ◽  
Jeng-Shyang Pan ◽  
Xiaqiong Yu

Magnetic resonance imaging (MRI) is widely used in the detection and diagnosis of diseases. High-resolution MR images help doctors locate lesions and diagnose diseases. However, acquiring high-resolution MR images requires high magnetic field intensity and long scanning times, which cause discomfort to patients and easily introduce motion artifacts, resulting in image quality degradation; the resolution achievable in hardware has therefore reached its limit. Given this situation, a unified framework based on deep-learning super resolution is proposed to transfer state-of-the-art deep-learning methods from natural images to MRI super resolution. Compared with traditional image super-resolution methods, deep-learning super-resolution methods have stronger feature extraction and characterization abilities, can learn prior knowledge from a large number of samples, and produce more stable and higher-quality reconstructions. We propose a unified framework of deep learning-based MRI super resolution that incorporates five current deep-learning methods with the best super-resolution performance. In addition, a high-/low-resolution MR image dataset with scales of ×2, ×3, and ×4 was constructed, covering four anatomical regions: skull, knee, breast, and head and neck. Experimental results show that the proposed unified framework achieves better reconstruction on these data than traditional methods and provides a standard dataset and experimental benchmark for applying deep-learning super resolution to MR images.


2020 ◽  
Vol 10 (12) ◽  
pp. 4282
Author(s):  
Ghada Zamzmi ◽  
Sivaramakrishnan Rajaraman ◽  
Sameer Antani

Medical images are acquired at different resolutions based on clinical goals or available technology. In general, however, high-resolution images with fine structural details are preferred for visual task analysis. Recognizing this, several deep learning networks have been proposed to enhance medical images for reliable automated interpretation. These deep networks are often computationally complex and require a massive number of parameters, restricting them to highly capable computing platforms with large memory banks. In this paper, we propose an efficient deep learning approach, called Hydra, which simultaneously reduces computational complexity and improves performance. Hydra consists of a trunk and several computing heads. The trunk is a super-resolution model that learns the mapping from low-resolution to high-resolution images. It has a simple architecture and is trained at multiple scales at once to minimize a proposed learning loss function. We also propose appending multiple task-specific heads to the trained Hydra trunk for simultaneous learning of multiple visual tasks in medical images. Hydra is evaluated on publicly available chest X-ray image collections to perform image enhancement, lung segmentation, and abnormality classification. Our experimental results support our claims and demonstrate that the proposed approach can improve the performance of super-resolution and visual task analysis in medical images at a remarkably reduced computational cost.
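The trunk-and-heads arrangement can be sketched structurally: one shared feature extractor whose output feeds several task-specific heads, so the expensive part runs once per input. Everything here (layer sizes, the linear layers themselves) is a hypothetical stand-in for the paper's actual super-resolution trunk and heads:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class Trunk:
    """Shared feature extractor (stand-in for the super-resolution trunk)."""
    def __init__(self, d_in, d_feat):
        self.w = rng.normal(scale=0.1, size=(d_in, d_feat))
    def __call__(self, x):
        return relu(x @ self.w)

class Head:
    """Task-specific head appended to the trained trunk."""
    def __init__(self, d_feat, d_out):
        self.w = rng.normal(scale=0.1, size=(d_feat, d_out))
    def __call__(self, feats):
        return feats @ self.w

trunk = Trunk(d_in=64, d_feat=32)
seg_head = Head(32, 64)              # e.g. a segmentation map per patch
cls_head = Head(32, 2)               # e.g. normal/abnormal logits

x = rng.normal(size=(5, 64))         # a batch of 5 flattened patches
feats = trunk(x)                     # computed once, shared by every head
seg, logits = seg_head(feats), cls_head(feats)
```

The design choice this illustrates: adding a new task costs only one small head, not another full network.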


Author(s):  
Fuqi Mao ◽  
Xiaohan Guan ◽  
Ruoyu Wang ◽  
Wen Yue

As an important tool for studying the microstructure and properties of materials, high-resolution transmission electron microscopy (HRTEM) produces lattice fringe images (reflecting crystal plane spacing), structure images, and individual-atom images (reflecting the configuration of atoms or atomic groups in the crystal structure). Despite the rapid development of HRTEM devices, HRTEM images still have limited achievable resolution for the human visual system. With the rapid development of deep learning in recent years, researchers have actively explored super-resolution (SR) models based on deep learning, and such models have reached the current best level on various SR benchmarks. Using SR to reconstruct high-resolution HRTEM images is helpful for materials science research. However, one core issue remains unresolved: most super-resolution methods require the training data to exist in pairs, and in actual scenarios, especially for HRTEM images, there are no corresponding HR images. To reconstruct high-quality HRTEM images, a novel super-resolution architecture for HRTEM images is proposed in this paper. Borrowing the idea from Dual Regression Networks (DRN), we introduce an additional dual regression structure into ESRGAN, training the model with unpaired HRTEM images and paired natural images. Results of extensive benchmark experiments demonstrate that the proposed method achieves better performance than the most recent SISR methods in both quantitative and visual results.
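The dual-regression idea can be sketched as a closed loop: a primal model maps LR to SR, a dual model maps SR back down, and the round trip must reproduce the LR input, a constraint that needs no paired HR ground truth. The toy primal/dual mappings below (nearest-neighbour upsampling and block averaging) are stand-ins for the actual ESRGAN generator and learned dual network:

```python
import numpy as np

def primal_up(x):
    """Primal model: LR -> SR (a trivial nearest-neighbour x2 upsampler
    standing in for the ESRGAN generator)."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def dual_down(y):
    """Dual model: SR -> LR (x2 block average), the learned inverse mapping."""
    H, W = y.shape
    return y.reshape(H // 2, 2, W // 2, 2).mean(axis=(1, 3))

def dual_regression_loss(lr):
    """Closed-loop constraint: downscaling the super-resolved image
    should recover the original LR input."""
    return float(np.mean((dual_down(primal_up(lr)) - lr) ** 2))
```

Because this loss is computed from the LR image alone, it remains usable on HRTEM data for which no true HR counterpart exists.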


2021 ◽  
Vol 13 (21) ◽  
pp. 4220
Author(s):  
Yu Tao ◽  
Jan-Peter Muller ◽  
Siting Xiong ◽  
Susan J. Conway

The High-Resolution Imaging Science Experiment (HiRISE) onboard the Mars Reconnaissance Orbiter provides remotely sensed imagery of the surface of Mars at the highest spatial resolution available, 25–50 cm/pixel. However, because the spatial resolution is so high, the total area covered by HiRISE targeted stereo acquisitions is very limited. This results in a lack of availability of high-resolution digital terrain models (DTMs) better than 1 m/pixel. Such high-resolution DTMs have always been considered desirable by the international community of planetary scientists for carrying out fine-scale geological analysis of the Martian surface. Recently, new deep learning-based techniques that retrieve DTMs from single optical orbital imagery have been developed and applied to single HiRISE observational data. In this paper, we improve upon a previously developed single-image DTM estimation system called MADNet (1.0). We propose optimisations, collectively called MADNet 2.0, based on a supervised image-to-height estimation network, multi-scale DTM reconstruction, and 3D co-alignment processes. In particular, we employ optimised single-scale inference and multi-scale reconstruction (in MADNet 2.0), instead of multi-scale inference and single-scale reconstruction (in MADNet 1.0), to produce more accurate large-scale topographic retrieval with boosted fine-scale resolution. We demonstrate the improvements of the MADNet 2.0 DTMs produced using HiRISE images, in comparison to the MADNet 1.0 DTMs and the published Planetary Data System (PDS) DTMs, over the ExoMars Rosalind Franklin rover’s landing site at Oxia Planum. Qualitative and quantitative assessments suggest the proposed MADNet 2.0 system is capable of pixel-scale DTM retrieval at the same spatial resolution (25 cm/pixel) as the input HiRISE images.
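One generic way to combine DTMs across scales, in the spirit of the multi-scale reconstruction described above, is to take the large-scale topography from a coarse DTM and add the high-frequency detail from a fine one. The box-filter blend below is a simplified illustration under that assumption, not the actual MADNet 2.0 reconstruction or co-alignment pipeline:

```python
import numpy as np

def box_blur(z, k=5):
    """Simple k x k mean filter (edge-padded), used here as a low-pass."""
    p = k // 2
    zp = np.pad(z, p, mode='edge')
    out = np.zeros_like(z, dtype=float)
    for i in range(k):
        for j in range(k):
            out += zp[i:i + z.shape[0], j:j + z.shape[1]]
    return out / (k * k)

def merge_dtms(coarse_up, fine, k=5):
    """Keep the large-scale topography of the (upsampled) coarse DTM and
    add only the fine-scale detail (high-pass) of the fine DTM."""
    return box_blur(coarse_up, k) + (fine - box_blur(fine, k))
```

In practice the two inputs would first be co-registered and resampled to a common grid; the blend then prevents large-scale height errors in the fine DTM from propagating into the final product.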


2021 ◽  
Vol 14 (3) ◽  
pp. 1267-1293
Author(s):  
Sara Top ◽  
Lola Kotova ◽  
Lesley De Cruz ◽  
Svetlana Aniskevich ◽  
Leonid Bobylev ◽  
...  

Abstract. To allow for climate impact studies on human and natural systems, high-resolution climate information is needed. Over some parts of the world, many regional climate simulations have been carried out, while in other regions hardly any high-resolution climate information is available. The CORDEX Central Asia domain is one of these regions, and this article describes the evaluation of two regional climate models (RCMs), REMO and ALARO-0, that were run for the first time at a horizontal resolution of 0.22° (25 km) over this region. The output of the ERA-Interim-driven RCMs is compared with different observational datasets over the 1980–2017 period. REMO scores better for temperature, whereas the ALARO-0 model prevails for precipitation. Studying specific subregions provides deeper insight into the strengths and weaknesses of both RCMs over the CAS-CORDEX domain. For example, ALARO-0 has difficulties simulating temperature over the northern part of the domain, particularly when snow cover is present, while REMO poorly simulates the annual cycle of precipitation over the Tibetan Plateau. The evaluation of minimum and maximum temperature demonstrates that both models underestimate the daily temperature range. This study aims to evaluate whether REMO and ALARO-0 provide reliable climate information over the CAS-CORDEX domain for impact modeling and environmental assessment applications. Depending on the evaluated season and variable, it is demonstrated that the produced climate data can be used in several subregions, e.g., temperature and precipitation over western Central Asia in autumn. At the same time, a bias adjustment is required for regions where significant biases have been identified.
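The abstract does not specify which bias-adjustment method is intended; one widely used option is empirical quantile mapping, which transfers the model's distribution onto the observed one. The sketch below uses synthetic temperature series as a stand-in for real RCM output and station observations:

```python
import numpy as np

def quantile_map(model, obs, x):
    """Empirical quantile mapping: build a transfer function from the
    model's sorted values to the observations' sorted values, then
    apply it to new values x."""
    return np.interp(x, np.sort(model), np.sort(obs))

rng = np.random.default_rng(0)
obs = rng.normal(10.0, 3.0, size=1000)     # synthetic observed daily temperature
model = rng.normal(12.5, 2.0, size=1000)   # synthetic, warm-biased RCM output
corrected = quantile_map(model, obs, model)
```

Applied to the calibration period itself, the corrected series reproduces the observed distribution exactly; in impact applications the same transfer function would be applied to an independent (e.g. future-scenario) period.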


Author(s):  
Filippo Giorgi

Dynamical downscaling has been used for about 30 years to produce high-resolution climate information for studies of regional climate processes and for the production of climate information usable for vulnerability, impact assessment and adaptation studies. Three dynamical downscaling tools are available in the literature: high-resolution global atmospheric models (HIRGCMs), variable-resolution global atmospheric models (VARGCMs), and regional climate models (RCMs). These techniques share their basic principles, but have different underlying assumptions, advantages and limitations. They have undergone tremendous growth in recent decades, especially RCMs, to the point that they are considered fundamental tools in climate change research. Major intercomparison programs have been implemented over the years, culminating in the Coordinated Regional climate Downscaling EXperiment (CORDEX), an international program aimed at producing fine-scale regional climate information based on multi-model and multi-technique approaches. These intercomparison projects have led to an increasing understanding of fundamental issues in climate downscaling and of the potential of downscaling techniques to provide actionable climate change information. Yet some open issues remain, most notably that of the added value of downscaling, which is the focus of substantial current research. One of the primary future directions in dynamical downscaling is the development of fully coupled regional earth system models including multiple components, such as the atmosphere, the oceans, the biosphere and the chemosphere. Within this context, dynamical downscaling models offer optimal testbeds to incorporate the human component in a fully interactive way. Another main future research direction is the transition to models running at convection-permitting scales, on the order of 1–3 km, for climate applications.
This is a major modeling step that will require substantial development in research and infrastructure, and will allow the description of local-scale processes and phenomena within the climate change context. Especially in view of these future directions, climate downscaling will increasingly constitute a fundamental interface between the climate modeling and end-user communities in support of climate service activities.

