DL-MRI: A Unified Framework of Deep Learning-Based MRI Super Resolution

2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Huanyu Liu ◽  
Jiaqi Liu ◽  
Junbao Li ◽  
Jeng-Shyang Pan ◽  
Xiaqiong Yu

Magnetic resonance imaging (MRI) is widely used in the detection and diagnosis of diseases. High-resolution MR images help doctors locate lesions and diagnose diseases. However, acquiring high-resolution MR images requires high magnetic field intensity and long scanning times, which cause discomfort to patients and easily introduce motion artifacts, degrading image quality. As a result, hardware-based imaging resolution has reached its practical limit. Motivated by this situation, a unified framework based on deep learning super resolution is proposed to transfer state-of-the-art deep learning methods from natural images to MRI super resolution. Compared with traditional image super-resolution methods, deep learning super-resolution methods have stronger feature extraction and representation ability, can learn prior knowledge from large amounts of sample data, and produce more stable and higher-quality image reconstructions. We propose a unified framework of deep learning-based MRI super resolution that incorporates five current deep learning methods with the best super-resolution performance. In addition, a high-/low-resolution MR image dataset with scales of ×2, ×3, and ×4 was constructed, covering four anatomical regions: skull, knee, breast, and head and neck. Experimental results show that the proposed unified deep learning super-resolution framework reconstructs these data better than traditional methods and provides a standard dataset and experimental benchmark for applying deep learning super resolution to MR images.
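The HR/LR pair construction described above can be sketched as follows. This is a minimal illustration only: the `make_lr` name is hypothetical, and block averaging stands in for whatever degradation model the authors actually used to generate the ×2, ×3, and ×4 low-resolution images.

```python
import numpy as np

def make_lr(hr: np.ndarray, scale: int) -> np.ndarray:
    """Downsample an HR image by block averaging (a stand-in degradation)."""
    h, w = hr.shape
    assert h % scale == 0 and w % scale == 0, "HR size must divide evenly"
    return hr.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))

# Build LR/HR pairs at the three scales used in the dataset
hr = np.random.rand(96, 96)                       # hypothetical HR slice
pairs = {s: (make_lr(hr, s), hr) for s in (2, 3, 4)}
```

A real pipeline would apply this (or bicubic/learned degradation) over every slice of every scan to produce the paired training set.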

2020 ◽  
Author(s):  
Xingying Huang

Abstract. Demand for high-resolution climate information is growing rapidly to fulfill the needs of both scientists and stakeholders. However, deriving high-quality fine-resolution information is still challenging due to either the complexity of a dynamical climate model or the uncertainty of an empirical statistical model. In this work, a new downscaling framework is developed using a deep-learning-based super-resolution method to generate very high-resolution output from coarse-resolution input. The modeling framework has been trained, tested, and validated for generating high-resolution (here, 4 km) daily temperature and precipitation data from 1981 to 2010. This newly designed downscaling framework is composed of multiple convolutional layers with batch normalization, rectified linear unit (ReLU) activations, and skip connections, with different loss functions explored. The overall logic of the framework is to learn optimal parameters from the training data for subsequent prediction applications. This new method and framework is found to largely reduce the time and computational cost (~23 milliseconds for one-day inference) of climate downscaling compared with current downscaling strategies. The strengths and limitations of this deep-learning-based downscaling have been investigated and evaluated using both fine-scale gridded observations and dynamical downscaling data from regional climate models. The performance of this deep-learning framework is competitive both in generating spatial details and in maintaining temporal evolution at a very fine grid scale. This deep-learning-based downscaling method is thus a promising, powerful, and effective way to retrieve fine-scale climate information from other coarse-resolution climate data.
When seeking an efficient and affordable approach to intensive climate downscaling, an optimized convolutional neural network framework like the one explored here could be an alternative option applicable to a broad range of relevant applications.
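The building block named above (convolution, ReLU, skip connection) can be sketched in plain numpy. This is an illustrative stand-in, not the authors' multi-layer framework: the naive single-channel convolution and the `residual_block` name are assumptions, and batch normalization is omitted for brevity.

```python
import numpy as np

def conv2d_same(x: np.ndarray, k: np.ndarray) -> np.ndarray:
    """Naive 'same' 2D cross-correlation with zero padding."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def residual_block(x: np.ndarray, k: np.ndarray) -> np.ndarray:
    """Conv -> ReLU, then add the identity skip connection."""
    return np.maximum(conv2d_same(x, k), 0.0) + x
```

The skip connection lets the network learn only the fine-scale correction on top of the coarse input, which is the standard rationale for residual super-resolution designs.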


2020 ◽  
Vol 10 (12) ◽  
pp. 4282
Author(s):  
Ghada Zamzmi ◽  
Sivaramakrishnan Rajaraman ◽  
Sameer Antani

Medical images are acquired at different resolutions based on clinical goals or available technology. In general, however, high-resolution images with fine structural details are preferred for visual task analysis. Recognizing this significance, several deep learning networks have been proposed to enhance medical images for reliable automated interpretation. These deep networks are often computationally complex and require a massive number of parameters, which restrict them to highly capable computing platforms with large memory banks. In this paper, we propose an efficient deep learning approach, called Hydra, which simultaneously reduces computational complexity and improves performance. The Hydra consists of a trunk and several computing heads. The trunk is a super-resolution model that learns the mapping from low-resolution to high-resolution images. It has a simple architecture that is trained using multiple scales at once to minimize a proposed learning-loss function. We also propose to append multiple task-specific heads to the trained Hydra trunk for simultaneous learning of multiple visual tasks in medical images. The Hydra is evaluated on publicly available chest X-ray image collections to perform image enhancement, lung segmentation, and abnormality classification. Our experimental results support our claims and demonstrate that the proposed approach can improve the performance of super-resolution and visual task analysis in medical images at a remarkably reduced computational cost.
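The trunk-plus-heads idea described above can be sketched abstractly: features are computed once by a shared trunk and consumed by several task-specific heads. Everything here is an illustrative assumption (toy linear layers, made-up dimensions, the `Trunk`/`Head` names); the actual Hydra trunk is a super-resolution network.

```python
import numpy as np

rng = np.random.default_rng(0)

class Trunk:
    """Shared feature extractor (stand-in for the super-resolution trunk)."""
    def __init__(self, d_in: int, d_feat: int):
        self.W = rng.standard_normal((d_in, d_feat)) * 0.1
    def __call__(self, x):
        return np.maximum(x @ self.W, 0.0)

class Head:
    """Task-specific head appended to the trained trunk."""
    def __init__(self, d_feat: int, d_out: int):
        self.W = rng.standard_normal((d_feat, d_out)) * 0.1
    def __call__(self, feats):
        return feats @ self.W

trunk = Trunk(64, 32)
heads = {"segmentation": Head(32, 64), "classification": Head(32, 2)}

x = rng.standard_normal((1, 64))
feats = trunk(x)                                   # trunk runs once
outputs = {name: h(feats) for name, h in heads.items()}
```

Sharing the trunk is what yields the claimed cost reduction: the expensive feature computation is amortized across all tasks instead of repeated per task.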


Author(s):  
Fuqi Mao ◽  
Xiaohan Guan ◽  
Ruoyu Wang ◽  
Wen Yue

As an important tool for studying the microstructure and properties of materials, high-resolution transmission electron microscopy (HRTEM) provides lattice fringe images (reflecting crystal plane spacing), structure images, and individual-atom images (reflecting the configuration of atoms or atomic groups in the crystal structure). Despite the rapid development of HRTEM devices, HRTEM images still have limited achievable resolution for the human visual system. With the rapid development of deep learning in recent years, researchers are actively exploring deep-learning-based super-resolution (SR) models, which have reached the current best level on various SR benchmarks. Using SR to reconstruct high-resolution HRTEM images benefits materials science research. However, one core issue remains unresolved: most super-resolution methods require the training data to exist in pairs, and in actual scenarios, especially for HRTEM images, there are no corresponding HR images. To reconstruct high-quality HRTEM images, a novel super-resolution architecture for HRTEM images is proposed in this paper. Borrowing the idea from Dual Regression Networks (DRN), we introduce an additional dual regression structure into ESRGAN and train the model with unpaired HRTEM images and paired natural images. Results of extensive benchmark experiments demonstrate that the proposed method achieves better performance than the most recent SISR methods in both quantitative and visual results.
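The dual regression idea that makes unpaired training possible can be sketched as a loss function: the primal term compares the SR output with an HR target when one exists, while the dual term maps the SR output back down and compares it with the input LR image, which is always available. The function name, block-average downsampler, and weight value are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def downsample(img: np.ndarray, s: int = 2) -> np.ndarray:
    """Toy HR -> LR mapping (the learned dual regressor in DRN)."""
    h, w = img.shape
    return img.reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def dual_regression_loss(sr, hr, lr, dual_weight=0.1):
    # Primal loss: SR output vs HR target, only when a paired HR exists
    primal = np.mean(np.abs(sr - hr)) if hr is not None else 0.0
    # Dual loss: SR output mapped back to LR must match the input LR;
    # this term is defined even for unpaired HRTEM images
    dual = np.mean(np.abs(downsample(sr) - lr))
    return primal + dual_weight * dual
```

Because the dual term needs no HR ground truth, unpaired HRTEM images can contribute to training alongside paired natural images.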


2021 ◽  
Author(s):  
Timo Kumpula ◽  
Janne Mäyrä ◽  
Anton Kuzmin ◽  
Arto Viinikka ◽  
Sonja Kivinen ◽  
...  

Sustainable forest management increasingly highlights the maintenance of biological diversity and requires up-to-date information on the occurrence and distribution of key ecological features in forest environments. Different proxy variables indicating species richness and site quality are essential for efficiently detecting and monitoring forest biodiversity. European aspen (Populus tremula L.) is a minor deciduous tree species of high importance for maintaining biodiversity in boreal forests. Large aspen trees host hundreds of species, many of them classified as threatened. However, accurate fine-scale spatial data on aspen occurrence remain scarce and incomplete.

We studied the detection of aspen using different remote sensing techniques in Evo, southern Finland. Our study area of 83 km² contains both managed and protected southern boreal forests characterized by Scots pine (Pinus sylvestris L.), Norway spruce (Picea abies (L.) Karst), and birch (Betula pendula and B. pubescens L.), whereas European aspen has a relatively sparse and scattered occurrence in the area. We collected high-resolution airborne hyperspectral and airborne laser scanning data covering the whole study area, and ultra-high-resolution unmanned aerial vehicle (UAV) data with RGB and multispectral sensors from selected parts of the area. We tested the discrimination of aspen from other species at tree level using different machine learning methods (support vector machines, random forest, gradient boosting machine) and deep learning methods (3D convolutional neural networks).

Airborne hyperspectral and lidar data gave excellent results with both machine learning and deep learning classification methods; the highest classification accuracies for aspen were 91–92% (F1-score).
The most important wavelengths for discriminating aspen from other species included reflectance bands in the red-edge range (724–727 nm) and shortwave infrared (1520–1564 nm and 1684–1706 nm) (Viinikka et al. 2020; Mäyrä et al. 2021). Aspen detection using RGB and multispectral data also gave good results (highest F1-score for aspen = 87%) (Kuzmin et al. 2021). The different remote sensing data enabled production of a spatially explicit map of aspen occurrence in the study area. Information on aspen occurrence and abundance can significantly contribute to biodiversity management and conservation efforts in boreal forests. Our results can be further utilized in upscaling efforts aiming at aspen detection over larger geographical areas using satellite images.
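Extracting classifier features from the discriminative wavelength ranges reported above can be sketched as band averaging over a hyperspectral cube. The band grid, cube shape, and `band_mean` helper are hypothetical; only the three wavelength windows come from the text.

```python
import numpy as np

wavelengths = np.linspace(400, 2400, 2001)        # hypothetical 1 nm band grid
cube = np.random.rand(10, 10, wavelengths.size)   # (rows, cols, bands) reflectance

def band_mean(cube, wavelengths, lo, hi):
    """Mean reflectance over all bands whose centre falls in [lo, hi] nm."""
    mask = (wavelengths >= lo) & (wavelengths <= hi)
    return cube[..., mask].mean(axis=-1)

# Per-pixel features from the ranges found most important for aspen
features = np.stack([
    band_mean(cube, wavelengths, 724, 727),       # red edge
    band_mean(cube, wavelengths, 1520, 1564),     # shortwave infrared 1
    band_mean(cube, wavelengths, 1684, 1706),     # shortwave infrared 2
], axis=-1)
```

Features like these would then feed a tree-level classifier such as the random forest or SVM mentioned above.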


Electronics ◽  
2020 ◽  
Vol 9 (1) ◽  
pp. 190 ◽  
Author(s):  
Zhiwei Huang ◽  
Jinzhao Lin ◽  
Liming Xu ◽  
Huiqian Wang ◽  
Tong Bai ◽  
...  

The application of deep convolutional neural networks (CNNs) in the field of medical image processing has attracted extensive attention and demonstrated remarkable progress. An increasing number of deep learning methods have been devoted to classifying chest X-ray (CXR) images, and most of the existing methods are based on classic pretrained models trained on global chest X-ray images. In this paper, we are interested in diagnosing chest X-ray images using our proposed Fusion High-Resolution Network (FHRNet). The FHRNet consists of three branch convolutional neural networks, concatenates the global average pooling layers of the global and local feature extractors, and is fine-tuned for thorax disease classification. Compared with other available methods, our experimental results show that the proposed model yields better disease classification performance on the ChestX-ray14 dataset, as measured by the receiver operating characteristic curve and area-under-the-curve score. An ablation study further confirmed the effectiveness of the global and local branch networks in improving the classification accuracy of thorax diseases.
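The fusion step named above, concatenating global average pooling outputs from the global and local branches, can be sketched in a few lines. The feature-map shapes and the `gap` helper are illustrative assumptions, not FHRNet's actual dimensions.

```python
import numpy as np

def gap(feature_map: np.ndarray) -> np.ndarray:
    """Global average pooling over the spatial dims: (H, W, C) -> (C,)."""
    return feature_map.mean(axis=(0, 1))

rng = np.random.default_rng(0)
global_feat = rng.standard_normal((16, 16, 128))  # global-branch feature map
local_feat = rng.standard_normal((8, 8, 128))     # local-branch feature map

# Concatenate the pooled descriptors before the classification layer
fused = np.concatenate([gap(global_feat), gap(local_feat)])
```

Because pooling collapses the spatial dimensions, the two branches can use different input resolutions yet still be fused into one fixed-length vector.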


2020 ◽  
Author(s):  
Gili Dardikman-Yoffe ◽  
Yonina C. Eldar

Abstract. The use of photo-activated fluorescent molecules to create long sequences of low-emitter-density diffraction-limited images enables high-precision emitter localization. However, this is achieved at the cost of lengthy imaging times, limiting temporal resolution. In recent years, a variety of approaches have been suggested to reduce imaging times, ranging from classical optimization and statistical algorithms to deep learning methods. Classical methods often rely on prior knowledge of the optical system, require heuristic parameter tuning, or fail to achieve sufficient performance. Deep learning methods proposed to date tend to generalize poorly outside the specific distribution they were trained on and require learning many parameters. They also tend to produce black-box solutions that are hard to interpret. In this paper, we suggest combining a recent high-performing classical method, SPARCOM, with model-based deep learning, using the algorithm unfolding approach, which relies on an iterative algorithm to design a compact neural network incorporating domain knowledge. We show that the resulting network, Learned SPARCOM (LSPARCOM), requires far fewer layers and parameters and can be trained on a single field of view. Nonetheless, it yields comparable or superior results to those obtained by SPARCOM with no heuristic parameter determination or explicit knowledge of the point spread function, and it generalizes better than standard deep learning techniques. It even enables producing a high-quality reconstruction from as few as 25 frames. This is due to a significantly smaller network, which also contributes to fast performance: a 5× improvement in execution time relative to SPARCOM, and a full order of magnitude improvement relative to a leading competing deep learning method (Deep-STORM) when implemented serially.
Our results show that we can obtain super-resolution imaging from a small number of high-emitter-density frames without knowledge of the optical system and across different test sets. Thus, we believe LSPARCOM will find broad use in single-molecule localization microscopy of biological structures and pave the way to interpretable, efficient live-cell imaging in a broad range of settings.
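The algorithm unfolding idea behind LSPARCOM can be sketched with the classic ISTA example: each iteration of a sparse-recovery algorithm becomes one network layer, and in a LISTA-style unfolded network the per-layer step size and threshold would be learned rather than fixed. This sketch is a generic illustration of unfolding, not SPARCOM's specific iteration.

```python
import numpy as np

def soft_threshold(x: np.ndarray, t: float) -> np.ndarray:
    """Proximal operator of the l1 norm (the layer's 'activation')."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def unrolled_ista(y, A, n_layers=10, step=None, thresh=0.1):
    """Run n_layers ISTA iterations; each iteration plays the role of one
    network layer. In an unfolded network, step and thresh per layer
    would be trainable parameters."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_layers):
        x = soft_threshold(x + step * A.T @ (y - A @ x), step * thresh)
    return x
```

Because the layer structure mirrors a known algorithm, the unfolded network needs far fewer parameters than a generic deep network, which is the source of LSPARCOM's compactness and interpretability claims.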

