MRI super‐resolution reconstruction for MRI‐guided adaptive radiotherapy using cascaded deep learning: In the presence of limited training data and unknown translation model

2019 · Vol 46 (9) · pp. 4148-4164
Author(s):  
Jaehee Chun ◽  
Hao Zhang ◽  
H. Michael Gach ◽  
Sven Olberg ◽  
Thomas Mazur ◽  
...


Author(s):
Fuqi Mao ◽  
Xiaohan Guan ◽  
Ruoyu Wang ◽  
Wen Yue

As an important tool for studying the microstructure and properties of materials, High-Resolution Transmission Electron Microscopy (HRTEM) can produce lattice-fringe images (reflecting crystal-plane spacing), structure images, and individual-atom images (reflecting the configuration of atoms or atomic groups in the crystal structure). Despite the rapid development of HRTEM devices, the resolution achievable in HRTEM images remains limited for the human visual system. With the rapid development of deep learning in recent years, researchers have been actively exploring deep learning-based super-resolution (SR) models, which now achieve state-of-the-art results on various SR benchmarks. Using SR to reconstruct high-resolution HRTEM images is therefore helpful for materials science research. However, one core issue remains unresolved: most of these super-resolution methods require the training data to exist in pairs, and in practice, especially for HRTEM images, no corresponding HR images exist. To reconstruct high-quality HRTEM images, a novel super-resolution architecture for HRTEM images is proposed in this paper. Borrowing the idea of Dual Regression Networks (DRN), we add a dual regression structure to ESRGAN and train the model with unpaired HRTEM images and paired natural images. Extensive benchmark experiments demonstrate that the proposed method outperforms the most recent SISR methods in both quantitative and visual results.
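
To make the training scheme concrete, here is a minimal PyTorch sketch of the dual regression idea as we read it from the abstract: a primal network maps LR to SR, a dual network maps SR back to LR, so unpaired HRTEM images can be supervised through the reconstruction loop alone. The network definitions are toy stand-ins (the paper builds on an ESRGAN generator), and all names and hyperparameters are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class PrimalNet(nn.Module):          # LR -> SR (2x upsampling); toy stand-in
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="bicubic"),
            nn.Conv2d(32, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.body(x)

class DualNet(nn.Module):            # SR -> LR (2x downsampling); toy stand-in
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
    def forward(self, y):
        return self.body(y)

primal, dual = PrimalNet(), DualNet()
l1 = nn.L1Loss()

def training_losses(lr_paired, hr_paired, lr_unpaired):
    """Paired natural images supervise the primal mapping directly;
    unpaired HRTEM images are supervised only through the dual
    (downsampling) regression, which closes the LR -> SR -> LR loop."""
    primal_loss = l1(primal(lr_paired), hr_paired)           # paired branch
    dual_loss = l1(dual(primal(lr_unpaired)), lr_unpaired)   # unpaired branch
    return primal_loss + dual_loss
```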


Author(s):  
Chinmay Belthangady ◽  
Loic A. Royer

Deep Learning is a recent and important addition to the computational toolbox available for image reconstruction in fluorescence microscopy. We review state-of-the-art applications such as image restoration, super-resolution, and light-field imaging, and discuss how the latest Deep Learning research can be applied to other image reconstruction tasks such as structured illumination, spectral deconvolution, and sample stabilisation. Despite its successes, Deep Learning also poses significant challenges, has capabilities that are often misunderstood, and has limits that are frequently overlooked. We address key questions such as: What are the challenges in obtaining training data? Can we discover structures not present in the training data? And what is the danger of inferring unsubstantiated image details?


2021 · Vol 11 (1)
Author(s):  
Ruoqian Lin ◽  
Rui Zhang ◽  
Chunyang Wang ◽  
Xiao-Qing Yang ◽  
Huolin L. Xin

Abstract Atom segmentation and localization, noise reduction, and deblurring of atomic-resolution scanning transmission electron microscopy (STEM) images with high precision and robustness are challenging tasks. Although several conventional algorithms, such as thresholding, edge detection, and clustering, achieve reasonable performance in some predefined scenarios, they tend to fail when interference from the background is strong and unpredictable. In particular, for atomic-resolution STEM images, there is so far no well-established algorithm robust enough to segment or detect all atomic columns when a recorded image contains large thickness variation. Herein, we report the development of a training library and a deep learning method that can perform robust and precise atom segmentation, localization, denoising, and super-resolution processing of experimental images. Despite using simulated images as training data, the deep-learning model self-adapts to experimental STEM images and shows outstanding performance in atom detection and localization under challenging contrast conditions, with precision that consistently outperforms the state-of-the-art two-dimensional Gaussian fit method. Going a step further, we have deployed our deep-learning models in a free and open-source desktop app with a graphical user interface, and we have built a TEM ImageNet project website for easy browsing and downloading of the training data.
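
The simulate-then-train workflow described above can be sketched as follows. This toy example, with made-up Gaussian "atomic columns" and a small fully convolutional segmenter, only illustrates the idea of training on simulated image/mask pairs; it is not the paper's training library or architecture.

```python
import numpy as np
import torch
import torch.nn as nn

def simulate_stem_pair(size=128, n_atoms=40, rng=np.random.default_rng(0)):
    """Toy stand-in for a simulated training pair: Gaussian 'atomic columns'
    on a noisy background (image) plus a binary atom-center mask (label)."""
    img = np.zeros((size, size), np.float32)
    mask = np.zeros((size, size), np.float32)
    yy, xx = np.mgrid[0:size, 0:size]
    for _ in range(n_atoms):
        cy, cx = rng.uniform(5, size - 5, 2)
        img += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 2.0 ** 2))
        mask[int(round(cy)), int(round(cx))] = 1.0
    img += rng.normal(0, 0.05, img.shape).astype(np.float32)  # detector noise
    return img, mask

# Minimal fully convolutional segmenter (the paper uses a deeper network).
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Train purely on simulated pairs, then apply to experimental images.
for step in range(100):
    img, mask = simulate_stem_pair(rng=np.random.default_rng(step))
    x = torch.from_numpy(img)[None, None]   # (1, 1, H, W)
    y = torch.from_numpy(mask)[None, None]
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
```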


2021
Author(s):  
Andres Munoz-Jaramillo ◽  
Anna Jungbluth ◽  
Xavier Gitiaux ◽  
Paul Wright ◽  
Carl Shneider ◽  
...  

Abstract Super-resolution techniques aim to increase the resolution of images by adding detail. Compared to upsampling techniques reliant on interpolation, deep learning-based approaches learn features and their relationships across the training data set to leverage prior knowledge of what low-resolution patterns look like in higher-resolution images. As an added benefit, deep neural networks can learn the systematic properties of the target images (i.e., texture), combining super-resolution with instrument cross-calibration. While the successful use of super-resolution algorithms for natural images is rooted in creating perceptually convincing results, super-resolution applied to scientific data requires careful quantitative evaluation of performance. In this work, we demonstrate that deep learning can increase the resolution of, and calibrate, space- and ground-based imagers belonging to different instrumental generations. In addition, we establish a set of measurements to benchmark the performance of scientific applications of deep learning-based super-resolution and calibration. We super-resolve and calibrate solar magnetic field images taken by the Michelson Doppler Imager (MDI; resolution ~2"/pixel; science-grade, space-based) and the Global Oscillation Network Group (GONG; resolution ~2.5"/pixel; space weather operations, ground-based) to the pixel resolution of images taken by the Helioseismic and Magnetic Imager (HMI; resolution ~0.5"/pixel; latest generation, science-grade, space-based).
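
Evaluating scientific super-resolution quantitatively, as the abstract stresses, means going beyond perceptual quality. The sketch below shows generic metrics in that spirit: pixel-wise error, correlation, and conservation of the net signed signal, which for magnetograms corresponds to total magnetic flux. It is an illustrative assumption, not the paper's actual benchmark suite.

```python
import numpy as np

def benchmark_sr(pred, target):
    """Quantitative checks for scientific super-resolution output
    against a reference image from the target instrument."""
    pred, target = np.asarray(pred, float), np.asarray(target, float)
    mse = np.mean((pred - target) ** 2)
    peak = np.abs(target).max()
    psnr = 10 * np.log10(peak ** 2 / mse) if mse > 0 else np.inf
    corr = np.corrcoef(pred.ravel(), target.ravel())[0, 1]
    # For signed data such as magnetograms, the net sum is physical
    # (total signed flux) and should be preserved by the model.
    flux_err = abs(pred.sum() - target.sum()) / (abs(target.sum()) + 1e-12)
    return {"mse": mse, "psnr_db": psnr, "pearson_r": corr,
            "relative_net_flux_error": flux_err}
```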


Sensors · 2020 · Vol 20 (20) · pp. 5789
Author(s):  
Tarek Stiebel ◽  
Dorit Merhof

Spectral reconstruction from RGB, or spectral super-resolution (SSR), offers a cheap alternative to otherwise costly and more complex spectral imaging devices. In recent years, deep learning-based methods have consistently achieved the best reconstruction quality in terms of spectral error metrics. However, there are important properties that deep neural networks do not maintain. This work is primarily dedicated to scale invariance, also known as brightness or exposure invariance: when RGB signals differ only in their absolute scale, they should lead to identical spectral reconstructions apart from the scaling factor. Scale invariance is an essential property that signal processing must guarantee for a wide range of practical applications. At present, it can only be achieved by relying on a training database diverse enough to cover all signal intensities that may occur. In contrast, we propose and evaluate a fundamental approach to deep learning-based SSR that is scale invariant by design and independent of the training data. The approach is independent of concrete network architectures and instead reevaluates what neural networks should actually predict. The key insight is that signal magnitudes are irrelevant for recovering spectral reconstructions from camera signals and are only useful for potential signal denoising.
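
One way to obtain scale invariance by design, consistent with the insight described above, is to feed the network only the direction of the RGB signal and reattach the input's magnitude to the predicted spectrum afterwards. The PyTorch sketch below illustrates this construction; the architecture and band count are assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

class ScaleInvariantSSR(nn.Module):
    """The network only ever sees the *direction* of the RGB signal;
    the magnitude is factored out before prediction and multiplied
    back in afterwards, so f(a*x) = a*f(x) holds by construction."""
    def __init__(self, n_bands=31):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_bands), nn.Softplus(),  # non-negative spectra
        )

    def forward(self, rgb):                        # rgb: (N, 3)
        scale = rgb.norm(dim=1, keepdim=True).clamp_min(1e-8)
        shape = self.net(rgb / scale)              # magnitude-free prediction
        return scale * shape                       # reattach input magnitude

model = ScaleInvariantSSR()
x = torch.rand(4, 3)
# Scaling the input scales the reconstruction, regardless of training data.
assert torch.allclose(model(2.0 * x), 2.0 * model(x), atol=1e-5)
```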


2019
Author(s):  
Linjing Fang ◽  
Fred Monroe ◽  
Sammy Weiser Novak ◽  
Lyndsey Kirk ◽  
Cara Rae Schiavon ◽  
...  

Point scanning imaging systems (e.g. scanning electron or laser scanning confocal microscopes) are perhaps the most widely used tools for high-resolution cellular and tissue imaging. As with all other imaging modalities, the resolution, speed, sample preservation, and signal-to-noise ratio (SNR) of point scanning systems are difficult to optimize simultaneously. In particular, point scanning systems are uniquely constrained by an inverse relationship between imaging speed and pixel resolution. Here we show that these limitations can be mitigated via deep learning-based super-sampling of undersampled images acquired on a point-scanning system, an approach we term point-scanning super-resolution (PSSR) imaging. Oversampled, high-SNR ground truth images acquired on scanning electron or Airyscan laser scanning confocal microscopes were "crappified" to generate semi-synthetic training data for PSSR models, which were then used to restore real-world undersampled images. Remarkably, our EM PSSR model could restore undersampled images acquired with different optics, detectors, samples, or sample preparation methods in other labs. PSSR enabled previously unattainable 2 nm resolution images with our serial block-face scanning electron microscope system. For fluorescence, we show that undersampled confocal images combined with a multiframe PSSR model trained on Airyscan timelapses yield Airyscan-equivalent spatial resolution and SNR with ~100x lower laser dose and 16x higher frame rates than the corresponding high-resolution acquisitions. In conclusion, PSSR facilitates point-scanning image acquisition with otherwise unattainable resolution, speed, and sensitivity.
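
The "crappification" step, generating semi-synthetic training pairs by degrading oversampled, high-SNR ground truth, might look roughly like the sketch below. The pooling factor and noise model here are illustrative assumptions; the paper tunes its degradation to mimic real low-dose acquisitions.

```python
import numpy as np

def crappify(hr, factor=4, rng=np.random.default_rng(0)):
    """Degrade an oversampled, high-SNR ground-truth image into a
    semi-synthetic undersampled, noisy input; the original `hr`
    image then serves as the restoration target during training."""
    hr = np.asarray(hr, np.float32)
    # Undersample: average-pool by `factor` to mimic fewer scan points.
    h = (hr.shape[0] // factor) * factor
    w = (hr.shape[1] // factor) * factor
    lr = hr[:h, :w].reshape(h // factor, factor,
                            w // factor, factor).mean(axis=(1, 3))
    # Inject shot-like and read noise to mimic low-dose imaging.
    lr = rng.poisson(np.clip(lr, 0, None) * 50.0) / 50.0
    lr += rng.normal(0.0, 0.02, lr.shape)
    return lr.astype(np.float32)
```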


2019 · Vol 9 (22) · pp. 4749
Author(s):  
Lingyun Jiang ◽  
Kai Qiao ◽  
Linyuan Wang ◽  
Chi Zhang ◽  
Jian Chen ◽  
...  

Decoding human brain activity, especially reconstructing visual stimuli from functional magnetic resonance imaging (fMRI), has gained increasing attention in recent years. However, the high dimensionality and small quantity of fMRI data restrict reconstruction quality, especially for deep learning methods that require large numbers of labelled samples. Unlike such methods, humans can recognize a new image because the human visual system naturally extracts features from any object and compares them. Inspired by this visual mechanism, we introduce comparison into the deep learning method to achieve better visual reconstruction, making full use of each sample and of the relationships within sample pairs by learning to compare. On this basis, we propose a Siamese reconstruction network (SRN), which yields improved results on two fMRI recording datasets: 72.5% accuracy on the digit dataset and 44.6% accuracy on the character dataset. Essentially, this approach increases the training data from n samples to roughly 2n sample pairs, taking full advantage of the limited number of training samples. The SRN learns to bring sample pairs of the same class together, and push sample pairs of different classes apart, in feature space.
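
The pair construction underlying a Siamese setup can be sketched as below. Note that exhaustively enumerating unordered pairs yields n(n-1)/2 pairs from n samples, from which a subset (e.g. the roughly 2n pairs mentioned above) can be drawn; the helper here is hypothetical, not the authors' code.

```python
from itertools import combinations

def make_pairs(samples, labels):
    """Turn individual labelled samples into (pair, same-class) training
    examples for a Siamese network, multiplying the effective training
    set drawn from a small number of recordings."""
    pairs = []
    for i, j in combinations(range(len(samples)), 2):
        same = int(labels[i] == labels[j])   # 1 = same class, 0 = different
        pairs.append(((samples[i], samples[j]), same))
    return pairs

# e.g. 4 samples yield 6 unordered pairs; n samples yield n*(n-1)/2.
demo = make_pairs(["s0", "s1", "s2", "s3"], [0, 0, 1, 1])
assert len(demo) == 6
```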

