NIMG-16. DEEP LEARNING SUPER-RESOLUTION MR SPECTROSCOPIC IMAGING TO MAP TUMOR METABOLISM IN MUTANT IDH GLIOMA PATIENTS

2021 ◽  
Vol 23 (Supplement_6) ◽  
pp. vi131-vi131
Author(s):  
Xianqi Li ◽  
Ovidiu Andronesi

Abstract Metabolic imaging can spatially map abnormal molecular pathways with higher specificity for cancer than anatomical imaging. However, acquiring metabolic maps at a resolution similar to anatomical MRI is challenging in patients because of low metabolite concentrations, and approaches that increase resolution by post-acquisition image processing can mitigate this limitation. We developed deep learning super-resolution MR spectroscopic imaging (MRSI) to map tumor metabolism in patients with mutant IDH glioma. We used a generative adversarial network (GAN) architecture comprising a UNet as the generator and a discriminator network for adversarial training. For initial training we simulated a large data set of 9600 images with realistic quality for acquired MRSI to effectively train the deep learning model to upsample by a factor of four. Two types of training were performed: 1) using only the MRSI data, and 2) using MRSI together with prior information from anatomical MRI to further enhance structural details. The performance of the super-resolution methods was evaluated by peak SNR (PSNR), structural similarity index (SSIM), and feature similarity index (FSIM). After training on simulations, the GAN was evaluated on measured MRSI metabolic maps acquired at 5.2×5.2 mm² resolution and upsampled to 1.3×1.3 mm². The GAN trained only on MRSI achieved PSNR = 27.94, SSIM = 0.88, FSIM = 0.89; adding prior anatomical MRI improved performance to PSNR = 30.75, SSIM = 0.90, FSIM = 0.92. In the measured patient data, GAN super-resolution metabolic images provided clearer tumor margins and revealed tumor metabolic heterogeneity. Compared to conventional interpolation methods such as bicubic or total variation, the deep learning methods produced sharper edges and less blurring of structural details.
Our results indicate that the proposed deep learning method is effective in enhancing the spatial resolution of metabolite maps, which may better guide treatment in mutant IDH glioma patients.
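The PSNR figures quoted above follow the standard definition of the metric. As an illustrative sketch (not the authors' code), a minimal NumPy implementation on a toy reference/reconstruction pair looks like this:

```python
import numpy as np

def psnr(ref, img, max_val=1.0):
    """Peak signal-to-noise ratio (dB) between a reference and a test image."""
    mse = np.mean((ref - img) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: a "high-resolution" map and a slightly noisy reconstruction.
rng = np.random.default_rng(0)
ref = rng.random((32, 32))
rec = np.clip(ref + rng.normal(0, 0.02, ref.shape), 0.0, 1.0)
print(round(psnr(ref, rec), 2))
```

SSIM and FSIM, by contrast, compare local windowed statistics and feature maps rather than a single global error, so they require considerably more machinery than this.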

2020 ◽  
Vol 3 (Supplement_1) ◽  
pp. i5-i6
Author(s):  
Xianqi Li ◽  
Ovidiu Andronesi ◽  
Bernhard Strasser ◽  
Kourosh Jafari-Khouzani ◽  
Daniel Cahill ◽  
...  



2015 ◽  
Vol 17 (suppl 3) ◽  
pp. iii5-iii5
Author(s):  
I. Park ◽  
R. Hashizume ◽  
X. Yang ◽  
P. Larson ◽  
C. D. James ◽  
...  

2020 ◽  
pp. 147592172094295
Author(s):  
Homin Song ◽  
Yongchao Yang

Subwavelength defect imaging using guided waves is difficult, mainly because of the diffraction limit and the dispersion of guided waves. In this article, we present a noncontact super-resolution guided wave array imaging approach based on deep learning to visualize subwavelength defects in plate-like structures. The proposed approach is a hierarchical multiscale imaging approach that combines two distinct fully convolutional networks. The first, the global detection network, globally detects subwavelength defects in a raw low-resolution guided wave beamforming image. The second, the local super-resolution network, then locally resolves subwavelength-scale fine structural details of the detected defects. We conduct a series of numerical simulations and laboratory-scale experiments using a noncontact guided wave array, enabled by a scanning laser Doppler vibrometer, on aluminum plates with various subwavelength defects. The results demonstrate that the proposed approach not only locates subwavelength defects but also visualizes their fine structural details at super-resolution, enabling further estimation of the size and shape of the detected defects. We discuss several key aspects of the performance of our approach, compare it with an existing super-resolution algorithm, and make recommendations for its successful implementation.
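The two-stage idea (detect globally at low resolution, then refine locally) can be sketched with simple stand-ins for the two fully convolutional networks; the thresholding and nearest-neighbour upsampling below are hypothetical placeholders, not the trained models:

```python
import numpy as np

def global_detect(beamform_img, thresh=0.5):
    """Stand-in for the global detection network: return bounding boxes of
    candidate defect regions in a low-resolution beamforming image."""
    mask = beamform_img > thresh
    ys, xs = np.where(mask)
    if ys.size == 0:
        return []
    return [(ys.min(), ys.max() + 1, xs.min(), xs.max() + 1)]

def local_super_resolve(patch, factor=4):
    """Stand-in for the local super-resolution network: nearest-neighbour
    upsampling of the detected patch (a real model would add detail)."""
    return np.kron(patch, np.ones((factor, factor)))

img = np.zeros((16, 16))
img[6:9, 10:12] = 1.0            # one subwavelength "defect"
boxes = global_detect(img)
y0, y1, x0, x1 = boxes[0]
sr_patch = local_super_resolve(img[y0:y1, x0:x1])
print(boxes[0], sr_patch.shape)  # (6, 9, 10, 12) (12, 8)
```

The design point this illustrates is that the expensive high-resolution step only runs on small detected regions, not on the full beamforming image.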


2021 ◽  
Vol 11 (3) ◽  
pp. 1089
Author(s):  
Suhong Yoo ◽  
Jisang Lee ◽  
Junsu Bae ◽  
Hyoseon Jang ◽  
Hong-Gyoo Sohn

Aerial images are an outstanding option for observing terrain thanks to their high-resolution (HR) capability, but their high operational cost makes it difficult to acquire periodic observations of a region of interest. Satellite imagery is an alternative, but its low resolution is an obstacle. In this study, we proposed a context-based approach that uses an aerial orthoimage acquired over the same period to super-resolve 10 m resolution Sentinel-2 imagery into 2.5 and 5.0 m prediction images. The proposed model was compared with the enhanced deep super-resolution network (EDSR), which performs excellently among existing super-resolution (SR) deep learning algorithms, using the peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and root-mean-squared error (RMSE). Our context-based ResU-Net outperformed the EDSR on all three metrics. Including the 60 m resolution Sentinel-2 imagery through fine-tuning improved performance further: when the 60 m images were included, RMSE decreased while PSNR and SSIM increased. The results also validated that denser networks yield higher quality, and accuracy was highest when both denser feature dimensions and the 60 m images were used.
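Of the three metrics, RMSE is the simplest; a minimal sketch (assuming both images are normalized to the same value range) is:

```python
import numpy as np

def rmse(ref, pred):
    """Root-mean-squared error between a reference and a predicted image."""
    return float(np.sqrt(np.mean((ref - pred) ** 2)))

ref = np.array([[0.0, 1.0], [1.0, 0.0]])
pred = np.array([[0.0, 0.5], [1.0, 0.5]])
print(rmse(ref, pred))  # sqrt(0.125) ≈ 0.3536
```

Unlike PSNR, RMSE is expressed in the units of the pixel values themselves, which is why the paper reports it alongside the two perceptual metrics.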


2021 ◽  
Vol 38 (5) ◽  
pp. 1361-1368
Author(s):  
Fatih M. Senalp ◽  
Murat Ceylan

Thermal camera systems can be used in any application that requires detecting heat change, but they are highly costly. In recent years, developments in deep learning have improved on traditional methods by producing higher-quality results. In this paper, thermal images of neonates (healthy and unhealthy) obtained from a high-resolution thermal camera were used as high-resolution (ground truth) images. These thermal images were then downscaled at 1/2, 1/4, and 1/8 ratios to obtain three datasets of low-resolution images at different sizes. Super-resolution experiments were then carried out on a deep network model based on generative adversarial networks (GAN) using the three datasets. Performance was evaluated with PSNR (peak signal-to-noise ratio) and SSIM (structural similarity index measure). In addition, a healthy/unhealthy classification was carried out with a classifier network based on convolutional neural networks (CNN) to evaluate the super-resolution images obtained from the different datasets. The results show the value of combining medical thermal imaging with super-resolution methods.
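The three-dataset construction described above (1/2, 1/4, 1/8 downscaling of the ground-truth frames) can be sketched as follows; box-filter averaging is an assumed downscaling choice, since the abstract does not specify the filter:

```python
import numpy as np

def downscale(img, ratio):
    """Box-filter downscale of a ground-truth thermal frame by 1/ratio."""
    h, w = img.shape
    h, w = h - h % ratio, w - w % ratio  # trim so blocks divide evenly
    img = img[:h, :w]
    return img.reshape(h // ratio, ratio, w // ratio, ratio).mean(axis=(1, 3))

gt = np.random.default_rng(1).random((64, 64))   # stand-in ground-truth frame
datasets = {r: downscale(gt, r) for r in (2, 4, 8)}
print({r: d.shape for r, d in datasets.items()})  # {2: (32, 32), 4: (16, 16), 8: (8, 8)}
```

Each of the three datasets then gives a different, progressively harder training task for the same GAN architecture.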


2021 ◽  
Author(s):  
Andres Munoz-Jaramillo ◽  
Anna Jungbluth ◽  
Xavier Gitiaux ◽  
Paul Wright ◽  
Carl Shneider ◽  
...  

Abstract Super-resolution techniques aim to increase the resolution of images by adding detail. Compared to upsampling techniques that rely on interpolation, deep learning-based approaches learn features and their relationships across the training data set, leveraging prior knowledge of what low-resolution patterns look like in higher-resolution images. As an added benefit, deep neural networks can learn the systematic properties of the target images (i.e., texture), combining super-resolution with instrument cross-calibration. While the successful use of super-resolution algorithms for natural images is rooted in creating perceptually convincing results, super-resolution applied to scientific data requires careful quantitative evaluation of performance. In this work, we demonstrate that deep learning can increase the resolution of, and calibrate, space- and ground-based imagers belonging to different instrumental generations. In addition, we establish a set of measurements to benchmark the performance of scientific applications of deep learning-based super-resolution and calibration. We super-resolve and calibrate solar magnetic field images taken by the Michelson Doppler Imager (MDI; resolution ~2"/pixel; science-grade, space-based) and the Global Oscillation Network Group (GONG; resolution ~2.5"/pixel; space weather operations, ground-based) to the pixel resolution of images taken by the Helioseismic and Magnetic Imager (HMI; resolution ~0.5"/pixel; last generation, science-grade, space-based).
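The quoted plate scales directly imply the upsampling factors the networks must learn to match HMI's pixel grid; a quick check:

```python
# Pixel scales quoted in the abstract, in arcsec/pixel.
scales = {"MDI": 2.0, "GONG": 2.5, "HMI": 0.5}

# Upsampling factor needed to bring each instrument to HMI's pixel grid.
factors = {k: scales[k] / scales["HMI"] for k in ("MDI", "GONG")}
print(factors)  # {'MDI': 4.0, 'GONG': 5.0}
```

So the MDI-to-HMI task is a 4x problem and GONG-to-HMI a 5x problem, in addition to the cross-calibration the networks perform at the same time.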


2019 ◽  
Vol 9 ◽  
Author(s):  
Zohaib Iqbal ◽  
Dan Nguyen ◽  
Gilbert Hangel ◽  
Stanislav Motyka ◽  
Wolfgang Bogner ◽  
...  

Radiology ◽  
1990 ◽  
Vol 176 (3) ◽  
pp. 791-799 ◽  
Author(s):  
P R Luyten ◽  
A J Marien ◽  
W Heindel ◽  
P H van Gerwen ◽  
K Herholz ◽  
...  

Electronics ◽  
2019 ◽  
Vol 8 (5) ◽  
pp. 553 ◽  
Author(s):  
Faisal Sahito ◽  
Pan Zhiwen ◽  
Junaid Ahmed ◽  
Raheel Ahmed Memon

We propose a scale-invariant deep neural network model based on wavelets for single image super-resolution (SISR). The wavelet approximation images and their corresponding wavelet sub-bands across all predefined scale factors are combined to form a large training data set. Mappings are then learned between the wavelet sub-band images and their corresponding approximation images, and gradient clipping is used to speed up training. Furthermore, the stationary wavelet transform (SWT) is used instead of the discrete wavelet transform (DWT) due to its undecimated, up-scaling property, which preserves more information about the images. In the proposed model, the high-resolution image is recovered with detailed features thanks to the across-scale redundancy of wavelets. Experimental results show that the proposed model outperforms state-of-the-art algorithms in terms of peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM).
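The key property exploited here is that the SWT, unlike the decimated DWT, keeps every sub-band at full size, so no samples are discarded. A minimal 1D Haar illustration of that property (a sketch, not the paper's 2D model):

```python
import numpy as np

def haar_swt1d(x):
    """One level of an undecimated (stationary) Haar transform: same-length
    approximation and detail bands, with no downsampling step."""
    shifted = np.roll(x, -1)          # circular shift stands in for boundary handling
    approx = (x + shifted) / 2.0      # local average  -> approximation band
    detail = (x - shifted) / 2.0      # local difference -> detail band
    return approx, detail

x = np.array([4.0, 6.0, 10.0, 10.0])
a, d = haar_swt1d(x)
print(a, d)  # [ 5.  8. 10.  7.] [-1. -2.  0.  3.]
```

Both bands have the same length as the input and the signal is exactly recoverable as `a + d`, which is the redundancy the model relies on when reconstructing high-resolution detail.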

