Image Super-Resolution for MRI Images using 3D Faster Super-Resolution Convolutional Neural Network architecture

2020 ◽  
Vol 32 ◽  
pp. 03044
Author(s):  
Vanita Mane ◽  
Suchit Jadhav ◽  
Praneya Lal

Single image super-resolution using deep learning techniques has shown very high reconstruction performance over the last few years. We propose a novel three-dimensional convolutional neural network, 3D FSRCNN, based on FSRCNN, which restores the high-resolution quality of structural MRI. The 3D network generates a high-resolution (HR) brain image from a low-resolution (LR) input image. Its simple design keeps time complexity low while maintaining high reconstruction quality. The network is trained on T1-weighted structural MRI images from the Human Connectome Project dataset, a large publicly available brain MRI database.
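
The FSRCNN pipeline this abstract builds on runs feature extraction, shrinking, non-linear mapping, expanding, and a final transposed convolution for upscaling. Below is a minimal PyTorch sketch of a 3D variant of that pipeline; the layer widths d, s and mapping depth m follow the original FSRCNN defaults rather than the authors' (unstated) settings, so the exact sizes are assumptions.

```python
# Minimal sketch of a 3D FSRCNN-style network (illustrative sizes, not the
# authors' exact configuration).
import torch
import torch.nn as nn

class FSRCNN3D(nn.Module):
    def __init__(self, scale=2, d=56, s=12, m=4):
        super().__init__()
        self.feature = nn.Sequential(nn.Conv3d(1, d, 5, padding=2), nn.PReLU(d))
        self.shrink = nn.Sequential(nn.Conv3d(d, s, 1), nn.PReLU(s))
        mapping = []
        for _ in range(m):
            mapping += [nn.Conv3d(s, s, 3, padding=1), nn.PReLU(s)]
        self.mapping = nn.Sequential(*mapping)
        self.expand = nn.Sequential(nn.Conv3d(s, d, 1), nn.PReLU(d))
        # Transposed 3D convolution upsamples the volume by `scale` in each axis.
        self.deconv = nn.ConvTranspose3d(d, 1, kernel_size=9, stride=scale,
                                         padding=4, output_padding=scale - 1)

    def forward(self, x):            # x: (N, 1, D, H, W) low-resolution volume
        x = self.feature(x)
        x = self.shrink(x)
        x = self.mapping(x)
        x = self.expand(x)
        return self.deconv(x)        # (N, 1, D*scale, H*scale, W*scale)

lr = torch.randn(1, 1, 16, 32, 32)   # toy LR MRI patch
hr = FSRCNN3D(scale=2)(lr)           # -> (1, 1, 32, 64, 64)
```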

2021 ◽  
Author(s):  
Debjoy Chowdhury

Recovering a High-Resolution (HR) image from a Low-Resolution (LR) image is the core idea of image Super-Resolution (SR). Convolutional Neural Networks (CNNs) are becoming widely adopted in many applications, including the generation of HR images from LR images. Although CNNs deliver large performance gains, there is still much room for improvement, and there has always been a trade-off between the number of parameters and performance. This thesis presents a novel convolutional neural network architecture for high-scale image SR inspired by the DenseNet and ResNet architectures. In particular, modifications are made to the convolutional layers in the network: stacking the features and reusing the weight layers to increase the receptive field. It is shown how this method can be used to expand the receptive field and improve the performance of super-resolution networks without increasing the number of trainable parameters or sacrificing computation time. These modifications can easily be integrated into any convolutional neural network to improve accuracy through efficient high-level feature extraction, while reducing training time and parameter count. The proposed methods are especially effective for challenging high-scale SR, owing to the edge and texture recovery enabled by the expanded receptive field. Experimental results show that the proposed model outperforms state-of-the-art methods.
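
As a rough illustration of the "stack features, reuse weights" idea, the block below applies one shared convolution recursively, with ResNet-style local skips and DenseNet-style concatenation of the intermediate outputs. This is my reading of the abstract, not the thesis' actual block design.

```python
# Hedged sketch: a shared convolution applied recursively grows the receptive
# field without growing the trainable parameter count.
import torch
import torch.nn as nn

class RecursiveDenseBlock(nn.Module):
    def __init__(self, channels=64, recursions=4):
        super().__init__()
        self.recursions = recursions
        # One set of weights, applied `recursions` times.
        self.shared = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Fuse the stacked (DenseNet-style) intermediate features back down.
        self.fuse = nn.Conv2d(channels * (recursions + 1), channels, 1)

    def forward(self, x):
        feats = [x]
        h = x
        for _ in range(self.recursions):
            h = self.shared(h) + x        # ResNet-style local skip
            feats.append(h)               # DenseNet-style feature stacking
        return self.fuse(torch.cat(feats, dim=1))

block = RecursiveDenseBlock()
print(block(torch.randn(1, 64, 32, 32)).shape)   # torch.Size([1, 64, 32, 32])
```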


Author(s):  
Vikas Kumar ◽  
Tanupriya Choudhury ◽  
Suresh Chandra Satapathy ◽  
Ravi Tomar ◽  
Archit Aggarwal

Recently, huge progress has been achieved in the field of single image super-resolution, which augments the resolution of images. The idea behind super-resolution is to convert low-resolution images into high-resolution images. SRCNN (Super-Resolution Convolutional Neural Network) was a huge improvement over the existing methods of single-image super-resolution. However, video super-resolution, despite being an active field of research, is yet to benefit fully from deep learning. Using still images and videos downloaded from various sources, we explore the possibility of using SRCNN along with image fusion techniques (minima, maxima, average, PCA, DWT) to improve over existing video super-resolution methods. Video super-resolution has inherent difficulties such as unexpected motion, blur and noise. We propose the Video Super-Resolution – Image Fusion (VSR-IF) architecture, which utilizes information from multiple frames to produce a single high-resolution frame for a video. We use SRCNN as a reference model to obtain high-resolution adjacent frames and use a concatenation layer to group those frames into a single frame. Since our method is data-driven and requires only minimal initial training, it is faster than other video super-resolution methods. After testing our program, we find that our technique shows a significant improvement over SRCNN and other single image and frame super-resolution techniques.
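
A minimal sketch of the frame-fusion step follows, assuming an SRCNN-style per-frame model and showing only the simple average operator from the listed fusion techniques (minima, maxima, PCA and DWT fusion would replace fuse_average); it is not the exact VSR-IF pipeline.

```python
# Sketch: super-resolve adjacent frames independently, then fuse them into a
# single HR frame. Shapes and the fusion operator are illustrative.
import torch
import torch.nn as nn

class SRCNN(nn.Module):
    """Classic 9-1-5 SRCNN operating on a bicubically pre-upscaled frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, 1),           nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 5, padding=2),
        )

    def forward(self, x):
        return self.net(x)

def fuse_average(frames):
    """Fuse per-frame SR outputs into one HR frame by pixel-wise averaging."""
    return torch.stack(frames, dim=0).mean(dim=0)

srcnn = SRCNN()
# Three adjacent frames of a video, already upscaled to the target size.
frames = [torch.randn(1, 1, 64, 64) for _ in range(3)]
hr_frame = fuse_average([srcnn(f) for f in frames])   # (1, 1, 64, 64)
```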


2021 ◽  
Author(s):  
George Seif

This thesis presents a novel convolutional neural network architecture for high-scale image super-resolution. In particular, we introduce two separate modifications that can be made to the convolutional layers in the network: one-dimensional kernels and dilated kernels. We show how both of these methods can be used to expand the receptive field and performance of super-resolution networks, without increasing the number of trainable parameters or network depth. We show that these modifications can easily be integrated into any convolutional neural network to improve performance. Our methods are especially effective for challenging high-scale super-resolution due to the expanded network receptive field. We conduct extensive empirical evaluations to demonstrate the effectiveness of our methods, showing strong improvements over the state-of-the-art.
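
The two kernel modifications can be illustrated directly in PyTorch: a 3x3 convolution factorized into 1x3 and 3x1 layers, and a 3x3 convolution with dilation 2 whose taps cover a 5x5 footprint with an unchanged parameter count. The channel and image sizes below are illustrative, not taken from the thesis.

```python
# Hedged sketch of one-dimensional and dilated kernels.
import torch
import torch.nn as nn

channels = 64

# (1) One-dimensional kernels: a 3x3 convolution factorized into 1x3 and 3x1
#     layers, covering a 3x3 footprint with 6*C^2 weights instead of 9*C^2.
factorized = nn.Sequential(
    nn.Conv2d(channels, channels, (1, 3), padding=(0, 1)),
    nn.Conv2d(channels, channels, (3, 1), padding=(1, 0)),
)

# (2) Dilated kernels: still 3x3 weights (same parameter count as a standard
#     3x3 layer), but dilation=2 spreads the taps over a 5x5 footprint,
#     enlarging the receptive field.
dilated = nn.Conv2d(channels, channels, 3, padding=2, dilation=2)

x = torch.randn(1, channels, 48, 48)
print(factorized(x).shape, dilated(x).shape)   # both keep the 48x48 spatial size
```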


2021 ◽  
Author(s):  
Guosheng Zhao ◽  
Kun Wang

With the development of deep convolutional neural networks, recent research on single image super-resolution (SISR) has made great progress. In particular, networks that fully utilize features achieve better performance. In this paper, we propose an image super-resolution dual features extraction network (SRDFN). Our method uses dual features extraction blocks (DFBs) to extract and combine low-resolution features, which contain less noise but less detail, and high-resolution features, which contain more detail but more noise. The output of a DFB combines the advantages of both, with more detail and less noise. Moreover, because the number of DFBs and channels can be set by weighing accuracy against model size, SRDFN can be designed according to the actual situation. The experimental results demonstrate that the proposed SRDFN performs well in comparison with state-of-the-art methods.
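
The sketch below is one possible reading of a dual-feature block: a full-resolution branch for detail and a downsample-then-upsample branch for cleaner coarse structure, fused by a 1x1 convolution. The actual DFB design is not specified in the abstract, so treat this as an assumption-laden illustration rather than the authors' block.

```python
# Hedged sketch of a dual-feature block (my interpretation, not the paper's DFB).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualFeatureBlock(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.hr_branch = nn.Sequential(          # fine detail, more noise
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.lr_branch = nn.Sequential(          # coarse structure, less noise
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x):
        hr = self.hr_branch(x)
        # Work at half resolution, then return to the input size.
        lr = F.interpolate(x, scale_factor=0.5, mode='bilinear',
                           align_corners=False)
        lr = self.lr_branch(lr)
        lr = F.interpolate(lr, size=x.shape[-2:], mode='bilinear',
                           align_corners=False)
        return self.fuse(torch.cat([hr, lr], dim=1)) + x   # residual connection

print(DualFeatureBlock()(torch.randn(1, 64, 40, 40)).shape)  # (1, 64, 40, 40)
```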


Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2348
Author(s):  
Zhe Liu ◽  
Yinqiang Zheng ◽  
Xian-Hua Han

Hyperspectral image (HSI) super-resolution (SR) is a challenging task due to its ill-posed nature, and it has attracted extensive attention from the research community. Previous methods concentrated on leveraging various hand-crafted image priors of a latent high-resolution hyperspectral (HR-HS) image to regularize the degradation model of the observed low-resolution hyperspectral (LR-HS) and HR-RGB images, and exploited different optimization strategies to search for a plausible solution, which usually leads to limited reconstruction performance. Recently, deep-learning-based methods have evolved to automatically learn the abundant image priors of a latent HR-HS image and have made great progress in HS image super-resolution. However, current deep-learning methods face difficulties in designing more complicated and deeper neural network architectures to boost performance, and they require large-scale training triplets, such as the LR-HS, HR-RGB, and corresponding HR-HS images, for network training, which significantly limits their applicability to real scenarios. In this work, a deep unsupervised fusion-learning framework is proposed that generates a latent HR-HS image using only the observed LR-HS and HR-RGB images, without preparing any other training triplets. Based on the fact that a convolutional neural network architecture is capable of capturing a large number of low-level statistics (priors) of images, the underlying priors of the spatial structures and spectral attributes in a latent HR-HS image are learned automatically from its corresponding degraded observations alone. Specifically, the parameter space of a generative neural network is searched for the required HR-HS image by minimizing the reconstruction errors of the observations according to the mathematical relations between the data. Moreover, special convolutional layers that approximate the degradation operations between the observations and the latent HR-HS image are designed to construct an end-to-end unsupervised learning framework for HS image super-resolution. Experiments on two benchmark HS datasets, CAVE and Harvard, demonstrate that the proposed method is capable of producing very promising results, even under a large upscaling factor, and that it outperforms other unsupervised state-of-the-art methods by a large margin, which manifests its superiority and efficiency.
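
The core of the unsupervised setup is that both degradations can be written as convolutional layers, so the generator's output can be scored against the two observations alone. The sketch below shows those two operators and the resulting reconstruction loss; the band count, blur size and the (omitted) generator are placeholders rather than the paper's exact parameterization.

```python
# Hedged sketch of the two degradation operators and the unsupervised loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

bands, scale = 31, 8                           # illustrative CAVE-like setting

# Spatial degradation: per-band blur + downsampling, implemented as a strided
# depthwise convolution; in practice its kernel approximates the sensor PSF.
spatial_down = nn.Conv2d(bands, bands, kernel_size=scale, stride=scale,
                         groups=bands, bias=False)

# Spectral degradation: a 1x1 convolution approximating the RGB camera's
# spectral response, mapping `bands` channels to 3.
spectral_resp = nn.Conv2d(bands, 3, kernel_size=1, bias=False)

def fusion_loss(hr_hs_est, lr_hs_obs, hr_rgb_obs):
    """Reconstruction error of both observations from the estimated HR-HS cube."""
    return (F.l1_loss(spatial_down(hr_hs_est), lr_hs_obs) +
            F.l1_loss(spectral_resp(hr_hs_est), hr_rgb_obs))

# Toy shapes: the generator's output and the two degraded observations.
hr_hs_est = torch.randn(1, bands, 128, 128, requires_grad=True)
lr_hs_obs = torch.randn(1, bands, 16, 16)
hr_rgb_obs = torch.randn(1, 3, 128, 128)
fusion_loss(hr_hs_est, lr_hs_obs, hr_rgb_obs).backward()
```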


Author(s):  
Anil Bhujel ◽  
Dibakar Raj Pant

Single image super-resolution (SISR) is a technique that reconstructs a high-resolution image from a single low-resolution image. A Dynamic Convolutional Neural Network (DCNN) is used here for the reconstruction of a high-resolution image from a single low-resolution image. It takes a low-resolution image as input and produces a high-resolution image as output for dynamic up-scaling factors of 2, 3, and 4. The dynamic convolutional neural network directly learns an end-to-end mapping between low-resolution and high-resolution images. The CNN is trained simultaneously with images up-scaled by factors of 2, 3, and 4 to make it dynamic, and the system is then tested on input images with up-scaling factors of 2, 3, and 4. The dynamically trained CNN performs well for all three up-scaling factors. The performance of the network is measured by PSNR, WPSNR, SSIM, MS-SSIM, and also by perceptual quality.

Journal of Advanced College of Engineering and Management, Vol. 3, 2017, Page: 1-10
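
A hedged sketch of the "dynamic" multi-scale training loop: a single SRCNN-like stand-in network is fed bicubically re-upscaled patches whose scale factor is drawn from {2, 3, 4} at each step. The network, patch sizes, and optimizer settings are illustrative assumptions, not the paper's configuration.

```python
# Sketch: one network trained simultaneously for up-scaling factors 2, 3 and 4.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

net = nn.Sequential(                              # SRCNN-like stand-in model
    nn.Conv2d(1, 64, 9, padding=4), nn.ReLU(inplace=True),
    nn.Conv2d(64, 32, 1),           nn.ReLU(inplace=True),
    nn.Conv2d(32, 1, 5, padding=2),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-4)

for step in range(100):
    hr = torch.rand(8, 1, 48, 48)                 # toy HR patches
    s = random.choice([2, 3, 4])                  # dynamic up-scaling factor
    lr = F.interpolate(hr, scale_factor=1 / s, mode='bicubic',
                       align_corners=False)       # simulate the LR input
    lr_up = F.interpolate(lr, size=hr.shape[-2:], mode='bicubic',
                          align_corners=False)    # back to HR size, still blurry
    loss = F.mse_loss(net(lr_up), hr)
    opt.zero_grad()
    loss.backward()
    opt.step()
```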


2021 ◽  
Vol 13 (20) ◽  
pp. 4074
Author(s):  
Xiaochen Lu ◽  
Dezheng Yang ◽  
Junping Zhang ◽  
Fengde Jia

Super-resolution (SR) technology has emerged as an effective tool for image analysis and interpretation. However, single hyperspectral (HS) image SR remains challenging, due to the high spectral dimensionality and the lack of available high-resolution information from auxiliary sources. To fully exploit the spectral and spatial characteristics, in this paper a novel single HS image SR approach is proposed, based on a spatial correlation-regularized unmixing convolutional neural network (CNN). The proposed approach takes advantage of a CNN to explore the collaborative spatial and spectral information of an HS image and infer the high-resolution abundance maps, thereby reconstructing the anticipated high-resolution HS image via the linear spectral mixture model. Moreover, a dual-branch network architecture and a spatial spread transform function are employed to characterize the spatial correlation between the high- and low-resolution HS images, aiming to promote the fidelity of the super-resolved image. Experiments on three public remote sensing HS images demonstrate the feasibility and superiority of the approach in terms of spectral fidelity, compared with some state-of-the-art HS image super-resolution methods.
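
The final reconstruction step via the linear spectral mixture model is simple enough to write out: each high-resolution pixel's spectrum is a combination of endmember signatures weighted by the CNN-inferred abundances. In the sketch below, the abundance maps and endmember matrix are random placeholders and their sizes are assumptions; only the mixing step itself is shown.

```python
# Hedged sketch of reconstruction via the linear spectral mixture model.
import torch

bands, endmembers, H, W = 102, 6, 128, 128

endmember_sigs = torch.rand(bands, endmembers)      # spectral signatures E
abundance = torch.rand(endmembers, H, W)            # HR abundance maps (from the CNN)
abundance = abundance / abundance.sum(dim=0, keepdim=True)   # sum-to-one constraint

# Each HR pixel's spectrum is a non-negative combination of the endmembers:
# X[:, i, j] = E @ A[:, i, j]
hr_hs = torch.einsum('be,ehw->bhw', endmember_sigs, abundance)
print(hr_hs.shape)                                  # torch.Size([102, 128, 128])
```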

