FL-MISR: fast large-scale multi-image super-resolution for computed tomography based on multi-GPU acceleration

Author(s):  
Kaicong Sun ◽  
Trung-Hieu Tran ◽  
Jajnabalkya Guhathakurta ◽  
Sven Simon

Abstract: Multi-image super-resolution (MISR) usually outperforms single-image super-resolution (SISR) under proper inter-image alignment by explicitly exploiting the inter-image correlation. However, the large computational demand hinders the deployment of MISR in practice. In this work, we propose FL-MISR, a distributed optimization framework based on data parallelism for fast large-scale MISR using multi-GPU acceleration. The scaled conjugate gradient (SCG) algorithm is applied to the distributed subfunctions, and the local SCG variables are exchanged across GPUs to synchronize the convergence rate towards a consistent convergence. Furthermore, an inner-outer border exchange scheme obviates the border effect between neighboring GPUs. FL-MISR is applied to computed tomography (CT) by super-resolving the projections acquired via subpixel detector shift. The SR reconstruction is performed on the fly during the CT acquisition, so no additional computation time is introduced. FL-MISR is extensively evaluated from different aspects, and experimental results demonstrate that it effectively improves the spatial resolution of CT systems in terms of modulation transfer function (MTF) and visual perception. Compared with a multi-core CPU implementation, FL-MISR achieves a more than 50× speedup on an off-the-shelf 4-GPU system.
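
The border-exchange idea described above can be sketched in a few lines: each worker's tile is padded with halo rows copied from its neighbors, so computation near tile borders sees the same data a single-GPU run would. This is a minimal NumPy sketch under assumed names (`split_with_halo`, a row-wise split, a 1-row halo), not the authors' CUDA implementation.

```python
import numpy as np

def split_with_halo(img, n_parts, halo):
    """Split the rows of `img` across `n_parts` workers, padding each
    tile with `halo` rows copied from its neighbors (an illustrative
    stand-in for the paper's inner-outer border exchange between GPUs)."""
    h = img.shape[0]
    step = h // n_parts
    tiles = []
    for i in range(n_parts):
        lo = max(0, i * step - halo)          # extend upward into neighbor
        hi = min(h, (i + 1) * step + halo)    # extend downward into neighbor
        tiles.append(img[lo:hi].copy())
    return tiles

img = np.arange(64, dtype=float).reshape(8, 8)
tiles = split_with_halo(img, n_parts=2, halo=1)
# tile 0 holds rows 0..4 (its 4 rows plus one halo row from below),
# tile 1 holds rows 3..7 (its 4 rows plus one halo row from above)
```

In an actual multi-GPU run the halo rows would be re-exchanged after each SCG iteration rather than copied once.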

Author(s):  
Xin Li ◽  
Jie Chen ◽  
Ziguan Cui ◽  
Minghu Wu ◽  
Xiuchang Zhu

Sparse representation theory has attracted much attention and has been successfully used in image super-resolution (SR) reconstruction. However, it provides only a local prior over image patches. The field of experts (FoE) model, by contrast, offers a generic and expressive prior over the whole image. The algorithm proposed in this paper uses the FoE model as the global constraint of the SR reconstruction problem to pre-process the low-resolution image. Since a single dictionary cannot accurately represent different types of image patches, our algorithm classifies the sample patches composed of the pre-processed image and the high-resolution image, obtains sub-dictionaries by training, and adaptively selects the most appropriate sub-dictionary for reconstruction according to the pyramid histogram of oriented gradients feature of the image patches. Furthermore, to reduce the computational complexity, our algorithm applies edge detection and performs sparse-representation SR reconstruction only on the edge patches of the test image. Non-edge patches are directly replaced by the pre-processing results of the FoE model. Experimental results show that our algorithm effectively preserves the quality of the reconstructed image while reducing the computation time.
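
The adaptive sub-dictionary selection can be illustrated as a nearest-centroid lookup: each sub-dictionary carries the centroid of the gradient-histogram features of its training patches, and a test patch picks the sub-dictionary whose centroid is closest. A hedged sketch in which 2-D vectors stand in for the pyramid-HOG descriptor; `select_subdictionary` and the centroid values are illustrative, not from the paper.

```python
import numpy as np

def select_subdictionary(patch_feat, centroids):
    """Return the index of the sub-dictionary whose training-cluster
    centroid is closest (Euclidean distance) to the patch feature."""
    dists = np.linalg.norm(centroids - patch_feat, axis=1)
    return int(np.argmin(dists))

# Three clusters of patch features, e.g. horizontal edges, vertical
# edges, and diagonal texture (toy 2-D features for illustration).
centroids = np.array([[1.0, 0.0],
                      [0.0, 1.0],
                      [0.7, 0.7]])
idx = select_subdictionary(np.array([0.1, 0.9]), centroids)
# → 1 (closest to the second centroid)
```

Only edge patches would reach this lookup; non-edge patches keep their FoE pre-processed values directly.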


2019 ◽  
Vol 2019 ◽  
pp. 1-14
Author(s):  
Jianhong Li ◽  
Kanoksak Wattanachote ◽  
Yarong Wu

Prior knowledge plays an important role in image super-resolution reconstruction, as it can constrain the solution space efficiently. In this paper, we exploit the fact that a clear image exhibits stronger self-similarity than its degraded versions to present a new prior, called maximizing nonlocal self-similarity, for single-image super-resolution. To describe the prior mathematically, a joint Gaussian mixture model is trained with LR and HR patch pairs extracted from the input LR image and its lower scale, and the prior can be derived as a specific Gaussian distribution. Our algorithm requires neither large-scale sophisticated training nor time-consuming nearest-neighbor search, and its cost function has a closed-form solution. Experiments conducted on BSD500 and other popular images demonstrate that the proposed method outperforms traditional methods and is competitive with current state-of-the-art algorithms in terms of both quantitative metrics and visual quality.
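
Once the responsible Gaussian component is known, the HR estimate follows in closed form from the conditional mean of a joint Gaussian over concatenated LR/HR patch vectors: E[HR | LR] = mu_h + S_hl S_ll^{-1} (x_lr - mu_l). A minimal sketch of that closed-form step; the means and covariances below are placeholders, not trained values from the paper.

```python
import numpy as np

def conditional_mean(lr, mu_l, mu_h, S_hl, S_ll):
    """Closed-form conditional mean E[HR | LR] for a joint Gaussian
    over stacked LR/HR patch vectors: mu_h + S_hl @ S_ll^{-1} (lr - mu_l).
    `solve` avoids forming the explicit inverse of S_ll."""
    return mu_h + S_hl @ np.linalg.solve(S_ll, lr - mu_l)

# Toy 2-D statistics: zero means, S_ll = I, cross-covariance 0.5 I.
lr = np.array([2.0, 2.0])
hr_est = conditional_mean(lr, np.zeros(2), np.zeros(2),
                          0.5 * np.eye(2), np.eye(2))
# → [1.0, 1.0]
```

This is why no nearest-neighbor search is needed: the estimate is a single linear map per mixture component.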


Complexity ◽  
2019 ◽  
Vol 2019 ◽  
pp. 1-14
Author(s):  
Zhen Hua ◽  
Haicheng Zhang ◽  
Jinjiang Li

Fractal coding techniques are an effective tool for describing image textures. Existing image super-resolution (SR) methods, however, reconstruct poorly at large scale factors and lose texture details. In this paper, we propose an SR method based on error compensation and fractal coding. First, quadtree coding is performed on the image, and the similarity between each range block and domain block is established to determine the fractal code. Then, through this similarity relationship, the attractor is reconstructed by super-resolution fractal decoding to obtain an interpolated image. Finally, the fractal error of the fractal code is estimated by a deep residual network, and the estimated error image is added as a compensation term to the interpolated image to obtain the final reconstruction. The network is jointly trained as a deep network and a shallow network, and residual learning is introduced to greatly improve the convergence speed and reconstruction accuracy. Experiments against other state-of-the-art methods on the benchmark datasets Set5, Set14, B100, and Urban100 show that our algorithm achieves competitive performance quantitatively and qualitatively, with subtle edges and vivid textures, and reconstructs large-scale-factor images better.
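
The core of quadtree fractal coding is fitting, for each range block, a contrast s and brightness o that map a candidate domain block onto it in the least-squares sense. A minimal sketch of that fit; the 2×2 blocks are synthetic and the helper name `fractal_code` is our own.

```python
import numpy as np

def fractal_code(range_block, domain_block):
    """Least-squares contrast `s` and brightness `o` such that
    s * domain_block + o best matches range_block (the per-block
    parameters stored in a fractal code)."""
    d = domain_block.ravel()
    r = range_block.ravel()
    dc = d - d.mean()
    s = (dc * (r - r.mean())).sum() / (dc * dc).sum()
    o = r.mean() - s * d.mean()
    return s, o

# Synthetic affine pair: the range block is exactly 2 * domain + 3.
domain = np.array([[0.0, 1.0],
                   [2.0, 3.0]])
rng_block = 2.0 * domain + 3.0
s, o = fractal_code(rng_block, domain)
# → s ≈ 2.0, o ≈ 3.0
```

Decoding iterates these block maps to a fixed point (the attractor); the residual network then supplies the additive error compensation described above.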


Author(s):  
S. E. EL-KHAMY ◽  
M. M. HADHOUD ◽  
M. I. DESSOUKY ◽  
B. M. SALAM ◽  
F. E. ABD EL-SAMIE

This paper presents a wavelet-based, computationally efficient implementation of the Linear Minimum Mean Square Error (LMMSE) algorithm for image super-resolution. The image super-resolution reconstruction problem is well known to be an ill-posed inverse problem of large dimensions, and applying the LMMSE estimator directly requires inverting a matrix of very large dimensions, which is practically infeasible. Our suggested implementation breaks the problem into four consecutive steps: a registration step, a multi-channel LMMSE restoration step, a wavelet-based image fusion step, and an LMMSE image interpolation step. The objective of the wavelet fusion step is to integrate the data obtained from each observation into a single image, which is then interpolated to give a high-resolution image. The paper explains the implementation of each step. The proposed implementation succeeds in obtaining a high-resolution image with a high PSNR from multiple degraded observations, and its computation time is small compared to traditional iterative image super-resolution algorithms.
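
The wavelet fusion step can be illustrated with a one-level Haar split: average the low-pass coefficients of the registered observations and keep the maximum-magnitude high-pass coefficients. A 1-D sketch under assumptions of our own (single-level Haar, name `haar_fuse`); a full pipeline would use a 2-D multilevel transform.

```python
import numpy as np

def haar_fuse(imgs):
    """Fuse registered 1-D observations: average the Haar approximation
    coefficients, keep the max-magnitude detail coefficients, then
    invert the transform. Illustrative max-rule fusion, not the paper's
    exact scheme."""
    lows, highs = [], []
    for x in imgs:
        lows.append((x[0::2] + x[1::2]) / 2.0)   # approximation
        highs.append((x[0::2] - x[1::2]) / 2.0)  # detail
    a = np.mean(lows, axis=0)
    highs = np.stack(highs)
    pick = np.abs(highs).argmax(axis=0)[None]    # max-magnitude rule
    d = np.take_along_axis(highs, pick, axis=0)[0]
    out = np.empty(2 * a.size)                   # inverse Haar step
    out[0::2] = a + d
    out[1::2] = a - d
    return out

fused = haar_fuse([np.array([1.0, 3.0, 2.0, 0.0]),
                   np.array([1.0, 3.0, 2.0, 0.0])])
# identical inputs fuse back to the input signal
```

The fused image would then feed the final LMMSE interpolation step.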


2020 ◽  
Vol 37 (12) ◽  
pp. 2197-2207
Author(s):  
Andrew Geiss ◽  
Joseph C. Hardin

Abstract: Super resolution involves synthetically increasing the resolution of gridded data beyond their native resolution. Typically, this is done using interpolation schemes, which estimate sub-grid-scale values from neighboring data and perform the same operation everywhere regardless of the large-scale context, or by using a network of radars with overlapping fields of view. Recently, significant progress has been made in single-image super resolution using convolutional neural networks. Conceptually, a neural network may be able to learn relations between large-scale precipitation features and the associated sub-pixel-scale variability and thus outperform interpolation schemes. Here, we use a deep convolutional neural network to artificially enhance the resolution of NEXRAD PPI scans. The model is trained on 6 months of reflectivity observations from the Langley Hill, Washington, radar (KLGX), and we find that it substantially outperforms common interpolation schemes for 4× and 8× resolution increases on several objective error and perceptual quality metrics.
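
CNN super-resolution models commonly produce the upscaled output through a depth-to-space (pixel shuffle) rearrangement of an r²-channel feature map into an r×-larger image. The abstract does not state which upsampling layer this model uses, so the NumPy sketch below is illustrative of the technique, not of the paper's architecture.

```python
import numpy as np

def pixel_shuffle(x, r):
    """Depth-to-space: rearrange an (r*r, H, W) feature map into an
    (r*H, r*W) image, so out[i*r + a, j*r + b] = x[a*r + b, i, j]."""
    c, h, w = x.shape
    assert c == r * r, "channel count must equal r**2"
    return (x.reshape(r, r, h, w)      # split channels into (a, b)
             .transpose(2, 0, 3, 1)    # interleave with spatial dims
             .reshape(h * r, w * r))

# A single 4-channel "pixel" upscales to a 2x2 neighborhood.
out = pixel_shuffle(np.arange(4.0).reshape(4, 1, 1), 2)
# → [[0., 1.],
#    [2., 3.]]
```

For an 8× resolution increase, three such 2× stages (or one 8× stage with 64 channels) would follow the convolutional trunk.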

