Computationally Efficient Image Super Resolution from Totally Aliased Low Resolution Images

Author(s):  
A Anil Kumar ◽  
N Narendra ◽  
P Balamuralidhar ◽  
M Girish Chandra
Author(s):  
Dong Seon Cheng ◽  
Marco Cristani ◽  
Vittorio Murino

Image super-resolution is one of the most appealing applications of image processing: it retrieves a high-resolution image by fusing several registered low-resolution images depicting an object of interest. However, applying super-resolution to video data is challenging: a video sequence generally contains scattered information about several objects of interest in cluttered scenes. Especially with hand-held cameras, the overall quality may be poor due to low resolution or unsteadiness. The objective of this chapter is to demonstrate why standard image super-resolution fails on video data, what problems arise, and how we can overcome them. As our first contribution, we propose a novel Bayesian framework for super-resolution of persistent objects of interest in video sequences, a process we call Distillation. The traditional formulation of the image super-resolution problem assumes that the observed target is (1) always the same, (2) acquired by a camera making small movements, and (3) present in enough low-resolution images to recover high-frequency information. These assumptions are usually unsatisfied in real-world video acquisitions and often beyond the control of the video operator. With Distillation, we aim to extend and generalize the image super-resolution task, embedding it in a structured framework that accurately distills all the informative bits of an object of interest. In practice, the Distillation process: i) identifies, in a semi-supervised way, a set of objects of interest, clustering the related video frames and registering them with respect to global rigid transformations; ii) produces, for each object, a high-resolution image by weighting each pixel according to the information retrieved about that object. As a second contribution, we extend the Distillation process to deal with objects of interest whose appearance transformations are not (only) rigid.
This second process, built on top of Distillation, is hierarchical: clustering is applied recursively, beginning with the analysis of whole frames and selectively focusing on smaller sub-regions whose isolated motion can reasonably be assumed rigid. The final product of the overall process is a strip of images that describes the dynamics of the video at high resolution, switching between alternative local descriptions in response to visual changes. Our approach is first tested on synthetic data, obtaining encouraging results compared with known super-resolution techniques and good robustness against noise. Second, real data from different videos are considered, aiming to resolve the major details of the objects in motion.
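Step ii) above amounts to a confidence-weighted fusion of registered frames. A minimal NumPy sketch of that idea follows; the chapter's Bayesian formulation is richer than this weighted mean, and the `distill` function and its per-pixel confidence weights are illustrative assumptions, not the authors' exact method:

```python
import numpy as np

def distill(frames, weights):
    """Fuse registered low-resolution frames into one image, weighting
    each pixel by the confidence of the information it carries."""
    frames = np.asarray(frames, dtype=float)    # (N, H, W) registered frames
    weights = np.asarray(weights, dtype=float)  # (N, H, W) per-pixel confidences
    total = weights.sum(axis=0)
    total[total == 0] = 1.0                     # guard against empty pixels
    return (frames * weights).sum(axis=0) / total
```

A pixel observed with zero confidence in one frame simply contributes nothing to the fused result, which is the behaviour the weighting in step ii) is after.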


2019 ◽  
Vol 78 ◽  
pp. 236-245 ◽  
Author(s):  
Dewan Fahim Noor ◽  
Yue Li ◽  
Zhu Li ◽  
Shuvra Bhattacharyya ◽  
George York

Symmetry ◽  
2019 ◽  
Vol 11 (4) ◽  
pp. 464
Author(s):  
Muhammad Irfan ◽  
Sahib Khan ◽  
Arslan Arif ◽  
Khalil Khan ◽  
Aleem Khaliq ◽  
...  

The super-resolution (SR) technique reconstructs a high-resolution image from single or multiple low-resolution images. SR has gained much attention over the past decade, as it has significant applications in daily life. This paper provides a new single-image super-resolution technique for true-color images. The key idea is to obtain the super-resolved image from observed low-resolution images. The proposed technique combines wavelet-domain and spatial-domain algorithms, exploiting the advantages of both. An iterative back-projection method is implemented to minimize the reconstruction error, and a wavelet-based de-noising method is used for noise removal. Previously, this approach had been applied to grayscale images; the proposed algorithm handles color images. The results of the proposed method have been examined both subjectively, by visual inspection, and objectively, by the peak signal-to-noise ratio (PSNR) and mean squared error (MSE); the method gives significant improvements and visually better quality than the bicubic interpolation technique.
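The iterative back-projection step mentioned above can be sketched in a few lines: simulate the low-resolution observation from the current high-resolution estimate and push the residual back up. The box-blur/decimate operators and the step size below are simplifying assumptions, not the paper's exact imaging model:

```python
import numpy as np

def blur3x3(img):
    # 3x3 box blur with edge padding (a stand-in for the camera PSF)
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def downsample(img, s):
    return blur3x3(img)[::s, ::s]           # simulate LR acquisition

def upsample(img, s):
    return np.repeat(np.repeat(img, s, axis=0), s, axis=1)

def back_project(lr, s=2, iters=20, step=0.5):
    """Iteratively refine an HR estimate until its simulated LR version
    matches the observed LR image."""
    hr = upsample(lr, s)                        # crude initial HR guess
    for _ in range(iters):
        residual = lr - downsample(hr, s)       # reconstruction error in LR space
        hr = hr + step * upsample(residual, s)  # back-project the error
    return hr
```

Each iteration shrinks the discrepancy between the observed low-resolution image and the one re-simulated from the current estimate, which is exactly the reconstruction error the paper minimizes.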


This project is an attempt to understand the suitability of single-image super-resolution models for video super-resolution. Super-resolution refers to the process of enhancing the quality of low-resolution images and video. Single-image super-resolution algorithms are applied to a single image to enhance its resolution, whereas video super-resolution algorithms are applied to the sequence of frames that constitute a video. In this paper, we determine whether single-image super-resolution models can be applied to videos as well. When images are simply resized in OpenCV, traditional interpolation methods approximate the values of new pixels from nearby pixel values; the results leave much to be desired in terms of visual quality, as details (e.g., sharp edges) are often not preserved. We use deep learning techniques such as GANs (generative adversarial networks) to train a model to output high-resolution images from low-resolution images. We analyse the suitability of the SRGAN and EDSR network architectures, which are widely used and popular for the single-image super-resolution problem. We quantify the performance of these models and provide a method to evaluate and compare them, then draw a conclusion on the suitability of these models, and the extent to which they may be used, for video super-resolution. If found suitable, this could have significant impact, including but not limited to video compression and embedded models in end devices that enhance video output quality.
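The interpolation behaviour described above can be seen in a plain-NumPy sketch of bilinear upscaling, the kind of approximation `cv2.resize` performs; this stand-alone version avoids the OpenCV dependency and makes the edge-smearing explicit:

```python
import numpy as np

def bilinear_upscale(img, scale):
    """Upscale a 2-D grayscale image by bilinear interpolation: each new
    pixel is a distance-weighted average of its four source neighbours."""
    h, w = img.shape
    nh, nw = h * scale, w * scale
    # coordinates of the new grid mapped back into the source image
    ys = np.linspace(0, h - 1, nh)
    xs = np.linspace(0, w - 1, nw)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]                 # fractional offsets
    wx = (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy
```

Upscaling a hard black-to-white edge with this function produces intermediate grey values along the boundary: the averaging that makes interpolation cheap is the same mechanism that destroys sharp detail, which is the gap learned SR models aim to close.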


Author(s):  
Xin Jin ◽  
Jianfeng Xu ◽  
Kazuyuki Tasaka ◽  
Zhibo Chen

In this article, we address the degraded image super-resolution problem in a multi-task learning (MTL) manner. To better share representations between multiple tasks, we propose an all-in-one collaboration framework (ACF) with a learnable "junction" unit that handles the two major problems in MTL: "how to share" and "how much to share." Specifically, ACF consists of a sharing phase and a reconstruction phase. Considering the intrinsic characteristics of multiple image degradations, we first deal with the compression artifacts, motion blur, and spatial structure information of the input image in parallel, under a three-branch architecture in the sharing phase. Subsequently, in the reconstruction phase, we up-sample the resulting features for high-resolution image reconstruction with a channel-wise and spatial attention mechanism. To coordinate the two phases, we introduce a learnable "junction" unit with a dual-voting mechanism that selectively filters or preserves the shared feature representations coming from the sharing phase, learning an optimal combination for the following reconstruction phase. Finally, a curriculum-learning-based training scheme is proposed to improve the convergence of the whole framework. Extensive experimental results on synthetic and real-world low-resolution images show that the proposed all-in-one collaboration framework not only produces favorable high-resolution results while removing serious degradation, but also has high computational efficiency, outperforming state-of-the-art methods. We have also applied ACF to image-quality-sensitive practical tasks, such as pose estimation, to improve estimation accuracy on low-resolution images.
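The channel-wise and spatial attention used in the reconstruction phase can be illustrated with a generic NumPy sketch. ACF's exact attention design is not specified in the abstract; the squeeze-and-excitation-style gating below is a common realisation of the idea, and all weight shapes here are assumptions:

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Channel attention: pool each channel to a scalar, then learn a
    sigmoid gate per channel. feat: (C, H, W); w1, w2: projection matrices."""
    squeeze = feat.mean(axis=(1, 2))               # global average pool -> (C,)
    hidden = np.maximum(w1 @ squeeze, 0.0)         # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))    # per-channel gate in (0, 1)
    return feat * gate[:, None, None]              # reweight each channel

def spatial_attention(feat, kernel):
    """Spatial attention: gate every location using pooled channel statistics.
    kernel: (2,) weights mixing the mean- and max-pooled maps (a 1x1 'conv')."""
    pooled = np.stack([feat.mean(axis=0), feat.max(axis=0)])  # (2, H, W)
    score = (kernel[:, None, None] * pooled).sum(axis=0)      # (H, W)
    gate = 1.0 / (1.0 + np.exp(-score))                       # per-pixel gate
    return feat * gate[None]
```

Because both gates lie in (0, 1), attention can only attenuate features, never amplify them; the network learns which channels and locations to keep for reconstruction.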


2021 ◽  
Vol 2083 (4) ◽  
pp. 042026
Author(s):  
Lizhuo Gao

Abstract Super-resolution (SR) is applied in many digital imaging fields. In many cases, only a set of low-resolution images can be obtained while a higher resolution is needed, and SR must then be applied. SR technology has undergone years of development. Among existing methods, SRGAN is the key work that introduced GANs into the SR field, recovering a large amount of realistic detail from low-resolution pictures. ESRGAN is a further improvement on SRGAN: by removing the BN (batch normalization) layers, it eliminates SRGAN's artifacts. However, the restoration of small- and medium-scale information is still not accurate enough. The proposed ERDBNet improves the model on the basis of ESRGAN, replacing the original RRDB block with an ERDB block. The new structure uses a three-layer dense block in place of the original dense block, and a residual connection from the starting point is added to each dense block. The pre-trained network reaches a PSNR of 30.425 after 200k iterations, with the PSNR fluctuating no lower than 30.213. Compared with the original structure, it is more stable and performs better at recovering detail in many low-resolution images.
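The block structure described above, a three-layer dense block plus a residual connection from the starting point, can be sketched with 1x1 convolutions in NumPy. The channel counts, the 0.2 residual scale, and the ReLU activation are assumptions borrowed from the RRDB tradition, not the paper's exact configuration:

```python
import numpy as np

def conv1x1(weight):
    # a 1x1 convolution is just a channel-mixing matrix multiply
    return lambda f: np.einsum("oc,chw->ohw", weight, f)

def erdb_block(x, convs, res_scale=0.2):
    """Three-layer dense block with a residual connection from the block
    input: every layer sees the concatenation of all previous outputs."""
    feats = [x]
    for conv in convs:
        out = np.maximum(conv(np.concatenate(feats, axis=0)), 0.0)  # ReLU
        feats.append(out)
    return x + res_scale * feats[-1]  # residual from the starting point

# layer k of the dense block sees C*(k+1) input channels and emits C channels
rng = np.random.default_rng(0)
C = 4
convs = [conv1x1(0.1 * rng.standard_normal((C, C * (k + 1)))) for k in range(3)]
```

The dense connectivity lets each layer reuse all earlier features, while the residual from the starting point keeps the block close to an identity mapping early in training, which is the stabilising effect the abstract attributes to the new structure.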


2020 ◽  
Vol 53 (7-8) ◽  
pp. 1429-1439
Author(s):  
Ziwei Zhang ◽  
Yangjing Shi ◽  
Xiaoshi Zhou ◽  
Hongfei Kan ◽  
Juan Wen

When low-resolution face images are used for face recognition, model accuracy decreases substantially. How to recover high-resolution face features from low-resolution images precisely and efficiently is an essential subtask in face recognition. In this study, we introduce shuffle block SRGAN (SB-SRGAN), a new image super-resolution network inspired by the SRGAN structure. By replacing the residual blocks with shuffle blocks, we achieve efficient super-resolution reconstruction. Furthermore, by incorporating the generated image quality into the loss function, we obtain more realistic super-resolution images. We train and test SB-SRGAN on three public face image datasets, using a transfer learning strategy during training. The experimental results show that SB-SRGAN achieves desirable image super-resolution performance in terms of visual effect as well as the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) metrics, compared with the performance attained by the other chosen deep-learning models.
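The PSNR metric that recurs throughout these evaluations is straightforward to compute from the MSE; a minimal sketch, assuming a peak value of 255 for 8-bit images:

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images of the same shape."""
    return float(np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference.
    Identical images give infinite PSNR (zero error)."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```

Because PSNR is a monotone transform of MSE, it captures pixel fidelity but not perceptual quality, which is why studies like the ones above report SSIM and visual inspection alongside it.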

