Distillation

Author(s):  
Dong Seon Cheng ◽  
Marco Cristani ◽  
Vittorio Murino

Image super-resolution is one of the most appealing applications of image processing: it recovers a high-resolution image by fusing several registered low-resolution images depicting an object of interest. However, applying super-resolution to video data is challenging: a video sequence generally contains scattered information about several objects of interest in cluttered scenes, and, especially with hand-held cameras, the overall quality may be poor due to low resolution or unsteadiness. The objective of this chapter is to show why standard image super-resolution fails on video data, which problems arise, and how we can overcome them. As our first contribution, we propose a novel Bayesian framework for super-resolution of persistent objects of interest in video sequences, a process we call Distillation. In the traditional formulation of the image super-resolution problem, the observed target is (1) always the same, (2) acquired by a camera making small movements, and (3) present in a number of low-resolution images sufficient to recover high-frequency information. These assumptions are usually not satisfied in real-world video acquisitions and are often beyond the control of the video operator. With Distillation, we aim to extend and generalize the image super-resolution task, embedding it in a structured framework that accurately distills all the informative bits of an object of interest. In practice, the Distillation process: i) identifies, in a semi-supervised way, a set of objects of interest, clustering the related video frames and registering them with respect to global rigid transformations; ii) produces, for each object, a high-resolution image by weighting each pixel according to the information retrieved about that object. As a second contribution, we extend the Distillation process to deal with objects of interest whose appearance transformations are not (only) rigid. This process, built on top of Distillation, is hierarchical, in the sense that clustering is applied recursively, beginning with the analysis of whole frames and selectively focusing on smaller sub-regions whose isolated motion can reasonably be assumed to be rigid. The ultimate product of the overall process is a strip of images that describes the dynamics of the video at high resolution, switching between alternative local descriptions in response to visual changes. Our approach is first tested on synthetic data, obtaining encouraging comparative results with respect to known super-resolution techniques and good robustness against noise. Second, real data from different videos are considered, aiming to recover the finer details of the objects in motion.
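
To make the fusion step concrete, the following is a minimal sketch, in Python, of the weighted pixel fusion that Distillation performs once the frames of an object have been clustered and registered. The rigid transforms and per-pixel confidence weights are assumed to be given; the function names and the simple weighted average are illustrative, not the authors' full Bayesian formulation.

```python
# Minimal sketch of the pixel-weighted fusion step (hypothetical helper names;
# registration and weight estimation are assumed to be done upstream).
import numpy as np
from scipy.ndimage import affine_transform

def distill(lr_frames, transforms, weights, scale=2):
    """Fuse registered low-resolution frames into one high-resolution image.

    lr_frames  : list of 2-D arrays (grayscale LR observations of one object)
    transforms : list of 2x3 affine matrices mapping HR coords to LR coords
    weights    : list of per-pixel confidence maps defined on the HR grid
    """
    h, w = lr_frames[0].shape
    hr_shape = (h * scale, w * scale)
    acc = np.zeros(hr_shape)
    norm = np.zeros(hr_shape)
    for frame, T, wmap in zip(lr_frames, transforms, weights):
        # Warp the LR frame onto the HR grid (bilinear interpolation).
        warped = affine_transform(frame, T[:, :2], offset=T[:, 2],
                                  output_shape=hr_shape, order=1)
        acc += wmap * warped
        norm += wmap
    return acc / np.maximum(norm, 1e-8)
```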

2014 ◽  
Vol 568-570 ◽  
pp. 652-655 ◽  
Author(s):  
Zhao Li ◽  
Le Wang ◽  
Tao Yu ◽  
Bing Liang Hu

This paper presents a novel method for solving single-image super-resolution problems, based upon low-rank representation (LRR). Given a set of low-resolution image patches, LRR seeks the lowest-rank representation among all candidates that express every patch as a linear combination of the patches in a low-resolution dictionary. By jointly training two dictionaries for the low-resolution and high-resolution images, we can enforce the similarity of the LRRs of a low-resolution and high-resolution image pair with respect to their own dictionaries. Therefore, the LRR of a low-resolution image can be applied to the high-resolution dictionary to generate a high-resolution image. Unlike the well-known sparse representation, which computes the sparsest representation of each image patch individually, LRR finds the lowest-rank representation of a collection of patches jointly and thus better captures the global structure of the image. Experiments show that our method gives good results both visually and quantitatively.
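
As a rough illustration of the idea, the sketch below applies a coupled-dictionary low-rank representation in Python. The dictionaries D_l and D_h are assumed to have been trained jointly beforehand, and a single singular-value thresholding step stands in for the full LRR solver (normally an iterative ADMM procedure); all names are illustrative.

```python
# Crude stand-in for the LRR-based mapping from LR patches to HR patches.
import numpy as np

def lrr_super_resolve(D_l, D_h, Y, tau=0.1):
    """D_l: LR dictionary (d_l x k), D_h: HR dictionary (d_h x k),
    Y: LR patches stacked as columns (d_l x n). Returns HR patches (d_h x n)."""
    # Least-squares representation of all patches jointly.
    Z = np.linalg.pinv(D_l) @ Y
    # Soft-threshold the singular values to push Z toward low rank.
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    Z_lowrank = (U * s) @ Vt
    # Apply the same (low-rank) representation with the HR dictionary.
    return D_h @ Z_lowrank
```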


Author(s):  
Xiongxiong Xue ◽  
Zhenqi Han ◽  
Weiqin Tong ◽  
Mingqi Li ◽  
Lizhuang Liu

Video super-resolution, which exploits the information in several low-resolution frames to generate high-resolution images, is a challenging task. One possible solution, the sliding-window method, divides the generation of the high-resolution video sequence into independent sub-tasks, using only the adjacent low-resolution frames to estimate the high-resolution version of the central low-resolution frame. Another popular approach, the recurrent method, utilizes not only the low-resolution frames but also the high-resolution images already generated for previous frames. However, both methods have unavoidable disadvantages: the former usually suffers from poor temporal consistency and a higher computational cost, while the latter cannot make full use of the information contained in the optical flow or other computed features. More investigation is therefore needed to find a balance between the two. In this work, a bidirectional frame-recurrent video super-resolution method is proposed. Specifically, a reverse training pass is introduced, in which the generated high-resolution frame is also used to help estimate the high-resolution version of the preceding frame. Combining the reverse and forward training passes, the bidirectional recurrent method not only preserves temporal consistency but also makes full use of the adjacent-frame information, while keeping the computational cost acceptable. Experimental results demonstrate that the bidirectional super-resolution framework performs remarkably well: it resolves the temporal-consistency problems while producing high-resolution images that compare favorably with those of recurrent video super-resolution methods.
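
The sketch below illustrates only the bidirectional frame-recurrent structure: a forward recurrent pass, a reverse recurrent pass, and a merge of the two estimates. The per-frame step `sr_step` is a crude placeholder (naive 2x upsampling plus averaging) for the actual flow-warping network described in the paper.

```python
# Structural sketch of bidirectional frame-recurrent video super-resolution.
import numpy as np

def sr_step(lr_frame, neighbor_hr):
    # Placeholder: a real implementation would warp neighbor_hr with optical
    # flow and run a CNN; here we just upsample and average for illustration.
    up = np.kron(lr_frame, np.ones((2, 2)))  # naive 2x pixel replication
    return 0.5 * (up + neighbor_hr)

def bidirectional_vsr(lr_frames):
    h, w = lr_frames[0].shape
    init = np.zeros((h * 2, w * 2))
    # Forward (past-to-future) recurrent pass.
    forward, prev = [], init
    for f in lr_frames:
        prev = sr_step(f, prev)
        forward.append(prev)
    # Reverse (future-to-past) recurrent pass.
    backward, nxt = [None] * len(lr_frames), init
    for i in range(len(lr_frames) - 1, -1, -1):
        nxt = sr_step(lr_frames[i], nxt)
        backward[i] = nxt
    # Merge the two directions into the final HR sequence.
    return [0.5 * (f + b) for f, b in zip(forward, backward)]
```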


Author(s):  
Anil Bhujel ◽  
Dibakar Raj Pant

Single image super-resolution (SISR) is a technique that reconstructs a high-resolution image from a single low-resolution image. A Dynamic Convolutional Neural Network (DCNN) is used here for this reconstruction: it takes a low-resolution image as input and produces a high-resolution image as output for dynamic up-scaling factors of 2, 3, and 4. The dynamic convolutional neural network directly learns an end-to-end mapping between low-resolution and high-resolution images. The CNN is trained simultaneously on images up-scaled by factors 2, 3, and 4, which is what makes it dynamic, and the system is then tested on input images with up-scaling factors 2, 3, and 4. The dynamically trained CNN performs well for all three up-scaling factors. The performance of the network is measured by PSNR, WPSNR, SSIM, MS-SSIM, and also by perceptual quality.

Journal of Advanced College of Engineering and Management, Vol. 3, 2017, Page: 1-10
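
A minimal SRCNN-style sketch of this idea in PyTorch is shown below: the low-resolution image is first interpolated by the requested factor (2, 3, or 4) and a small CNN then refines it, with training batches mixing all three factors. The layer sizes and the bicubic pre-upsampling are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicSRCNN(nn.Module):
    """Single network handling up-scaling factors 2, 3, and 4."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, 9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, 5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 5, padding=2),
        )

    def forward(self, lr, factor):
        # lr: (N, 1, H, W). Interpolate to the requested factor, then refine.
        up = F.interpolate(lr, scale_factor=factor, mode='bicubic',
                           align_corners=False)
        return self.body(up)

# Training mixes mini-batches up-scaled by factors 2, 3, and 4 so that the
# same weights serve all three factors at test time.
```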


2012 ◽  
Vol 241-244 ◽  
pp. 1913-1917
Author(s):  
Jie Xu ◽  
Xiao Lin Jiang ◽  
Xiao Yang Yu

Robot visual servo control is currently a major research direction in robot control. This design builds on compressed sensing theory, distributed video coding, and image super-resolution reconstruction. Simulation and experimental results show that applying compressed sensing theory to image super-resolution reconstruction allows the high-resolution reconstruction to fully exploit the structural characteristics of the original low-resolution images, preserving information such as edge details. Compared with the traditional calibration approach, the reconstructed high-resolution images render edges, texture, and other fine details better and improve recognition accuracy.
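
For context, compressive-sensing reconstruction typically reduces to recovering sparse coefficients from a small set of linear measurements. The snippet below is a generic ISTA solver for that sparse-recovery step, not the specific reconstruction pipeline of this paper; the measurement matrix A and measurement vector y are assumed given.

```python
# Generic iterative shrinkage-thresholding (ISTA) for sparse recovery:
# minimize ||A x - y||^2 + lam * ||x||_1
import numpy as np

def ista(A, y, lam=0.01, n_iter=200):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)            # gradient of the quadratic term
        x = x - grad / L
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold
    return x
```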


2014 ◽  
Vol 2014 ◽  
pp. 1-8 ◽  
Author(s):  
Zhang Liu ◽  
Qi Huang ◽  
Jian Li ◽  
Qi Wang

We propose a single image super-resolution method based on an L0 smoothing approach. We consider a low-resolution image as the sum of two parts: the smooth image generated by the L0 smoothing method and the error image, i.e., the difference between the low-resolution image and the smoothed image. We obtain an intermediate high-resolution image via classical interpolation and then generate a high-resolution smooth image with sharp edges by the L0 smoothing method. For the error image, a learning-based super-resolution approach, which preserves image details well, is employed to obtain a high-resolution error image. The resulting high-resolution image is the sum of the high-resolution smooth image and the high-resolution error image. Experimental results show the effectiveness of the proposed method.
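
A compact sketch of this pipeline is given below. The L0 smoother and the learning-based super-resolution of the error layer are passed in as placeholder callables (`l0_smooth`, `learned_sr`), since the abstract does not specify their implementations; only the split/upsample/recombine flow is shown.

```python
# Sketch of the L0-smoothing decomposition pipeline (placeholder helpers).
import numpy as np
from scipy.ndimage import zoom

def l0_decomposition_sr(lr_image, scale, l0_smooth, learned_sr):
    smooth_lr = l0_smooth(lr_image)          # edge-preserving smooth layer
    error_lr = lr_image - smooth_lr          # detail / error layer
    # Classical interpolation, then L0 smoothing to restore sharp edges.
    smooth_hr = l0_smooth(zoom(smooth_lr, scale, order=3))
    # Learning-based SR recovers fine details in the error layer.
    error_hr = learned_sr(error_lr, scale)
    # Final HR image is the sum of the two HR layers.
    return smooth_hr + error_hr
```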


Author(s):  
R. S. Hansen ◽  
D. W. Waldram ◽  
T. Q. Thai ◽  
R. B. Berke

Abstract
Background: High-resolution Digital Image Correlation (DIC) measurements have previously been produced by stitching of neighboring images, which often requires short working distances. Separately, the image processing community has developed super resolution (SR) imaging techniques, which improve resolution by combining multiple overlapping images.
Objective: This work investigates the novel pairing of super resolution with digital image correlation, as an alternative method to produce high-resolution full-field strain measurements.
Methods: First, an image reconstruction test is performed, comparing the ability of three previously published SR algorithms to replicate a high-resolution image. Second, an applied translation is compared against DIC measurement using both low- and super-resolution images. Third, a ring sample is mechanically deformed and DIC strain measurements from low- and super-resolution images are compared.
Results: SR measurements show improvements compared to low-resolution images, although they do not perfectly replicate the high-resolution image. SR-DIC demonstrates reduced error and improved confidence in measuring rigid body translation when compared to low resolution alternatives, and it also shows improvement in spatial resolution for strain measurements of ring deformation.
Conclusions: Super resolution imaging can be effectively paired with Digital Image Correlation, offering improved spatial resolution, reduced error, and increased measurement confidence.
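
As an illustration of the rigid-translation comparison in the Methods, the snippet below measures sub-pixel shifts with phase correlation (scikit-image assumed) so that low-resolution and super-resolved registrations can be compared against the applied translation; the super-resolution step itself is assumed to have been performed already.

```python
# Sub-pixel rigid-translation measurement via phase correlation.
from skimage.registration import phase_cross_correlation

def measured_shift(ref, moved, upsample=100):
    """Return the (row, col) displacement of `moved` relative to `ref`."""
    shift, _, _ = phase_cross_correlation(ref, moved, upsample_factor=upsample)
    return shift

# Example comparison (applied_shift is the known translation, scale the SR factor):
# lr_error = applied_shift - measured_shift(lr_ref, lr_moved)
# sr_error = applied_shift * scale - measured_shift(sr_ref, sr_moved)
```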

