Advance Neighbor Embedding for Image Super Resolution

2013 ◽  
Vol 8 (2) ◽  
pp. 768-776
Author(s):  
Dr. Ruikar Sachin D ◽  
Mr. Wadhavane Tushar D

This paper presents the Advance Neighbor Embedding (ANE) method for image super resolution. The neighbor-embedding (NE) algorithm for single-image super-resolution reconstruction assumes that the low-resolution and high-resolution patch feature spaces are locally isometric. This assumption does not hold in super resolution because of the one-to-many mapping between low-resolution and high-resolution patches. ANE mitigates this problem with a combined learning technique that trains two projection matrices simultaneously, mapping the original low-resolution and high-resolution feature spaces onto a unified feature subspace. The reconstruction weights of the k-nearest neighbors of a low-resolution image patch are then computed on those low-resolution patches in the unified feature space. Combined learning uses a coupled constraint, linking LR–HR counterparts together with the k-nearest grouping patch pairs, to handle a large number of samples. As a result, the ANE method gives better resolution than the NE method.
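The NE reconstruction-weight step, which ANE performs in its unified feature space, can be sketched as the standard LLE-style constrained least squares: express an LR patch as an affine combination of its k nearest LR neighbors, then transfer those weights to the corresponding HR patches. A minimal NumPy sketch; the toy 2-D/3-D patch vectors and the regularization constant are illustrative assumptions, not values from the paper:

```python
import numpy as np

def ne_weights(x, neighbors, reg=1e-6):
    """LLE-style weights: minimize ||x - sum_j w_j n_j||^2
    subject to sum_j w_j = 1, via the local Gram matrix."""
    D = neighbors - x                      # (k, d) difference vectors
    G = D @ D.T                            # local Gram matrix
    G = G + reg * max(np.trace(G), 1.0) * np.eye(len(G))  # regularize
    w = np.linalg.solve(G, np.ones(len(G)))
    return w / w.sum()                     # enforce sum-to-one

# toy example: 3 LR "patches" (flat vectors) and their HR counterparts
lr_dict = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
hr_dict = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [0.0, 2.0, 0.0]])

x_lr = np.array([0.5, 0.25])     # input LR patch
w = ne_weights(x_lr, lr_dict)    # weights over the k nearest LR neighbors
x_hr = w @ hr_dict               # transfer the same weights to HR space
```

Because the weights sum to one and reconstruct `x_lr` exactly here, the transferred HR estimate is the same affine combination of the HR patches.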

2014 ◽  
Vol 568-570 ◽  
pp. 652-655 ◽  
Author(s):  
Zhao Li ◽  
Le Wang ◽  
Tao Yu ◽  
Bing Liang Hu

This paper presents a novel method for solving single-image super-resolution problems, based upon low-rank representation (LRR). Given a set of low-resolution image patches, LRR seeks the lowest-rank representation among all the candidates that represent all patches as linear combinations of the patches in a low-resolution dictionary. By jointly training two dictionaries for the low-resolution and high-resolution images, we can enforce the similarity of LRRs between the low-resolution and high-resolution image pair with respect to their own dictionaries. Therefore, the LRR of a low-resolution image can be applied with the high-resolution dictionary to generate a high-resolution image. Unlike the well-known sparse representation, which computes the sparsest representation of each image patch individually, LRR aims at finding the lowest-rank representation of a collection of patches jointly, and thus better captures the global structure of the image. Experiments show that our method gives good results both visually and quantitatively.
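The workhorse of LRR solvers is singular value thresholding, the proximal operator of the nuclear norm that is applied at each iteration of ADMM/ALM-style schemes. A minimal sketch on synthetic data; the rank-1-plus-noise matrix and the threshold are illustrative assumptions, not the paper's dictionaries:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: soft-threshold the singular
    values of M, the prox of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return U @ np.diag(s) @ Vt

# a rank-1 matrix plus small noise: thresholding recovers a low-rank estimate
rng = np.random.default_rng(0)
u, v = rng.standard_normal(8), rng.standard_normal(6)
X = np.outer(u, v) + 0.01 * rng.standard_normal((8, 6))
Z = svt(X, tau=0.5)
rank = np.linalg.matrix_rank(Z)   # small noise singular values are zeroed
```

The threshold `tau` kills the noise-level singular values while keeping the dominant one, which is exactly how LRR promotes a jointly low-rank coefficient matrix over a collection of patches.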


Computers ◽  
2019 ◽  
Vol 8 (2) ◽  
pp. 41 ◽  
Author(s):  
Vahid Anari ◽  
Farbod Razzazi ◽  
Rasoul Amirfattahi

In the current study, we were inspired by sparse analysis signal representation theory to propose a novel single-image super-resolution method termed “sparse analysis-based super resolution” (SASR). This study presents and demonstrates a mapping between low- and high-resolution images using a coupled sparse analysis operator learning method to reconstruct high-resolution (HR) images. We further show that the proposed method selects more informative high- and low-resolution (LR) learning patches based on image texture complexity, training the high- and low-resolution operators more efficiently. The coupled high- and low-resolution operators are used for high-resolution image reconstruction at a low computational cost. The experimental results, covering the quantitative criteria of peak signal-to-noise ratio (PSNR), root mean square error (RMSE), structural similarity index (SSIM), and elapsed time, human observation as a qualitative measure, and computational complexity, verify the improvements offered by the proposed SASR algorithm.
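In the analysis (cosparse) model underlying SASR, an operator is applied to the signal and the resulting coefficients are sparse, in contrast to the synthesis model where the signal itself is a sparse combination of atoms. A minimal sketch with a fixed finite-difference operator standing in for the learned coupled operators; both the operator and the signal are illustrative assumptions:

```python
import numpy as np

def analysis_operator(n):
    """First-order finite differences: a classic hand-crafted
    analysis operator (learned operators play the same role)."""
    O = np.zeros((n - 1, n))
    for i in range(n - 1):
        O[i, i], O[i, i + 1] = -1.0, 1.0
    return O

x = np.array([2.0, 2.0, 2.0, 5.0, 5.0, 5.0])  # piecewise-constant signal
O = analysis_operator(len(x))
coeffs = O @ x                         # analysis coefficients
sparsity = np.count_nonzero(coeffs)    # sparse: only the one jump survives
```

A piecewise-constant signal has almost all zero analysis coefficients; learned operators aim for the same cosparsity on natural image patches.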


2014 ◽  
Vol 2014 ◽  
pp. 1-8 ◽  
Author(s):  
Zhang Liu ◽  
Qi Huang ◽  
Jian Li ◽  
Qi Wang

We propose a single image super-resolution method based on an L0 smoothing approach. We consider a low-resolution image as two parts: one is the smooth image generated by the L0 smoothing method and the other is the error image between the low-resolution image and the smoothing image. We get an intermediate high-resolution image via a classical interpolation and then generate a high-resolution smoothing image with sharp edges by the L0 smoothing method. For the error image, a learning-based super-resolution approach, keeping image details well, is employed to obtain a high-resolution error image. The resulting high-resolution image is the sum of the high-resolution smoothing image and the high-resolution error image. Experimental results show the effectiveness of the proposed method.
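The decompose-upsample-recombine pipeline can be sketched as follows. A box filter stands in for L0 smoothing, and nearest-neighbour repetition stands in for both the interpolation and the learning-based stage; all stand-ins are assumptions for illustration, not the paper's operators:

```python
import numpy as np

def box_smooth(img, k=3):
    """Box filter as a stand-in for the L0 smoothing step."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def upsample(img, s=2):
    """Nearest-neighbour upsampling, standing in for the
    interpolation / learning-based SR stages."""
    return np.repeat(np.repeat(img, s, axis=0), s, axis=1)

lr = np.arange(16, dtype=float).reshape(4, 4)
smooth = lr - (lr - box_smooth(lr))          # structure layer
smooth = box_smooth(lr)                      # structure layer
error = lr - smooth                          # detail/error layer
hr = upsample(smooth) + upsample(error)      # recombine at high resolution
```

Since both stand-in upsamplers are linear, the recombined image equals the directly upsampled input, confirming the decomposition loses nothing; the paper's point is that the two layers can be upscaled by *different* methods suited to each.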


Author(s):  
Dong Seon Cheng ◽  
Marco Cristani ◽  
Vittorio Murino

Image super-resolution is one of the most appealing applications of image processing, capable of retrieving a high resolution image by fusing several registered low resolution images depicting an object of interest. However, employing super-resolution on video data is challenging: a video sequence generally contains a lot of scattered information regarding several objects of interest in cluttered scenes. Especially with hand-held cameras, the overall quality may be poor due to low resolution or unsteadiness. The objective of this chapter is to demonstrate why standard image super-resolution fails on video data, what problems arise, and how we can overcome these problems. In our first contribution, we propose a novel Bayesian framework for super-resolution of persistent objects of interest in video sequences. We call this process Distillation. In the traditional formulation of the image super-resolution problem, the observed target is (1) always the same, (2) acquired using a camera making small movements, and (3) found in a number of low resolution images sufficient to recover high-frequency information. These assumptions are usually unsatisfied in real world video acquisitions and often beyond the control of the video operator. With Distillation, we aim to extend and generalize the image super-resolution task, embedding it in a structured framework that accurately distills all the informative bits of an object of interest. In practice, the Distillation process: i) identifies, in a semi-supervised way, a set of objects of interest, clustering the related video frames and registering them with respect to global rigid transformations; ii) for each one, produces a high resolution image by weighting each pixel according to the information retrieved about the object of interest. As a second contribution, we extend the Distillation process to deal with objects of interest whose transformations in appearance are not (only) rigid. This process, built on top of Distillation, is hierarchical, in the sense that clustering is applied recursively, beginning with the analysis of whole frames and selectively focusing on smaller sub-regions whose isolated motion can reasonably be assumed rigid. The ultimate product of the overall process is a strip of images that describes at high resolution the dynamics of the video, switching between alternative local descriptions in response to visual changes. Our approach is first tested on synthetic data, obtaining encouraging comparative results with respect to known super-resolution techniques and good robustness against noise. Second, real data coming from different videos are considered, aiming to recover the major details of the objects in motion.


2013 ◽  
Vol 457-458 ◽  
pp. 1032-1036
Author(s):  
Feng Qing Qin ◽  
Li Hong Zhu ◽  
Li Lan Cao ◽  
Wa Nan Yang

A framework is proposed to reconstruct a super resolution image from a single low resolution image with Gaussian noise. The degrading processes of Gaussian blur, down-sampling, and Gaussian noise are all considered. For the low resolution image, the Gaussian noise is reduced through the Wiener filtering algorithm. For the de-noised low resolution image, an iterative back projection algorithm is used to reconstruct a super resolution image. Experiments show that de-noising plays an important part in single-image super resolution reconstruction. In the reconstructed super resolution image, the Gaussian noise is reduced effectively and the peak signal to noise ratio (PSNR) is increased.
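The iterative back projection stage can be sketched as follows. Plain decimation and nearest-neighbour upsampling stand in for the paper's Gaussian blur and down-sampling operators, and the Wiener denoising step is omitted; these simplifications are assumptions for illustration:

```python
import numpy as np

def downsample(img, s=2):
    """Decimation, standing in for blur + down-sampling."""
    return img[::s, ::s]

def upsample(img, s=2):
    """Nearest-neighbour back-projection kernel."""
    return np.repeat(np.repeat(img, s, axis=0), s, axis=1)

def iterative_back_projection(lr, s=2, iters=10):
    """Refine an HR estimate until its simulated LR version
    matches the observed LR image."""
    hr = np.zeros((lr.shape[0] * s, lr.shape[1] * s))
    for _ in range(iters):
        err = lr - downsample(hr, s)   # residual in the LR domain
        hr += upsample(err, s)         # back-project the residual
    return hr

lr = np.array([[1.0, 2.0], [3.0, 4.0]])
hr = iterative_back_projection(lr)
residual = np.abs(lr - downsample(hr)).max()   # LR-consistency of the result
```

With these toy operators the residual vanishes after one iteration; with a real blur kernel, convergence takes multiple iterations and depends on the back-projection kernel, which is why the paper pairs IBP with denoising.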


Author(s):  
Zheng Wang ◽  
Mang Ye ◽  
Fan Yang ◽  
Xiang Bai ◽  
Shin'ichi Satoh

Person re-identification (REID) is an important task in video surveillance and forensics applications. Most previous approaches are based on a key assumption that all person images have uniform and sufficiently high resolutions. In practice, various low resolutions and scale mismatches always exist in open-world REID. We name this kind of problem Scale-Adaptive Low Resolution Person Re-identification (SALR-REID). The most intuitive way to address this problem is to increase the various low resolutions (not only low, but also of different scales) to a uniform high resolution. SRGAN is one of the most competitive image super-resolution deep networks, designed with a fixed upscaling factor. However, it is still not suitable for the SALR-REID task, which requires a network not only to synthesize high-resolution images with different upscaling factors, but also to extract discriminative image features for judging a person’s identity. (1) To promote the ability of scale-adaptive upscaling, we cascade multiple SRGANs in series. (2) To supplement the ability of image feature representation, we plug in a re-identification network. With a unified formulation, a Cascaded Super-Resolution GAN (CSR-GAN) framework is proposed. Extensive evaluations on two simulated datasets and one public dataset demonstrate the advantages of our method over related state-of-the-art methods.
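The cascading idea, reaching different total upscaling factors by chaining fixed-factor stages, can be sketched generically. Nearest-neighbour repetition stands in for a trained SRGAN generator; this illustrates only the composition, not the network itself:

```python
import numpy as np

def upscale2x(img):
    """Stand-in fixed-factor (2x) upscaler; in CSR-GAN each
    stage would be a trained SRGAN generator."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def cascade(img, stages):
    """Chain `stages` fixed 2x upscalers for a total factor
    of 2**stages, enabling scale-adaptive upscaling."""
    for _ in range(stages):
        img = upscale2x(img)
    return img

lr = np.ones((4, 4))
hr = cascade(lr, stages=3)   # three 2x stages -> 8x total upscaling
```

Selecting how many stages an input passes through is what lets a single cascaded model serve inputs of different scales.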


2018 ◽  
Vol 10 (10) ◽  
pp. 1574 ◽  
Author(s):  
Dongsheng Gao ◽  
Zhentao Hu ◽  
Renzhen Ye

Due to sensor limitations, hyperspectral images (HSIs) are acquired by hyperspectral sensors with high-spectral-resolution but low-spatial-resolution. It is difficult for sensors to acquire images with high-spatial-resolution and high-spectral-resolution simultaneously. Hyperspectral image super-resolution tries to enhance the spatial resolution of HSI by software techniques. In recent years, various methods have been proposed to fuse HSI and multispectral image (MSI) from an unmixing or a spectral dictionary perspective. However, these methods extract the spectral information from each image individually, and therefore ignore the cross-correlation between the observed HSI and MSI. It is difficult to achieve high-spatial-resolution while preserving the spatial-spectral consistency between low-resolution HSI and high-resolution HSI. In this paper, a self-dictionary regression-based method is proposed to utilize the cross-correlation between the observed HSI and MSI. Both the observed low-resolution HSI and MSI are simultaneously considered to estimate the endmember dictionary and the abundance code. To preserve the spectral consistency, the endmember dictionary is extracted by performing a common sparse basis selection on the concatenation of the observed HSI and MSI. Then, a consistent constraint is exploited to ensure the spatial consistency between the abundance code of the low-resolution HSI and the abundance code of the high-resolution HSI. Extensive experiments on three datasets demonstrate that the proposed method outperforms the state-of-the-art methods.
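Unmixing-based fusion methods like this one rest on the linear mixing model: each pixel's spectrum is a combination of endmember spectra weighted by abundances. A minimal sketch with random synthetic endmembers and abundances; the sizes and the plain least-squares recovery are illustrative assumptions, not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
E = rng.random((10, 3))        # endmember spectra: bands x endmembers
A = rng.random((3, 20))
A /= A.sum(axis=0)             # abundances sum to one per pixel
X = E @ A                      # mixed pixels under the linear mixing model

# with known endmembers, abundances follow by least squares
A_hat = np.linalg.lstsq(E, X, rcond=None)[0]
```

The paper's contribution is estimating `E` jointly from the concatenated HSI and MSI (common sparse basis selection) and constraining the LR and HR abundance codes to agree, rather than solving each image independently as above.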


Author(s):  
Xin Li ◽  
Jie Chen ◽  
Ziguan Cui ◽  
Minghu Wu ◽  
Xiuchang Zhu

Sparse representation theory has attracted much attention and has been successfully used in image super-resolution (SR) reconstruction. However, it provides only a local prior on image patches. The fields-of-experts (FoE) model offers a generic and expressive prior over the whole image. The algorithm proposed in this paper uses the FoE model as the global constraint of the SR reconstruction problem to pre-process the low-resolution image. Since a single dictionary cannot accurately represent different types of image patches, our algorithm classifies the sample patches composed of the pre-processed image and the high-resolution image, obtains sub-dictionaries by training, and adaptively selects the most appropriate sub-dictionary for reconstruction according to the pyramid histogram of oriented gradients feature of each image patch. Furthermore, in order to reduce the computational complexity, our algorithm makes use of edge detection and applies sparse-representation-based SR reconstruction only to the edge patches of the test image. Non-edge patches are directly replaced by the pre-processing results of the FoE model. Experimental results show that our algorithm can effectively guarantee the quality of the reconstructed image and reduce the computation time to a certain extent.
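The edge/non-edge routing can be sketched with a simple gradient-magnitude test; the threshold and the gradient criterion are illustrative assumptions, and the paper's actual edge detector may differ:

```python
import numpy as np

def is_edge_patch(patch, thresh=0.05):
    """Route a patch to the expensive sparse-coding SR only if its
    mean gradient magnitude is high; flat patches keep the cheap
    FoE pre-processing result."""
    gy, gx = np.gradient(patch.astype(float))
    return bool(np.hypot(gx, gy).mean() > thresh)

flat = np.ones((8, 8))                          # uniform patch
edge = np.tile([0.0] * 4 + [1.0] * 4, (8, 1))   # vertical step edge
```

Only patches flagged by such a test would enter the sparse-coding pipeline, which is where the reported computation-time savings come from.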


2017 ◽  
Vol 2017 ◽  
pp. 1-11 ◽  
Author(s):  
Xuan Zhu ◽  
Xianxian Wang ◽  
Jun Wang ◽  
Peng Jin ◽  
Li Liu ◽  
...  

Sparse representation has recently attracted enormous interest in the field of image super-resolution. Sparsity-based methods usually train a pair of global dictionaries. However, a single pair of global dictionaries cannot best sparsely represent different kinds of image patches, as it neglects two of the most important image features: edge and direction. In this paper, we propose to train two novel pairs of Direction and Edge dictionaries for super-resolution. For single-image super-resolution, the training image patches are respectively divided into two clusters by two new templates representing direction and edge features. For each cluster, a pair of Direction and Edge dictionaries is learned. Sparse coding is combined with the Direction and Edge dictionaries to realize super-resolution. This single-image super-resolution step restores faithful high-frequency details, while projection onto convex sets (POCS) is convenient for incorporating any kind of constraints or priors; therefore, we combine the two methods to realize multiframe super-resolution. Extensive experiments on image super-resolution are carried out to validate the generality, effectiveness, and robustness of the proposed method. Experimental results demonstrate that our method can recover better edge structure and details.
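The template-based clustering step can be sketched as correlating each patch against feature templates and assigning it to the best match. The toy gradient templates and patches below are assumptions for illustration; the paper's actual direction and edge templates are not specified here:

```python
import numpy as np

# two toy templates standing in for the paper's direction/edge templates
horiz = np.tile(np.linspace(-1, 1, 8), (8, 1))   # horizontal-gradient template
vert = horiz.T                                   # vertical-gradient template

def cluster(patch):
    """Assign a patch to the template it correlates with most
    strongly (after removing the patch mean)."""
    p = patch - patch.mean()
    scores = [abs((p * t).sum()) for t in (horiz, vert)]
    return int(np.argmax(scores))   # 0 = horizontal, 1 = vertical

p_h = np.tile(np.arange(8.0), (8, 1))   # patch varying horizontally
p_v = p_h.T                             # patch varying vertically
```

Each cluster then gets its own LR/HR dictionary pair, so sparse coding is done with atoms specialized to that patch type.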

