Fast Extraction Algorithm for Local Edge Features of Super-Resolution Image

2022 ◽  
Vol 2022 ◽  
pp. 1-10
Author(s):  
Feng Chen ◽  
Botao Yang

Image super-resolution is gaining popularity in diverse fields, such as medical and industrial applications, and accuracy is imperative in image super-resolution. Traditional algorithms for extracting local edge feature points from super-resolution images are based merely on edge points; when calculating the geometric center of gravity of nearby edge lines, they produce a low feature recall rate and unreliable results. To overcome this lack of accuracy in existing systems, this work proposes a new fast extraction algorithm for local edge features of super-resolution images. The paper first builds a super-resolution image reconstruction model, which is used to obtain the super-resolution image. The edge contour of the super-resolution image is then extracted based on the Chamfer distance function, and the geometric centers of gravity of the closed and non-closed edge lines are calculated. The algorithm polarizes the edge points about the center of gravity to determine the local extreme points of the amplitude-diameter curve, which in turn determine the feature points of the edges of the super-resolution image. Experimental results show that the proposed algorithm extracts the local edge features of super-resolution images in 0.02 seconds with an accuracy of up to 96.3%, making it an efficient method for local edge feature extraction from super-resolution images.
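
A minimal sketch of the centroid-polarization step described above, assuming edges are already available (here skimage's Canny detector stands in for the Chamfer-distance contour extraction); the function name, parameters, and the simple local-maximum test are illustrative and not the authors' implementation.

```python
import numpy as np
from skimage.feature import canny

def edge_feature_points(image, sigma=1.5):
    """Sketch: polarize edge points about their center of gravity and keep
    local maxima of the radius-vs-angle (amplitude-diameter) curve."""
    edges = canny(image, sigma=sigma)            # stand-in for Chamfer-based contour
    ys, xs = np.nonzero(edges)
    cy, cx = ys.mean(), xs.mean()                # geometric center of gravity
    angles = np.arctan2(ys - cy, xs - cx)        # polar angle of each edge point
    radii = np.hypot(ys - cy, xs - cx)           # polar radius of each edge point
    order = np.argsort(angles)                   # walk the contour by angle
    r = radii[order]
    # local extrema of the radius-vs-angle curve mark candidate feature points
    is_peak = (r > np.roll(r, 1)) & (r > np.roll(r, -1))
    idx = order[is_peak]
    return np.column_stack([ys[idx], xs[idx]])   # (row, col) feature coordinates
```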

Author(s):  
Dong Seon Cheng ◽  
Marco Cristani ◽  
Vittorio Murino

Image super-resolution is one of the most appealing applications of image processing, capable of retrieving a high resolution image by fusing several registered low resolution images depicting an object of interest. However, employing super-resolution on video data is challenging: a video sequence generally contains a lot of scattered information regarding several objects of interest in cluttered scenes. Especially with hand-held cameras, the overall quality may be poor due to low resolution or unsteadiness. The objective of this chapter is to demonstrate why standard image super-resolution fails on video data, which problems arise, and how we can overcome them. In our first contribution, we propose a novel Bayesian framework for super-resolution of persistent objects of interest in video sequences. We call this process Distillation. In the traditional formulation of the image super-resolution problem, the observed target is (1) always the same, (2) acquired using a camera making small movements, and (3) found in a number of low resolution images sufficient to recover high-frequency information. These assumptions are usually unsatisfied in real-world video acquisitions and often beyond the control of the video operator. With Distillation, we aim to extend and generalize the image super-resolution task, embedding it in a structured framework that accurately distills all the informative bits of an object of interest. In practice, the Distillation process: (i) identifies, in a semi-supervised way, a set of objects of interest, clustering the related video frames and registering them with respect to global rigid transformations; (ii) for each object, produces a high resolution image by weighting each pixel according to the information retrieved about that object. As a second contribution, we extend the Distillation process to deal with objects of interest whose appearance transformations are not (only) rigid. This process, built on top of Distillation, is hierarchical, in the sense that clustering is applied recursively, beginning with the analysis of whole frames and selectively focusing on smaller sub-regions whose isolated motion can reasonably be assumed to be rigid. The ultimate product of the overall process is a strip of images that describes the dynamics of the video at high resolution, switching between alternative local descriptions in response to visual changes. Our approach is first tested on synthetic data, obtaining encouraging comparative results with respect to known super-resolution techniques and good robustness against noise. Second, real data coming from different videos are considered, aiming to recover the major details of the objects in motion.
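
A highly simplified sketch of the per-pixel weighting idea behind step (ii): frames are assumed to be already clustered and registered to a common grid, and the per-pixel confidence maps (the "information retrieved about the object") are supplied by the caller. Everything here, including the bicubic upsampling via scipy, is an illustrative stand-in for the Bayesian formulation, not the authors' model.

```python
import numpy as np
from scipy.ndimage import zoom

def distill(frames, weights, scale=2):
    """Fuse registered low-resolution frames into one high-resolution image
    by per-pixel weighted averaging (illustrative stand-in for Distillation)."""
    num, den = 0.0, 0.0
    for lr, w in zip(frames, weights):
        hr = zoom(lr, scale, order=3)        # lift each frame to the HR grid
        wh = zoom(w, scale, order=1)         # lift its confidence map as well
        num = num + wh * hr
        den = den + wh
    return num / np.maximum(den, 1e-8)       # weighted average, guarding zeros
```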


2013 ◽  
Vol 457-458 ◽  
pp. 1032-1036
Author(s):  
Feng Qing Qin ◽  
Li Hong Zhu ◽  
Li Lan Cao ◽  
Wa Nan Yang

A framework is proposed to reconstruct a super-resolution image from a single low-resolution image with Gaussian noise. The degrading processes of Gaussian blur, down-sampling, and Gaussian noise are all considered. For the low-resolution image, the Gaussian noise is reduced through a Wiener filtering algorithm. For the de-noised low-resolution image, an iterative back-projection algorithm is used to reconstruct a super-resolution image. Experiments show that de-noising plays an important part in single-image super-resolution reconstruction. In the reconstructed super-resolution image, the Gaussian noise is reduced effectively and the peak signal-to-noise ratio (PSNR) is increased.
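
The two-stage pipeline can be sketched as follows: Wiener denoising of the low-resolution input, then iterative back-projection under an assumed Gaussian-blur plus decimation model. The blur width, scale factor, step size, and iteration count are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.signal import wiener
from scipy.ndimage import gaussian_filter, zoom

def sr_wiener_ibp(lr, scale=2, sigma=1.0, iters=30, step=1.0):
    """Denoise with a Wiener filter, then refine an upsampled estimate by
    iterative back-projection with a Gaussian-blur + downsampling model."""
    lr = wiener(lr, mysize=3)                        # suppress Gaussian noise
    hr = zoom(lr, scale, order=3)                    # initial HR estimate (bicubic)
    for _ in range(iters):
        simulated = zoom(gaussian_filter(hr, sigma), 1.0 / scale, order=3)
        error = lr - simulated                       # mismatch with the observation
        hr = hr + step * gaussian_filter(zoom(error, scale, order=3), sigma)
    return hr
```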


Author(s):  
Xin Li ◽  
Jie Chen ◽  
Ziguan Cui ◽  
Minghu Wu ◽  
Xiuchang Zhu

Sparse representation theory has attracted much attention and has been successfully used in image super-resolution (SR) reconstruction. However, it provides only a local prior over image patches. The Field of Experts (FoE) model offers a generic and expressive prior over the whole image. The algorithm proposed in this paper uses the FoE model as the global constraint of the SR reconstruction problem to pre-process the low-resolution image. Since a single dictionary cannot accurately represent different types of image patches, our algorithm classifies the sample patches composed of the pre-processed image and the high-resolution image, obtains sub-dictionaries by training, and adaptively selects the most appropriate sub-dictionary for reconstruction according to the pyramid histogram of oriented gradients (PHOG) feature of each image patch. Furthermore, to reduce the computational complexity, our algorithm makes use of edge detection and applies sparse-representation-based SR reconstruction only to the edge patches of the test image. Non-edge patches are directly replaced by the pre-processing results of the FoE model. Experimental results show that our algorithm can effectively guarantee the quality of the reconstructed image and reduce the computation time to a certain extent.
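
A minimal sketch of the edge-driven dispatch described above: patches that overlap detected edges go through the sparse-coding SR path, the rest keep the FoE pre-processed values. The Canny detector, the patch size, the edge-ratio test, and the sparse_sr callback are placeholders for the components the paper actually uses (FoE pre-processing and PHOG-selected sub-dictionaries).

```python
import numpy as np
from skimage.feature import canny

def selective_sr(foe_image, sparse_sr, patch=8, edge_ratio=0.05):
    """Apply sparse-representation SR only to edge patches; non-edge patches
    are kept from the FoE-preprocessed image (illustrative dispatch only)."""
    edges = canny(foe_image, sigma=1.0)
    out = foe_image.copy()
    h, w = foe_image.shape
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            block = edges[y:y + patch, x:x + patch]
            if block.mean() > edge_ratio:            # enough edge pixels: refine it
                out[y:y + patch, x:x + patch] = sparse_sr(
                    foe_image[y:y + patch, x:x + patch])
    return out
```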


2017 ◽  
Vol 2017 ◽  
pp. 1-11 ◽  
Author(s):  
Xuan Zhu ◽  
Xianxian Wang ◽  
Jun Wang ◽  
Peng Jin ◽  
Li Liu ◽  
...  

Sparse representation has recently attracted enormous interest in the field of image super-resolution. Sparsity-based methods usually train a single pair of global dictionaries. However, one pair of global dictionaries cannot sparsely represent all kinds of image patches well, as it neglects two of the most important image features: edge and direction. In this paper, we propose to train two novel pairs of Direction and Edge dictionaries for super-resolution. For single-image super-resolution, the training image patches are divided into two clusters by two new templates representing direction and edge features, and a pair of Direction and Edge dictionaries is learned for each cluster. Sparse coding is combined with the Direction and Edge dictionaries to realize super-resolution. This single-image super-resolution scheme restores faithful high-frequency details, while POCS (projection onto convex sets) is convenient for incorporating any kind of constraints or priors; we therefore combine the two methods to realize multiframe super-resolution. Extensive experiments on image super-resolution are carried out to validate the generality, effectiveness, and robustness of the proposed method. Experimental results demonstrate that our method recovers better edge structure and details.
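
A sketch of the per-patch dictionary selection and synthesis, assuming the two dictionary pairs have already been trained. The crude gradient-spread test that stands in for the paper's direction/edge templates, the dictionary names, and the OMP-based sparse coding are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def reconstruct_patch(lr_patch, dicts, k=3):
    """Pick the Direction or Edge dictionary pair for a patch, sparse-code the
    LR patch over D_l, and synthesize the HR patch from D_h (illustrative)."""
    gy, gx = np.gradient(lr_patch.astype(float))
    angles = np.arctan2(gy, gx).ravel()
    # crude cluster test: tightly aligned gradients -> "direction", else "edge"
    key = 'direction' if np.std(angles) < 1.0 else 'edge'
    D_l, D_h = dicts[key]                              # (n_lr, n_atoms), (n_hr, n_atoms)
    alpha = orthogonal_mp(D_l, lr_patch.ravel(), n_nonzero_coefs=k)
    hr = D_h @ alpha                                   # shared sparse code on D_h
    side = int(np.sqrt(hr.size))
    return hr.reshape(side, side)
```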


Electronics ◽  
2019 ◽  
Vol 8 (3) ◽  
pp. 339
Author(s):  
Yan Liu ◽  
Guangrui Zhang ◽  
Hai Wang ◽  
Wei Zhao ◽  
Min Zhang ◽  
...  

In this paper, we propose an efficient multibranch residual network for single-image super-resolution. Based on the idea of aggregated transformations, the split-transform-merge strategy is exploited to implement the multibranch architecture in an easy, extensible way. By this means, both the number of parameters and the time complexity are significantly reduced. In addition, to ensure high-performance super-resolution reconstruction, the residual block is modified and simplified with reference to the enhanced deep super-resolution network (EDSR) model. Moreover, the proposed method is flexible and extensible, which helps to tailor a specific network to practical demands. Experimental results on both the Diverse 2K (DIV2K) and other standard datasets show that the proposed method achieves good performance in comparison with EDSR under the same number of convolution layers.
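
A minimal PyTorch sketch of a split-transform-merge residual block in the spirit described above, with the parallel branches realized compactly as grouped convolutions. The channel count, group count, and EDSR-style residual scaling are illustrative choices, not the paper's configuration.

```python
import torch
import torch.nn as nn

class MultiBranchResBlock(nn.Module):
    """Split-transform-merge residual block: a grouped 3x3 convolution acts as
    the parallel branches, as in aggregated-transformation designs."""
    def __init__(self, channels=64, groups=8, res_scale=0.1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, groups=groups),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, groups=groups),
        )
        self.res_scale = res_scale                   # EDSR-style residual scaling

    def forward(self, x):
        return x + self.res_scale * self.body(x)     # merge branches via the skip
```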


2021 ◽  
Vol 11 (3) ◽  
pp. 1092
Author(s):  
Seonjae Kim ◽  
Dongsan Jun ◽  
Byung-Gyu Kim ◽  
Hunjoo Lee ◽  
Eunjun Rhee

Many studies in the area of super-resolution seek to enhance a low-resolution image into a high-resolution one. As deep learning technologies have recently shown impressive results in image interpolation and restoration, recent studies focus on convolutional neural network (CNN)-based super-resolution schemes to surpass conventional pixel-wise interpolation methods. In this paper, we propose two lightweight neural networks with a hybrid residual and dense connection structure to improve super-resolution performance. To design the proposed networks, we extracted training images from the DIVerse 2K (DIV2K) image dataset and investigated the trade-off between quality enhancement performance and network complexity under the proposed methods. The experimental results show that the proposed methods can significantly reduce both the inference time and the memory required to store parameters and intermediate feature maps, while maintaining image quality similar to that of previous methods.
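
One possible form of a hybrid residual-dense block of the kind described, sketched in PyTorch; the layer widths, growth rate, and 1x1 fusion convolution are assumptions for illustration rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    """Hybrid block: dense connections concatenate earlier features, a 1x1
    convolution fuses them, and a residual skip adds the block input back."""
    def __init__(self, channels=32, growth=16, layers=3):
        super().__init__()
        self.convs = nn.ModuleList()
        for i in range(layers):
            self.convs.append(nn.Sequential(
                nn.Conv2d(channels + i * growth, growth, 3, padding=1),
                nn.ReLU(inplace=True)))
        self.fuse = nn.Conv2d(channels + layers * growth, channels, 1)

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))   # dense connections
        return x + self.fuse(torch.cat(feats, dim=1))     # residual fusion
```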


2014 ◽  
Vol 568-570 ◽  
pp. 652-655 ◽  
Author(s):  
Zhao Li ◽  
Le Wang ◽  
Tao Yu ◽  
Bing Liang Hu

This paper presents a novel method for solving single-image super-resolution problems based upon low-rank representation (LRR). Given a set of low-resolution image patches, LRR seeks the lowest-rank representation among all the candidates that express every patch as a linear combination of the patches in a low-resolution dictionary. By jointly training two dictionaries for the low-resolution and high-resolution images, we can enforce the similarity of LRRs between a low-resolution and high-resolution image pair with respect to their own dictionaries. Therefore, the LRR of a low-resolution image can be applied with the high-resolution dictionary to generate a high-resolution image. Unlike the well-known sparse representation, which computes the sparsest representation of each image patch individually, LRR aims at finding the lowest-rank representation of a collection of patches jointly, and thus better captures the global structure of the image. Experiments show that our method gives good results both visually and quantitatively.
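
A sketch of the coupled-dictionary mapping only: the joint representation Z of a collection of LR patches is computed here by a plain least-squares solve as a stand-in (the actual method solves a lowest-rank program for Z), and the HR patches are then synthesized as D_h Z. The dictionary shapes and names are assumptions.

```python
import numpy as np

def lrr_sr_patches(lr_patches, D_l, D_h):
    """Coupled-dictionary synthesis: represent all LR patches jointly over D_l
    and reuse the representation with D_h. The least-squares solve below is a
    stand-in for the lowest-rank representation used in the paper."""
    X = np.stack([p.ravel() for p in lr_patches], axis=1)    # (n_lr_dim, n_patches)
    Z, *_ = np.linalg.lstsq(D_l, X, rcond=None)               # joint representation
    Y = D_h @ Z                                                # (n_hr_dim, n_patches)
    side = int(np.sqrt(D_h.shape[0]))
    return [Y[:, i].reshape(side, side) for i in range(Y.shape[1])]
```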


Author(s):  
S. E. EL-KHAMY ◽  
M. M. HADHOUD ◽  
M. I. DESSOUKY ◽  
B. M. SALAM ◽  
F. E. ABD EL-SAMIE

This paper presents a wavelet-based, computationally efficient implementation of the Linear Minimum Mean Square Error (LMMSE) algorithm for image super-resolution. The image super-resolution reconstruction problem is well known to be an ill-posed inverse problem of large dimensions. The LMMSE estimator applied to the image super-resolution reconstruction problem requires the inversion of a matrix of very large dimensions, which is practically impossible. Our suggested implementation breaks the problem into four consecutive steps: a registration step, a multi-channel LMMSE restoration step, a wavelet-based image fusion step, and an LMMSE image interpolation step. The objective of the wavelet fusion step is to integrate the data obtained from each observation into a single image, which is then interpolated to give a high-resolution image. The paper explains the implementation of each step. The proposed implementation succeeds in obtaining a high-resolution image from multiple degraded observations with a high PSNR, and its computation time is small compared to traditional iterative image super-resolution algorithms.
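
The wavelet fusion step can be sketched with PyWavelets as below: approximation coefficients of the registered, restored observations are averaged, while detail coefficients are fused by a maximum-absolute-value rule. The Haar wavelet and the max-abs fusion rule are illustrative choices, not necessarily those of the paper.

```python
import numpy as np
import pywt

def wavelet_fuse(images, wavelet='haar'):
    """Fuse several registered, restored observations into one image by
    averaging approximation coefficients and keeping the strongest details."""
    coeffs = [pywt.dwt2(img, wavelet) for img in images]
    cA = np.mean([c[0] for c in coeffs], axis=0)          # average low-pass band
    details = []
    for band in range(3):                                 # cH, cV, cD bands
        stacked = np.stack([c[1][band] for c in coeffs])
        idx = np.argmax(np.abs(stacked), axis=0)          # strongest detail wins
        details.append(np.take_along_axis(stacked, idx[None], axis=0)[0])
    return pywt.idwt2((cA, tuple(details)), wavelet)
```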


2020 ◽  
Vol 49 (1) ◽  
pp. 179-190
Author(s):  
Bin Zhou ◽  
Dong-jun Ye ◽  
Wei Wei ◽  
Marcin Wozniak

Image reconstruction is important in computer vision, and many techniques have been presented to achieve better results. In this paper, gradient information is introduced to define new convex sets, and a novel POCS-based model is proposed for super-resolution reconstruction. Projection onto the convex sets alternates between the gray-value field and the gradient field. Local noise estimation is then introduced to determine the threshold adaptively. The efficiency of the proposed model is verified by several numerical experiments; experimental results show that both the PSNR and the SSIM are significantly improved by the proposed model.
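
A simplified sketch of one alternating-projection sweep of the kind described: the HR estimate is first projected onto gray-value data-consistency sets (each LR pixel constrains the mean of its HR block to within an adaptive threshold delta, e.g. derived from local noise estimation), then neighbouring-pixel differences are bounded as a simple stand-in for the gradient-field convex sets. The block-average observation model and the difference-bound rule are illustrative assumptions, not the paper's exact sets.

```python
import numpy as np

def pocs_sweep(hr, lr, scale, delta, grad_max):
    """One alternating-projection sweep: enforce |simulated LR - observed LR|
    <= delta per pixel, then bound horizontal pixel differences by grad_max."""
    hr = hr.astype(float)
    # gray-value sets: each LR pixel constrains the mean of its HR block
    for i in range(lr.shape[0]):
        for j in range(lr.shape[1]):
            block = hr[i*scale:(i+1)*scale, j*scale:(j+1)*scale]
            r = lr[i, j] - block.mean()
            if abs(r) > delta:                          # outside the set:
                block += r - np.sign(r) * delta         # step onto its boundary
    # gradient sets (stand-in): horizontal differences limited to grad_max
    d = np.diff(hr, axis=1)
    excess = np.clip(np.abs(d) - grad_max, 0, None) * np.sign(d)
    hr[:, :-1] += excess / 2                            # split the correction
    hr[:, 1:] -= excess / 2                             # between both pixels
    return hr
```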

