Joint Image Deblurring and Matching with Blurred Invariant-Based Sparse Representation Prior

Complexity ◽  
2019 ◽  
Vol 2019 ◽  
pp. 1-12 ◽  
Author(s):  
Yuanjie Shao ◽  
Nong Sang ◽  
Juncai Peng ◽  
Changxin Gao

Image matching is important for vision-based navigation. However, most image matching approaches do not consider real-world degradations such as image blur, so their performance often decreases greatly. Recent methods address this problem with a two-stage framework, first performing image deblurring and then image matching; this is effective but depends heavily on the quality of the deblurring. An emerging way to resolve this dilemma is to perform image deblurring and matching jointly, using a sparse representation prior to exploit the correlation between the two tasks. However, these approaches obtain the sparse representation prior in the original pixel space, which does not adequately account for image blurring and may therefore lead to an inaccurate prior. Fortunately, blur-invariant pseudo-Zernike moments can be extracted from images, yielding a reliable sparse representation prior in the blur-invariant space. Motivated by this observation, we propose a joint image deblurring and matching method with a blurred invariant-based sparse representation prior (JDM-BISR), which obtains the prior in the robust blur-invariant space rather than the original pixel space and thus effectively improves both the quality of image deblurring and the accuracy of image matching. Moreover, since the dimension of the pseudo-Zernike moment is much lower than that of the original image feature, our model also increases computational efficiency. Extensive experimental results demonstrate that the proposed method performs favorably against state-of-the-art blurred image matching approaches.
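The sparse representation prior at the heart of this line of work can be illustrated generically. The sketch below (not the authors' implementation) computes a k-sparse code over a dictionary with orthogonal matching pursuit in NumPy; the blur-invariant pseudo-Zernike features are assumed to be precomputed and are stood in for by a synthetic feature vector.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: find a k-sparse code x with y ~ D @ x.

    D: (m, n) dictionary with unit-norm columns (atoms).
    y: (m,) feature vector (e.g. a blur-invariant descriptor).
    """
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Re-fit the coefficients on the chosen support by least squares.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

# Toy demo: recover a 2-sparse code over a random unit-norm dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((32, 64))
D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(64)
x_true[[5, 40]] = [1.5, -2.0]
y = D @ x_true
x_hat = omp(D, y, k=2)
```

A joint deblurring-and-matching model would alternate between such a sparse-coding step and a kernel/matching update; only the coding step is sketched here.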

2018 ◽  
Vol 32 (34n36) ◽  
pp. 1840087 ◽  
Author(s):  
Qiwei Chen ◽  
Yiming Wang

A blind image deblurring algorithm based on the relative gradient and sparse representation is proposed in this paper. The layered method restores the image in three steps: edge extraction, blur kernel estimation, and image reconstruction. The positive and negative gradients in the texture part undergo reversal changes, whereas the edge part, which reflects the image structure, exhibits only a single gradient change. Based on this characteristic, the edges of the image are extracted using the relative gradient, and the blur kernel is then estimated from them. In the reconstruction stage, to cope with the large size of the image and of the overcomplete dictionary matrix, the image is divided into small blocks. An overcomplete dictionary is used for sparse representation, and the image is reconstructed by the iterative threshold shrinkage method. Experimental results show that the proposed method effectively improves the quality of image restoration.
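The reconstruction step described above, sparse coding over an overcomplete dictionary by iterative threshold shrinkage, can be sketched as a plain ISTA loop. This is a generic illustration on synthetic data, not the paper's code; the dictionary, patch, and regularization weight are all assumed.

```python
import numpy as np

def soft_threshold(v, t):
    # Elementwise soft-thresholding: the proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(D, y, lam, n_iter=500):
    """Minimize 0.5*||D x - y||^2 + lam*||x||_1 by iterative shrinkage.

    D: (m, n) overcomplete dictionary; y: (m,) observed patch.
    """
    step = 1.0 / np.linalg.norm(D, 2) ** 2  # 1/L, L = Lipschitz const of gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        # Gradient step on the data term, then shrinkage on the l1 term.
        x = soft_threshold(x - step * D.T @ (D @ x - y), lam * step)
    return x

# Toy demo: a noisy patch with a 2-sparse code is recovered.
rng = np.random.default_rng(1)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(50)
x_true[[3, 27]] = [2.0, -1.0]
y = D @ x_true + 0.01 * rng.standard_normal(20)
x_hat = ista(D, y, lam=0.05)
```

In block-wise restoration, this loop would run once per image block, and the deblurred image would be reassembled from `D @ x_hat` for each block.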


2021 ◽  
Vol 11 (15) ◽  
pp. 6917
Author(s):  
Yogendra Rao Musunuri ◽  
Oh-Seol Kwon

A novel strategy is proposed to address the block artifacts of the conventional dark channel prior (DCP). The DCP estimates the transmission map with patch-based processing, which also causes image blurring. To enhance a degraded image, the proposed single-image dehazing technique restores the blurred image with a refined DCP based on a hidden Markov random field. The proposed algorithm therefore estimates a refined transmission map that reduces block artifacts and improves image clarity without explicit guided filters. Experiments were performed on remote-sensing images; the results confirm that the proposed algorithm is superior to conventional image haze removal approaches. Moreover, the proposed algorithm is suitable for image matching based on local feature extraction.
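For reference, the conventional patch-based DCP that this paper refines can be sketched as follows; the hidden-Markov-random-field refinement itself is not reproduced. The patch minimum below is precisely the source of the block artifacts the abstract discusses, since one dark value is shared across a whole neighbourhood.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def dark_channel(img, patch=15):
    """Per-pixel minimum over colour channels and a patch neighbourhood.

    img: (H, W, 3) float array in [0, 1].
    """
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    windows = sliding_window_view(padded, (patch, patch))
    return windows.min(axis=(2, 3))

def transmission(img, A, omega=0.95, patch=15):
    # Standard DCP estimate: t(x) = 1 - omega * dark_channel(I(x) / A).
    return 1.0 - omega * dark_channel(img / A, patch)

# Sanity check: a haze-free saturated region has a dark channel near 0,
# so its estimated transmission is near 1.
clear = np.zeros((32, 32, 3))
clear[..., 0] = 0.8                    # saturated red patch, green/blue = 0
A = np.array([0.9, 0.9, 0.9])          # assumed atmospheric light
t = transmission(clear, A)
```

The `omega` and `patch` values above are the common defaults from the DCP literature, not values taken from this paper.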


2021 ◽  
Author(s):  
Jun Yang ◽  
Zihao Liu ◽  
Li Chen ◽  
Ying Wu ◽  
Chen Cui ◽  
...  

Abstract Halftone images are widely used in printing and scanning equipment, so their preservation and processing are of great significance. However, because display devices differ in resolution, the processing and display of halftone images face great challenges, such as Moiré patterns and image blurring. Inverse halftoning is therefore required to remove the halftone screen. In this paper, we propose a sparse-representation-based inverse halftoning algorithm that learns a clean dictionary, realized in two steps: deconvolution, and sparse optimization in the transform domain to remove the noise. The main contributions of this paper are threefold: first, we analyze the denoising effect of different training sets and the redundancy of the dictionary; second, we propose an improved sparse-representation-based denoising algorithm that adaptively learns the dictionary, iteratively removing the noise of the training set and upgrading the quality of the dictionary; third, we propose an inverse halftoning algorithm for error-diffusion halftone images. Finally, we verify that the noise level in the error-diffusion linear model is fixed and depends only on the diffusion operator. Experimental results show that the proposed algorithm achieves better PSNR and visual performance than state-of-the-art methods.
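The forward process that inverse halftoning undoes can be sketched with the classic Floyd-Steinberg kernel, one common error-diffusion operator (the abstract does not specify which operator the authors model, so this choice is an assumption):

```python
import numpy as np

def floyd_steinberg(img):
    """Error-diffusion halftoning with the Floyd-Steinberg kernel.

    img: (H, W) grayscale in [0, 1]; returns a binary {0, 1} halftone.
    """
    f = img.astype(float).copy()
    H, W = f.shape
    out = np.zeros_like(f)
    for y in range(H):
        for x in range(W):
            old = f[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            # Diffuse the quantization error to unprocessed neighbours.
            if x + 1 < W:
                f[y, x + 1] += err * 7 / 16
            if y + 1 < H:
                if x > 0:
                    f[y + 1, x - 1] += err * 3 / 16
                f[y + 1, x] += err * 5 / 16
                if x + 1 < W:
                    f[y + 1, x + 1] += err * 1 / 16
    return out

# A flat mid-gray halftones to a roughly 50% on/off pattern:
# error diffusion approximately preserves the local mean.
gray = np.full((64, 64), 0.5)
ht = floyd_steinberg(gray)
```

Viewing this pipeline as a linear model plus quantization noise is what motivates the deconvolution-plus-denoising decomposition described in the abstract.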


2013 ◽  
Vol 401-403 ◽  
pp. 1315-1318
Author(s):  
Bao Shu Li ◽  
Wen Li Wei ◽  
Ke Bin Cui ◽  
Xue Tao Xu

Owing to limitations of the shooting environment, captured images suffer from blurring and noise. This paper proposes an improved maximum-entropy method to restore blurred images acquired from the air. Finally, the quality of the processed image is evaluated according to first-order Markov theory; the results show that, compared with conventional approaches, the maximum-entropy restoration method increases image clarity and better preserves details.
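A first-order Markov image model scores how strongly each pixel depends on its neighbour; one simple proxy for this, sketched below, is the correlation coefficient between horizontally adjacent pixels. This is an assumed, minimal version of such a metric, not necessarily the paper's exact evaluation procedure.

```python
import numpy as np

def adjacent_correlation(img):
    """Correlation between horizontally adjacent pixels.

    Under a first-order Markov model of natural images, a clearer,
    better-restored image shows stronger adjacent-pixel correlation
    than a noisy one.
    """
    a = img[:, :-1].ravel().astype(float)
    b = img[:, 1:].ravel().astype(float)
    return np.corrcoef(a, b)[0, 1]

# Demo: a smooth (row-wise random walk) image vs. pure noise.
rng = np.random.default_rng(2)
smooth = np.cumsum(rng.standard_normal((64, 64)), axis=1)  # highly correlated
noise = rng.standard_normal((64, 64))                      # uncorrelated
c_smooth = adjacent_correlation(smooth)
c_noise = adjacent_correlation(noise)
```

A restored image would be expected to move this score toward the smooth end relative to its degraded input.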


Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5312
Author(s):  
Yanni Zhang ◽  
Yiming Liu ◽  
Qiang Li ◽  
Jianzhong Wang ◽  
Miao Qi ◽  
...  

Recently, deep learning-based image deblurring and deraining have been well developed. However, most of these methods fail to distill the useful features. Moreover, exploiting detailed image features in a deep learning framework usually requires a large number of parameters, which inevitably burdens the network computationally. To solve these problems, we propose a lightweight fusion distillation network (LFDN) for image deblurring and deraining. The proposed LFDN is designed as an encoder-decoder architecture. In the encoding stage, the image feature is reduced to various small-scale spaces for multi-scale information extraction and fusion without much information loss. A feature distillation normalization block is then placed at the beginning of the decoding stage, which enables the network to continuously distill and screen valuable channel information from the feature maps. In addition, an information fusion strategy between the distillation modules and the feature channels is implemented via an attention mechanism. By fusing different information in this way, our network achieves state-of-the-art image deblurring and deraining results with fewer parameters and outperforms existing methods in model complexity.
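The channel screening described above can be illustrated with a squeeze-and-excitation style gate, one common form of channel attention. This is a generic NumPy sketch with made-up weights, not the LFDN's actual distillation block.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(feat, W1, W2):
    """Squeeze-and-excitation style channel gating.

    feat: (C, H, W) feature map; W1: (C//r, C), W2: (C, C//r)
    with reduction ratio r. Each channel is reweighted by a
    learned gate in (0, 1), screening out low-value channels.
    """
    squeeze = feat.mean(axis=(1, 2))                     # global average pool -> (C,)
    gate = sigmoid(W2 @ np.maximum(W1 @ squeeze, 0.0))   # two-layer MLP + sigmoid
    return feat * gate[:, None, None]                    # per-channel reweighting

# Demo with random weights; in a trained network W1/W2 are learned.
rng = np.random.default_rng(3)
C, r = 16, 4
feat = rng.standard_normal((C, 8, 8))
W1 = rng.standard_normal((C // r, C)) * 0.1
W2 = rng.standard_normal((C, C // r)) * 0.1
out = channel_attention(feat, W1, W2)
```

Because the gate lies in (0, 1), the block can only attenuate channels, never amplify them; fusing several such gated branches is one cheap way to combine multi-scale information.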


2020 ◽  
Vol 102 ◽  
pp. 102736 ◽  
Author(s):  
Zhenhua Xu ◽  
Huasong Chen ◽  
Zhenhua Li

2021 ◽  
Vol 5 (4) ◽  
pp. 783-793
Author(s):  
Muhammad Muttabi Hudaya ◽  
Siti Saadah ◽  
Hendy Irawan

needs a solid validation process that verifies and matches uploaded images. To solve this problem, this paper implements a detection model using Faster R-CNN and a matching method using ORB (Oriented FAST and Rotated BRIEF) with KNN-BFM (K-Nearest Neighbor Brute-Force Matcher). The goal of the implementation is to reach the 80% accuracy mark and to show that matching with ORB alone can replace the OCR technique. The detection model reaches a mean average precision (mAP) of 94%, but the matching process achieves an accuracy of only 43.46%. Matching on image features alone underperforms the previous OCR technique, but it improves the processing time from 4510 ms to 60 ms. Image matching accuracy has been shown to increase when a high-quality, high-quantity dataset is used and features are extracted from the important areas of EKTP card images.
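The KNN brute-force matching stage can be sketched without OpenCV: binary ORB descriptors are compared by Hamming distance, and Lowe's ratio test filters ambiguous 2-NN matches. The descriptors below are synthetic uint8 arrays standing in for real ORB output, and the 0.75 ratio is a conventional default, not a value from the paper.

```python
import numpy as np

def hamming(a, b):
    # Bitwise Hamming distance between two uint8 descriptor arrays.
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

def knn_bf_match(desc1, desc2, ratio=0.75):
    """Brute-force 2-NN matching with Lowe's ratio test.

    desc1, desc2: (N, 32) and (M, 32) uint8 arrays (256-bit descriptors,
    the size ORB produces). Returns (i, j) pairs passing the ratio test.
    """
    matches = []
    for i, d in enumerate(desc1):
        dists = [hamming(d, e) for e in desc2]
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # Keep the match only if it is clearly better than the runner-up.
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

# Demo: the first three query descriptors appear verbatim in the database,
# so they match unambiguously and survive the ratio test.
rng = np.random.default_rng(4)
desc2 = rng.integers(0, 256, size=(10, 32), dtype=np.uint8)
desc1 = desc2[:3].copy()
matches = knn_bf_match(desc1, desc2)
```

With real images, `desc1`/`desc2` would come from an ORB detector run on the uploaded card photo and the reference image, respectively.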


2018 ◽  
Vol 35 (10) ◽  
pp. 1373-1391 ◽  
Author(s):  
Bahman Sadeghi ◽  
Kamal Jamshidi ◽  
Abbas Vafaei ◽  
S. Amirhassan Monadjemi

Author(s):  
M. Alqurashi ◽  
J. Wang

In UAV mapping using direct geo-referencing, the stochastic model generally takes into account the different types of measurements required to estimate the 3D coordinates of the feature points: image tie-point coordinate measurements, camera position measurements, and camera orientation measurements. In the commonly used stochastic model, all tie-point measurements are assumed to have the same variance. In fact, these assumptions are not always realistic and can thus lead to biased 3D feature coordinates. Tie-point measurements for different image features may not have the same accuracy, because the geometric distribution of the features and, in particular, their matching conditions differ. More importantly, the accuracies of the geo-referencing measurements should also be propagated into the mapping process. In this paper, the impacts of typical stochastic models on UAV mapping are investigated. It is demonstrated that the quality of the geo-referencing measurements plays a critical role in real-time UAV mapping scenarios.
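The stochastic-model question above reduces to weighted least squares: instead of one shared variance, each measurement carries its own a priori standard deviation. A minimal sketch with hypothetical numbers shows how an equal-variance assumption lets a poor measurement bias the estimate.

```python
import numpy as np

def weighted_ls(A, b, sigmas):
    """Weighted least squares x = (A^T W A)^{-1} A^T W b, W = diag(1/sigma^2).

    Measurements with larger a priori standard deviations are
    down-weighted instead of all sharing one variance.
    """
    W = np.diag(1.0 / np.asarray(sigmas, float) ** 2)
    N = A.T @ W @ A                 # normal matrix
    return np.linalg.solve(N, A.T @ W @ b)

# Toy 1D example: estimate one coordinate from four measurements,
# the last of which is poor. Equal weights pull the estimate toward
# the outlier; realistic weights suppress it.
A = np.ones((4, 1))
b = np.array([10.0, 10.2, 9.9, 14.0])
sigmas_equal = [1.0, 1.0, 1.0, 1.0]
sigmas_real = [0.1, 0.1, 0.1, 5.0]
x_equal = weighted_ls(A, b, sigmas_equal)[0]   # plain mean, ~11.0
x_real = weighted_ls(A, b, sigmas_real)[0]     # outlier down-weighted, ~10.0
```

In a bundle-adjustment setting, `A` would be the design matrix linearized around the tie points and geo-referencing observations, and the sigmas would come from the per-feature matching quality and sensor specifications.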

