Image priors
Recently Published Documents

TOTAL DOCUMENTS: 52 (five years: 17)
H-INDEX: 11 (five years: 2)

Author(s): Pooja S.*, Mallikarjunaswamy S., Sharmila N., et al.

Image deblurring is a challenging ill-posed problem with widespread applications. Most existing deblurring methods make use of image priors or priors on the PSF to achieve accurate results. The performance of these methods depends on various factors: dark image priors require well-lit capture conditions, while statistical image priors assume that the image follows a certain distribution, an assumption that might not be fully accurate. The same holds for statistical priors placed on the blur kernel. The aim of this paper is to propose a novel image deblurring method, readily extendable to various applications, that effectively deblurs the image irrespective of the factors affecting its capture. A hybrid regularization method is proposed which combines a TV regularization framework with varying sparsity-inducing priors. The edges of the image are accurately recovered due to the TV regularization. The sparsity prior is implemented through a dictionary such that varying degrees of sparsity are induced in different image regions. This helps smooth the unwanted artifacts that blur generates in the uniform regions of the image.
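To make the hybrid-regularization idea concrete, the sketch below sets up a generic deblurring objective with a TV term and a region-weighted L1 penalty, solved by plain gradient descent plus soft-thresholding. It uses a pixel-domain L1 surrogate instead of the paper's learned dictionary, and the function name, step size, and weights are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch, assuming NumPy/SciPy, of a TV + region-weighted sparsity
# deblurring objective:  ||k * x - y||^2 + lam_tv * TV(x) + lam_s * w(r) * ||x||_1.
# The pixel-domain L1 term stands in for the paper's dictionary-based prior.
import numpy as np
from scipy.signal import fftconvolve

def deblur_tv_weighted(y, psf, lam_tv=1e-2, lam_sparse=1e-3,
                       region_weight=None, step=0.5, n_iter=200):
    x = y.copy()
    if region_weight is None:
        region_weight = np.ones_like(y)              # uniform sparsity weighting
    psf_flip = psf[::-1, ::-1]                       # adjoint of the blur operator
    for _ in range(n_iter):
        resid = fftconvolve(x, psf, mode='same') - y
        grad_data = fftconvolve(resid, psf_flip, mode='same')
        # smoothed isotropic TV gradient (forward differences, replicated border)
        gx = np.diff(x, axis=1, append=x[:, -1:])
        gy = np.diff(x, axis=0, append=x[-1:, :])
        mag = np.sqrt(gx ** 2 + gy ** 2 + 1e-8)
        tv_grad = -(np.diff(gx / mag, axis=1, prepend=(gx / mag)[:, :1])
                    + np.diff(gy / mag, axis=0, prepend=(gy / mag)[:, :1]))
        x = x - step * (grad_data + lam_tv * tv_grad)
        # region-weighted soft-thresholding: stronger sparsity (more smoothing)
        # in uniform regions, weaker sparsity near edges
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam_sparse * region_weight, 0.0)
    return x
```

In this reading, region_weight would be large in flat regions, where it suppresses blur-induced artifacts, and small near edges so that the TV term can preserve them.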


Author(s): Feng Jiang, ZhiYuan Chen, Amril Nazir, WuZhen Shi, WeiXiang Lim, et al.

2021, Vol 11 (1)
Author(s): Farhad Niknam, Hamed Qazvini, Hamid Latifi

Image reconstruction using minimal measured information has been a long-standing open problem in many computational imaging approaches, in particular in-line holography. Many solutions are based on compressive sensing (CS) techniques with handcrafted image priors or on supervised deep neural networks (DNNs). However, the limited performance of CS methods, due to the lack of information about the image priors, and the enormous amount of per-sample-type training resources required by DNNs have posed new challenges beyond the primary problem. In this study, we propose a single-shot lensless in-line holographic reconstruction method using an untrained deep neural network incorporated into a physical image formation algorithm. We demonstrate that, by modifying a deep decoder network with simple regularizers, a Gabor hologram can be inversely reconstructed via a minimization process constrained by a deep image prior. The resulting model accurately recovers the phase and amplitude images without any training dataset, excess measurements, or specific assumptions about the object’s or the measurement’s characteristics.
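As a rough illustration of this deep-image-prior approach to in-line holography, the sketch below fits an untrained decoder to a single Gabor hologram through an angular-spectrum forward model. The network `net`, the TV weight, and the propagation parameters are assumptions for illustration, not the authors' released implementation.

```python
# Sketch, assuming PyTorch: fit an untrained 2-channel decoder (amplitude, phase)
# so that the simulated hologram intensity matches the single measured hologram.
import math
import torch

def angular_spectrum(field, wavelength, dz, dx):
    """Propagate a complex field over distance dz with the angular-spectrum method."""
    n = field.shape[-1]
    fx = torch.fft.fftfreq(n, d=dx)
    fy_grid, fx_grid = torch.meshgrid(fx, fx, indexing='ij')
    arg = torch.clamp(1.0 / wavelength ** 2 - fx_grid ** 2 - fy_grid ** 2, min=0.0)
    phase = 2.0 * math.pi * dz * torch.sqrt(arg)
    kernel = torch.exp(torch.complex(torch.zeros_like(phase), phase))
    return torch.fft.ifft2(torch.fft.fft2(field) * kernel)

def reconstruct(hologram, net, z, wavelength, dz, dx, n_iter=2000, lr=1e-3):
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(n_iter):
        opt.zero_grad()
        out = net(z)                                  # untrained decoder, 2-channel output
        amp, phs = out[:, 0], out[:, 1]
        obj = torch.polar(amp, phs)                   # complex object field at the sample plane
        sensor = angular_spectrum(obj, wavelength, dz, dx)
        pred = sensor.abs() ** 2                      # intensity recorded by the sensor
        loss = torch.mean((pred - hologram) ** 2)
        # light total-variation term on the amplitude as a simple extra regularizer
        loss = loss + 1e-4 * ((amp[..., 1:, :] - amp[..., :-1, :]).abs().mean()
                              + (amp[..., :, 1:] - amp[..., :, :-1]).abs().mean())
        loss.backward()
        opt.step()
    with torch.no_grad():
        out = net(z)
    return out[:, 0], out[:, 1]                       # recovered amplitude and phase
```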


2021, Vol 11 (11), pp. 4803
Author(s): Shiming Chen, Shaoping Xu, Xiaoguo Chen, Fen Li

Image denoising, a classic ill-posed problem, aims to recover a latent image from a noisy measurement. Over the past few decades, a considerable number of denoising methods have been studied extensively. Among these methods, supervised deep convolutional networks have garnered increasing attention, and their superior performance is attributed to their capability to learn realistic image priors from a large amount of paired noisy and clean images. However, if the image to be denoised is significantly different from the training images, it could lead to inferior results, and the networks may even produce hallucinations by using inappropriate image priors to handle an unseen noisy image. Recently, deep image prior (DIP) was proposed, and it overcame this drawback to some extent. The structure of the DIP generator network is capable of capturing the low-level statistics of a natural image using an unsupervised method with no training images other than the image itself. Compared with a supervised denoising model, the unsupervised DIP is more flexible when processing image content that must be denoised. Nevertheless, the denoising performance of DIP is usually inferior to that of current supervised learning-based methods using deep convolutional networks, and it is susceptible to over-fitting. To solve these problems, we propose a novel deep generative network with multiple target images and an adaptive termination condition. Specifically, we utilized mainstream denoising methods to generate two clean target images to be used with the original noisy image, enabling better guidance during the convergence process and improving the convergence speed. Moreover, we adopted the noise level estimation (NLE) technique to set a more reasonable adaptive termination condition, which can effectively solve the problem of over-fitting. Extensive experiments demonstrated that the proposed approach significantly outperforms the original DIP method on different databases. Specifically, the average peak signal-to-noise ratio (PSNR) performance of our proposed method on four databases at different noise levels is increased by 1.90 to 4.86 dB compared to the original DIP method. Moreover, our method achieves superior performance against state-of-the-art methods in terms of popular metrics, including the structural similarity index (SSIM) and the feature similarity index (FSIM). Thus, the proposed method lays a good foundation for subsequent image processing tasks, such as target detection and super-resolution.
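A schematic of the multi-target DIP idea with an adaptive stopping rule might look like the following. Here `dip_net`, the guide images, the equal weighting of targets, and the residual-based noise check are illustrative assumptions rather than the paper's exact procedure.

```python
# Sketch, assuming PyTorch and scikit-image: a DIP generator fitted against the
# noisy image plus two pre-denoised guide images, with an NLE-based early stop.
import torch
from skimage.restoration import estimate_sigma

def dip_multi_target(dip_net, z, noisy, guides, n_iter=5000, lr=1e-3, tol=0.1):
    sigma_in = estimate_sigma(noisy.squeeze().cpu().numpy())   # noise level of the input
    targets = [noisy] + list(guides)                           # noisy image + clean guides
    opt = torch.optim.Adam(dip_net.parameters(), lr=lr)
    for it in range(n_iter):
        opt.zero_grad()
        out = dip_net(z)
        loss = sum(torch.mean((out - t) ** 2) for t in targets) / len(targets)
        loss.backward()
        opt.step()
        if it % 100 == 0:
            # adaptive termination: stop once the residual's estimated noise level
            # approaches the noise level estimated from the input image itself
            resid = (noisy - out).detach().squeeze().cpu().numpy()
            if abs(estimate_sigma(resid) - sigma_in) < tol * sigma_in:
                break
    with torch.no_grad():
        return dip_net(z)
```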


Sensors, 2021, Vol 21 (7), pp. 2348
Author(s): Zhe Liu, Yinqiang Zheng, Xian-Hua Han

Hyperspectral image (HSI) super-resolution (SR) is a challenging task due to its ill-posed nature, and has attracted extensive attention from the research community. Previous methods concentrated on leveraging various hand-crafted image priors of a latent high-resolution hyperspectral (HR-HS) image to regularize the degradation model of the observed low-resolution hyperspectral (LR-HS) and HR-RGB images. They also exploited different optimization strategies for searching for a plausible solution, which usually leads to limited reconstruction performance. Recently, deep-learning-based methods evolved for automatically learning the abundant image priors in a latent HR-HS image and have made great progress for HS image super-resolution. However, current deep-learning methods face difficulties in designing more complicated and deeper neural network architectures to boost performance, and they require large-scale training triplets, such as the LR-HS, HR-RGB, and corresponding HR-HS images, for network training; this requirement significantly limits their applicability to real scenarios. In this work, a deep unsupervised fusion-learning framework is proposed for generating a latent HR-HS image using only the observed LR-HS and HR-RGB images, without preparing any other training triplets. Based on the fact that a convolutional neural network architecture is capable of capturing a large number of low-level statistics (priors) of images, the underlying priors of spatial structures and spectral attributes in a latent HR-HS image are learned automatically using only its corresponding degraded observations. Specifically, the parameter space of a generative neural network is explored to learn the required HR-HS image by minimizing the reconstruction errors of the observations using the mathematical relations between the data. Moreover, special convolutional layers for approximating the degradation operations between the observations and the latent HR-HS image are specifically designed to construct an end-to-end unsupervised learning framework for HS image super-resolution. Experiments on two benchmark HS datasets, CAVE and Harvard, demonstrate that the proposed method is capable of producing very promising results even under a large upscaling factor. Furthermore, it outperforms other unsupervised state-of-the-art methods by a large margin, demonstrating its superiority and efficiency.
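The core of such an unsupervised fusion scheme can be sketched as a generator whose output is pushed through two degradation layers, one spatial and one spectral, and fitted to the two observations. The pooling-based blur, the 1x1 spectral-response convolution, and all names below are simplifying assumptions, not the paper's exact architecture.

```python
# Sketch, assuming PyTorch: the generator predicts the latent HR-HS cube, and
# fixed degradation layers map it back to the observed LR-HS and HR-RGB images.
import torch
import torch.nn.functional as F

def fusion_loss(generator, z, lr_hs, hr_rgb, srf, scale):
    """srf: (3, C) spectral response mapping C hyperspectral bands to RGB."""
    hr_hs = generator(z)                                  # (1, C, H, W) latent estimate
    # spatial degradation: blur + downsampling approximated by average pooling
    lr_hs_pred = F.avg_pool2d(hr_hs, kernel_size=scale)
    # spectral degradation: 1x1 convolution with the sensor's spectral response
    hr_rgb_pred = F.conv2d(hr_hs, srf.view(3, -1, 1, 1))
    return F.mse_loss(lr_hs_pred, lr_hs) + F.mse_loss(hr_rgb_pred, hr_rgb)

def fit(generator, z, lr_hs, hr_rgb, srf, scale, n_iter=3000, lr=1e-3):
    # optimize only the generator parameters; no external training data is used
    opt = torch.optim.Adam(generator.parameters(), lr=lr)
    for _ in range(n_iter):
        opt.zero_grad()
        fusion_loss(generator, z, lr_hs, hr_rgb, srf, scale).backward()
        opt.step()
    with torch.no_grad():
        return generator(z)
```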


Electronics, 2020, Vol 9 (11), pp. 1957
Author(s): Yu Shi, Cien Fan, Lian Zou, Caixia Sun, Yifeng Liu

Deep neural networks are vulnerable to adversarial examples, which are synthesized by adding imperceptible perturbations to the original image yet fool the classifier into producing wrong prediction outputs. This paper proposes an image restoration approach that provides a strong defense mechanism against adversarial attacks. We show that the unsupervised image restoration framework deep image prior can effectively eliminate the influence of adversarial perturbations. The proposed method uses multiple deep image prior networks, called tandem deep image priors, to recover the original image from an adversarial example. The tandem construction contains two deep image prior networks: the first network captures the main information of the image, and the second network recovers the original image based on the prior information provided by the first. The proposed method reduces the number of iterations originally required by a deep image prior network, requires neither adjusting the classifier nor pre-training, and can be combined with other defensive methods. Our experiments show that the proposed method surprisingly achieves higher classification accuracy on ImageNet against a wide variety of adversarial attacks than previous state-of-the-art defense methods.
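One plausible reading of the tandem construction is sketched below: a first DIP network is fitted briefly to the adversarial image, and a second network is then fitted to a blend of that coarse reconstruction and the input. The blending weight, iteration counts, and function names are illustrative assumptions, not the authors' exact procedure.

```python
# Sketch, assuming PyTorch: two untrained DIP generators fitted in sequence,
# with the first network's output acting as prior information for the second.
import torch

def fit_dip(net, z, target, n_iter, lr=1e-3):
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(n_iter):
        opt.zero_grad()
        loss = torch.mean((net(z) - target) ** 2)
        loss.backward()
        opt.step()
    with torch.no_grad():
        return net(z)

def tandem_defense(net1, net2, z1, z2, x_adv, iters1=500, iters2=500, alpha=0.5):
    coarse = fit_dip(net1, z1, x_adv, iters1)         # stage 1: capture the main image content
    # stage 2: reconstruct against a blend of the adversarial input and the stage-1 prior
    target = alpha * coarse + (1 - alpha) * x_adv
    return fit_dip(net2, z2, target, iters2)
```

Because both networks are fitted only to the single input image, the defense needs no retraining of the classifier and can sit in front of any downstream model.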


Sensors, 2020, Vol 20 (18), pp. 5308
Author(s): Fernando Pérez-Bueno, Miguel Vega, Javier Mateos, Rafael Molina, Aggelos K. Katsaggelos

Pansharpening is a technique that fuses a low-spatial-resolution multispectral image and a high-spatial-resolution panchromatic one to obtain a multispectral image with the spatial resolution of the latter while preserving the spectral information of the multispectral image. In this paper we propose a variational Bayesian methodology for pansharpening. The proposed methodology uses the sensor characteristics to model the observation process and places Super-Gaussian sparse image priors on the expected characteristics of the pansharpened image. The pansharpened image, as well as all model and variational parameters, are estimated within the proposed methodology. Using real and synthetic data, the quality of the pansharpened images is assessed both visually and quantitatively and compared with other pansharpening methods. Theoretical and experimental results demonstrate the effectiveness, efficiency, and flexibility of the proposed formulation.
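Schematically, the kind of observation model and Super-Gaussian prior described here can be written as below; the notation (blur B_b, decimation D, spectral weights lambda_b, filter F, potential rho) is illustrative and not necessarily the paper's.

```latex
% Schematic observation model and Super-Gaussian prior (illustrative notation).
\begin{align*}
\mathbf{y}_b &= \mathbf{D}\mathbf{B}_b\,\mathbf{x}_b + \mathbf{n}_b,
  \quad \mathbf{n}_b \sim \mathcal{N}(\mathbf{0}, \beta_b^{-1}\mathbf{I})
  && \text{(each MS band: blur, decimate, add noise)} \\
\mathbf{p} &= \textstyle\sum_b \lambda_b \mathbf{x}_b + \mathbf{n}_p
  && \text{(panchromatic image as a spectral mixture)} \\
p(\mathbf{x}_b) &\propto \textstyle\prod_i
  \exp\!\bigl(-\alpha_b\,\rho\bigl([\mathbf{F}\mathbf{x}_b]_i\bigr)\bigr)
  && \text{(Super-Gaussian prior on filtered coefficients)}
\end{align*}
```

In such a formulation, variational Bayes would alternate updates of the approximate posterior over each pansharpened band with updates of the noise and prior parameters, consistent with the abstract's statement that the image and all model and variational parameters are estimated jointly.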

