degraded images
Recently Published Documents


TOTAL DOCUMENTS: 271 (last five years: 39)
H-INDEX: 18 (last five years: 1)

Sensors, 2022, Vol. 22(2), p. 537
Author(s): Caiyue Zhou, Yanfen Kong, Chuanyong Zhang, Lin Sun, Dongmei Wu, ...

Group-based sparse representation (GSR) uses the image nonlocal self-similarity (NSS) prior to group similar image patches and then performs sparse representation on each group. However, the traditional GSR model restores the image by learning from the degraded image alone, which leads to inevitable over-fitting in the trained model and hence to poor restoration results. In this paper, we propose a new hybrid sparse representation model (HSR) for image restoration. The proposed HSR model is improved in two aspects. On the one hand, it exploits the NSS priors of both the degraded image and external image datasets, making the model complementary in both the feature space and the image plane. On the other hand, we introduce a joint sparse representation model to make better use of the local sparsity and NSS characteristics of images. This joint model integrates the patch-based sparse representation (PSR) model and the GSR model while retaining the advantages of both, so that the sparse representation model is unified. Extensive experimental results show that the proposed hybrid model outperforms several existing image recovery algorithms in both objective and subjective evaluations.
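As a rough illustration of the NSS-grouping idea that GSR (and hence HSR) builds on, the sketch below groups each reference patch with its most similar patches and applies SVD hard-thresholding as a stand-in for group-wise sparse coding. It is not the authors' HSR model; the patch size, stride, group size, and threshold are illustrative assumptions.

```python
import numpy as np

def extract_patches(img, size=8, stride=4):
    """Slide a window over the image and return flattened patches plus positions."""
    H, W = img.shape
    patches, coords = [], []
    for i in range(0, H - size + 1, stride):
        for j in range(0, W - size + 1, stride):
            patches.append(img[i:i + size, j:j + size].ravel())
            coords.append((i, j))
    return np.array(patches), coords

def group_sparse_denoise(img, size=8, stride=4, k=16, tau=10.0):
    """Toy NSS grouping + group-wise sparse coding via SVD hard-thresholding."""
    img = np.asarray(img, dtype=float)
    patches, coords = extract_patches(img, size, stride)
    out = np.zeros_like(img)
    weight = np.zeros_like(img)
    for idx, ref in enumerate(patches):
        # Nonlocal self-similarity: find the k patches most similar to the reference.
        dists = np.sum((patches - ref) ** 2, axis=1)
        nn = np.argsort(dists)[:k]
        group = patches[nn]                       # k x size^2 group matrix
        # Group-wise "sparse" code: keep only the large singular values.
        U, s, Vt = np.linalg.svd(group, full_matrices=False)
        group_hat = (U * np.where(s > tau, s, 0.0)) @ Vt
        # Aggregate the reference-patch estimate back into the image.
        i, j = coords[idx]
        out[i:i + size, j:j + size] += group_hat[0].reshape(size, size)
        weight[i:i + size, j:j + size] += 1.0
    return out / np.maximum(weight, 1e-8)
```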


Author(s): Anchal Kumawat, Sucheta Panda

Often in practice, during image acquisition the acquired image gets degraded by various factors such as noise, motion blur, camera mis-focus, and atmospheric turbulence, rendering it unsuitable for further analysis or processing. To improve the quality of such degraded images, a double hybrid restoration filter is proposed that processes two identical copies of the input image and fuses the outputs into a unified filter using image fusion. The first copy is processed by applying deconvolution with the Wiener filter (DWF) twice and decomposing the output using the Discrete Wavelet Transform (DWT). The second copy is processed in parallel by applying deconvolution with the Lucy–Richardson filter (DLR) twice, followed by the same procedure. The proposed filter outperforms the DWF and DLR filters for both blurred and noisy images. It is compared with several standard deconvolution algorithms and with state-of-the-art restoration filters using seven image quality assessment parameters. Simulation results confirm the success of the proposed algorithm, and both the visual and quantitative results are very impressive.
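The two-branch structure described above can be sketched with off-the-shelf deconvolution and wavelet routines. The PSF, Wiener balance, Lucy–Richardson iteration count, and the max-absolute detail-fusion rule below are placeholder assumptions rather than the authors' exact settings.

```python
import numpy as np
import pywt
from skimage import restoration

def fuse_dwt(img_a, img_b, wavelet="haar"):
    """Fuse two restored images: average the approximation bands, keep max-abs details."""
    cA_a, details_a = pywt.dwt2(img_a, wavelet)
    cA_b, details_b = pywt.dwt2(img_b, wavelet)
    cA = 0.5 * (cA_a + cA_b)
    fused_details = tuple(
        np.where(np.abs(da) >= np.abs(db), da, db)
        for da, db in zip(details_a, details_b)
    )
    return pywt.idwt2((cA, fused_details), wavelet)

def double_hybrid_restore(degraded, psf):
    """Two parallel branches (Wiener twice, Lucy-Richardson twice) fused via the DWT."""
    # Branch 1: Wiener deconvolution applied twice.
    w = restoration.wiener(degraded, psf, balance=0.1)
    w = restoration.wiener(w, psf, balance=0.1)
    # Branch 2: Lucy-Richardson deconvolution applied twice.
    lr = restoration.richardson_lucy(degraded, psf, 10)
    lr = restoration.richardson_lucy(lr, psf, 10)
    return fuse_dwt(w, lr)
```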


Information, 2021, Vol. 13(1), p. 1
Author(s): Rong Du, Weiwei Li, Shudong Chen, Congying Li, Yong Zhang

Underwater image enhancement recovers degraded underwater images to produce corresponding clear images. Image enhancement methods based on deep learning usually use paired data to train the model, yet such paired data, i.e., degraded images and their corresponding clear images, are difficult to capture simultaneously in the underwater environment. In addition, retaining detailed information in the enhanced image is another critical problem. To address these issues, we propose a novel unpaired underwater image enhancement method based on a cycle generative adversarial network (UW-CycleGAN) to recover degraded underwater images. The proposed UW-CycleGAN model includes three main modules: (1) a content loss regularizer is adopted in the CycleGAN generator, constraining the detailed information in a degraded image to remain in the corresponding generated clear image; (2) a blur-promoting adversarial loss regularizer is introduced into the discriminator to reduce blur and noise in the generated clear images; (3) a DenseNet block is added to the generator to retain more information from each feature map during training. Experimental results on two unpaired underwater image datasets show satisfactory performance compared to state-of-the-art image enhancement methods, demonstrating the effectiveness of the proposed model.
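A minimal sketch of how such a content-preservation term could enter a CycleGAN-style generator objective is given below, assuming hypothetical generators G (degraded to clear) and F (clear to degraded) and a discriminator D_clear. The loss weights are illustrative, the content term is shown as a pixel-wise L1 penalty (one plausible instantiation), and the blur-promoting discriminator loss and DenseNet blocks are omitted.

```python
import torch
import torch.nn as nn

l1 = nn.L1Loss()
mse = nn.MSELoss()

def generator_loss(G, F, D_clear, degraded, lambda_cyc=10.0, lambda_content=5.0):
    """CycleGAN-style generator objective with an added content-preservation term."""
    fake_clear = G(degraded)
    # Adversarial term: the generated clear image should fool the discriminator.
    pred = D_clear(fake_clear)
    adv = mse(pred, torch.ones_like(pred))
    # Cycle-consistency term: degraded -> clear -> degraded should return the input.
    cyc = l1(F(fake_clear), degraded)
    # Content regularizer: keep the detail of the degraded input in the output.
    content = l1(fake_clear, degraded)
    return adv + lambda_cyc * cyc + lambda_content * content
```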


2021, Vol. 1(1), pp. 25-32
Author(s): Meryem H. Muhson, Ayad A. Al-Ani

Image restoration is a branch of image processing that uses a mathematical degradation and restoration model to recover an original image from a degraded one. This research aims to restore blurred images that have been corrupted by a known or unknown degradation function. Image restoration approaches can be classified into two groups based on knowledge of the degradation: blind and non-blind techniques. In this research, we adopt a blind algorithm. A deep learning method for single-image super-resolution (SR) is proposed that directly learns an end-to-end mapping between low-resolution and high-resolution images, expressed by a deep convolutional neural network (CNN). The proposed restoration system must deal with the challenge that the degraded images have an unknown blur kernel, deblurring them to produce estimates of the original images with a minimal error rate.
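A minimal sketch of such an end-to-end mapping network, in the spirit of the classic three-layer SRCNN rather than the exact architecture used in this work, might look as follows; the layer widths, kernel sizes, and training settings are assumptions.

```python
import torch
import torch.nn as nn

class SRCNN(nn.Module):
    """Minimal SRCNN-style network: feature extraction, nonlinear mapping, reconstruction."""
    def __init__(self, channels=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4),  # feature extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),                   # nonlinear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, x):
        return self.body(x)

# Training sketch: learn a mapping from degraded patches to sharp target patches.
model = SRCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.MSELoss()
degraded = torch.rand(8, 1, 64, 64)   # placeholder batch of degraded patches
target = torch.rand(8, 1, 64, 64)     # corresponding sharp patches
optimizer.zero_grad()
loss = criterion(model(degraded), target)
loss.backward()
optimizer.step()
```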


Author(s): Borjan Gagoski, Junshen Xu, Paul Wighton, M. Dylan Tisdall, Robert Frost, ...

2021, Vol. 923(1), p. 124
Author(s): Tim B. Miller, Pieter van Dokkum

Fitting parameterized models to images of galaxies has become the standard for measuring galaxy morphology. This forward-modeling technique allows one to account for the point-spread function to effectively study semi-resolved galaxies. However, using a specific parameterization for a galaxy’s surface brightness profile can bias measurements if it is not an accurate representation. Furthermore, it can be difficult to assess systematic errors in parameterized profiles. To overcome these issues we employ the Multi-Gaussian expansion (MGE) method of representing a galaxy’s profile together with a Bayesian framework for fitting images. MGE flexibly represents a galaxy’s profile using a series of Gaussians. We introduce a novel Bayesian inference approach that uses pre-rendered Gaussian components, which greatly speeds up computation time and makes it feasible to run the fitting code on large samples of galaxies. We demonstrate our method with a series of validation tests. By injecting galaxies, with properties similar to those observed at z ∼ 1.5, into deep Hubble Space Telescope observations we show that it can accurately recover total fluxes and effective radii of realistic galaxies. Additionally, we use degraded images of local galaxies to show that our method can recover realistic galaxy surface brightness and color profiles. Our implementation is available in an open-source Python package, imcascade, which contains all methods needed for the preparation of images, fitting, and analysis of results.
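The key property that pre-rendered Gaussian components exploit is that, once the component widths are fixed, the amplitudes enter the model linearly. The toy 1D sketch below illustrates this with an ordinary least-squares fit; it is not the imcascade implementation, and the component widths are arbitrary assumptions.

```python
import numpy as np

def mge_profile(r, amplitudes, sigmas):
    """Surface brightness as a sum of concentric Gaussian components."""
    basis = np.exp(-0.5 * (np.asarray(r)[:, None] / sigmas) ** 2)
    return basis @ amplitudes

def fit_mge_amplitudes(r, profile, sigmas):
    """With the widths fixed (as with pre-rendered components), the amplitudes
    follow from a linear fit (ordinary least squares here; imcascade itself
    performs a full Bayesian fit)."""
    basis = np.exp(-0.5 * (np.asarray(r)[:, None] / sigmas) ** 2)
    return np.linalg.lstsq(basis, profile, rcond=None)[0]

# Example: approximate a toy exponential profile with nine log-spaced Gaussians.
r = np.linspace(0.1, 20.0, 200)
truth = np.exp(-r / 3.0)
sigmas = np.logspace(-0.5, 1.3, 9)
amps = fit_mge_amplitudes(r, truth, sigmas)
approx = mge_profile(r, amps, sigmas)
```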


Symmetry, 2021, Vol. 13(10), p. 1856
Author(s): Shuhan Sun, Zhiyong Xu, Jianlin Zhang

Blind image deblurring is a well-known ill-posed inverse problem in computer vision. To make the problem well-posed, this paper puts forward a plain but effective regularization method, namely spectral norm regularization (SN), which can be regarded as the symmetrical form of the spectral norm. This work is inspired by the observation that the SN value increases after an image is blurred. Based on this observation, a blind deblurring algorithm (BDA-SN) is designed. BDA-SN builds a deblurring estimator for the image degradation process by investigating the inherent properties of SN and the image gradient. Compared with previous image regularization methods, SN is better able to differentiate clear and degraded images. Therefore, the SN of an image can effectively help image deblurring in various scenes, such as text, face, natural, and saturated images. Qualitative and quantitative evaluations demonstrate that BDA-SN achieves favorable performance on real and simulated images, with the average PSNR reaching 31.41 dB on the benchmark dataset of Levin et al.
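The abstract does not spell out how SN is computed; as an assumption, the sketch below takes the spectral norm (largest singular value) of the image gradients and compares it between a clear image and a Gaussian-blurred copy, which is the kind of clear-versus-blurred comparison the motivating observation refers to.

```python
import numpy as np
from scipy import ndimage

def spectral_norm(mat):
    """Spectral norm = largest singular value of the matrix."""
    return np.linalg.norm(mat, ord=2)

def gradient_spectral_norm(img):
    """Assumed measure: spectral norms of the horizontal and vertical gradients, summed."""
    gx = np.diff(img, axis=1)
    gy = np.diff(img, axis=0)
    return spectral_norm(gx) + spectral_norm(gy)

# Compare the measure on a clear image and a Gaussian-blurred copy of it.
rng = np.random.default_rng(0)
sharp = rng.random((128, 128))
blurred = ndimage.gaussian_filter(sharp, sigma=2.0)
print(gradient_spectral_norm(sharp), gradient_spectral_norm(blurred))
```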


Author(s): Hadi Salehi

Images are widely used in engineering. Unfortunately, medical ultrasound images and synthetic aperture radar (SAR) images are mainly degraded by an intrinsic noise called speckle, so de-speckling is a key pre-processing stage for such degraded images. In this paper, an optimized adaptive Wiener filter (OAWF) is first proposed; it can be applied to the input image without a logarithmic transform and offers improved performance. Next, the coefficient of variation (CV) is computed from the input image, and with its help the guided filter is converted into an improved guided filter (IGF). The IGF is then applied to the image, followed by a fast bilateral filter. The proposed filter preserves image detail better than several other standard methods. The experimental outcomes show that the proposed denoising algorithm preserves image details and edges better than other de-speckling methods.
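A rough sketch of the guided-filter part of the pipeline is given below: the local coefficient of variation is computed and used (as an assumed heuristic) to scale the guided-filter regularization, followed by a bilateral filter. The OAWF step is omitted and all parameter choices are illustrative, not the authors' settings.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.restoration import denoise_bilateral

def local_cv(img, size=7):
    """Coefficient of variation (std / mean) over a local window."""
    mean = uniform_filter(img, size)
    sq_mean = uniform_filter(img ** 2, size)
    std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0))
    return std / np.maximum(mean, 1e-8)

def guided_filter(guide, src, radius=4, eps=1e-2):
    """Plain self-guided filter (He et al.); eps controls the smoothing strength."""
    size = 2 * radius + 1
    mean_I = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    corr_Ip = uniform_filter(guide * src, size)
    corr_II = uniform_filter(guide * guide, size)
    a = (corr_Ip - mean_I * mean_p) / (corr_II - mean_I ** 2 + eps)
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def despeckle(img):
    """Sketch: CV-adapted guided filtering followed by bilateral filtering (float image in [0, 1])."""
    cv = local_cv(img)
    # Assumption: scale the regularization by the mean local CV, so strongly
    # speckled (high-CV) images are smoothed more aggressively.
    eps = 1e-2 * (1.0 + cv.mean())
    smoothed = np.clip(guided_filter(img, img, radius=4, eps=eps), 0, 1)
    return denoise_bilateral(smoothed, sigma_color=0.1, sigma_spatial=3)
```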


2021, Vol. 2021, pp. 1-14
Author(s): Yongqi Guo, Yuxu Lu, Yu Guo, Ryan Wen Liu, Kwok Tai Chui

The timely, automatic, and accurate detection of water-surface targets has received significant attention in intelligent vision-enabled maritime transportation systems. Reliable detection results are also beneficial for water quality monitoring in practical applications. However, visual image quality is often inevitably degraded by poor weather conditions, potentially leading to unsatisfactory target detection results. Although the degraded images can be restored using state-of-the-art visibility enhancement methods, it is still difficult to obtain high-quality detection performance due to the unavoidable loss of detail in restored images. To alleviate these limitations, we first investigate the influence of visibility enhancement methods on detection results and then propose a neural network-empowered water-surface target detection framework. A data augmentation strategy, which synthetically simulates degraded images under different weather conditions, is further presented to promote the generalization and feature representation abilities of our network. The proposed framework is capable of accurately detecting water-surface targets under different adverse imaging conditions, e.g., haze, low light, and rain. Experimental results on both synthetic and realistic scenarios illustrate the effectiveness of the proposed framework in terms of detection accuracy and efficacy.
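The data augmentation idea can be sketched with simple synthetic degradations; the haze, low-light, and rain models below (atmospheric scattering, gamma darkening, and bright diagonal streaks) are generic approximations and assumptions, not the authors' simulation pipeline.

```python
import numpy as np

rng = np.random.default_rng(42)

def add_haze(img, t=0.6, airlight=0.9):
    """Atmospheric scattering model: I = J * t + A * (1 - t), for a float image in [0, 1]."""
    return img * t + airlight * (1.0 - t)

def add_low_light(img, gamma=2.5):
    """Darken the image with a gamma curve."""
    return np.clip(img, 0, 1) ** gamma

def add_rain(img, n_streaks=300, length=12, intensity=0.3):
    """Overlay bright diagonal streaks as a crude rain model (image larger than the streak length)."""
    out = img.copy()
    H, W = img.shape[:2]
    for _ in range(n_streaks):
        y, x = rng.integers(0, H - length), rng.integers(0, W - length)
        for k in range(length):
            out[y + k, x + k, ...] = np.clip(out[y + k, x + k, ...] + intensity, 0, 1)
    return out

def augment(img):
    """Randomly pick one synthetic weather degradation per training sample."""
    choice = rng.choice(["haze", "low_light", "rain", "none"])
    if choice == "haze":
        return add_haze(img, t=rng.uniform(0.4, 0.8))
    if choice == "low_light":
        return add_low_light(img, gamma=rng.uniform(1.8, 3.0))
    if choice == "rain":
        return add_rain(img)
    return img
```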

