degraded image
Recently Published Documents

TOTAL DOCUMENTS: 168 (FIVE YEARS: 62)
H-INDEX: 10 (FIVE YEARS: 3)

Author(s): K. Praveen Kumar, C. Venkata Narasimhulu, K. Satya Prasad

During image analysis, restoring a degraded image can require many iterations, leading to long waits and slow scanning and hence inefficient restoration. In fact, a small number of measurements is enough to recover an image in good condition. Owing to tree sparsity, a 2D wavelet tree reduces the number of coefficients and iterations needed to restore the degraded image. All wavelet coefficients are extracted with overlaps as low and high sub-band spaces and ordered so that they decompose along the tree-ordered structured path. Some articles have addressed the problems of tree sparsity and total variation (TV), but few authors have endorsed the benefits of tree sparsity. In this paper, a tree-order-based spatial variation regularization algorithm is implemented that changes the window size and variation estimators to reduce the loss of image information and to address the problem of image over-smoothing. The acceptance rate of the tree-structured path relies on local variation estimators to regularize the performance parameters and update them to restore the image. To this end, a Localized Total Variation (LTV) method is proposed and implemented on a 2D wavelet tree-ordered structured path using the proposed image smoothness-adjustment scheme. Finally, a reordering algorithm is proposed to reorder the set of pixels and increase the reliability of the restored image. Simulation results clearly show that the proposed method outperforms existing image restoration methods.
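The local variation estimators the abstract relies on can be illustrated with a minimal sketch: a per-pixel (anisotropic) total variation map, summed over small windows. This is an illustrative simplification, not the authors' LTV algorithm; the window size `win` and the test image are assumptions.

```python
import numpy as np

def local_total_variation(img, win=8):
    # Anisotropic TV magnitude per pixel: |forward diff| in y plus x
    # (differences at the far edges are zero-padded via `append`).
    dy = np.abs(np.diff(img, axis=0, append=img[-1:, :]))
    dx = np.abs(np.diff(img, axis=1, append=img[:, -1:]))
    tv = dy + dx
    # Sum the TV map over non-overlapping win x win windows; windows
    # with high variation would keep a weaker smoothing weight.
    h, w = img.shape
    out = np.zeros((h // win, w // win))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = tv[i * win:(i + 1) * win, j * win:(j + 1) * win].sum()
    return out

img = np.zeros((32, 32))
img[:, 16:] = 1.0                  # a single vertical edge
lv = local_total_variation(img, win=8)
```

Only the windows crossed by the edge report non-zero variation, which is exactly the cue a variation-adaptive regularizer needs.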


Author(s): Joycy K. Antony, K. Kanagalakshmi

Images captured in dim light are rarely satisfactory, and raising the ISO sensitivity for a short exposure duration makes them noisy. Image restoration methods have a wide range of applications in medical imaging, computer vision, remote sensing, and graphic design. Although using a flash improves the lighting, it changes the image tone and introduces unwanted highlights and shadows. These drawbacks are overcome by image restoration methods that recover a high-quality image from the degraded observation. The main challenge in image restoration is recovering a degraded image contaminated with noise. In this research, an effective algorithm, named the T2FRF filter, is developed for image restoration. Noisy pixels are identified in the input fingerprint image using a Deep Convolutional Neural Network (Deep CNN), which is trained on the neighboring pixels. The Rider Optimization Algorithm (ROA) is used to remove the noisy pixels from the image. Pixel enhancement is performed using a type II fuzzy system. The developed T2FRF filter is evaluated using metrics such as the correlation coefficient and Peak Signal to Noise Ratio (PSNR). Compared with existing image restoration methods, the developed method obtained a maximum correlation coefficient of 0.7504 and a maximum PSNR of 28.2467 dB.
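The two evaluation metrics named above are straightforward to compute. A minimal sketch follows; the sample images and noise level are illustrative, not from the paper.

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    # Peak Signal to Noise Ratio in dB for 8-bit-range images.
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def corr_coef(ref, test):
    # Pearson correlation coefficient between the flattened images.
    return np.corrcoef(ref.ravel(), test.ravel())[0, 1]

rng = np.random.default_rng(0)
clean = rng.integers(0, 256, (64, 64)).astype(float)
noisy = clean + rng.normal(0.0, 5.0, clean.shape)
```

With sigma = 5 noise, PSNR lands near 10·log10(255²/25) ≈ 34 dB, and the correlation stays close to 1 because the noise is small relative to the image contrast.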


Information, 2021, Vol 13 (1), pp. 1
Author(s): Rong Du, Weiwei Li, Shudong Chen, Congying Li, Yong Zhang

Underwater image enhancement recovers degraded underwater images to produce the corresponding clear images. Image enhancement methods based on deep learning usually use paired data to train the model, but such paired data, e.g., degraded images and the corresponding clear images, are difficult to capture simultaneously in the underwater environment. In addition, how to retain detailed information in the enhanced image is another critical problem. To solve these issues, we propose a novel unpaired underwater image enhancement method via a cycle generative adversarial network (UW-CycleGAN) to recover degraded underwater images. Our proposed UW-CycleGAN model includes three main modules: (1) a content loss regularizer is adopted in the CycleGAN generator, which constrains the detailed information in a degraded image to remain in the corresponding generated clear image; (2) a blur-promoting adversarial loss regularizer is introduced into the discriminator to reduce blur and noise in the generated clear images; (3) a DenseNet block is added to the generator to retain more information from each feature map during training. Finally, experimental results on two unpaired underwater image datasets show satisfactory performance compared to state-of-the-art image enhancement methods, demonstrating the effectiveness of the proposed model.
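The generator-side terms can be summarized as one weighted objective. The sketch below is an illustrative least-squares-GAN formulation with assumed weights `lam_cyc` and `lam_content`; it is not the paper's exact loss definition.

```python
import numpy as np

def l1(a, b):
    # Mean absolute error between two images.
    return np.mean(np.abs(a - b))

def generator_objective(x, g_x, f_g_x, d_score,
                        lam_cyc=10.0, lam_content=1.0):
    # x: degraded input; g_x: G(x), the generated clear image;
    # f_g_x: F(G(x)), the cycle reconstruction;
    # d_score: discriminator output for G(x) in [0, 1].
    adv = (d_score - 1.0) ** 2        # least-squares adversarial term
    cyc = l1(x, f_g_x)                # cycle-consistency term
    content = l1(x, g_x)              # content regularizer keeps detail
    return adv + lam_cyc * cyc + lam_content * content
```

The content term is what distinguishes this objective from plain CycleGAN: it ties the generated clear image directly back to the degraded input rather than only through the cycle.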


Technologies, 2021, Vol 9 (4), pp. 101
Author(s): Ho Sang Lee

A sandstorm image has features similar to those of a hazy image with regard to the acquisition process. The difference between a sand-dust image and a hazy image, however, lies in the color channel balance. In general, a hazy image has no color cast and has balanced color channels despite fog and dust, whereas a sand-dust image has a yellowish or reddish color cast caused by sand particles, which degrade the color channels. When a sand-dust image is enhanced without color channel compensation, the improved image acquires a new color cast. Therefore, to enhance a sandstorm image naturally, without a color cast, a color channel compensation step is needed. To balance the degraded color channels, this paper proposes a color balance method using the eigenvalue of each color channel. The eigenvalue reflects the image's features: degraded and undegraded images have different eigenvalues in each color channel. Therefore, using the eigenvalue of each color channel, the degraded image can be balanced and improved naturally. Because the color-balanced image has the same features as a hazy image, this work applies dehazing methods such as the dark channel prior (DCP) to improve it. However, because the ordinary DCP method has weak points, this work proposes a compensated dark channel prior, named the adaptive DCP (ADCP) method. The proposed method is objectively and subjectively superior to existing methods when applied to various images.
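As a rough stand-in for the eigenvalue-based compensation (whose details are not given in the abstract), a simple gray-world balance shows what equalizing the degraded color channels means in practice; the cast values below are illustrative.

```python
import numpy as np

def gray_world_balance(img):
    # Scale each channel so all channel means match the global mean,
    # removing a uniform color cast (a simpler scheme than the
    # paper's eigenvalue method, shown only for illustration).
    means = img.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / means
    return np.clip(img * gains, 0.0, 1.0)

# A uniform reddish cast, as in a sand-dust image.
cast = np.ones((8, 8, 3)) * np.array([0.9, 0.5, 0.3])
balanced = gray_world_balance(cast)
```

After balancing, the three channel means coincide, so a standard dehazing step such as DCP can be applied without re-introducing a cast.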


2021, Vol 1 (1), pp. 25-32
Author(s): Meryem H. Muhson, Ayad A. Al-Ani

Image restoration is a branch of image processing that uses a mathematical degradation-and-restoration model to recover an original image from a degraded one. This research aims to restore blurred images that have been corrupted by a known or unknown degradation function. Image restoration approaches can be classified into two groups based on knowledge of the degradation: blind and non-blind techniques. In our research, we adopt a blind algorithm. A deep-learning super-resolution (SR) method is proposed for single-image super-resolution. This approach directly learns an end-to-end mapping between low-resolution and high-resolution images, expressed as a deep convolutional neural network (CNN). The proposed restoration system must deal with the challenge that the degraded images have an unknown blur kernel, deblurring them to estimate the original images with a minimal rate of error.
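The degradation function the system must invert is conventionally modeled as a blur kernel plus additive noise, g = h * f + n. A minimal sketch of this forward model follows; the kernel and noise level are illustrative.

```python
import numpy as np

def degrade(f, kernel, sigma, seed=0):
    # g = h * f + n: circular convolution with blur kernel h
    # (implemented via FFT) plus zero-mean Gaussian noise n.
    H = np.fft.fft2(kernel, s=f.shape)
    g = np.real(np.fft.ifft2(np.fft.fft2(f) * H))
    rng = np.random.default_rng(seed)
    return g + rng.normal(0.0, sigma, f.shape)

rng = np.random.default_rng(2)
f = rng.random((8, 8))
identity = np.zeros((3, 3))
identity[0, 0] = 1.0               # delta kernel: no blur at all
```

A blind method must estimate both f and the kernel h from g alone; the delta kernel above is the degenerate case in which g equals f.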


2021, pp. 1-21
Author(s): Yu Guo, Yuxu Lu, Ryan Wen Liu

Maritime video surveillance has become an essential part of the vessel traffic services system, intended to guarantee vessel traffic safety and security in maritime applications. To make maritime surveillance more feasible and practicable, many intelligent vision-empowered technologies have been developed to automatically detect moving vessels from maritime visual sensing data (i.e., maritime surveillance videos). However, when visual data are collected in a low-visibility environment, essential optical information is often hidden in the dark, potentially reducing the accuracy of vessel detection. To guarantee reliable vessel detection under low-visibility conditions, this paper proposes a low-visibility enhancement network (termed LVENet) based on Retinex theory to enhance imaging quality in maritime video surveillance. LVENet is a lightweight deep neural network incorporating a depthwise separable convolution. Synthetically degraded image generation and a hybrid loss function are further presented to enhance the robustness and generalisation capacities of LVENet. Both full-reference and no-reference evaluation experiments demonstrate that LVENet yields comparable or even better visual quality than other state-of-the-art methods. In addition, LVENet takes just 0.0045 s to restore a degraded image of 1920 × 1080 pixels on an NVIDIA 2080Ti GPU, which adequately meets real-time requirements. Using LVENet, vessel detection performance can be greatly improved with enhanced visibility under low-light imaging conditions.
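Retinex theory, on which LVENet is based, decomposes an observation I into reflectance R and illumination L with I = R × L. In the sketch below a box-filter local mean stands in for LVENet's learned illumination estimator; this is an assumption for illustration only.

```python
import numpy as np

def retinex_decompose(img, win=5, eps=1e-6):
    # Retinex model I = R * L: estimate illumination L as a local
    # mean (box filter), then recover reflectance as R = I / L.
    r = win // 2
    padded = np.pad(img, r, mode='edge')
    h, w = img.shape
    L = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            L[i, j] = padded[i:i + win, j:j + win].mean()
    R = img / (L + eps)
    return R, L

flat = np.full((10, 10), 0.5)      # uniformly lit gray patch
R, L = retinex_decompose(flat)
```

Low-light enhancement then amounts to brightening the estimated L while keeping R, which carries the scene detail, untouched.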


2021, pp. 13050-13062
Author(s): Mrs. Poonam Y. Pawar, Dr. Bharati Sanjay Ainapure

Image restoration is one of the challenging and essential milestones in the image processing domain. Digital image processing is a technique for manipulating digital images using a variety of computer algorithms; image restoration is the process of transforming a degraded or damaged image back into the original image, improving image quality by converting the degraded image into the original clean image. Image restoration techniques rely on predefined parameters through which the digital image is processed and refined. The purpose of restoration is to start with the acquired image and then estimate the original image as accurately as possible. A degraded image can be contaminated by blur, noise, or both, and many factors can contribute to the degradation, including poor capture, poor lighting, and poor visibility. Medical science, defense sensor systems, forensic detection, and astronomy all rely on image restoration for accuracy. This paper discusses various image restoration techniques and recent trends for performance improvements.
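As a concrete example of estimating the original image from a degraded one, classical Wiener deconvolution is a textbook baseline rather than one of the surveyed methods; the constant `k` below is an assumed regularization value.

```python
import numpy as np

def wiener_restore(g, kernel, k=1e-3):
    # Wiener deconvolution in the frequency domain:
    # F_hat = conj(H) * G / (|H|^2 + K), where K damps frequencies
    # at which the blur kernel H is nearly zero.
    H = np.fft.fft2(kernel, s=g.shape)
    G = np.fft.fft2(g)
    return np.real(np.fft.ifft2(np.conj(H) * G / (np.abs(H) ** 2 + k)))

# Degrade a test image with a known 3x3 box blur, then restore it.
rng = np.random.default_rng(1)
f = rng.random((16, 16))
kernel = np.full((3, 3), 1.0 / 9.0)
g = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(kernel, s=f.shape)))
restored = wiener_restore(g, kernel)
```

With the true kernel known (the non-blind case), the restored image is much closer to the original than the blurred observation is.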


2021, Vol 11 (15), pp. 6917
Author(s): Yogendra Rao Musunuri, Oh-Seol Kwon

A novel strategy is proposed to address the block artifacts of the conventional dark channel prior (DCP). The DCP estimates the transmission map using patch-based processing, which also results in image blurring. To enhance a degraded image, the proposed single-image dehazing technique restores the blurred image with a refined DCP based on a hidden Markov random field. The proposed algorithm thus estimates a refined transmission map that reduces block artifacts and improves image clarity without explicit guided filters. Experiments were performed on remote-sensing images. The results confirm that the proposed algorithm is superior to conventional approaches to image haze removal. Moreover, the proposed algorithm is suitable for image matching based on local feature extraction.
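The patch-based transmission estimate that produces the block artifacts discussed above follows the standard DCP formula t = 1 − ω·dark(I/A). This sketch shows only the unrefined estimate; the paper's hidden-Markov-random-field refinement is not reproduced here.

```python
import numpy as np

def transmission_map(img, A, omega=0.95, patch=5):
    # Unrefined DCP transmission: t = 1 - omega * dark(I / A), where
    # dark() takes the minimum over color channels and a local patch.
    # The patch-wise minima are what cause the block artifacts.
    norm = img / A                   # A: per-channel atmospheric light
    m = norm.min(axis=2)             # per-pixel minimum over channels
    r = patch // 2
    padded = np.pad(m, r, mode='edge')
    h, w = m.shape
    t = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            t[i, j] = 1.0 - omega * padded[i:i + patch, j:j + patch].min()
    return t

# A scene of pure haze (img == A) should give t = 1 - omega everywhere.
A = np.array([0.8, 0.9, 1.0])
hazy = np.ones((8, 8, 3)) * A
t = transmission_map(hazy, A)
```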


2021
Author(s): Adnan Qayyum, Waqas Sultani, Fahad Shamshad, Rashid Tufail, Junaid Qadir

Retinal images acquired using fundus cameras are often visually blurred due to imperfect imaging conditions, refractive medium turbidity, and motion blur. In addition, ocular diseases such as cataracts also result in blurred retinal images. The presence of blur in retinal fundus images reduces the effectiveness of diagnosis by an expert ophthalmologist or a computer-aided detection/diagnosis system. In this paper, we put forward a single-shot deep image prior (DIP)-based approach for retinal image enhancement. Unlike typical deep-learning-based approaches, our method does not require any training data; instead, it learns the underlying image prior from a single degraded image. We frame retinal image enhancement as a layer decomposition problem and investigate the use of two well-known analytical priors, i.e., the dark channel prior (DCP) and the bright channel prior (BCP), for atmospheric light estimation. We show that both untrained and pretrained neural networks can be used to generate an enhanced image from only a single degraded image. We evaluate our proposed framework quantitatively on five datasets using three widely used metrics and complement this with a subjective qualitative assessment by two expert ophthalmologists. Compared with a recent state-of-the-art method, cofe-Net, on synthetically degraded retinal fundus images, our method outperforms it with gains of 1.23 and 1.4 in average PSNR and SSIM, respectively. On the basis of reported results, our method also outperforms other works in the literature that evaluated their performance on non-public proprietary datasets.
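Of the metrics reported, SSIM is the least obvious to compute. A simplified global (single-window) version shows the structure of the index; the standard implementation instead averages it over local Gaussian windows.

```python
import numpy as np

def global_ssim(x, y, peak=1.0):
    # Single-window SSIM over the whole image: the product of
    # luminance, contrast, and structure comparisons, stabilized
    # by the usual constants c1 and c2.
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
clean = rng.random((64, 64))
noisy = clean + rng.normal(0.0, 0.1, clean.shape)
```

Identical images score exactly 1; any noise or blur pulls the index below 1, which is why SSIM complements PSNR in enhancement evaluations.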

