spatial enhancement
Recently Published Documents


TOTAL DOCUMENTS: 70 (five years: 14)

H-INDEX: 10 (five years: 1)

2022 ◽  Vol 2022 ◽  pp. 1-12
Author(s):  
Muhammad Hameed Siddiqi ◽  
Amjad Alsirhani

Most medical images are low in contrast, so fine details that may inform vital decisions are not visible to the naked eye. Because of this low contrast, such an image is also difficult to segment: there is no significant change between neighboring pixel values, so the gradient becomes very small and the contour cannot converge on the edges of the object. In this work, we propose an ensemble spatial method for image enhancement. In this ensemble approach, we first employ the Laplacian filter, which highlights areas of rapid intensity variation; this filter recovers sufficient detail from an image and also improves features with sharp discontinuities. Then the gradient of the image is determined, which uses the surrounding pixels in a weighted convolution operation to reduce noise. However, the gradient filter has a negative integer in its weighting: the intensity value of the middle pixel may be subtracted from that of the surrounding pixels to enlarge the differences between adjacent pixels when calculating the gradients. This is one reason the gradient filter, which can be computed in eight directions, is not entirely reliable on its own. Therefore, an averaging filter is also utilized, which is effective for image enhancement: it does not rely on values that differ completely from the typical values in the neighborhood, and so it retains the details of the image. The proposed approach showed the best performance on various images collected in dynamic environments.
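The pipeline described above can be approximated in a short sketch. The kernels below (a 4-neighbor Laplacian, Sobel gradients, and a 3x3 box average) are standard choices, not necessarily the exact masks used by the authors, and the way the smoothed gradient magnitude gates the sharpening term is an assumption made only to illustrate how the three filters might be ensembled.

```python
import numpy as np

def conv2d(img, kernel):
    """Same-size 2-D convolution with edge padding (pure NumPy)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# Standard kernels (illustrative choices, not taken from the paper)
LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], dtype=float)
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T
BOX_3X3 = np.ones((3, 3)) / 9.0

def enhance(img, alpha=1.0, beta=0.5):
    """Ensemble spatial enhancement sketch:
    1. Laplacian sharpening highlights rapid intensity variation.
    2. Gradient magnitude, smoothed by a 3x3 averaging filter,
       gates the sharpening so flat (noise-prone) regions are
       boosted less than genuine edges.
    """
    img = img.astype(float)
    lap = conv2d(img, LAPLACIAN)            # detail/edge response
    gx = conv2d(img, SOBEL_X)               # horizontal gradient
    gy = conv2d(img, SOBEL_Y)               # vertical gradient
    grad = np.hypot(gx, gy)                 # gradient magnitude
    mask = conv2d(grad, BOX_3X3)            # averaged (noise-reduced)
    mask = mask / (mask.max() + 1e-8)       # normalise to [0, 1]
    # Subtracting the Laplacian (negative-center kernel) sharpens.
    return np.clip(img - alpha * lap * (beta + mask), 0, 255)
```

On a flat region all three responses vanish and the image is left unchanged, while near edges the gated Laplacian term increases local contrast.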


Author(s):  
Nour Aburaed ◽  
Mohammed Q. Alkhatib ◽  
Stephen Marshall ◽  
Jaime Zabalza ◽  
Hussain Al Ahmad

Author(s):  
Prasen Kumar Sharma ◽  
Sujoy Ghosh ◽  
Arijit Sur

In this article, we address the problem of rain-streak removal in videos. Unlike image restoration, video restoration must ensure temporal consistency in addition to spatial enhancement. Researchers have proposed several effective methods that estimate de-noised videos with outstanding temporal consistency; however, such methods also increase the computational cost because of their larger size. Our analysis suggests that incorporating separate modules for spatial and temporal enhancement may require more computational resources. This motivates us to propose a unified architecture that directly estimates the de-rained frame with maximal visual quality and minimal computational cost. To this end, we present a deep-learning-based Frame-recurrent Multi-contextual Adversarial Network for rain-streak removal in videos. The proposed model is built upon a Conditional Generative Adversarial Network (CGAN) framework in which the generator directly estimates the de-rained frame from the previously estimated one with the help of its multi-contextual adversary. To optimize the proposed model, we incorporate a perceptual loss function in addition to the conventional Euclidean distance. Also, instead of the traditional entropy loss from the adversary, we propose using the Euclidean distance between the features of the de-rained and clean frames, extracted from the discriminator model, as a cost function for video de-raining. Experimental observations across 11 test sets, against over 10 state-of-the-art methods, using 14 image-quality metrics demonstrate the efficacy of the proposed work, both visually and computationally.
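The generator objective described in the abstract (a feature-space Euclidean distance replacing the adversary's entropy loss, plus a conventional pixel-wise Euclidean term) can be sketched as follows. The feature extractor here is a hypothetical one-layer stand-in for the discriminator's intermediate features, and the weighting `lam` is an assumed hyperparameter; neither is taken from the paper.

```python
import numpy as np

def discriminator_features(frame, weights):
    """Hypothetical stand-in for the discriminator's intermediate
    feature extractor: one linear layer with ReLU. The real model
    is a CNN; this only makes the loss terms concrete."""
    return np.maximum(0.0, weights @ frame.ravel())

def derain_loss(derained, clean, weights, lam=0.01):
    """Generator loss sketched from the abstract: Euclidean distance
    between discriminator features of the de-rained and clean frames
    (replacing the entropy loss), plus the conventional pixel-wise
    Euclidean distance. `lam` is an assumed weighting."""
    f_d = discriminator_features(derained, weights)
    f_c = discriminator_features(clean, weights)
    feature_term = np.sum((f_d - f_c) ** 2)   # feature-matching term
    pixel_term = np.sum((derained - clean) ** 2)  # pixel-wise term
    return feature_term + lam * pixel_term
```

Matching features from the discriminator, rather than fooling it via an entropy (cross-entropy) loss, is a known stabilization technique for GAN training; the abstract applies it as the de-raining cost.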


2020 ◽  Vol 27 ◽  pp. 1520-1524
Author(s):  
Xiaolei Qin ◽  
Yongxin Ge ◽  
Hui Yu ◽  
Feiyu Chen ◽  
Dan Yang

2019 ◽  Vol 78 ◽  pp. 164-175
Author(s):  
Zhen Yang ◽  
Cheng Chen ◽  
Yuqing Lin ◽  
Duming Wang ◽  
Hongting Li ◽  
...  
