Video Restoration
Recently Published Documents


TOTAL DOCUMENTS

72
(FIVE YEARS 16)

H-INDEX

8
(FIVE YEARS 2)

2021 ◽  
Author(s):  
Sachin Mehta ◽  
Amit Kumar ◽  
Fitsum Reda ◽  
Varun Nasery ◽  
Vikram Mulukutla ◽  
...  

2021 ◽  
Author(s):  
Seunghwan Lee ◽  
Donghyeon Cho ◽  
Jiwon Kim ◽  
Tae Hyun Kim
Keyword(s):  

Author(s):  
Prasen Kumar Sharma ◽  
Sujoy Ghosh ◽  
Arijit Sur

In this article, we address the problem of rain-streak removal in videos. Unlike single-image restoration, video restoration must maintain temporal consistency in addition to spatial enhancement. Researchers have proposed several effective methods that estimate de-noised videos with strong temporal consistency; however, such methods also incur a high computational cost due to their larger model size. Our analysis suggests that incorporating separate modules for spatial and temporal enhancement demands more computational resources. This motivates us to propose a unified architecture that directly estimates the de-rained frame with maximal visual quality and minimal computational cost. To this end, we present a deep learning-based Frame-recurrent Multi-contextual Adversarial Network for rain-streak removal in videos. The proposed model is built upon a Conditional Generative Adversarial Network (CGAN)-based framework in which the generator directly estimates the de-rained frame from the previously estimated one, guided by its multi-contextual adversary. To optimize the proposed model, we incorporate a perceptual loss in addition to the conventional Euclidean distance. Moreover, instead of the traditional entropy loss from the adversary, we propose using the Euclidean distance between the features of the de-rained and clean frames, extracted from the discriminator model, as a cost function for video de-raining. Extensive experiments across 11 test sets, against over 10 state-of-the-art methods, using 14 image-quality metrics, demonstrate the efficacy of the proposed work, both visually and computationally.
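The discriminator-feature loss described in the abstract (Euclidean distance between discriminator features of the de-rained and clean frames, replacing the usual entropy loss) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the feature tensors would come from an intermediate layer of the trained discriminator, which is assumed here.

```python
import numpy as np

def feature_matching_loss(feat_derained: np.ndarray, feat_clean: np.ndarray) -> float:
    """Mean squared Euclidean distance between discriminator features
    of the de-rained frame and of the clean ground-truth frame.

    Both inputs are assumed to be feature maps of identical shape,
    taken from the same intermediate discriminator layer.
    """
    diff = feat_derained - feat_clean
    return float(np.mean(diff ** 2))

# Toy usage: identical features give zero loss; differing features do not.
f_clean = np.zeros((4, 4))
f_derained = np.full((4, 4), 0.5)
loss = feature_matching_loss(f_derained, f_clean)  # 0.25
```

In a full training loop this term would be added to the perceptual and Euclidean reconstruction losses mentioned in the abstract, with weights that the abstract does not specify.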


2020 ◽  
Vol 8 (6) ◽  
pp. 1592-1595

Image colorization is a fascinating topic and has become an active area of research in recent years. In this project, we colorize black-and-white images using deep learning techniques. Some previous approaches required human involvement or produced desaturated results. We build a deep Convolutional Neural Network (CNN) trained on over a million images. The output depends entirely on the images the model was trained on and requires no human intervention. The images are taken from different sources such as ResNet, Reddit, etc. The model includes many hidden layers to improve the accuracy of the output. It is fully automatic and produces images with accurate color and contrast. The goal of this project is to produce realistic, color-accurate images that can easily fool the viewer, who would be unable to distinguish the photo the model produced from the real photo. The project has wide practical applications, such as historical image/video restoration, image enhancement for better interpretability, and frame-by-frame colorization of black-and-white documentaries.
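A common design for this kind of automatic colorization (not stated explicitly in the abstract, so treat it as an assumption) is to work in a luminance/chrominance space: the grayscale input supplies the luminance channel, the CNN predicts only the two chrominance channels, and the final image recombines them. A minimal NumPy sketch of that recombination step, with `predict_ab` standing in for the trained network:

```python
import numpy as np

def colorize(gray: np.ndarray, predict_ab) -> np.ndarray:
    """Combine the input luminance channel with network-predicted
    chrominance channels (Lab-style colorization pipeline sketch).

    gray       : H x W array, the luminance of the black-and-white input.
    predict_ab : callable mapping an H x W array to an H x W x 2 array
                 of predicted a/b chrominance values (the CNN's role).
    Returns an H x W x 3 Lab-like image.
    """
    L = gray[..., np.newaxis]                # H x W x 1 luminance
    ab = predict_ab(gray)                    # H x W x 2 chrominance
    return np.concatenate([L, ab], axis=-1)  # H x W x 3

# Toy usage with a dummy predictor that outputs neutral chrominance.
dummy_net = lambda x: np.zeros(x.shape + (2,))
lab = colorize(np.zeros((2, 2)), dummy_net)  # shape (2, 2, 3)
```

Predicting only chrominance guarantees the output stays pixel-aligned with the input luminance, which is one reason this formulation is popular for fully automatic colorization.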
