Motion-blur kernel size estimation via learning a convolutional neural network

2019 ◽  
Vol 119 ◽  
pp. 86-93 ◽  
Author(s):  
Lerenhan Li ◽  
Nong Sang ◽  
Luxin Yan ◽  
Changxin Gao
2014 ◽  
Vol 31 (5) ◽  
pp. 733-746 ◽  
Author(s):  
Shaoguo Liu ◽  
Haibo Wang ◽  
Jue Wang ◽  
Sunghyun Cho ◽  
Chunhong Pan

Author(s):  
Vikas Kumar ◽  
Tanupriya Choudhury ◽  
Suresh Chandra Satapathy ◽  
Ravi Tomar ◽  
Archit Aggarwal

Recently, significant progress has been achieved in single-image super-resolution, which reconstructs a high-resolution image from a low-resolution input. SRCNN (Super-Resolution Convolutional Neural Network) was a substantial improvement over earlier single-image super-resolution methods. However, video super-resolution, despite being an active field of research, is yet to benefit fully from deep learning. Using still images and videos downloaded from various sources, we explore the possibility of using SRCNN along with image fusion techniques (minima, maxima, average, PCA, DWT) to improve over existing video super-resolution methods. Video super-resolution has inherent difficulties such as unexpected motion, blur and noise. We propose a Video Super Resolution – Image Fusion (VSR-IF) architecture which utilizes information from multiple frames to produce a single high-resolution frame for a video. We use SRCNN as a reference model to obtain high-resolution adjacent frames and use a concatenation layer to group those frames into a single frame. Since our method is data-driven and requires only minimal initial training, it is faster than other video super-resolution methods. After testing our program, we find that our technique shows a significant improvement over SRCNN and other single-image and frame super-resolution techniques.
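The simplest fusion rules the abstract lists (minima, maxima, average) are pixel-wise reductions over a stack of co-registered super-resolved frames. A minimal sketch of that step follows; the function name and interface are illustrative (not from the paper), and the PCA and DWT variants, which need additional machinery, are omitted:

```python
import numpy as np

def fuse_frames(frames, method="average"):
    """Fuse co-registered super-resolved frames into one output frame.

    frames: list of HxW (or HxWxC) arrays, e.g. SRCNN outputs for
    adjacent video frames after alignment. Each fusion rule is a
    pixel-wise reduction across the frame axis.
    """
    stack = np.stack(frames, axis=0).astype(np.float64)
    if method == "average":
        return stack.mean(axis=0)
    if method == "maxima":
        return stack.max(axis=0)
    if method == "minima":
        return stack.min(axis=0)
    raise ValueError(f"unknown fusion method: {method}")

# Example: three constant 2x2 "frames" with values 1, 2, 3.
frames = [np.full((2, 2), v) for v in (1.0, 2.0, 3.0)]
fused = fuse_frames(frames, "average")  # every pixel becomes 2.0
```

In practice the frames would first be upscaled by SRCNN and motion-compensated so that corresponding pixels align; average fusion then suppresses per-frame noise, while maxima/minima fusion favors the brightest or darkest observation at each pixel.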

