Dual Features Extraction Network for Image Super-Resolution

2021

Author(s): Guosheng Zhao, Kun Wang

With the development of deep convolutional neural networks, recent research on single image super-resolution (SISR) has made great progress. In particular, networks that fully utilize features achieve better performance. In this paper, we propose a dual features extraction network for image super-resolution (SRDFN). Our method uses dual features extraction blocks (DFBs) to extract and combine low-resolution features, which contain less noise but less detail, and high-resolution features, which contain more detail but more noise. The output of a DFB thus combines the advantages of both: more detail and less noise. Moreover, because the number of DFBs and channels can be chosen by weighing accuracy against model size, SRDFN can be tailored to the situation at hand. Experimental results demonstrate that the proposed SRDFN performs well in comparison with state-of-the-art methods.
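The abstract does not specify how a DFB fuses the two feature streams, since the fusion is learned. Purely as an illustration of the underlying idea (combining a smooth, low-noise signal with a detailed but noisy one), the hypothetical sketch below blends two feature maps with a fixed weight; `fuse_features` and `alpha` are assumptions, not part of the paper.

```python
import numpy as np

def fuse_features(low_res_feat, high_res_feat, alpha=0.5):
    """Illustrative pixel-wise blend of two feature maps of equal shape.

    `alpha` weights the smooth (low-resolution) branch against the
    detailed (high-resolution) branch. The actual DFB learns its
    fusion; this fixed weighted average only conveys the intuition.
    """
    return alpha * low_res_feat + (1.0 - alpha) * high_res_feat

# Toy example: a clean ramp and a noisy, "detailed" version of it.
rng = np.random.default_rng(0)
base = np.linspace(0.0, 1.0, 64).reshape(8, 8)
low = base                                        # little noise, little detail
high = base + 0.1 * rng.standard_normal((8, 8))   # more detail, more noise
fused = fuse_features(low, high, alpha=0.5)
```

Averaging halves the noise contributed by the high-resolution branch while keeping part of its detail, which is the trade-off the DFB is designed to resolve.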

Author(s): Vikas Kumar, Tanupriya Choudhury, Suresh Chandra Satapathy, Ravi Tomar, Archit Aggarwal

Recently, huge progress has been achieved in the field of single image super-resolution, which augments the resolution of images. The idea behind super-resolution is to convert low-resolution images into high-resolution images. SRCNN (Super-Resolution Convolutional Neural Network) was a major improvement over the existing methods of single-image super-resolution. However, video super-resolution, despite being an active field of research, is yet to benefit fully from deep learning. Using still images and videos downloaded from various sources, we explore the possibility of using SRCNN along with image fusion techniques (minima, maxima, average, PCA, DWT) to improve on existing video super-resolution methods. Video super-resolution has inherent difficulties such as unexpected motion, blur, and noise. We propose the Video Super-Resolution – Image Fusion (VSR-IF) architecture, which utilizes information from multiple frames to produce a single high-resolution frame for a video. We use SRCNN as a reference model to obtain high-resolution adjacent frames and use a concatenation layer to group those frames into a single frame. Since our method is data-driven and requires only minimal initial training, it is faster than other video super-resolution methods. After testing our program, we find that our technique shows a significant improvement over SRCNN and other single-image and frame super-resolution techniques.
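Three of the fusion rules named above (minima, maxima, average) are simple pixel-wise operations over aligned frames and can be sketched directly; PCA- and DWT-based fusion need more machinery and are omitted. The function name `fuse_frames` is an assumption for illustration, not part of the VSR-IF architecture.

```python
import numpy as np

def fuse_frames(frames, method="average"):
    """Pixel-wise fusion of a list of aligned, equal-shape frames.

    Implements the minima, maxima, and average rules mentioned in
    the abstract; this is a generic sketch, not the authors' code.
    """
    stack = np.stack(frames, axis=0)   # shape: (n_frames, H, W)
    if method == "minima":
        return stack.min(axis=0)
    if method == "maxima":
        return stack.max(axis=0)
    if method == "average":
        return stack.mean(axis=0)
    raise ValueError(f"unknown fusion method: {method}")

# Two toy 2x2 "frames" with intensities in [0, 1].
a = np.array([[0.2, 0.8], [0.4, 0.6]])
b = np.array([[0.6, 0.4], [0.2, 0.8]])
fused_avg = fuse_frames([a, b], method="average")
```

In a pipeline like the one described, such a rule would run after SRCNN has upscaled the adjacent frames, collapsing them into the single output frame.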

