Single Image Super-Resolution Using Deep CNN with Dense Skip Connections and Inception-ResNet

Author(s):  
Chao Chen ◽  
Feng Qi
Electronics ◽  
2021 ◽  
Vol 10 (16) ◽  
pp. 1979
Author(s):  
Wazir Muhammad ◽  
Zuhaibuddin Bhutto ◽  
Arslan Ansari ◽  
Mudasar Latif Memon ◽  
Ramesh Kumar ◽  
...  

Recent research on single-image super-resolution (SISR) using deep convolutional neural networks has made a breakthrough and achieved tremendous performance. Despite this significant progress, many convolutional neural networks (CNNs) remain limited in practical applications owing to their heavy computational cost. This paper proposes a multi-path network for SISR, known as the multi-path deep CNN with residual inception network for single image super-resolution. In detail, a residual (ResNet) block combined with an Inception block forms the main framework of the entire network architecture. In addition, the batch normalization layer is removed from the ResNet block and the max-pooling layer from the Inception block to further reduce the number of parameters and prevent over-fitting during training. Moreover, the conventional rectified linear unit (ReLU) is replaced with the Leaky ReLU activation function to speed up the training process. Specifically, we propose a novel upscale module that adopts three paths to upscale the features by jointly using deconvolution and upsampling layers, instead of a single deconvolution or upsampling layer alone. Extensive experiments on image super-resolution (SR) using five publicly available test datasets show that the proposed model not only attains higher peak signal-to-noise ratio/structural similarity index measure (PSNR/SSIM) scores but also enables faster and more efficient computation than existing image SR methods. For instance, at the challenging upscale factor of 8× on the Set5 dataset, our method improves the overall PSNR by 1.88 dB over the bicubic baseline and reduces the computational cost, in terms of the number of parameters, by 62% compared with the deeply-recursive convolutional neural network (DRCN) method.
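
To make the two architectural ideas in this abstract concrete, the sketch below (PyTorch) shows a residual block with batch normalization removed and Leaky ReLU, plus a multi-path upscale module that mixes a deconvolution path with upsampling paths. This is a minimal illustration, not the authors' published code: the channel counts, kernel sizes, the 0.2 negative slope, and the fusion of the three paths by summation are all assumptions.

```python
# Minimal sketch of the abstract's ideas (assumed hyper-parameters, not the
# authors' configuration): a residual block without batch normalization that
# uses Leaky ReLU, and a three-path upscale module combining deconvolution
# and upsampling layers.
import torch
import torch.nn as nn


class ResBlockNoBN(nn.Module):
    """Residual block with batch normalization removed and Leaky ReLU."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # identity skip connection


class MultiPathUpscale(nn.Module):
    """Upscale features along three paths: deconvolution plus two upsamplers."""

    def __init__(self, channels: int = 64, scale: int = 2):
        super().__init__()
        # Path 1: learned deconvolution (transposed convolution).
        self.deconv = nn.ConvTranspose2d(
            channels, channels, kernel_size=scale * 2,
            stride=scale, padding=scale // 2)
        # Path 2: nearest-neighbour upsampling followed by a convolution.
        self.up_nearest = nn.Sequential(
            nn.Upsample(scale_factor=scale, mode="nearest"),
            nn.Conv2d(channels, channels, 3, padding=1))
        # Path 3: bilinear upsampling followed by a convolution.
        self.up_bilinear = nn.Sequential(
            nn.Upsample(scale_factor=scale, mode="bilinear", align_corners=False),
            nn.Conv2d(channels, channels, 3, padding=1))
        self.act = nn.LeakyReLU(0.2, inplace=True)

    def forward(self, x):
        # Fuse the three upscaled feature maps by summation (an assumption).
        return self.act(self.deconv(x) + self.up_nearest(x) + self.up_bilinear(x))


if __name__ == "__main__":
    feats = torch.randn(1, 64, 32, 32)               # dummy LR feature map
    up = MultiPathUpscale(channels=64, scale=2)(feats)
    print(up.shape)                                   # torch.Size([1, 64, 64, 64])
```

In this sketch the residual block is applied in the feature-extraction stage and the multi-path module only at the end of the network, which keeps all heavy computation at the low-resolution scale.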


Author(s):  
Qiang Yu ◽  
Feiqiang Liu ◽  
Long Xiao ◽  
Zitao Liu ◽  
Xiaomin Yang

Deep-learning (DL)-based methods are of growing importance in the field of single image super-resolution (SISR). However, the practical application of these DL-based models remains a problem because of their heavy computation and large storage requirements. The powerful feature maps of hidden layers in convolutional neural networks (CNNs) help the model learn useful information, yet there is redundancy among these feature maps that can be further exploited. To address these issues, this paper proposes a lightweight efficient feature generating network (EFGN) for SISR built from the efficient feature generating block (EFGB). Specifically, the EFGB applies plain operations to the original features to produce more feature maps with only a slight increase in parameters. With the help of these extra feature maps, the network can extract more useful information from low-resolution (LR) images to reconstruct the desired high-resolution (HR) images. Experiments on benchmark datasets demonstrate that the proposed EFGN outperforms other deep-learning-based methods in most cases while having relatively low model complexity. Additionally, running-time measurements indicate its feasibility for real-time monitoring.
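
The idea of generating extra feature maps from a few primary ones through cheap operations can be illustrated with the small sketch below (PyTorch). It is one plausible reading of the block described above, not the published EFGB design: the use of a depthwise convolution as the "plain operation", the 50/50 channel split, and the concatenation-based fusion are assumptions made here for illustration.

```python
# Minimal sketch (assumed design): a block that produces half of its output
# channels with a standard convolution and generates the remaining half from
# those primary maps with a cheap depthwise convolution, so the parameter
# count grows only slightly.
import torch
import torch.nn as nn


class CheapFeatureBlock(nn.Module):
    def __init__(self, in_channels: int = 64, out_channels: int = 64):
        super().__init__()
        primary = out_channels // 2
        # Standard convolution produces the primary feature maps.
        self.primary_conv = nn.Sequential(
            nn.Conv2d(in_channels, primary, 3, padding=1),
            nn.ReLU(inplace=True))
        # Depthwise convolution (groups == channels) generates the extra maps
        # from the primary ones at a small parameter cost.
        self.cheap_conv = nn.Sequential(
            nn.Conv2d(primary, out_channels - primary, 3, padding=1, groups=primary),
            nn.ReLU(inplace=True))

    def forward(self, x):
        primary = self.primary_conv(x)
        extra = self.cheap_conv(primary)
        # Concatenate primary and cheaply generated maps along the channel axis.
        return torch.cat([primary, extra], dim=1)


if __name__ == "__main__":
    x = torch.randn(1, 64, 48, 48)
    y = CheapFeatureBlock(64, 64)(x)
    print(y.shape)  # torch.Size([1, 64, 48, 48])
```

For the default 64-channel setting, the depthwise branch adds only a few hundred parameters, compared with several thousand for a full convolution producing the same number of maps, which is the kind of trade-off the abstract describes.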


Author(s):  
Vishal Chudasama ◽  
Kishor Upla ◽  
Kiran Raja ◽  
Raghavendra Ramachandra ◽  
Christoph Busch

IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Kai Shao ◽  
Qinglan Fan ◽  
Yunfeng Zhang ◽  
Fangxun Bao ◽  
Caiming Zhang

2021 ◽  
Vol 213 ◽  
pp. 106663
Author(s):  
Yujie Dun ◽  
Zongyang Da ◽  
Shuai Yang ◽  
Yao Xue ◽  
Xueming Qian
