Edge Enhancement Loss Function for Target Object IR Image Super Resolution

Author(s): Kuan-Min Lee, Pei-Jun Lee, Trong-An Bui

2020, Vol 79 (29-30), pp. 21265-21278
Author(s): Qiong Wu, Chunxiao Fan, Yong Li, Yang Li, Jiahao Hu

IEEE Access, 2018, Vol 6, pp. 57856-57867
Author(s): Hong Zheng, Kun Zeng, Di Guo, Jiaxi Ying, Yu Yang, et al.

2021, Vol 13 (19), pp. 3848
Author(s): Yuntao Wang, Lin Zhao, Liman Liu, Huaifei Hu, Wenbing Tao

Designing lightweight image super-resolution (SR) algorithms is essential for portable devices and other platforms with low computing power. Most recent SR methods achieve outstanding performance at the cost of heavy computation and memory storage, or sacrifice accuracy for efficiency. To address this problem, we introduce a lightweight U-shaped residual network (URNet) for fast and accurate image SR. Specifically, we propose a more effective feature distillation pyramid residual group (FDPRG) to extract features from low-resolution images. The FDPRG effectively reuses learned features through dense shortcuts and captures multi-scale information with a cascaded feature pyramid block. Based on the U-shaped structure, we use a step-by-step fusion strategy to improve the fusion of features from different blocks; this differs from general SR methods, which fuse the features of all basic blocks with a single Concat operation. Moreover, a lightweight asymmetric residual non-local block is proposed to model global context information and further improve SR performance. Finally, a high-frequency loss function is designed to alleviate the smoothing of image details caused by pixel-wise losses. The proposed modules and the high-frequency loss function can also be easily plugged into multiple mature architectures to improve their SR performance. Extensive experiments on multiple natural-image and remote-sensing datasets show that URNet achieves a better trade-off between SR performance and model complexity than other state-of-the-art SR methods.
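The abstract does not give the formulation of URNet's high-frequency loss. The sketch below (PyTorch) shows one common way such a term can be built, assuming a fixed Laplacian filter to isolate high-frequency detail before an L1 penalty; the kernel choice, the high_frequency_loss name, and the 0.1 weight in the usage comment are illustrative assumptions, not the paper's definitions.

```python
import torch
import torch.nn.functional as F

# 3x3 Laplacian kernel applied depthwise; an assumed stand-in for whatever
# high-pass operator the paper actually uses.
_LAPLACIAN = torch.tensor([[0., 1., 0.],
                           [1., -4., 1.],
                           [0., 1., 0.]])

def high_frequency_loss(sr: torch.Tensor, hr: torch.Tensor) -> torch.Tensor:
    """L1 distance between high-frequency maps of the SR output and the HR target."""
    c = sr.shape[1]
    kernel = _LAPLACIAN.to(sr.device, sr.dtype).view(1, 1, 3, 3).repeat(c, 1, 1, 1)
    hf_sr = F.conv2d(sr, kernel, padding=1, groups=c)  # high-frequency map of the prediction
    hf_hr = F.conv2d(hr, kernel, padding=1, groups=c)  # high-frequency map of the ground truth
    return F.l1_loss(hf_sr, hf_hr)

# Typical usage alongside a pixel-wise loss; the 0.1 weight is illustrative:
# total = F.l1_loss(sr, hr) + 0.1 * high_frequency_loss(sr, hr)
```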


Sensors, 2021, Vol 21 (6), pp. 2064
Author(s): Chunmei Fu, Yong Yin

Significant progress has been made in single image super-resolution (SISR) based on deep convolutional neural networks (CNNs). Attention mechanisms can capture important features well, and feedback mechanisms allow the output to fine-tune the input, yet neither has been applied effectively in existing deep learning-based SISR methods. In addition, the results of existing methods still suffer from serious artifacts and edge blurring. To address these issues, we propose an Edge-enhanced with Feedback Attention Network for image super-resolution (EFANSR), which comprises three parts. The first part is an SR reconstruction network that adaptively learns the features of different inputs by integrating channel attention and spatial attention blocks to make full use of the features; we also introduce a feedback mechanism that feeds high-level information back to the input and fine-tunes it in a dense spatial and channel attention block. The second part is an edge enhancement network, which obtains sharp edges by adaptively enhancing the edges of the first network's output. The final part merges the outputs of the first two parts to obtain the final edge-enhanced SR image. Experimental results show that our method achieves performance comparable to state-of-the-art methods at lower complexity.
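The abstract describes the reconstruction branch only as integrating channel attention and spatial attention blocks. The sketch below is a generic CBAM-style combination of the two, intended as an illustration of that idea rather than EFANSR's actual layers; the class name, reduction ratio, and 7x7 kernel size are assumptions.

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """Hypothetical block: channel reweighting followed by a spatial mask."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel attention: squeeze spatially, excite per channel.
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: a single-channel mask over spatial positions.
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_mlp(x)                # reweight channels
        avg_map = x.mean(dim=1, keepdim=True)      # per-position channel average
        max_map = x.amax(dim=1, keepdim=True)      # per-position channel maximum
        mask = self.spatial_conv(torch.cat([avg_map, max_map], dim=1))
        return x * mask                            # reweight spatial positions
```

How such blocks are stacked, and how the feedback path re-injects high-level features into them, is not specified in the abstract and is left out of this sketch.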

