A Deep Learning Based No-Reference Image Quality Assessment Model for Single-Image Super-Resolution

Author(s): Bahetiyaer Bare, Ke Li, Bo Yan, Bailan Feng, Chunfeng Yao
PLoS ONE, 2020, Vol 15 (10), pp. e0241313
Author(s): Zhengqiang Xiong, Manhui Lin, Zhen Lin, Tao Sun, Guangyi Yang, ...

Author(s): Qiang Yu, Feiqiang Liu, Long Xiao, Zitao Liu, Xiaomin Yang

Deep-learning (DL)-based methods are of growing importance in the field of single image super-resolution (SISR). However, the practical application of these DL-based models remains problematic because they require heavy computation and large storage resources. The powerful feature maps of the hidden layers in convolutional neural networks (CNNs) help the model learn useful information, but there is redundancy among feature maps that can be further exploited. To address these issues, this paper proposes a lightweight efficient feature generating network (EFGN) for SISR built from efficient feature generating blocks (EFGBs). Specifically, an EFGB applies plain operations to the original features to produce additional feature maps with only a slight increase in parameters. With the help of these extra feature maps, the network can extract more useful information from low-resolution (LR) images to reconstruct the desired high-resolution (HR) images. Experiments on benchmark datasets demonstrate that the proposed EFGN outperforms other deep-learning-based methods in most cases while having relatively lower model complexity. Additionally, running-time measurements indicate the feasibility of real-time monitoring.
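The abstract does not specify how the EFGB derives extra maps, but the general idea of expanding base features with cheap per-map operations can be sketched as follows. This is a minimal illustrative assumption (function name, and scale-and-bias as the "plain operation", are hypothetical, in the spirit of Ghost-module-style feature generation), not the paper's actual block:

```python
import numpy as np

def cheap_expand(base_maps, expansion=2, rng=None):
    """Hypothetical sketch of an efficient feature generating block (EFGB):
    take C base feature maps and derive (expansion - 1) * C extra maps with
    cheap per-map linear operations, then concatenate all of them."""
    rng = np.random.default_rng(0) if rng is None else rng
    c, h, w = base_maps.shape
    extras = []
    for _ in range(expansion - 1):
        # "plain operation": per-map scale + bias, far fewer parameters
        # than computing the extra maps with a full convolution
        scale = rng.standard_normal((c, 1, 1))
        bias = rng.standard_normal((c, 1, 1))
        extras.append(scale * base_maps + bias)
    return np.concatenate([base_maps] + extras, axis=0)

features = np.ones((16, 8, 8))        # 16 base maps of size 8x8
out = cheap_expand(features, expansion=3)
print(out.shape)                       # (48, 8, 8)
```

The point of the design is that the extra maps cost only `2 * C * (expansion - 1)` parameters here, versus a full convolution's `C_out * C_in * k * k`.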


2019, Vol 16 (4), pp. 413-426
Author(s): Viet Khanh Ha, Jin-Chang Ren, Xin-Ying Xu, Sophia Zhao, Gang Xie, ...

2020, Vol 29 (04), pp. 1
Author(s): Yin Zhang, Junhua Yan, Xuan Du, Xuehan Bai, Xiyang Zhi, ...

Sensors, 2020, Vol 20 (22), pp. 6457
Author(s): Hayat Ullah, Muhammad Irfan, Kyungjin Han, Jong Weon Lee

Due to recent advancements in virtual reality (VR) and augmented reality (AR), the demand for high-quality immersive content is a primary concern for production companies and consumers. Similarly, the recent record-breaking performance of deep learning in various domains of artificial intelligence has drawn researchers' attention to different fields of computer vision. To ensure the quality of immersive media content with these advanced deep-learning technologies, several learning-based stitched image quality assessment methods have been proposed with reasonable performance. However, these methods are unable to localize, segment, and extract the stitching errors in panoramic images, and they rely on computationally complex procedures for quality assessment. With these motivations, this paper proposes a novel three-fold Deep Learning based No-Reference Stitched Image Quality Assessment (DLNR-SIQA) approach to evaluate the quality of immersive content. In the first fold, we fine-tune the state-of-the-art Mask R-CNN (Region-based Convolutional Neural Network) on cropped images with various manually annotated stitching errors drawn from two publicly available datasets. In the second fold, we segment and localize the stitching errors present in the immersive content. Finally, based on the distorted regions, we measure the overall quality of the stitched images. Unlike existing methods that measure image quality using deep features alone, the proposed method can efficiently segment and localize stitching errors and estimate image quality by analyzing the segmented regions.
We also carried out extensive qualitative and quantitative comparisons with full-reference image quality assessment (FR-IQA) and no-reference image quality assessment (NR-IQA) methods on two publicly available datasets, where the proposed system outperformed existing state-of-the-art techniques.
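The abstract's third stage (scoring quality from segmented error regions) can be sketched with a simple rule: penalize the weighted fraction of pixels covered by the predicted error masks. The function name, the linear penalty, and the per-error weights are all illustrative assumptions, not the scoring rule actually used by DLNR-SIQA:

```python
import numpy as np

def stitched_quality_score(error_masks, weights=None):
    """Hypothetical scoring rule in the spirit of DLNR-SIQA's final stage:
    combine binary stitching-error masks (one per detected error) into a
    single score by penalizing the weighted fraction of distorted pixels."""
    h, w = error_masks[0].shape
    weights = weights if weights is not None else [1.0] * len(error_masks)
    # fraction of the frame covered by each error type, weighted by severity
    penalty = sum(wt * m.sum() / (h * w) for m, wt in zip(error_masks, weights))
    return max(0.0, 1.0 - penalty)   # clamp so the score stays in [0, 1]

mask = np.zeros((100, 100))
mask[:10, :10] = 1                   # 1% of the frame is distorted
print(round(stitched_quality_score([mask]), 2))   # 0.99
```

A real system would likely weight errors by perceptual severity and viewing region rather than treating all distorted pixels equally.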


Author(s): Lujun Lin, Yiming Fang, Xiaochen Du, Zhu Zhou

As in other practical applications, high-resolution images are expected to provide a more accurate assessment in the air-coupled ultrasonic (ACU) characterization of wooden materials. This paper investigated the feasibility of applying single image super-resolution (SISR) methods to recover high-quality ACU images from the raw observations produced directly by off-the-shelf ACU scanners. Four state-of-the-art SISR methods were applied to low-resolution ACU images of wood products. The reconstructed images were evaluated by visual assessment and by objective image quality metrics, including peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). Both qualitative and quantitative evaluations indicated that substantial improvements in image quality can be achieved. The experimental results demonstrated the superior performance and high reproducibility of the approach for generating high-quality ACU images. Sparse-coding-based super-resolution and the super-resolution convolutional neural network (SRCNN) significantly outperformed the other algorithms, and SRCNN, owing to its flexibility, has the potential to act as an effective tool for generating higher-resolution ACU images.
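Of the two objective metrics mentioned, PSNR has a simple closed form: 10 · log10(MAX² / MSE), where MAX is the peak pixel value. A minimal reference implementation (the variable names are ours; this is the standard definition, not code from the paper):

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio between a reference image and a
    reconstruction, in dB: 10 * log10(max_val**2 / MSE)."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")           # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

ref = np.full((32, 32), 128.0)
noisy = ref + 16.0                    # uniform error of 16 -> MSE = 256
print(round(psnr(ref, noisy), 2))     # 24.05
```

SSIM is more involved (local means, variances, and covariances over a sliding window); in practice both metrics are available as `peak_signal_noise_ratio` and `structural_similarity` in `skimage.metrics`.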

