Blind Image Super Resolution Using Deep Unsupervised Learning

Electronics, 2021, Vol. 10 (21), pp. 2591
Author(s): Kazuhiro Yamawaki, YongQing Sun, Xian-Hua Han

The goal of single image super-resolution (SISR) is to recover a high-resolution (HR) image from a low-resolution (LR) image. Deep learning based methods have recently achieved remarkable gains in both effectiveness and efficiency for SISR. Most existing methods must be trained on large-scale synthetic paired data in a fully supervised manner: given available HR natural images, the corresponding LR images are usually synthesized with a simple fixed degradation operation, such as bicubic down-sampling. Deep models learned from such training data therefore generalize poorly to real scenarios with unknown and complicated degradation operations. This study proposes a novel blind image super-resolution framework based on a deep unsupervised learning network. The proposed method simultaneously predicts the underlying HR image and its specific degradation operation from the observed LR image alone, without any prior knowledge. Experimental results on three benchmark datasets validate that the proposed method achieves promising performance under unknown degradation models.
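The fixed-degradation training setup criticized above can be made concrete with a tiny sketch. The blur-then-subsample pipeline below is a stand-in for bicubic down-sampling (a box blur is used so no image library is needed); real blind-SR scenarios also involve unknown kernels, noise, and compression, which is exactly why models trained on one fixed operator generalize poorly.

```python
import numpy as np

def degrade(hr, scale=2):
    """Synthesize an LR image from an HR image with one fixed, known
    degradation: a 3x3 box blur followed by sub-sampling.  A simplified
    stand-in for the bicubic pipeline the abstract refers to."""
    h, w = hr.shape
    blurred = np.empty_like(hr, dtype=np.float64)
    for i in range(h):
        for j in range(w):
            # clip the 3x3 window at the image border
            i0, i1 = max(i - 1, 0), min(i + 2, h)
            j0, j1 = max(j - 1, 0), min(j + 2, w)
            blurred[i, j] = hr[i0:i1, j0:j1].mean()
    # keep every `scale`-th pixel in each direction
    return blurred[::scale, ::scale]

hr = np.arange(64, dtype=np.float64).reshape(8, 8)
lr = degrade(hr, scale=2)
```

A supervised SISR model trained only on pairs produced this way has, in effect, memorized this one operator; the paper's blind setting instead estimates the operator jointly with the HR image.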


Author(s): Qiang Yu, Feiqiang Liu, Long Xiao, Zitao Liu, Xiaomin Yang

Deep-learning (DL)-based methods are of growing importance in the field of single image super-resolution (SISR). Practical deployment of these DL-based models remains a problem, however, because of their heavy computation and storage requirements. The feature maps of hidden layers in convolutional neural networks (CNNs) help the model learn useful information, but there is redundancy among these feature maps that can be further exploited. To address these issues, this paper proposes a lightweight efficient feature generating network (EFGN) for SISR built from efficient feature generating blocks (EFGBs). Specifically, the EFGB applies plain operations to the original features to produce additional feature maps with only a slight increase in parameters. With the help of these extra feature maps, the network can extract more useful information from low-resolution (LR) images to reconstruct the desired high-resolution (HR) images. Experiments on benchmark datasets demonstrate that the proposed EFGN outperforms other deep-learning based methods in most cases while having relatively lower model complexity. Additionally, running-time measurements indicate its feasibility for real-time monitoring.
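The "plain operations on the original features" idea can be sketched as follows. This is an illustrative assumption about the block's spirit, not the published EFGB design: each cheap branch applies one tiny per-channel (depthwise) 3×3 filter, so generating an extra set of C maps costs only C·9 parameters instead of the C·C·9 of a full convolution.

```python
import numpy as np

def generate_features(base, n_cheap=2, rng=None):
    """Given `base` feature maps of shape (C, H, W), produce extra maps by
    applying cheap per-channel 3x3 filters, returning
    (C * (1 + n_cheap), H, W) maps.  Filter values are random here; in a
    network they would be learned."""
    rng = np.random.default_rng(0) if rng is None else rng
    c, h, w = base.shape
    out = [base]
    padded = np.pad(base, ((0, 0), (1, 1), (1, 1)), mode="edge")
    for _ in range(n_cheap):
        k = rng.normal(size=(c, 3, 3))      # one tiny filter per channel
        cheap = np.zeros_like(base)
        for di in range(3):
            for dj in range(3):
                cheap += k[:, di, dj][:, None, None] * padded[:, di:di + h, dj:dj + w]
        out.append(cheap)
    return np.concatenate(out, axis=0)

feats = generate_features(np.ones((4, 8, 8)), n_cheap=2)
```

Tripling the channel count this way adds 2·4·9 = 72 parameters, versus 2·4·4·9 = 288 for two full 3×3 convolutions at the same width.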



Electronics, 2021, Vol. 10 (11), pp. 1234
Author(s): Lei Zha, Yu Yang, Zicheng Lai, Ziwei Zhang, Juan Wen

In recent years, neural networks for single image super-resolution (SISR) have adopted ever deeper network structures to extract additional image details, which makes model training difficult. To address this, researchers use dense skip connections to strengthen the model's feature representation by reusing deep features from different receptive fields. Benefiting from its dense connection blocks, SRDenseNet has achieved excellent performance in SISR. However, although a densely connected structure provides rich information, it also introduces redundant and useless information. To tackle this problem, this paper proposes a Lightweight Dense Connected Approach with Attention for Single Image Super-Resolution (LDCASR), which employs an attention mechanism to extract useful information along the channel dimension. In particular, we propose the recursive dense group (RDG), built from Dense Attention Blocks (DABs), which obtains more significant representations by extracting deep features with the aid of both dense connections and the attention module, so that the whole network focuses on learning higher-level feature information. Additionally, we introduce group convolution in the DABs, which reduces the number of parameters to 0.6 M. Extensive experiments on benchmark datasets demonstrate the superiority of the proposed method over five chosen SISR methods.
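Channel attention of the kind the DABs rely on is usually the squeeze-and-excitation formulation, sketched below; whether LDCASR uses exactly this gating is an assumption. Each channel is summarized by global average pooling, passed through a small bottleneck, and the resulting sigmoid gate rescales that channel, letting the network suppress the redundant maps that dense connections accumulate.

```python
import numpy as np

def channel_attention(feats, w1, w2):
    """Squeeze-and-excitation style channel attention over feature maps of
    shape (C, H, W): pool, bottleneck (C -> C/r -> C), sigmoid-gate."""
    squeezed = feats.mean(axis=(1, 2))            # (C,) channel descriptor
    hidden = np.maximum(w1 @ squeezed, 0.0)       # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # per-channel gate in (0, 1)
    return feats * gate[:, None, None]            # rescale each channel

rng = np.random.default_rng(0)
c, r = 8, 2
x = rng.normal(size=(c, 4, 4))
out = channel_attention(x, rng.normal(size=(c // r, c)), rng.normal(size=(c, c // r)))
```

The group convolution mentioned in the abstract is orthogonal to this: splitting a convolution into g groups divides its weight count by g, which is how the 0.6 M budget is reached.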



2019, Vol. 2019, pp. 1-14
Author(s): Jianhong Li, Kanoksak Wattanachote, Yarong Wu

Prior knowledge plays an important role in image super-resolution reconstruction, as it can constrain the solution space efficiently. In this paper, we exploit the fact that a clean image exhibits stronger self-similarity than its degraded versions to present a new prior, maximizing nonlocal self-similarity, for single image super-resolution. To express the prior mathematically, a joint Gaussian mixture model is trained on LR and HR patch pairs extracted from the input LR image and its lower scale, and the prior can be derived as a specific Gaussian distribution. Our algorithm requires neither large-scale sophisticated training nor time-consuming nearest-neighbor searching, and its cost function has a closed-form solution. Experiments conducted on BSD500 and other popular images demonstrate that the proposed method outperforms traditional methods and is competitive with current state-of-the-art algorithms in terms of both quantitative metrics and visual quality.
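The closed-form property comes from a standard identity: if stacked (LR, HR) patch vectors are jointly Gaussian, the HR patch conditioned on the LR patch is again Gaussian with an explicit mean. The sketch below shows one component; a GMM, as used in the paper, applies this per mixture component and weights the results (the weighting step is omitted here).

```python
import numpy as np

def conditional_gaussian(mu, sigma, d_lr, x_lr):
    """Conditional mean of the HR part of a joint Gaussian over stacked
    (LR, HR) vectors:  mu_h + S_hl @ inv(S_ll) @ (x_lr - mu_l)."""
    mu_l, mu_h = mu[:d_lr], mu[d_lr:]
    s_ll = sigma[:d_lr, :d_lr]          # LR-LR covariance block
    s_hl = sigma[d_lr:, :d_lr]          # HR-LR cross-covariance block
    return mu_h + s_hl @ np.linalg.solve(s_ll, x_lr - mu_l)

# toy 1-D LR / 1-D HR example: zero means, unit variances, correlation 0.5
mu = np.zeros(2)
sigma = np.array([[1.0, 0.5],
                  [0.5, 1.0]])
est = conditional_gaussian(mu, sigma, d_lr=1, x_lr=np.array([2.0]))
```

Because the estimate is a linear function of the observed LR patch, no nearest-neighbor search over a training corpus is needed at test time, which is the efficiency claim the abstract makes.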



Author(s): Yu Weng, Zehua Chen, Tianbao Zhou

Deep learning has shown prominent superiority over other machine learning algorithms in single image super-resolution (SISR). To reduce the effort and resources spent on manually designing deep architectures, we apply differentiable architecture search (DARTS) to SISR. Since neural architecture search was originally developed for classification tasks, our experiments show that applying DARTS directly to super-resolution tasks yields many skip connections in the searched architecture, which degrades the performance of the final architecture. DARTS therefore needs several modifications for the SISR setting. Based on the characteristics of SISR, we remove redundant operations and redesign some operations in the cell to obtain an improved DARTS. We then use the improved DARTS to search for convolution cells that form the nonlinear mapping part of the super-resolution network. The new super-resolution architecture shows its effectiveness on benchmark datasets and the DIV2K dataset.
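DARTS's core trick, relevant to the skip-connection problem above, is to relax the discrete choice among candidate operations on each edge into a softmax-weighted sum, so the architecture parameters can be learned by gradient descent. A minimal sketch, with toy stand-in operations (the real candidate set contains convolutions, pooling, skip, and zero):

```python
import numpy as np

def mixed_op(x, alphas, ops):
    """DARTS mixed operation: output of an edge is the softmax(alphas)-
    weighted sum of all candidate operations applied to the input."""
    w = np.exp(alphas - alphas.max())
    w = w / w.sum()                      # softmax over candidate ops
    return sum(wi * op(x) for wi, op in zip(w, ops))

# illustrative candidates: skip (identity), a "conv" stand-in, and zero
ops = [lambda x: x, lambda x: 2.0 * x, lambda x: np.zeros_like(x)]
x = np.ones(4)
y = mixed_op(x, np.array([0.0, 0.0, 0.0]), ops)
```

After search, each edge keeps only its highest-weighted candidate. In SR search the identity (skip) candidate tends to attract large weights, which is the failure mode the paper's redesigned cell operations are meant to counter.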



Author(s): Yanchun Li, Jianglian Cao, Zhetao Li, Sangyoon Oh, Nobuyoshi Komuro

Single image super-resolution attempts to reconstruct a high-resolution (HR) image from its corresponding low-resolution (LR) image and has been a research hotspot in computer vision and image processing for decades. To improve the accuracy of super-resolved images, many works adopt very deep networks to model the LR-to-HR mapping, at the cost of heavy memory and computation consumption. In this article, we design a lightweight dense connection distillation network combining feature fusion units and dense connection distillation blocks (DCDBs) that include selective cascading and dense distillation components. Dense connections are used both between and within the distillation blocks, providing rich information for image reconstruction by fusing shallow and deep features. In each DCDB, the dense distillation module concatenates the remaining feature maps of all previous layers to extract useful information; the selected features are then reweighted by the proposed layer contrast-aware channel attention mechanism; and finally the cascade module aggregates the features. The distillation mechanism helps reduce training parameters and improve training efficiency, and the layer contrast-aware channel attention further improves the model's performance. Qualitative and quantitative experimental results on several benchmark datasets show that the proposed method achieves a better trade-off between accuracy and efficiency.
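The distillation mechanism described above can be sketched as a channel split, in the spirit of information-distillation networks; the split ratio and surrounding dense connections and attention are assumptions, not the paper's exact DCDB. At each step a slice of channels is retained directly for the final fusion while the rest is passed on for further refinement, which is what keeps the parameter count down.

```python
import numpy as np

def distill_step(feats, keep_ratio=0.25):
    """One information-distillation step on (C, H, W) features: retain a
    keep_ratio slice of channels for later fusion, forward the rest."""
    c = feats.shape[0]
    k = max(1, int(c * keep_ratio))
    retained, remaining = feats[:k], feats[k:]
    return retained, remaining

x = np.ones((8, 4, 4))
retained, remaining = distill_step(x)
```

Only the `remaining` slice is processed by the next (expensive) layer, so each step refines progressively fewer channels while the retained slices are concatenated at the end, the pattern the "dense distillation module concatenates the remaining feature maps" sentence describes.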



2021, Vol. 2021, pp. 1-13
Author(s): Kai Huang, Wenhao Wang, Cheng Pang, Rushi Lan, Ji Li, ...

Convolutional neural networks have driven significant progress in single image super-resolution (SISR). However, most existing CNN-based models suffer from numerous parameters and excessively deep structures. Moreover, by relying on in-depth features, these models commonly ignore the hints in low-level features, resulting in poor performance. This paper presents a network for SISR with cascading and residual connections (CASR) that alleviates these problems by extracting features in a small head module built on depthwise separable convolution and deformable convolution. We also include a cascading residual block (CAS-Block) for the upsampling process, which benefits gradient propagation and feature learning while easing model training. Extensive experiments conducted on four benchmark datasets demonstrate that the proposed method is superior to recent SISR methods in terms of both quantitative indicators and realistic visual effects.
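The saving from depthwise separable convolution, which lets the head module stay small, is simple arithmetic: a standard k×k convolution couples every input channel to every output channel, while the separable version uses one k×k filter per input channel plus a 1×1 pointwise mix. A quick check (biases omitted):

```python
def conv_params(c_in, c_out, k):
    """Parameter count of a standard k x k convolution (no bias)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise (c_in filters of k x k) plus pointwise (1 x 1) stage."""
    return c_in * k * k + c_in * c_out

std = conv_params(64, 64, 3)                 # 64 * 64 * 9  = 36864
sep = depthwise_separable_params(64, 64, 3)  # 64*9 + 64*64 = 4672
```

At this typical 64-channel, 3×3 configuration the separable form needs roughly one eighth of the parameters, which is why it is a common choice for lightweight SISR models such as the one described here.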



Author(s): Ahmed Cheikh Sidiya, Xin Li

Face image synthesis has advanced rapidly in recent years. However, similar success has not been witnessed in related areas such as face single image super-resolution (SISR), and the performance of SISR on real-world low-quality face images remains unsatisfactory. In this paper, we demonstrate how to advance the state of the art in face SISR by leveraging a style-based generator in unsupervised settings. For real-world low-resolution (LR) face images, we propose a novel unsupervised learning approach that combines a style-based generator with a relativistic discriminator. With a carefully designed training strategy, we demonstrate that our approach converges faster and suppresses artifacts better than Bulat's approach. When trained on an ensemble of high-quality datasets (CelebA, AFLW, LS3D-W, and VGGFace2), we report significant visual quality improvements over other competing methods, especially for real-world low-quality face images such as those in WiderFace. Additionally, we have verified that both of our unsupervised approaches are capable of improving the matching performance of widely used face recognition systems such as OpenFace.
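The relativistic discriminator mentioned above scores each real image relative to the average fake (and vice versa) instead of scoring images in isolation, which tends to stabilize adversarial SR training. A minimal sketch of the relativistic-average discriminator loss, assuming raw (pre-sigmoid) critic outputs; whether the paper uses exactly this averaged variant is an assumption:

```python
import numpy as np

def relativistic_d_loss(d_real, d_fake):
    """Relativistic average GAN discriminator loss on raw critic logits:
    real samples should score above the mean fake score and fakes below
    the mean real score."""
    def bce_with_logits(logits, target):
        # numerically stable sigmoid cross-entropy
        return np.mean(np.maximum(logits, 0) - logits * target
                       + np.log1p(np.exp(-np.abs(logits))))
    real_rel = d_real - d_fake.mean()   # "how much more real than avg fake"
    fake_rel = d_fake - d_real.mean()   # "how much more real than avg real"
    return bce_with_logits(real_rel, 1.0) + bce_with_logits(fake_rel, 0.0)

loss = relativistic_d_loss(np.array([2.0, 3.0]), np.array([-1.0, 0.0]))
```

The generator loss mirrors this with the targets swapped, so the generator receives gradient from real samples as well, one reason relativistic losses are credited with faster convergence.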



Complexity, 2019, Vol. 2019, pp. 1-14
Author(s): Zhen Hua, Haicheng Zhang, Jinjiang Li

Fractal coding techniques are an effective tool for describing image textures. Existing image super-resolution (SR) methods have shortcomings: reconstruction at large scale factors is poor and texture details are incomplete. In this paper, we propose an SR method based on error compensation and fractal coding. First, quadtree coding is performed on the image, and the similarity between each range block and domain block is established to determine the fractal code. Then, through this similarity relationship, the attractor is reconstructed by super-resolution fractal decoding to obtain an interpolated image. Finally, the fractal error of the fractal code is estimated by a deep residual network, and the estimated error image is added as a compensation term to the interpolated image to obtain the final reconstruction. The network is jointly trained as a deep network and a shallow network, and residual learning is introduced to greatly improve the convergence speed and reconstruction accuracy. Experiments against other state-of-the-art methods on the benchmark datasets Set5, Set14, B100, and Urban100 show that our algorithm achieves competitive performance quantitatively and qualitatively, with subtle edges and vivid textures; images at large scale factors are also reconstructed better.
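The "similarity between each range block and domain block" step is the core of fractal coding: for every range block, find the domain block and affine map (scale s, offset o) that best reproduces it in a least-squares sense. The sketch below shows that matching step only; quadtree partitioning, domain down-sampling, and the iterative decoding to the attractor are omitted for brevity.

```python
import numpy as np

def best_domain_match(range_block, domain_blocks):
    """For one range block, fit r ~ s * d + o against every candidate
    domain block d by least squares and return the best match."""
    best = None
    r = range_block.ravel()
    for idx, d in enumerate(domain_blocks):
        dv = d.ravel()
        A = np.stack([dv, np.ones_like(dv)], axis=1)   # columns: d, 1
        (s, o), *_ = np.linalg.lstsq(A, r, rcond=None)
        err = np.sum((s * dv + o - r) ** 2)
        if best is None or err < best[0]:
            best = (err, idx, s, o)
    return best    # (error, domain index, scale, offset)

gen = np.random.default_rng(1)
domains = [gen.normal(size=(4, 4)) for _ in range(3)]
target = 0.5 * domains[1] + 2.0       # exactly an affine map of domain 1
err, idx, s, o = best_domain_match(target, domains)
```

The stored (domain index, s, o) triples form the fractal code; the paper's contribution is to let a residual network estimate the decoding error of this code and add it back as compensation.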



2020, Vol. 37 (12), pp. 2197-2207
Author(s): Andrew Geiss, Joseph C. Hardin

Super resolution involves synthetically increasing the resolution of gridded data beyond their native resolution. Typically this is done with interpolation schemes, which estimate sub-grid-scale values from neighboring data and perform the same operation everywhere regardless of the large-scale context, or by requiring a network of radars with overlapping fields of view. Recently, significant progress has been made in single-image super resolution using convolutional neural networks. Conceptually, a neural network may be able to learn relations between large-scale precipitation features and the associated sub-pixel-scale variability and thereby outperform interpolation schemes. Here, we use a deep convolutional neural network to artificially enhance the resolution of NEXRAD PPI scans. The model is trained on six months of reflectivity observations from the Langley Hill, Washington, radar (KLGX), and we find that it substantially outperforms common interpolation schemes for 4× and 8× resolution increases on several objective error and perceptual quality metrics.



Algorithms, 2018, Vol. 11 (10), pp. 144
Author(s): Peng Liu, Ying Hong, Yan Liu

Recently, algorithms based on deep neural networks and residual networks have been applied to super-resolution and have exhibited excellent performance. In this paper, a multi-branch deep residual network for single image super-resolution (MRSR) is proposed. The network adopts a multi-branch framework and further optimizes the residual structure; by using residual blocks and filters judiciously, the model size is greatly expanded while stable training is still guaranteed. In addition, a perceptual evaluation function composed of three loss terms is proposed. Experimental results show that this evaluation function strongly supports reconstruction quality and competitive performance. The proposed method completes super-resolution reconstruction in three steps, feature extraction, mapping, and reconstruction, and shows performance superior to other state-of-the-art super-resolution methods on benchmark datasets.


