Internal Learning for Image Super-Resolution by Adaptive Feature Transform

Symmetry ◽  
2020 ◽  
Vol 12 (10) ◽  
pp. 1686
Author(s):  
Yifan He ◽  
Wei Cao ◽  
Xiaofeng Du ◽  
Changlin Chen

Recent years have witnessed the great success of image super-resolution based on deep learning. However, it is hard to adapt a well-trained deep model to a specific image for further improvement. Since internal repetition of patterns is widely observed in visual entities, internal self-similarity is expected to help improve image super-resolution. In this paper, we focus on exploiting the complementary relation between external and internal example-based super-resolution methods. Specifically, we first develop a basic network that learns an external prior from large-scale training data, and then learn an internal prior from the given low-resolution image for task adaptation. By simply embedding a few additional layers into a pre-trained deep neural network, the image-adaptive super-resolution method exploits both the internal prior of the specific image and the external prior from the well-trained super-resolution model. We achieve a 0.18 dB PSNR improvement over the basic network's results on standard datasets. Extensive experiments on image super-resolution tasks demonstrate that the proposed method is flexible and can be integrated with lightweight networks. The proposed method boosts performance on images with repetitive structures, and it improves the accuracy of images reconstructed by the lightweight model.
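The internal-adaptation idea — learning from training pairs extracted from the test image itself — can be sketched as follows. This is a minimal illustration, not the authors' code: the average-pooling downscale stands in for a proper bicubic kernel, and the function names are hypothetical.

```python
import numpy as np

def downscale2x(img: np.ndarray) -> np.ndarray:
    """Naive 2x downscale by average pooling (stand-in for bicubic)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]  # crop odd borders
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def internal_pair(lr: np.ndarray):
    """Build an internal training pair from the test image itself:
    the further-downscaled 'LR-son' is the input and the observed LR
    image is the target, so a few adapter layers can be fine-tuned
    on this image's own statistics."""
    return downscale2x(lr), lr

lr = np.random.rand(64, 64)
lr_son, target = internal_pair(lr)   # (32, 32) input, (64, 64) target
```

In this scheme only the small embedded adapter would be trained on such pairs, keeping the externally pre-trained weights frozen.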

Electronics ◽  
2021 ◽  
Vol 10 (21) ◽  
pp. 2591
Author(s):  
Kazuhiro Yamawaki ◽  
YongQing Sun ◽  
Xian-Hua Han

The goal of single image super-resolution (SISR) is to recover a high-resolution (HR) image from a low-resolution (LR) image. Deep learning based methods have recently made remarkable gains in both the effectiveness and the efficiency of SISR. Most existing methods must be trained on large-scale synthetic paired data in a fully supervised manner. Given available HR natural images, the corresponding LR images are usually synthesized with a simple fixed degradation operation, such as bicubic down-sampling. Deep models learned from such training data therefore often generalize poorly to real scenarios with unknown and complicated degradation operations. This study proposes a novel blind image super-resolution framework using a deep unsupervised learning network. The proposed method simultaneously predicts the underlying HR image and its specific degradation operation from the observed LR image alone, without any prior knowledge. Experimental results on three benchmark datasets validate that the proposed method achieves promising performance under unknown degradation models.
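The self-supervised objective behind such blind SR methods — the estimated HR image, pushed through the estimated degradation, should reproduce the observed LR input — can be sketched as follows. This is a minimal numpy illustration under an assumed blur-then-subsample degradation model, not the paper's network.

```python
import numpy as np

def blur(img, kernel):
    """Naive 'same' 2D correlation with zero padding."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def degrade(hr, kernel, scale):
    """Assumed degradation model: y = (hr * k) downsampled by `scale`."""
    return blur(hr, kernel)[::scale, ::scale]

def recon_loss(hr_est, kernel_est, lr, scale):
    """Unsupervised objective minimized jointly over the HR estimate
    and the degradation estimate, using only the observed LR image."""
    return float(np.mean((degrade(hr_est, kernel_est, scale) - lr) ** 2))
```

When both estimates are consistent with the observation, this loss is zero; gradient-based optimization of both quantities is the essence of the unsupervised formulation.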


Author(s):  
A. Valli Bhasha ◽  
B. D. Venkatramana Reddy

Image super-resolution methods based on deep learning with Convolutional Neural Networks (CNNs) have produced admirable advances. The proposed image super-resolution model involves two main analyses: (i) analysis using an Adaptive Discrete Wavelet Transform (ADWT) with a Deep CNN and (ii) analysis using Non-negative Structured Sparse Representation (NSSR). NSSR is used to recover high-resolution (HR) images from low-resolution (LR) images. The experimental evaluation involves two phases: training and testing. In the training phase, the residual images of the dataset are learned by the optimized Deep CNN. In the testing phase, the super-resolution image is generated from the HR wavelet subbands (HRSB) and the residual images. As the main novelty, the filter coefficients of the DWT are optimized by the hybrid Firefly-based Spotted Hyena Optimization (FF-SHO) to form the ADWT. Finally, a performance evaluation on two benchmark hyperspectral image datasets confirms the effectiveness of the proposed model over existing algorithms.
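The notion of a wavelet transform with tunable filter coefficients can be illustrated with a one-level 2D DWT whose 2-tap analysis filters default to Haar; in the paper's ADWT these coefficients would instead be supplied by the FF-SHO optimizer. The sketch below is illustrative, not the authors' implementation.

```python
import numpy as np

def filt_down(x, f, axis):
    """Apply a 2-tap filter and downsample by 2 along `axis`."""
    x = np.moveaxis(x, axis, 0)
    y = x[0::2] * f[0] + x[1::2] * f[1]
    return np.moveaxis(y, 0, axis)

def dwt2_level1(img, lo=None, hi=None):
    """One-level 2D DWT with parameterizable 2-tap filters.
    Defaults are Haar; an optimizer such as FF-SHO could pass in
    adapted coefficients via `lo` and `hi`."""
    s = 1.0 / np.sqrt(2.0)
    lo = np.array([s, s]) if lo is None else np.asarray(lo)
    hi = np.array([s, -s]) if hi is None else np.asarray(hi)
    rows_lo, rows_hi = filt_down(img, lo, 0), filt_down(img, hi, 0)
    return (filt_down(rows_lo, lo, 1),  # LL: approximation subband
            filt_down(rows_lo, hi, 1),  # LH: horizontal detail
            filt_down(rows_hi, lo, 1),  # HL: vertical detail
            filt_down(rows_hi, hi, 1))  # HH: diagonal detail
```

With Haar filters, a constant image concentrates all energy in LL and leaves the three detail subbands exactly zero, which makes the decomposition easy to sanity-check.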


Electronics ◽  
2021 ◽  
Vol 10 (11) ◽  
pp. 1234
Author(s):  
Lei Zha ◽  
Yu Yang ◽  
Zicheng Lai ◽  
Ziwei Zhang ◽  
Juan Wen

In recent years, neural networks for single image super-resolution (SISR) have adopted ever deeper network structures to extract additional image details, which makes model training difficult. To deal with deep-model training problems, researchers utilize dense skip connections to promote the model's feature representation ability by reusing deep features of different receptive fields. Benefiting from the dense connection block, SRDensenet has achieved excellent performance in SISR. Although the densely connected structure provides rich information, it also introduces redundant and useless information. To tackle this problem, in this paper we propose a Lightweight Dense Connected Approach with Attention for Single Image Super-Resolution (LDCASR), which employs an attention mechanism to extract useful information along the channel dimension. In particular, we propose the recursive dense group (RDG), consisting of Dense Attention Blocks (DABs), which obtains more significant representations by extracting deep features with the aid of both dense connections and the attention module, encouraging the whole network to focus on learning higher-level feature information. Additionally, we introduce group convolution in the DABs, which reduces the number of parameters to 0.6 M. Extensive experiments on benchmark datasets demonstrate the superiority of the proposed method over five chosen SISR methods.
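The two ingredients named above — channel attention and group convolution — can be sketched in a few lines. The attention function below follows the common squeeze-and-excitation pattern (the paper's exact module may differ), and the weights `w1`, `w2` are hypothetical; the parameter-count helper shows why grouping shrinks a convolution's weight budget.

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation-style channel attention over a (C, H, W)
    feature map: global average pool, small bottleneck MLP, sigmoid
    gate, then per-channel reweighting."""
    squeeze = feat.mean(axis=(1, 2))              # (C,) channel descriptors
    hidden = np.maximum(w1 @ squeeze, 0.0)        # ReLU bottleneck, (C//r,)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # sigmoid gate, (C,)
    return feat * gate[:, None, None]

def conv2d_params(c_in, c_out, k, groups=1):
    """Weight count of a (grouped) 2D convolution, bias omitted:
    grouping by g divides the parameter count by g."""
    return c_in * c_out * k * k // groups
```

For example, a 3x3 convolution over 64 channels drops from 36,864 weights to 9,216 with 4 groups, which is the kind of saving that lets the whole network stay near 0.6 M parameters.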


Author(s):  
Hengyi Cai ◽  
Hongshen Chen ◽  
Yonghao Song ◽  
Xiaofang Zhao ◽  
Dawei Yin

Humans benefit from previous experiences when taking actions. Similarly, related examples from the training data can provide exemplary information for neural dialogue models when responding to a given input message. However, effectively fusing such exemplary information into dialogue generation is non-trivial: useful exemplars must be not only literally similar but also topically related to the given context. Noisy exemplars impair the neural dialogue model's understanding of the conversation topics and can even corrupt response generation. To address these issues, we propose an exemplar-guided neural dialogue generation model in which exemplar responses are retrieved in terms of both text similarity and topic proximity through a two-stage exemplar retrieval model. In the first stage, a small subset of conversations is retrieved from the training set given a dialogue context. These candidate exemplars are then re-ranked by topical proximity to choose the best-matched exemplar response. To further induce the neural dialogue generation model to consult the exemplar response and the conversation topics more faithfully, we introduce a multi-source sampling mechanism that provides the dialogue model with both local exemplary semantics and global topical guidance during decoding. Empirical evaluations on a large-scale conversation dataset show that the proposed approach significantly outperforms the state of the art in terms of both quantitative metrics and human evaluations.
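The two-stage retrieval can be sketched with simple stand-ins: word-overlap (Jaccard) similarity for the literal first stage and cosine similarity over topic vectors for the re-ranking stage. The paper's actual retriever and topic model are more sophisticated; everything below, including the function names, is an illustrative assumption.

```python
import numpy as np

def jaccard(a: str, b: str) -> float:
    """Word-overlap similarity, a cheap stand-in for stage-1 retrieval."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / max(len(sa | sb), 1)

def retrieve_exemplar(context, corpus, topic_vecs, ctx_topic, k=3):
    """Two-stage sketch: stage 1 keeps the k most literally similar
    training contexts; stage 2 re-ranks them by topic-vector cosine
    similarity and returns the best exemplar response.
    corpus: list of (context, response) pairs."""
    stage1 = sorted(range(len(corpus)),
                    key=lambda i: jaccard(context, corpus[i][0]),
                    reverse=True)[:k]
    def topic_sim(i):
        v = topic_vecs[i]
        denom = np.linalg.norm(v) * np.linalg.norm(ctx_topic) + 1e-9
        return float(v @ ctx_topic / denom)
    best = max(stage1, key=topic_sim)
    return corpus[best][1]
```

A literally similar but off-topic candidate loses in stage 2, which is exactly the noise-filtering role the topical re-ranking plays.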


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Valli Bhasha A. ◽  
Venkatramana Reddy B.D.

Purpose
The problems of super-resolution are broadly discussed in diverse fields. Despite the progress of super-resolution models for real-time images, operating on hyperspectral images remains a challenging problem.

Design/methodology/approach
This paper aims to develop an enhanced image super-resolution model using "optimized Non-negative Structured Sparse Representation (NSSR), Adaptive Discrete Wavelet Transform (ADWT), and Optimized Deep Convolutional Neural Network". After converting the HR images into LR images, the NSSR images are generated by the optimized NSSR. The ADWT is then used to generate the subbands of both the NSSR and HRSB images. The residual image carrying this information is obtained by the optimized Deep CNN. All improvements to the algorithms are made by the Opposition-based Barnacles Mating Optimization (O-BMO), with the objective of attaining a multi-objective function concerning the "Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM) index". Extensive analysis on benchmark hyperspectral image datasets shows that the proposed model achieves superior performance over other typical existing super-resolution models.

Findings
The comparison of the proposed and conventional super-resolution models reveals that the PSNR of the improved O-BMO-(NSSR+DWT+CNN) was 38.8% better than bicubic, 11% better than NSSR, 16.7% better than DWT+CNN, 1.3% better than NSSR+DWT+CNN, and 0.5% better than NSSR+FF-SHO-(DWT+CNN). Hence, it is confirmed that the developed O-BMO-(NSSR+DWT+CNN) performs well in converting LR images to HR images.

Originality/value
This paper adopts a recent optimization algorithm called O-BMO, together with optimized Non-negative Structured Sparse Representation (NSSR), Adaptive Discrete Wavelet Transform (ADWT) and an optimized Deep Convolutional Neural Network, for developing the enhanced image super-resolution model. This is the first work that uses an O-BMO-based Deep CNN for image super-resolution model enhancement.
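The "opposition-based" ingredient of O-BMO is a standard trick from opposition-based learning: for each candidate solution, its mirror image across the search bounds is also evaluated, and the better half of the combined pool is kept. The sketch below shows only this generic step (minimization), not the full Barnacles Mating Optimization algorithm; the names are illustrative.

```python
import numpy as np

def opposition_step(pop, low, high, fitness):
    """Opposition-based learning step: for each candidate x in
    [low, high], also evaluate its opposite low + high - x, then
    keep the best len(pop) candidates (lower fitness is better)."""
    both = np.vstack([pop, low + high - pop])      # originals + opposites
    scores = np.array([fitness(x) for x in both])
    keep = np.argsort(scores)[: len(pop)]
    return both[keep]
```

If the optimum sits far from the initial population, the opposites land near it immediately, which is why opposition-based variants often converge faster than their base optimizers.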


2020 ◽  
Vol 12 (10) ◽  
pp. 1660 ◽  
Author(s):  
Qiang Li ◽  
Qi Wang ◽  
Xuelong Li

Deep learning-based hyperspectral image super-resolution (SR) methods have achieved great success recently. However, previous works suffer from two main problems. One is the use of typical three-dimensional convolution, which results in more network parameters. The other is that they pay little attention to mining the spatial information of hyperspectral images while the spectral information is being extracted. To address these issues, in this paper we propose a mixed convolutional network (MCNet) for hyperspectral image super-resolution. We design a novel mixed convolutional module (MCM) that extracts the potential features by 2D/3D convolution instead of a single type of convolution, which enables the network to mine more spatial features of the hyperspectral image. To exploit the effective features from the 2D units, we design a local feature fusion that adaptively fuses all the hierarchical features in the 2D units. In the 3D units, we employ spatially and spectrally separable 3D convolution to extract spatial and spectral information, which reduces unaffordable memory usage and training time. Extensive evaluations and comparisons on three benchmark datasets demonstrate that the proposed approach achieves superior performance in comparison to existing state-of-the-art methods.
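The saving from separable 3D convolution is easy to quantify: a full k x k x k kernel is factored into a spatial 1 x k x k convolution followed by a spectral k x 1 x 1 convolution. The helper below counts weights under that factorization (channel sizes are illustrative; the paper's exact layer widths may differ).

```python
def conv3d_params(c_in, c_out, kd, kh, kw):
    """Weight count of a 3D convolution, bias omitted."""
    return c_in * c_out * kd * kh * kw

def separable3d_params(c_in, c_out, k):
    """Spatial (1 x k x k) followed by spectral (k x 1 x 1) convolution,
    the factorization used to shrink the 3D units."""
    return (conv3d_params(c_in, c_out, 1, k, k) +
            conv3d_params(c_out, c_out, k, 1, 1))

# k = 3 with 64 input/output channels:
full = conv3d_params(64, 64, 3, 3, 3)   # 110,592 weights
sep = separable3d_params(64, 64, 3)     # 36,864 + 12,288 = 49,152 weights
```

The factorized form uses well under half the weights of the full 3D kernel here, which directly translates into the reduced memory usage and training time the abstract mentions.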

