A Dual Network for Super-Resolution and Semantic Segmentation of Sentinel-2 Imagery

2021 · Vol 13 (22) · pp. 4547
Author(s): Saüc Abadal, Luis Salgueiro, Javier Marcello, Verónica Vilaplana

There is growing interest in the development of automated data processing workflows that provide reliable, high spatial resolution land cover maps. However, high-resolution remote sensing imagery is not always affordable. Taking advantage of the free availability of Sentinel-2 satellite data, in this work we propose a deep learning model that generates high-resolution segmentation maps from low-resolution inputs in a multi-task approach. Our proposal is a dual-network model with two branches: a Single Image Super-Resolution branch, which reconstructs a high-resolution version of the input image, and a Semantic Segmentation Super-Resolution branch, which predicts a high-resolution segmentation map with a scaling factor of 2. We performed several experiments to find the best architecture, training and testing on a subset of the S2GLC 2017 dataset. We based our model on the DeepLabV3+ architecture, enhancing it to achieve an improvement of 5% in IoU and almost 10% in recall. Furthermore, our qualitative results demonstrate the effectiveness and usefulness of the proposed approach.
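As an editorial illustration (not the authors' code), the IoU and recall scores reported in this abstract can be computed for integer-labelled segmentation maps as in this minimal numpy sketch; the function name `iou_and_recall` is hypothetical:

```python
import numpy as np

def iou_and_recall(pred, target, num_classes):
    """Per-class IoU and recall for integer-labelled segmentation maps."""
    ious, recalls = [], []
    for c in range(num_classes):
        p, t = pred == c, target == c
        inter = np.logical_and(p, t).sum()
        union = np.logical_or(p, t).sum()
        ious.append(inter / union if union else float("nan"))
        recalls.append(inter / t.sum() if t.sum() else float("nan"))
    return ious, recalls

# Tiny 2x2 example with two classes.
pred = np.array([[0, 0], [1, 1]])
target = np.array([[0, 1], [1, 1]])
ious, recalls = iou_and_recall(pred, target, num_classes=2)
```

Mean IoU and mean recall over classes are then simple averages of the returned lists.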

Author(s): Vikas Kumar, Tanupriya Choudhury, Suresh Chandra Satapathy, Ravi Tomar, Archit Aggarwal

Recently, huge progress has been made in single-image super-resolution, which increases the resolution of images: low-resolution images are converted into high-resolution ones. SRCNN (Super-Resolution Convolutional Neural Network) was a major improvement over existing single-image super-resolution methods. However, video super-resolution, despite being an active field of research, has yet to benefit from deep learning to the same extent. Using still images and videos downloaded from various sources, we explore the possibility of combining SRCNN with image fusion techniques (minima, maxima, average, PCA, DWT) to improve on existing video super-resolution methods. Video super-resolution has inherent difficulties such as unexpected motion, blur and noise. We propose the Video Super-Resolution – Image Fusion (VSR-IF) architecture, which utilizes information from multiple frames to produce a single high-resolution frame of a video. We use SRCNN as a reference model to obtain high-resolution adjacent frames and use a concatenation layer to group those frames into a single frame. Since our method is data-driven and requires only minimal initial training, it is faster than other video super-resolution methods. In our tests, the technique shows a significant improvement over SRCNN and other single-image and frame super-resolution techniques.
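The simpler fusion rules mentioned in this abstract (minima, maxima, average) reduce to elementwise operations over a stack of aligned frames. A minimal sketch, assuming frames are already registered numpy arrays; `fuse_frames` is a hypothetical name, not from the paper:

```python
import numpy as np

def fuse_frames(frames, method="average"):
    """Fuse a list of aligned frames (H x W arrays) into one frame."""
    stack = np.stack(frames)  # shape: (num_frames, H, W)
    if method == "average":
        return stack.mean(axis=0)
    if method == "minima":
        return stack.min(axis=0)
    if method == "maxima":
        return stack.max(axis=0)
    raise ValueError(f"unknown fusion method: {method}")

# Three constant 2x2 frames standing in for super-resolved adjacent frames.
frames = [np.full((2, 2), v, dtype=float) for v in (10.0, 20.0, 30.0)]
fused = fuse_frames(frames, "average")
```

PCA- and DWT-based fusion would replace the elementwise reduction with fusion in a transform domain, but follow the same stack-then-combine shape.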


2017 · Vol 14 (3) · pp. 379-386
Author(s): Sparik Hayrapetyan, Gevorg Karapetyan, Viacheslav Voronin, Hakob Sarukhanyan

Image inpainting, the technique of completing missing or corrupted image regions so that the completion is undetectable, is an open problem in digital image processing. Inpainting of large regions using Deep Convolutional Generative Adversarial Nets (DCGAN) is a new and powerful approach. In the approaches described so far, the generated image and the input image must be the same size. In this paper we propose a new method in which the input image containing the corrupted region can be up to 4 times larger than the generated image.
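One plausible way to decouple input size from generator size, sketched here purely as an illustration (the paper's actual method may differ), is to crop a fixed-size window around the corrupted region, inpaint only the crop, and paste the result back. The generator is stood in for by a trivial mean-fill function:

```python
import numpy as np

def inpaint_large_image(image, mask, crop_size, inpaint_fn):
    """Crop a window around the corrupted region, inpaint it, paste it back.
    The input image may be much larger than what the generator accepts."""
    ys, xs = np.where(mask)
    cy, cx = int(ys.mean()), int(xs.mean())  # centre of the hole
    h, w = image.shape
    top = min(max(cy - crop_size // 2, 0), h - crop_size)
    left = min(max(cx - crop_size // 2, 0), w - crop_size)
    crop = image[top:top + crop_size, left:left + crop_size].copy()
    crop_mask = mask[top:top + crop_size, left:left + crop_size]
    crop[crop_mask] = inpaint_fn(crop, crop_mask)[crop_mask]
    out = image.copy()
    out[top:top + crop_size, left:left + crop_size] = crop
    return out

def mean_fill(crop, crop_mask):
    """Stand-in for a DCGAN generator: fill the hole with the mean of valid pixels."""
    filled = crop.copy()
    filled[crop_mask] = crop[~crop_mask].mean()
    return filled

image = np.ones((16, 16))
mask = np.zeros((16, 16), dtype=bool)
mask[6:8, 6:8] = True
image[mask] = 0.0  # corrupt a 2x2 region
restored = inpaint_large_image(image, mask, crop_size=4, inpaint_fn=mean_fill)
```

The same crop-and-paste scaffold works when `inpaint_fn` is a trained generator, provided the hole fits inside the crop window.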


Author(s): M. Galar, R. Sesma, C. Ayala, C. Aranda

<p><strong>Abstract.</strong> Obtaining Sentinel-2 imagery at a higher spatial resolution than the native bands, while ensuring that the output imagery preserves the original radiometry, has become a key issue since the deployment of the Sentinel-2 satellites. Several studies have addressed upsampling the 20&thinsp;m and 60&thinsp;m Sentinel-2 bands to 10&thinsp;m resolution by taking advantage of the 10&thinsp;m bands. However, how to super-resolve the 10&thinsp;m bands to higher resolutions is still an open problem. Recently, deep learning-based techniques have become the de facto standard for single-image super-resolution. The problem is that learning a network for super-resolution requires image pairs at both the original resolution (10&thinsp;m in Sentinel-2) and the target resolution (e.g., 5&thinsp;m or 2.5&thinsp;m). Since there is no way to obtain higher-resolution images from Sentinel-2 itself, we propose to consider images from other sensors having the greatest similarity in terms of spectral bands, appropriately pre-processed. These images, together with Sentinel-2 images, form our training set. We carry out several experiments using state-of-the-art Convolutional Neural Networks for single-image super-resolution, showing that this methodology is a first step toward greater spatial resolution of Sentinel-2 images.</p>
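Building (LR, HR) training pairs from a higher-resolution sensor typically means degrading its imagery down to the coarse resolution. A minimal sketch using average pooling as the degradation model (the paper's actual pre-processing is likely more involved; `downsample` and `make_training_pairs` are hypothetical names):

```python
import numpy as np

def downsample(hr, factor):
    """Average-pool an HR image by an integer factor to simulate the LR sensor."""
    h, w = hr.shape
    trimmed = hr[:h - h % factor, :w - w % factor]
    return trimmed.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def make_training_pairs(hr_tiles, factor=2):
    """Build (LR, HR) pairs from higher-resolution imagery of another sensor."""
    return [(downsample(tile, factor), tile) for tile in hr_tiles]

hr = np.arange(16, dtype=float).reshape(4, 4)
pairs = make_training_pairs([hr], factor=2)
lr, _ = pairs[0]
```

A network trained on such pairs is then applied to real 10 m Sentinel-2 bands at inference time.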


2014 · Vol 568-570 · pp. 659-662
Author(s): Xue Jun Zhang, Bing Liang Hu

This paper proposes a new approach to single-image super-resolution (SR) based on sparse representation. Previous work focuses on densely sampled patches over the whole image while ignoring locally salient patches, yet a dictionary trained on locally salient patches performs noticeably better. Motivated by this, we incorporate saliency detection to identify salient areas in the image. We compute a sparse representation for the salient patches of the low-resolution input and use the coefficients of this representation to generate the high-resolution output. Compared to previous approaches, which simply sample a large number of image patch pairs, the saliency dictionary pair is a more compact representation of the patch pairs, reducing the computational cost substantially. Through experiments, we demonstrate that our algorithm generates high-resolution images that are competitive with or even superior in quality to images produced by other similar SR methods.
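The sparse-coding step underlying this family of methods can be illustrated with orthogonal matching pursuit (OMP), shown here on an exactly sparse signal; this is a generic sketch of sparse coding against a dictionary, not the paper's algorithm:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: approximate y with k atoms of dictionary D."""
    residual, support = y.copy(), []
    coeffs = np.zeros(D.shape[1])
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        sub = D[:, support]
        sol, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ sol
    coeffs[support] = sol
    return coeffs

# Dictionary with orthonormal atoms; the signal is an exact 2-sparse combination.
rng = np.random.default_rng(0)
D, _ = np.linalg.qr(rng.standard_normal((8, 4)))
y = 3.0 * D[:, 1] - 2.0 * D[:, 3]
c = omp(D, y, k=2)
```

In SR methods of this kind, the coefficients found against the low-resolution dictionary are applied to a coupled high-resolution dictionary to synthesise the HR patch.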


2014 · Vol 568-570 · pp. 652-655
Author(s): Zhao Li, Le Wang, Tao Yu, Bing Liang Hu

This paper presents a novel method for single-image super-resolution based on low-rank representation (LRR). Given a set of low-resolution image patches, LRR seeks the lowest-rank representation among all candidates that express every patch as a linear combination of the patches in a low-resolution dictionary. By jointly training two dictionaries, one for low-resolution and one for high-resolution images, we can enforce the similarity of the LRRs of a low-resolution and high-resolution image pair with respect to their own dictionaries. Therefore, the LRR of a low-resolution image can be applied with the high-resolution dictionary to generate a high-resolution image. Unlike the well-known sparse representation, which computes the sparsest representation of each image patch individually, LRR finds the lowest-rank representation of a collection of patches jointly and thus better captures the global structure of the image. Experiments show that our method gives good results both visually and quantitatively.
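The "jointly low-rank" intuition can be illustrated with a truncated SVD: when patches share a low-dimensional structure, a low-rank fit recovers them all at once. This is a simplified illustration of joint low-rank structure, not the nuclear-norm LRR optimisation itself:

```python
import numpy as np

def lowest_rank_fit(X, rank):
    """Best rank-r approximation of a patch matrix X (patches as columns),
    via truncated SVD: the closed-form minimiser of ||X - Y||_F s.t. rank(Y) <= r."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

# Six patches that all lie in a 2-dimensional subspace: a rank-2 fit is exact.
rng = np.random.default_rng(1)
basis = rng.standard_normal((16, 2))       # two underlying "structure" vectors
X = basis @ rng.standard_normal((2, 6))    # patch matrix, rank 2 by construction
Y = lowest_rank_fit(X, rank=2)
```

Sparse coding would instead treat each of the six columns independently, ignoring the structure they share.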


2021
Author(s): Guosheng Zhao, Kun Wang

With the development of deep convolutional neural networks, recent research on single-image super-resolution (SISR) has made great progress. In particular, networks that fully utilize features achieve better performance. In this paper, we propose an image super-resolution dual features extraction network (SRDFN). Our method uses dual features extraction blocks (DFBs) to extract and combine low-resolution features, which have less noise but less detail, and high-resolution features, which have more detail but more noise. The output of a DFB combines the advantages of both: more detail and less noise. Moreover, because the number of DFBs and channels can be chosen by weighing accuracy against model size, SRDFN can be tailored to the situation at hand. The experimental results demonstrate that the proposed SRDFN performs well in comparison with state-of-the-art methods.
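The "less noise, less detail" vs. "more detail, more noise" trade-off can be caricatured with a frequency split: keep the smooth branch as the base and add only the high-frequency residual of the detailed branch. This is an editorial sketch of the combination idea, not the learned DFB:

```python
import numpy as np

def box_blur(x, k=3):
    """Naive box filter with edge padding (odd k)."""
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = xp[i:i + k, j:j + k].mean()
    return out

def combine_features(smooth_feat, detailed_feat, k=3):
    """Keep the low-noise smooth branch and add only the high-frequency
    residual of the detailed branch: smooth + (detailed - blur(detailed))."""
    detail = detailed_feat - box_blur(detailed_feat, k)
    return smooth_feat + detail

combined = combine_features(np.ones((4, 4)), np.full((4, 4), 5.0))
```

For a constant detailed branch there is no high-frequency content, so the output equals the smooth branch; in the network, both branches and their combination are of course learned end to end.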


2013 · Vol 8 (2) · pp. 768-776
Author(s): Dr. Ruikar Sachin D, Mr. Wadhavane Tushar D

This paper presents an Advanced Neighbor Embedding (ANE) method for image super-resolution. The neighbor embedding (NE) algorithm for single-image super-resolution reconstruction assumes that the feature spaces of low-resolution and high-resolution patches are locally isometric. This does not hold in practice, because of the one-to-many mapping between low-resolution and high-resolution patches. ANE mitigates this problem with a coupled learning technique that trains two projection matrices simultaneously and maps the original low-resolution and high-resolution feature spaces onto a unified feature subspace. The reconstruction weights of the k nearest neighbors of a low-resolution image patch are then computed on the low-resolution patches in this unified feature space. The coupled learning uses a coupled constraint, linking the LR–HR counterparts together with the k-nearest grouping patch pairs, to handle a large number of samples. As a result, ANE gives better resolution than the NE method.
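The baseline NE step that ANE builds on is locally-linear-embedding-style: reconstruct the LR patch from its k nearest LR neighbours with sum-to-one weights, then apply the same weights to the paired HR patches. A minimal sketch (generic NE, not the ANE coupled projections; function names are hypothetical):

```python
import numpy as np

def ne_weights(x, neighbors):
    """Least-squares reconstruction weights of x from its k nearest neighbours,
    constrained to sum to one (as in locally linear embedding)."""
    k = neighbors.shape[0]
    G = neighbors - x                  # centre the neighbours on the query patch
    C = G @ G.T + 1e-8 * np.eye(k)     # local Gram matrix, regularised
    w = np.linalg.solve(C, np.ones(k))
    return w / w.sum()

def ne_super_resolve(lr_patch, lr_dict, hr_dict, k=3):
    """Map the LR reconstruction weights onto the paired HR patches."""
    d = np.linalg.norm(lr_dict - lr_patch, axis=1)
    idx = np.argsort(d)[:k]
    w = ne_weights(lr_patch, lr_dict[idx])
    return w @ hr_dict[idx]

# Toy dictionaries in which each HR patch is exactly twice its LR counterpart.
lr_dict = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
hr_dict = 2.0 * lr_dict
out = ne_super_resolve(np.array([0.5, 0.5]), lr_dict, hr_dict, k=3)
```

Because the LR-to-HR map here is linear and the weights sum to one, the output is exactly twice the query patch; ANE's contribution is to compute these weights in a learned unified subspace instead of the raw LR feature space.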


Computers · 2019 · Vol 8 (2) · pp. 41
Author(s): Vahid Anari, Farbod Razzazi, Rasoul Amirfattahi

In the current study, we were inspired by sparse analysis signal representation theory to propose a novel single-image super-resolution method termed "sparse analysis-based super-resolution" (SASR). This study presents and demonstrates a mapping between low-resolution (LR) and high-resolution (HR) images, using a coupled sparse analysis operator learning method to reconstruct HR images. We further show that the proposed method selects the more informative HR and LR learning patches based on image texture complexity, so as to train the HR and LR operators more efficiently. The coupled HR and LR operators are then used for HR image reconstruction at low computational cost. The experimental results, covering the quantitative criteria peak signal-to-noise ratio (PSNR), root mean square error (RMSE), structural similarity index (SSIM) and elapsed time, human observation as a qualitative measure, and computational complexity, verify the improvements offered by the proposed SASR algorithm.
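In the analysis model, sparsity lives in the operator's output: for an analysis operator Ω, the coefficients Ωx are sparse, rather than x being a sparse synthesis from a dictionary. A textbook illustration with a finite-difference operator on a piecewise-constant signal (a generic example of analysis sparsity, not the learned SASR operators):

```python
import numpy as np

# Analysis operator: first-order finite differences. For a piecewise-constant
# signal, the analysis coefficients (Omega @ x) are nonzero only at the jumps.
n = 10
Omega = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)  # row i computes x[i+1] - x[i]
x = np.concatenate([np.full(5, 1.0), np.full(5, 4.0)])  # one jump of height 3
coeffs = Omega @ x
```

Operator learning methods such as the one in this abstract replace the fixed difference operator with an Ω learned so that Ωx is sparse on the training patches.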

