SUPER-RESOLUTION FOR SENTINEL-2 IMAGES

Author(s):  
M. Galar ◽  
R. Sesma ◽  
C. Ayala ◽  
C. Aranda

Abstract. Obtaining Sentinel-2 imagery of higher spatial resolution than the native bands, while ensuring that the output imagery preserves the original radiometry, has become a key issue since the deployment of the Sentinel-2 satellites. Several studies have addressed upsampling the 20 m and 60 m Sentinel-2 bands to 10 m resolution by taking advantage of the 10 m bands. However, how to super-resolve the 10 m bands to higher resolutions is still an open problem. Recently, deep learning-based techniques have become a de facto standard for single-image super-resolution. The problem is that network training for super-resolution requires image pairs at both the original resolution (10 m in Sentinel-2) and the target resolution (e.g., 5 m or 2.5 m). Since there is no way to obtain higher-resolution images for Sentinel-2, we propose to consider images from other sensors having the greatest similarity in terms of spectral bands, which are appropriately pre-processed. These images, together with Sentinel-2 images, form our training set. We carry out several experiments using state-of-the-art convolutional neural networks for single-image super-resolution, showing that this methodology is a first step toward greater spatial resolution of Sentinel-2 images.
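The core of the proposed methodology is to build (LR, HR) training pairs by degrading imagery from a similar higher-resolution sensor down to Sentinel-2's 10 m grid. Below is a minimal sketch of that pairing idea, assuming a simple block-average degradation model and a hypothetical 2.5 m reference band; the paper's actual preprocessing (band matching, radiometric harmonisation) is not reproduced here.

```python
import numpy as np

def degrade_to_lr(hr: np.ndarray, factor: int) -> np.ndarray:
    """Block-average an HR band down by `factor` (a crude stand-in
    for the sensor's true degradation model)."""
    h, w = hr.shape
    h, w = h - h % factor, w - w % factor  # crop to a multiple of factor
    blocks = hr[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

# Hypothetical 2.5 m band from a spectrally similar sensor.
hr_band = np.random.rand(512, 512).astype(np.float32)

# 2.5 m -> 10 m, matching Sentinel-2's native resolution.
lr_band = degrade_to_lr(hr_band, factor=4)

pair = (lr_band, hr_band)  # one (input, target) training sample
```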

Author(s):  
L. Liebel ◽  
M. Körner

In optical remote sensing, the spatial resolution of images is crucial for numerous applications. Space-borne systems are most likely to be affected by a lack of spatial resolution, due to their natural disadvantage of a large distance between the sensor and the sensed object. Thus, methods for single-image super-resolution are desirable to exceed the limits of the sensor. Apart from assisting visual inspection of datasets, post-processing operations (e.g., segmentation or feature extraction) can benefit from detailed and distinguishable structures. In this paper, we show that recently introduced state-of-the-art approaches for single-image super-resolution of conventional photographs, making use of deep learning techniques such as convolutional neural networks (CNNs), can successfully be applied to remote sensing data. With a huge amount of training data available, end-to-end learning is reasonably easy to apply and can achieve results unattainable with conventional handcrafted algorithms.

We trained our CNN on a specifically designed, domain-specific dataset in order to take into account the special characteristics of multispectral remote sensing data. This dataset consists of publicly available Sentinel-2 images featuring 13 spectral bands, a ground resolution of up to 10 m, and a high radiometric resolution, thus satisfying our requirements in terms of quality and quantity. In experiments, we obtained results superior to competing approaches trained on generic image sets, which failed to reasonably scale satellite images with a high radiometric resolution, as well as to conventional interpolation methods.
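As a concrete reference point for the kind of network adapted here, an SRCNN-style model operating on bicubically pre-upsampled multispectral input can be written in a few lines of PyTorch. The 13-band input and the 9-1-5 layer configuration below are illustrative assumptions rather than the authors' exact settings.

```python
import torch
import torch.nn as nn

class SRCNN(nn.Module):
    """SRCNN-style network; input is assumed to be pre-upsampled
    (e.g., bicubically) to the target resolution."""
    def __init__(self, bands: int = 13):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(bands, 64, kernel_size=9, padding=4),  # patch extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),                # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, bands, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)

model = SRCNN(bands=13)
x = torch.rand(1, 13, 64, 64)  # one pre-upsampled multispectral patch
print(model(x).shape)          # torch.Size([1, 13, 64, 64])
```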


2021 ◽  
Vol 13 (9) ◽  
pp. 1777
Author(s):  
Yu Tao ◽  
Susan J. Conway ◽  
Jan-Peter Muller ◽  
Alfiah R. D. Putri ◽  
Nicolas Thomas ◽  
...  

The ExoMars Trace Gas Orbiter (TGO)'s Colour and Stereo Surface Imaging System (CaSSIS) provides multi-spectral optical imagery at 4–5 m/pixel spatial resolution. Improving the spatial resolution of CaSSIS images would allow greater amounts of scientific information to be extracted. In this work, we propose a novel Multi-scale Adaptive weighted Residual Super-resolution Generative Adversarial Network (MARSGAN) for single-image super-resolution restoration (SRR) of TGO CaSSIS images, and demonstrate that it provides an effective resolution enhancement of about a factor of 3. We present qualitative and quantitative assessments of CaSSIS SRR results over the Mars 2020 Perseverance rover's landing site. We also show examples of similar SRR performance over eight science test sites, selected mainly because they are covered by higher-resolution HiRISE imagery for comparison, and which include many features unique to the Martian surface. Application of MARSGAN will allow high-resolution colour imagery from CaSSIS to be obtained over extensive areas of Mars, beyond what has been possible to obtain to date from HiRISE.
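The full MARSGAN architecture (multi-scale branches, adversarial training) goes beyond what the abstract specifies, but the "adaptive weighted residual" idea can be loosely illustrated as a residual block whose body is scaled by a learnable weight. The following PyTorch sketch is an assumption-laden reading of that name, not the published block.

```python
import torch
import torch.nn as nn

class AdaptiveWeightedResidualBlock(nn.Module):
    """Residual block with a learnable scalar weighting the residual
    branch; a loose illustration of 'adaptive weighted residual'."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.weight = nn.Parameter(torch.tensor(0.2))  # learned during training

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.weight * self.body(x)
```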


2020 ◽  
Vol 12 (11) ◽  
pp. 1757
Author(s):  
Mohammad Pashaei ◽  
Michael J. Starek ◽  
Hamid Kamangir ◽  
Jacob Berryhill

The deep convolutional neural network (DCNN) has recently been applied to the highly challenging and ill-posed problem of single image super-resolution (SISR), which aims to predict high-resolution (HR) images from their corresponding low-resolution (LR) images. In many remote sensing (RS) applications, the spatial resolution of aerial or satellite imagery has a great impact on the accuracy and reliability of information extracted from the images. In this study, the potential of a DCNN-based SISR model, called the enhanced super-resolution generative adversarial network (ESRGAN), to predict the spatial information degraded or lost in a hyper-spatial resolution unmanned aircraft system (UAS) RGB image set is investigated. The ESRGAN model is trained on a limited number of original HR images (50 out of 450 total) and virtually generated LR UAS images, obtained by downsampling the original HR images using a bicubic kernel with a factor of ×4. Quantitative and qualitative assessments of the super-resolved images using standard image quality measures (IQMs) confirm that the DCNN-based SISR approach can be successfully applied to LR UAS imagery for spatial resolution enhancement. The performance of the DCNN-based SISR approach on the UAS image set closely approximates performances reported on standard SISR image sets, with mean peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) index values of around 28 dB and 0.85, respectively. Furthermore, by exploiting the rigorous Structure-from-Motion (SfM) photogrammetry procedure, an accurate task-based IQM for evaluating the quality of the super-resolved images is carried out. The results verify that the interior and exterior imaging geometry, which is extremely important for extracting highly accurate spatial information from UAS imagery in photogrammetric applications, can be accurately retrieved from a super-resolved image set. The numbers of corresponding keypoints and dense points generated by the SfM photogrammetry process are about 6 and 17 times greater, respectively, than those extracted from the corresponding LR image set.
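The training-set construction and headline metrics described above are straightforward to outline. The sketch below uses scikit-image's bicubic resize (order=3) to virtually generate an LR image at factor ×4 and then computes PSNR and SSIM; a bicubic upsample stands in for the trained SR model so the snippet runs on its own (the channel_axis argument requires scikit-image >= 0.19).

```python
import numpy as np
from skimage.transform import resize
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Stand-in for one HR UAS RGB image with values in [0, 1].
hr = np.random.rand(400, 400, 3)

# Virtually generate the LR counterpart: bicubic kernel, factor x4.
lr = resize(hr, (100, 100, 3), order=3, anti_aliasing=True)

# Placeholder for the ESRGAN prediction: bicubic upsampling back to HR size.
sr = resize(lr, hr.shape, order=3)

psnr = peak_signal_noise_ratio(hr, sr, data_range=1.0)
ssim = structural_similarity(hr, sr, channel_axis=-1, data_range=1.0)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.3f}")
```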


2021 ◽  
Vol 13 (24) ◽  
pp. 5007
Author(s):  
Luis Salgueiro ◽  
Javier Marcello ◽  
Verónica Vilaplana

Sentinel-2 satellites have become one of the main resources for Earth observation imagery because they are free of charge and offer wide spatial coverage and a high revisit frequency. Sentinel-2 senses the same location at different spatial resolutions, generating a multi-spectral image with 13 bands at 10, 20, and 60 m/pixel. In this work, we propose a single-image super-resolution model based on convolutional neural networks that enhances both groups of low-resolution bands (20 m and 60 m) to the maximal resolution sensed (10 m) simultaneously, whereas other approaches require an independent model for each group of LR bands. Our proposed model, named Sen2-RDSR, is made up of Residual in Residual blocks that produce two final outputs at maximal resolution, one for the 20 m/pixel bands and the other for the 60 m/pixel bands. The training is done in two stages, first focusing on the 20 m bands and then on the 60 m bands. Experimental results using six quality metrics (RMSE, SRE, SAM, PSNR, SSIM, ERGAS) show that our model outperforms other state-of-the-art approaches, and that it is effective and suitable as a preliminary step for land and coastal applications, such as studies involving pixel-based classification for Land-Use/Land-Cover mapping or the generation of vegetation indices.
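Three of the six quality metrics cited here (SAM, SRE, ERGAS) are simple enough to state directly. The NumPy sketch below follows common definitions; note that ERGAS conventions in the literature vary slightly in how the resolution ratio enters the scale factor.

```python
import numpy as np

def sam(ref: np.ndarray, est: np.ndarray) -> float:
    """Mean Spectral Angle Mapper in degrees; inputs are (H, W, bands)."""
    dot = np.sum(ref * est, axis=-1)
    denom = np.linalg.norm(ref, axis=-1) * np.linalg.norm(est, axis=-1)
    angles = np.arccos(np.clip(dot / (denom + 1e-12), -1.0, 1.0))
    return float(np.degrees(angles.mean()))

def sre(ref: np.ndarray, est: np.ndarray) -> float:
    """Signal-to-Reconstruction Error in dB, averaged over bands."""
    vals = [10.0 * np.log10(np.mean(ref[..., b]) ** 2 /
                            (np.mean((ref[..., b] - est[..., b]) ** 2) + 1e-12))
            for b in range(ref.shape[-1])]
    return float(np.mean(vals))

def ergas(ref: np.ndarray, est: np.ndarray, ratio: float) -> float:
    """ERGAS; `ratio` is the HR/LR pixel-size ratio, e.g. 10/20 = 0.5
    when sharpening 20 m bands to 10 m (conventions vary)."""
    terms = [(np.sqrt(np.mean((ref[..., b] - est[..., b]) ** 2)) /
              (np.mean(ref[..., b]) + 1e-12)) ** 2
             for b in range(ref.shape[-1])]
    return float(100.0 * ratio * np.sqrt(np.mean(terms)))

ref = np.random.rand(64, 64, 6)
est = ref + 0.01 * np.random.randn(64, 64, 6)
print(sam(ref, est), sre(ref, est), ergas(ref, est, ratio=0.5))
```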


Author(s):  
Qiang Yu ◽  
Feiqiang Liu ◽  
Long Xiao ◽  
Zitao Liu ◽  
Xiaomin Yang

Deep-learning (DL)-based methods are of growing importance in the field of single image super-resolution (SISR). However, the practical application of these DL-based models remains a problem due to their heavy computation and storage requirements. The rich feature maps of hidden layers in convolutional neural networks (CNNs) help the model learn useful information; however, there is redundancy among these feature maps that can be further exploited. To address these issues, this paper proposes a lightweight efficient feature generating network (EFGN) for SISR, built from efficient feature generating blocks (EFGBs). Specifically, the EFGB applies inexpensive operations to the original features to produce additional feature maps with only a slight increase in parameters. With the help of these extra feature maps, the network can extract more useful information from low-resolution (LR) images to reconstruct the desired high-resolution (HR) images. Experiments conducted on benchmark datasets demonstrate that the proposed EFGN outperforms other deep-learning-based methods in most cases while having relatively lower model complexity. Additionally, running-time measurements indicate the feasibility of real-time monitoring.
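The EFGB as described, inexpensive operations applied to existing features to generate additional maps, resembles Ghost-module-style designs. Below is a hedged PyTorch sketch of that general pattern; the channel split and grouped-convolution sizes are assumptions, not the authors' exact block.

```python
import torch
import torch.nn as nn

class EFGBSketch(nn.Module):
    """Half the output channels come from a standard convolution; the
    other half are generated cheaply from them with a per-channel
    (grouped) convolution, keeping the parameter count low."""
    def __init__(self, in_ch: int = 64, out_ch: int = 64):
        super().__init__()
        primary = out_ch // 2
        self.primary = nn.Conv2d(in_ch, primary, 3, padding=1)
        self.cheap = nn.Conv2d(primary, out_ch - primary, 3,
                               padding=1, groups=primary)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        p = self.act(self.primary(x))
        return torch.cat([p, self.act(self.cheap(p))], dim=1)

block = EFGBSketch()
print(block(torch.rand(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```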


Author(s):  
Vishal Chudasama ◽  
Kishor Upla ◽  
Kiran Raja ◽  
Raghavendra Ramachandra ◽  
Christoph Busch

IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Kai Shao ◽  
Qinglan Fan ◽  
Yunfeng Zhang ◽  
Fangxun Bao ◽  
Caiming Zhang
