Sentinel-2 Image Fusion Using a Deep Residual Network

2018 ◽  
Vol 10 (8) ◽  
pp. 1290 ◽  
Author(s):  
Frosti Palsson ◽  
Johannes Sveinsson ◽  
Magnus Ulfarsson

Single-sensor fusion is the fusion of two or more spectrally disjoint reflectance bands that have different spatial resolutions and have been acquired by the same sensor. An example is Sentinel-2, a constellation of two satellites, which acquires multispectral bands at 10 m, 20 m and 60 m resolution across the visible, near infrared (NIR) and shortwave infrared (SWIR) ranges. In this paper, we present a method to fuse the fine and coarse spatial resolution bands to obtain finer spatial resolution versions of the coarse bands. It is based on a deep convolutional neural network with a residual design that models the fusion problem. The residual architecture helps the network converge faster and allows for deeper networks by relieving the network of having to learn the coarse spatial resolution part of the inputs, enabling it to focus on constructing the missing fine spatial details. Using several real Sentinel-2 datasets, we study the effects of the most important hyperparameters on the quantitative quality of the fused image, compare the method to several state-of-the-art methods, and demonstrate that it outperforms them in our experiments.
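As a toy illustration of the residual design described in this abstract, the sketch below (numpy only) shows how the skip connection works: the network only predicts the missing high-frequency detail, and the upsampled coarse band is added back unchanged. The single hand-written convolution standing in for the learned CNN layers is an assumption purely for illustration.

```python
import numpy as np

def conv2d(x, k):
    """'Same' 2-D convolution with zero padding (toy stand-in for a CNN layer)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def residual_fuse(coarse_up, fine, kernel):
    # Skip connection: add the predicted detail to the coarse input,
    # so the "network" never has to re-learn the low-frequency content.
    detail = conv2d(fine, kernel)
    return coarse_up + detail
```

Note that with an all-zero kernel the output is exactly the upsampled coarse band, which is the property the abstract credits for faster convergence.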

Author(s):  
M. Galar ◽  
R. Sesma ◽  
C. Ayala ◽  
L. Albizua ◽  
C. Aranda

Abstract. The Copernicus program, via its Sentinel missions, is making Earth observation more accessible and affordable for everybody. Sentinel-2 images provide multi-spectral information for each location every 5 days. However, the finest spatial resolution of its bands is 10 m, available for the RGB and near-infrared bands. Increasing the spatial resolution of Sentinel-2 images without additional costs would make any subsequent analysis more accurate. Most approaches to super-resolution for Sentinel-2 have focused on obtaining 10 m resolution images for the bands at lower resolutions (20 m and 60 m), taking advantage of the information provided by the bands of finer resolution (10 m). In contrast, our focus is on increasing the resolution of the 10 m bands themselves, that is, super-resolving the 10 m bands to 2.5 m resolution, where no additional information is available. This problem is known as single-image super-resolution, and deep learning-based approaches have become the state of the art for it on standard images. Obviously, models learned for standard images do not translate well to satellite images. Hence, the problem is how to train a deep learning model for super-resolving Sentinel-2 images when no ground truth exists (Sentinel-2 images at 2.5 m). We propose a methodology for learning convolutional neural networks for Sentinel-2 image super-resolution that makes use of images from other sensors that are highly similar to Sentinel-2 in terms of spectral bands but have greater spatial resolution. Our proposal is tested with a state-of-the-art neural network, showing that it can be useful for learning to increase the spatial resolution of the RGB and near-infrared bands of Sentinel-2.
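One common way to build training data when no ground truth exists, in the spirit of the setup this abstract describes, is the degradation protocol: downsample imagery from the higher-resolution sensor to a Sentinel-2-like resolution and use the original as the target. A minimal sketch (numpy; the box-filter degradation is an assumption, the ×4 factor matches the 10 m → 2.5 m goal):

```python
import numpy as np

def degrade(img, f):
    """Box-filter downsample by integer factor f (simple stand-in for
    the sensor's modulation transfer function)."""
    h, w = img.shape
    img = img[:h - h % f, :w - w % f]
    return img.reshape(img.shape[0] // f, f, img.shape[1] // f, f).mean(axis=(1, 3))

def make_training_pair(high_res_img, f=4):
    # input: degraded image at a Sentinel-2-like resolution
    # target: the original high-resolution image from the other sensor
    return degrade(high_res_img, f), high_res_img
```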


2020 ◽  
Vol 12 (18) ◽  
pp. 3028
Author(s):  
Wenyan Ge ◽  
Qiuming Cheng ◽  
Linhai Jing ◽  
Fei Wang ◽  
Molei Zhao ◽  
...  

With several bands covering iron-bearing mineral spectral features, Sentinel-2 has advantages for iron mapping. However, due to its inconsistent spatial resolution, the sensitivity of Sentinel-2 data for detecting iron-bearing minerals may be decreased by excluding the 60 m bands and neglecting the 20 m vegetation red-edge bands. Hence, the capability of Sentinel-2 for iron-bearing mineral mapping was assessed by applying a multivariate (MV) method to pansharpen Sentinel-2 data. First, the Sentinel-2 bands with spatial resolutions of 20 m and 60 m (except band 10) were pansharpened to 10 m. Then, extraction of iron-bearing minerals from the MV-fused image was explored in the Cuprite area, Nevada, USA. With the complete set of 12 bands at a fine spatial resolution, three band ratios (6/1, 6/8A and (6 + 7)/8A) of the fused image were proposed for the extraction of hematite + goethite, hematite + jarosite and the mixture of iron-bearing minerals, respectively. Additionally, band ratios of Sentinel-2 data for iron-bearing minerals from previous studies were modified by substituting the narrow near infrared band 8A for band 8. Results demonstrated that the capability for detecting iron-bearing minerals using Sentinel-2 data was improved by the consideration of two extra bands and the unified fine spatial resolution.
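The three proposed band ratios are simple per-pixel operations on the fused 10 m bands; a minimal sketch (numpy; the small `eps` divide-by-zero guard is an addition, not part of the paper):

```python
import numpy as np

def iron_ratios(b1, b6, b7, b8a, eps=1e-12):
    """Band ratios proposed for the MV-fused Sentinel-2 image."""
    hematite_goethite = b6 / (b1 + eps)        # ratio 6/1
    hematite_jarosite = b6 / (b8a + eps)       # ratio 6/8A
    iron_mixture = (b6 + b7) / (b8a + eps)     # ratio (6+7)/8A
    return hematite_goethite, hematite_jarosite, iron_mixture
```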


2021 ◽  
Vol 13 (10) ◽  
pp. 5518
Author(s):  
Honglyun Park ◽  
Jaewan Choi

Worldview-3 satellite imagery provides panchromatic images with a high spatial resolution, and visible and near infrared (VNIR) and shortwave infrared (SWIR) bands with a low spatial resolution. These images can be used for various applications such as environmental analysis, urban monitoring and surveying for sustainability. In this study, mineral detection was performed using Worldview-3 satellite imagery. A pansharpening technique based on the high-resolution panchromatic image was applied to effectively utilize the VNIR and SWIR bands of the Worldview-3 imagery. The following representative similarity analysis techniques were implemented for mineral detection: the spectral angle mapper (SAM), spectral information divergence (SID) and the normalized spectral similarity score (NS3). In addition, pixels that could be estimated to indicate minerals were identified by applying an empirical threshold to each similarity analysis result. A majority voting technique was then applied to the results of the individual similarity analyses, and the pixels estimated to indicate minerals were finally selected. The results of each similarity analysis were compared to evaluate the accuracy of the proposed methods. From that comparison, it was confirmed that false negative and false positive rates decreased when the methods proposed in the present study were applied.
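The SAM measure and the majority vote over thresholded masks can be sketched as follows (numpy; SID and NS3 are omitted for brevity, and any thresholds would be empirical, as the abstract notes):

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Spectral angle mapper: angle (radians) between two spectra."""
    cos = np.dot(pixel, reference) / (
        np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def majority_vote(masks):
    """A pixel is kept when more than half of the boolean masks agree."""
    stack = np.stack(masks)
    return stack.sum(axis=0) > stack.shape[0] // 2
```

A pixel would first be flagged in each of the three similarity maps by thresholding, then retained only where at least two of the three maps agree.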


2019 ◽  
Vol 11 (19) ◽  
pp. 2304 ◽  
Author(s):  
Hanna Huryna ◽  
Yafit Cohen ◽  
Arnon Karnieli ◽  
Natalya Panov ◽  
William P. Kustas ◽  
...  

A spatially distributed land surface temperature is important for many studies. The recent launch of the Sentinel satellite programs paves the way for an abundance of opportunities for both large-area and long-term investigations. However, the spatial resolution of Sentinel-3 thermal images is not suitable for monitoring small fragmented fields. Thermal sharpening is one of the primary methods used to obtain thermal images at finer spatial resolution with a daily revisit time. In the current study, the utility of the TsHARP method for sharpening the low-resolution Sentinel-3 thermal data was examined using Sentinel-2 visible-near infrared imagery. Compared to fine Landsat 8 thermal images, the sharpening resulted in mean absolute errors of ~1 °C, with errors increasing as the difference between the native and target resolutions increases. Part of the error is attributed to the discrepancy between the thermal images acquired by the two platforms. Further research is needed to test additional sites and conditions, and potentially additional sharpening methods, applied to the Sentinel platforms.
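TsHARP exploits the correlation between a vegetation index and land surface temperature: fit the relation at the coarse thermal scale, apply it at the fine scale, then re-impose coarse-scale consistency with a residual correction. A simplified sketch (numpy; a plain linear LST–NDVI regression is used here in place of TsHARP's fractional-vegetation-cover form, and box-filter aggregation is an assumption):

```python
import numpy as np

def aggregate(img, f):
    """Box-average an image by an integer factor f."""
    h, w = img.shape
    return img.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def tsharp(lst_coarse, ndvi_fine, f):
    # 1) fit LST ~ NDVI at the coarse scale
    ndvi_coarse = aggregate(ndvi_fine, f)
    slope, intercept = np.polyfit(ndvi_coarse.ravel(), lst_coarse.ravel(), 1)
    # 2) apply the regression at the fine scale
    lst_pred = intercept + slope * ndvi_fine
    # 3) residual correction: re-impose consistency with the coarse LST
    residual = lst_coarse - aggregate(lst_pred, f)
    return lst_pred + np.kron(residual, np.ones((f, f)))
```

Step 3 guarantees the sharpened image aggregates back exactly to the input thermal image, so only the sub-pixel detail is inferred from NDVI.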


2020 ◽  
Vol 71 (5) ◽  
pp. 593 ◽  
Author(s):  
A. Drozd ◽  
P. de Tezanos Pinto ◽  
V. Fernández ◽  
M. Bazzalo ◽  
F. Bordet ◽  
...  

We used hyperspectral remote sensing with the aim of establishing a monitoring program for cyanobacteria in a South American reservoir. We sampled across a wide temporal (2012–16; 10 seasons) and spatial (30 km) gradient, and retrieved 111 field hyperspectral signatures, chlorophyll-a, cyanobacteria densities and total suspended solids. The hyperspectral signatures for cyanobacteria-dominated situations (n = 75) were used to select the most suitable spectral bands in seven high- and medium-spatial-resolution satellites (Sentinel-2, Landsat 5, 7 and 8, SPOT-4/5 and -6/7, WorldView-2) and to develop chlorophyll and cyanobacteria cell abundance algorithms of the form (λ550 − λ650 + λ800) ÷ (λ550 + λ650 + λ800). The best-performing chlorophyll algorithm was that for Sentinel-2 ((λ560 − λ660 + λ703) ÷ (λ560 + λ660 + λ703); R2 = 0.80), followed by WorldView-2 ((λ550 − λ660 + λ720) ÷ (λ550 + λ660 + λ720); R2 = 0.78), then Landsat and the SPOT series ((λ550 − λ650 + λ800) ÷ (λ550 + λ650 + λ800); R2 = 0.67–0.74). When these models were run for cyanobacteria abundance, the coefficient of determination remained similar, but the root mean square error increased. This could affect the estimate of cyanobacteria cell abundance by ~20%, yet it still enables assessment of the alert-level categories for risk assessment. The results of this study highlight the importance of the red and near-infrared region for identifying cyanobacteria in hypereutrophic waters, demonstrating coherence with field cyanobacteria abundance and enabling assessment of bloom distribution in this ecosystem.
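The Sentinel-2 variant of the three-band index is a direct per-pixel formula; a minimal sketch (numpy; the small `eps` guard is an addition, not part of the study):

```python
import numpy as np

def chl_index(b560, b660, b703, eps=1e-12):
    """(λ560 − λ660 + λ703) ÷ (λ560 + λ660 + λ703): the best-performing
    Sentinel-2 chlorophyll formulation reported in the study."""
    return (b560 - b660 + b703) / (b560 + b660 + b703 + eps)
```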


Terr Plural ◽  
2021 ◽  
Vol 15 ◽  
pp. 1-25
Author(s):  
Isadora Taborda Silva ◽  
Jéssica Rabito Chaves ◽  
Helen Rezende Figueiredo ◽  
Bruno Silva Ferreira ◽  
César Claudio Cáceres Encina ◽  
...  

This paper evaluates the potential of false-color composite images from three different remote sensing satellites for the identification of continental wetlands. Landsat 8, Sentinel-2 and CBERS-4 scenes from three different Ramsar sites (i.e., sites designated to be of international importance), two located within the Mato-Grossense Pantanal and one within the Sul-Mato-Grossense Pantanal, were used for the analyses. For each site, images from both the dry and rainy seasons were analyzed using the near-infrared (NIR), shortwave infrared (SWIR) and visible (VIS) bands. The results show that false-color composite images from both the Landsat 8 and Sentinel-2 satellites, with both the SWIR2-NIR-BLUE and NIR-SWIR-RED spectral band combinations, allow the identification of wetlands.
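Building such a false-color composite is a matter of stacking the chosen bands into the red, green and blue display channels and scaling for display; a minimal sketch (numpy; the percentile stretch is an assumption for illustration, not from the paper):

```python
import numpy as np

def false_color(band_r, band_g, band_b, p=2, eps=1e-12):
    """Stack three bands into an RGB composite with a percentile stretch,
    e.g. false_color(swir2, nir, blue) or false_color(nir, swir, red)."""
    rgb = np.dstack([band_r, band_g, band_b]).astype(float)
    lo, hi = np.percentile(rgb, [p, 100 - p])
    return np.clip((rgb - lo) / (hi - lo + eps), 0.0, 1.0)
```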


Author(s):  
Danang Surya Candra

Image fusion is a process for generating higher spatial resolution multispectral images by fusing lower resolution multispectral images with higher resolution panchromatic images. It is used not only to generate visually appealing images but also to provide detailed images that support applications in the remote sensing field, including rural areas. The aim of this study was to evaluate the performance of SPOT-6 data fusion using the Gram-Schmidt Spectral Sharpening (GS) method on rural areas. The GS method was compared with the Principal Component Spectral Sharpening (PC) method to evaluate its reliability. In this study, the performance of GS was assessed on multispectral and panchromatic SPOT-6 images: the spatial resolution of the multispectral (MS) image was enhanced by merging it with the high-resolution panchromatic (Pan) image. The fused images of GS and PC were assessed visually and statistically. The Relative Mean Difference (RMD), Relative Variation Difference (RVD) and Peak Signal to Noise Ratio (PSNR) indices were used to assess the fused images statistically. The rural test sites were divided into four main areas, i.e., the whole area, a rice field area, a forest area and a settlement. Based on the results, the visual quality of the fused image using the GS method was better than that using the PC method: the color of the GS-fused image was better and more natural. In the statistical assessment, the RMD results of both methods were similar. In the RVD results, the GS method was better than the PC method, especially in bands 1 and 3. The GS method was also better than the PC method in the PSNR result for each test site. It was observed that the Gram-Schmidt method provided the best performance for each band and test site. Thus, GS is a robust method for SPOT-6 data fusion, especially in rural areas.
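Of the three statistical measures, PSNR is the most standard; a minimal sketch (numpy; the `peak` default of 255 assumes 8-bit imagery, which is an assumption, not stated in the abstract):

```python
import numpy as np

def psnr(reference, fused, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference band and
    the corresponding band of the fused image (higher is better)."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(fused, float)) ** 2)
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)
```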


2021 ◽  
Vol 118 (9) ◽  
pp. e2011160118
Author(s):  
Ruben Ramo ◽  
Ekhi Roteta ◽  
Ioannis Bistinas ◽  
Dave van Wees ◽  
Aitor Bastarrika ◽  
...  

Fires are a major contributor to atmospheric budgets of greenhouse gases and aerosols, affect soil and vegetation properties, and are a key driver of land use change. Since the 1990s, global burned area (BA) estimates based on satellite observations have provided critical insights into patterns and trends of fire occurrence. However, these global BA products are based on coarse-spatial-resolution sensors, which are unsuitable for detecting small fires that burn only a fraction of a satellite pixel. We estimated the relevance of those small fires by comparing a BA product generated from Sentinel-2 MSI (Multispectral Instrument) images (20 m spatial resolution) with a widely used global BA product based on Moderate Resolution Imaging Spectroradiometer (MODIS) images (500 m), focusing on sub-Saharan Africa. For the year 2016, we detected 80% more BA with the Sentinel-2 images than with the MODIS product. This difference was predominantly related to small fires: we observed that 2.02 Mkm2 (out of a total of 4.89 Mkm2) was burned by fires smaller than 100 ha, whereas the MODIS product only detected 0.13 Mkm2 of BA in that fire-size class. This increase in BA resulted in correspondingly higher estimates of fire emissions; we computed 31 to 101% more fire carbon emissions than current estimates based on MODIS products. We conclude that small fires are a critical driver of BA in sub-Saharan Africa and that including them in emission estimates raises the contribution of biomass burning to global burdens of (greenhouse) gases and aerosols.


2019 ◽  
Vol 11 (22) ◽  
pp. 2635 ◽  
Author(s):  
Massimiliano Gargiulo ◽  
Antonio Mazza ◽  
Raffaele Gaetano ◽  
Giuseppe Ruello ◽  
Giuseppe Scarpa

Images provided by the ESA Sentinel-2 mission are rapidly becoming the main source of information for the entire remote sensing community, thanks to their unprecedented combination of spatial, spectral and temporal resolution, as well as their associated open access policy. Due to a sensor design trade-off, images are acquired (and delivered) at different spatial resolutions (10, 20 and 60 m) according to specific sets of wavelengths, with only the four visible and near infrared bands provided at the highest resolution (10 m). Although this is not a limiting factor in general, many applications are emerging in which the resolution enhancement of the 20 m bands may be beneficial, motivating the development of specific super-resolution methods. In this work, we propose to leverage Convolutional Neural Networks (CNNs) to provide a fast, upscalable method for the single-sensor fusion of Sentinel-2 (S2) data, whose aim is to provide a 10 m super-resolution of the original 20 m bands. Experimental results demonstrate that the proposed solution achieves better performance than most state-of-the-art methods, including other deep learning-based ones, with a considerable saving in computational burden.


2020 ◽  
Vol 12 (15) ◽  
pp. 2406 ◽  
Author(s):  
Zhongbin Li ◽  
Hankui K. Zhang ◽  
David P. Roy ◽  
Lin Yan ◽  
Haiyan Huang

Combination of near-daily 3 m red, green, blue and near infrared (NIR) Planetscope reflectance with lower temporal resolution 10 m and 20 m red, green, blue, NIR, red-edge and shortwave infrared (SWIR) Sentinel-2 reflectance provides potential for improved global monitoring. Sharpening the Sentinel-2 reflectance with the Planetscope reflectance may enable near-daily 3 m monitoring in the visible, red-edge, NIR and SWIR. However, there are two major issues, namely the different and spectrally nonoverlapping bands between the two sensors, and the surface changes that may occur in the period between the different sensor acquisitions. Both issues are examined in this study, which considers Sentinel-2 and Planetscope imagery acquired one day apart over three sites where land surface changes due to biomass burning occurred. Two well-established sharpening methods, high pass modulation (HPM) and Model 3 (M3), were used as they are multiresolution analysis methods that preserve the spectral properties of the low spatial resolution Sentinel-2 imagery (which is better radiometrically calibrated than Planetscope) and are relatively computationally efficient, so that they can be applied at large scale. The Sentinel-2 point spread function (PSF) needed for the sharpening was derived analytically from published modulation transfer function (MTF) values. Synthetic Planetscope red-edge and SWIR bands were derived by linear regression of the Planetscope visible and NIR bands against the Sentinel-2 red-edge and SWIR bands. The HPM and M3 sharpening results were evaluated visually and quantitatively using the Q2n metric, which quantifies spectral and spatial distortion. The HPM and M3 methods provided visually coherent and spatially detailed visible and NIR wavelength sharpened results with low distortion (Q2n values > 0.91). The sharpened red-edge and SWIR results were also coherent but had greater distortion (Q2n values > 0.76). Detailed examination at locations where surface changes occurred between the Sentinel-2 and Planetscope acquisitions revealed that the HPM method, unlike the M3 method, could reliably sharpen the bands affected by the change. This is because HPM sharpening uses a per-pixel reflectance ratio in the spatial detail modulation, which is relatively stable to reflectance changes. The paper concludes with a discussion of the implications of this research and the recommendation that HPM sharpening be used, considering its better performance when there are surface changes.
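The per-pixel ratio at the heart of HPM can be sketched compactly (numpy; a box filter stands in for the MTF-derived PSF used in the paper, and nearest-neighbour upsampling for the actual interpolation, both assumptions for illustration):

```python
import numpy as np

def upsample(img, f):
    """Nearest-neighbour upsampling by an integer factor f."""
    return np.kron(img, np.ones((f, f)))

def hpm_sharpen(ms_coarse, pan_fine, f, eps=1e-12):
    # Low-pass version of the fine band at the multispectral resolution
    # (box filter as a stand-in for the sensor PSF).
    h, w = pan_fine.shape
    pan_low = pan_fine.reshape(h // f, f, w // f, f).mean(axis=(1, 3))
    # Per-pixel modulation: fused = MS_up * (pan / pan_low_up).
    # Because the ratio is per-pixel, a reflectance change present in both
    # pan and pan_low largely cancels, which is why HPM is comparatively
    # robust to surface changes between acquisitions.
    ratio = pan_fine / (upsample(pan_low, f) + eps)
    return upsample(ms_coarse, f) * ratio
```

With a spatially flat fine band the ratio is 1 everywhere and the output reduces to the upsampled multispectral band, i.e. the spectral content is preserved and only spatial detail is injected.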

