Canonical Analysis of Sentinel-1 Radar and Sentinel-2 Optical Data

Author(s):  
Allan A. Nielsen ◽  
Rasmus Larsen
2020 ◽  
Vol 5 (1) ◽  
pp. 13
Author(s):  
Negar Tavasoli ◽  
Hossein Arefi

Assessment of forest above-ground biomass (AGB) is critical for managing forests and understanding their role as a source of carbon fluxes. Satellite remote sensing products now offer the chance to map forest biomass and carbon stock. The present study compares the potential of combined ALOS PALSAR and Sentinel-1 SAR data with Sentinel-2 optical data for estimating above-ground biomass and carbon stock using a genetic algorithm-random forest (GA-RF) machine learning algorithm. Polarimetric decompositions, texture characteristics and backscatter coefficients of ALOS PALSAR and Sentinel-1, together with vegetation indices, tasseled cap, texture parameters and principal component analysis (PCA) of Sentinel-2, were used with measured AGB samples to estimate biomass. The coefficients of determination (R²) of the AGB models using the combined ALOS PALSAR and Sentinel-1 data and the Sentinel-2 data were 0.70 and 0.62, respectively. The results showed that combining ALOS PALSAR and Sentinel-1 data to predict AGB with the GA-RF model performed better than Sentinel-2 data.
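As an illustration of the GA-RF idea (a sketch, not the authors' implementation), the snippet below couples a small genetic algorithm that searches feature subsets with a random forest that scores each subset by cross-validated R²; all data, population sizes and rates are synthetic stand-ins for the SAR/optical predictors.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_features = 200, 12          # stand-ins for SAR/optical predictors
X = rng.normal(size=(n_samples, n_features))
agb = 2.0 * X[:, 0] + X[:, 3] + 0.1 * rng.normal(size=n_samples)  # synthetic AGB

def fitness(mask):
    """Cross-validated R² of a random forest on the selected feature subset."""
    if not mask.any():
        return -np.inf
    rf = RandomForestRegressor(n_estimators=30, random_state=0)
    return cross_val_score(rf, X[:, mask], agb, cv=3, scoring="r2").mean()

pop = rng.integers(0, 2, size=(8, n_features)).astype(bool)  # random subsets
for _ in range(4):                                           # GA generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-4:]]                   # selection
    children = []
    for _ in range(4):
        a, b = parents[rng.integers(4)], parents[rng.integers(4)]
        cut = rng.integers(1, n_features)
        child = np.concatenate([a[:cut], b[cut:]])           # crossover
        children.append(child ^ (rng.random(n_features) < 0.1))  # mutation
    pop = np.vstack([parents, children])
best = max(pop, key=fitness)  # best feature subset found
```

In practice the fitness evaluation dominates the run time, which is why GA-RF studies typically keep the forest small during the search and refit a larger forest on the winning subset.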


2017 ◽  
Author(s):  
Andreas Kääb ◽  
Bas Altena ◽  
Joseph Mascaro

Abstract. Satellite measurements of coseismic displacements are typically based on Synthetic Aperture Radar (SAR) interferometry or amplitude tracking, or on optical data such as from Landsat, Sentinel-2, SPOT, ASTER, very-high-resolution satellites, or airphotos. Here, we evaluate a new class of optical satellite images for this purpose: data from cubesats. More specifically, we investigate the PlanetScope cubesat constellation for horizontal surface displacements caused by the 14 November 2016 Mw 7.8 Kaikoura, New Zealand, earthquake. Single PlanetScope scenes are 2–4 m resolution visible and near-infrared frame images of approximately 20–30 km × 9–15 km in size, acquired in continuous sequence along an orbit of approximately 375–475 km height. From single scenes or mosaics from before and after the earthquake we observe surface displacements of up to almost 10 m and estimate a matching accuracy from PlanetScope data of up to ±0.2 pixels (~ ±0.6 m). This accuracy, the daily revisit anticipated for the PlanetScope constellation over the entire land surface of Earth, and a number of other features together offer new possibilities for investigating coseismic and other Earth-surface displacements and managing related hazards and disasters, complementing existing SAR and optical methods. For comparison, and for a better regional overview, we also match the coseismic displacements of the 2016 Kaikoura earthquake using Landsat 8 and Sentinel-2 data.
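The displacement matching described above is commonly done by correlating pre- and post-event image patches. A minimal phase-correlation sketch follows (not the study's actual matching software, and integer-pixel only; production methods oversample the correlation peak to reach the subpixel accuracies quoted above):

```python
import numpy as np

def phase_corr(ref, mov):
    """Offset (integer pixels) that registers `mov` onto `ref`, found as the
    peak of the inverse FFT of the normalized cross-power spectrum."""
    F = np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
    shape = np.array(ref.shape, dtype=float)
    return np.where(peak > shape / 2, peak - shape, peak)  # unwrap negative shifts

rng = np.random.default_rng(1)
pre = rng.random((128, 128))                     # synthetic pre-event scene
post = np.roll(pre, shift=(3, -5), axis=(0, 1))  # simulated coseismic offset
offset = phase_corr(pre, post)                   # shift mapping post back onto pre
# -> array([-3.,  5.])
```

Applied per patch across an image pair, fields of such offsets yield the horizontal displacement maps the abstract refers to.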


2018 ◽  
Vol 10 (10) ◽  
pp. 1642 ◽  
Author(s):  
Kristof Van Tricht ◽  
Anne Gobin ◽  
Sven Gilliams ◽  
Isabelle Piccard

A timely inventory of agricultural areas and crop types is an essential requirement for ensuring global food security and allowing early crop monitoring practices. Satellite remote sensing has proven to be an increasingly reliable tool to identify crop types. With the Copernicus program and its Sentinel satellites, a growing source of satellite remote sensing data is publicly available at no charge. Here, we used joint Sentinel-1 radar and Sentinel-2 optical imagery to create a crop map for Belgium. To ensure homogeneous radar and optical inputs across the country, Sentinel-1 12-day backscatter mosaics were created after incidence-angle normalization, and Sentinel-2 normalized difference vegetation index (NDVI) images were smoothed to yield 10-daily cloud-free mosaics. An optimized random forest classifier predicted the eight crop types with a maximum accuracy of 82% and a kappa coefficient of 0.77. We found that a combination of radar and optical imagery always outperformed a classification based on single-sensor inputs, and that classification performance increased throughout the season until July, when differences between crop types were largest. Furthermore, we showed that the concept of classification confidence derived from the random forest classifier provided insight into the reliability of the predicted class for each pixel, clearly showing that parcel borders have a lower classification confidence. We concluded that the synergistic use of radar and optical data for crop classification led to richer information, increasing classification accuracies compared to optical-only classification. Further work should focus on object-level classification and crop monitoring to exploit the rich potential of combined radar and optical observations.
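The per-pixel classification confidence mentioned above can be illustrated with scikit-learn: `predict_proba` of a random forest returns the fraction of trees voting for each class, and its maximum is a natural confidence score. Features and labels below are synthetic stand-ins for the S1 backscatter and S2 NDVI composites:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# hypothetical per-pixel inputs: six 12-day S1 backscatter mosaics
# plus six 10-daily cloud-free NDVI composites (values synthetic)
n_per_class = 100
X = np.vstack([rng.normal(mu, 0.4, size=(n_per_class, 12)) for mu in (0.0, 1.0, 2.0)])
y = np.repeat([0, 1, 2], n_per_class)          # three crop-type labels
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
proba = clf.predict_proba(X)
confidence = proba.max(axis=1)  # share of trees voting for the winning class
```

Mapped back to pixel locations, this confidence layer is what reveals the lower-reliability band along parcel borders described in the abstract.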


2019 ◽  
Vol 11 (7) ◽  
pp. 752 ◽  
Author(s):  
Zhongchang Sun ◽  
Ru Xu ◽  
Wenjie Du ◽  
Lei Wang ◽  
Dengsheng Lu

Accurate and timely urban land mapping is fundamental to supporting large-area environmental and socio-economic research. Most of the available large-area urban land products are limited to a spatial resolution of 30 m. The fusion of optical and synthetic aperture radar (SAR) data for large-area high-resolution urban land mapping has not yet been widely explored. In this study, we propose a fast and effective urban land extraction method using ascending/descending orbits of Sentinel-1A SAR data and Sentinel-2 MSI (MultiSpectral Instrument, Level 1C) optical data acquired from 1 January 2015 to 30 June 2016. Potential urban land (PUL) was identified first through logical operations on yearly mean and standard deviation composites from a time series of ascending/descending orbits of SAR data. Yearly Normalized Difference Vegetation Index (NDVI) maximum and modified Normalized Difference Water Index (MNDWI) mean composites were generated from Sentinel-2 imagery. The slope image derived from SRTM DEM data was used to mask mountain pixels and reduce false positives in the SAR data over these regions. We applied a region-specific threshold on the PUL to extract the target urban land (TUL), and a global threshold on the MNDWI mean and slope images to extract water bodies and high-slope regions. A majority filter with a 3 × 3 window was applied to the previously extracted results, and the main processing was carried out on the Google Earth Engine (GEE) platform. China was chosen as the testing region to validate the accuracy and robustness of our proposed method, using 224,000 validation points randomly selected from high-resolution Google Earth imagery. Additionally, a total of 735 blocks with a size of 900 × 900 m were randomly selected and used to compare our product's accuracy with the global human settlement layer (GHSL, 2014), GlobeLand30 (2010), and Liu (2015) products.
Our method demonstrated the effectiveness of fusing optical and SAR data for large-area urban land extraction, especially in areas where optical data fail to distinguish urban land from spectrally similar objects. Results show that the average overall, producer's and user's accuracies are 88.03%, 94.50% and 82.22%, respectively.
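A minimal sketch of the PUL step, assuming synthetic backscatter values and hypothetical thresholds in place of the paper's region-specific ones: mean and standard-deviation composites are computed from a yearly SAR stack, combined with logical operations, and cleaned with a 3 × 3 majority filter.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
stack = rng.gamma(2.0, 0.05, size=(24, 64, 64))  # yearly S1 backscatter series
stack[:, 20:40, 20:40] += 0.5                    # bright, temporally stable "urban" patch
mean_c, std_c = stack.mean(axis=0), stack.std(axis=0)

# logical operation on the composites; 0.4 and 0.2 are hypothetical thresholds
pul = (mean_c > 0.4) & (std_c < 0.2)

# 3x3 majority filter: keep a pixel if most of its neighbourhood agrees
tul = ndimage.uniform_filter(pul.astype(float), size=3) > 0.5
```

On GEE the same logic is expressed with `ee.Image` reducers and focal operations, but the composite-then-threshold structure is identical.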


Author(s):  
Aliaksei Makarau ◽  
Rudolf Richter ◽  
Viktoria Zekoll ◽  
Peter Reinartz

Cirrus is one of the most common artifacts in remotely sensed optical data. Unlike low-altitude (1-3 km) clouds, cirrus clouds (8-20 km) are semi-transparent, so the extinction (cirrus influence) of the upward reflected solar radiance can be compensated. The widely employed, almost de-facto method for cirrus compensation is based on the 1.38 μm spectral channel, which measures the upwelling radiance reflected by the cirrus cloud. Knowledge of the cirrus spatial distribution allows the per-channel cirrus attenuation to be estimated and the spectral channels to be compensated. A wide range of existing and expected sensors, however, have no 1.38 μm spectral channel. Data from these sensors can be corrected by the recently developed haze/cirrus removal method. The additive model of the estimated cirrus thickness map (CTM) is applicable for compensating cirrus-conditioned extinction. Numerical and statistical evaluation of the CTM-based cirrus removal on more than 80 Landsat-8 OLI and 30 Sentinel-2 scenes demonstrates close agreement with the 1.38 μm channel-based cirrus removal.
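The additive CTM model can be sketched as follows, assuming a hypothetical per-band scaling factor `gamma` (how strongly cirrus reflectance projects into that channel): the scaled cirrus thickness map is simply subtracted from each spectral band.

```python
import numpy as np

def compensate_cirrus(band, ctm, gamma):
    """Additive cirrus compensation: subtract the scaled cirrus thickness
    map (CTM) from a spectral band. `gamma` is a hypothetical per-band
    scaling, estimated in practice from the scene itself."""
    return np.clip(band - gamma * ctm, 0.0, None)

ctm = np.full((4, 4), 0.02)               # estimated cirrus thickness map
band = np.full((4, 4), 0.10) + 1.0 * ctm  # synthetic TOA reflectance with cirrus added
clear = compensate_cirrus(band, ctm, gamma=1.0)
```

The 1.38 μm approach estimates the CTM directly from the water-vapour-absorbing channel; the method discussed here derives it without that channel, but the subtraction step is the same.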


2017 ◽  
Vol 9 (2) ◽  
pp. 110 ◽  
Author(s):  
Kathrin Naegeli ◽  
Alexander Damm ◽  
Matthias Huss ◽  
Hendrik Wulf ◽  
Michael Schaepman ◽  
...  

Author(s):  
E. Elmoussaoui ◽  
A. Moumni ◽  
A. Lahrouni

Abstract. Forest tree species mapping has become easier due to the global availability of high spatio-temporal resolution images acquired from multiple sensors. Such data can lead to better forest resources management. Machine-learning pixel-based analysis was performed on multi-spectral Sentinel-2 and Synthetic Aperture Radar Sentinel-1 time series integrated with a Digital Elevation Model acquired over the argan forest of Essaouira province, Morocco. The argan tree constitutes a fundamental resource for the populations of this arid area of Morocco. This research aims to use the potential of the combination of multi-sensor data to detect, map and identify the argan tree among other forest species using three machine learning algorithms: Support Vector Machine (SVM), Maximum Likelihood (ML) and Artificial Neural Networks (ANN). The exploited datasets included Sentinel-1 (S1) and Sentinel-2 (S2) time series, a Shuttle Radar Topography Mission Digital Elevation Model (DEM) layer and ground truth data. We tested several scenarios, including single S1-derived features, single S2 time series, and combined S1- and S2-derived layers with the DEM scene. The best results (overall accuracy OA and Kappa coefficient K) were: time series of optical data (NDVI): OA = 86.87%, K = 0.84; time series of SAR data (VV+VH/VV): OA = 45.90%, K = 0.36; combination of optical and SAR time series (NDVI+VH+DEM): OA = 93.01%, K = 0.914; and fusion of the optical time series and DEM layer (NDVI+DEM): OA = 93.25%, K = 0.91. These results indicate that a single sensor (S2) integrated with the DEM layer yielded the highest classification accuracy.
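The best-performing scenario (NDVI time series plus DEM, classified with SVM) can be sketched as below with synthetic values, not the study's data; feature scaling matters here because elevation and NDVI live on very different numeric ranges.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# hypothetical per-pixel stacks: ten NDVI dates plus one DEM elevation value
argan = np.hstack([rng.normal(0.35, 0.05, (80, 10)), rng.normal(400, 60, (80, 1))])
other = np.hstack([rng.normal(0.55, 0.05, (80, 10)), rng.normal(900, 60, (80, 1))])
X = np.vstack([argan, other])
y = np.repeat([0, 1], 80)  # 0 = argan, 1 = other forest species

# scale NDVI and elevation to comparable ranges before the RBF SVM
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
acc = clf.score(X, y)
```

The same stacked-feature matrix can be fed to ML or ANN classifiers for the comparison the study describes; only the estimator changes.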


Author(s):  
B. Tavus ◽  
S. Kocaman ◽  
H. A. Nefeslioglu ◽  
C. Gokceoglu

Abstract. The frequency of flood events has increased in recent years, most probably due to climate change. Flood mapping is thus essential for flood modelling and for hazard and risk analyses, and can be performed using data from optical and microwave satellite sensors. Although optical imagery-based flood analysis methods have often been used for flood assessments before, during and after an event, they have the limitation of cloud coverage. With the increasing temporal availability and spatial resolution of SAR (Synthetic Aperture Radar) satellite sensors, these have become popular for flood detection. On the other hand, their processing may require a high level of expertise, and visual interpretation of the data is also difficult. In this study, a fusion approach combining Sentinel-1 SAR and Sentinel-2 optical data for flood extent mapping was applied to the flood event that occurred on August 8th, 2018, in Ordu Province, Turkey. The features obtained from Sentinel-1 and Sentinel-2 processing results were fused in a random forest supervised classifier. The results show that Sentinel-2 optical data ease the selection of training samples for the flooded areas. In addition, settlement areas can be better extracted from the optical data. However, the Sentinel-2 data suffer from clouds, which prevent mapping of the full flood extent; this can be carried out with the Sentinel-1 data. Different feature combinations were evaluated, and the results were assessed visually. The results are provided in this paper.
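The feature-level fusion described above can be sketched as stacking SAR and optical features into one matrix before training the random forest. Values below are synthetic and the distributions illustrative only, chosen to mimic the physical signal: smooth open water lowers VV backscatter and raises the optical NDWI.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 500
flooded = rng.random(n) < 0.3  # hypothetical training labels

# smooth open water lowers SAR VV backscatter (dB) ...
vv = np.where(flooded, rng.normal(-18, 2, n), rng.normal(-9, 2, n))
# ... while water raises the optical NDWI
ndwi = np.where(flooded, rng.normal(0.4, 0.1, n), rng.normal(-0.2, 0.1, n))

X = np.column_stack([vv, ndwi])  # feature-level fusion of SAR and optical
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, flooded)
```

Where clouds remove the optical column for some pixels, the SAR features alone still carry the flood signal, which is the complementarity the study exploits.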


2020 ◽  
Vol 12 (2) ◽  
pp. 302 ◽  
Author(s):  
Kai Heckel ◽  
Marcel Urban ◽  
Patrick Schratz ◽  
Miguel Mahecha ◽  
Christiane Schmullius

The fusion of microwave and optical data sets is expected to provide great potential for the derivation of forest cover around the globe. As Sentinel-1 and Sentinel-2 are now both operating in twin mode, they can provide an unprecedented data source for building dense spatial and temporal high-resolution time series across a variety of wavelengths. This study investigates (i) the ability of the individual sensors and (ii) their joint potential to delineate forest cover for study sites in two highly varied landscapes located in Germany (temperate dense mixed forests) and South Africa (open savanna woody vegetation and forest plantations). We used multi-temporal Sentinel-1 and single time steps of Sentinel-2 data in combination to derive accurate forest/non-forest (FNF) information via machine-learning classifiers. The forest classification accuracies for the fused data set, estimated using spatial cross-validation (CV) corrected for autocorrelation, were 90.9% and 93.2% for South Africa and Thuringia, respectively. Sentinel-1-only classifications provided the lowest overall accuracy of 87.5%, while Sentinel-2-based classifications led to higher accuracies of 91.9%. Sentinel-2 short-wave infrared (SWIR) channels, biophysical parameters (Leaf Area Index (LAI) and Fraction of Absorbed Photosynthetically Active Radiation (FAPAR)) and the lower spectrum of the Sentinel-1 synthetic aperture radar (SAR) time series were found to be most distinctive in the detection of forest cover. In contrast to homogeneous forest sites, Sentinel-1 time series information improved forest cover predictions in open savanna-like environments with heterogeneous regional features. The presented approach proved to be robust and demonstrated the benefit of fusing optical and SAR data at high spatial resolution.
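Spatial cross-validation of the kind used above can be approximated with scikit-learn's `GroupKFold`, grouping pixels into spatial blocks so that training and test pixels never come from the same block. Coordinates, features and block size below are illustrative, not those of the study:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(0)
n = 600
xy = rng.uniform(0, 100, size=(n, 2))  # pixel map coordinates (km)
# assign each pixel to a 25 x 25 km block; blocks become CV groups
blocks = (xy[:, 0] // 25).astype(int) * 4 + (xy[:, 1] // 25).astype(int)

X = rng.normal(size=(n, 6))                                # stand-in S1/S2 features
y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)   # forest/non-forest labels

scores = cross_val_score(
    RandomForestClassifier(n_estimators=50, random_state=0),
    X, y, groups=blocks, cv=GroupKFold(n_splits=5))
```

Because spatially adjacent pixels are highly autocorrelated, random CV leaks near-duplicates between folds and inflates accuracy; block-wise grouping gives the more honest estimates quoted in the abstract.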

