“sen2r”: An R toolbox for automatically downloading and preprocessing Sentinel-2 satellite data

2020 ◽  
Vol 139 ◽  
pp. 104473 ◽  
Author(s):  
Luigi Ranghetti ◽  
Mirco Boschetti ◽  
Francesco Nutini ◽  
Lorenzo Busetto
Author(s):  
Gordana Kaplan ◽  
Ugur Avdan

The benefits of wetlands include, but are not limited to, their capacity to store floodwater and improve water quality, their provision of wildlife habitat and support for biodiversity, and their aesthetic value. Over the past few decades, remote sensing and geographical information technologies have proven to be useful and frequently applied tools for monitoring and mapping wetlands. Combining optical and microwave satellite data can provide significant information about the biophysical characteristics of wetlands and wetland vegetation, and fusing data from different sensors, such as radar and optical imagery, can increase wetland classification accuracy. In this paper we investigate the fusion of two fine-spatial-resolution satellite datasets, the optical Sentinel-2 and the synthetic aperture radar Sentinel-1, for mapping wetlands. The Balikdami wetland, located in the Anatolian part of Turkey, was selected as the study area. Both Sentinel-1 and Sentinel-2 images require pre-processing before use; after pre-processing, several vegetation indices calculated from the Sentinel-2 bands were added to the dataset, and an object-based classification was performed. For the accuracy assessment, a number of random points were distributed over the study area. In addition, the results were compared with Unmanned Aerial Vehicle (UAV) data collected on the same date as the Sentinel-2 overpass and three days before the Sentinel-1 overpass. The accuracy assessment showed that the wetland classification using both multispectral and microwave data was significant and satisfactory. The fusion of the optical and radar data yielded high wetland mapping accuracy, with an overall accuracy of approximately 90% for the object-based classification. Compared against the high-resolution UAV data, the classification gives promising results for mapping and monitoring not only the wetland itself but also the sub-classes of the study area. For future research, multi-temporal imagery and terrain data collection are recommended.
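
The workflow described above (deriving vegetation indices from the Sentinel-2 bands, stacking them with the Sentinel-1 backscatter, and classifying the combined feature set) can be illustrated with a minimal sketch. The arrays, class labels and the pixel-based random forest below are hypothetical stand-ins, intended only to show the general shape of an optical/SAR fusion classification; the paper itself uses an object-based classifier.

```python
# Minimal sketch of an optical/SAR fusion classification, similar in spirit
# to the workflow described above but not the authors' exact chain.
# All arrays are synthetic stand-ins for pre-processed, co-registered
# Sentinel-2 reflectance and Sentinel-1 backscatter rasters.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
h, w = 100, 100                                   # raster size (hypothetical)

red = rng.uniform(0.0, 0.3, (h, w))               # Sentinel-2 red reflectance
nir = rng.uniform(0.1, 0.6, (h, w))               # Sentinel-2 NIR reflectance
vv  = rng.uniform(-20.0, -5.0, (h, w))            # Sentinel-1 VV backscatter (dB)
vh  = rng.uniform(-25.0, -10.0, (h, w))           # Sentinel-1 VH backscatter (dB)

# Vegetation index derived from the optical bands
ndvi = (nir - red) / (nir + red + 1e-9)

# Fuse optical, index and radar layers into one per-pixel feature matrix
features = np.stack([red, nir, ndvi, vv, vh], axis=-1).reshape(-1, 5)

# Hypothetical training labels (e.g. water, emergent vegetation, bare soil)
labels = rng.integers(0, 3, h * w)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(features, labels)
wetland_map = clf.predict(features).reshape(h, w)
```

A real application would replace the synthetic arrays with pre-processed imagery, segment the scene into objects before classification, and validate the map against the random reference points and UAV data mentioned above.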


Author(s):  
Mostafa Kabolizadeh ◽  
Kazem Rangzan ◽  
Sajad Zareie ◽  
Mohsen Rashidian ◽  
Hossein Delfan

Author(s):  
Radha Saradhi Inteti ◽  
Venkata Ravibabu Mandla ◽  
Jagadeeswara Rao Peddada ◽  
Nedun Ramesh

2020 ◽  
Vol 12 (19) ◽  
pp. 3209
Author(s):  
Yunan Luo ◽  
Kaiyu Guan ◽  
Jian Peng ◽  
Sibo Wang ◽  
Yizhi Huang

Remote sensing datasets with both high spatial and high temporal resolution are critical for monitoring and modeling the dynamics of land surfaces. However, no current satellite sensor can simultaneously achieve both high spatial resolution and high revisit frequency. Therefore, the integration of different sources of satellite data to produce a fusion product has become a popular solution to address this challenge. Many methods have been proposed to generate synthetic images with rich spatial detail and high temporal frequency by combining two types of satellite datasets, usually frequent coarse-resolution images (e.g., MODIS) and sparse fine-resolution images (e.g., Landsat). In this paper, we introduce STAIR 2.0, a new fusion method that extends the previous STAIR fusion framework to fuse three types of satellite datasets: MODIS, Landsat, and Sentinel-2. In STAIR 2.0, input images are first processed with a gap-filling algorithm to impute missing-value pixels caused by clouds or sensor mechanical issues. The multiple refined time series are then integrated stepwise, from coarse to fine resolution, ultimately providing synthetic daily, high-resolution surface reflectance observations. We applied STAIR 2.0 to generate a 10-m, daily, cloud-/gap-free time series covering the 2017 growing season of Saunders County, Nebraska. Moreover, the framework is generic and can be extended to integrate more types of satellite data sources, further improving the quality of the fusion product.
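
A rough sketch of the two generic steps described above, gap filling and coarse-to-fine fusion, is given below. It assumes the coarse series has already been resampled onto the fine grid, fills gaps by linear interpolation in time, and propagates the coarse-scale temporal change onto a single clear fine image. This illustrates the general idea only; it is not the STAIR 2.0 algorithm.

```python
# Generic gap filling + coarse-to-fine fusion sketch (not STAIR 2.0 itself).
# "coarse" is assumed to be already reprojected/resampled to the fine grid.
import numpy as np

def fill_gaps(series):
    """Linearly interpolate NaN gaps along the time axis of a (t, h, w) cube."""
    t = np.arange(series.shape[0])
    filled = series.copy()
    for i in range(series.shape[1]):
        for j in range(series.shape[2]):
            pix = filled[:, i, j]                 # view into the cube
            bad = np.isnan(pix)
            if bad.any() and (~bad).sum() >= 2:
                pix[bad] = np.interp(t[bad], t[~bad], pix[~bad])
    return filled

def fuse(coarse, fine, fine_clear_idx):
    """Predict a daily fine-resolution series from one clear fine image plus
    the day-to-day change observed in the coarse series."""
    delta = coarse - coarse[fine_clear_idx]       # coarse temporal change
    return fine[fine_clear_idx] + delta           # propagate change to fine base

# Synthetic example: 30 daily coarse images and one clear fine image (day 10)
days, h, w = 30, 64, 64
rng = np.random.default_rng(1)
coarse = rng.uniform(0.1, 0.4, (days, h, w))
coarse[5, :10, :10] = np.nan                      # simulated cloud gap
fine = np.full((days, h, w), np.nan)
fine[10] = rng.uniform(0.1, 0.4, (h, w))

daily_fine = fuse(fill_gaps(coarse), fine, fine_clear_idx=10)
```

The published framework refines this idea considerably, for example by integrating several sensors stepwise and by using more sophisticated gap-filling, but the coarse-change-plus-fine-base structure conveys why frequent coarse observations can drive a daily fine-resolution product.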


2019 ◽  
Vol 11 (19) ◽  
pp. 2191 ◽  
Author(s):  
Encarni Medina-Lopez ◽  
Leonardo Ureña-Fuentes

The aim of this work is to obtain high-resolution values of sea surface salinity (SSS) and temperature (SST) in the global ocean by using raw satellite data (i.e., without any band data pre-processing or atmospheric correction). Sentinel-2 Level-1C Top of Atmosphere (TOA) reflectance data are used to obtain accurate SSS and SST information. A deep neural network is built to link the band information with in situ data from different buoys, vessels, drifters, and other platforms around the world. The neural network used in this paper includes shortcuts, providing improved performance compared with the equivalent feed-forward architecture. The in situ information used as input for the network has been obtained from the Copernicus Marine In Situ Service. Sentinel-2 platform-centred band data have been processed using Google Earth Engine in areas of 100 m × 100 m. Accurate salinity values are estimated for the first time independently of temperature. Salinity results rely only on direct satellite observations, although they present a clear dependency on temperature ranges. Results show the neural network has good interpolation and extrapolation capabilities. Test results present correlation coefficients of 82% and 84% for salinity and temperature, respectively. The most common error is 0.4 °C for SST and 0.4 PSU for SSS. The sensitivity analysis shows that outliers are present in areas where the number of observations is very low. The network is finally applied over a complete Sentinel-2 tile, presenting sensible patterns for river-sea interaction as well as seasonal variations. The methodology presented here is relevant for detailed coastal and oceanographic applications, reducing the time needed for data pre-processing, and it is applicable to a wide range of satellites, as the information is obtained directly from TOA data.
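
A minimal sketch of a feed-forward regressor with shortcut (residual) connections, mapping TOA band reflectances to SSS and SST, is given below. The 13-band input, the layer widths and the synthetic training data are assumptions made for illustration; this is not the authors' exact architecture or training setup.

```python
# Sketch of a regressor with shortcut connections mapping Sentinel-2 TOA band
# reflectances to [SSS, SST]. Layer sizes and data are hypothetical.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, width):
        super().__init__()
        self.fc1 = nn.Linear(width, width)
        self.fc2 = nn.Linear(width, width)

    def forward(self, x):
        h = torch.relu(self.fc1(x))
        return torch.relu(x + self.fc2(h))        # shortcut: add the input back

class SalinityTempNet(nn.Module):
    def __init__(self, n_bands=13, width=64, n_blocks=3):
        super().__init__()
        self.inp = nn.Linear(n_bands, width)
        self.blocks = nn.Sequential(*[ResidualBlock(width) for _ in range(n_blocks)])
        self.out = nn.Linear(width, 2)            # outputs: [SSS (PSU), SST (deg C)]

    def forward(self, x):
        return self.out(self.blocks(torch.relu(self.inp(x))))

# Synthetic stand-in for matched TOA reflectance / in situ pairs
x = torch.rand(256, 13)                           # 13 TOA band reflectances
y = torch.rand(256, 2)                            # normalised in situ SSS, SST

model = SalinityTempNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for _ in range(100):                              # a few training steps
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```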


2020 ◽  
Author(s):  
Victor Bacu ◽  
Teodor Stefanut ◽  
Dorian Gorgan

Agricultural management relies on good, comprehensive and reliable information on the environment and, in particular, on the characteristics of the soil. Soil composition, humidity and temperature can fluctuate over time, leading to migration of plant crops, changes in the schedule of agricultural work, and changes in the chemical treatment of soil. Various techniques are used to monitor soil conditions and agricultural activities, but most of them are based on field measurements. Satellite data open up a wide range of solutions based on higher-resolution images (i.e. spatial, spectral and temporal resolution). Because of this high resolution, satellite data require powerful computing resources and complex algorithms. Up-to-date, high-resolution soil maps, and direct access to this information in a versatile and convenient manner, are essential for pedology and agriculture experts, farmers and soil monitoring organizations.

Unfortunately, satellite image processing and interpretation are very particular to each area, time and season, and must be calibrated with real field measurements collected periodically. To obtain reasonably good soil classification accuracy at very high resolution, without interpolating an insufficient number of measurements, prediction based on artificial intelligence techniques can be used. The use of machine learning techniques here is still largely unexplored, and one of the major challenges is the scalability of the soil classification models along three main directions: (a) adding new spatial features (i.e. satellite wavelength bands, geospatial parameters, spatial features); (b) scaling from local to global geographical areas; and (c) temporal complementarity (i.e. building up the soil description from samples of satellite data acquired over time: in spring, in summer, in another year, etc.).

The presentation analyses some experiments and highlights the main issues in developing a soil classification model based on Sentinel-2 satellite data, machine learning techniques and high-performance computing infrastructures. The experiments mainly concern the feature and temporal scalability of the soil classification models (a minimal sketch of the temporal feature-stacking idea is given after the references below). The research is carried out using the HORUS platform [1] and the HorusApp application [2], [3], which allow experts to scale the computation over a cloud infrastructure.

References:

[1] Gorgan D., Rusu T., Bacu V., Stefanut T., Nandra N., "Soil Classification Techniques in Transylvania Area Based on Satellite Data". World Soils 2019 Conference, 2-3 July 2019, ESA-ESRIN, Frascati, Italy (2019).

[2] Bacu V., Stefanut T., Gorgan D., "Building Soil Classification Maps Using HorusApp and Sentinel-2 Products", Proceedings of the Intelligent Computer Communication and Processing Conference (ICCP), IEEE (2019).

[3] Bacu V., Stefanut T., Nandra N., Rusu T., Gorgan D., "Soil Classification Based on Sentinel-2 Products Using HorusApp Application", Geophysical Research Abstracts, Vol. 21, EGU2019-15746, EGU General Assembly (2019).
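
As a hedged illustration of scalability direction (c), temporal complementarity, the sketch below stacks per-band features from two hypothetical acquisition dates into a single feature matrix and trains a classifier. Band values, dates, class labels and the random forest model are assumptions made for illustration; they do not reproduce the HORUS/HorusApp processing chain.

```python
# Minimal sketch of temporal feature stacking for soil classification.
# Two hypothetical Sentinel-2 acquisitions (spring, summer) are concatenated
# per pixel into one feature vector; a random forest stands in for whichever
# classifier the platform actually uses. All data here are synthetic, so the
# reported accuracy is only a placeholder.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n_pixels, n_bands = 5000, 10                     # hypothetical sample and band counts

spring = rng.uniform(0.0, 0.5, (n_pixels, n_bands))   # spring reflectances
summer = rng.uniform(0.0, 0.5, (n_pixels, n_bands))   # summer reflectances

# Temporal complementarity: concatenate the two dates into one feature vector
features = np.hstack([spring, summer])                 # shape (n_pixels, 20)
soil_class = rng.integers(0, 4, n_pixels)              # hypothetical soil classes

X_train, X_test, y_train, y_test = train_test_split(
    features, soil_class, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

Adding further acquisition dates or new spatial features (directions (a) and (c)) simply widens the feature matrix, which is where the computational scaling discussed above becomes relevant.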

