Mapping and monitoring of landslide-dammed lakes using Sentinel-2 time series - a case study after the 2016 Kaikōura Earthquake in New Zealand

Author(s):  
Lorena Abad ◽  
Daniel Hölbling ◽  
Raphael Spiekermann ◽  
Zahra Dabiri ◽  
Günther Prasicek ◽  
...  

On November 14, 2016, a 7.8 magnitude earthquake struck the Kaikōura region on the South Island of New Zealand. The event triggered numerous landslides, which dammed rivers in the area and led to the formation of hundreds of dammed lakes. Landslide-dammed lakes constitute a natural hazard, given their propensity to breach, which can lead to flooding of downstream settlements and infrastructure. Hence, detecting and monitoring dammed lakes is a key step for risk management strategies. Aerial photographs and helicopter reconnaissance are frequently used for damage assessments following natural hazard events. However, repeated acquisitions of aerial photographs and on-site examinations are time-consuming and expensive. Moreover, such assessments commonly only take place immediately after an event, and long-term monitoring is rarely performed at larger scales.

Satellite imagery can support mapping and monitoring tasks by providing an overview of the affected area at multiple time steps following the main triggering event without deploying major resources. In this study, we present an automated approach to detect landslide-dammed lakes using Sentinel-2 optical data through Google Earth Engine (GEE). Our approach consists of a water detection algorithm adapted from Donchyts et al., 2016 [1], where a dynamic threshold is applied to the Normalized Difference Water Index (NDWI). Water bodies are detected on pre- and post-event monthly mosaics for which the cloud coverage of the composed images is below 30 %, resulting in one pre-event (December 2015) and 14 post-event monthly mosaics. Subsequently, a differencing change detection method is applied between the pre- and post-event mosaics. This allows for continuous monitoring of the lake status and for the detection of new lakes forming in the area at different points in time.

A random sample of lakes delineated from Google Earth high-resolution imagery, acquired right after the Kaikōura earthquake, was used for validation. The pixels categorized as ‘dammed lakes’ were intersected with the validation data set, resulting in a detection rate of 70 % of the delineated lakes. Ten key dams, identified by local authorities as a potential hazard, were further examined and monitored to identify lake area changes at multiple time steps, from December 2016 to March 2019. Taking advantage of the GEE cloud computing capabilities, the proposed automated approach allows fast time series analysis of large areas. It can be applied to other regions where landslide-dammed lakes need to be monitored over long time scales (months – years). Furthermore, the approach could be combined with outburst flood modeling and simulation to support initial rapid risk assessment.

[1] Donchyts, G., Schellekens, J., Winsemius, H., Eisemann, E., & van de Giesen, N. (2016). A 30 m resolution surface water mask including estimation of positional and thematic differences using Landsat 8, SRTM and OpenStreetMap: A case study in the Murray-Darling basin, Australia. Remote Sensing, 8(5).
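
A minimal sketch of the core water-detection step, assuming the GEE Python API: build low-cloud monthly Sentinel-2 mosaics, compute the NDWI, apply a threshold, and difference pre- and post-event water masks. The fixed threshold stands in for the dynamic threshold of Donchyts et al. [1], and the AOI coordinates, collection ID, and per-scene cloud filter are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch: NDWI-based water detection on Sentinel-2 monthly mosaics in the GEE Python API.
# The fixed NDWI threshold and the per-scene cloud filter are simplifying assumptions.
import ee

ee.Initialize()

aoi = ee.Geometry.Rectangle([173.0, -42.6, 174.0, -41.9])  # hypothetical Kaikoura-area bounding box

def monthly_water_mask(year, month, threshold=0.0):
    """Build a low-cloud monthly Sentinel-2 mosaic and return a binary NDWI water mask."""
    start = ee.Date.fromYMD(year, month, 1)
    collection = (ee.ImageCollection('COPERNICUS/S2')
                  .filterDate(start, start.advance(1, 'month'))
                  .filterBounds(aoi)
                  .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30)))
    mosaic = collection.median().clip(aoi)
    ndwi = mosaic.normalizedDifference(['B3', 'B8']).rename('NDWI')  # green vs. NIR
    return ndwi.gt(threshold)

# Pre-/post-event differencing: pixels that are water after the event but not before it.
pre_event = monthly_water_mask(2015, 12)
post_event = monthly_water_mask(2016, 12)
new_lakes = post_event.And(pre_event.Not())
```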

2019 ◽  
Vol 171 ◽  
pp. 36-50 ◽  
Author(s):  
Laura Piedelobo ◽  
David Hernández-López ◽  
Rocío Ballesteros ◽  
Amal Chakhar ◽  
Susana Del Pozo ◽  
...  

2020 ◽  
Vol 12 (4) ◽  
pp. 727 ◽  
Author(s):  
Manuela Hirschmugl ◽  
Janik Deutscher ◽  
Carina Sobe ◽  
Alexandre Bouvet ◽  
Stéphane Mermoz ◽  
...  

Frequent cloud cover and fast regrowth often hamper tropical forest disturbance monitoring with optical data. This study aims at overcoming these limitations by combining dense time series of optical (Sentinel-2 and Landsat 8) and SAR (Sentinel-1) data for forest disturbance mapping at test sites in Peru and Gabon. We compare the accuracies of the individual disturbance maps from optical and SAR time series with the accuracy of the combined map. We further evaluate the detection accuracies by disturbance patch size and by an area-based sampling approach. The results show that the individual optical and SAR based forest disturbance detections are highly complementary, and their combination improves all accuracy measures. The overall accuracies increase by about 3% in both areas, and producer accuracies of the disturbed forest class increase by up to 25% in Peru compared to using only one sensor type. The assessment by disturbance patch size shows that the number of detections of very small disturbances (< 0.2 ha) can almost be doubled by using both data sets: for Gabon 30% as compared to 15.7–17.5%, for Peru 80% as compared to 48.6–65.7%.
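
An illustrative sketch of the combination step, assuming co-registered binary disturbance masks fused with a logical OR so that a pixel counts as disturbed if either sensor type flags it. The OR rule and the synthetic masks are assumptions for illustration; the study's actual fusion rule may differ.

```python
# Sketch: fusing binary forest disturbance masks from optical and SAR time series.
import numpy as np

rng = np.random.default_rng(1)
optical_mask = rng.random((500, 500)) > 0.98   # synthetic optical detections (True = disturbed)
sar_mask = rng.random((500, 500)) > 0.98       # synthetic SAR detections (True = disturbed)

# A pixel is flagged as disturbed if either sensor type detects it.
combined = np.logical_or(optical_mask, sar_mask)

print(f'Optical detections:  {optical_mask.sum()} px')
print(f'SAR detections:      {sar_mask.sum()} px')
print(f'Combined detections: {combined.sum()} px')
```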


2021 ◽  
Vol 13 (9) ◽  
pp. 1853
Author(s):  
Xing Jin ◽  
Ping Tang ◽  
Zheng Zhang

Remote-sensing time-series datasets are significant for global change research and a better understanding of the Earth. However, remote-sensing acquisitions often provide sparse time series due to sensor resolution limitations and environmental factors such as cloud noise for optical data. Image transformation is a method often used to deal with this issue. This paper considers three deep convolutional networks for learning the complex mapping between sequence images: an adaptive filter generation network (AdaFG), a convolutional long short-term memory network (CLSTM), and a cycle-consistent generative adversarial network (CyGAN), which are used to construct sequence image datasets. The AdaFG network uses separable 1D convolution kernels instead of 2D kernels to capture the spatial characteristics of the input sequence images and is trained end-to-end on sequence images. The CLSTM network can map between different images using the state information of multiple time-series images. The CyGAN network can map an image from a source domain to a target domain without additional information. Our experiments, performed with unmanned aerial vehicle (UAV) and Landsat-8 datasets, show that deep convolutional networks are effective for producing high-quality time-series image datasets and that data-driven deep convolutional networks can better simulate complex and diverse nonlinear data information.
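
A minimal PyTorch sketch of the separable-kernel idea behind AdaFG: approximating a K x K 2D convolution with a vertical (K x 1) and a horizontal (1 x K) kernel pair, which reduces the per-kernel parameter count from K*K to 2*K. The layer shapes are illustrative and do not reproduce the authors' per-pixel adaptive kernel generation.

```python
# Sketch: separable 1D kernels as a cheaper stand-in for a full 2D kernel.
import torch
import torch.nn as nn

K = 11  # kernel size

# Full 2D convolution: K*K weights per input/output channel pair.
conv_2d = nn.Conv2d(3, 3, kernel_size=K, padding=K // 2)

# Separable approximation: a vertical pass followed by a horizontal pass.
conv_sep = nn.Sequential(
    nn.Conv2d(3, 3, kernel_size=(K, 1), padding=(K // 2, 0)),
    nn.Conv2d(3, 3, kernel_size=(1, K), padding=(0, K // 2)),
)

x = torch.randn(1, 3, 64, 64)  # dummy image batch
print(conv_2d(x).shape, conv_sep(x).shape)  # both preserve the 64 x 64 spatial size

n_2d = sum(p.numel() for p in conv_2d.parameters())
n_sep = sum(p.numel() for p in conv_sep.parameters())
print(f'2D kernel params: {n_2d}, separable params: {n_sep}')
```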


Author(s):  
J. P. Clemente ◽  
G. Fontanelli ◽  
G. G. Ovando ◽  
Y. L. B. Roa ◽  
A. Lapini ◽  
...  

Abstract. Remote sensing has become an important means to assess crop areas, especially for the identification of crop types. Google Earth Engine (GEE) is a free platform that provides a large number of satellite images from different constellations. Moreover, GEE provides pixel-based classifiers, which are used for mapping agricultural areas. The objective of this work is to evaluate the performance of different classification algorithms such as Minimum Distance (MD), Random Forest (RF), Support Vector Machine (SVM), Classification and Regression Trees (CART) and Naïve Bayes (NB) on an agricultural area in Tuscany (Italy). Four different scenarios were implemented in GEE, combining different information such as optical and Synthetic Aperture Radar (SAR) data, indices and time series. Among the five classifiers used, the best performers were RF and SVM. Integrating Sentinel-1 (S1) and Sentinel-2 (S2) data slightly improves the classification compared to classifications based on S2 imagery alone. The use of time series substantially improves supervised classifications. The analysis carried out so far lays the foundation for the integration of time series of SAR and optical data.
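
A minimal sketch, assuming the GEE Python API, of one such classification scenario: a Random Forest classifier trained on a Sentinel-2 composite with Sentinel-1 VV/VH bands stacked. The asset paths, band selections, date range, and property names are placeholders, not the authors' exact setup.

```python
# Sketch: pixel-based Random Forest crop classification on an S2 + S1 stack in GEE.
import ee

ee.Initialize()

aoi = ee.FeatureCollection('users/example/tuscany_aoi')        # hypothetical asset
training = ee.FeatureCollection('users/example/crop_samples')  # hypothetical labeled points

s2 = (ee.ImageCollection('COPERNICUS/S2_SR')
      .filterBounds(aoi).filterDate('2020-04-01', '2020-09-30')
      .median().select(['B2', 'B3', 'B4', 'B8']))
s1 = (ee.ImageCollection('COPERNICUS/S1_GRD')
      .filterBounds(aoi).filterDate('2020-04-01', '2020-09-30')
      .filter(ee.Filter.eq('instrumentMode', 'IW'))
      .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VH'))
      .median().select(['VV', 'VH']))

stack = s2.addBands(s1)
samples = stack.sampleRegions(collection=training, properties=['crop_class'], scale=10)

classifier = (ee.Classifier.smileRandomForest(numberOfTrees=100)
              .train(features=samples, classProperty='crop_class',
                     inputProperties=stack.bandNames()))
classified = stack.classify(classifier)
```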


2021 ◽  
Author(s):  
Hongye Cao ◽  
Ling Han ◽  
Liangzhi Li

Abstract. Remote sensing dynamic monitoring methods often benefit from a dense time series of observations. To enhance these time series, it is sometimes necessary to integrate data from multiple satellite systems. For more than 40 years, Landsat has provided the longest time record of space-based land surface observations, and the successful launch of the Landsat-8 Operational Land Imager (OLI) sensor in 2013 continues this tradition. However, the 16-day revisit period of Landsat images limits the ability to measure subtle and transient changes. The European Space Agency (ESA) launched the Sentinel-2A satellite in 2015. The satellite carries a Multispectral Instrument (MSI) sensor that provides a 10–20 m spatial resolution data source, offering an opportunity to complement the Landsat data record. Together, Sentinel-2A MSI, Landsat-7 ETM+, and Landsat-8 OLI data provide multispectral global coverage at 10 m to 30 m resolution with further reduced revisit intervals. However, there are many differences between the sensors that need to be taken into account to use these data together reliably. The purpose of this study is to evaluate the potential of integrating surface reflectance data from Landsat-7, Landsat-8 and Sentinel-2 archived in the Google Earth Engine (GEE) cloud platform. To test and quantify the differences between these sensors, hundreds of thousands of surface reflectance samples from sensor pairs were collected over China. Based on the identified differences in surface reflectance between the sensor pairs, a cross-sensor conversion model was proposed, i.e., an adjustment equation was fitted using an ordinary least squares (OLS) linear regression method to convert the Sentinel-2 reflectance values closer to the Landsat-7 or Landsat-8 values. The regression results show that the Sentinel MSI data are spectrally comparable to both types of Landsat image data, just as the Landsat sensors are comparable to each other. Before the sensors were harmonized, the root mean square error (RMSE) values between MSI and Landsat spectral values ranged from 0.014 to 0.037, and the RMSE values between OLI and ETM+ ranged from 0.019 to 0.039. After harmonization, RMSE values between MSI and Landsat spectral values ranged from 0.011 to 0.026, and RMSE values between OLI and ETM+ ranged from 0.013 to 0.034. The fitted adjustment equations were also compared to the HLS (Harmonized Landsat-8 Sentinel-2) global fitted equations (Sentinel-2 to Landsat-8) published by the National Aeronautics and Space Administration (NASA) and were found to be significantly different, increasing the likelihood that such adjustments need to be fitted on a regional basis. This study concludes that, despite the differences between these datasets, it appears feasible to integrate them by applying a per-band linear regression correction.
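
A minimal sketch of the per-band cross-sensor adjustment: fitting an ordinary least squares (OLS) line that maps Sentinel-2 MSI reflectance toward Landsat-8 OLI reflectance for a single band. The sample values are placeholders for paired, near-coincident surface reflectance observations; the study fits each band separately over a much larger sample.

```python
# Sketch: OLS cross-sensor adjustment for one reflectance band (values are synthetic).
import numpy as np

# Hypothetical paired reflectance samples for one band (e.g. red), shape (n,)
msi_red = np.array([0.05, 0.08, 0.12, 0.21, 0.33, 0.45])
oli_red = np.array([0.06, 0.09, 0.12, 0.20, 0.31, 0.44])

# OLS fit: oli ≈ slope * msi + intercept
slope, intercept = np.polyfit(msi_red, oli_red, deg=1)

adjusted = slope * msi_red + intercept
rmse_before = np.sqrt(np.mean((msi_red - oli_red) ** 2))
rmse_after = np.sqrt(np.mean((adjusted - oli_red) ** 2))
print(f'slope={slope:.3f}, intercept={intercept:.4f}')
print(f'RMSE before adjustment: {rmse_before:.4f}, after: {rmse_after:.4f}')
```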


2017 ◽  
Vol 17 (5) ◽  
pp. 627-639 ◽  
Author(s):  
Andreas Kääb ◽  
Bas Altena ◽  
Joseph Mascaro

Abstract. Satellite measurements of coseismic displacements are typically based on synthetic aperture radar (SAR) interferometry or amplitude tracking, or on optical data such as from Landsat, Sentinel-2, SPOT, ASTER, very high-resolution satellites, or air photos. Here, we evaluate a new class of optical satellite images for this purpose – data from cubesats. More specifically, we investigate the PlanetScope cubesat constellation for measuring horizontal surface displacements caused by the 14 November 2016 Mw 7.8 Kaikoura, New Zealand, earthquake. Single PlanetScope scenes are 2–4 m resolution visible and near-infrared frame images of approximately 20–30 km × 9–15 km in size, acquired in continuous sequence along an orbit of approximately 375–475 km height. From single scenes or mosaics from before and after the earthquake, we observe surface displacements of up to almost 10 m and estimate matching accuracies from PlanetScope data between ±0.25 and ±0.7 pixels (∼ ±0.75 to ±2.0 m), depending on time interval and image product type. The most optimistic accuracy estimate of ±0.25 pixels may in fact be typical for the final, sun-synchronous, and near-polar-orbit PlanetScope constellation when unrectified data are used for matching. This accuracy, the daily revisit anticipated for the PlanetScope constellation over the entire land surface of Earth, and a number of other features together offer new possibilities for investigating coseismic and other Earth surface displacements and for managing related hazards and disasters, and they complement existing SAR and optical methods. For comparison and for a better regional overview, we also match the coseismic displacements of the 2016 Kaikoura earthquake using Landsat 8 and Sentinel-2 data.
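
A minimal sketch of the matching principle: estimating a sub-pixel offset between pre- and post-event image chips by phase correlation, the same family of image-matching techniques used to track coseismic displacements from repeat optical imagery. The chips here are synthetic placeholders, not PlanetScope data, and the 3 m ground sampling distance is assumed for illustration.

```python
# Sketch: sub-pixel offset estimation between pre- and post-event image chips.
import numpy as np
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(42)
pre = rng.random((128, 128))                     # pre-event image chip (synthetic)
post = np.roll(pre, shift=(3, -2), axis=(0, 1))  # post-event chip, shifted 3 px down, 2 px left

# Estimated (row, col) shift that registers `post` onto `pre`, refined to 1/10 pixel.
shift, error, _ = phase_cross_correlation(pre, post, upsample_factor=10)
print(f'Estimated shift (rows, cols): {shift}')  # approx. [-3.  2.]

# Convert the offset to metres for a hypothetical 3 m ground sampling distance.
gsd = 3.0
print(f'Displacement magnitude: {np.hypot(*shift) * gsd:.1f} m')
```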


2020 ◽  
Vol 12 (11) ◽  
pp. 1876 ◽  
Author(s):  
Katsuto Shimizu ◽  
Tetsuji Ota ◽  
Nobuya Mizoue ◽  
Hideki Saito

Developing accurate methods for estimating forest structures is essential for efficient forest management. The high spatial and temporal resolution data acquired by CubeSat satellites have desirable characteristics for mapping large-scale forest structural attributes. However, most studies have used a median composite or a single image for analyses, and the multi-temporal use of CubeSat data may improve prediction accuracy. This study evaluates the capability of PlanetScope CubeSat data to estimate canopy height derived from airborne Light Detection and Ranging (LiDAR), comparing the results with estimates from Sentinel-2 and Landsat 8 data. Random forest (RF) models using a single composite, multi-seasonal composites, and time-series data were investigated at spatial resolutions of 3, 10, 20, and 30 m. The highest prediction accuracy was obtained by the PlanetScope multi-seasonal composites at 3 m (relative root mean squared error: 51.3%) and by Sentinel-2 multi-seasonal composites at the other spatial resolutions (40.5%, 35.2%, and 34.2% for 10, 20, and 30 m, respectively). The results show that, in the median, RF models using multi-seasonal composites are 1.4% more accurate than those using harmonic metrics derived from time-series data. PlanetScope is recommended for canopy height mapping at finer spatial resolutions. However, the unique spatial and temporal characteristics of PlanetScope data should be further investigated for operational forest monitoring.
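
A minimal sketch, assuming scikit-learn, of the modeling step: a Random Forest regression of LiDAR-derived canopy height from multi-seasonal spectral composites. The feature and target arrays are synthetic placeholders for per-pixel samples (e.g. seasonal band medians versus LiDAR canopy height), not the study's data.

```python
# Sketch: Random Forest regression of canopy height from multi-seasonal composites.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.random((1000, 16))                                  # e.g. 4 seasons x 4 bands of reflectance
y = 30 * X[:, 0] + 10 * X[:, 5] + rng.normal(0, 2, 1000)    # synthetic canopy height (m)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X_train, y_train)

pred = rf.predict(X_test)
rmse = np.sqrt(mean_squared_error(y_test, pred))
print(f'RMSE: {rmse:.2f} m, relative RMSE: {100 * rmse / y_test.mean():.1f} %')
```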

