High temporal resolution rainfall–runoff modeling using long-short-term-memory (LSTM) networks

Author(s): Wei Li, Amin Kiaghadi, Clint Dawson


2021, Vol 3
Author(s): Amir Sahraei, Tobias Houska, Lutz Breuer

Recent advances in laser spectroscopy have made it feasible to measure stable isotopes of water at high temporal resolution (i.e., sub-daily). High-resolution data allow the identification of fine-scale, short-term transport and mixing processes that are not detectable at coarser resolutions. Despite such advantages, routine operational and long-term sampling of stream and groundwater sources at high temporal resolution is still far from common. Methods that can interpolate infrequently measured data at multiple sampling sites would therefore be an important step forward. This study investigates the application of a Long Short-Term Memory (LSTM) deep learning model to predict complex and non-linear high-resolution (3 h) isotope concentrations of multiple stream and groundwater sources under different land uses and hillslope positions in the Schwingbach Environmental Observatory (SEO), Germany. The main objective of this study is to explore the prediction performance of an LSTM that is trained on multiple sites, with a set of explanatory data that are more straightforward and less expensive to measure than the stable isotopes of water. The explanatory data consist of meteorological data, catchment wetness conditions, and natural tracers (i.e., water temperature, pH and electrical conductivity). We analyse the model's sensitivity to different input data and sequence lengths. To ensure an efficient model performance, a Bayesian optimization approach is employed to optimize the hyperparameters of the LSTM. Our main finding is that the LSTM allows stable isotopes of stream and groundwater to be predicted using only a short (6 h) sequence of measured water temperature, pH and electrical conductivity. The best-performing LSTM achieved, averaged over all sampling sites, an RMSE of 0.7‰, an MAE of 0.4‰, an R² of 0.9 and an NSE of 0.7. The LSTM can be used to predict and interpolate continuous isotope concentration time series, either to fill data gaps or where continuous data acquisition is not feasible. This is valuable in practice because these tracers are still much cheaper to measure than stable isotopes of water and can be monitored continuously with relatively little maintenance.
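A minimal sketch of such a model, assuming a PyTorch implementation (this is not the authors' code; the two-timestep 6 h window at 3 h resolution, the feature order, and the layer sizes are illustrative assumptions):

    import torch
    import torch.nn as nn

    class TracerLSTM(nn.Module):
        """Maps a short natural-tracer sequence to one stable-isotope value."""
        def __init__(self, n_features=3, hidden_size=64):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
            self.head = nn.Linear(hidden_size, 1)  # predicted isotope value (per mil)

        def forward(self, x):  # x: (batch, seq_len, n_features)
            out, _ = self.lstm(x)
            return self.head(out[:, -1, :])  # regress from the last hidden state

    # At 3 h resolution, a 6 h window is two timesteps of
    # (water temperature, pH, electrical conductivity).
    model = TracerLSTM()
    window = torch.randn(8, 2, 3)  # batch of 8 six-hour windows
    print(model(window).shape)     # torch.Size([8, 1])

Hyperparameters such as hidden_size and the sequence length are exactly the quantities the abstract's Bayesian optimization step would tune.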


2019, Vol 11 (23), pp. 2784
Author(s): Alysha van Duynhoven, Suzana Dragićević

Land cover change (LCC) is typically characterized by infrequent changes over space and time. Data-driven methods such as deep learning (DL) approaches have proven effective in many domains for predictive and classification tasks. When applied to geospatial data, sequential DL methods such as long short-term memory (LSTM) have yielded promising results in remote sensing and GIScience studies. However, the characteristics of the geospatial datasets selected for use with these methods have important implications for method performance. The number of available data layers, the rate of LCC, and the errors inherent in classification procedures are all expected to influence model performance, yet it is unknown how these characteristics affect compatibility with the LSTM method. As such, the main objective of this study is to explore the capacity of LSTMs to forecast patterns that have emerged from LCC dynamics given varying temporal resolutions, persistent land cover classes, and auxiliary data layers pertaining to classification confidence. Stacked LSTM modeling approaches are applied to a 17-year MODIS land cover dataset covering the province of British Columbia, Canada. These geospatial data are reclassified into four major land cover (LC) classes during pre-processing. The evaluation considers the dataset at variable temporal resolutions to demonstrate the significance of geospatial data characteristics for LSTM performance in several scenarios. Results indicate that LSTMs can be used to forecast LCC patterns when there are few limitations on the temporal intervals of the datasets provided. Likewise, the study demonstrates improved performance measures when some classes do not change. Furthermore, providing classification confidence data as ancillary input also improved results when the number of timesteps or the temporal resolution is limited. This study contributes to future applications of DL and LSTM methods for forecasting LCC.
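The stacked architecture with an ancillary confidence band can be pictured as follows (a hypothetical PyTorch sketch; the layer count, hidden size, and confidence encoding are assumptions, not the study's configuration):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    N_CLASSES = 4  # the four reclassified MODIS land cover classes

    class StackedLCCLSTM(nn.Module):
        def __init__(self, use_confidence=True, hidden=32):
            super().__init__()
            n_in = N_CLASSES + (1 if use_confidence else 0)
            self.lstm = nn.LSTM(n_in, hidden, num_layers=2, batch_first=True)  # "stacked"
            self.cls = nn.Linear(hidden, N_CLASSES)

        def forward(self, x):  # x: (pixels, timesteps, n_in)
            out, _ = self.lstm(x)
            return self.cls(out[:, -1, :])  # logits for the next timestep's class

    # 100 pixels, 16 annual steps: one-hot classes plus a confidence band.
    classes = F.one_hot(torch.randint(0, N_CLASSES, (100, 16)), N_CLASSES).float()
    confidence = torch.rand(100, 16, 1)
    x = torch.cat([classes, confidence], dim=-1)
    print(StackedLCCLSTM()(x).shape)  # torch.Size([100, 4])

Dropping the confidence channel (use_confidence=False) gives the ablation the study uses to measure the value of the ancillary layer.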


2019
Author(s): Weihong Liao, Zhaokai Yin, Ruojia Wang, Xiaohui Lei

2021
Author(s): Pai-Feng Teng, John Nieber

Flooding is one of the most financially devastating natural hazards in the world. Studying storage-discharge relations has the potential to improve existing flood forecasting systems, which are based on rainfall-runoff models. This presentation will assess the non-linear relation between daily water storage (ΔS) and discharge (Q) simulated by physically based hydrological models at the Rum River Watershed, a HUC-8 watershed in Minnesota, between 1995 and 2015, by training Long Short-Term Memory (LSTM) networks and other machine learning (ML) algorithms. Currently, linear regression models do not adequately represent the relationship between the simulated total ΔS and total Q at the HUC-8 watershed (R² = 0.3667). Since ML algorithms can represent arbitrary non-linear functions between predictors and predictands, they will be used to improve the accuracy of the modelled storage-discharge dynamics. This research will mainly use LSTM networks, a time-series deep learning architecture that has already been used to predict rainfall-runoff relations. The LSTM network will be trained to evaluate the storage-discharge relationship by comparing two sets of non-linear hydrological variables simulated by the semi-distributed Hydrological Simulation Program—FORTRAN (HSPF): the relationship between simulated discharge and input hydrological variables at selected HUC-8 watersheds, including air temperature, cloud cover, dew point, potential evapotranspiration, precipitation, solar radiation, wind speed, and total water storage; and the dynamics between simulated discharge and the same input variables without total water storage. The results will lay the foundation for assessing the accuracy of downscaled storage-discharge dynamics by applying similar methods at smaller HUC-12 watersheds. Furthermore, the results will make it possible to evaluate whether downscaling storage-discharge dynamics to the HUC-12 scale improves discharge prediction, by comparing results from the HUC-8 and HUC-12 watersheds.
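The planned comparison can be sketched as two otherwise identical LSTMs that differ only in whether total water storage is included as an input (variable names are assumed for illustration; the HSPF outputs themselves are not reproduced here):

    import torch
    import torch.nn as nn

    FORCINGS = ["air_temp", "cloud_cover", "dew_point", "pet",
                "precip", "solar_rad", "wind_speed"]  # assumed feature order

    class DischargeLSTM(nn.Module):
        def __init__(self, n_features, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)  # daily discharge Q

        def forward(self, x):  # x: (batch, days, n_features)
            out, _ = self.lstm(x)
            return self.head(out[:, -1])

    with_storage = DischargeLSTM(len(FORCINGS) + 1)  # forcings + total water storage
    without_storage = DischargeLSTM(len(FORCINGS))   # forcings only
    # Training both on the same HSPF-simulated discharge and comparing their
    # scores isolates how much the storage term improves the learned relation.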


2021, Vol 25 (10), pp. 5517-5534
Author(s): Thomas Lees, Marcus Buechel, Bailey Anderson, Louise Slater, Steven Reece, et al.

Abstract. Long short-term memory (LSTM) models are recurrent neural networks from the field of deep learning (DL) which have shown promise for time series modelling, especially when data are abundant. Previous studies have demonstrated the applicability of LSTM-based models for rainfall–runoff modelling; however, LSTMs have not been tested on catchments in Great Britain (GB). Moreover, opportunities exist to use spatial and seasonal patterns in model performance to improve our understanding of hydrological processes and to examine the advantages and disadvantages of LSTM-based models for hydrological simulation. By training two LSTM architectures across a large sample of 669 catchments in GB, we demonstrate that the LSTM and the Entity Aware LSTM (EA LSTM) models simulate discharge with median Nash–Sutcliffe efficiency (NSE) scores of 0.88 and 0.86 respectively. We find that the LSTM-based models outperform a suite of benchmark conceptual models, suggesting an opportunity to use additional data to refine conceptual models. In summary, the LSTM-based models show the largest performance improvements in the north-east of Scotland and in the south-east of England. The south-east of England remained difficult to model, however, in part due to the inability of the LSTMs configured in this study to learn groundwater processes, human abstractions and complex percolation properties from the hydro-meteorological variables typically employed for hydrological modelling.
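For reference, the Nash–Sutcliffe efficiency used to score these models compares model error against a mean-flow baseline (a standard formula, sketched here in NumPy; this is not code from the study):

    import numpy as np

    def nse(sim: np.ndarray, obs: np.ndarray) -> float:
        """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 is no better
        than always predicting the mean observed discharge."""
        return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

A median NSE of 0.88 across 669 catchments therefore means half the basins are simulated at least that well.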


2021, Vol 25 (4), pp. 2045-2062
Author(s): Martin Gauch, Frederik Kratzert, Daniel Klotz, Grey Nearing, Jimmy Lin, et al.

Abstract. Long Short-Term Memory (LSTM) networks have been applied to daily discharge prediction with remarkable success. Many practical applications, however, require predictions at more granular timescales. For instance, accurate prediction of short but extreme flood peaks can make a lifesaving difference, yet such peaks may escape the coarse temporal resolution of daily predictions. Naively training an LSTM on hourly data, however, entails very long input sequences that make learning difficult and computationally expensive. In this study, we propose two multi-timescale LSTM (MTS-LSTM) architectures that jointly predict multiple timescales within one model, as they process long-past inputs at a different temporal resolution than more recent inputs. In a benchmark on 516 basins across the continental United States, these models achieved significantly higher Nash–Sutcliffe efficiency (NSE) values than the US National Water Model. Compared to naive prediction with distinct LSTMs per timescale, the multi-timescale architectures are computationally more efficient with no loss in accuracy. Beyond prediction quality, the multi-timescale LSTM can process different input variables at different timescales, which is especially relevant to operational applications where the lead time of meteorological forcings depends on their temporal resolution.
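The core idea can be illustrated with two chained LSTMs, where the final states of a daily-resolution LSTM initialise an hourly one (a simplified sketch of the concept only; the paper's MTS-LSTM handles per-timescale outputs and state transfer in more detail):

    import torch
    import torch.nn as nn

    class TwoTimescaleLSTM(nn.Module):
        def __init__(self, n_features, hidden=64):
            super().__init__()
            self.daily = nn.LSTM(n_features, hidden, batch_first=True)
            self.hourly = nn.LSTM(n_features, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x_daily, x_hourly):
            _, state = self.daily(x_daily)         # summarise the long past cheaply
            out, _ = self.hourly(x_hourly, state)  # recent window at full resolution
            return self.head(out[:, -1])           # next-step hourly discharge

    m = TwoTimescaleLSTM(n_features=5)
    y = m(torch.randn(4, 365, 5), torch.randn(4, 72, 5))  # 1 year daily + 3 days hourly
    print(y.shape)  # torch.Size([4, 1])

This keeps the hourly sequence short (72 steps here rather than thousands), which is what makes training tractable compared to feeding an LSTM a full year of hourly data.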

