Assessing the benefit of snow data assimilation for runoff modeling in Alpine catchments

2016 ◽  
Vol 20 (9) ◽  
pp. 3895-3905 ◽  
Author(s):  
Nena Griessinger ◽  
Jan Seibert ◽  
Jan Magnusson ◽  
Tobias Jonas

Abstract. In Alpine catchments, snowmelt is often a major contribution to runoff. Therefore, modeling snow processes is important when concerned with flood or drought forecasting, reservoir operation and inland waterway management. In this study, we address the question of how sensitive hydrological models are to the representation of snow cover dynamics and whether the performance of a hydrological model can be enhanced by integrating data from a dedicated external snow monitoring system. As a framework for our tests we have used the hydrological model HBV (Hydrologiska Byråns Vattenbalansavdelning) in the version HBV-light, which has been applied in many hydrological studies and is also in use for operational purposes. While HBV originally follows a temperature-index approach with time-invariant calibrated degree-day factors to represent snowmelt, in this study the HBV model was modified to use snowmelt time series from an external and spatially distributed snow model as model input. The external snow model integrates three-dimensional sequential assimilation of snow monitoring data with a snowmelt model, which is also based on the temperature-index approach but uses a time-variant degree-day factor. The following three variations of this external snow model were applied: (a) the full model with assimilation of observational snow data from a dense monitoring network, (b) the same snow model but with data assimilation switched off and (c) a downgraded version of the same snow model representing snowmelt with a time-invariant degree-day factor. Model runs were conducted for 20 catchments at different elevations within Switzerland for 15 years. Our results show that at low and mid-elevations the performance of the runoff simulations did not vary considerably with the snow model version chosen. 
At higher elevations, however, the best performance in terms of simulated runoff was obtained when using the snowmelt time series from the snow model that utilized data assimilation. This was especially true for snow-rich years. These findings suggest that with increasing elevation and the correspondingly increased contribution of snowmelt to runoff, the accurate estimation of snow water equivalent (SWE) and snowmelt rates gains importance.
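The time-invariant degree-day scheme that HBV uses by default can be sketched in a few lines. This is an illustrative minimal version; the function name, default values and units are assumptions, not HBV-light's actual internals (where the time-variant variant would additionally let the degree-day factor vary through the season):

```python
def degree_day_melt(swe, t_air, ddf=3.0, t_thresh=0.0):
    """One time step of temperature-index snowmelt.

    swe      -- snow water equivalent [mm w.e.]
    t_air    -- air temperature [deg C]
    ddf      -- degree-day factor [mm w.e. per deg C per day]
    t_thresh -- melt threshold temperature [deg C]

    Returns (melt, remaining SWE).
    """
    potential_melt = ddf * max(t_air - t_thresh, 0.0)
    melt = min(potential_melt, swe)  # cannot melt more snow than is present
    return melt, swe - melt
```

For example, with 100 mm w.e. of snow and an air temperature of 5 °C, a degree-day factor of 3.0 yields 15 mm w.e. of melt for that day.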

2018 ◽  
Vol 12 (1) ◽  
pp. 247-270 ◽  
Author(s):  
Kristoffer Aalstad ◽  
Sebastian Westermann ◽  
Thomas Vikhamar Schuler ◽  
Julia Boike ◽  
Laurent Bertino

Abstract. With its high albedo, low thermal conductivity and large water storing capacity, snow strongly modulates the surface energy and water balance, which makes it a critical factor in mid- to high-latitude and mountain environments. However, estimating the snow water equivalent (SWE) is challenging in remote-sensing applications already at medium spatial resolutions of 1 km. We present an ensemble-based data assimilation framework that estimates the peak subgrid SWE distribution (SSD) at the 1 km scale by assimilating fractional snow-covered area (fSCA) satellite retrievals in a simple snow model forced by downscaled reanalysis data. The basic idea is to relate the timing of the snow cover depletion (accessible from satellite products) to the peak SSD. Peak subgrid SWE is assumed to be lognormally distributed, which can be translated to a modeled time series of fSCA through the snow model. Assimilation of satellite-derived fSCA facilitates the estimation of the peak SSD, while taking into account uncertainties in both the model and the assimilated data sets. As an extension to previous studies, our method makes use of the novel (to snow data assimilation) ensemble smoother with multiple data assimilation (ES-MDA) scheme combined with analytical Gaussian anamorphosis to assimilate time series of Moderate Resolution Imaging Spectroradiometer (MODIS) and Sentinel-2 fSCA retrievals. The scheme is applied to Arctic sites near Ny-Ålesund (79° N, Svalbard, Norway) where field measurements of fSCA and SWE distributions are available. The method is able to successfully recover accurate estimates of peak SSD on most of the occasions considered. Through the ES-MDA assimilation, the root-mean-square error (RMSE) for the fSCA, peak mean SWE and peak subgrid coefficient of variation is improved by around 75, 60 and 20 %, respectively, when compared to the prior, yielding RMSEs of 0.01, 0.09 m water equivalent (w.e.) and 0.13, respectively. 
The ES-MDA either outperforms or at least nearly matches the performance of other ensemble-based batch smoother schemes with regards to various evaluation metrics. Given the modularity of the method, it could prove valuable for a range of satellite-era hydrometeorological reanalyses.


2021 ◽  
Vol 10 (2) ◽  
pp. 297-312
Author(s):  
Johannes Aschauer ◽  
Christoph Marty

Abstract. Historic measurements are often temporally incomplete and may contain longer periods of missing data, whereas climatological analyses require continuous measurement records. This is also valid for historic manual snow depth (HS) measurement time series, for which even whole winters can be missing in a station record, and suitable methods have to be found to reconstruct the missing data. Daily in situ HS data from 126 nivo-meteorological stations in Switzerland in an altitudinal range of 230 to 2536 m above sea level are used to compare six different methods for reconstructing long gaps in manual HS time series by performing a “leave-one-winter-out” cross-validation in 21 winters at 33 evaluation stations. Synthetic gaps of one winter length are filled with bias-corrected data from the best-correlated neighboring station (BSC), inverse distance-weighted (IDW) spatial interpolation, a weighted normal ratio (WNR) method, elastic net (ENET) regression, random forest (RF) regression and a temperature index snow model (SM). Methods that use neighboring station data are tested in two station networks with different densities. The ENET, RF, SM and WNR methods are able to reconstruct missing data with a coefficient of determination (r2) above 0.8 regardless of the two station networks used. The median root mean square error (RMSE) in the filled winters is below 5 cm for all methods. The two annual climate indicators, average snow depth in a winter (HSavg) and maximum snow depth in a winter (HSmax), can be reproduced well by ENET, RF, SM and WNR, with r2 above 0.85 in both station networks. For the inter-station approaches, scores for the number of snow days with HS>1 cm (dHS1) are clearly weaker and, except for BSC, positively biased with RMSE of 18–33 d. SM reveals the best performance with r2 of 0.93 and RMSE of 15 d for dHS1.
Snow depth seems to be a relatively good-natured parameter when it comes to gap filling of HS data with neighboring stations in a climatological use case. However, when station networks get sparse and if the focus is set on dHS1, temperature index snow models can serve as a suitable alternative to classic inter-station gap filling approaches.
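Of the inter-station methods compared, inverse distance weighting is the simplest to sketch. Station coordinates, the Euclidean distance metric and the power parameter below are illustrative assumptions, not the study's exact configuration:

```python
import math

def idw_fill(target_xy, neighbors, power=2.0):
    """Estimate snow depth at `target_xy` by inverse distance weighting.

    neighbors -- list of ((x, y), hs) tuples: station location and its
                 observed snow depth for the day being filled.
    power     -- IDW power parameter (2 is a common default).
    """
    num = den = 0.0
    for (x, y), hs in neighbors:
        d = math.hypot(x - target_xy[0], y - target_xy[1])
        if d == 0.0:
            return hs  # collocated station: use its value directly
        w = d ** -power
        num += w * hs
        den += w
    return num / den
```

Two equidistant neighbors simply average: `idw_fill((0, 0), [((1, 0), 10.0), ((0, 1), 20.0)])` returns 15.0.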


2009 ◽  
Vol 13 (5) ◽  
pp. 639-649 ◽  
Author(s):  
C. Corbari ◽  
G. Ravazzani ◽  
J. Martinelli ◽  
M. Mancini

Abstract. The most widely used method for snow dynamics simulation relies on the temperature index approach, which makes snowmelt and accumulation processes depend on air-temperature-related parameters. A recently adopted approach to calibrate these parameters is to compare model results with snow coverage retrieved from satellite images. In areas with complex topography and heterogeneous land cover, snow coverage may be affected by the presence of shaded areas or dense forest, which can cause pixels to be falsely classified as uncovered. These circumstances may, in turn, influence the calibration of model parameters. In this paper we propose a simple procedure to correct snow coverage retrieved from satellite images. We show that using raw snow coverage to calibrate the snow model may lead to parameter values outside the range accepted in the literature, so that the timing of snow dynamics measured at two ground stations is not correctly simulated. Moreover, when the snow model is implemented within a continuous distributed hydrological model, we show that calibration against corrected snow coverage reduces the error in the simulation of river flow in an Alpine catchment.
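A minimal illustration of scoring a modeled snow map against a satellite map while excluding problematic pixels: the masking criterion and all names are assumptions for the sketch, not the authors' actual correction procedure:

```python
def snow_cover_agreement(modeled, observed, mask):
    """Fraction of valid pixels where the modeled and satellite-derived
    snow-presence flags (1 = snow, 0 = no snow) agree. Pixels flagged in
    `mask` (e.g. shaded or densely forested areas, which the satellite may
    falsely classify as snow-free) are excluded from the score."""
    hits = total = 0
    for m, o, bad in zip(modeled, observed, mask):
        if bad:
            continue  # skip pixels the correction procedure has flagged
        total += 1
        hits += (m == o)
    return hits / total
```

Masking a falsely snow-free pixel changes the score: with `modeled=[1, 1, 0, 0]` and `observed=[1, 0, 0, 0]`, agreement is 0.75 with no mask but 1.0 once the mismatching shaded pixel is excluded, which is exactly how an uncorrected map can pull calibrated parameters away from plausible values.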


2020 ◽  
Vol 72 (1) ◽  
Author(s):  
Masayuki Kano ◽  
Shin’ichi Miyazaki ◽  
Yoichi Ishikawa ◽  
Kazuro Hirahara

Abstract Postseismic Global Navigation Satellite System (GNSS) time series following megathrust earthquakes can be interpreted as a result of afterslip on the plate interface, especially in their early phase. Afterslip is a process that releases stress accumulated by adjacent coseismic slip and can be considered a recovery process for future events during earthquake cycles. The spatio-temporal evolution of afterslip often triggers subsequent earthquakes through stress perturbation. Therefore, it is important to quantitatively capture the spatio-temporal evolution of afterslip and the related postseismic crustal deformation and to predict their future evolution with a physics-based simulation. We developed an adjoint data assimilation method, which directly assimilates GNSS time series into a physics-based model to optimize the frictional parameters that control the slip behavior on the fault. The developed method was validated with synthetic data. Through the optimization of frictional parameters, the spatial distributions of afterslip could roughly (but not in detail) be reproduced when observation noise was included. The optimization of frictional parameters not only reproduced the postseismic displacements used for the assimilation, but also improved the prediction skill for the following time series. We then applied the developed method to the observed GNSS time series for the first 15 days following the 2003 Tokachi-oki earthquake. The frictional parameters in the afterslip regions were optimized to A–B ~ O(10 kPa), A ~ O(100 kPa), and L ~ O(10 mm). Large afterslip is inferred on the shallower side of the coseismic slip area. The optimized frictional parameters quantitatively predicted the postseismic GNSS time series for the following 15 days. These characteristics can also be detected if the simulation variables are simultaneously optimized.
The developed data assimilation method, which can be directly applied to GNSS time series following megathrust earthquakes, is an effective quantitative evaluation method for assessing risks of subsequent earthquakes and for monitoring the recovery process of megathrust earthquakes.


2016 ◽  
Vol 9 (12) ◽  
pp. 4491-4519 ◽  
Author(s):  
Aurélien Gallice ◽  
Mathias Bavay ◽  
Tristan Brauchli ◽  
Francesco Comola ◽  
Michael Lehning ◽  
...  

Abstract. Climate change is expected to strongly impact the hydrological and thermal regimes of Alpine rivers within the coming decades. In this context, the development of hydrological models accounting for the specific dynamics of Alpine catchments appears to be a promising approach to reducing the uncertainty in projections of future mountain hydrology. This paper describes the improvements brought to StreamFlow, an existing model for hydrological and stream temperature prediction built as an external extension to the physically based snow model Alpine3D. StreamFlow's source code has been entirely rewritten, taking advantage of object-oriented programming to significantly improve its structure and ease the implementation of future developments. The source code is now publicly available online, along with complete documentation. A special emphasis has been put on modularity during the re-implementation of StreamFlow, so that many model aspects can be represented using different alternatives. For example, several options are now available to model the advection of water within the stream. This allows for an easy and fast comparison between different approaches and helps in defining more reliable uncertainty estimates of the model forecasts. In particular, a case study in a Swiss Alpine catchment reveals that the stream temperature predictions are particularly sensitive to the approach used to model the temperature of subsurface flow, a fact which has been poorly reported in the literature to date. Based on the case study, StreamFlow is shown to reproduce hourly mean discharge with a Nash–Sutcliffe efficiency (NSE) of 0.82 and hourly mean temperature with an NSE of 0.78.
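The Nash–Sutcliffe efficiency used to score these predictions is a standard skill metric and straightforward to compute:

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / sum of squares around the
    observed mean. NSE = 1 is a perfect fit; NSE = 0 means the simulation
    is no better a predictor than the mean of the observations."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_mean = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / ss_mean
```

A simulation that always outputs the observed mean scores exactly 0, which is why NSE values of 0.78 to 0.82 indicate substantial explanatory skill.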


2013 ◽  
Vol 13 (3) ◽  
pp. 583-596 ◽  
Author(s):  
M. Coustau ◽  
S. Ricci ◽  
V. Borrell-Estupina ◽  
C. Bouvier ◽  
O. Thual

Abstract. Mediterranean catchments in southern France are threatened by potentially devastating flash floods which are difficult to anticipate. In order to improve the skill of rainfall-runoff models in predicting such flash floods, hydrologists use data assimilation techniques to provide real-time updates of the model using observational data. This approach seeks to reduce the uncertainties present in different components of the hydrological model (forcing, parameters or state variables) in order to minimize the error in simulated discharges. This article presents a data assimilation procedure, the best linear unbiased estimator (BLUE), used with the goal of improving the peak discharge predictions generated by an event-based hydrological model, the Soil Conservation Service lag and route (SCS-LR) model. For a given prediction date, selected model inputs are corrected by assimilating discharge data observed at the basin outlet. This study is conducted on the Lez Mediterranean basin in southern France. The key objectives of this article are (i) to select the parameter(s) which allow for the most efficient and reliable correction of the simulated discharges, (ii) to demonstrate the impact of the correction of the initial condition upon simulated discharges, and (iii) to identify and understand conditions in which this technique fails to improve the forecast skill. The correction of the initial moisture deficit of the soil reservoir proves to be the most efficient control parameter for adjusting the peak discharge. Using data assimilation, this correction leads to an average improvement of 12% in the forecast flood peak magnitude in 75% of cases. The investigation of the other 25% of cases points out a number of precautions for the appropriate use of this data assimilation procedure.
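The BLUE analysis step itself is compact. In matrix form, with background state xb, background-error covariance B, observation operator H and observation-error covariance R, the gain is K = B Hᵀ (H B Hᵀ + R)⁻¹ and the analysis is xa = xb + K (y − H xb). The sketch below is generic, not the authors' SCS-LR-specific implementation:

```python
import numpy as np

def blue_update(xb, B, y, H, R):
    """Best linear unbiased estimator analysis step.

    xb -- background (prior) state, shape (n,)
    B  -- background-error covariance, shape (n, n)
    y  -- observations, shape (m,)
    H  -- linear observation operator, shape (m, n)
    R  -- observation-error covariance, shape (m, m)
    """
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)  # Kalman-type gain
    return xb + K @ (y - H @ xb)  # correct background toward observations
```

In the scalar case with equal background and observation error variances, the analysis lands halfway between background and observation, which matches the intuition that the two sources of information are weighted by their (inverse) uncertainties.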


2014 ◽  
Vol 18 (1) ◽  
pp. 353-365 ◽  
Author(s):  
U. Haberlandt ◽  
I. Radtke

Abstract. Derived flood frequency analysis allows the estimation of design floods with hydrological modeling for poorly observed basins, considering change and taking into account flood protection measures. There are several possible choices regarding precipitation input, discharge output and consequently the calibration of the model. The objective of this study is to compare different calibration strategies for a hydrological model considering various types of rainfall input and runoff output data sets, and to propose the most suitable approach. Event-based and continuous observed hourly rainfall data, as well as disaggregated daily rainfall and stochastically generated hourly rainfall data, are used as input for the model. As output, short hourly and longer daily continuous flow time series as well as probability distributions of annual maximum peak flow series are employed. The performance of the strategies is evaluated using the different model parameter sets obtained for continuous simulation of discharge in an independent validation period and by comparing the model-derived flood frequency distributions with the observed one. The investigations are carried out for three mesoscale catchments in northern Germany with the hydrological model HEC-HMS (Hydrologic Engineering Center's Hydrologic Modeling System). The results show that (i) the same type of precipitation input data should be used for calibration and application of the hydrological model, (ii) a model calibrated using a small sample of extreme values works quite well for the simulation of continuous time series of moderate length but not vice versa, and (iii) the best performance with small uncertainty is obtained when stochastic precipitation data and the observed probability distribution of peak flows are used for model calibration. This outcome suggests calibrating a hydrological model directly on probability distributions of observed peak flows, using stochastic rainfall as input, if its purpose is the application for derived flood frequency analysis.
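As an illustration of the basic flood-frequency building block involved, a Gumbel (EV1) distribution can be fitted to annual maximum peak flows by the method of moments and queried for a design flood. This is a generic sketch, not the HEC-HMS calibration procedure used in the study:

```python
import math
import statistics

EULER_GAMMA = 0.5772156649  # Euler-Mascheroni constant

def gumbel_design_flood(annual_max, return_period):
    """Fit a Gumbel (EV1) distribution to annual maximum peak flows by the
    method of moments and return the design flood for the given return
    period in years."""
    mean = statistics.mean(annual_max)
    std = statistics.stdev(annual_max)
    beta = math.sqrt(6.0) * std / math.pi  # scale parameter
    mu = mean - EULER_GAMMA * beta         # location parameter
    p_exceed = 1.0 / return_period
    # Gumbel quantile for non-exceedance probability 1 - p_exceed
    return mu - beta * math.log(-math.log(1.0 - p_exceed))
```

Design floods grow with the return period, and the 100-year estimate typically exceeds the largest value in a short observed series, which is precisely why the study's stochastic-rainfall strategy is attractive: it supplies long synthetic records for the tail.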

