Results of the IACS Debris-covered Glaciers Working Group melt model intercomparison

Author(s):  
Francesca Pellicciotti ◽  
Adria Fontrodona-Bach ◽  
David Rounce ◽  
Lindsey Nicholson

Many mountain ranges across the globe support abundant debris-covered glaciers, and the proportion of glacierised area covered by debris is expected to increase under continuing negative mass balance. Within the activities of a newly established IACS Working Group (WG) on debris-covered glaciers, we have been carrying out an intercomparison of melt models for debris-covered ice to identify the level of model complexity required to estimate sub-debris melt. This is a necessary first step towards advancing understanding of how debris impacts glacier response to climate at local, regional, and global scales, and towards accurately representing debris-covered glaciers in models of regional runoff and sea-level change projections.

We compare ice melt rates simulated by 15 models of differing complexity, forced at the point scale using data from nine automatic weather stations in distinct climatic regimes across the globe. We include energy-balance models with a variety of structural choices and model components, as well as a range of simplified approaches. Empirical models are run twice: with parameter values from the literature and after recalibration at the sites. We then calculate uncertainty bounds for all simulations by prescribing a range of plausible parameters and varying them in a Monte Carlo framework. We restrict the comparison to the melt season and exclude conditions that few current models are able to account for.

Model results vary considerably across sites. At some sites (e.g. in the Alps) most models perform consistently well, with energy-balance and empirical models performing similarly; at others (e.g. in New Zealand and the Caucasus) models diverge widely and overall performance is poorer. It is also evident that, with a few exceptions, most of the simpler, more empirical models perform poorly without recalibration. A few of the energy-balance models consistently give results different from the others, and we investigate structural differences, the impact of the temporal resolution of the calculations (hourly versus daily), and the calculation of turbulent fluxes in particular.

We provide a final assessment of model performance under different climate forcing and evaluate model strengths and limitations against independent validation data from the same sites. We also provide suggestions for future model improvements and identify missing model components and crucial knowledge gaps that require further attention from the debris-covered glacier community.
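As a rough illustration of the Monte Carlo step described above, the sketch below propagates plausible parameter ranges through a simple empirical sub-debris melt model. The degree-day formulation, parameter ranges, and forcing are illustrative assumptions, not models or values from the intercomparison:

```python
import numpy as np

rng = np.random.default_rng(42)

def sub_debris_melt(t_air, ddf, debris_factor):
    """Empirical melt (mm w.e.) from air temperature (degC): a
    degree-day factor scaled by a debris reduction factor (assumed form)."""
    return ddf * debris_factor * np.maximum(t_air, 0.0)

# Hourly air temperature for one melt-season day (placeholder data).
t_air = 5.0 + 4.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, 24))

# Plausible parameter ranges (assumed for illustration only).
n_draws = 1000
ddf = rng.uniform(3.0 / 24, 8.0 / 24, n_draws)    # mm w.e. degC-1 h-1
debris_factor = rng.uniform(0.2, 0.7, n_draws)    # melt reduction under debris

daily_melt = np.array([sub_debris_melt(t_air, d, f).sum()
                       for d, f in zip(ddf, debris_factor)])

# Uncertainty bounds from the Monte Carlo ensemble.
lo, med, hi = np.percentile(daily_melt, [5, 50, 95])
print(f"daily sub-debris melt: {med:.1f} mm w.e. (5-95%: {lo:.1f}-{hi:.1f})")
```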

2015 ◽  
Vol 8 (8) ◽  
pp. 2379-2398 ◽  
Author(s):  
I. Gouttevin ◽  
M. Lehning ◽  
T. Jonas ◽  
D. Gustafsson ◽  
M. Mölder

Abstract. A new, two-layer canopy module with thermal inertia as part of the detailed snow model SNOWPACK (version 3.2.1) is presented and evaluated. As a by-product of these new developments, an exhaustive description of the canopy module of the SNOWPACK model is provided, thereby filling a gap in the existing literature. In its current form, the two-layer canopy module is suited for evergreen needleleaf forest, with or without snow cover. It is designed to reproduce the difference in thermal response between leafy and woody canopy elements, and their impact on the energy balance of the underlying snowpack or ground surface. Given the number of processes resolved, the SNOWPACK model with its enhanced canopy module constitutes a sophisticated physics-based modeling chain spanning the continuum from atmosphere to soil through canopy and snow. Comparisons of modeled sub-canopy thermal radiation with stand-scale observations at an Alpine site (Alptal, Switzerland) demonstrate the improvements brought by the new canopy module. Both the thermal heat mass and the two-layer canopy formulation contribute to reducing the daily amplitude of the modeled canopy temperature signal, in agreement with observations. Particularly striking is the attenuation of the nighttime drop in canopy temperature, which was a key model bias. We specifically show that a single-layered canopy model cannot correctly reproduce this limited temperature drop. The impact of the new parameterizations on the modeled dynamics of the sub-canopy snowpack is analyzed. The new canopy module yields consistent results, but the frequent occurrence of mixed-precipitation events at Alptal prevents a conclusive assessment of model performance against snow data. The new model is also successfully tested, without specific tuning, against measured tree temperature and biomass heat-storage fluxes at the boreal site of Norunda (Sweden). This provides an independent assessment of its physical consistency and stresses the robustness and transferability of the chosen parameterizations. The SNOWPACK code, including the new canopy module, is available under the GNU General Public License (GPL) upon creation of an account at https://models.slf.ch/.
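To illustrate the role of thermal inertia in such a two-layer canopy, here is a minimal sketch of the underlying idea: each canopy layer carries a heat mass, so its temperature integrates the energy imbalance instead of responding instantaneously. All parameter values and the forcing below are assumptions for illustration and do not reproduce the SNOWPACK formulation:

```python
import numpy as np

# Illustrative parameters (assumed, not the SNOWPACK values).
C_LEAF, C_TRUNK = 2.0e4, 2.0e5   # heat mass per ground area (J m-2 K-1)
H_AIR = 20.0                     # turbulent exchange with air (W m-2 K-1)
K_LT = 5.0                       # leaf-trunk thermal coupling (W m-2 K-1)
DT = 600.0                       # time step (s)

def step(t_leaf, t_trunk, q_leaf, q_trunk, t_air):
    """One forward-Euler step: dT/dt = (absorbed radiation
    - exchange with air - inter-layer exchange) / heat mass."""
    exch = K_LT * (t_leaf - t_trunk)                            # leaf -> trunk
    dtl = (q_leaf - H_AIR * (t_leaf - t_air) - exch) / C_LEAF
    dtt = (q_trunk - H_AIR * (t_trunk - t_air) + exch) / C_TRUNK
    return t_leaf + DT * dtl, t_trunk + DT * dtt

# Two days of idealized forcing: sinusoidal shortwave, constant air T.
hours = np.arange(0.0, 48.0, DT / 3600.0)
q_sw = np.maximum(0.0, 400.0 * np.sin(2.0 * np.pi * (hours - 6.0) / 24.0))
t_air = 270.0

t_leaf = t_trunk = t_air
leaf_series, trunk_series = [], []
for q in q_sw:
    # Leaves absorb most of the radiation, the woody layer little.
    t_leaf, t_trunk = step(t_leaf, t_trunk, 0.9 * q, 0.1 * q, t_air)
    leaf_series.append(t_leaf)
    trunk_series.append(t_trunk)

# The woody layer's larger heat mass damps its diurnal amplitude,
# mirroring the attenuated nighttime canopy temperature drop.
print(f"diurnal range: leaf {max(leaf_series) - min(leaf_series):.1f} K, "
      f"trunk {max(trunk_series) - min(trunk_series):.1f} K")
```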


Water ◽  
2021 ◽  
Vol 13 (24) ◽  
pp. 3489
Author(s):  
Saeid Mehdizadeh ◽  
Babak Mohammadi ◽  
Quoc Bao Pham ◽  
Zheng Duan

Proper irrigation scheduling and agricultural water management require a precise estimate of crop water requirements. In practice, reference evapotranspiration (ETo) is first estimated and then used to calculate the evapotranspiration of each crop. In this study, two new coupled models were developed for estimating daily ETo. Two optimization algorithms, the shuffled frog-leaping algorithm (SFLA) and invasive weed optimization (IWO), were coupled with an adaptive neuro-fuzzy inference system (ANFIS) to develop two novel hybrid models (ANFIS-SFLA and ANFIS-IWO). Additionally, four empirical models of varying complexity, namely Hargreaves–Samani, Romanenko, Priestley–Taylor, and Valiantzas, were applied and compared with the hybrid models. The performance of all investigated models was evaluated against ETo estimates from the FAO-56 recommended method as a benchmark, using multiple statistical indicators: root-mean-square error (RMSE), relative RMSE (RRMSE), mean absolute error (MAE), coefficient of determination (R²), and Nash–Sutcliffe efficiency (NSE). All models were tested at two sites in Iran, Tabriz and Shiraz. The evaluation showed that the coupled models yielded better results than the classic ANFIS, with ANFIS-SFLA outperforming ANFIS-IWO. Among the empirical models, the Valiantzas model in its original and calibrated versions generally performed best. In terms of model complexity, performance clearly improved as the number of predictors increased. The most accurate daily ETo estimates for the study sites were achieved by the hybrid ANFIS-SFLA models using the full set of predictors, with RMSE within 0.15 mm day⁻¹, RRMSE within 4%, MAE within 0.11 mm day⁻¹, and both R² and NSE as high as 0.99 in the test phase at the two sites.
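For reference, the statistical indicators listed above can be computed as in the following sketch, where `obs` holds the benchmark FAO-56 ETo values and `sim` a model's estimates (the example numbers are synthetic, not data from the study):

```python
import numpy as np

def evaluate(obs, sim):
    """Standard skill scores for daily ETo estimates (mm/day)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    err = sim - obs
    rmse = np.sqrt(np.mean(err ** 2))
    rrmse = rmse / np.mean(obs)                  # relative RMSE
    mae = np.mean(np.abs(err))
    nse = 1.0 - np.sum(err ** 2) / np.sum((obs - obs.mean()) ** 2)
    r2 = np.corrcoef(obs, sim)[0, 1] ** 2        # squared Pearson correlation
    return {"RMSE": rmse, "RRMSE": rrmse, "MAE": mae, "R2": r2, "NSE": nse}

# Toy example with synthetic values (not values from the study).
obs = np.array([3.1, 4.2, 5.0, 4.4, 3.8])
sim = np.array([3.0, 4.4, 4.9, 4.6, 3.7])
print(evaluate(obs, sim))
```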


2021 ◽  
Author(s):  
Sarv Priya ◽  
Tanya Aggarwal ◽  
Caitlin Ward ◽  
Girish Bathla ◽  
Mathews Jacob ◽  
...  

Abstract. Side experiments are performed on radiomics models to improve their reproducibility. We measure the impact of myocardial masks, radiomic side experiments, and the data augmentation for information transfer (DAFIT) approach to differentiate patients with and without pulmonary hypertension (PH) using cardiac MRI (CMRI) derived radiomics. Feature extraction was performed from the left ventricle (LV) and right ventricle (RV) myocardial masks using CMRI in 82 patients (42 PH and 40 controls). Several side experiments were evaluated: original data without and with intraclass correlation (ICC) feature filtering, and the DAFIT approach without and with ICC feature filtering. Multiple machine learning and feature selection strategies were evaluated. The primary analysis included all PH patients, with a subgroup analysis restricted to PH patients with preserved left ventricular ejection fraction (LVEF ≥ 50%). For both the primary and subgroup analyses, the DAFIT approach without feature filtering was the highest performer (AUC 0.957–0.958). The ICC approaches performed poorly compared to the DAFIT approach. The performance of the combined LV and RV masks was superior to that of either mask alone. The top-performing model varied across approaches (AUC 0.862–0.958). In summary, the DAFIT approach with features from the combined LV and RV masks provides superior performance, whereas feature-filtering approaches perform poorly. Model performance varies with the feature selection and model combination.
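As a rough sketch of ICC feature filtering of the kind evaluated above: radiomic features whose intraclass correlation across repeated extractions falls below a threshold are discarded. The ICC(2,1) form below follows the standard Shrout-Fleiss two-way random-effects definition; the threshold, data layout, and feature names are assumptions for illustration:

```python
import numpy as np

def icc_2_1(x):
    """ICC(2,1) for an (n_subjects, k_raters) matrix via two-way ANOVA
    (Shrout & Fleiss two-way random effects, absolute agreement)."""
    x = np.asarray(x, float)
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * np.sum((x.mean(axis=1) - grand) ** 2)
    ss_cols = n * np.sum((x.mean(axis=0) - grand) ** 2)
    ss_err = np.sum((x - grand) ** 2) - ss_rows - ss_cols
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = ss_err / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

def filter_features(extractions, threshold=0.9):
    """Keep features whose ICC across repeated extractions meets the
    threshold. `extractions` maps feature name -> (n_patients, n_repeats)
    array of radiomic values."""
    return [name for name, vals in extractions.items()
            if icc_2_1(vals) >= threshold]

# Toy example: one reproducible and one unstable feature (synthetic values).
rng = np.random.default_rng(0)
base = rng.normal(size=(30, 1))
extractions = {
    "glcm_contrast": base + rng.normal(scale=0.05, size=(30, 2)),  # stable
    "firstorder_skew": rng.normal(size=(30, 2)),                   # noisy
}
print(filter_features(extractions, threshold=0.9))
```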


2001 ◽  
Vol 25 (1) ◽  
pp. 80-108 ◽  
Author(s):  
C. W. Dawson ◽  
R. L. Wilby

This review considers the application of artificial neural networks (ANNs) to rainfall-runoff modelling and flood forecasting. This is an emerging field of research, characterized by a wide variety of techniques, a diversity of geographical contexts, a general absence of intermodel comparisons, and inconsistent reporting of model skill. This article begins by outlining the basic principles of ANN modelling, common network architectures and training algorithms. The discussion then addresses related themes of the division and preprocessing of data for model calibration/validation; data standardization techniques; and methods of evaluating ANN model performance. A literature survey underlines the need for clear guidance in current modelling practice, as well as the comparison of ANN methods with more conventional statistical models. Accordingly, a template is proposed in order to assist the construction of future ANN rainfall-runoff models. Finally, it is suggested that research might focus on the extraction of hydrological ‘rules’ from ANN weights, and on the development of standard performance measures that penalize unnecessary model complexity.
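As a minimal illustration of the workflow the review surveys (data standardization, a calibration/validation split, and evaluation of an ANN rainfall-runoff model), the sketch below uses scikit-learn; the lag structure and the synthetic data are assumptions for illustration, not recommendations from the review:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Synthetic daily rainfall and a lagged-response runoff signal (toy data).
rain = rng.gamma(shape=0.5, scale=10.0, size=2000)
runoff = (0.4 * rain + 0.3 * np.roll(rain, 1) + 0.2 * np.roll(rain, 2)
          + rng.normal(scale=0.5, size=rain.size))

# Predictors: rainfall at lags 0-2 (drop rows contaminated by np.roll wrap).
X = np.column_stack([rain, np.roll(rain, 1), np.roll(rain, 2)])[3:]
y = runoff[3:]

# Chronological calibration/validation split, standardization, small MLP.
X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=0.3,
                                              shuffle=False)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                                   random_state=0))
model.fit(X_cal, y_cal)

# Validation skill (R^2 here); the review argues for standard measures
# that also penalize unnecessary model complexity.
print(f"validation R^2: {model.score(X_val, y_val):.3f}")
```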


2020 ◽  
Author(s):  
Niall Origo ◽  
Joanne Nightingale ◽  
Kim Calders ◽  
Mathias Disney

fAPAR is a radiometric quantity describing the fraction of photosynthetically active radiation (PAR) absorbed by a plant canopy. It is an important component of carbon cycle and energy balance models and has been named one of the 50 Global Climate Observing System (GCOS) essential climate variables (ECVs). Space agencies such as ESA and NASA produce satellite fAPAR products to meet the need for spatially explicit global data for environmental and climate change studies. Given the derived nature of satellite fAPAR products, it is essential to independently verify the results they produce. This requires validation sites (or networks of sites) whose measurements directly correspond to the measurand. Furthermore, to understand divergences between product and validation data, uncertainty information should be provided with all measurement results.

The canopy radiative transfer models used in satellite-derived fAPAR products make simplistic assumptions about the state of the plant canopy and the illumination conditions in order to retrieve an fAPAR estimate in a computationally feasible time. This contribution assesses the impact of the assumptions made by the Sentinel-2 SNAP-derived fAPAR product and accounts for them in a validation over a field site (Wytham Woods, UK) with concurrent fAPAR measurements. This is achieved using a 3D model of Wytham Woods to simulate the biases associated with specific assumption types, which are then used to convert the in situ measurements to the same quantity assumed by the satellite product. The measurement network which provides the fAPAR data is also traceable to SI through sensor calibrations and has associated uncertainty estimates. To our knowledge, these latter points have not been implemented in the biophysical product validation literature, which may explain some of the large discrepancies seen between validation and satellite-derived fAPAR data.

The ultimate aim of this work is to demonstrate a validation framework for derived biophysical variables such as fAPAR that properly considers both the quantity estimated by the satellite and that measured by the in situ sensors, whilst providing metrologically derived uncertainties on the in situ data. This will help to properly inform users of the quality of the data and to determine whether the GCOS requirements set for fAPAR are attainable, ultimately improving carbon cycle and energy balance estimates.
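For context, in situ fAPAR is commonly estimated from the PAR balance of the canopy; below is a minimal sketch of the standard four-flux form (the flux names and example values are placeholders, not the Wytham Woods network's processing):

```python
def fapar_four_flux(par_down_top, par_up_top, par_down_floor, par_up_floor):
    """Four-flux canopy fAPAR: PAR absorbed by the canopy is what enters
    at the top, minus what is reflected back upward, minus what reaches
    the floor, plus what the floor reflects back into the canopy."""
    absorbed = (par_down_top - par_up_top) - (par_down_floor - par_up_floor)
    return absorbed / par_down_top

# Placeholder fluxes (W m-2 or umol m-2 s-1; only the ratios matter).
print(f"fAPAR = {fapar_four_flux(1500.0, 60.0, 300.0, 15.0):.3f}")
```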


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Nicholas Garside ◽  
Hamed Zaribafzadeh ◽  
Ricardo Henao ◽  
Royce Chung ◽  
Daniel Buckland

Abstract. Methods used to predict surgical case time often rely on the current procedural terminology (CPT) code as a nominal variable to train machine-learned models; however, this limits the ability of the model to incorporate new procedures and adds complexity as the number of unique procedures increases. The relative value unit (RVU, a consensus-derived billing indicator) can serve as a proxy for procedure workload and could replace the CPT code as a primary feature in models that predict surgical case length. Using 11,696 surgical cases from Duke University Health System electronic health records data, we compared boosted decision tree models that predict individual case length, changing the method by which the model encoded procedure type: CPT, RVU, and CPT–RVU combined. The performance of each model was assessed by inference time, mean absolute error (MAE), and root-mean-square error (RMSE) against the actual case length on a test set. Models were compared to each other and to the existing manual scheduling method. RMSE for the RVU model (60.8 min) was similar to that of the CPT model (61.9 min), and both were lower than the scheduler's (90.2 min). 65.2% of the RVU model's predictions (compared to 43.2% from the current human scheduling method) fell within 20% of the actual case time. Using RVUs reduced model prediction time ninefold and reduced the number of training features from 485 to 44. Replacing pre-operative CPT codes with RVUs maintains model performance while decreasing overall model complexity in the prediction of surgical case length.
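A rough sketch of the encoding comparison described above: a CPT-coded model treats the procedure as a high-cardinality nominal feature, while an RVU-coded model replaces it with a single numeric workload proxy. The synthetic data and model settings are assumptions for illustration, not the study's pipeline:

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)

# Synthetic cases: a CPT-like procedure code, its RVU workload proxy,
# and an actual case length in minutes (all toy values).
n = 5000
cpt = rng.integers(0, 200, n)                 # 200 hypothetical CPT codes
rvu = 5.0 + 0.05 * cpt + rng.normal(scale=1.0, size=n)
minutes = 30.0 + 12.0 * rvu + rng.normal(scale=20.0, size=n)

variants = {
    # Nominal CPT code handled as a native categorical feature.
    "CPT": (cpt.reshape(-1, 1).astype(float), [0]),
    # Single numeric RVU feature replaces the code entirely.
    "RVU": (rvu.reshape(-1, 1), None),
}

for name, (X, cat_idx) in variants.items():
    X_tr, X_te, y_tr, y_te = train_test_split(X, minutes, random_state=0)
    model = HistGradientBoostingRegressor(categorical_features=cat_idx,
                                          random_state=0)
    model.fit(X_tr, y_tr)
    mae = mean_absolute_error(y_te, model.predict(X_te))
    print(f"{name}: MAE = {mae:.1f} min")
```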


2021 ◽  
Author(s):  
Ann E. Caldwell ◽  
Sarah A. Purcell ◽  
Bethany Gray ◽  
Hailey Smieja ◽  
Victoria A. Catenacci
