How sensitive are rainfall interception models to the canopy parameters of semi-arid forests?

Author(s):  
Marinos Eliades ◽  
Adriana Bruggeman ◽  
Hakan Djuma ◽  
Maciek W. Lubczynski

Quantifying rainfall interception can be a difficult task because canopy storage has high spatial and temporal variability. The aim of this study is to examine the sensitivity of three commonly used rainfall interception models (Rutter, Gash and Liu) to the canopy storage capacity (S) and the free throughfall coefficient (p). The research was carried out in a semi-arid Pinus brutia forest in Cyprus. One meteorological station and 15 manual throughfall gauges were used to measure throughfall and to compute rainfall interception for the period between January 2008 and July 2016. Additionally, one automatic and 28 manual throughfall gauges were installed in July 2016. We ran the models for different sets of canopy parameter values and evaluated their performance with the Nash-Sutcliffe Efficiency (NSE) and the bias for the calibration period (July 2016 - December 2019). We validated the models for the period between January 2008 and July 2016. During the calibration period, the models were tested at different temporal resolutions (hourly and daily). Total rainfall and rainfall interception during the calibration period were 1272 and 264 mm, respectively. For the simplified Rutter model at the hourly interval, the NSE decreased as the free throughfall coefficient increased. The bias of this model was near zero for a canopy storage capacity between 2 and 2.5 mm and a free throughfall coefficient between 0.4 and 0.7. The Rutter model was less sensitive to changes in the canopy parameters than the other two models. The bias of the daily Gash and Liu models was more sensitive to the free throughfall coefficient than to the canopy storage capacity; the bias of these models was near zero for free throughfall coefficients over 0.7. The daily Gash and Liu models showed high NSE values (0.93 – 0.96) for a range of different canopy parameter values (S: 0.5 – 4.0 mm, p: 0 – 0.9). Zero bias was achieved for a canopy storage capacity of 2 mm and above and a free throughfall coefficient between 0 and 0.7. Total rainfall and rainfall interception during the validation period were 3488 and 1039 mm, respectively. The Gash model performed better than the Liu model when the optimal parameter set (highest NSE, zero bias) was used: interception computed with the Gash model was 987 mm, compared to 829 mm with the Liu model. This study showed that a range of canopy parameter values can achieve high performance for rainfall interception models.
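The grid-search setup described above can be sketched in a few lines. The following Python sketch is a hypothetical illustration, not the authors' code: `gash_like_interception` is a heavily simplified event-based, Gash-type estimate (it ignores evaporation during canopy wet-up and trunk storage), the `E_over_R` ratio and synthetic storms are assumptions, and the NSE and bias are computed as in the study.

```python
import numpy as np

def gash_like_interception(P, S, p, E_over_R=0.1):
    """Very simplified event-based interception (mm).

    P : array of per-event gross rainfall (mm)
    S : canopy storage capacity (mm)
    p : free throughfall coefficient (-)
    E_over_R : assumed mean wet-canopy evaporation / rainfall rate ratio
    """
    P = np.asarray(P, dtype=float)
    P_sat = S / (1.0 - p)                # rainfall needed to saturate the canopy
    small = (1.0 - p) * P                # events too small to saturate the canopy
    large = S + E_over_R * (P - P_sat)   # saturated canopy: storage + wet evaporation
    return np.where(P < P_sat, small, large)

def nse(obs, sim):
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def bias(obs, sim):
    return float(np.mean(np.asarray(sim) - np.asarray(obs)))

# Grid search over canopy parameters, mirroring the sensitivity analysis.
rng = np.random.default_rng(0)
P_events = rng.gamma(shape=1.2, scale=8.0, size=200)     # synthetic storms (mm)
I_obs = gash_like_interception(P_events, S=2.0, p=0.6)   # stand-in "observations"

for S in (0.5, 1.0, 2.0, 4.0):
    for p in (0.0, 0.4, 0.7, 0.9):
        I_sim = gash_like_interception(P_events, S, p)
        print(f"S={S:3.1f} p={p:3.1f}  NSE={nse(I_obs, I_sim):7.3f}  "
              f"bias={bias(I_obs, I_sim):7.3f} mm")
```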

1991 ◽  
Vol 22 (1) ◽  
pp. 15-36 ◽  
Author(s):  
Joakim Harlin

A process-oriented calibration scheme (POC), developed for the HBV hydrological model, is presented. Twelve parameters were calibrated in two steps. First, initial parameter estimates were made from recession analysis of observed runoff. Second, the parameters were calibrated individually in an iteration loop, starting with the snow routine, moving to the soil routine, and finishing with the runoff-response function. This was done by minimizing different objective functions for different parameters, and only over subperiods where the parameters were active. Approximately three hundred and fifty objective function evaluations were needed to find the optimal parameter set, which amounted to a computing time of about 17 hours on a PC with a 386 processor for a ten-year calibration period. Experiments were also performed with fine tuning as well as direct search of the response surface, where the parameters were allowed to change simultaneously. A calibration period length of between two and six years was found sufficient to find optimal parameters in the test basins. The POC scheme yielded model performance as good as that of a manual calibration.
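The routine-by-routine idea can be illustrated generically. In the sketch below, the model and the per-parameter objectives are stand-ins invented for illustration, not the HBV routines or the POC scheme itself; only the structure (one objective per parameter, tuned one at a time inside an outer iteration loop) mirrors the approach described above.

```python
import numpy as np

def stepwise_calibration(simulate, objectives, params, bounds, n_outer=5, n_grid=21):
    """Generic process-oriented calibration: tune one parameter at a time.

    simulate   : f(params dict) -> simulated series
    objectives : {name: f(sim) -> scalar to minimize}, one per parameter
    params     : initial parameter dict (e.g. from recession analysis)
    bounds     : {name: (lo, hi)}
    """
    for _ in range(n_outer):                           # outer iteration loop
        for name, objective in objectives.items():     # one routine at a time
            lo, hi = bounds[name]
            best_val, best_score = params[name], objective(simulate(params))
            for trial in np.linspace(lo, hi, n_grid):  # simple 1-D line search
                score = objective(simulate({**params, name: trial}))
                if score < best_score:
                    best_val, best_score = trial, score
            params[name] = best_val
    return params

# Toy usage: fit y = a*x + b, tuning each parameter on its own error measure.
x = np.linspace(0, 1, 50)
y_obs = 2.0 * x + 1.0
sim = lambda p: p["a"] * x + p["b"]
obj = {"a": lambda s: np.mean((np.diff(s) - np.diff(y_obs)) ** 2),  # shape error
       "b": lambda s: (s.mean() - y_obs.mean()) ** 2}               # volume error
print(stepwise_calibration(sim, obj, {"a": 0.5, "b": 0.0},
                           {"a": (0, 5), "b": (-2, 2)}))
```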


2021 ◽  
Vol 11 (15) ◽  
pp. 6955 ◽
Author(s):  
Andrzej Rysak ◽  
Magdalena Gregorczyk

This study investigates the use of the differential transform method (DTM) for integrating the Rössler system of fractional order. Preliminary studies of the integer-order Rössler system, with reference to other well-established integration methods, made it possible to assess the quality of the method and to determine the optimal parameter values to use when integrating a system with different dynamic characteristics. Bifurcation diagrams obtained for the fractional Rössler system show that, compared to RK4-based integration, the DTM results are more resistant to changes in the fractionality of the system.
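For reference, the integer-order Rössler system that serves as the baseline can be integrated with a classical RK4 scheme. The sketch below is a generic illustration; the parameter values a = b = 0.2, c = 5.7 are the commonly cited chaotic regime, assumed here rather than taken from the study.

```python
import numpy as np

def rossler(state, a=0.2, b=0.2, c=5.7):
    """Integer-order Rössler vector field."""
    x, y, z = state
    return np.array([-y - z, x + a * y, b + z * (x - c)])

def rk4_step(f, state, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, n_steps = 0.01, 100_000
state = np.array([1.0, 1.0, 0.0])
trajectory = np.empty((n_steps, 3))
for i in range(n_steps):
    state = rk4_step(rossler, state, dt)
    trajectory[i] = state
print(trajectory[-1])   # a point on the chaotic attractor after transients
```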


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Ryan B. Patterson-Cross ◽  
Ariel J. Levine ◽  
Vilas Menon

Abstract
Background: Generating and analysing single-cell data has become a widespread approach to examining tissue heterogeneity, and numerous algorithms exist for clustering these datasets to identify putative cell types with shared transcriptomic signatures. However, many of these clustering workflows rely on user-tuned parameter values, tailored to each dataset, to identify a set of biologically relevant clusters. Whereas users often develop their own intuition as to the optimal range of parameters for clustering on each dataset, the lack of systematic approaches to identify this range can be daunting to new users of any given workflow. In addition, an optimal parameter set does not guarantee that all clusters are equally well resolved, given the heterogeneity in transcriptomic signatures in most biological systems.
Results: Here, we illustrate a subsampling-based approach (chooseR) that simultaneously guides parameter selection and characterizes cluster robustness. Through bootstrapped iterative clustering across a range of parameters, chooseR was used to select parameter values for two distinct clustering workflows (Seurat and scVI). In each case, chooseR identified parameters that produced biologically relevant clusters from both well-characterized (human PBMC) and complex (mouse spinal cord) datasets. Moreover, it provided a simple “robustness score” for each of these clusters, facilitating the assessment of cluster quality.
Conclusion: chooseR is a simple, conceptually understandable tool that can be used flexibly across clustering algorithms, workflows, and datasets to guide clustering parameter selection and characterize cluster robustness.
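The bootstrapped-clustering idea can be sketched generically: cluster repeated random subsamples, accumulate how often pairs of points co-cluster, and summarize each final cluster by its co-clustering frequencies. The sketch below uses scikit-learn's KMeans on synthetic blobs as a stand-in for the Seurat and scVI workflows; it illustrates the approach, not the chooseR implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

def cocluster_robustness(X, k, n_boot=30, frac=0.8, seed=0):
    """Bootstrap co-clustering frequencies, in the spirit of chooseR."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    co = np.zeros((n, n))     # times each pair landed in the same cluster
    seen = np.zeros((n, n))   # times each pair was subsampled together
    for _ in range(n_boot):
        idx = rng.choice(n, size=int(frac * n), replace=False)
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X[idx])
        same = labels[:, None] == labels[None, :]
        co[np.ix_(idx, idx)] += same
        seen[np.ix_(idx, idx)] += 1
    return np.divide(co, seen, out=np.zeros_like(co), where=seen > 0)

X, _ = make_blobs(n_samples=300, centers=4, random_state=1)
freq = cocluster_robustness(X, k=4)
final = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
for c in range(4):
    members = np.where(final == c)[0]
    block = freq[np.ix_(members, members)]
    print(f"cluster {c}: median co-clustering frequency = {np.median(block):.2f}")
```

A cluster whose members co-cluster in nearly every bootstrap earns a frequency near 1, giving a simple per-cluster robustness score analogous to the one described above.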


2018 ◽  
Vol 19 (11) ◽  
pp. 1835-1852 ◽  
Author(s):  
Grey S. Nearing ◽  
Benjamin L. Ruddell ◽  
Martyn P. Clark ◽  
Bart Nijssen ◽  
Christa Peters-Lidard

Abstract. We propose a conceptual and theoretical foundation for information-based model benchmarking and process diagnostics that provides diagnostic insight into model performance and model realism. We benchmark against a bounded estimate of the information contained in model inputs to obtain a bounded estimate of the information lost due to model error, and we perform process-level diagnostics by taking differences between modeled and observed transfer entropy networks. We use this methodology to reanalyze the recent Protocol for the Analysis of Land Surface Models (PALS) Land Surface Model Benchmarking Evaluation Project (PLUMBER) land model intercomparison project, which includes the following models: CABLE, CH-TESSEL, COLA-SSiB, ISBA-SURFEX, JULES, Mosaic, Noah, and ORCHIDEE. We report that these models (i) use only roughly half of the information available from meteorological inputs about observed surface energy fluxes, (ii) do not use all of the information from meteorological inputs about long-term Budyko-type water balances, (iii) do not capture spatial heterogeneities in surface processes, and (iv) all suffer from similar patterns of process-level structural error. Because the PLUMBER intercomparison project did not report model parameter values, it is impossible to know whether the process-level error patterns are due to model structural error or to parameter error, although our proposed information-theoretic methodology could distinguish between these two issues if parameter values were reported. We conclude that there is room for significant improvement in the current generation of land models and their parameters. We also suggest two simple guidelines to make future community-wide model evaluation and intercomparison experiments more informative.
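The headline benchmark, what fraction of the information available in the forcings a model actually transfers to its output, can be illustrated with a plug-in (histogram) mutual information estimate. The sketch below is a toy illustration of the concept on synthetic data, not the PLUMBER analysis; the forcing/flux relationships are invented.

```python
import numpy as np

def mutual_information(x, y, bins=20):
    """Plug-in (histogram) estimate of mutual information in bits."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz])))

rng = np.random.default_rng(0)
forcing = rng.normal(size=5000)                                   # e.g. radiation
flux_obs = np.tanh(forcing) + 0.3 * rng.normal(size=5000)         # observed flux
flux_mod = 0.8 * np.tanh(forcing) + 0.5 * rng.normal(size=5000)   # modeled flux

available = mutual_information(forcing, flux_obs)  # info in forcing about obs
used = mutual_information(flux_mod, flux_obs)      # info the model passes on
print(f"model uses {used / available:.0%} of the available information")
```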


2015 ◽  
Vol 7 (1) ◽  
pp. 16-28 ◽  
Author(s):  
Andrijana Todorovic ◽  
Jasna Plavsic

Assessment of climate change (CC) impacts on the hydrologic regime requires a calibrated rainfall-runoff model, defined by its structure and parameters. The parameter values depend, inter alia, on the calibration period. This paper investigates the influence of the calibration period on parameter values, model efficiency, and streamflow projections under CC. To this end, a conceptual HBV-light model of the Kolubara River catchment in Serbia is calibrated against flows observed within the 5 consecutive wettest, driest, warmest and coldest years, and in the complete record period. The optimised parameters reveal high sensitivity to the calibration period. Hydrologic projections under climate change are developed by employing (1) five hydrologic models with the outputs of one GCM–RCM chain (Global and Regional Climate Models) and (2) one hydrologic model with five GCM–RCM outputs. The sign and magnitude of change in the projected variables, compared to the corresponding values simulated over the baseline period, vary with the hydrologic model used. This variability is comparable in magnitude to the variability stemming from the climate models. Models calibrated over periods with precipitation similar to the projected conditions may result in less uncertain projections, while a warmer climate is not expected to contribute to the uncertainty in flow projections. Simulations over prolonged dry periods are expected to be uncertain.
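Selecting the wettest, driest, warmest, and coldest 5-year calibration windows amounts to a rolling-window scan over annual series. The sketch below illustrates that selection step only; the annual precipitation and temperature series are synthetic stand-ins, not the Kolubara data.

```python
import numpy as np

def extreme_window(annual_values, width=5, mode="max"):
    """Return (start_index, mean) of the rolling window of `width` years
    with the largest (mode='max') or smallest (mode='min') mean."""
    means = np.convolve(annual_values, np.ones(width) / width, mode="valid")
    i = int(np.argmax(means) if mode == "max" else np.argmin(means))
    return i, means[i]

years = np.arange(1981, 2011)
rng = np.random.default_rng(3)
precip = rng.normal(800, 150, size=years.size)   # annual precipitation (mm)
temp = rng.normal(11.0, 0.8, size=years.size)    # annual mean temperature (°C)

for label, series, mode in [("wettest", precip, "max"), ("driest", precip, "min"),
                            ("warmest", temp, "max"), ("coldest", temp, "min")]:
    i, m = extreme_window(series, 5, mode)
    print(f"{label}: {years[i]}-{years[i] + 4}, mean = {m:.1f}")
```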


2020 ◽  
Author(s):  
Kyesam Jung ◽  
Simon B. Eickhoff ◽  
Oleksandr V. Popovych

Abstract. Dynamical modeling of resting-state brain dynamics essentially relies on the empirical neuroimaging data utilized for model derivation and validation. However, there is still no standardized data processing for magnetic resonance imaging pipelines or for the structural and functional connectomes involved in the models. In this study, we thus address how the parameters of diffusion-weighted data processing for structural connectivity (SC) can influence the validation results of whole-brain mathematical models, and we search for the optimal parameter settings. To this end, we simulate the functional connectivity by systems of coupled oscillators, where the underlying network is constructed from the empirical SC, and evaluate the performance of the models for varying parameters of data processing. We introduce a set of simulation conditions including a varying number of total streamlines of the whole-brain tractography (WBT) used for the extraction of SC, cortical parcellations based on functional and anatomical brain properties, and distinct model-fitting modalities. We observed that the graph-theoretical network properties of the structural connectome can be affected by varying tractography density and relate strongly to model performance. We explored the free parameters of the considered models and found the optimal parameter configurations, where the model dynamics closely replicates the empirical data. We also found that the optimal number of total WBT streamlines can vary for different brain atlases. Consequently, we suggest a way to improve model performance based on the network properties and the optimal parameter configurations from multiple WBT conditions. Furthermore, the population of subjects can be stratified into subgroups with divergent behaviors induced by the varying number of WBT streamlines, such that different recommendations can be made with respect to the data processing for individual subjects and brain parcellations.

Author summary. The human brain connectome at the macro level provides an anatomical constitution of inter-regional connections through the white matter in the brain. Understanding brain dynamics grounded in the structural architecture is one of the most studied and important topics actively debated in neuroimaging research. However, the ground truth for the adequate processing and reconstruction of the human brain connectome in vivo is absent, which is crucial for the evaluation of the results of data-driven as well as model-based approaches to brain investigation. In this study, we thus evaluate the effect of whole-brain tractography density on the structural brain architecture by varying the number of total axonal fiber streamlines. The obtained results are validated through dynamical modeling of resting-state brain dynamics. We found that the tractography density may strongly affect the graph-theoretical network properties of the structural connectome. The obtained results also show that a dense whole-brain tractography is not always the best condition for modeling; this depends on the brain parcellation selected for the calculation of the structural connectivity and the derivation of the model network. Our findings provide suggestions for optimal data processing in neuroimaging research and brain modeling.
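The model-validation step, simulating functional connectivity (FC) from coupled oscillators on an SC network and correlating it with empirical FC, can be sketched with a Kuramoto model. In the sketch below, the SC matrix, the coupling strength K, the frequency distribution, and the "empirical" FC are all random stand-ins for tractography-derived and measured data; this illustrates the fitting idea, not the study's pipeline.

```python
import numpy as np

def kuramoto_fc(SC, K=5.0, dt=0.01, n_steps=5000, seed=0):
    """Simulate Kuramoto phases coupled through SC and return a simulated
    'functional connectivity' matrix of pairwise phase-locking values."""
    rng = np.random.default_rng(seed)
    n = SC.shape[0]
    omega = rng.normal(1.0, 0.1, n)            # natural frequencies (rad/s)
    theta = rng.uniform(0, 2 * np.pi, n)
    phases = np.empty((n_steps, n))
    for t in range(n_steps):
        # dtheta_i/dt = omega_i + (K/n) * sum_j SC_ij * sin(theta_j - theta_i)
        coupling = (SC * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        theta = theta + dt * (omega + K / n * coupling)
        phases[t] = theta
    z = np.exp(1j * phases)                    # unit phasors per time step
    return np.abs((z[:, :, None] * np.conj(z[:, None, :])).mean(axis=0))

# Random symmetric SC stands in for a tractography-derived connectome.
n = 10
rng = np.random.default_rng(1)
SC = np.abs(rng.normal(size=(n, n)))
SC = (SC + SC.T) / 2
np.fill_diagonal(SC, 0)
FC_sim = kuramoto_fc(SC)
FC_emp = np.corrcoef(rng.normal(size=(n, 200)))   # stand-in empirical FC
iu = np.triu_indices(n, k=1)
print(f"model fit (FC correlation): {np.corrcoef(FC_sim[iu], FC_emp[iu])[0, 1]:.2f}")
```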


2008 ◽  
Vol 5 (3) ◽  
pp. 1641-1675 ◽  
Author(s):  
A. Bárdossy ◽  
S. K. Singh

Abstract. The estimation of hydrological model parameters is a challenging task. With the increasing capacity of computational power, several complex optimization algorithms have emerged, but none of these algorithms gives a unique, single best parameter vector. The parameters of hydrological models depend upon the input data. The quality of input data cannot be assured, as there may be measurement errors for both input and state variables. In this study, a methodology has been developed to find a set of robust parameter vectors for a hydrological model. To see the effect of observational error on the parameters, stochastically generated synthetic measurement errors were applied to the observed discharge and temperature data. With these modified data, the model was calibrated and the effect of measurement errors on the parameters was analysed. It was found that the measurement errors have a significant effect on the best performing parameter vector: the erroneous data led to very different optimal parameter vectors. To overcome this problem and to find a set of robust parameter vectors, a geometrical approach based on half-space depth was used. For each of N randomly generated parameter vectors, the depth was calculated with respect to the set with the best model performance (the Nash-Sutcliffe efficiency was used in this study). Based on the depth of the parameter vectors, one can find a set of robust parameter vectors. The results show that the parameters chosen according to this criterion have low sensitivity and perform well when transferred to a different time period. The method is demonstrated on the upper Neckar catchment in Germany, using the conceptual HBV model.
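Exact half-space (Tukey) depth is expensive beyond a few dimensions, so a common approximation projects the parameter cloud onto many random directions and takes the worst-case one-dimensional depth. The sketch below illustrates picking deep (central, hence robust) parameter vectors from a well-performing set; the model and its performance scores are synthetic stand-ins, and the random-projection approximation is a generic technique, not necessarily the authors' exact computation.

```python
import numpy as np

def approx_halfspace_depth(points, cloud, n_dirs=500, seed=0):
    """Approximate Tukey depth of each row of `points` w.r.t. `cloud`
    via random 1-D projections (depth = worst-case fraction on one side)."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_dirs, cloud.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    proj_cloud = cloud @ dirs.T            # (n_cloud, n_dirs)
    proj_pts = points @ dirs.T             # (n_points, n_dirs)
    below = (proj_cloud[None, :, :] <= proj_pts[:, None, :]).mean(axis=1)
    return np.minimum(below, 1.0 - below).min(axis=1)

# Toy behavioural set: parameter vectors whose performance exceeds a threshold.
rng = np.random.default_rng(42)
candidates = rng.uniform(0, 1, size=(2000, 4))           # 4 model parameters
scores = 1 - np.abs(candidates - 0.5).sum(axis=1)        # stand-in performance
behavioural = candidates[scores > np.quantile(scores, 0.9)]
depth = approx_halfspace_depth(behavioural, behavioural)
robust = behavioural[np.argsort(depth)[-10:]]            # deepest = most central
print(robust.round(2))
```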


2021 ◽  
Vol 14 (2) ◽  
pp. 905-921 ◽
Author(s):  
Shoma Yamanouchi ◽  
Camille Viatte ◽  
Kimberly Strong ◽  
Erik Lutsch ◽  
Dylan B. A. Jones ◽  
...  

Abstract. Ammonia (NH3) is a major source of nitrates in the atmosphere and of fine particulate matter. As such, there have been increasing efforts to measure the atmospheric abundance of NH3 and its spatial and temporal variability. In this study, long-term measurements of NH3 derived from multiscale datasets are examined. These NH3 datasets comprise 16 years of total column measurements using Fourier transform infrared (FTIR) spectroscopy, 3 years of surface in situ measurements, and 10 years of total column measurements from the Infrared Atmospheric Sounding Interferometer (IASI). The datasets were used to quantify NH3 temporal variability over Toronto, Canada. The multiscale datasets were also compared to assess the representativeness of the FTIR measurements. All three time series showed positive trends in NH3 over Toronto: 3.34 ± 0.89 %/yr from 2002 to 2018 in the FTIR columns, 8.88 ± 5.08 %/yr from 2013 to 2017 in the surface in situ data, and 8.38 ± 1.54 %/yr from 2008 to 2018 in the IASI columns. To assess the representative scale of the FTIR NH3 columns, correlations between the datasets were examined. The best correlation between FTIR and IASI was obtained with coincidence criteria of ≤25 km and ≤20 min, with r=0.73 and a slope of 1.14 ± 0.06. Additionally, the FTIR column and in situ measurements were standardized and correlated. Comparison of 24 d averages and monthly averages resulted in correlation coefficients of r=0.72 and r=0.75, respectively, although correlation without averaging to reduce high-frequency variability led to a poorer correlation, with r=0.39. The GEOS-Chem model, run at 2° × 2.5° resolution, was compared to FTIR and IASI to assess model performance and to investigate the correlation of observational data and model output, both with local column measurements (FTIR) and with measurements on a regional scale (IASI). Comparison on a regional scale (a domain spanning 35 to 53° N and 93.75 to 63.75° W) resulted in r=0.57 and thus a coefficient of determination, which is indicative of the predictive capacity of the model, of r²=0.33, but comparing a single model grid point against the FTIR resulted in a poorer correlation, with r²=0.13, indicating that a finer spatial resolution is needed for modeling NH3.
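The coincidence-matching step, pairing ground-based FTIR columns with satellite pixels within distance and time criteria before correlating, can be sketched as below. The station coordinates, thresholds, and synthetic columns are illustrative assumptions, and the haversine-based pairing is a generic approach rather than the authors' exact procedure.

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

def coincident_pairs(t_ftir, col_ftir, t_sat, lat_sat, lon_sat, col_sat,
                     station=(43.66, -79.40), max_km=25.0, max_min=20.0):
    """Pair each FTIR measurement with satellite pixels meeting the
    distance/time criteria; returns matched column pairs."""
    dist_ok = haversine_km(station[0], station[1], lat_sat, lon_sat) <= max_km
    pairs = []
    for t, c in zip(t_ftir, col_ftir):
        near = dist_ok & (np.abs(t_sat - t) <= max_min)
        if near.any():
            pairs.append((c, col_sat[near].mean()))
    return np.array(pairs)

# Synthetic demo exercising the matching mechanics only.
rng = np.random.default_rng(0)
n = 2000
t_ftir = np.sort(rng.uniform(0, 1e5, 80))              # measurement times (min)
col_ftir = rng.lognormal(0.0, 0.3, 80)
t_sat = np.sort(rng.uniform(0, 1e5, n))
lat_sat = 43.66 + rng.normal(0, 0.3, n)
lon_sat = -79.40 + rng.normal(0, 0.3, n)
col_sat = rng.lognormal(0.0, 0.3, n)
pairs = coincident_pairs(t_ftir, col_ftir, t_sat, lat_sat, lon_sat, col_sat)
if len(pairs):
    r = np.corrcoef(pairs[:, 0], pairs[:, 1])[0, 1]
    print(f"{len(pairs)} coincidences, r = {r:.2f}")
```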


2012 ◽  
Vol 7 (2) ◽  
pp. 025202 ◽  
Author(s):  
Xiaolu Ling ◽  
Weidong Guo ◽  
Qianfei Zhao ◽  
Yanling Sun ◽  
Yuhao Zou ◽  
...  
