Simultaneous Retrieval of Soil, Leaf, and Canopy Parameters from Sentinel-3 OLCI and SLSTR Multi-spectral Top-of-Canopy Reflectances

Author(s):  
Simon Blessing ◽  
Ralf Giering

Multi- and hyper-spectral, multi-angular top-of-canopy reflectance data call for an efficient retrieval system which can improve the retrieval of standard canopy parameters (such as albedo, LAI, and fAPAR) and exploit the information to retrieve additional parameters (e.g. leaf pigments). Furthermore, consistency between the retrieved parameters and quantification of uncertainties are required for many applications. We present a retrieval system for canopy and sub-canopy parameters (OptiSAIL), which is based on a model comprising SAIL, PROSPECT-D (leaf properties), TARTES (snow properties), a soil model (BRDF, moisture), and a cloud contamination model. The inversion is gradient-based and uses code created by automatic differentiation. The full per-pixel covariance matrix of the retrieved parameters is computed. For this demonstration, single-observation data from the Sentinel-3 SY_2_SYN (synergy) product is used. The results are compared with the MODIS 4-day LAI/fPAR product and PhenoCam site photography. OptiSAIL produces generally consistent and credible results, at least matching the quality of the technically quite different MODIS product. For most of the sites, the PhenoCam images support the OptiSAIL retrievals. The system is computationally efficient, with a rate of 150 pixels per second (about 7 milliseconds per pixel) for a single thread on a current desktop CPU using observations in 26 bands. Not all of the model parameters are well determined in all situations. Significant correlations between the parameters are found, which can change sign and magnitude over time. OptiSAIL appears to meet the design goals, puts real-time processing with this kind of system into reach, seamlessly extends to hyper-spectral and multi-sensor retrievals, and promises to be a good platform for sensitivity studies. The incorporated cloud and snow detection adds to the robustness of the system.
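The gradient-based inversion with a per-pixel covariance estimate can be sketched on a toy problem. The two-parameter exponential forward model below is a deliberately simplified stand-in for SAIL/PROSPECT-D, and the hand-coded partial derivatives play the role of the derivative code that automatic differentiation would generate; all names and values are illustrative.

```python
import math

# Hypothetical two-parameter forward model r(lam) = a * exp(-b * lam),
# a stand-in for the SAIL/PROSPECT-D canopy model (illustrative only).
def forward(a, b, lam):
    return a * math.exp(-b * lam)

bands = [0.4, 0.5, 0.6, 0.7, 0.8, 0.9]             # band centres (toy units)
a_true, b_true = 0.8, 1.5
obs = [forward(a_true, b_true, l) for l in bands]  # synthetic observations

# Gradient-based inversion of the least-squares cost; the partial
# derivatives below are what an AD tool would generate from the model.
a, b, lr = 0.7, 1.3, 0.3
for _ in range(50000):
    ga = gb = 0.0
    for l, y in zip(bands, obs):
        res = forward(a, b, l) - y
        ga += 2.0 * res * math.exp(-b * l)             # dC/da
        gb += 2.0 * res * (-a * l * math.exp(-b * l))  # dC/db
    a -= lr * ga
    b -= lr * gb

# Per-pixel parameter covariance: sigma^2 * (J^T J)^(-1) for the 2x2 case.
sigma2 = 1e-4                                      # assumed observation noise
j11 = sum(math.exp(-b * l) ** 2 for l in bands)
j12 = sum(-a * l * math.exp(-b * l) ** 2 for l in bands)
j22 = sum((a * l * math.exp(-b * l)) ** 2 for l in bands)
det = j11 * j22 - j12 * j12
cov = [[sigma2 * j22 / det, -sigma2 * j12 / det],
       [-sigma2 * j12 / det, sigma2 * j11 / det]]
```

Near the minimum, the inverse of JᵀJ scaled by the observation noise variance gives the parameter covariance matrix whose off-diagonal terms quantify the kind of parameter correlations the abstract discusses.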

1996 ◽  
Vol 33 (2) ◽  
pp. 79-90 ◽  
Author(s):  
Jian Hua Lei ◽  
Wolfgang Schilling

Physically based urban rainfall-runoff models are mostly applied without parameter calibration. Given some preliminary estimates of the uncertainty of the model parameters, the associated model output uncertainty can be calculated. Monte-Carlo simulation followed by multi-linear regression is used for this analysis. The calculated model output uncertainty can be compared to the uncertainty estimated by comparing model output and observed data. Based on this comparison, systematic or spurious errors can be detected in the observation data, the validity of the model structure can be confirmed, and the most sensitive parameters can be identified. If the calculated model output uncertainty is unacceptably large, the most sensitive parameters should be calibrated to reduce the uncertainty. Observation data for which systematic and/or spurious errors have been detected should be discarded from the calibration data. This procedure is referred to as preliminary uncertainty analysis; it is illustrated with an example. The HYSTEM program is applied to predict the runoff volume from an experimental catchment with a total area of 68 ha and an impervious area of 20 ha. Based on the preliminary uncertainty analysis, for 7 of 10 events the measured runoff volume is within the calculated uncertainty range, i.e. less than or equal to the calculated model predictive uncertainty. The remaining 3 events most likely include systematic or spurious errors in the observation data (either in the rainfall or the runoff measurements). These events are then discarded from further analysis. After calibrating the model, the predictive uncertainty of the model is estimated.
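The preliminary uncertainty analysis described above (Monte-Carlo simulation followed by multi-linear regression) can be sketched with a toy runoff model. The model, parameter ranges, and rates below are illustrative stand-ins, not values from HYSTEM or the study catchment.

```python
import random

random.seed(0)

# Toy rainfall-runoff model (a stand-in for HYSTEM; names and numbers are
# illustrative): runoff volume from rainfall depth P [mm] over an impervious
# area A [ha], with runoff coefficient c and initial loss L [mm].
def runoff(c, L, P=20.0, A=20.0):
    return max(P - L, 0.0) * c * A * 10.0   # 1 mm over 1 ha = 10 m^3

# Step 1: Monte-Carlo simulation with preliminary parameter uncertainties.
cs, Ls, Qs = [], [], []
for _ in range(2000):
    c = random.uniform(0.70, 0.95)          # runoff coefficient range
    L = random.uniform(0.5, 2.5)            # initial loss range [mm]
    cs.append(c); Ls.append(L); Qs.append(runoff(c, L))

# Step 2: multi-linear regression of output on parameters.  Because the
# samples are drawn independently, each slope is approximately
# cov(parameter, output) / var(parameter).
def mean(v):
    return sum(v) / len(v)

def cov(u, v):
    mu, mv = mean(u), mean(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / (len(u) - 1)

b_c = cov(cs, Qs) / cov(cs, cs)   # sensitivity to runoff coefficient
b_L = cov(Ls, Qs) / cov(Ls, Ls)   # sensitivity to initial loss

# Step 3: first-order calculated model output uncertainty.
var_Q = b_c ** 2 * cov(cs, cs) + b_L ** 2 * cov(Ls, Ls)
```

The standardized slopes identify the most sensitive parameter (here the runoff coefficient), and `var_Q` is the calculated output uncertainty that would be compared against the measured runoff volumes.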


Author(s):  
Marcello Pericoli ◽  
Marco Taboga

Abstract We propose a general method for the Bayesian estimation of a very broad class of non-linear no-arbitrage term-structure models. The main innovation we introduce is a computationally efficient method, based on deep learning techniques, for approximating no-arbitrage model-implied bond yields to any desired degree of accuracy. Once the pricing function is approximated, the posterior distribution of model parameters and unobservable state variables can be estimated by standard Markov Chain Monte Carlo methods. As an illustrative example, we apply the proposed techniques to the estimation of a shadow-rate model with a time-varying lower bound and unspanned macroeconomic factors.
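The two-stage idea, replacing the expensive pricing function with a cheap approximation and then running standard MCMC, can be sketched as follows. A closed-form curve stands in for the trained neural-network surrogate, and the single-parameter model, prior, and tuning constants are all illustrative.

```python
import math, random

random.seed(1)

# Stand-in for the trained neural-network surrogate of the model-implied
# yield curve (here a cheap closed form; purely illustrative).
def surrogate_yield(theta, m):
    return theta * (1.0 - math.exp(-m)) / m

maturities = [1.0, 2.0, 5.0, 10.0]
theta_true, sigma = 0.03, 0.001
obs = [surrogate_yield(theta_true, m) + random.gauss(0.0, sigma)
       for m in maturities]

def log_post(theta):                  # flat prior on theta > 0
    if theta <= 0.0:
        return -float("inf")
    sse = sum((y - surrogate_yield(theta, m)) ** 2
              for y, m in zip(obs, maturities))
    return -0.5 * sse / sigma ** 2

# Once pricing is cheap, a standard random-walk Metropolis sampler applies.
theta, lp = 0.02, log_post(0.02)
draws = []
for i in range(20000):
    prop = theta + random.gauss(0.0, 0.002)
    lp_prop = log_post(prop)
    if random.random() < math.exp(min(0.0, lp_prop - lp)):
        theta, lp = prop, lp_prop
    if i >= 5000:                     # discard burn-in
        draws.append(theta)

post_mean = sum(draws) / len(draws)
```

The sampler never touches the expensive pricing map; only the pre-built surrogate is evaluated inside the MCMC loop, which is the source of the computational gain the abstract describes.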


2020 ◽  
Vol 70 (1) ◽  
pp. 145-161 ◽  
Author(s):  
Marnus Stoltz ◽  
Boris Baeumer ◽  
Remco Bouckaert ◽  
Colin Fox ◽  
Gordon Hiscott ◽  
...  

Abstract We describe a new and computationally efficient Bayesian methodology for inferring species trees and demographics from unlinked binary markers. Likelihood calculations are carried out using diffusion models of allele frequency dynamics combined with novel numerical algorithms. The diffusion approach allows for analysis of data sets containing hundreds or thousands of individuals. The method, which we call Snapper, has been implemented as part of the BEAST2 package. We conducted simulation experiments to assess numerical error, computational requirements, and accuracy in recovering known model parameters. A reanalysis of soybean SNP data demonstrates that the models implemented in Snapp and Snapper can be difficult to distinguish in practice, a characteristic which we tested with further simulations. We demonstrate the scale of analysis possible using a SNP data set sampled from 399 freshwater turtles in 41 populations. [Bayesian inference; diffusion models; multi-species coalescent; SNP data; species trees; spectral methods.]
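The diffusion models at the heart of Snapper approximate the discrete Wright-Fisher resampling of allele frequencies. A minimal sketch of that underlying discrete process (population size, generation count, and replicate count are arbitrary toy choices, not data from the paper):

```python
import random

random.seed(3)

# Discrete Wright-Fisher drift: each generation, 2N gene copies are
# binomially resampled from the current allele frequency.  Diffusion
# models of allele frequency dynamics approximate exactly this process.
N = 100                               # diploid individuals -> 2N copies

def drift(p, generations):
    for _ in range(generations):
        copies = sum(1 for _ in range(2 * N) if random.random() < p)
        p = copies / (2 * N)
    return p

# Drift is a martingale: the mean frequency stays near its start while
# the variance grows, which is what the diffusion captures cheaply.
finals = [drift(0.5, 50) for _ in range(400)]
mean_p = sum(finals) / len(finals)
```

Replacing this per-copy simulation with a continuous diffusion is what makes likelihood calculations for hundreds or thousands of individuals tractable.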


2013 ◽  
Vol 135 (12) ◽  
Author(s):  
Arun V. Kolanjiyil ◽  
Clement Kleinstreuer

This is the second article of a two-part paper, combining high-resolution computer simulation results of inhaled nanoparticle deposition in a human airway model (Kolanjiyil and Kleinstreuer, 2013, “Nanoparticle Mass Transfer From Lung Airways to Systemic Regions—Part I: Whole-Lung Aerosol Dynamics,” ASME J. Biomech. Eng., 135(12), p. 121003) with a new multicompartmental model for insoluble nanoparticle barrier mass transfer into systemic regions. Specifically, it allows for the prediction of temporal nanoparticle accumulation in the blood and lymphatic systems and in organs. The multicompartmental model parameters were determined from experimental retention and clearance data in rat lungs, and the validated model was then applied to humans based on pharmacokinetic cross-species extrapolation. This hybrid simulator is a computationally efficient tool to predict the nanoparticle kinetics in the human body. The study provides critical insight into nanomaterial deposition and distribution from the lungs to systemic regions. The quantitative results are useful in diverse fields such as toxicology for exposure-risk analysis of ubiquitous nanomaterials and pharmacology for nanodrug development and targeting.
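The compartmental mass-transfer idea can be sketched with a minimal two-compartment chain (lung to blood to elimination). The structure and rate constants below are illustrative toys, not the paper's fitted multicompartmental model.

```python
# Toy two-compartment kinetics: nanoparticle mass moves from the lung
# burden to blood at rate k_lb and is eliminated from blood at rate k_be.
# Rate constants and the 24 h horizon are illustrative, not fitted values.
k_lb, k_be = 0.1, 0.05                    # transfer / elimination rates [1/h]
lung, blood, eliminated = 1.0, 0.0, 0.0   # normalized initial deposition
dt = 0.01                                 # time step [h]
for _ in range(int(24 / dt)):             # simulate 24 hours (forward Euler)
    d_lb = k_lb * lung * dt
    d_be = k_be * blood * dt
    lung -= d_lb
    blood += d_lb - d_be
    eliminated += d_be
```

Mass is conserved by construction, and the lung burden decays approximately as exp(-k_lb t); a full model adds more compartments (lymph, organs) with the same bookkeeping.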


Author(s):  
Alfonso Callejo ◽  
Daniel Dopico

Algorithms for the sensitivity analysis of multibody systems are quickly maturing as computational and software resources grow. Indeed, the area has made substantial progress since the first academic methods and examples were developed. Today, sensitivity analysis tools aimed at gradient-based design optimization are required to be as computationally efficient and scalable as possible. This paper presents extensive verification of one of the most popular sensitivity analysis techniques, namely the direct differentiation method (DDM). Use of this method is recommended when the number of design parameters relative to the number of outputs is small and when the time integration algorithm is sensitive to accumulation errors. Verification is hereby accomplished through two radically different computational techniques, namely manual differentiation and automatic differentiation, which are used to compute the necessary partial derivatives. Experiments are conducted on an 18-degree-of-freedom, 366-dependent-coordinate bus model with realistic geometry and tire contact forces, which constitutes an unusually large system within general-purpose sensitivity analysis of multibody systems. The results are in good agreement; the manual technique provides shorter runtimes, whereas the automatic differentiation technique is easier to implement. The presented results highlight the potential of manual and automatic differentiation approaches within general-purpose simulation packages, and the importance of formulation benchmarking.
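The direct differentiation method can be sketched on a toy first-order ODE: the sensitivity obeys its own ODE, integrated alongside the state, and is then cross-checked against finite differences, mirroring the paper's verification idea. The system is a deliberately trivial stand-in for a multibody model.

```python
import math

# DDM on x' = -k x, x(0) = 1: the sensitivity s = dx/dk satisfies
# s' = -x - k s, s(0) = 0, and is integrated alongside the state.
def integrate(k, t_end=2.0, n=20000):
    dt = t_end / n
    x, s = 1.0, 0.0
    for _ in range(n):
        dx, ds = -k * x, -x - k * s
        x, s = x + dt * dx, s + dt * ds   # forward Euler for both
    return x, s

k = 1.3
x_end, s_ddm = integrate(k)

# Cross-check by central finite differences on the same integrator,
# analogous to verifying DDM against an independent technique.
h = 1e-5
s_fd = (integrate(k + h)[0] - integrate(k - h)[0]) / (2.0 * h)
```

Both values agree closely with each other and with the analytic sensitivity -t·e^{-kt}; note that each differentiates the discretized solution, which is exactly the property that makes DDM robust to accumulation errors in the time integrator.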


Author(s):  
Suryanarayana R. Pakalapati ◽  
Hayri Sezer ◽  
Ismail B. Celik

Dual number arithmetic is a well-known strategy for automatic differentiation of computer codes, which gives exact derivatives, to machine accuracy, of the computed quantities with respect to any of the involved variables. A common application of this concept in computational fluid dynamics, or numerical modeling in general, is to assess the sensitivity of mathematical models to the model parameters. However, dual number arithmetic, in theory, finds the derivatives of the actual mathematical expressions evaluated by the computer code. Thus, the sensitivity to a model parameter found by dual number automatic differentiation is essentially that of the combination of the actual mathematical equations, the numerical scheme, and the grid used to solve the equations, not just that of the model equations alone, as implied by some studies. This aspect of the sensitivity analysis of numerical simulations using dual number automatic differentiation is explored in the current study. A simple one-dimensional advection-diffusion equation is discretized using different finite volume schemes, and the resulting systems of equations are solved numerically. Derivatives of the numerical solutions with respect to parameters are evaluated automatically using dual number automatic differentiation. In addition, the derivatives are estimated using finite differencing for comparison. The analytical solution of the original PDE was also found, and its derivatives were computed analytically. It is shown that a mathematical model can show different sensitivity to a model parameter depending on the numerical method employed to solve the equations and the grid resolution used. This distinction is important, since such inter-dependence needs to be carefully addressed to avoid confusion when reporting the sensitivity of predictions to a model parameter using a computer code. A systematic assessment of numerical uncertainty in the sensitivities computed using automatic differentiation is presented.
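A minimal dual-number class makes the point concrete: because every arithmetic operation propagates a derivative alongside the value, the derivative obtained is that of the discretized expressions the code actually evaluates. The upwind update differentiated below is a one-step toy, and all numbers are illustrative.

```python
# A minimal dual-number class: each value carries a derivative, so every
# arithmetic operation propagates exact derivatives of whatever the code
# actually computes (discretization included).
class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def _wrap(self, o):
        return o if isinstance(o, Dual) else Dual(o)

    def __add__(self, o):
        o = self._wrap(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__

    def __sub__(self, o):
        o = self._wrap(o)
        return Dual(self.val - o.val, self.der - o.der)

    def __rsub__(self, o):
        return Dual(o).__sub__(self)

    def __mul__(self, o):
        o = self._wrap(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

# Derivative of one explicit upwind update of the advection equation
# with respect to the velocity u: seed u with derivative 1.
u = Dual(0.5, 1.0)
dt, dx = 0.1, 1.0
c = [0.0, 1.0, 0.0]                     # concentration field (toy grid)
cfl = u * (dt / dx)
c1_new = c[1] - cfl * (c[1] - c[0])     # upwind update of cell 1
# c1_new.val is the updated concentration; c1_new.der is its exact
# sensitivity to u *for this scheme and this grid*.
```

Changing the scheme or the grid spacing changes `c1_new.der`, which is precisely the scheme- and grid-dependence of dual-number sensitivities that the abstract highlights.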


2014 ◽  
Vol 7 (1) ◽  
pp. 1535-1600
Author(s):  
M. Scherstjanoi ◽  
J. O. Kaplan ◽  
H. Lischke

Abstract. To be able to simulate climate change effects on forest dynamics over the whole of Switzerland, we adapted the second-generation DGVM LPJ-GUESS to the Alpine environment. We modified model functions, tuned model parameters, and implemented new tree species to represent the potential natural vegetation of Alpine landscapes. Furthermore, we increased the computational efficiency of the model to enable area-covering simulations at a fine resolution (1 km) sufficient for the complex topography of the Alps, which resulted in more than 32 000 simulation grid cells. To this end, we applied the recently developed method GAPPARD (Scherstjanoi et al., 2013) to LPJ-GUESS. GAPPARD derives mean output values from a combination of simulation runs without disturbances and a patch age distribution defined by the disturbance frequency. With this computationally efficient method, which increased the model's speed by a factor of approximately 8, we were able to detect shortcomings of LPJ-GUESS functions and parameters more quickly. We used the adapted LPJ-GUESS together with GAPPARD to assess the influence of one climate change scenario on the dynamics of tree species composition and biomass throughout the 21st century in Switzerland. To allow for comparison with the original model, we additionally simulated forest dynamics along a north-south transect through Switzerland. The results from this transect confirmed the high value of the GAPPARD method despite some limitations regarding extreme climatic events. It allowed us, for the first time, to obtain area-wide, detailed high-resolution LPJ-GUESS simulation results for a large part of the Alpine region.
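The core of GAPPARD, replacing many stochastic patch simulations by a disturbance-weighted average over one undisturbed run, can be sketched in a few lines. The growth curve and disturbance probability below are illustrative toys, not LPJ-GUESS output.

```python
import math

# Mean landscape output from a single undisturbed run B(age), weighted by
# the patch-age distribution implied by an annual disturbance probability d
# (geometric: P(age = a) ~ d * (1 - d)**a).  All values are illustrative.
d = 0.01                                  # annual disturbance probability

def biomass(age):                         # toy undisturbed growth curve
    return 250.0 * (1.0 - math.exp(-age / 80.0))

mean_b, weight, w = 0.0, 0.0, d
for a in range(3000):                     # truncate the negligible tail
    mean_b += w * biomass(a)
    weight += w
    w *= 1.0 - d
mean_b /= weight
```

A single deterministic pass over the age distribution replaces an ensemble of stochastic disturbance realizations, which is where this kind of method gains its speed.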


2021 ◽  
Vol 21 (10) ◽  
pp. 263
Author(s):  
Yun-Chuan Xiang ◽  
Ze-Jun Jiang ◽  
Yun-Yong Tang

Abstract In this work, we reanalyzed 11 years of spectral data from the Fermi Large Area Telescope (Fermi-LAT) for currently observed starburst galaxies (SBGs) and star-forming galaxies (SFGs). We used a one-zone model provided by NAIMA and a hadronic origin to explain the GeV observation data of the SBGs and SFGs. We found that a protonic distribution of power-law form with an exponential cutoff can explain the spectra of most SBGs and SFGs. However, it cannot explain the spectral hardening components of NGC 1068 and NGC 4945 in the GeV energy band. Therefore, we considered a two-zone model to explain these phenomena. We summarized the features of the two models' parameters, including the spectral index, cutoff energy, and proton energy budget. Similar to the evolution of supernova remnants (SNRs) in the Milky Way, we estimated the protonic acceleration limit inside the SBGs to be on the order of 10² TeV using the one-zone model; this is close to those of SNRs in the Milky Way.
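The protonic distribution used in the one-zone fits, a power law with an exponential cutoff, and the corresponding proton energy budget integral can be sketched directly. The normalization, index, and cutoff below are illustrative toy values (the index is chosen so the integral has a closed form), not fitted parameters from the paper.

```python
import math

# One-zone proton spectrum: power law with exponential cutoff,
# N(E) = A * E**(-alpha) * exp(-E / E_cut).  A, alpha, and E_cut are
# illustrative toys, not fitted values.
A, alpha, E_cut = 1.0, 1.8, 100.0     # energies in TeV

def n_p(E):
    return A * E ** (-alpha) * math.exp(-E / E_cut)

# Proton energy budget W_p = integral of E * N(E) dE, evaluated with
# log-spaced trapezoids (dE = E dlnE brings in an extra factor of E).
lo, hi, steps = 1e-9, 1e4, 4000
h = (math.log(hi) - math.log(lo)) / steps
W_p, prev = 0.0, None
for i in range(steps + 1):
    E = math.exp(math.log(lo) + i * h)
    f = E * n_p(E) * E
    if prev is not None:
        W_p += 0.5 * (prev + f) * h
    prev = f
```

For these toy values the budget integral has the closed form E_cut**(2-alpha) · Γ(2-alpha), which the quadrature reproduces to within a few percent; fitting A, alpha, and E_cut to observed spectra is what yields the parameter summaries discussed in the abstract.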


Author(s):  
Y Chen ◽  
C Muratov ◽  
V Matveev

Abstract We consider the stationary solution for the Ca2+ concentration near a point Ca2+ source describing a single-channel Ca2+ nanodomain, in the presence of a single mobile Ca2+ buffer with one-to-one Ca2+ binding. We present computationally efficient approximants that estimate stationary single-channel Ca2+ nanodomains with great accuracy in broad regions of parameter space. The presented approximants have a functional form that combines rational and exponential functions, similar to that of the well-known Excess Buffer Approximation and the linear approximation, but with parameters estimated using two novel (to our knowledge) methods. One of the methods involves interpolation between the short-range Taylor series of the buffer concentration and its long-range asymptotic series in inverse powers of distance from the channel. Although this method has already been used to find Padé (rational-function) approximants to single-channel Ca2+ and buffer concentrations, extending it to interpolants combining exponential and rational functions improves accuracy in a significant fraction of the relevant parameter space. A second method is based on the variational approach and involves a global minimization of an appropriate functional with respect to the parameters of the chosen approximations. Extensive parameter sensitivity analysis is presented, comparing these two methods with previously developed approximants. Apart from increased accuracy, the strength of these approximants is that they can be extended to more realistic buffers with multiple binding sites characterized by cooperative Ca2+ binding, such as calmodulin and calretinin.
Statement of Significance Mathematical and computational modeling plays an important role in the study of local Ca2+ signals underlying vesicle exocytosis, muscle contraction, and other fundamental physiological processes. Closed-form approximations describing the steady-state distribution of Ca2+ in the vicinity of an open Ca2+ channel have proved particularly useful for the qualitative modeling of local Ca2+ signals. We present simple and efficient approximants for the Ca2+ concentration in the presence of a mobile Ca2+ buffer, which achieve great accuracy over a wide range of model parameters. Such approximations provide an efficient way to estimate Ca2+ and buffer concentrations without resorting to numerical simulations, and allow one to study the qualitative dependence of the nanodomain Ca2+ distribution on the buffer's Ca2+ binding properties and diffusivity.
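The advantage of rational over polynomial approximation that motivates the Padé-type approximants can be seen on a toy function. The [1/1] Padé approximant to exp(-x) below is built from the same Taylor information as the quadratic polynomial; neither is the paper's buffer approximant.

```python
import math

# [1/1] Pade approximant to exp(-x), matching its Taylor series
# 1 - x + x**2/2 - ... through second order (toy function, not the
# nanodomain buffer equations).
def pade(x):
    return (1.0 - 0.5 * x) / (1.0 + 0.5 * x)

def taylor(x):                 # quadratic truncation: same local data
    return 1.0 - x + 0.5 * x * x

x = 1.5                        # moderately far from the expansion point
err_pade = abs(pade(x) - math.exp(-x))
err_taylor = abs(taylor(x) - math.exp(-x))
```

The rational form degrades far more gracefully away from the expansion point, which is why interpolating between a short-range Taylor series and a long-range asymptotic series with rational (or exponential-rational) forms pays off.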


Author(s):  
Ahmad Bani Younes ◽  
James Turner

In general, the behavior of science and engineering systems is predicted based on nonlinear math models. Imprecise knowledge of the model parameters alters the system response from that of the assumed nominal model data. We propose an algorithm for generating insights into the range of variability that can be expected due to model uncertainty. An automatic differentiation tool builds exact partial derivative models to develop a State Transition Tensor Series-based (STTS) solution for mapping initial uncertainty models into instantaneous uncertainty models. Nonlinear transformations are developed for mapping an initial probability distribution function into a current probability distribution function, allowing fully nonlinear statistical system properties to be computed. This also demands the inverse mapping of the series. The resulting nonlinear probability distribution function (pdf) represents a Liouville approximation to the stochastic Fokker-Planck equation. Numerical examples are presented that demonstrate the effectiveness of the proposed methodology.
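The mapping of an initial uncertainty model into an instantaneous one can be sketched at first order with a scalar state-transition derivative, checked against Monte Carlo. The dynamics and numbers are illustrative toys; the exact derivative below stands in for what an automatic differentiation tool would supply, and the full STTS approach carries higher-order tensors as well.

```python
import math, random

random.seed(2)

# Toy nonlinear dynamics x_{k+1} = x_k + dt * sin(x_k); step_deriv is
# the exact partial an AD tool would generate from the map.
def step(x, dt=0.1):
    return x + dt * math.sin(x)

def step_deriv(x, dt=0.1):
    return 1.0 + dt * math.cos(x)

x0, sig0, n_steps = 1.0, 0.01, 20

# First-order mapping: accumulate the state-transition derivative Phi
# along the nominal trajectory, then map the initial std. deviation.
m, phi = x0, 1.0
for _ in range(n_steps):
    phi *= step_deriv(m)   # Phi_{k+1} = f'(x_k) * Phi_k
    m = step(m)
sig_lin = abs(phi) * sig0

# Monte-Carlo check of the mapped uncertainty.
finals = []
for _ in range(20000):
    x = random.gauss(x0, sig0)
    for _ in range(n_steps):
        x = step(x)
    finals.append(x)
mu = sum(finals) / len(finals)
var = sum((v - mu) ** 2 for v in finals) / (len(finals) - 1)
sig_mc = math.sqrt(var)
```

For small initial uncertainty the linear mapping and the Monte-Carlo estimate agree closely; the higher-order tensor terms of the STTS become important as the initial spread grows and the dynamics bend the distribution.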

