A Method for Joint Estimation of Homogeneous Model Parameters and Heterogeneous Desired Speeds

2020 ◽  
Vol 5 ◽  
Author(s):  
Fredrik Johansson

One of the main strengths of microscopic pedestrian simulation models is their ability to explicitly represent the heterogeneity of the pedestrian population. Most pedestrian populations are heterogeneous with respect to desired speed, and the outputs of microscopic models are naturally sensitive to it: desired speed directly affects flow and travel time, and thus strongly affects the results that matter when pedestrian simulation models are applied in practice. An inaccurate desired speed distribution will in most cases lead to inaccurate simulation results. In this paper we propose a method to estimate the desired speed distribution by treating the desired speeds as model parameters to be adjusted during calibration together with the other model parameters. This leads to an optimization problem that is computationally costly to solve for large data sets. We propose a heuristic method that solves this optimization problem by decomposing the original problem into simpler subproblems that are solved separately. We demonstrate the method on trajectory data from Stockholm central station and conclude from the results that it produces a plausible desired speed distribution under slightly congested conditions.
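
A minimal sketch of the decomposition idea, under the loud assumption of a toy linear speed-density model (not the paper's actual simulation model or algorithm): alternate between solving for the shared parameter with the desired speeds held fixed, and solving each pedestrian's desired speed with the shared parameter held fixed.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ped, n_obs = 30, 40

# Toy data: each pedestrian's observed speed drops with local density
# through a shared sensitivity theta; desired speeds are heterogeneous.
true_theta = 0.30                              # shared parameter
true_v_des = rng.normal(1.34, 0.2, n_ped)      # desired speeds (m/s)
dens = rng.uniform(0.2, 1.5, (n_ped, n_obs))   # local densities (1/m^2)
obs = true_v_des[:, None] - true_theta * dens + rng.normal(0, 0.05, (n_ped, n_obs))

theta, v_des = 0.0, obs.mean(axis=1)           # initial guesses
for _ in range(50):                            # alternate the two subproblems
    # Subproblem 1: shared theta by least squares, desired speeds fixed.
    resid = v_des[:, None] - obs               # equals theta * dens + noise
    theta = np.sum(resid * dens) / np.sum(dens ** 2)
    # Subproblem 2: each pedestrian's desired speed separately, theta fixed.
    v_des = (obs + theta * dens).mean(axis=1)

print(theta, v_des.mean(), v_des.std())        # approx 0.30, 1.34, 0.2
```

Each subproblem has a closed-form solution here, which is what makes the decomposed problem much cheaper than the joint optimization over all parameters at once.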

Author(s):  
Ankit Anil Chaudhari ◽  
Karthik K. Srinivasan ◽  
Bhargava Rama Chilukuri ◽  
Martin Treiber ◽  
Ostap Okhrin

We propose a new methodology for calibrating Wiedemann-99 vehicle-following parameters for mixed traffic (different conventional vehicle classes) based on trajectory data. The existing acceleration equations of the Wiedemann model are modified to represent more realistic driving behavior. Exploratory analysis of simulation data revealed that different Wiedemann-99 parameter sets can produce similar macroscopic behavior, highlighting the importance of calibrating at the microscopic level. The proposed methodology therefore optimizes microscopic performance measures (acceleration, speed, and trajectory profiles) to estimate suitable calibration parameters. Further, the goodness of fit to the observed data is sensitive to the numerical integration method used to compute vehicle velocities and positions. We found that parameters calibrated with the proposed methodology outperform other approaches for calibrating mixed traffic. The results reveal that the calibrated parameter values, and consequently the thresholds that delineate the closing, following, emergency-braking, and opening regimes, differ between two-wheelers and cars. The window (in the relative-speed versus gap plot) for unconscious following is larger for cars, while the free-flow regime is more extensive for two-wheelers. Moreover, under the same relative-speed and gap stimulus, two-wheelers and cars may be in different regimes and display different acceleration responses. Accurate calibration of each vehicle class's parameters is therefore essential for developing micro-simulation models of mixed traffic. Finally, the calibration analysis shows a measurable difference between strict and staggered car following, which demands that the two be calibrated separately.
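
The sensitivity to the integration scheme can be illustrated with a toy follower update (the acceleration function below is a placeholder, not the paper's modified Wiedemann-99 equations): for the same time step, forward-Euler and ballistic updates give different positions, and hence different goodness of fit against trajectory data.

```python
import numpy as np

def accel(gap, dv):
    """Toy car-following acceleration (stand-in for the Wiedemann-99
    regime logic): relax toward a 20 m gap, damp the speed difference."""
    return 0.5 * (gap - 20.0) / 20.0 - 0.3 * dv

dt, x, v = 0.5, 0.0, 10.0        # step (s), position (m), speed (m/s)
gap, dv = 25.0, 1.0              # gap to leader (m), speed difference (m/s)
a = accel(gap, dv)

# Forward Euler: the position update uses the old speed only.
x_euler = x + v * dt
v_euler = v + a * dt

# Ballistic (constant-acceleration) update: the position update
# includes the acceleration term, shifting the simulated trajectory.
x_ball = x + v * dt + 0.5 * a * dt**2
v_ball = v + a * dt

print(x_euler, x_ball)           # 5.0 vs 5.0 + 0.5*a*dt^2
```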


2019 ◽  
Vol 2019 ◽  
pp. 1-18 ◽  
Author(s):  
Martijn Sparnaaij ◽  
Dorine C. Duives ◽  
Victor L. Knoop ◽  
Serge P. Hoogendoorn

Ideally, many steps should be taken before a commercial implementation of a pedestrian model is used in practice. Calibration, whose main goal is to increase the accuracy of the predictions by determining the set of parameter values that best replicates reality, plays an important role in this process. Yet, until recently, calibration received relatively little attention within the field of pedestrian modelling. Most studies focus on only one specific movement base case and/or use a single metric, so it is questionable how generally applicable a pedestrian simulation model is when it has been calibrated using a limited set of movement base cases and a single metric. The objective of this research is twofold: (1) to determine the effect of the choice of movement base cases, metrics, and density levels on the calibration results, and (2) to develop a multiple-objective calibration approach to determine those effects. In this paper a multiple-objective calibration scheme for pedestrian simulation models is presented, in which multiple normalized metrics (i.e., flow, spatial distribution, effort, and travel time) are combined by means of a weighted-sum method that accounts for the stochastic nature of the model. Based on the analysis of the calibration results, it can be concluded that (1) multiple movement base cases are necessary when calibrating a model to capture all relevant behaviours, (2) the density level influences the calibration results, and (3) the choice of metric, or combination of metrics, influences the results severely.
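
A minimal sketch of such a weighted-sum objective over normalized metrics, averaged over replications to account for model stochasticity (the metric names follow the abstract; the normalization scheme, the `simulate` interface, and the weights are illustrative assumptions):

```python
import numpy as np

METRICS = ("flow", "spatial_distribution", "effort", "travel_time")

def combined_objective(params, simulate, reference, weights, n_reps=10):
    """Weighted sum of normalized metric errors, averaged over
    stochastic replications of the pedestrian model.

    simulate(params, seed) -> dict mapping metric name to value;
    reference -> dict of observed values for the same metrics.
    """
    totals = np.zeros(len(METRICS))
    for seed in range(n_reps):
        sim = simulate(params, seed=seed)
        for i, m in enumerate(METRICS):
            # Normalize each error by the reference magnitude so that
            # metrics on very different scales become comparable.
            totals[i] += abs(sim[m] - reference[m]) / abs(reference[m])
    return float(np.dot(weights, totals / n_reps))
```

The weight vector is where the paper's first research question bites: changing the weights (or dropping a metric entirely) changes which parameter set the calibration selects.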


Author(s):  
Ronan Keane ◽  
H. Oliver Gao

Before a car-following model can be applied in practice, it must first be validated against real data in a process known as calibration. This paper discusses the formulation of calibration as an optimization problem and compares different algorithms for its solution. The optimization consists of an arbitrary car-following model, posed as either an ordinary or a delay differential equation, being calibrated to an arbitrary source of trajectory data that may include lane changes. Typically, the calibration problem is solved using gradient-free optimization. In this work, the gradient of the optimization problem is derived analytically using the adjoint method. The computational cost of the adjoint method does not scale with the number of model parameters, which makes it more efficient than evaluating the gradient numerically using finite differences. Numerical results show that quasi-Newton algorithms using the adjoint method are significantly faster than a genetic algorithm and also achieve slightly better accuracy of the calibrated model.
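
A sketch of the cost contrast on a toy optimal-velocity-style follower (an illustrative model and loss, not the paper's): the finite-difference gradient below requires one extra simulation per parameter, which is exactly the cost the adjoint method avoids; the quasi-Newton solver simply consumes whichever gradient is supplied.

```python
import numpy as np
from scipy.optimize import minimize

def simulate(p, lead_x, dt=0.1):
    """Toy follower: relax toward a gap-dependent target speed."""
    k, v0, s0 = p
    x, v, xs = lead_x[0] - 20.0, 10.0, []
    for xl in lead_x:
        s = xl - x                                  # gap to the leader
        a = k * (v0 * np.tanh((s - s0) / 10.0) - v)
        v = max(v + a * dt, 0.0)
        x = x + v * dt
        xs.append(x)
    return np.array(xs)

def loss(p, lead_x, obs_x):
    return np.mean((simulate(p, lead_x) - obs_x) ** 2)

def fd_gradient(p, lead_x, obs_x, h=1e-6):
    """Finite differences: one extra simulation per parameter, so the
    cost grows with len(p); an adjoint gradient would replace this at
    a cost roughly independent of len(p)."""
    g, f0 = np.zeros(len(p)), loss(p, lead_x, obs_x)
    for i in range(len(p)):
        q = np.array(p, float)
        q[i] += h
        g[i] = (loss(q, lead_x, obs_x) - f0) / h
    return g

lead_x = 20.0 * np.arange(200) * 0.1 + 20.0         # leader at 20 m/s
obs_x = simulate([0.6, 22.0, 2.0], lead_x)          # synthetic "data"
res = minimize(loss, x0=[0.3, 15.0, 5.0], args=(lead_x, obs_x),
               jac=lambda p, *a: fd_gradient(p, *a), method="L-BFGS-B")
print(res.x)
```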


2021 ◽  
Vol 143 (9) ◽  
Author(s):  
Yi-Ping Chen ◽  
Kuei-Yuan Chan

Abstract Simulation models play a crucial role in efficient product development cycles, so many studies aim to improve confidence in a model during the validation stage. In this research, we propose a dynamic model validation method that provides parameter settings yielding minimal output error between simulation models and physical experiments. Optimal excitation procedures for parameter setting are developed to maximize the effect of specific model parameters while minimizing their interactions. To manage the excessive cost of simulating complex systems, we propose a procedure with three main features: optimal excitation design based on global sensitivity analysis (GSA) via metamodel techniques, parameter estimation with a polynomial chaos-based Kalman filter, and validation of the updated model based on hypothesis testing. An illustrative mathematical model is used to demonstrate the detailed steps of the proposed method. We also apply the method to a vehicle dynamics case with a composite maneuver designed to excite unknown model parameters such as inertia properties and tire-model coefficients; the unknown parameters were successfully estimated within a 95% credible interval. The contributions of this research are further underscored through multiple case studies.
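
A minimal variance-based GSA sketch using the SALib package's Sobol routines (library usage as commonly documented; the paper's metamodel-accelerated excitation design and polynomial-chaos Kalman filter are not reproduced here, and the parameter names are hypothetical):

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["mass", "inertia", "tire_stiffness"],   # hypothetical
    "bounds": [[900, 1500], [1000, 3000], [50000, 120000]],
}

def model(X):
    """Toy scalar response (e.g., peak yaw rate) standing in for the
    full vehicle-dynamics simulation."""
    m, I, c = X.T
    return c / (m * np.sqrt(I))

X = saltelli.sample(problem, 1024)   # Saltelli sampling design
Y = model(X)
S = sobol.analyze(problem, Y)        # first-order and total-order indices
print(S["S1"], S["ST"])
```

Parameters with large total-order indices are the ones a maneuver should excite; parameters whose total and first-order indices differ strongly are entangled in interactions, which the optimal excitation aims to minimize.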


PLoS ONE ◽  
2021 ◽  
Vol 16 (9) ◽  
pp. e0257958
Author(s):  
Miguel Navascués ◽  
Costantino Budroni ◽  
Yelena Guryanova

In the context of epidemiology, policies for disease control are often devised through a mixture of intuition and brute force, whereby the set of logically conceivable policies is narrowed down to a small family described by a few parameters, after which linearization or grid search is used to identify the optimal policy within the set. This scheme runs the risk of leaving out more complex (and perhaps counter-intuitive) policies that could tackle the disease more efficiently. In this article, we use techniques from convex optimization theory and machine learning to conduct optimizations over disease policies described by hundreds of parameters. In contrast to past approaches to policy optimization based on control theory, our framework can deal with arbitrary uncertainties on the initial conditions and on the model parameters controlling the spread of the disease, as well as with stochastic models. In addition, our methods allow optimization over policies that remain constant over weekly periods, specified by either continuous or discrete (e.g., lockdown on/off) government measures. We illustrate our approach by minimizing the total time required to eradicate COVID-19 within the Susceptible-Exposed-Infected-Recovered (SEIR) model proposed by Kissler et al. (March, 2020).
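
A sketch of the setup being optimized over: an SEIR integrator whose contact rate is scaled by a policy vector with one entry per week, so a 52-week horizon already gives a 52-parameter policy (rate constants here are illustrative, not those of Kissler et al.).

```python
import numpy as np

def seir(policy, weeks, beta0=0.5, sigma=1/5.2, gamma=1/14, dt=0.25):
    """SEIR with a weekly piecewise-constant contact-rate multiplier.
    policy[w] in [0, 1]: 1 = no restrictions, 0 = full lockdown."""
    S, E, I, R = 0.999, 0.0, 0.001, 0.0
    t, days = 0.0, weeks * 7
    while t < days:
        beta = beta0 * policy[min(int(t // 7), weeks - 1)]
        dS = -beta * S * I
        dE = beta * S * I - sigma * E
        dI = sigma * E - gamma * I
        S, E, I, R = S + dS*dt, E + dE*dt, I + dI*dt, R + gamma*I*dt
        t += dt
    return I  # infected fraction at the horizon

# Example of a discrete policy: lockdown on alternating weeks.
policy = np.tile([0.2, 1.0], 26)      # 52 weekly decisions
print(seir(policy, weeks=52))
```

An optimizer searching over this vector (continuous, or constrained to the values {0.2, 1.0} for on/off measures) can discover schedules a small hand-picked policy family would miss.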


2021 ◽  
Author(s):  
Peter J. Gawthrop ◽  
Michael Pan ◽  
Edmund J. Crampin

Abstract Renewed interest in dynamic simulation models of biomolecular systems has arisen from advances in genome-wide measurement and from applications of such models in biotechnology and synthetic biology. In particular, genome-scale models of cellular metabolism beyond the steady state are required in order to represent the transient and dynamic regulatory properties of the system. Development of such whole-cell models requires new modelling approaches. Here we propose the energy-based bond graph methodology, which integrates stoichiometric models with thermodynamic principles and kinetic modelling. We demonstrate how the bond graph approach intrinsically enforces thermodynamic constraints, provides a modular approach to modelling, and gives a basis for estimating model parameters, leading to dynamic models of biomolecular systems. The approach is illustrated using a well-established stoichiometric model of E. coli and published experimental data.
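
A sketch of the thermodynamic consistency the bond graph formulation enforces: reaction fluxes written in Marcelin-de Donder form, driven by chemical potentials, so every flux vanishes exactly at equilibrium (a toy two-species example, not the E. coli model; the component names follow bond-graph convention).

```python
import numpy as np

R, T = 8.314, 310.0              # gas constant (J/mol/K), temperature (K)

def potential(K, x):
    """Chemical potential mu = RT ln(K x) for a species with
    thermodynamic constant K and concentration x (bond-graph 'Ce' law)."""
    return R * T * np.log(K * x)

def flux(kappa, mu_f, mu_r):
    """Marcelin-de Donder rate (bond-graph 'Re' law): forward and
    reverse affinities enter exponentially, so the flux is exactly
    zero when mu_f == mu_r, enforcing detailed balance."""
    return kappa * (np.exp(mu_f / (R * T)) - np.exp(mu_r / (R * T)))

# Toy reversible reaction A <-> B
K_A, K_B, kappa = 2.0, 1.0, 1e-3
x_A, x_B = 1.0, 1.5
v = flux(kappa, potential(K_A, x_A), potential(K_B, x_B))
print(v)  # positive: net A -> B, since mu_A > mu_B here
```

Because mass-action rate constants are derived from (K, kappa) rather than fitted independently, parameter estimation cannot produce a kinetic model that violates the second law, which is the constraint the abstract refers to.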


2020 ◽  
Vol 5 ◽  
Author(s):  
Abdullah Alhawsawi ◽  
Majid Sarvi ◽  
Milad Haghani ◽  
Abbas Rajabifard

Modelling and simulating pedestrian motion are standard ways to investigate crowd dynamics with the aim of enhancing pedestrian safety. The movement of people is affected by interactions with one another and with the physical environment, which makes it a worthy line of research. This paper studies the impact of speed on how pedestrians respond to obstacles (i.e., obstacle avoidance behaviour). A field experiment was performed in which a group of people was instructed to perform obstacle avoidance tasks at two speed levels, normal and high. Trajectories of the participants were extracted from the video recordings with three aims: (i) to examine the impact of total speed (x- and y-axes), (ii) to observe the impact of speed on the movement direction (x-axis), and (iii) to determine the impact of speed on the lateral direction (y-axis). The results of the experiments could be used to enhance current pedestrian simulation models.
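
A minimal sketch of the trajectory post-processing implied by aims (i)-(iii), under assumed conventions (positions in metres with the x-axis aligned to the intended movement direction; the paper's actual extraction pipeline is not described):

```python
import numpy as np

def speed_components(xy, fps=25.0):
    """xy: (N, 2) trajectory positions sampled at fps frames/second.
    Returns total, movement-direction (x) and lateral (y) speeds
    computed by finite differences."""
    v = np.diff(xy, axis=0) * fps     # (N-1, 2) velocities in m/s
    v_x, v_y = v[:, 0], v[:, 1]
    return np.hypot(v_x, v_y), v_x, v_y

# Example: a participant drifting laterally around an obstacle.
track = np.array([[0.00, 0.00], [0.06, 0.01], [0.12, 0.03], [0.18, 0.04]])
total, vx, vy = speed_components(track)
print(total, vx, vy)
```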


2011 ◽  
Vol 12 (1) ◽  
pp. 92-98
Author(s):  
Aušra Klimavičienė

The article examines the problem of determining the asset allocation of a sustainable retirement portfolio. It attempts to apply a heuristic method – the "100 minus age in stocks" rule – to determine this allocation. Using dynamic stochastic simulation and stochastic optimization techniques, the heuristic rule is optimized and the optimal alternative to "100" is found. To reflect the stochastic nature of stock and bond returns and of the human lifespan, the dynamic stochastic simulation models incorporate both stochastic returns and the probability of living another year based on Lithuania's population mortality tables. The article presents a new method – an adjusted heuristic – for determining the asset allocation of a retirement portfolio and highlights its advantages.
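
A sketch of the simulation logic: Monte Carlo over stochastic returns and mortality with a "(base minus age)% in stocks" glide path; the stochastic-optimization step then searches over the base. All distributions, the spending rule, and the mortality curve below are illustrative assumptions, not Lithuania's tables or the article's estimates.

```python
import numpy as np

rng = np.random.default_rng(1)

def ruin_probability(base, n=2000, start_age=65, wealth0=100.0, spend=4.5):
    """Fraction of simulated lifetimes in which a '(base - age)% in
    stocks' retirement portfolio is exhausted before death."""
    ruined = 0
    for _ in range(n):
        wealth, age = wealth0, start_age
        while wealth > 0:
            # Illustrative mortality: hazard rising with age.
            if rng.random() < min(0.01 * 1.09 ** (age - start_age), 1.0):
                break                   # death with wealth remaining
            w_stock = np.clip((base - age) / 100.0, 0.0, 1.0)
            r = (w_stock * rng.normal(0.07, 0.18)
                 + (1 - w_stock) * rng.normal(0.03, 0.05))
            wealth = wealth * (1 + r) - spend
            age += 1
        else:
            ruined += 1                 # portfolio exhausted while alive
    return ruined / n

# Compare the classic base of 100 with the higher alternatives the
# adjusted heuristic searches over.
for base in (100, 110, 120):
    print(base, ruin_probability(base))
```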


2021 ◽  
Author(s):  
Soham Sheth ◽  
Francois McKee ◽  
Kieran Neylon ◽  
Ghazala Fazil

Abstract We present a novel reservoir simulator time-step selection approach that uses machine-learning (ML) techniques to analyze the mathematical and physical state of the system and predict time-step sizes that are large while still being efficient to solve, thus making the simulation faster. An optimal time-step choice avoids wasted non-linear and linear equation set-up work when the time-step is too small, and avoids highly non-linear systems that take many iterations to solve. Typical time-step selectors use a limited set of features to heuristically predict the size of the next time-step. While they have been effective for simple simulation models, as model complexity increases there is an increasing need for robust, data-driven time-step selection algorithms. We propose two workflows – static and dynamic – that use a diverse set of physical (e.g., well data) and mathematical (e.g., CFL) features to build a predictive ML model. This model can be pre-trained or dynamically trained to generate an inference model, can be reinforced as new data becomes available, and can be used efficiently for transfer learning. We present the application of these workflows in a commercial reservoir simulator on distinct simulation model types, including black oil, compositional, and thermal steam-assisted gravity drainage (SAGD). We have found that history-match and uncertainty/optimization studies benefit most from the static approach, while the dynamic approach produces optimal step sizes for prediction studies. We use a confidence monitor to manage the ML time-step selector at runtime: if the confidence level falls below a threshold, we switch to a traditional heuristic method for that time-step, which avoids performance degradation when the model features lie outside the training space. Application to several complex cases, including a large field study, shows a significant speedup for single simulations and even better results for multiple simulations. We demonstrate that any simulation can take advantage of the stored state of the trained model and even augment it when new situations are encountered, so the system becomes more effective as it is exposed to more data.
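
A sketch of the runtime confidence gate described above: an ML regressor proposes the next step size from physical and mathematical features, and the simulator falls back to a heuristic selector whenever confidence drops below the threshold. The feature layout, the confidence proxy, and all names here are illustrative assumptions, not the commercial simulator's API.

```python
import numpy as np

def heuristic_dt(prev_dt, newton_iters, target_iters=8):
    """Classic fallback selector: grow the step when the non-linear
    solve was easy, shrink it when it was hard."""
    return prev_dt * min(2.0, max(0.5, target_iters / max(newton_iters, 1)))

class GatedStepSelector:
    def __init__(self, model, threshold=0.7):
        self.model = model            # trained regressor that also
        self.threshold = threshold    # reports a confidence score

    def next_dt(self, features, prev_dt, newton_iters):
        dt, confidence = self.model.predict(features)
        if confidence < self.threshold:
            # Features outside the training space: use the
            # traditional heuristic for this time-step only.
            return heuristic_dt(prev_dt, newton_iters)
        return dt

class StubModel:
    """Stand-in for the trained ML model (e.g., an ensemble whose
    prediction spread yields the confidence score)."""
    def predict(self, features):
        return 30.0, 0.9              # (dt in days, confidence)

selector = GatedStepSelector(StubModel())
print(selector.next_dt(features=np.array([0.4, 1.2]),
                       prev_dt=20.0, newton_iters=12))
```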

