Parameter Estimation of the Macroscopic Fundamental Diagram: A Maximum Likelihood Approach

Author(s):  
Rafegh Aghamohammadi ◽  
Jorge Laval

This paper extends the Stochastic Method of Cuts (SMoC) to approximate the Macroscopic Fundamental Diagram (MFD) of urban networks and uses the Maximum Likelihood Estimation (MLE) method to estimate the model parameters based on empirical data from a corridor and 30 cities around the world. For the corridor case, the estimated values are in good agreement with the measured values of the parameters. For the network datasets, the results indicate that the method yields satisfactory parameter estimates and graphical fits for roughly 50% of the studied networks, where the estimates fall within the expected range of parameter values. The satisfactory estimates are mostly obtained for datasets that (i) cover a relatively wide range of densities and (ii) have average flow values at different densities that are approximately normally distributed, similar to the probability density function of the SMoC. The estimated parameter values are compared to the real or expected values, and any discrepancies and their potential causes are discussed in depth to identify the challenges in MFD estimation, both analytically and empirically. In particular, we find that the most important issues needing further investigation are: (i) the distribution of loop detectors within the links, (ii) the distribution of loop detectors across the network, and (iii) the treatment of unsignalized intersections and their impact on the block length.
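As a rough illustration of the estimation step, the following is a minimal sketch that fits a concave flow-density relationship to simulated network observations by maximum likelihood under a Gaussian noise assumption. The parametric form (a Greenshields-style curve with hypothetical free-flow speed u and jam density kj) and the noise model are stand-ins for the paper's SMoC-based likelihood, which is not reproduced here.

```python
# Minimal MLE sketch for an MFD-style flow-density fit (illustrative only).
# Assumptions: Greenshields-style mean flow u*k*(1 - k/kj) and Gaussian noise;
# the paper's actual SMoC likelihood is different and not reproduced here.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
kj_true, u_true, sigma_true = 0.12, 50.0, 80.0      # jam density, free-flow speed, noise sd
k = rng.uniform(0.0, kj_true, 200)                  # observed network densities
q = u_true * k * (1.0 - k / kj_true) * 1000         # mean flow (arbitrary units)
q = q + rng.normal(0.0, sigma_true, k.size)         # noisy average flows

def negloglik(theta):
    u, kj, sigma = theta
    if u <= 0 or kj <= k.max() or sigma <= 0:
        return np.inf                                # keep the search in the feasible region
    mu = u * k * (1.0 - k / kj) * 1000
    return 0.5 * np.sum(((q - mu) / sigma) ** 2) + k.size * np.log(sigma)

fit = minimize(negloglik, x0=[30.0, 0.2, 50.0], method="Nelder-Mead")
print(fit.x)  # estimated (u, kj, sigma); compare with (50.0, 0.12, 80.0)
```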

2021 ◽  
pp. 125-148
Author(s):  
Timothy E. Essington

The chapter “Likelihood and Its Applications” introduces the likelihood concept and the concept of maximum likelihood estimation of model parameters. Likelihood is the link between data and models. It is used to estimate model parameters, judge the degree of precision of parameter estimates, and weight support for alternative models. Likelihood is therefore a crucial concept that underlies the ability to test multiple models. The chapter contains several worked examples that progress the reader through increasingly complex problems, ending at likelihood profiles for models with multiple parameters. Importantly, it illustrates how one can take any dynamic model and data and use likelihood to link the data (random variables) to a probability function that depends on the dynamic model.
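The workflow the chapter describes can be made concrete with a small sketch (not taken from the book): compute the log-likelihood of toy count data under a Poisson model, locate the maximum likelihood estimate, and read a profile-style interval off the log-likelihood curve.

```python
# Toy example of likelihood-based estimation (illustrative, not from the chapter).
import numpy as np
from scipy.stats import poisson

counts = np.array([3, 5, 2, 4, 6, 1, 3])                       # toy data
lam_grid = np.linspace(0.5, 8.0, 400)
loglik = np.array([poisson.logpmf(counts, lam).sum() for lam in lam_grid])

lam_hat = lam_grid[loglik.argmax()]                            # grid MLE
print(lam_hat, counts.mean())                                  # analytically, the MLE is the mean

# Profile-style 95% interval: lambda values within 1.92 log-likelihood units of the maximum.
inside = lam_grid[loglik >= loglik.max() - 1.92]
print(inside.min(), inside.max())
```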


2017 ◽  
Vol 12 (02) ◽  
pp. 1750010 ◽  
Author(s):  
K. FERGUSSON

A discounted equity index is computed as the ratio of an equity index to the accumulated savings account denominated in the same currency. Discounting thus provides a natural way of separating the modeling of the short rate from the market price of risk component of the equity index. In this vein, we investigate the applicability of maximum likelihood estimation to stochastic models of a discounted equity index, providing explicit formulae for parameter estimates. We restrict our consideration to two important index models, namely the Black–Scholes model and the minimal market model of Platen, each having an explicit formula for the transition density function. Explicit formulae for estimates of the model parameters and their standard errors are derived and are used in fitting the two models to US data. Further, we demonstrate the effect of the model choice on the no-arbitrage assumption employed in risk-neutral pricing.
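For the lognormal (Black–Scholes) case, the "explicit formulae" idea can be sketched directly: log-increments of the index are i.i.d. normal, so the maximum likelihood estimators have closed forms. The code below uses simulated data; the paper's own formulae for the Black–Scholes and minimal market models, and the associated standard errors, are more detailed.

```python
# Closed-form MLE for a geometric Brownian motion (Black-Scholes) index from
# equally spaced observations; simulated data, illustrative parameter values.
import numpy as np

rng = np.random.default_rng(1)
dt, mu_true, sigma_true, n = 1 / 252, 0.08, 0.20, 5000
steps = (mu_true - 0.5 * sigma_true**2) * dt + sigma_true * np.sqrt(dt) * rng.standard_normal(n)
x = 100.0 * np.exp(np.cumsum(steps))                 # simulated discounted index path

r = np.diff(np.log(x))                               # log-returns
sigma_hat = np.sqrt(r.var(ddof=0) / dt)              # MLE of volatility
mu_hat = r.mean() / dt + 0.5 * sigma_hat**2          # MLE of drift
print(mu_hat, sigma_hat)                             # compare with (0.08, 0.20)
```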


Behaviour ◽  
2007 ◽  
Vol 144 (11) ◽  
pp. 1315-1332 ◽  
Author(s):  
Sebastián Luque ◽  
Christophe Guinet

Abstract. Foraging behaviour frequently occurs in bouts, and considerable efforts to properly define those bouts have been made because they partly reflect different scales of environmental variation. Methods traditionally used to identify such bouts are diverse, include some level of subjectivity, and their accuracy and precision are rarely compared. Therefore, the applicability of a maximum likelihood estimation method (MLM) for identifying dive bouts was investigated and compared with a recently proposed sequential differences analysis (SDA). Using real data on interdive durations from Antarctic fur seals (Arctocephalus gazella Peters, 1875), the MLM-based model produced a briefer bout-ending criterion (BEC) and more precise parameter estimates than the SDA approach. The MLM-based model was also in better agreement with the real data, as it predicted the cumulative frequency of differences in interdive duration more accurately. Using both methods on simulated data showed that the MLM-based approach produced less biased estimates of the given model parameters than the SDA approach. Different choices of histogram bin width involved in SDA had a systematic effect on the estimated BEC, such that larger bin widths resulted in longer BECs. These results suggest that using the MLM-based procedure with the sequential differences in interdive durations, and possibly other dive characteristics, may be an accurate, precise, and objective tool for identifying dive bouts.
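The general MLM idea can be illustrated on simulated interval data: fit a two-process exponential mixture by maximum likelihood and take the bout-ending criterion (BEC) as the interval length at which the two fitted processes are equally likely. The data, starting values and exact likelihood below are illustrative only; the paper works with sequential differences in interdive duration from the fur-seal records.

```python
# Illustrative bout analysis: ML fit of a two-process exponential mixture and
# the implied bout-ending criterion (BEC). Simulated data, not the seal data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
fast = rng.exponential(30.0, 700)      # within-bout intervals (s)
slow = rng.exponential(600.0, 300)     # between-bout intervals (s)
t = np.concatenate([fast, slow])

def negloglik(theta):
    p, lam_f, lam_s = theta            # mixing weight and the two process rates
    if not (0.0 < p < 1.0) or lam_f <= 0.0 or lam_s <= 0.0:
        return np.inf
    dens = p * lam_f * np.exp(-lam_f * t) + (1 - p) * lam_s * np.exp(-lam_s * t)
    return -np.log(dens).sum()

fit = minimize(negloglik, x0=[0.5, 1 / 60, 1 / 300], method="Nelder-Mead")
p, lam_f, lam_s = fit.x
bec = np.log((p * lam_f) / ((1 - p) * lam_s)) / (lam_f - lam_s)   # equal-density crossing point
print(fit.x, bec)
```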


Methodology ◽  
2005 ◽  
Vol 1 (2) ◽  
pp. 81-85 ◽  
Author(s):  
Stefan C. Schmukle ◽  
Jochen Hardt

Abstract. Incremental fit indices (IFIs) are regularly used when assessing the fit of structural equation models. IFIs are based on the comparison of the fit of a target model with that of a null model. For maximum-likelihood estimation, IFIs are usually computed by using the χ2 statistics of the maximum-likelihood fitting function (ML-χ2). However, LISREL recently changed the computation of IFIs. Since version 8.52, IFIs reported by LISREL are based on the χ2 statistics of the reweighted least squares fitting function (RLS-χ2). Although both functions lead to the same maximum-likelihood parameter estimates, the two χ2 statistics take different values. Because these differences are especially large for null models, IFIs are affected in particular. Consequently, RLS-χ2-based IFIs in combination with the conventional cut-off values established for ML-χ2-based IFIs may lead to the incorrect acceptance of models. We demonstrate this point by a confirmatory factor analysis in a sample of 2449 subjects.
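The mechanics are easy to see with a toy calculation: an incremental fit index such as the CFI depends on the null-model χ2, so a null-model χ2 based on a different fitting function shifts the index even though the target model is unchanged. The numbers below are invented for illustration.

```python
# Toy illustration of how the null-model chi-square drives an incremental fit
# index; the figures are invented, not taken from the article.
def cfi(chi2_t, df_t, chi2_0, df_0):
    # Comparative Fit Index: 1 - max(chi2_t - df_t, 0) / max(chi2_0 - df_0, chi2_t - df_t, 0)
    num = max(chi2_t - df_t, 0.0)
    den = max(chi2_0 - df_0, chi2_t - df_t, 0.0)
    return 1.0 - (num / den if den > 0.0 else 0.0)

# Same target model, two hypothetical chi-square values for the null model:
print(cfi(chi2_t=180.0, df_t=120, chi2_0=2400.0, df_0=136))   # smaller null chi-square
print(cfi(chi2_t=180.0, df_t=120, chi2_0=9600.0, df_0=136))   # larger null chi-square -> higher CFI
```

The second index is noticeably higher purely because of the larger null-model χ2, which is the article's concern about applying conventional cut-off values to RLS-χ2-based IFIs.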


2013 ◽  
Vol 2013 ◽  
pp. 1-13 ◽  
Author(s):  
Helena Mouriño ◽  
Maria Isabel Barão

Missing-data problems are extremely common in practice. To achieve reliable inferential results, we need to take this feature of the data into account. Suppose that the univariate data set under analysis has missing observations. This paper examines the impact of selecting an auxiliary complete data set, whose underlying stochastic process is to some extent interdependent with the former, to improve the efficiency of the estimators for the relevant parameters of the model. The Vector AutoRegressive (VAR) Model has proved to be an extremely useful tool for capturing the dynamics of bivariate time series. We propose maximum likelihood estimators for the parameters of the VAR(1) Model based on a monotone missing-data pattern. The estimators' precision is also derived. Afterwards, we compare the bivariate modelling scheme with its univariate counterpart. More precisely, the univariate data set with missing observations is modelled by an AutoRegressive Moving Average (ARMA(2,1)) Model. We also analyse the behaviour of the AutoRegressive Model of order one, AR(1), due to its practical importance. We focus on the mean value of the main stochastic process. Through simulation studies, we conclude that the estimator based on the VAR(1) Model is preferable to those derived from the univariate context.
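For orientation, the sketch below shows conditional maximum likelihood (equivalently, least squares) estimation of a bivariate VAR(1), x_t = c + A x_{t-1} + e_t, in the complete-data case. The paper's actual contribution, estimators and their precision under a monotone missing-data pattern, is not reproduced here.

```python
# Complete-data VAR(1) estimation sketch: x_t = c + A x_{t-1} + e_t.
# Conditional ML coincides with least squares regression on the lagged values.
import numpy as np

rng = np.random.default_rng(3)
A = np.array([[0.5, 0.2],
              [0.1, 0.6]])
c = np.array([1.0, -0.5])
x = np.zeros((500, 2))
for t in range(1, 500):
    x[t] = c + A @ x[t - 1] + rng.normal(0.0, 0.3, 2)

Y = x[1:]                                       # responses x_t
Z = np.column_stack([np.ones(len(Y)), x[:-1]])  # intercept and lagged values x_{t-1}
B_hat = np.linalg.lstsq(Z, Y, rcond=None)[0]    # stacked (c, A) coefficients
print(B_hat.T)                                  # rows give [c | A]; compare with the true values
```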


2020 ◽  
Vol 9 (1) ◽  
pp. 61-81
Author(s):  
Lazhar BENKHELIFA

A new lifetime model with four positive parameters, called the Weibull Birnbaum-Saunders distribution, is proposed. The proposed model extends the Birnbaum-Saunders distribution and provides great flexibility in modeling data in practice. Some mathematical properties of the new distribution are obtained, including expansions for the cumulative and density functions, moments, the generating function, mean deviations, order statistics and reliability. Estimation of the model parameters is carried out by maximum likelihood. A simulation study is presented to show the performance of the maximum likelihood estimates of the model parameters. The flexibility of the new model is examined by applying it to two real data sets.
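As a stand-in for the estimation step, the sketch below runs numerical maximum likelihood for the baseline Birnbaum-Saunders distribution (scipy's fatiguelife); the four-parameter Weibull Birnbaum-Saunders density derived in the paper is not reproduced here, but the same optimize-the-negative-log-likelihood pattern applies to it.

```python
# Numerical MLE for the baseline Birnbaum-Saunders ("fatiguelife") distribution,
# standing in for the paper's four-parameter Weibull Birnbaum-Saunders model.
import numpy as np
from scipy.stats import fatiguelife
from scipy.optimize import minimize

rng = np.random.default_rng(4)
data = fatiguelife.rvs(0.8, scale=2.0, size=400, random_state=rng)   # simulated lifetimes

def negloglik(theta):
    shape, scale = theta
    if shape <= 0.0 or scale <= 0.0:
        return np.inf
    return -fatiguelife.logpdf(data, shape, loc=0.0, scale=scale).sum()

fit = minimize(negloglik, x0=[1.0, 1.0], method="Nelder-Mead")
print(fit.x)   # estimated (shape, scale); compare with (0.8, 2.0)
```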


Author(s):  
Tu Xu ◽  
Jorge Laval

This paper analyzes the impact of uphill grades on the acceleration drivers choose to impose on their vehicles. Statistical inference is based on the maximum likelihood estimation of a two-regime stochastic car-following model using Next Generation SIMulation (NGSIM) data. Previous models assume that the loss in acceleration on uphill grades is given by the effects of gravity. We find evidence that this is not the case for car drivers, who tend to overcome half of the gravitational effects by using more engine power. Truck drivers compensate for only 5% of the loss, possibly because of limited engine power. This indicates that current models not only severely overestimate the operational impacts that uphill grades have on regular vehicles but also underestimate their environmental impacts. We also find that car-following model parameters differ significantly among shoulder, median and middle lanes, but more data are needed to understand why this happens.
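A back-of-the-envelope calculation makes the size of the reported effect concrete. On a grade G, the raw gravitational acceleration loss is approximately g·G (small-angle approximation), and the paper's estimates imply that car drivers recover about half of it while truck drivers recover roughly 5%. The grade value below is arbitrary.

```python
# Rough magnitude check of the compensation effect reported in the paper.
# The 4% grade is arbitrary; the compensation fractions restate the paper's findings.
g = 9.81          # gravitational acceleration, m/s^2
grade = 0.04      # 4% uphill grade (sin(theta) ~ grade for small angles)
raw_loss = g * grade

for vehicle, compensation in [("car", 0.50), ("truck", 0.05)]:
    effective_loss = (1.0 - compensation) * raw_loss
    print(f"{vehicle}: raw loss {raw_loss:.3f} m/s^2, effective loss {effective_loss:.3f} m/s^2")
```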


2019 ◽  
Vol 36 (10) ◽  
pp. 2352-2357
Author(s):  
David A Shaw ◽  
Vu C Dinh ◽  
Frederick A Matsen

Abstract. Maximum likelihood estimation in phylogenetics requires a means of handling unknown ancestral states. Classical maximum likelihood averages over these unknown intermediate states, leading to provably consistent estimation of the topology and continuous model parameters. Recently, a computationally efficient approach has been proposed to jointly maximize over these unknown states and phylogenetic parameters. Although this method of joint maximum likelihood estimation can obtain estimates more quickly, its properties as an estimator are not yet clear. In this article, we show that this method of jointly estimating phylogenetic parameters along with ancestral states is not consistent in general. We find a sizeable region of parameter space that generates data on a four-taxon tree for which this joint method estimates the internal branch length to be exactly zero, even in the limit of infinite-length sequences. More generally, we show that this joint method only estimates branch lengths correctly on a set of measure zero. We show empirically that branch length estimates are systematically biased downward, even for short branches.
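A toy two-leaf example (not the paper's four-taxon setting) shows how the two objective functions differ: classical maximum likelihood sums over the unobserved ancestral state, whereas joint maximum likelihood maximizes over it, and the resulting branch-length estimates need not agree.

```python
# Marginal vs joint likelihood for a cherry of two leaves under a symmetric
# two-state model with equal pendant branch lengths t (toy illustration only).
import numpy as np
from scipy.optimize import minimize_scalar

def p_same(t):
    # probability a state is unchanged along a branch of length t
    return 0.5 * (1.0 + np.exp(-2.0 * t))

def marginal_neglog(t, n_match, n_mismatch):
    # classical ML: sum over the hidden ancestral state (equal prior on both states)
    p = p_same(t)
    q = 1.0 - p
    lik_match = 0.5 * (p * p + q * q)
    lik_mismatch = 0.5 * (2.0 * p * q)
    return -(n_match * np.log(lik_match) + n_mismatch * np.log(lik_mismatch))

def joint_neglog(t, n_match, n_mismatch):
    # joint ML: maximize over the hidden ancestral state site by site
    p = p_same(t)
    q = 1.0 - p
    lik_match = 0.5 * max(p * p, q * q)
    lik_mismatch = 0.5 * p * q
    return -(n_match * np.log(lik_match) + n_mismatch * np.log(lik_mismatch))

n_match, n_mismatch = 90, 10
for f in (marginal_neglog, joint_neglog):
    t_hat = minimize_scalar(f, bounds=(1e-6, 5.0), args=(n_match, n_mismatch), method="bounded").x
    print(f.__name__, round(t_hat, 4))
```

With these counts the joint estimate of t comes out smaller than the marginal one, consistent in spirit with the downward bias the article reports for branch lengths.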

