Likelihood and Its Applications

2021 ◽  
pp. 125-148
Author(s):  
Timothy E. Essington

The chapter “Likelihood and Its Applications” introduces the likelihood concept and the concept of maximum likelihood estimation of model parameters. Likelihood is the link between data and models. It is used to estimate model parameters, judge the precision of parameter estimates, and weigh support for alternative models. Likelihood is therefore a crucial concept underlying the ability to test multiple models. The chapter contains several worked examples that progress the reader through increasingly complex problems, ending at likelihood profiles for models with multiple parameters. Importantly, it illustrates how one can take any dynamic model and data and use likelihood to link the data (random variables) to a probability function that depends on the dynamic model.
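The core idea the chapter describes, choosing the parameter value that maximizes the probability of the observed data, can be sketched in a few lines. This is an illustrative example, not the chapter's own code: a Poisson rate is estimated by numerically minimizing the negative log-likelihood, and the result is checked against the known analytic MLE (the sample mean).

```python
# Hypothetical MLE sketch: fit a Poisson rate by minimizing the
# negative log-likelihood; for this model the MLE equals the sample mean.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

rng = np.random.default_rng(1)
counts = rng.poisson(lam=3.2, size=200)  # simulated count data

def nll(lam):
    # negative log-likelihood of the Poisson model at rate lam
    return -poisson.logpmf(counts, lam).sum()

res = minimize_scalar(nll, bounds=(0.01, 20), method="bounded")
print(res.x, counts.mean())  # numerical MLE vs. analytic MLE
```

The agreement between the numerical optimum and the sample mean is the kind of cross-check the chapter's worked examples encourage.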

2017 ◽  
Vol 12 (02) ◽  
pp. 1750010 ◽  
Author(s):  
K. FERGUSSON

A discounted equity index is computed as the ratio of an equity index to the accumulated savings account denominated in the same currency. In this way, discounting provides a natural way of separating the modeling of the short rate from the market price of risk component of the equity index. In this vein, we investigate the applicability of maximum likelihood estimation to stochastic models of a discounted equity index, providing explicit formulae for parameter estimates. We restrict our consideration to two important index models, namely the Black–Scholes model and the minimal market model of Platen, each having an explicit formula for the transition density function. Explicit formulae for estimates of the model parameters and their standard errors are derived and are used in fitting the two models to US data. Further, we demonstrate the effect of the model choice on the no-arbitrage assumption employed in risk neutral pricing.
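The Black–Scholes case admits the kind of explicit formulae the abstract refers to. As a hedged sketch (standard GBM moment formulas, not the paper's own estimators for the discounted index): under geometric Brownian motion the log-returns are i.i.d. normal, so the ML estimates of drift and volatility are simple sample moments of the returns.

```python
# Illustrative closed-form MLE for geometric Brownian motion
# (Black–Scholes dynamics): d log S = (mu - sigma^2/2) dt + sigma dW.
import numpy as np

rng = np.random.default_rng(7)
dt, mu, sigma = 1 / 252, 0.08, 0.2   # daily step, true drift and volatility
n = 5000
# simulated daily log-returns under GBM
r = rng.normal((mu - 0.5 * sigma**2) * dt, sigma * np.sqrt(dt), size=n)

sigma2_hat = r.var() / dt                  # MLE of sigma^2 (1/n variance)
mu_hat = r.mean() / dt + 0.5 * sigma2_hat  # MLE of the drift mu
print(mu_hat, np.sqrt(sigma2_hat))
```

Note the familiar asymmetry: the volatility estimate tightens quickly with sample size, while the drift estimate remains noisy over short horizons.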


2021 ◽  
pp. 263-280
Author(s):  
Timothy E. Essington

The chapter “Skills for Fitting Models to Data” provides worked examples of the model parameter estimation and model-selection examples presented in Part 2, both in spreadsheets and in R. This chapter presumes that the reader is reasonably comfortable setting up spreadsheets and R code and applying the modeling skills presented in Chapter 15. It begins with maximum likelihood estimation, presenting first a direct method and then numerical methods that usually yield more precise maximum likelihood parameter estimates. It then examines how to estimate parameters that do not appear in probability functions (e.g. a model in which survivorship rate is density dependent). The chapter concludes by discussing likelihood profiles.
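A likelihood profile of the kind the chapter concludes with can be sketched as follows (an assumed example in Python rather than the chapter's spreadsheets or R): the normal mean is profiled by maximizing the log-likelihood over the nuisance parameter sigma at each fixed value of the mean.

```python
# Minimal likelihood-profile sketch for the mean of normal data.
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(10.0, 2.0, size=100)
n = len(x)

def profile_loglik(mu):
    # at fixed mu, the maximizing sigma^2 is the mean squared deviation
    s2 = np.mean((x - mu) ** 2)
    return -0.5 * n * (np.log(2 * np.pi * s2) + 1)

grid = np.linspace(8, 12, 401)
ll = np.array([profile_loglik(m) for m in grid])
mle = grid[ll.argmax()]
# approximate 95% profile interval: within 1.92 log-likelihood units of the max
ci = grid[ll >= ll.max() - 1.92]
print(mle, ci.min(), ci.max())
```

The 1.92 cutoff is half the 95% chi-square critical value with one degree of freedom, the usual basis for profile-likelihood intervals.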


Author(s):  
Rafegh Aghamohammadi ◽  
Jorge Laval

This paper extends the Stochastic Method of Cuts (SMoC) to approximate the Macroscopic Fundamental Diagram (MFD) of urban networks and uses the Maximum Likelihood Estimation (MLE) method to estimate the model parameters based on empirical data from a corridor and 30 cities around the world. For the corridor case, the estimated values are in good agreement with the measured values of the parameters. For the network datasets, the results indicate that the method yields satisfactory parameter estimates and graphical fits for roughly 50% of the studied networks, where estimates fall within the expected range of the parameter values. The satisfactory estimates are mostly for the datasets that (i) cover a relatively wide range of densities and (ii) have average flow values at different densities that are approximately normally distributed, similar to the probability density function of the SMoC. The estimated parameter values are compared to the real or expected values, and any discrepancies and their potential causes are discussed in depth to identify the challenges in MFD estimation both analytically and empirically. In particular, we find that the most important issues needing further investigation are: (i) the distribution of loop detectors within the links, (ii) the distribution of loop detectors across the network, and (iii) the treatment of unsignalized intersections and their impact on the block length.
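The fitting strategy, Gaussian MLE of a parametric flow-density curve, can be illustrated with a stand-in functional form. This is a hedged sketch only: the SMoC likelihood is more involved, and the Greenshields curve q(k) = vf·k·(1 − k/kj) used below (vf free-flow speed, kj jam density, both illustrative names) merely plays the role of the MFD shape.

```python
# Hedged illustration: Gaussian MLE of a simple flow-density curve
# q(k) = vf * k * (1 - k/kj), a stand-in for the SMoC-based MFD.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
vf_true, kj_true = 50.0, 150.0
k = rng.uniform(5, 140, size=300)                            # observed densities
q = vf_true * k * (1 - k / kj_true) + rng.normal(0, 100, size=len(k))

def nll(theta):
    vf, kj, sigma = theta
    mu = vf * k * (1 - k / kj)
    # Gaussian negative log-likelihood of the observed flows
    return 0.5 * np.sum(np.log(2 * np.pi * sigma**2) + (q - mu) ** 2 / sigma**2)

res = minimize(nll, x0=[30.0, 100.0, 50.0], method="Nelder-Mead",
               options={"maxiter": 5000})
vf_hat, kj_hat, sigma_hat = res.x
print(vf_hat, kj_hat)
```

With normally distributed flows at each density, as in condition (ii) of the abstract, this Gaussian likelihood is the natural choice and reduces to nonlinear least squares for the mean curve.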


Behaviour ◽  
2007 ◽  
Vol 144 (11) ◽  
pp. 1315-1332 ◽  
Author(s):  
Sebastián Luque ◽  
Christophe Guinet

Abstract. Foraging behaviour frequently occurs in bouts, and considerable efforts to properly define those bouts have been made because they partly reflect different scales of environmental variation. Methods traditionally used to identify such bouts are diverse, include some level of subjectivity, and their accuracy and precision are rarely compared. Therefore, the applicability of a maximum likelihood estimation method (MLM) for identifying dive bouts was investigated and compared with a recently proposed sequential differences analysis (SDA). Using real data on interdive durations from Antarctic fur seals (Arctocephalus gazella Peters, 1875), the MLM-based model produced a briefer bout-ending criterion (BEC) and more precise parameter estimates than the SDA approach. The MLM-based model was also in better agreement with real data, as it predicted the cumulative frequency of differences in interdive duration more accurately. Using both methods on simulated data showed that the MLM-based approach produced less biased estimates of the given model parameters than the SDA approach. Different choices of histogram bin widths involved in SDA had a systematic effect on the estimated BEC, such that larger bin widths resulted in longer BECs. These results suggest that using the MLM-based procedure with the sequential differences in interdive durations, and possibly other dive characteristics, may be an accurate, precise, and objective tool for identifying dive bouts.
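The usual likelihood formulation of bout identification treats intervals as a mixture of a fast (within-bout) and a slow (between-bout) exponential process. The sketch below is an assumed, generic version of that idea, not the authors' exact model: the mixture is fitted by MLE and the BEC is taken as the interval length at which the two process densities are equal.

```python
# Sketch: MLE of a two-process exponential mixture for interdive
# intervals, with the bout-ending criterion (BEC) at the crossover
# of the fast- and slow-process densities. Illustrative parameters.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
# simulated intervals: short within-bout gaps plus long between-bout gaps
x = np.concatenate([rng.exponential(1 / 2.0, 800),    # fast rate 2.0
                    rng.exponential(1 / 0.05, 200)])  # slow rate 0.05

def nll(theta):
    p, lf, ls = theta                       # mixing weight, fast rate, slow rate
    if not (0 < p < 1 and lf > ls > 0):
        return np.inf                       # keep the optimizer in a valid region
    dens = p * lf * np.exp(-lf * x) + (1 - p) * ls * np.exp(-ls * x)
    return -np.sum(np.log(dens))

res = minimize(nll, x0=[0.5, 1.0, 0.1], method="Nelder-Mead",
               options={"maxiter": 5000})
p, lf, ls = res.x
# BEC: solve p*lf*exp(-lf*t) = (1-p)*ls*exp(-ls*t) for t
bec = np.log(p * lf / ((1 - p) * ls)) / (lf - ls)
print(p, lf, ls, bec)
```

Because the BEC comes from the fitted likelihood rather than from a histogram, it avoids the bin-width sensitivity the abstract reports for SDA.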


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Aisha Fayomi ◽  
Ali Algarni ◽  
Abdullah M. Almarashi

This paper introduces a new family of distributions by combining the sine produced family and the inverse Lomax generated family. The newly proposed family is more flexible than several older and current families. It yields many new models with applications in physics, engineering, and medicine. Some fundamental statistical properties of the sine inverse Lomax generated family of distributions, such as moments, the generating function, and the quantile function, are calculated. Four special models, namely the sine inverse Lomax-exponential, sine inverse Lomax-Rayleigh, sine inverse Lomax-Fréchet, and sine inverse Lomax-Lomax models, are proposed. Maximum likelihood estimation of the model parameters is also developed. For the purpose of evaluating the performance of the maximum likelihood estimates, a simulation study is conducted. Two real-life datasets are analyzed with the sine inverse Lomax-Lomax model, and we show that it provides more flexibility and a better fit than nine known models derived from other generated families.


Methodology ◽  
2005 ◽  
Vol 1 (2) ◽  
pp. 81-85 ◽  
Author(s):  
Stefan C. Schmukle ◽  
Jochen Hardt

Abstract. Incremental fit indices (IFIs) are regularly used when assessing the fit of structural equation models. IFIs are based on the comparison of the fit of a target model with that of a null model. For maximum-likelihood estimation, IFIs are usually computed by using the χ2 statistics of the maximum-likelihood fitting function (ML-χ2). However, LISREL recently changed the computation of IFIs. Since version 8.52, IFIs reported by LISREL are based on the χ2 statistics of the reweighted least squares fitting function (RLS-χ2). Although both functions lead to the same maximum-likelihood parameter estimates, the two χ2 statistics reach different values. Because these differences are especially large for null models, IFIs are affected in particular. Consequently, RLS-χ2-based IFIs, in combination with conventional cut-off values established for ML-χ2-based IFIs, may lead to the erroneous acceptance of models. We demonstrate this point with a confirmatory factor analysis in a sample of 2449 subjects.
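The sensitivity of an IFI to the null-model χ2 is easy to see numerically. The figures below are illustrative, not taken from the article: the Comparative Fit Index (CFI), one common IFI, is computed twice for the same target model, once with a moderate null-model χ2 and once with a much larger (RLS-like) one.

```python
# Illustrative CFI computation: the index depends strongly on the
# null-model chi-square, so swapping the fitting function behind the
# null model shifts the reported fit. Numbers are hypothetical.
def cfi(chi2_t, df_t, chi2_0, df_0):
    # CFI = 1 - max(chi2_t - df_t, 0) / max(chi2_0 - df_0, chi2_t - df_t, 0)
    num = max(chi2_t - df_t, 0.0)
    den = max(chi2_0 - df_0, chi2_t - df_t, 0.0)
    return 1.0 - num / den

# same target model (chi2 = 120, df = 50), two null-model chi-squares
print(cfi(120.0, 50.0, 900.0, 66.0))   # ML-based null model
print(cfi(120.0, 50.0, 2500.0, 66.0))  # inflated (RLS-like) null model
```

An inflated null-model χ2 pushes the index toward 1, so a conventional cut-off such as 0.95 can be passed by a target model that the ML-based index would flag.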


2013 ◽  
Vol 2013 ◽  
pp. 1-13 ◽  
Author(s):  
Helena Mouriño ◽  
Maria Isabel Barão

Missing-data problems are extremely common in practice. To achieve reliable inferential results, we need to take this feature of the data into account. Suppose that the univariate data set under analysis has missing observations. This paper examines the impact of selecting an auxiliary complete data set—whose underlying stochastic process is to some extent interdependent with the former—to improve the efficiency of the estimators for the relevant parameters of the model. The Vector AutoRegressive (VAR) Model has proven to be an extremely useful tool in capturing the dynamics of bivariate time series. We propose maximum likelihood estimators for the parameters of the VAR(1) Model based on a monotone missing-data pattern. The estimators’ precision is also derived. Afterwards, we compare the bivariate modelling scheme with its univariate counterpart. More precisely, the univariate data set with missing observations will be modelled by an AutoRegressive Moving Average (ARMA(2,1)) Model. We will also analyse the behaviour of the AutoRegressive Model of order one, AR(1), due to its practical importance. We focus on the mean value of the main stochastic process. By simulation studies, we conclude that the estimator based on the VAR(1) Model is preferable to those derived from the univariate context.
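As background for the VAR(1) likelihood machinery, the complete-data case is worth sketching: conditional ML estimation of a VAR(1) coefficient matrix reduces to a multivariate regression of each observation on its predecessor. This is a simplified, illustrative sketch, not the paper's monotone missing-data estimator, and the names below are assumptions.

```python
# Simplified sketch: conditional MLE of a bivariate VAR(1) with
# COMPLETE data, y_t = A y_{t-1} + e_t, via least squares.
import numpy as np

rng = np.random.default_rng(11)
A = np.array([[0.5, 0.2],
              [0.1, 0.4]])          # true (stationary) coefficient matrix
T = 20000
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A @ y[t - 1] + rng.normal(0, 1, 2)

# regress y_t on y_{t-1}: solve (X'X) B = X'Y, with A_hat = B'
X, Y = y[:-1], y[1:]
A_hat = np.linalg.solve(X.T @ X, X.T @ Y).T
print(A_hat)
```

The monotone missing-data estimators in the paper extend this likelihood by factorizing it over the observed blocks, which is where the efficiency gain over the univariate ARMA and AR alternatives comes from.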

