A DETERMINISTIC METHODOLOGY FOR ESTIMATION OF PARAMETERS IN DYNAMIC MARKOV CHAIN MODELS

2011 ◽  
Vol 19 (01) ◽  
pp. 71-100 ◽  
Author(s):  
A. R. ORTIZ ◽  
H. T. BANKS ◽  
C. CASTILLO-CHAVEZ ◽  
G. CHOWELL ◽  
X. WANG

A method for estimating parameters in dynamic stochastic (Markov chain) models, based on Kurtz's limit theory coupled with inverse problem methods developed for deterministic dynamical systems, is proposed and illustrated in the context of disease dynamics. The methodology relies on finding the approximate large-population behavior of an appropriately scaled stochastic system. The approach leads to a deterministic approximation, obtained as the solution of rate equations (ordinary differential equations) for the large-sample-size average over sample paths or trajectories (limits of pure jump Markov processes). Using the resulting deterministic model, we select parameter subset combinations that can be estimated with an ordinary least squares (OLS) or generalized least squares (GLS) inverse problem formulation for a given data set. The selection is based on two criteria applied to the sensitivity matrix: the degree of sensitivity, measured by its condition number, and the degree of uncertainty, measured by its parameter selection score. We illustrate the ideas with a stochastic model for the transmission of vancomycin-resistant enterococcus (VRE) in hospitals and VRE surveillance data from an oncology unit.
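As a minimal sketch of this style of deterministic inverse problem, the code below fits the rate parameters of a toy two-state colonization ODE to synthetic data by OLS and reports the condition number of a finite-difference sensitivity matrix. The model, parameter names and data are illustrative assumptions, not the paper's VRE model.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def rhs(t, y, beta, gamma):
    # y[0]: uncolonized fraction, y[1]: colonized fraction (illustrative states)
    u, c = y
    return [-beta * u * c + gamma * c, beta * u * c - gamma * c]

def model(theta, t_obs, y0):
    beta, gamma = theta
    sol = solve_ivp(rhs, (t_obs[0], t_obs[-1]), y0, t_eval=t_obs,
                    args=(beta, gamma), rtol=1e-8)
    return sol.y[1]                      # "observed" output: colonized fraction

t_obs = np.linspace(0.0, 30.0, 31)
y0 = [0.95, 0.05]
rng = np.random.default_rng(0)
data = model(np.array([0.4, 0.1]), t_obs, y0) + 0.01 * rng.standard_normal(t_obs.size)

# OLS inverse problem: minimize the sum of squared residuals over the two rates.
fit = least_squares(lambda th: model(th, t_obs, y0) - data,
                    x0=[0.2, 0.2], bounds=([0.0, 0.0], [5.0, 5.0]))

# Finite-difference sensitivity matrix and its condition number, one of the
# two parameter-selection criteria described in the abstract.
eps = 1e-6
base = model(fit.x, t_obs, y0)
S = np.column_stack([(model(fit.x + eps * np.eye(2)[k], t_obs, y0) - base) / eps
                     for k in range(2)])
print("estimated rates:", fit.x)
print("sensitivity matrix condition number:", np.linalg.cond(S))
```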

2009 ◽  
Vol 2009 ◽  
pp. 1-8 ◽  
Author(s):  
Janet Myhre ◽  
Daniel R. Jeske ◽  
Michael Rennie ◽  
Yingtao Bi

A heteroscedastic linear regression model is developed from plausible assumptions that describe the time evolution of performance metrics for equipment. The inherent motivation for the associated weighted least squares analysis of the model is an essential and attractive selling point to engineers interested in equipment surveillance methodologies. A simple test for the significance of the heteroscedasticity suggested by a data set is derived, and a simulation study is used to evaluate the power of the test and compare it with several other applicable tests that were designed in different contexts. Tolerance intervals within the context of the model are derived, thus generalizing well-known tolerance intervals for ordinary least squares regression. Use of the model and its associated analyses is illustrated with an aerospace application in which hundreds of electronic components are continuously monitored by an automated system that flags components suspected of unusual degradation patterns.
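A minimal sketch of the weighted least squares idea, assuming (purely for illustration) that the noise variance of the performance metric grows linearly with time in service; the variance model and data below are invented, not the paper's.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
t = np.linspace(1.0, 50.0, 200)                                     # time in service (arbitrary units)
y = 2.0 + 0.05 * t + np.sqrt(0.1 * t) * rng.standard_normal(t.size)  # Var(error) grows with t

X = sm.add_constant(t)
ols_fit = sm.OLS(y, X).fit()
wls_fit = sm.WLS(y, X, weights=1.0 / t).fit()   # weights proportional to 1 / Var(error_i)

print("OLS intercept/slope:", ols_fit.params)
print("WLS intercept/slope:", wls_fit.params)
print("WLS confidence intervals:\n", wls_fit.conf_int())  # a starting point for interval construction
```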


2018 ◽  
Vol 22 (5) ◽  
pp. 358-371 ◽  
Author(s):  
Radoslaw Trojanek ◽  
Michal Gluszak ◽  
Justyna Tanas

In this paper, we analyse the impact of proximity to urban green areas on apartment prices in Warsaw. The data set contained 43,075 geo-coded apartment transactions for the years 2010 to 2015. The hedonic method was applied in Ordinary Least Squares (OLS), Weighted Least Squares (WLS) and Median Quantile Regression (Median QR) models. We found substantial evidence that proximity to an urban green area is positively linked with apartment prices. On average, the presence of a green area within 100 metres of an apartment increases the price of a dwelling by 2.8% to 3.1%. The effect of park or forest proximity on house prices is stronger for newer apartments than for those built before 1989: proximity to a park or a forest is particularly important (and carries a higher implicit price as a result) for buildings constructed after 1989. The impact of urban green areas was particularly high for post-transformation housing estates. Close vicinity (less than 100 m) to an urban green area increased the sale prices of apartments in new residential buildings by 8.0–8.6%, depending on the model.
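A hedged sketch of a hedonic OLS specification with a proximity dummy, in the spirit of the models described above; the variables and simulated data are illustrative assumptions, not the Warsaw transaction data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1000
area = rng.uniform(30, 120, n)            # flat size in m^2 (illustrative)
green_100m = rng.integers(0, 2, n)        # 1 if a green area lies within 100 m
log_price = 8.0 + 0.010 * area + 0.03 * green_100m + 0.10 * rng.standard_normal(n)

X = sm.add_constant(np.column_stack([area, green_100m]))
fit = sm.OLS(log_price, X).fit()
# With log price as the dependent variable, the dummy coefficient is roughly
# the percentage premium associated with proximity to a green area.
print(fit.params)
```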


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Mao-Feng Kao ◽  
Lynn Hodgkinson ◽  
Aziz Jaafar

Purpose: Using a data set of Taiwanese listed firms from 2002 to 2015, this paper aims to examine the determinants of the voluntary appointment of independent directors.

Design/methodology/approach: This study uses panel estimation to exploit both the cross-section and time-series nature of the data. Further, it uses Tobit regression and a generalized linear model (GLM) in the additional analysis, and two-stage least squares to mitigate a possible endogeneity issue.

Findings: The main findings show that Taiwanese firms with large boards tend to voluntarily appoint independent directors, and that firms which already have independent supervisors are more willing to accept additional independent directors onto the board. Furthermore, ownership concentration and institutional ownership are positively associated with the voluntary appointment of independent directors. On the contrary, firms controlled by family members are generally reluctant to voluntarily appoint independent directors.

Research limitations/implications: The findings are important for managers, shareholders, creditors and policymakers. In particular, when considering the determinants of the voluntary appointment of independent directors, the results indicate that independent supervisors, outside shareholders and institutional investors are significant factors influencing effective internal and external corporate governance mechanisms. This research focuses on the voluntary appointment of independent directors; it would be interesting to compare the effectiveness of voluntary appointments with mandatory appointments within Taiwan and in other jurisdictions.

Originality/value: This study incrementally contributes to the corporate governance literature in several ways. First, it extends earlier research by using a more comprehensive data set of non-financial Taiwanese firms and alternative methodologies to investigate the determinants of the voluntary appointment of independent directors. Second, prior studies tend to neglect the issue of using a censored and fractional dependent variable, the proportion of independent directors, which can yield biased and inconsistent parameter estimates under ordinary least squares estimation. Finally, this study addresses the relevant econometric issues by using Tobit, GLM and two-stage least squares estimation to account for a possible endogeneity concern.
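As a hedged illustration of the point about fractional dependent variables, the sketch below fits a binomial-family GLM (a fractional-response specification) to a simulated proportion of independent directors; the regressors and data are invented for illustration and are not the paper's variables.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 500
board_size = rng.integers(5, 15, n)
family_control = rng.integers(0, 2, n)
lin = -1.0 + 0.15 * board_size - 0.5 * family_control
prop_independent = np.clip(1.0 / (1.0 + np.exp(-lin))
                           + 0.05 * rng.standard_normal(n), 0.0, 1.0)

X = sm.add_constant(np.column_stack([board_size, family_control]))
# Binomial-family GLM accepts a fractional outcome in [0, 1], unlike OLS,
# which ignores the censored/fractional nature of the proportion.
glm_fit = sm.GLM(prop_independent, X, family=sm.families.Binomial()).fit()
print(glm_fit.params)
```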


2010 ◽  
Vol 62 (4) ◽  
pp. 875-882 ◽  
Author(s):  
A. Dembélé ◽  
J.-L. Bertrand-Krajewski ◽  
B. Barillon

Regression models are among the models most frequently used to estimate pollutant event mean concentrations (EMC) in wet weather discharges in urban catchments. Two main questions concerning the calibration of EMC regression models are investigated: (i) the sensitivity of models to the size and content of the data sets used for their calibration, and (ii) the change in modelling results when models are re-calibrated as data sets grow and evolve over time with newly collected experimental data. Based on an experimental data set of 64 rain events monitored in a densely urbanised catchment, four TSS EMC regression models (two log-linear and two linear models) with two or three explanatory variables have been derived and analysed. Model calibration with the iteratively re-weighted least squares method is less sensitive and leads to more robust results than the ordinary least squares method. Three calibration options have been investigated: two options accounting for the chronological order of the observations, and one option using random samples of events from the whole available data set. Results obtained with the best-performing non-linear model clearly indicate that the model is highly sensitive to the size and content of the data set used for its calibration.
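A minimal sketch contrasting OLS with iteratively re-weighted least squares (here via statsmodels' robust linear model, which uses IRLS internally) on a log-linear EMC-style regression; the explanatory variables, outliers and data are illustrative assumptions, not the monitored catchment data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 64                                          # e.g. 64 monitored rain events
rain_depth = rng.uniform(1, 40, n)
max_intensity = rng.uniform(1, 60, n)
log_emc = (3.0 + 0.4 * np.log(rain_depth) - 0.2 * np.log(max_intensity)
           + 0.3 * rng.standard_normal(n))
log_emc[:3] += 2.0                              # a few outlying events

X = sm.add_constant(np.column_stack([np.log(rain_depth), np.log(max_intensity)]))
ols_fit = sm.OLS(log_emc, X).fit()
irls_fit = sm.RLM(log_emc, X, M=sm.robust.norms.HuberT()).fit()  # IRLS under the hood

print("OLS coefficients: ", ols_fit.params)
print("IRLS coefficients:", irls_fit.params)    # less distorted by the outlying events
```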


2019 ◽  
Vol 31 (3) ◽  
pp. 257-280
Author(s):  
Zhongyu Li ◽  
Ka Ho Tsang ◽  
Hoi Ying Wong

Abstract This paper proposes a regression-based simulation algorithm for multi-period mean-variance portfolio optimization problems with constraints in a high-dimensional setting. For a high-dimensional portfolio, the least squares Monte Carlo algorithm for portfolio optimization can perform unsatisfactorily with finite sample paths owing to the estimation error from ordinary least squares (OLS) in the regression steps. Our algorithm resolves this problem and demonstrates significant improvements in numerical performance for the case of finite sample paths and high dimensionality. Specifically, we replace OLS with the least absolute shrinkage and selection operator (lasso). Our major contribution is the proof of the asymptotic convergence of the novel lasso-based simulation in a recursive regression setting. Numerical experiments suggest that our algorithm achieves good stability in both low- and higher-dimensional cases.
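A hedged sketch of replacing OLS with the lasso in a single backward regression step of a least squares Monte Carlo scheme, under a high-dimensional polynomial feature basis; the continuation value and data are toy constructions, not the paper's portfolio problem.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(5)
n_paths, n_assets = 500, 20
X_t = rng.standard_normal((n_paths, n_assets))          # simulated state at time t
value_t1 = X_t[:, 0] ** 2 - 0.5 * X_t[:, 1] + 0.1 * rng.standard_normal(n_paths)

features = PolynomialFeatures(degree=2, include_bias=False).fit_transform(X_t)
# OLS regression step: many more basis functions than informative directions,
# so the estimated conditional expectation is noisy with finite sample paths.
ols_cond_exp = LinearRegression().fit(features, value_t1)
# Lasso regression step: shrinkage and selection stabilise the estimate.
lasso_cond_exp = Lasso(alpha=0.01).fit(features, value_t1)

print(np.sum(np.abs(ols_cond_exp.coef_) > 1e-8), "nonzero OLS coefficients")
print(np.sum(np.abs(lasso_cond_exp.coef_) > 1e-8), "nonzero lasso coefficients")
```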


Geophysics ◽  
2008 ◽  
Vol 73 (5) ◽  
pp. VE13-VE23 ◽  
Author(s):  
Frank Adler ◽  
Reda Baina ◽  
Mohamed Amine Soudani ◽  
Pierre Cardon ◽  
Jean-Baptiste Richard

Velocity-model estimation with seismic reflection tomography is a nonlinear inverse problem. We present a new method for solving the nonlinear tomographic inverse problem using 3D prestack-depth-migrated reflections as the input data, i.e., our method requires that prestack depth migration (PSDM) be performed before tomography. The method is applicable to any type of seismic data acquisition that permits seismic imaging with Kirchhoff PSDM. A fundamental concept of the method is that we dissociate the possibly incorrect initial migration velocity model from the tomographic velocity model. We take the initial migration velocity model and the residual moveout in the associated PSDM common-image gathers as the reference data. This allows us to consider the migrated depth of the initial PSDM as the invariant observation for the tomographic inverse problem. We can therefore formulate the inverse problem within the general framework of inverse theory as a nonlinear least-squares data fitting between observed and modeled migrated depth. The modeled migrated depth is calculated by ray tracing in the tomographic model, followed by a finite-offset map migration in the initial migration model. The inverse problem is solved iteratively with a Gauss-Newton algorithm. We applied the method to a North Sea data set to build an anisotropic layer velocity model.
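As a hedged, generic sketch of the kind of solver the abstract describes, the code below runs a Gauss-Newton iteration for a nonlinear least-squares fit between observed and modelled data; the forward model is a toy exponential, not ray tracing followed by finite-offset map migration.

```python
import numpy as np

def forward(m, x):
    # Toy "modelled observation" with two model parameters (illustrative only).
    return m[0] * np.exp(-m[1] * x)

def jacobian(m, x):
    # Partial derivatives of the forward model with respect to the parameters.
    return np.column_stack([np.exp(-m[1] * x),
                            -m[0] * x * np.exp(-m[1] * x)])

x = np.linspace(0.0, 5.0, 50)
rng = np.random.default_rng(6)
d_obs = forward(np.array([2.0, 0.7]), x) + 0.01 * rng.standard_normal(x.size)

m = np.array([1.0, 0.2])                        # initial model
for _ in range(10):
    r = forward(m, x) - d_obs                   # residual: modelled minus observed
    J = jacobian(m, x)
    dm = np.linalg.lstsq(J, -r, rcond=None)[0]  # Gauss-Newton model update
    m = m + dm
print("estimated model parameters:", m)
```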


2021 ◽  
Vol 2090 (1) ◽  
pp. 012099
Author(s):  
Elena Rodríguez-Rojo ◽  
Javier Cubas ◽  
Santiago Pindado

Abstract In the present work, a method for magnetometer calibration through least squares fitting is presented. The method has been applied to the magnetometer data sets obtained during the integration tests of the Attitude Determination and Control Subsystem (ADCS) of UPMSat-2. The UPMSat-2 mission is a 50-kg satellite designed and manufactured by the Technical University of Madrid (Universidad Politécnica de Madrid) and launched in September 2020. The satellite carries three fluxgate magnetometers (one of them experimental) whose calibration is critical for obtaining correct measurements for use by the ADCS. Among the several mathematical methods suitable for obtaining the calibration parameters, an ordinary least squares fitting algorithm is selected as the first step of the calibration process. The estimated surface is an ellipsoid: the surface traced by the magnetometer's measurements of the Earth's magnetic field at a fixed point in space. The calibration parameters of the magnetometers are related to the coefficients of the estimated ellipsoid.
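A minimal sketch of the first step described above: an algebraic ordinary least squares fit of the general quadric Ax^2 + By^2 + Cz^2 + Dxy + Exz + Fyz + Gx + Hy + Iz = 1 to raw magnetometer samples. Extracting bias and scale factors from the fitted coefficients is a further step; the synthetic distortion below is an illustrative assumption, not UPMSat-2 data.

```python
import numpy as np

def fit_ellipsoid(samples):
    """Least-squares coefficients of the general quadric through the samples."""
    x, y, z = samples[:, 0], samples[:, 1], samples[:, 2]
    D = np.column_stack([x * x, y * y, z * z, x * y, x * z, y * z, x, y, z])
    coeffs, *_ = np.linalg.lstsq(D, np.ones(len(samples)), rcond=None)
    return coeffs

# Synthetic, biased and scaled "measurements" of a constant-magnitude field.
rng = np.random.default_rng(7)
u = rng.standard_normal((2000, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)                       # unit-sphere directions
raw = u * np.array([48.0, 45.0, 50.0]) + np.array([3.0, -2.0, 1.5])  # illustrative scale/bias
raw += 0.2 * rng.standard_normal(raw.shape)

print(fit_ellipsoid(raw))
```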


2011 ◽  
Vol 7 (4) ◽  
pp. 36
Author(s):  
MaryAnne Atkinson ◽  
Scott Jones

This paper reports the results of an experiment in which individuals visually fitted a cost function to data. The inclusion or omission of unusual data points within the data set was experimentally manipulated. The results indicate that individuals omit outliers from their visual fits but do not omit influential points. Evidence also suggests that the weighting rule used by individuals is more robust than the weighting rule used in the ordinary least squares criterion.


Least squares minimization is by nature global and, hence, vulnerable to distortion by outliers. We present a novel technique to reject outliers from an m-dimensional data set when the underlying model is a hyperplane (a line in two dimensions, a plane in three dimensions). The technique has a sound statistical basis and assumes that Gaussian noise corrupts the otherwise valid data. The majority of alternative techniques available in the literature focus on ordinary least squares, where a single variable is designated to be dependent on all others, a model that is often unsuitable in practice. The method presented here operates in the more general framework of orthogonal regression and uses a new regression diagnostic based on eigendecomposition. It subsumes the traditional residuals scheme and, using matrix perturbation theory, provides an error model for the solution once the contaminants have been removed.
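A minimal sketch of the orthogonal-regression framework the passage contrasts with ordinary least squares: the hyperplane minimizing orthogonal distances is recovered from the eigendecomposition of the data covariance. The outlier-rejection diagnostic itself is not shown; the data are synthetic.

```python
import numpy as np

def fit_hyperplane(points):
    """Return (unit normal n, offset d) of the hyperplane n.x = d that
    minimizes the sum of squared orthogonal distances (total least squares)."""
    centroid = points.mean(axis=0)
    cov = np.cov((points - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    normal = eigvecs[:, 0]               # eigenvector of the smallest eigenvalue
    return normal, normal @ centroid

rng = np.random.default_rng(8)
# Noisy samples of the plane x + 2y - z = 1 in three dimensions.
xy = rng.uniform(-5, 5, (500, 2))
z = xy[:, 0] + 2 * xy[:, 1] - 1 + 0.05 * rng.standard_normal(500)
pts = np.column_stack([xy, z])

n, d = fit_hyperplane(pts)
print(n, d)   # proportional to (1, 2, -1) and 1, up to sign and scale
```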


BMJ Open ◽  
2020 ◽  
Vol 10 (3) ◽  
pp. e033483
Author(s):  
Gwyn Bevan ◽  
Chiara De Poli ◽  
Mi Jun Keng ◽  
Rosalind Raine

Objectives: To examine the validity of prevalence-based models giving projections of the prevalence of diabetes in adults in England and the UK, and of Markov chain models giving estimates of the economic impacts of interventions to prevent type 2 diabetes (T2D).

Methods: Rapid reviews of both types of models. Estimation of the future prevalence of T2D in England by Markov chain models, and from the trend in the prevalence of diabetes as reported in the Quality and Outcomes Framework (QOF), estimated by ordinary least squares regression analysis.

Setting: Adult population in England and the UK.

Main outcome measure: Prevalence of T2D in England and the UK in 2025.

Results: The prevalence-based models reviewed use sample estimates of past prevalence rates by age and sex and projected population changes. The three most recent models, including that of Public Health England (PHE), neither take account of increases in obesity nor report confidence intervals (CIs). The Markov chain models reviewed use transition probabilities between states of risk and death, estimated from various sources. None of their accounts gives the full matrix of transition probabilities, and only a minority report tests of validation. Their primary focus is on estimating the ratio of costs to benefits of preventive interventions in those with hyperglycaemia; only one reported estimates of those developing T2D in the absence of a preventive intervention in the general population. Projections of the prevalence of T2D in England in 2025 were (in millions): 3.95 by PHE; 4.91 from the QOF trend; and 5.64 and 9.07 by the two Markov chain models identified in our review.

Conclusions: To inform national policies on preventing T2D, governments need validated models, designed to use available data, which estimate the scale of incidence of T2D and survival in the general population, with and without preventive interventions.
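A hedged sketch of the discrete-time Markov chain machinery the reviewed models rely on, projecting the population share in each state forward year by year; the states, transition probabilities and starting distribution are invented for illustration, since the reviewed models do not publish their full matrices.

```python
import numpy as np

states = ["normoglycaemia", "hyperglycaemia", "T2D", "dead"]
P = np.array([              # annual transition probabilities (illustrative only)
    [0.95, 0.04, 0.005, 0.005],
    [0.05, 0.88, 0.060, 0.010],
    [0.00, 0.00, 0.980, 0.020],
    [0.00, 0.00, 0.000, 1.000],
])
assert np.allclose(P.sum(axis=1), 1.0)   # each row is a probability distribution

pop = np.array([0.75, 0.18, 0.07, 0.00])  # starting distribution (illustrative)
for year in range(2015, 2026):
    pop = pop @ P                         # one-year update of the state distribution
print("projected T2D share in 2025:", round(pop[2], 3))
```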

