Beta regression model nonlinear in the parameters with additive measurement errors in variables

PLoS ONE ◽  
2021 ◽  
Vol 16 (7) ◽  
pp. e0254103
Author(s):  
Daniele de Brito Trindade ◽  
Patrícia Leone Espinheira ◽  
Klaus Leite Pinto Vasconcellos ◽  
Jalmar Manuel Farfán Carrasco ◽  
Maria do Carmo Soares de Lima

We propose in this paper a general class of nonlinear beta regression models with measurement errors. The motivation for proposing this model arose from a real problem we shall discuss here. The application concerns a usual oil refinery process where the main covariate is the concentration of a reagent that is typically measured with error and the response is the percentage of crystallinity of a catalyst involved in the process. Such data have been modeled by nonlinear beta and simplex regression models. Here we propose a nonlinear beta model that allows the chemical reagent concentration to be measured with error. The model parameters are estimated by different methods. We perform Monte Carlo simulations to evaluate the performance of point and interval estimators of the model parameters. Both the simulation results and the application favor estimation by maximum pseudo-likelihood approximation.
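The attenuation that additive measurement error in a covariate induces, and the classical correction for it, can be illustrated with a hedged toy example in plain Python; this is a linear sketch, not the authors' beta regression model, and all values are made up:

```python
import random

random.seed(42)

def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

n, beta, sigma_u = 5000, 1.5, 0.8            # sigma_u: measurement-error std dev
x_true = [random.gauss(0.0, 1.0) for _ in range(n)]
y = [beta * x + random.gauss(0.0, 0.3) for x in x_true]
x_obs = [x + random.gauss(0.0, sigma_u) for x in x_true]   # error-prone covariate

naive = ols_slope(x_obs, y)                  # attenuated toward zero
reliability = 1.0 / (1.0 + sigma_u ** 2)     # Var(x_true) / Var(x_obs), Var(x_true) = 1
corrected = naive / reliability              # classical attenuation correction
```

With the true slope at 1.5, the naive fit on the error-prone covariate is biased toward zero, and dividing by the reliability ratio recovers it; models such as the one proposed here handle this within the likelihood instead.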

2017 ◽  
Vol 47 (1) ◽  
pp. 229-248 ◽  
Author(s):  
Eveliny Barroso Da Silva ◽  
Carlos Alberto Ribeiro Diniz ◽  
Jalmar Manuel Farfan Carrasco ◽  
Mário De Castro

2017 ◽  
Vol 22 (2) ◽  
Author(s):  
Terence Tai-Leung Chong ◽  
Haiqiang Chen ◽  
Tsz-Nga Wong ◽  
Isabel Kit-Ming Yan

Abstract An important assumption underlying standard threshold regression models and their variants in the extant literature is that the threshold variable is perfectly measured. Such an assumption is crucial for consistent estimation of model parameters. This paper provides the first theoretical framework for the estimation and inference of threshold regression models with measurement errors. A new estimation method that reduces the bias of the coefficient estimates and a Hausman-type test to detect the presence of measurement errors are proposed. Monte Carlo evidence is provided and an empirical application is given.
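The least-squares threshold search that such models build on can be sketched as follows; this minimal illustration assumes a perfectly measured threshold variable and intercept-only regimes, and is not the paper's bias-reduced estimator:

```python
import random

random.seed(0)

n, tau_true = 400, 0.5
q = [random.uniform(0.0, 1.0) for _ in range(n)]        # threshold variable
# regime mean jumps from 1 to 3 when q crosses the true threshold
y = [(1.0 if qi <= tau_true else 3.0) + random.gauss(0.0, 0.2) for qi in q]

def ssr_at(tau):
    """Sum of squared residuals when splitting the sample at tau."""
    ssr = 0.0
    for grp in ([yi for qi, yi in zip(q, y) if qi <= tau],
                [yi for qi, yi in zip(q, y) if qi > tau]):
        if grp:
            m = sum(grp) / len(grp)
            ssr += sum((v - m) ** 2 for v in grp)
    return ssr

# profile the SSR over a grid of candidate thresholds
grid = [i / 100 for i in range(5, 96)]
tau_hat = min(grid, key=ssr_at)
```

When the threshold variable itself is observed with error, this profiled estimate becomes biased, which is the situation the paper's estimation method and Hausman-type test address.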


2014 ◽  
Vol 41 (7) ◽  
pp. 1530-1547 ◽  
Author(s):  
Jalmar M.F. Carrasco ◽  
Silvia L.P. Ferrari ◽  
Reinaldo B. Arellano-Valle

2003 ◽  
Vol 5 (3) ◽  
pp. 363 ◽  
Author(s):  
Slamet Sugiri

The main objective of this study is to examine the hypothesis that the predictive content of normal income disaggregated into operating income and nonoperating income outperforms that of aggregated normal income in predicting future cash flow. To test the hypothesis, linear regression models are developed. The model parameters are estimated based on fifty-five manufacturing firms listed on the Jakarta Stock Exchange (JSX) up to the end of 1997. This study finds that the empirical evidence supports the hypothesis. This evidence supports the argument that, in reporting income from continuing operations, the multiple-step approach is preferable to the single-step one.


Author(s):  
Geir Evensen

Abstract It is common to formulate the history-matching problem using Bayes’ theorem. From Bayes’ theorem, the conditional probability density function (pdf) of the uncertain model parameters is proportional to the prior pdf of the model parameters, multiplied by the likelihood of the measurements. The static model parameters are random variables characterizing the reservoir model, while the observations include, e.g., historical rates of oil, gas, and water produced from the wells. The reservoir prediction model is assumed perfect, and there are no errors besides those in the static parameters. However, this formulation is flawed. The historical rate data only approximately represent the real production of the reservoir and contain errors. History-matching methods usually take these errors into account in the conditioning but neglect them when forcing the simulation model by the observed rates during the historical integration. Thus, the model prediction depends on some of the same data used in the conditioning. The paper presents a formulation of Bayes’ theorem that considers the data dependency of the simulation model. In the new formulation, one must update both the poorly known model parameters and the rate-data errors. The result is an improved posterior ensemble of prediction models that better cover the observations with more substantial and realistic uncertainty. The implementation accounts correctly for correlated measurement errors and demonstrates the critical role of these correlations in reducing the update’s magnitude. The paper also shows the consistency of the subspace inversion scheme by Evensen (Ocean Dyn. 54, 539–560 2004) in the case with correlated measurement errors and demonstrates its accuracy when using a “larger” ensemble of perturbations to represent the measurement error covariance matrix.
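The standard perturbed-observation ensemble update that this formulation refines can be sketched for a single scalar parameter and one rate observation; this toy example deliberately omits the paper's key contribution, the treatment of rate-data errors in the model forcing, and all numbers are invented:

```python
import random

random.seed(3)

N = 2000
# prior ensemble for an uncertain static model parameter
theta = [random.gauss(1.0, 0.5) for _ in range(N)]
# forward model: predicted production rate is 2 * theta
y = [2.0 * t for t in theta]
d, sd = 2.6, 0.3                              # observed rate and its error std dev

m_t, m_y = sum(theta) / N, sum(y) / N
c_ty = sum((t - m_t) * (v - m_y) for t, v in zip(theta, y)) / (N - 1)
c_yy = sum((v - m_y) ** 2 for v in y) / (N - 1)
K = c_ty / (c_yy + sd ** 2)                   # Kalman-type gain

# perturbed-observation ensemble update toward the measurement
theta_post = [t + K * (d + random.gauss(0.0, sd) - v) for t, v in zip(theta, y)]
post_mean = sum(theta_post) / N
post_std = (sum((t - post_mean) ** 2 for t in theta_post) / (N - 1)) ** 0.5
```

The posterior ensemble shifts toward the observation and tightens; the paper's point is that when the same rates also drive the simulation, the rate errors themselves must be part of the update.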


Author(s):  
Mohammad-Reza Ashory ◽  
Farhad Talebi ◽  
Heydar R Ghadikolaei ◽  
Morad Karimpour

This study investigated the vibrational behaviour of a rotating two-blade propeller at different rotational speeds by using self-tracking laser Doppler vibrometry. Given that a self-tracking method necessitates the accurate adjustment of test setups to reduce measurement errors, a test table with sufficient rigidity was designed and built to enable the adjustment and repair of test components. The results of the self-tracking test on the rotating propeller indicated an increase in natural frequency and a decrease in the amplitude of normalized mode shapes as rotational speed increased. To assess the test results, a numerical model created in ABAQUS was used. The model parameters were tuned in such a way that the natural frequency and associated mode shapes were in good agreement with those derived using a hammer test on a stationary propeller. The mode shapes obtained from the hammer test and the numerical (ABAQUS) modelling were compared using the modal assurance criterion. The examination indicated a strong resemblance between the hammer test results and the numerical findings. Hence, the model can be employed to determine the other mechanical properties of two-blade propellers in test scenarios.


2018 ◽  
Vol 620 ◽  
pp. A168 ◽  
Author(s):  
G. Valle ◽  
M. Dell’Omodarme ◽  
P. G. Prada Moroni ◽  
S. Degl’Innocenti

Aims. We aim to perform a theoretical investigation of the direct impact of measurement errors in the observational constraints on the recovered age for stars in the main sequence (MS) and red giant branch (RGB) phases. We assumed that a mix of classical (effective temperature Teff and metallicity [Fe/H]) and asteroseismic (Δν and νmax) constraints was available for the objects. Methods. Artificial stars were sampled from a reference isochrone and subjected to random Gaussian perturbations in their observational constraints to simulate observational errors. The ages of these synthetic objects were then recovered by means of a Markov chain Monte Carlo approach over a grid of pre-computed stellar models. To account for observational uncertainties, the grid covers different values of the initial helium abundance and the mixing-length parameter, which act as nuisance parameters in the age estimation. Results. The differences between the recovered and true ages were modelled against the errors in the observables, by means of linear models and projection pursuit regression models. The first class of statistical models provides an easily generalizable result, whose robustness is checked with the second method. From the linear models we find that no single age error source dominates in all the evolutionary phases. Assuming typical observational uncertainties, for the MS the most important error source in the reconstructed age is the effective temperature of the star. An offset of 75 K accounts for an underestimation of the stellar age of 0.4 to 0.6 Gyr for the initial and terminal MS. An error of 2.5% in νmax was the second most important source of uncertainty, accounting for about −0.3 Gyr. The 0.1 dex error in [Fe/H] proved particularly important only at the end of the MS, producing an age error of −0.4 Gyr.
For the RGB phase the dominant source of uncertainty is νmax, causing an underestimation of about 0.6 Gyr; the offsets in the effective temperature and Δν caused an underestimation and an overestimation of 0.3 Gyr, respectively. We find that the inference from the linear models is a good proxy for that from the projection pursuit regression models. Therefore, inference from linear models can be safely used thanks to its broader generalizability. Finally, we explored the impact on age estimates of adding the luminosity to the previously discussed observational constraints. For this purpose, we assumed, for computational reasons, a 2.5% error in luminosity, much lower than the average error in the Gaia DR2 catalogue. However, even in this optimistic case, the addition of the luminosity does not increase the precision of the age estimates. Moreover, the luminosity proved a major contributor to the variability in the estimated ages, accounting for an error of about −0.3 Gyr in the explored evolutionary phases.


Sensors ◽  
2021 ◽  
Vol 22 (1) ◽  
pp. 130
Author(s):  
Omar Rodríguez-Abreo ◽  
Juvenal Rodríguez-Reséndiz ◽  
L. A. Montoya-Santiyanes ◽  
José Manuel Álvarez-Alvarado

Machinery condition monitoring and failure analysis is an engineering problem that deserves close attention. Excessive vibration in a rotating system can damage the system and cannot be ignored. One way to anticipate vibrations in a system is to model them. The accuracy of the model depends mainly on the type of model and the quality of the fit attained. Non-linear model parameters can be complex to fit, so artificial intelligence is an option for performing this tuning. Within evolutionary computation there are many optimization and tuning algorithms, the best known being genetic algorithms, but they involve many algorithm-specific parameters. Algorithms such as the gray wolf optimizer (GWO) are therefore alternatives for this tuning. The GWO algorithm has so far been implemented in only a small number of mechanical applications. It was therefore used here to fit non-linear regression models relating vibration amplitude in the radial direction to the rotational frequency of a gas microturbine, without considering temperature effects. RMSE and R2 were used as evaluation criteria. The results showed good agreement in the statistical analysis. The 2nd- and 4th-order models, and the Gaussian and sinusoidal models, improved the fit. All the models evaluated predicted the data with a high coefficient of determination (85–93%); the RMSE was between 0.19 and 0.22 for the worst proposed model. The proposed methodology can be used to optimize the estimated models with statistical tools.


2008 ◽  
Vol 5 (3) ◽  
pp. 1641-1675 ◽  
Author(s):  
A. Bárdossy ◽  
S. K. Singh

Abstract. The estimation of hydrological model parameters is a challenging task. With the increasing capacity of computational power, several complex optimization algorithms have emerged, but none of them yields a unique, single best parameter vector. The parameters of hydrological models depend upon the input data, whose quality cannot be assured, as there may be measurement errors in both the input and the state variables. In this study a methodology was developed to find a set of robust parameter vectors for a hydrological model. To see the effect of observational error on the parameters, stochastically generated synthetic measurement errors were applied to observed discharge and temperature data. With this modified data the model was calibrated and the effect of measurement errors on the parameters was analysed. It was found that measurement errors have a significant effect on the best performing parameter vector: the erroneous data led to very different optimal parameter vectors. To overcome this problem and find a set of robust parameter vectors, a geometrical approach based on the half-space depth was used. The depth of a set of N randomly generated parameter vectors was calculated with respect to the set with the best model performance (the Nash–Sutcliffe efficiency was used in this study). Based on the depth of the parameter vectors, one can find a set of robust parameter vectors. The results show that parameters chosen according to this criterion have low sensitivity and perform well when transferred to a different time period. The method is demonstrated on the upper Neckar catchment in Germany, using the conceptual HBV model.
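The half-space (Tukey) depth underlying this selection can be approximated by projecting the parameter cloud onto a set of directions; the sketch below uses a synthetic 2-D cloud as a stand-in, not the study's HBV parameter space:

```python
import math
import random

random.seed(7)

# synthetic cloud of well-performing 2-D parameter vectors (stand-in data)
params = [(random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)) for _ in range(150)]

def halfspace_depth(p, cloud, n_dirs=60):
    """Approximate half-space (Tukey) depth of p relative to cloud: the
    smallest fraction of points on one side of any line through p,
    minimized over a grid of projection directions."""
    depth = len(cloud)
    for k in range(n_dirs):
        theta = math.pi * k / n_dirs
        u = (math.cos(theta), math.sin(theta))
        proj_p = p[0] * u[0] + p[1] * u[1]
        below = sum(1 for q in cloud if q[0] * u[0] + q[1] * u[1] <= proj_p)
        above = sum(1 for q in cloud if q[0] * u[0] + q[1] * u[1] >= proj_p)
        depth = min(depth, below, above)
    return depth / len(cloud)

# robust parameter vectors lie deep inside the well-performing set
robust = [p for p in params if halfspace_depth(p, params) >= 0.3]
```

Deep vectors are surrounded by other well-performing vectors in every direction, which is why they remain acceptable when calibration data are perturbed.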

