Estimation of Parameters of Logistic Regression with Missing Covariates via Joint Conditional Likelihood Method

Author(s): Phuoc-Loc Tran, Truong-Nhat Le, Shen-Ming Lee, Chin-Shang Li
2002 · Vol. 92(1) · pp. 248-256
Author(s): R. Arieli, A. Yalov, A. Goldenshluger

DOI: 10.1152/japplphysiol.00434.2001. The power expression for cumulative oxygen toxicity and the exponential recovery were successfully applied to various features of oxygen toxicity. From the basic equation, we derived expressions for a protocol in which PO2 changes with time. The parameters of the power equation were solved by using nonlinear regression for the reduction in vital capacity (ΔVC) in humans: %ΔVC = 0.0082 × t^2 × (PO2/101.3)^4.57, where t is the time in hours and PO2 is expressed in kPa. The recovery of lung volume is ΔVC_t = ΔVC_e × e^[−(−0.42 + 0.00379 × PO2) × t], where ΔVC_t is the value at time t of the recovery, ΔVC_e is the value at the end of the hyperoxic exposure, and PO2 is the prerecovery oxygen pressure. Data from different experiments on central nervous system (CNS) oxygen toxicity in humans in the hyperbaric chamber (n = 661) were analyzed along with data from actual closed-circuit oxygen diving (n = 2,039) by using a maximum likelihood method. The parameters of the model were solved for the combined data, yielding the power equation for active diving: K = t^2 × (PO2/101.3)^6.8, where t is in minutes. It is suggested that the risk of CNS oxygen toxicity in diving can be derived from the calculated parameter of the normal distribution: Z = [ln(t) − 9.63 + 3.38 × ln(PO2/101.3)]/2.02. The recovery time constant for CNS oxygen toxicity was calculated from the value obtained for the rat, taking into account the effect of body mass, and yielded the recovery equation: K_t = K_e × e^(−0.079 × t), where K_t and K_e are the values of K at time t of the recovery process and at the end of the hyperbaric oxygen exposure, respectively, and t is in minutes.
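
The expressions above translate directly into code. The following is a minimal sketch of the abstract's equations as Python functions; the function names, the time-unit comments, and the interpretation of Z through the standard normal CDF are additions for illustration, not part of the paper.

```python
import math

def delta_vc_percent(t_hours, po2_kpa):
    """Pulmonary toxicity: %ΔVC = 0.0082 * t^2 * (PO2/101.3)^4.57, t in hours, PO2 in kPa."""
    return 0.0082 * t_hours**2 * (po2_kpa / 101.3) ** 4.57

def delta_vc_recovery(delta_vc_end, t, po2_kpa):
    """Recovery of lung volume: ΔVC_t = ΔVC_e * exp(-(-0.42 + 0.00379*PO2) * t).
    Sign convention and units follow the abstract as printed."""
    return delta_vc_end * math.exp(-(-0.42 + 0.00379 * po2_kpa) * t)

def cns_k(t_minutes, po2_kpa):
    """CNS toxicity index for active diving: K = t^2 * (PO2/101.3)^6.8, t in minutes."""
    return t_minutes**2 * (po2_kpa / 101.3) ** 6.8

def cns_z(t_minutes, po2_kpa):
    """Standard-normal deviate: Z = [ln(t) - 9.63 + 3.38*ln(PO2/101.3)] / 2.02."""
    return (math.log(t_minutes) - 9.63 + 3.38 * math.log(po2_kpa / 101.3)) / 2.02

def cns_risk(t_minutes, po2_kpa):
    """Convert Z to a cumulative risk via the standard normal CDF (one reading of the abstract)."""
    return 0.5 * (1.0 + math.erf(cns_z(t_minutes, po2_kpa) / math.sqrt(2.0)))

def cns_recovery(k_end, t_minutes):
    """CNS recovery: K_t = K_e * exp(-0.079 * t), t in minutes."""
    return k_end * math.exp(-0.079 * t_minutes)
```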


Metrika · 2011 · Vol. 75(5) · pp. 621-653
Author(s): Shen-Ming Lee, Chin-Shang Li, Shu-Hui Hsieh, Li-Hui Huang

Biometrics · 1998 · Vol. 54(1) · p. 295
Author(s): Stuart R. Lipsitz, Michael Parzen, Marian Ewell

2017 · Vol. 34(4) · pp. 494-507
Author(s): Ahmad Hakimi, Amirhossein Amiri, Reza Kamranrad

Purpose: The purpose of this paper is to develop robust approaches for estimating the logistic regression profile parameters in order to decrease the effects of outliers on the performance of the T2 control chart. In addition, the performance of the non-robust and the proposed robust control charts is evaluated in Phase II.
Design/methodology/approach: Robust approaches, including weighted maximum likelihood estimation, a redescending M-estimator, and a combination of the two (WRM), are used to decrease the effects of outliers on the estimation of the logistic regression parameters as well as on the performance of the T2 control chart.
Findings: The results of the simulation studies in both Phases I and II show the better performance of the proposed robust control charts, compared with the non-robust control chart, for estimating the logistic regression profile parameters and monitoring the logistic regression profiles.
Practical implications: In many practical applications there are outliers in processes, which may affect the estimation of parameters in Phase I and, as a result, deteriorate the statistical performance of control charts in Phase II. The methods developed in this paper are effective in decreasing the effect of outliers in both Phases I and II.
Originality/value: This paper considers monitoring the logistic regression profile in Phase I in the presence of outliers. Three robust approaches are developed to decrease the effects of outliers on parameter estimation and on the monitoring of logistic regression profiles in both Phases I and II.
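
The abstract names the robust estimators but does not give their formulas, so the following is only a generic sketch of the underlying idea: downweighting outlying observations inside iteratively reweighted least squares (IRLS) for logistic regression using a redescending (Tukey bisquare) weight on the Pearson residuals. The tuning constant c and all implementation details are assumptions, not the paper's WML, redescending M-estimator, or WRM procedures.

```python
import numpy as np

def robust_logistic_irls(X, y, c=4.0, n_iter=50, tol=1e-8):
    """Logistic regression by IRLS with a redescending (Tukey bisquare) weight
    on the Pearson residuals, so that outlying points receive weight near zero.
    X: (n, p) design matrix with a leading column of ones; y: (n,) 0/1 responses."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))            # fitted probabilities
        v = np.clip(mu * (1.0 - mu), 1e-10, None)  # binomial variances
        r = (y - mu) / np.sqrt(v)                  # Pearson residuals
        u = np.clip(np.abs(r) / c, 0.0, 1.0)
        w_rob = (1.0 - u**2) ** 2                  # bisquare weights in [0, 1]
        W = w_rob * v                              # combined IRLS weights
        z = eta + (y - mu) / v                     # working response
        XtW = X.T * W                              # X' diag(W)
        beta_new = np.linalg.solve(XtW @ X, XtW @ z)
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta
```

The coefficient vectors from such a Phase I estimator would then feed the T2 statistic used to monitor incoming profiles in Phase II.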


2021 · Vol. 27(1) · pp. 43-53
Author(s): J.O. Braimah, J.A. Adjekukor, N. Edike, S.O. Elakhe

The Exponentiated Inverted Weibull Distribution (EIWD) has a unimodal hazard (failure) rate function, which makes it less suitable for modeling data with an increasing failure rate (IFR). Hence the need to generalize the EIWD in order to obtain a distribution that can model such datasets. This paper therefore extends the EIWD to obtain the Weibull Exponentiated Inverted Weibull (WEIW) distribution using the Weibull-Generator technique. The properties investigated include the mean, variance, median, moments, quantile function, and moment generating function. Explicit expressions were derived for the order statistics and the hazard (failure) rate function. The parameters were estimated using the maximum likelihood method. The developed model was applied to a real-life dataset and compared with some existing competing lifetime distributions. The results revealed that the WEIW distribution provides a better fit to the real-life dataset than the existing Weibull/exponential family distributions.
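
As an illustration of the maximum likelihood step, here is a hedged numerical-MLE sketch for the baseline EIWD, assuming the common two-parameter form F(x) = exp(−θ·x^(−β)) for x > 0; the WEIW log-density produced by the Weibull-Generator is not reproduced here, but the same fit_mle helper would apply once that density is written down. The function names, parameterization, and starting values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def eiwd_logpdf(x, theta, beta):
    """Log-density of an Exponentiated Inverted Weibull with (assumed) CDF
    F(x) = exp(-theta * x**(-beta)), x > 0, theta > 0, beta > 0."""
    return np.log(theta) + np.log(beta) - (beta + 1.0) * np.log(x) - theta * x ** (-beta)

def fit_mle(data, logpdf, init):
    """Generic numerical maximum likelihood: minimize the negative log-likelihood."""
    data = np.asarray(data, dtype=float)

    def nll(params):
        if np.any(np.asarray(params) <= 0.0):
            return np.inf                      # keep the search in the valid region
        return -np.sum(logpdf(data, *params))

    res = minimize(nll, init, method="Nelder-Mead")
    return res.x, -res.fun                     # estimates and maximized log-likelihood

# Example usage on a positive lifetime sample `data`:
# (theta_hat, beta_hat), loglik = fit_mle(data, eiwd_logpdf, init=[1.0, 1.0])
```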


2018 · Vol. 52(1) · pp. 19-41
Author(s): Yuvraj Sunecher, Naushad Mamode Khan, Vandna Jowaheer

It is commonly observed in medical and financial studies that large volumes of count time series are collected on several variates. Modelling such series and estimating the parameters of the underlying processes are challenging because these high-dimensional time series are influenced by time-varying covariates that render the data non-stationary. This paper considers the modelling of a bivariate integer-valued autoregressive (BINAR(1)) process in which the innovation terms are distributed with non-stationary Poisson moments. Since the full and conditional likelihood approaches are cumbersome in this situation, a Generalized Quasi-Likelihood (GQL) approach is proposed to estimate the regression effects, while the serial and time-dependent cross-correlation effects are handled by the method of moments. The new technique is assessed in several simulation experiments, and the results demonstrate that GQL yields consistent estimates and is computationally stable, with few non-convergent simulations reported.
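
To make the setup concrete, below is a minimal simulation sketch of a BINAR(1) process built from componentwise binomial thinning with non-stationary Poisson innovations. The log-link covariate specification, the common-shock device used to induce cross-correlation, and all parameter values are assumptions for illustration only; they are not the paper's model, and the GQL/method-of-moments estimation is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def thin(x, rho):
    """Binomial thinning operator rho ∘ x: number of survivors among x Bernoulli(rho) trials."""
    return rng.binomial(x, rho)

def simulate_binar1(T, rho=(0.4, 0.5), beta1=(0.5, 0.3), beta2=(0.2, 0.6), phi=0.3):
    """Simulate a bivariate INAR(1) count series with time-varying Poisson innovation
    means driven by a covariate z_t through a log link; a common Poisson shock with
    mean phi induces cross-correlation between the two series."""
    z = rng.normal(size=T)                      # a time-varying covariate
    lam1 = np.exp(beta1[0] + beta1[1] * z)      # innovation mean, series 1
    lam2 = np.exp(beta2[0] + beta2[1] * z)      # innovation mean, series 2
    x = np.zeros((T, 2), dtype=int)
    for t in range(1, T):
        common = rng.poisson(phi)               # shared shock -> cross-correlation
        r1 = common + rng.poisson(max(lam1[t] - phi, 0.0))
        r2 = common + rng.poisson(max(lam2[t] - phi, 0.0))
        x[t, 0] = thin(x[t - 1, 0], rho[0]) + r1
        x[t, 1] = thin(x[t - 1, 1], rho[1]) + r2
    return x, z

counts, z = simulate_binar1(500)
```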


1994 · Vol. 24(3) · pp. 229-238
Author(s): P. C. Sham, E. E. Walters, M. C. Neale, A. C. Heath, C. J. MacLean, ...

2020 · Vol. 67(1) · pp. 5-32
Author(s): Barbara Pawełek, Jadwiga Kostrzewska, Maciej Kostrzewski, Krzysztof Gałuszka

The aim of this paper is to present the results of an assessment of the financial condition of companies from the construction industry after the announcement of arrangement bankruptcy, in comparison with the condition of healthy companies. Logistic regression models estimated by means of the maximum likelihood (ML) method and the Bayesian approach were used. The first achievement of our study is the assessment of the financial condition of companies from the construction industry after the announcement of bankruptcy. The second is the application of an approach combining the classical and Bayesian logistic regression models to assess the financial condition of companies in the years following the declaration of bankruptcy, and the presentation of the benefits of such a combination. The analysis described in the paper, carried out for the most part by means of the ML logistic regression model, was supplemented with information yielded by the Bayesian approach. In particular, analysing the shape of the posterior distribution of the repeat-bankruptcy probability makes it possible, in some cases, to observe that the financial condition of a company is not clear, despite clear assessments made on the basis of the point estimates.
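
For the Bayesian side of such a combined analysis, a minimal sketch is a random-walk Metropolis sampler for the coefficients of a logistic regression with generic independent normal priors, from which the posterior distribution of a firm's repeat-bankruptcy probability can be inspected. The prior, proposal step size, and function names are assumptions; the paper's own prior specification and sampler are not reproduced here.

```python
import numpy as np

def log_posterior(beta, X, y, prior_sd=10.0):
    """Log-posterior of a logistic regression with independent N(0, prior_sd^2)
    priors on the coefficients (a generic choice, not the paper's prior)."""
    eta = X @ beta
    loglik = np.sum(y * eta - np.logaddexp(0.0, eta))   # Bernoulli log-likelihood
    logprior = -0.5 * np.sum((beta / prior_sd) ** 2)
    return loglik + logprior

def metropolis_logit(X, y, n_draws=5000, step=0.05, seed=0):
    """Random-walk Metropolis sampler for the posterior of the coefficients."""
    rng = np.random.default_rng(seed)
    beta = np.zeros(X.shape[1])
    current = log_posterior(beta, X, y)
    draws = np.empty((n_draws, X.shape[1]))
    for i in range(n_draws):
        proposal = beta + step * rng.normal(size=beta.size)
        cand = log_posterior(proposal, X, y)
        if np.log(rng.uniform()) < cand - current:
            beta, current = proposal, cand
        draws[i] = beta
    return draws

# Posterior distribution of the repeat-bankruptcy probability for one firm x0:
# draws = metropolis_logit(X, y); p0 = 1.0 / (1.0 + np.exp(-(draws @ x0)))
# A wide or bimodal histogram of p0 signals an unclear financial condition even
# when the point estimate looks decisive.
```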


2020 · Vol. 36(4) · pp. 1253-1259
Author(s): Autcha Araveeporn, Yuwadee Klomwises

The Markov chain Monte Carlo (MCMC) method is a popular way of obtaining information about a probability distribution, here used to estimate the posterior distribution by Gibbs sampling. Standard methods, namely maximum likelihood and logistic ridge regression, are used as benchmarks for comparison with MCMC. The maximum likelihood method is the classical way to estimate the parameters of the logistic regression model, by differentiating the log-likelihood function. Logistic ridge regression depends on the choice of the ridge parameter, selected by cross-validation, to compute the estimator under a penalty function. This paper applies maximum likelihood, logistic ridge regression, and MCMC to estimate the parameters of the logit function and transform it into a probability. The logistic regression model predicts the probability of observing a phenomenon, and prediction accuracy is evaluated as the percentage of correct predictions of a binary event. A simulation study generates a binary response variable using 2, 4, and 6 explanatory variables drawn from a multivariate normal distribution with positive and negative correlation coefficients (a multicollinearity problem). The methods are compared on maximum predictive accuracy. The results show that MCMC performs satisfactorily in all situations.
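
As a small illustration of two of the competitors, the sketch below fits plain maximum likelihood (ridge = 0) and ridge-penalized logistic regression by Newton-Raphson on simulated correlated covariates and compares in-sample predictive accuracy. The data-generating values and the fixed ridge grid are illustrative only; the paper selects the ridge parameter by cross-validation and also includes the MCMC competitor, which is omitted here.

```python
import numpy as np

def sigmoid(eta):
    """Logistic link with clipping for numerical safety."""
    return 1.0 / (1.0 + np.exp(-np.clip(eta, -30.0, 30.0)))

def fit_logit(X, y, ridge=0.0, n_iter=100):
    """Newton-Raphson (IRLS) fit of a logistic regression; ridge > 0 adds an L2
    penalty (the intercept is penalized too, purely to keep the sketch short)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = sigmoid(X @ beta)
        grad = X.T @ (y - mu) - ridge * beta
        H = (X.T * (mu * (1.0 - mu))) @ X + ridge * np.eye(X.shape[1])
        step = np.linalg.solve(H, grad)
        beta = beta + step
        if np.max(np.abs(step)) < 1e-8:
            break
    return beta

def accuracy(X, y, beta):
    """Share of correct 0/1 predictions at the 0.5 probability cut-off."""
    return np.mean((sigmoid(X @ beta) > 0.5) == y)

# Simulated correlated explanatory variables (a multicollinearity setting).
rng = np.random.default_rng(1)
n, p, rho = 200, 4, 0.9
cov = rho + (1.0 - rho) * np.eye(p)
X = np.column_stack([np.ones(n), rng.multivariate_normal(np.zeros(p), cov, size=n)])
beta_true = np.array([0.5, 1.0, -1.0, 0.5, -0.5])
y = (rng.uniform(size=n) < sigmoid(X @ beta_true)).astype(float)

for lam in [0.0, 1.0, 10.0]:   # lam = 0.0 corresponds to plain maximum likelihood
    print(lam, accuracy(X, y, fit_logit(X, y, ridge=lam)))
```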

