Comparison of Censored Regression and Standard Regression Analyses for Modeling Relationships between Antimicrobial Susceptibility and Patient- and Institution-Specific Variables

2006 ◽  
Vol 50 (1) ◽  
pp. 62-67 ◽  
Author(s):  
Jeffrey P. Hammel ◽  
Sujata M. Bhavnani ◽  
Ronald N. Jones ◽  
Alan Forrest ◽  
Paul G. Ambrose

ABSTRACT In order to identify patients likely to be infected with resistant bacterial pathogens, analytic methods such as standard regression (SR) may be applied to surveillance data to determine patient- and institution-specific factors predictive of an increased MIC. However, the censored nature of MIC data (e.g., MIC ≤ 0.5 mg/liter or MIC > 8 mg/liter) imposes certain limitations on the use of SR. To investigate these limitations, simulations were performed comparing a regression method tailored for censored data (censored regression [CR]) with SR. Using a model relating piperacillin-tazobactam MICs for Enterobacter spp. to patient age and hospital bed capacity, 200 simulations of 500 isolates each were performed. Various MIC censoring patterns were imposed by using 26 left- or right-censoring (L, R) pairs (e.g., MIC ≤ 2 mg/liter left-censored [2L] or MIC > 2 mg/liter right-censored [2R], respectively). Data were fit by CR and by SR with censored MICs either (i) excluded, (ii) replaced by 2L or 2R, or (iii) replaced by 2L − 1 or 2R + 1 (i.e., one dilution step beyond the limit on the log2 scale). Total censoring for the 26 pairs ranged from 7 to 86%. By CR, deviations of average parameter estimates from the true parameter values were <0.10 log2 (mg/liter) for all parameters for each of the 26 pairs. By SR, these deviations exceeded 0.10 log2 (mg/liter) for at least 18 of the 26 pairs for all but one parameter. Two-standard-error confidence intervals for individual parameters contained the true value in as little as 0% of cases for the SR approaches but in ≥91.5% of cases for CR. When censored MIC data are modeled, CR may reduce or eliminate the biased parameter estimates obtained by SR.
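To make the contrast concrete, here is a minimal sketch of a censored (Tobit-style) regression fit by maximum likelihood, assuming log2(MIC) is linear in the covariates. The covariate names (age, beds), the censoring limits, and the coefficient values are hypothetical illustrations, not the paper's actual model or data.

```python
# Censored-regression sketch: exact observations contribute the normal
# density; censored ones contribute the CDF (left) or survival (right).
import numpy as np
from scipy import optimize, stats

def neg_log_lik(params, X, y, censor):
    """censor: 0 = exact, -1 = left-censored (<= y), +1 = right-censored (> y)."""
    *beta, log_sigma = params
    sigma = np.exp(log_sigma)            # keep sigma positive
    z = (y - X @ np.asarray(beta)) / sigma
    ll = np.where(censor == 0, stats.norm.logpdf(z) - np.log(sigma), 0.0)
    ll = np.where(censor == -1, stats.norm.logcdf(z), ll)
    ll = np.where(censor == +1, stats.norm.logsf(z), ll)
    return -ll.sum()

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.uniform(20, 90, n), rng.uniform(50, 900, n)])
true_beta = np.array([1.0, 0.01, 0.001])     # illustrative values only
y = X @ true_beta + rng.normal(0, 1.0, n)    # latent log2(MIC)
censor = np.zeros(n, dtype=int)
censor[y <= 1.0], censor[y > 3.0] = -1, +1   # impose (L, R) censoring limits
y = np.clip(y, 1.0, 3.0)                     # observed values sit at the limits

res = optimize.minimize(neg_log_lik, x0=[0, 0, 0, 0.0],
                        args=(X, y, censor), method="BFGS")
print(res.x[:-1])  # CR estimates should be close to true_beta
```

Fitting the same clipped data by ordinary least squares instead would reproduce the SR bias the abstract describes.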

2014 ◽  
Vol 26 (3) ◽  
pp. 472-496 ◽  
Author(s):  
Levin Kuhlmann ◽  
Michael Hauser-Raspe ◽  
Jonathan H. Manton ◽  
David B. Grayden ◽  
Jonathan Tapson ◽  
...  

Bayesian spiking neurons (BSNs) provide a probabilistic interpretation of how neurons perform inference and learning. Online learning in BSNs typically involves parameter estimation based on maximum-likelihood expectation-maximization (ML-EM), which is computationally slow and limits the potential for studying networks of BSNs. An online learning algorithm, fast learning (FL), is presented that is more computationally efficient than the benchmark ML-EM for a fixed number of time steps as the number of inputs to a BSN increases (e.g., 16.5 times faster run times for 20 inputs). Although ML-EM appears to converge 2.0 to 3.6 times faster than FL, the higher computational cost of ML-EM means that it takes longer to simulate to convergence than FL. FL also provides reasonable convergence performance that is robust to initialization with parameter estimates far from the true parameter values. However, parameter estimation depends on the range of the true parameter values. Nevertheless, for a physiologically meaningful range of parameter values, FL gives very good average estimation accuracy despite its approximate nature. The FL algorithm therefore provides an efficient tool, complementary to ML-EM, for exploring BSN networks in more detail in order to better understand their biological relevance. Moreover, the simplicity of the FL algorithm means it can be easily implemented in neuromorphic VLSI, so that one can take advantage of the energy-efficient spike coding of BSNs.


2016 ◽  
Vol 46 (1) ◽  
pp. 44-67 ◽  
Author(s):  
Stephen Vaisey ◽  
Andrew Miles

The recent change in the General Social Survey (GSS) to a rotating panel design is a landmark development for social scientists. Sociological methodologists have argued that fixed-effects (FE) models are generally the best starting point for analyzing panel data because they allow analysts to control for unobserved time-constant heterogeneity. We review these treatments and demonstrate the advantages of FE models in the context of the GSS. We also show, however, that FE models rest on two rarely tested assumptions that can seriously bias parameter estimates when violated. We provide simple tests for these assumptions. We further demonstrate that FE models are extremely sensitive to the correct specification of temporal lags. We provide a simulation and a proof to show that using incorrect lags in FE models can yield coefficients with the opposite sign of the true parameter values.
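A minimal sketch of the FE logic the authors build on, using simulated panel data rather than the GSS itself: when a time-constant confounder drives both x and y, pooled OLS is biased, while the within (demeaning) transformation sweeps it out.

```python
# Fixed-effects (within) estimator vs. pooled OLS on a simulated panel.
import numpy as np

rng = np.random.default_rng(1)
n_units, n_waves, beta = 1000, 3, 0.5
alpha = rng.normal(0, 1, n_units)                          # unobserved heterogeneity
x = alpha[:, None] + rng.normal(0, 1, (n_units, n_waves))  # x correlated with alpha
y = alpha[:, None] + beta * x + rng.normal(0, 1, (n_units, n_waves))

# Pooled OLS ignores alpha and is biased upward here.
b_pooled = np.polyfit(x.ravel(), y.ravel(), 1)[0]

# Within transformation: demean each unit, sweeping out time-constant alpha.
xd = x - x.mean(axis=1, keepdims=True)
yd = y - y.mean(axis=1, keepdims=True)
b_fe = (xd.ravel() @ yd.ravel()) / (xd.ravel() @ xd.ravel())

print(f"pooled OLS: {b_pooled:.3f}, fixed effects: {b_fe:.3f} (true: {beta})")
```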


2011 ◽  
Vol 152 (20) ◽  
pp. 797-801 ◽  
Author(s):  
Miklós Gresz

In past decades, the bed occupancy of hospitals in Hungary has been calculated as the average of in-patient days relative to the number of beds during a given period of time. This is the only measure currently considered when evaluating the performance of hospitals and changing their bed capacity. The author outlines how limited the use of this indicator is and what other statistical indicators may characterize the occupancy of hospital beds. Since adjusting capacity to patient needs is becoming increasingly important, it is essential to find indicators that can be easily applied in practice and can assist medical personnel and funders who do not work with statistics. The author recommends the use of daily bed occupancy as the base for all these statistical indicators. Orv. Hetil., 2011, 152, 797–801.
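A minimal sketch of the daily bed-occupancy indicator the author argues for, assuming per-stay admission and discharge dates are available; the ward size and stays below are hypothetical.

```python
# Daily bed occupancy from admission/discharge dates, rather than a
# single period average.
from datetime import date, timedelta

stays = [(date(2011, 3, 1), date(2011, 3, 5)),   # (admitted, discharged)
         (date(2011, 3, 2), date(2011, 3, 3)),
         (date(2011, 3, 4), date(2011, 3, 8))]
beds = 4
start, end = date(2011, 3, 1), date(2011, 3, 8)

day = start
while day <= end:
    occupied = sum(adm <= day < dis for adm, dis in stays)
    print(day, f"occupancy {occupied}/{beds} = {occupied / beds:.0%}")
    day += timedelta(days=1)
```

The resulting daily series exposes peaks and idle days that a single period-average figure hides.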


Author(s):  
Russell Cheng

This book relies on maximum likelihood (ML) estimation of parameters. Asymptotic theory assumes that regularity conditions hold under which the ML estimator is consistent. Typically, an additional third-derivative condition is assumed to ensure that the ML estimator is also asymptotically normally distributed. The standard asymptotic results that then hold are summarized in this chapter: for example, the asymptotic variance of the ML estimator is given by the Fisher information formula, and the log-likelihood ratio, Wald, and score statistics for testing the statistical significance of parameter estimates are all asymptotically equivalent. Also, the useful profile log-likelihood then behaves exactly like a standard log-likelihood, only in a parameter space of just one dimension. Further, the model can be reparametrized to make it locally orthogonal in the neighbourhood of the true parameter value. The large exponential family of models, for which a unified set of regularity conditions can be obtained, is briefly reviewed.
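For reference, a compact statement of these standard first-order results, written in generic notation rather than the book's own:

```latex
% Consistency, asymptotic normality, and the Fisher information variance:
\[
  \hat{\theta}_n \xrightarrow{p} \theta_0, \qquad
  \sqrt{n}\,(\hat{\theta}_n - \theta_0) \xrightarrow{d}
    N\!\bigl(0,\; I(\theta_0)^{-1}\bigr), \qquad
  I(\theta) = -\,\mathbb{E}\!\left[\frac{\partial^2 \log f(Y;\theta)}
    {\partial \theta\, \partial \theta^{\mathsf T}}\right].
\]
% The three classical test statistics are asymptotically equivalent:
\[
  W = n\,(\hat{\theta}-\theta_0)^{\mathsf T} I(\hat{\theta})\,(\hat{\theta}-\theta_0),
  \quad
  LR = 2\{\ell(\hat{\theta}) - \ell(\theta_0)\},
  \quad
  S = \tfrac{1}{n}\, U(\theta_0)^{\mathsf T} I(\theta_0)^{-1} U(\theta_0),
\]
% all converging to a chi-squared distribution with d degrees of freedom
% under the null hypothesis that theta equals theta_0.
```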


Genetics ◽  
2000 ◽  
Vol 155 (3) ◽  
pp. 1429-1437
Author(s):  
Oliver G Pybus ◽  
Andrew Rambaut ◽  
Paul H Harvey

Abstract We describe a unified set of methods for the inference of demographic history using genealogies reconstructed from gene sequence data. We introduce the skyline plot, a graphical, nonparametric estimate of demographic history. We discuss both maximum-likelihood parameter estimation and demographic hypothesis testing. Simulations are carried out to investigate the statistical properties of maximum-likelihood estimates of demographic parameters. The simulations reveal that (i) the performance of exponential growth model estimates is determined by a simple function of the true parameter values and (ii) under some conditions, estimates from reconstructed trees perform as well as estimates from perfect trees. We apply our methods to HIV-1 sequence data and find strong evidence that subtypes A and B have different demographic histories. We also provide the first (albeit tentative) genetic evidence for a recent decrease in the growth rate of subtype B.
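A minimal sketch of the classic skyline plot computation, assuming inter-coalescent interval lengths taken from a reconstructed genealogy. Under the coalescent, the expected interval length while k lineages remain is 2N/(k(k−1)), so each interval yields the nonparametric estimate N̂ = γ_k·k(k−1)/2. The interval values below are hypothetical.

```python
# Classic skyline plot: one piecewise-constant population size estimate
# per inter-coalescent interval of the genealogy.
n_tips = 6
gammas = {6: 0.8, 5: 1.1, 4: 1.9, 3: 2.5, 2: 6.0}  # k -> interval length

t = 0.0
for k in range(n_tips, 1, -1):
    g = gammas[k]
    n_hat = g * k * (k - 1) / 2.0                   # moment estimate of N
    print(f"[{t:.2f}, {t + g:.2f}) with {k} lineages: N_hat = {n_hat:.2f}")
    t += g
```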


1995 ◽  
Vol 22 (4) ◽  
pp. 819-833 ◽  
Author(s):  
Mukesh Sharma ◽  
Neil R. Thomson ◽  
Edward A. McBean

Detection limits of analytical instruments are the main reason for censored observations of pollutant concentrations. An iterative least squares method for regression analysis is developed to suit the doubly censored data commonly encountered in environmental engineering. The modified iterative least squares method uses, within the regression process, the expected values of censored observations estimated from the probability density function of the doubly censored data. The modified method is examined for bias in the estimation of the parameters of a linear model and in the estimation of the standard deviation of the regression. A mechanistic model for atmospheric transport and deposition of polycyclic aromatic hydrocarbons (PAHs) to a snow surface is formulated by exploiting the long-term PAH retention of deep snowpacks. The modified iterative least squares method is applied to estimate the deposition parameters (dry deposition velocity and washout ratio) for various PAH species, since some of the PAH deposition levels were below the minimum detection limit of the analytical instrument. The estimated parameters are examined statistically and compare favourably with previously reported estimates. Key words: censored data, regression, iterative least squares, PAHs, dry deposition velocity, washout ratio.
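A minimal sketch in the spirit of this scheme, though not the paper's exact algorithm: censored responses are replaced by their conditional expectations under the current normal fit (the standard truncated-normal formulas with the inverse Mills ratio), and ordinary least squares is refit until the estimates settle.

```python
# Iterative least squares for doubly censored data.
import numpy as np
from scipy import stats

def fit_censored_ils(X, y, left, right, n_iter=50):
    """left/right: boolean masks; y holds the detection limits where censored."""
    y_work = y.copy()
    for _ in range(n_iter):
        beta, *_ = np.linalg.lstsq(X, y_work, rcond=None)
        mu = X @ beta
        sigma = (y_work - mu).std(ddof=X.shape[1])
        # E[Y | Y < limit] and E[Y | Y > limit] for a normal model
        zl = (y[left] - mu[left]) / sigma
        zr = (y[right] - mu[right]) / sigma
        y_work[left] = mu[left] - sigma * stats.norm.pdf(zl) / stats.norm.cdf(zl)
        y_work[right] = mu[right] + sigma * stats.norm.pdf(zr) / stats.norm.sf(zr)
    return beta, sigma
```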


2015 ◽  
Vol 2015 ◽  
pp. 1-12 ◽  
Author(s):  
Chao Zhang ◽  
Ru-bin Wang ◽  
Qing-xiang Meng

Parameter optimization for conceptual rainfall-runoff (CRR) models has long been a difficult problem in hydrology, since watershed hydrological models are high-dimensional and nonlinear, with multimodal, nonconvex response surfaces, and their parameters are strongly interrelated and complementary. In the research presented here, the shuffled complex evolution (SCE-UA) global optimization method was used to calibrate the Xinanjiang (XAJ) model. We defined ideal (synthetic) data and also applied the method to observed data. Our results show that, in the case of ideal data, the data length did not affect the parameter optimization for the hydrological model: if the objective function was selected appropriately, the proposed method found the true parameter values. In the case of observed data, we applied the technique to different lengths of data (1, 2, and 3 years) and compared the results with those for ideal data. We found that errors in the data and in the model structure lead to significant uncertainties in the parameter optimization.
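A minimal sketch of this kind of calibration against ideal data. SciPy's differential_evolution stands in for SCE-UA here (SCE-UA itself is not in SciPy), and a one-parameter linear reservoir stands in for the Xinanjiang model; both substitutions are illustrative only.

```python
# Global-optimization calibration of a toy rainfall-runoff model
# against synthetic "ideal data" generated with a known parameter.
import numpy as np
from scipy.optimize import differential_evolution

def linear_reservoir(k, rain):
    storage, flow = 0.0, []
    for p in rain:
        storage += p
        q = k * storage          # outflow proportional to storage
        storage -= q
        flow.append(q)
    return np.array(flow)

rng = np.random.default_rng(2)
rain = rng.exponential(2.0, 365)
q_obs = linear_reservoir(0.3, rain)          # ideal data: true k is known

def misfit(params):                          # sum-of-squares objective
    return np.sum((linear_reservoir(params[0], rain) - q_obs) ** 2)

result = differential_evolution(misfit, bounds=[(0.01, 0.99)], seed=2)
print(result.x)  # recovers k = 0.3 when the objective is appropriate
```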


2000 ◽  
Vol 42 (3-4) ◽  
pp. 59-68 ◽  
Author(s):  
S.-E. Oh ◽  
K.-S. Kim ◽  
H.-C. Choi ◽  
J. Cho ◽  
I.S. Kim

To study the kinetics and physiology of autotrophic denitrifying sulfur bacteria, a steady-state anaerobic master culture reactor (MCR) was operated for over six months under a semi-continuous mode and nitrate-limiting conditions using a nutrient/mineral/buffer (NMB) medium containing thiosulfate and nitrate. Characteristics of the autotrophic denitrifier were investigated through the cumulative gas production volume and rate, measured using an anaerobic respirometer, and through the nitrate, nitrite, and sulfate concentrations within the media. The biokinetic parameters were obtained based upon the Monod equation using mixed cultures in the MCR. Nonlinear regression analysis was employed using nitrate depletion and biomass production curves. Although this analysis did not yield exact biokinetic parameter estimates, the following ranges for the parameter values were obtained: μmax = 0.12-0.2 hr−1; k = 0.3-0.4 hr−1; Ks = 3-10 mg/L; YNO3 = 0.4-0.5 mg biomass/mg NO3−-N. Inhibition of denitrification occurred when the concentrations of NO3−-N and SO42− reached about 660 mg/L and 2,000 mg/L, respectively. The autotrophic denitrifying sulfur bacteria were observed to be very sensitive to nitrite but relatively tolerant of nitrate, sulfate, and thiosulfate. Under mixotrophic conditions, denitrification by these bacteria occurred autotrophically; even with as much as 2 g COD, autotrophic denitrification was not significantly affected. The optimal pH and temperature for autotrophic denitrification were about 6.5–7.5 and 33–35 °C, respectively.
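A minimal sketch of estimating Monod parameters by nonlinear regression, as described above. The substrate and growth-rate values are hypothetical; only the initial guesses are informed by the reported ranges.

```python
# Nonlinear least squares fit of the Monod equation mu = mu_max*S/(Ks+S).
import numpy as np
from scipy.optimize import curve_fit

def monod(s, mu_max, ks):
    return mu_max * s / (ks + s)   # specific growth rate vs. substrate

s = np.array([1, 2, 5, 10, 20, 50, 100.0])                  # NO3-N, mg/L
mu = np.array([0.02, 0.04, 0.07, 0.10, 0.12, 0.14, 0.15])   # hr^-1

(mu_max, ks), cov = curve_fit(monod, s, mu, p0=[0.15, 5.0])
print(f"mu_max = {mu_max:.3f} hr^-1, Ks = {ks:.2f} mg/L")
```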


2019 ◽  
Vol 3 ◽  
Author(s):  
Charlotte Olivia Brand ◽  
James Patrick Ounsley ◽  
Daniel Job Van der Post ◽  
Thomas Joshua Henry Morgan

This paper introduces a statistical technique known as “posterior passing,” in which the results of past studies are used to inform the analyses carried out by subsequent studies. We first describe the technique in detail and show how it can be implemented by individual researchers on an experiment-by-experiment basis. We then use a simulation to explore its success in identifying true parameter values compared with current statistical norms (ANOVAs and GLMMs). We find that posterior passing allows the true effect in the population to be found with greater accuracy and consistency than the other analysis types considered. Furthermore, posterior passing performs almost identically to an analysis in which all data from all simulated studies are combined and analysed as one dataset. On this basis, we suggest that posterior passing is a viable means of implementing cumulative science. Because it prevents the accumulation of large bodies of conflicting literature, it also alleviates the need for traditional meta-analyses. Instead, posterior passing cumulatively and collaboratively provides clarity in real time as each new study is produced, and it is thus a strong candidate for a new, cumulative approach to scientific analysis and publishing.
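A minimal sketch of the core idea using a conjugate normal model with known observation variance, so each posterior has a closed form; the simulated studies and effect size are hypothetical, and the paper's own implementation may differ.

```python
# Posterior passing: each study's posterior for the effect becomes the
# next study's prior, so evidence accumulates across studies.
import numpy as np

rng = np.random.default_rng(3)
true_effect, obs_sd, n_per_study = 0.4, 1.0, 50

prior_mean, prior_var = 0.0, 100.0     # diffuse prior for study 1
for study in range(1, 6):
    data = rng.normal(true_effect, obs_sd, n_per_study)
    like_var = obs_sd ** 2 / n_per_study
    post_var = 1.0 / (1.0 / prior_var + 1.0 / like_var)
    post_mean = post_var * (prior_mean / prior_var + data.mean() / like_var)
    print(f"study {study}: posterior mean {post_mean:.3f}, sd {post_var ** 0.5:.3f}")
    prior_mean, prior_var = post_mean, post_var   # pass the posterior forward
```

The posterior after the final study matches what a single pooled analysis of all the data would give, which is the equivalence the simulation results describe.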


2017 ◽  
Vol 10 (1) ◽  
pp. 127-154 ◽  
Author(s):  
Iris Kriest ◽  
Volkmar Sauerland ◽  
Samar Khatiwala ◽  
Anand Srivastav ◽  
Andreas Oschlies

Abstract. Global biogeochemical ocean models contain a variety of different biogeochemical components and often much-simplified representations of complex dynamical interactions, described by many (≈10 to ≈100) parameters. The values of many of these parameters are empirically difficult to constrain, because in the models they represent processes for a range of different groups of organisms at the same time, while even for single species parameter values are often difficult to determine in situ. These models are therefore subject to a high level of parametric uncertainty, which may have consequences for their skill in accurately describing the relevant features of the present ocean, as well as for their sensitivity to possible environmental changes. We here present a framework for the calibration of global biogeochemical ocean models on short and long timescales. The framework combines an offline approach for the transport of biogeochemical tracers with an estimation-of-distribution algorithm (the Covariance Matrix Adaptation Evolution Strategy, CMA-ES). We explore the performance and capability of this framework through five different optimizations of six biogeochemical parameters of a global biogeochemical model, simulated over 3000 years. First, a twin experiment explores the feasibility of the approach. Four optimizations against a climatology of observations of annual mean dissolved nutrients and oxygen determine the extent to which different setups of the optimization influence model fit and parameter estimates. Because the misfit function applied focuses on the large-scale distribution of inorganic biogeochemical tracers, parameters that act on large spatial and temporal scales are determined earliest and with the least spread. Parameters more closely tied to surface biology, which act on shorter timescales, are more difficult to determine. In particular, the search for optimum zooplankton parameters can benefit from sound knowledge of maximum and minimum parameter values, leading to a more efficient optimization. It is encouraging that, although the misfit function does not contain any direct information about biogeochemical turnover, the optimized models nevertheless provide a better fit to observed global biogeochemical fluxes.
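A minimal sketch of a CMA-ES ask/tell calibration loop, assuming the third-party pycma package (`pip install cma`) rather than the paper's own framework. The six-parameter quadratic misfit below is a hypothetical stand-in for a 3000-year biogeochemical model run.

```python
# CMA-ES optimization loop: sample candidates, evaluate the misfit,
# update the search distribution, repeat until the stop criteria hit.
import numpy as np
import cma

true_params = np.array([0.5, 1.2, 0.05, 2.0, 0.8, 0.3])

def misfit(params):
    # In the real framework this would run the offline tracer-transport
    # model and compare against climatological nutrient/oxygen fields.
    return float(np.sum((np.asarray(params) - true_params) ** 2))

es = cma.CMAEvolutionStrategy(x0=6 * [1.0], sigma0=0.5)
while not es.stop():
    solutions = es.ask()                    # sample candidate parameter sets
    es.tell(solutions, [misfit(s) for s in solutions])
es.result_pretty()                          # prints the best parameters found
```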

