unbiased estimates
Recently Published Documents

TOTAL DOCUMENTS: 312 (five years: 75)
H-INDEX: 31 (five years: 4)

Diagnostics ◽  
2021 ◽  
Vol 12 (1) ◽  
pp. 87
Author(s):  
Eunae Lee ◽  
Dong Sik Kim

In fluoroscopic imaging, X-ray image sequences are acquired with a flat-panel dynamic detector. However, lag signals from previous frames are added to subsequently acquired images and produce lag artifacts. The lag signals also inflate the measured noise power spectrum (NPS) of a detector. To correct the measured NPS, the lag correction factor (LCF) is generally used. However, the nonuniform temporal gain (NTG) arising from inconsistent X-ray sources and readout circuits can significantly distort LCF measurements. In this paper, we propose a simple scheme that alleviates the NTG problem so that the detector LCF can be measured accurately and efficiently. We first theoretically analyze the effects of NTG, especially on correlation-based LCF measurement methods, which require calculating correlation coefficients. To remove the biases due to NTG, a notion of conditional covariance is employed to obtain unbiased estimates of the correlation coefficients. Experiments were conducted on practical X-ray images acquired from a dynamic detector. The proposed approach yields LCF values as accurate as those of the current direct and U-L correction approaches, at low computational complexity. By calculating the correlation coefficients from conditional covariances, accurate LCF values are obtained even under NTG. The approach requires no direct- or U-L-correction preprocessing and provides more accurate LCF values than the IEC 62220-1-3 method.
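The abstract does not spell out the conditional-covariance estimator, but the bias it targets is easy to reproduce. The sketch below (illustrative only, with a simple per-frame gain normalization standing in for the authors' method) simulates a one-frame lag plus a slow multiplicative gain drift, and shows how the drift inflates the naive lag-1 correlation on which correlation-based LCF measurements rely:

```python
import numpy as np

rng = np.random.default_rng(0)

T, N = 200, 5000           # frames, pixels
mu, a = 100.0, 0.1         # mean signal, first-frame lag coefficient
rho_true = a / (1 + a**2)  # lag-1 correlation induced by the lag alone

# White detector noise with a simple one-frame lag added in.
e = rng.standard_normal((T + 1, N))
frames = mu + e[1:] + a * e[:-1]

# Nonuniform temporal gain (NTG): a slow multiplicative drift of the source.
gain = 1.0 + 0.02 * np.sin(2 * np.pi * np.arange(T) / 50)
frames_ntg = gain[:, None] * frames

def lag1_corr(x):
    """Pooled per-pixel lag-1 correlation across the frame sequence."""
    xc = x - x.mean(axis=0)  # remove each pixel's temporal mean
    num = (xc[:-1] * xc[1:]).sum()
    den = np.sqrt((xc[:-1] ** 2).sum() * (xc[1:] ** 2).sum())
    return num / den

naive = lag1_corr(frames_ntg)  # strongly biased by the gain drift
corrected = lag1_corr(frames_ntg / frames_ntg.mean(axis=1, keepdims=True))

print(f"true {rho_true:.3f}  naive {naive:.3f}  gain-normalized {corrected:.3f}")
```

Even a 2% gain drift dominates the correlation here, which is why an estimator that conditions out the frame-wide gain is needed before the LCF is computed.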


PLoS ONE ◽  
2021 ◽  
Vol 16 (11) ◽  
pp. e0258581
Author(s):  
Amanda M. E. D’Andrea ◽  
Vera L. D. Tomazella ◽  
Hassan M. Aljohani ◽  
Pedro L. Ramos ◽  
Marco P. Almeida ◽  
...  

This article focuses on the reliability analysis of multiple identical systems that can fail repeatedly over time. A repairable system is one that can be restored to an operating state after a failure. Assuming minimal repair with a power-law failure intensity, a Bayesian approach is used to estimate the unknown parameters. The Bayesian estimators are obtained using two objective priors, known as the Jeffreys and reference priors. We prove that the obtained reference prior is also a matching prior for both parameters, i.e., the resulting credible intervals have accurate frequentist coverage, while the Jeffreys prior returns unbiased estimates of the parameters. To illustrate the applicability of our Bayesian estimators, a new data set on the failures of Brazilian sugarcane harvesters is considered.
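For context, the classical frequentist counterpart of this problem is well known: for a failure-truncated power-law process, the conditional MLE of the shape parameter β is biased upward by a factor n/(n-2), and (n-2)β̂/n is unbiased. The sketch below (a simulation check of that classical result, not the authors' Bayesian estimators) generates event times by mapping unit-rate Poisson arrivals through the cumulative intensity (t/θ)^β:

```python
import numpy as np

rng = np.random.default_rng(1)

beta, theta, n, reps = 2.0, 1.0, 10, 4000

mles, corrected = [], []
for _ in range(reps):
    # (T_i/theta)^beta are arrival times of a unit-rate Poisson process,
    # so T_i = theta * (E_1 + ... + E_i)^(1/beta) with E_j ~ Exp(1).
    gamma = np.cumsum(rng.exponential(size=n))
    t = theta * gamma ** (1.0 / beta)
    b_hat = n / np.sum(np.log(t[-1] / t[:-1]))  # conditional MLE, failure truncated
    mles.append(b_hat)
    corrected.append((n - 2) / n * b_hat)       # classical unbiased correction

print(f"true beta {beta}  mean MLE {np.mean(mles):.3f}  mean corrected {np.mean(corrected):.3f}")
```

With n = 10 the MLE overestimates β by roughly 25% on average, while the corrected estimator is centered on the truth, mirroring the unbiasedness property the abstract attributes to the Jeffreys prior.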


Author(s):  
Jan Derrfuss ◽  
Claudia Danielmeier ◽  
Tilmann A. Klein ◽  
Adrian G. Fischer ◽  
Markus Ullsperger

Abstract We typically slow down after committing an error, an effect termed post-error slowing (PES). Traditionally, PES has been calculated by subtracting post-correct from post-error RTs. Dutilh et al. (Journal of Mathematical Psychology, 56(3), 208-216, 2012), however, showed that PES values calculated in this way are potentially biased. They therefore proposed computing robust PES scores by subtracting pre-error RTs from post-error RTs. Based on data from a large-scale study using the flanker task, we show that both traditional and robust PES estimates can be biased. The source of the bias is differential imbalances in the percentages of congruent vs. incongruent post-correct, pre-error, and post-error trials. Specifically, we found that post-correct, pre-error, and post-error trials were more likely to be congruent than incongruent, with the size of the imbalance depending on the trial type as well as the length of the response-stimulus interval (RSI). In our study, for trials preceded by a 700-ms RSI, the percentages of congruent trials were 62% for post-correct trials, 66% for pre-error trials, and 56% for post-error trials. Relative to unbiased estimates, these imbalances inflated traditional PES estimates by 37% (9 ms) and robust PES estimates by 42% (16 ms) when individual-participant means were calculated. When individual-participant medians were calculated, the biases were even more pronounced (40% and 50% inflation, respectively). To obtain unbiased PES scores for interference tasks, we propose computing unweighted individual-participant means: first calculate mean RTs for congruent and incongruent trials separately, then average the congruent and incongruent mean RTs to obtain means for post-correct, pre-error, and post-error trials.
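The unweighted-mean procedure described at the end of the abstract can be sketched in a few lines; the function name and the toy RTs below are illustrative, not taken from the study:

```python
from statistics import mean

def unweighted_mean(trials):
    """Mean RT for one trial type (post-correct, pre-error, or post-error):
    average the congruent and incongruent mean RTs with equal weight, so an
    imbalance in trial counts cannot pull the estimate toward one condition."""
    by_cond = {}
    for rt, congruency in trials:
        by_cond.setdefault(congruency, []).append(rt)
    return mean(mean(rts) for rts in by_cond.values())

# Toy post-correct trials: three fast congruent vs one slow incongruent trial.
post_correct = [(500, "c"), (500, "c"), (500, "c"), (600, "i")]

naive = mean(rt for rt, _ in post_correct)  # 525 ms, pulled toward congruent
unweighted = unweighted_mean(post_correct)  # 550 ms, counts no longer matter

print(naive, unweighted)
```

A PES score is then the difference between such unweighted means for post-error and post-correct (traditional) or pre-error (robust) trials, which removes the congruency-imbalance bias the abstract quantifies.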


2021 ◽  
Author(s):  
Михаил Михайлов ◽  

Heterosis in maize: toward the prevalent type of intralocus interactions. In a biometrical genetic analysis of maize productivity performed according to the North Carolina III design, unbiased estimates of the average degree of dominance, in which the effect of linkage is eliminated on average, were calculated. The hybrids Rf7×Ku123 and MK01×A619 were studied, and unbiased estimates were also computed for four more hybrids from literature data. For genes controlling productivity, unbiased estimates of the average degree of dominance ranged from 0.65 to 0.87 across hybrids. This result indicates that the heterosis effect in maize is more likely caused by dominant interactions than by over-dominant ones.


2021 ◽  
Author(s):  
Andrew David Grotzinger ◽  
Javier de la Fuente ◽  
Michel G Nivard ◽  
Elliot M Tucker-Drob

SNP heritability is a fundamental quantity in the genetic analysis of complex traits. For binary phenotypes, in which the continuous distribution of risk in the population is unobserved, observed-scale heritabilities must be transformed to the more interpretable liability-scale. We demonstrate here that the field standard approach for performing the liability conversion can downwardly bias estimates by as much as ~20% in simulation and ~30% in real data. These attenuated estimates stem from the standard approach failing to appropriately account for varying levels of ascertainment across the cohorts comprising the meta-analysis. We formally derive a simple procedure for incorporating cohort-specific ascertainment based on the summation of effective sample sizes across the contributing cohorts, and confirm via simulation that it produces unbiased estimates of liability-scale heritability.
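A minimal sketch of the standard observed-to-liability transformation may help here. The formula below is the widely used conversion (heritability scaled by K²(1-K)² / (z²P(1-P)), with K the population prevalence, P the sample case proportion, and z the normal density at the liability threshold); the abstract's point is that when effective sample sizes are summed across cohorts, P is fixed at 0.5 rather than taken from the raw meta-analysis counts. The numbers are illustrative:

```python
from statistics import NormalDist

def liability_h2(h2_obs, K, P):
    """Convert observed-scale SNP heritability to the liability scale.
    K: population prevalence; P: sample case proportion
    (P = 0.5 under the summed-effective-sample-size convention)."""
    nd = NormalDist()
    t = nd.inv_cdf(1 - K)  # liability threshold for prevalence K
    z = nd.pdf(t)          # normal density at the threshold
    return h2_obs * K**2 * (1 - K)**2 / (z**2 * P * (1 - P))

# Illustrative numbers: 1% population prevalence, effective-N convention P = 0.5.
h2_liab = liability_h2(h2_obs=0.10, K=0.01, P=0.5)
print(f"{h2_liab:.3f}")
```

Feeding a cohort-weighted sample prevalence into P when the summary statistics were computed on effective sample sizes is exactly the kind of mismatch that produces the downward bias the abstract describes.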


Trials ◽  
2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Lea Multerer ◽  
Tracy R. Glass ◽  
Fiona Vanobberghen ◽  
Thomas Smith

Abstract Background In cluster randomized trials (CRTs) of interventions against malaria, mosquito movement between households ultimately leads to contamination between intervention and control arms, unless they are separated by wide buffer zones. Methods This paper proposes a method for adjusting estimates of intervention effectiveness for contamination and for estimating a contamination range between intervention arms, the distance over which contamination measurably biases the estimate of effectiveness. A sigmoid function is fitted to malaria prevalence or incidence data as a function of the distance of households to the intervention boundary, stratified by intervention status and including a random effect for the clustering. The method is evaluated in a simulation study, corresponding to a range of rural settings with varying intervention effectiveness and contamination range, and applied to a CRT of insecticide treated nets in Ghana. Results The simulations indicate that the method leads to approximately unbiased estimates of effectiveness. Precision decreases with increasing mosquito movement, but the contamination range is much smaller than the maximum distance traveled by mosquitoes. For the method to provide precise and approximately unbiased estimates, at least 50% of the households should be at distances greater than the estimated contamination range from the discordant intervention arm. Conclusions A sigmoid approach provides an appropriate analysis for a CRT in the presence of contamination. Outcome data from boundary zones should not be discarded but used to provide estimates of the contamination range. This gives an alternative to “fried egg” designs, which use large clusters (increasing costs) and exclude buffer zones to avoid bias.
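The sigmoid idea can be illustrated compactly. The sketch below (a toy stand-in: plain least squares on simulated binary outcomes via a grid search, not the paper's random-effects model; all parameter names and values are assumptions) fits prevalence as a sigmoid function of the signed distance of a household to the intervention boundary:

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid_prev(d, p_control, p_interv, scale):
    """Expected prevalence at signed distance d from the boundary
    (d > 0: intervention side). 'scale' governs the contamination range."""
    return p_control + (p_interv - p_control) / (1.0 + np.exp(-d / scale))

# Simulated households: signed distances (km) and binary malaria outcomes.
d = rng.uniform(-3, 3, size=4000)
p = sigmoid_prev(d, p_control=0.40, p_interv=0.20, scale=0.5)
y = (rng.random(4000) < p).astype(float)

# Grid-search least-squares fit over the three sigmoid parameters.
best = None
for pc in np.linspace(0.1, 0.5, 21):
    for pi in np.linspace(0.1, 0.5, 21):
        for s in np.linspace(0.1, 2.0, 20):
            sse = np.sum((y - sigmoid_prev(d, pc, pi, s)) ** 2)
            if best is None or sse < best[0]:
                best = (sse, pc, pi, s)

_, pc_hat, pi_hat, s_hat = best
print(f"control {pc_hat:.2f}  intervention {pi_hat:.2f}  scale {s_hat:.2f} km")
```

The fitted asymptotes give contamination-adjusted arm prevalences, and the scale parameter indicates how far from the boundary outcomes remain measurably contaminated, which is the quantity the paper uses to define the contamination range.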


Mathematics ◽  
2021 ◽  
Vol 9 (18) ◽  
pp. 2200
Author(s):  
Ilya V. Sysoev ◽  
Danil D. Kulminskiy ◽  
Vladimir I. Ponomarenko ◽  
Mikhail D. Prokhorov

An approach to the inverse problem of reconstructing a network of time-delay oscillators from their time series is proposed and studied for the case of a nonstationary connectivity matrix. Adaptive couplings have not previously been considered for this particular reconstruction problem. The problem of coupling identification is reduced to linear optimization of a specially constructed target function. This function is introduced by taking into account the continuity of the oscillators' nonlinear functions and does not exploit the mean squared difference between the model and observed time series. The proposed approach minimizes the number of estimated parameters and gives asymptotically unbiased estimates for a large class of nonlinear functions. Its efficiency is demonstrated for a network composed of time-delayed feedback oscillators with a random architecture of constant and adaptive couplings, in the absence of a priori knowledge about the connectivity structure and its evolution. The proposed technique extends the application area of the considered class of methods.


animal ◽  
2021 ◽  
Vol 15 (9) ◽  
pp. 100321
Author(s):  
David Kenny ◽  
Craig P. Murphy ◽  
Roy D. Sleator ◽  
Ross D. Evans ◽  
Donagh P. Berry

2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Simon L. Turner ◽  
Andrew B. Forbes ◽  
Amalia Karahalios ◽  
Monica Taljaard ◽  
Joanne E. McKenzie

Abstract Background Interrupted time series (ITS) studies are frequently used to evaluate the effects of population-level interventions or exposures. However, the performance of statistical methods for this design has received relatively little attention. Methods We simulated continuous data to compare the performance of a set of statistical methods under a range of scenarios, which included different level and slope changes, varying lengths of series, and varying magnitudes of lag-1 autocorrelation. We also examined the performance of the Durbin-Watson (DW) test for detecting autocorrelation. Results All methods yielded unbiased estimates of the level and slope changes over all scenarios. The magnitude of autocorrelation was underestimated by all methods; however, restricted maximum likelihood (REML) yielded the least biased estimates. Underestimation of autocorrelation led to standard errors that were too small and coverage less than the nominal 95%. All methods performed better with longer time series, except for ordinary least squares (OLS) in the presence of autocorrelation and Newey-West for high values of autocorrelation. The DW test for the presence of autocorrelation performed poorly except for long series and large autocorrelation. Conclusions Of the methods evaluated, OLS was preferred in series with fewer than 12 points, while in longer series REML was preferred. The DW test should not be relied upon to detect autocorrelation, except when the series is long. Care is needed when interpreting results from all methods, given that confidence intervals will generally be too narrow. Further research is required to develop better performing methods for ITS, especially for short series.
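The level- and slope-change parameterization that such ITS analyses estimate can be sketched as a segmented OLS regression. The toy below is noise-free so the fit recovers the parameters exactly; the specific numbers are illustrative, not from the paper's simulations:

```python
import numpy as np

# Segmented regression for an interrupted time series:
# y_t = b0 + b1*t + b2*post_t + b3*(t - t0)*post_t, interruption at t0.
T, t0 = 24, 12
t = np.arange(T, dtype=float)
post = (t >= t0).astype(float)

b_true = np.array([10.0, 0.5, -3.0, 1.0])  # baseline, slope, level change, slope change
X = np.column_stack([np.ones(T), t, post, (t - t0) * post])
y = X @ b_true                              # noise-free series for illustration

b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(b_ols, 3))
```

With autocorrelated errors the OLS point estimates of b2 and b3 remain unbiased, as the abstract reports, but their standard errors are understated unless the autocorrelation is modeled, which is why REML is preferred for longer series.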

