Actual error rates in linear discrimination of spatial Gaussian data in terms of semivariograms

Author(s):  
Kestutis Ducinskas ◽  
Lina Dreiziene

2018 ◽  
Vol 75 (19) ◽  
pp. 1460-1466 ◽  
Author(s):  
Jessica M. Zacher ◽  
Francesca E. Cunningham ◽  
Xinhua Zhao ◽  
Muriel L. Burk ◽  
Von R. Moore ◽  
...  

Abstract
Purpose: Results of a study to estimate the prevalence of look-alike/sound-alike (LASA) medication errors through analysis of Veterans Affairs (VA) administrative data are reported.
Methods: Veterans with at least 2 filled prescriptions for 1 medication in 20 LASA drug pairs during the period April 2014–March 2015 and no history of use of both medications in the preceding 6 months were identified. First occurrences of potential LASA errors were identified by analyzing dispensing patterns and documented diagnoses. For 7 LASA drug pairs, potential errors were evaluated via chart review to determine if an actual error occurred.
Results: Among LASA drug pairs with overlapping indications, the pairs associated with the highest potential-error rates, by percentage of treated patients, were tamsulosin and terazosin (3.05%), glipizide and glyburide (2.91%), extended- and sustained-release formulations of bupropion (1.53%), and metoprolol tartrate and metoprolol succinate (1.48%). Among pairs with distinct indications, the pairs associated with the highest potential-error rates were tramadol and trazodone (2.20%) and bupropion and buspirone (1.31%). For LASA drug pairs found to be associated with actual errors, the estimated error rates were as follows: lamivudine and lamotrigine, 0.003% (95% confidence interval [CI], 0–0.01%); carbamazepine and oxcarbazepine, 0.03% (95% CI, 0–0.09%); and morphine and hydromorphone, 0.02% (95% CI, 0–0.05%).
Conclusion: Through the use of administrative databases, potential LASA errors that could be reviewed for an actual error via chart review were identified. While a high rate of potential LASA errors was detected, the number of actual errors identified was low.
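For intuition about confidence intervals like those quoted above for very small error proportions, here is a minimal sketch using the Wilson score interval. This is one standard choice for proportions near zero; the study does not state which interval method was used, and the counts in the example are made up for illustration.

```python
import math

def wilson_interval(errors: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a proportion errors/n.
    One common choice for rare-event proportions; not necessarily
    the method used in the study above."""
    if n == 0:
        return (0.0, 0.0)
    p = errors / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (max(0.0, centre - half), min(1.0, centre + half))

# Illustrative numbers only: 2 confirmed errors among 10,000 treated patients.
lo, hi = wilson_interval(2, 10_000)
print(f"error rate CI: {100 * lo:.4f}% - {100 * hi:.4f}%")
```

Unlike the naive Wald interval, the Wilson interval never drops below zero, which matters for rates of the order of 0.003%.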


SIMULATION ◽  
2020 ◽  
Vol 97 (1) ◽  
pp. 33-43
Author(s):  
Jack P C Kleijnen ◽  
Wen Shi

Because computers (except for parallel computers) generate simulation outputs sequentially, we recommend sequential probability ratio tests (SPRTs) for the statistical analysis of these outputs. However, until now simulation analysts have ignored SPRTs. To change this situation, we review SPRTs for the simplest case; namely, choosing between two hypothesized values for the mean simulation output. For this case, the classic SPRT of Wald (Wald A. Sequential tests of statistical hypotheses. Ann Math Stat 1945; 16: 117–186) allows general types of distribution, including normal distributions with known variances. A modification permits unknown variances that are estimated. Hall (Hall WJ. Some sequential analogs of Stein’s two-stage test. Biometrika 1962; 49: 367–378) developed an SPRT that assumes normal distributions with unknown variances estimated from a pilot sample. A modification uses a fully sequential variance estimator. In this paper, we quantify the performance of these SPRTs in several Monte Carlo experiments. In experiment #1, the simulation outputs are normal. Whereas Wald’s SPRT with estimated variance yields error rates that are too high, Hall’s original and modified SPRTs are “conservative”; that is, the actual error rates are smaller than the prespecified (nominal) rates. Furthermore, our experiment shows that the most efficient SPRT is Hall’s modified SPRT. In experiment #2, we estimate the robustness of these SPRTs for non-normal output. For both experiments, we provide details of their design and analysis; these details may also be useful for simulation experiments in general.
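As a concrete illustration of the simplest case discussed above, here is a minimal sketch of Wald's classic SPRT for two hypothesized means of normal output with known variance. This is our own sketch under those textbook assumptions, not the authors' code, and the parameter names are ours.

```python
import math
import random

def wald_sprt(stream, mu0, mu1, sigma, alpha=0.05, beta=0.05):
    """Wald's SPRT for H0: mean = mu0 vs H1: mean = mu1 on normal
    observations with known standard deviation sigma.
    Returns (decision, number_of_observations_used)."""
    upper = math.log((1 - beta) / alpha)   # cross above -> accept H1
    lower = math.log(beta / (1 - alpha))   # cross below -> accept H0
    llr, n = 0.0, 0
    for x in stream:
        n += 1
        # log-likelihood-ratio increment for one normal observation
        llr += (mu1 - mu0) / sigma**2 * (x - (mu0 + mu1) / 2)
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "inconclusive", n

random.seed(1)
# Outputs truly generated under H1 (mean 1), so H1 should usually be accepted,
# typically after only a handful of observations.
data = (random.gauss(1.0, 1.0) for _ in range(10_000))
decision, n = wald_sprt(data, mu0=0.0, mu1=1.0, sigma=1.0)
print(decision, n)
```

Replacing the known `sigma` with a running estimate gives the "estimated variance" variant whose inflated actual error rates the paper measures.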


2010 ◽  
Vol 51 ◽  
Author(s):  
Lijana Stabingienė ◽  
Kęstutis Dučinskas

In spatial classification it is usually assumed that feature observations, given the labels, are independently distributed. We relax this assumption by proposing a stationary Gaussian random field model for the feature observations. The labels are assumed to follow a Discrete Random Field (DRF) model. A formula for the exact error rate based on the Bayes discriminant function (BDF) is derived. In the case of partial parametric uncertainty (mean parameters and variance are unknown), an approximation of the expected error rate associated with the plug-in BDF is also derived. The dependence of the considered error rates on the values of the range and clustering parameters is investigated numerically for training locations that are second-order neighbors of the location of the observation to be classified.
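For reference, under the usual independence assumption the exact error rate of the BDF for two equal-prior Gaussian populations has the familiar closed form Φ(−Δ/2), with Δ the Mahalanobis distance between the class means. The sketch below computes only this independent-data baseline; the paper's exact formula under the Gaussian random field and DRF models is not reproduced here.

```python
import math

def bayes_error_independent(delta: float) -> float:
    """Bayes error rate Phi(-delta/2) for two equal-prior Gaussian
    populations separated by Mahalanobis distance delta, assuming
    independent feature observations (the baseline the paper relaxes)."""
    return 0.5 * math.erfc(delta / (2 * math.sqrt(2)))

# Baseline value at delta = 1; spatial correlation in the random field
# model shifts the exact error rate away from this number.
print(bayes_error_independent(1.0))
```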


2017 ◽  
Vol 13 (1) ◽  
Author(s):  
Inna Gerlovina ◽  
Mark J. van der Laan ◽  
Alan Hubbard

Abstract
Multiple comparisons and small sample sizes, common characteristics of many types of “Big Data” including those produced by genomic studies, present specific challenges that affect the reliability of inference. The use of multiple testing procedures necessitates the calculation of very small tail probabilities of a test statistic’s distribution. Results based on large deviation theory provide a formal condition, linking the number of tests and the sample size, that is necessary to guarantee error rate control at practical sample sizes; this condition, however, is rarely satisfied. Using methods based on Edgeworth expansions (relying especially on the work of Peter Hall), we explore the impact on actual error rates of departures of sampling distributions from typical assumptions. Our investigation illustrates how far the actual error rates can be from the declared nominal levels, suggesting potentially widespread problems with error rate control, specifically excessive false positives. This is an important factor contributing to the “reproducibility crisis”. We also review some other commonly used methods (such as permutation methods and methods based on finite sampling inequalities) as applied to multiple testing with small-sample data. We point out that Edgeworth expansions, which provide higher-order approximations to the sampling distribution, offer a promising direction for data analysis that could improve the reliability of studies relying on large numbers of comparisons with modest sample sizes.
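To make the gap between nominal and actual levels concrete, here is a minimal sketch of a one-term Edgeworth correction to the upper tail of the standardized sample mean for skewed data. This is a textbook first-order expansion for the standardized (not Studentized) mean, written by us for illustration; the paper's higher-order analysis is not reproduced.

```python
import math

def normal_sf(x: float) -> float:
    """Upper tail 1 - Phi(x) of the standard normal."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def normal_pdf(x: float) -> float:
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def edgeworth_tail(x: float, skew: float, n: int) -> float:
    """One-term Edgeworth approximation to P(sqrt(n)(xbar - mu)/sigma > x)
    for i.i.d. data with skewness `skew`. (The Studentized version has
    different polynomial coefficients.)"""
    return normal_sf(x) + normal_pdf(x) * skew * (x * x - 1) / (6 * math.sqrt(n))

# Nominal 1% cutoff (x = 2.326) under normality, vs the corrected tail
# for data with skewness 2 at n = 30: the actual level is well above 1%.
x = 2.326
print(normal_sf(x), edgeworth_tail(x, skew=2.0, n=30))
```

With skewness 2 and n = 30 the corrected tail probability is roughly 1.7 times the nominal 1%, which is exactly the kind of inflation of actual error rates the abstract describes; the effect compounds at the far smaller cutoffs that multiple testing requires.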


2018 ◽  
Vol 29 (1) ◽  
pp. 142-148 ◽  
Author(s):  
Abdurrahman Coskun ◽  
Mustafa Serteser ◽  
Ibrahim Ünsal

Six Sigma methodology has been used successfully in industry since the mid-1980s. Unfortunately, the same success has not been achieved in laboratory medicine. Although the multidisciplinary structure of laboratory medicine is an important factor, the concept and statistical principles of Six Sigma have also not been transferred correctly from industry to laboratory medicine. Furthermore, the performance of instruments and methods used in laboratory medicine is calculated with a modified equation that produces a value lower than the actual level. This puts unnecessary and increasing pressure on manufacturers in the market. We concluded that accurate implementation of the sigma metric in laboratory medicine is essential to protect both manufacturers, by calculating the actual performance level of instruments, and patients, by calculating the actual error rates.
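For context, the sigma metric conventionally used in laboratory medicine is (TEa − |bias|) / CV, with the allowable total error (TEa), bias, and coefficient of variation (CV) all expressed in percent. The sketch below computes this conventional form with made-up example values; the paper's modified equation and its proposed correction are not reproduced here.

```python
def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
    """Conventional sigma metric for a laboratory method:
    (TEa - |bias|) / CV, all inputs in percent."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# Illustrative values only: TEa 10%, bias 2%, CV 2%  ->  sigma = 4.0,
# i.e. a "4-sigma" method by the conventional calculation.
print(sigma_metric(10.0, 2.0, 2.0))
```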


2021 ◽  
Vol 62 ◽  
pp. 36-43
Author(s):  
Eglė Zikarienė ◽  
Kęstutis Dučinskas

In this paper, spatial data specified by auto-beta models are analysed by considering the supervised problem of classifying a feature observation into one of two populations. Two classification rules, based on the conditional Bayes discriminant function (BDF) and the linear discriminant function (LDF), are proposed. These classification rules are critically compared by the values of their actual error rates in a simulation study.
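Estimating an actual error rate by simulation, as done above, can be sketched in miniature. The toy below trains a plug-in linear discriminant on two independent univariate Gaussian classes and estimates its actual (conditional) error rate on fresh test data; it is our own illustrative sketch of the general procedure only, and does not reproduce the paper's auto-beta spatial model or conditional BDF.

```python
import random
import statistics

def plug_in_ldf_error(mu0, mu1, sigma, n_train=30, n_test=20_000, seed=0):
    """Monte Carlo estimate of the actual error rate of a plug-in linear
    discriminant for two equal-prior univariate Gaussian classes.
    Independent-data toy example, not the paper's spatial setting."""
    rng = random.Random(seed)
    # Training stage: estimate each class mean from a sample.
    m0 = statistics.fmean(rng.gauss(mu0, sigma) for _ in range(n_train))
    m1 = statistics.fmean(rng.gauss(mu1, sigma) for _ in range(n_train))
    threshold = (m0 + m1) / 2  # plug-in LDF boundary for equal priors
    # Test stage: actual (conditional) error rate given the trained rule.
    errors = 0
    for _ in range(n_test):
        label = rng.randrange(2)
        x = rng.gauss(mu1 if label else mu0, sigma)
        predicted = 1 if (x - threshold) * (m1 - m0) > 0 else 0
        errors += predicted != label
    return errors / n_test

# Classes one Bayes-error "unit" apart: the true optimum is about 15.9%,
# and the plug-in rule's actual error sits slightly above it.
print(plug_in_ldf_error(0.0, 2.0, 1.0))
```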

