Systematic error detection for RFID reliability

Author(s):  
S. Inoue ◽  
D. Hagiwara ◽  
H. Yasuura
1992 ◽  
Vol 38 (2) ◽  
pp. 204-210 ◽  
Author(s):  
Aristides T Hatjimihail

Abstract I have developed an interactive microcomputer simulation program for the design, comparison, and evaluation of alternative quality-control (QC) procedures. The program estimates the probabilities for rejection under different conditions of random and systematic error when these procedures are used and plots their power function graphs. It also estimates the probabilities for detection of critical errors, the defect rate, and the test yield. To allow a flexible definition of the QC procedures, it includes an interpreter. Various characteristics of the analytical process and the QC procedure can be user-defined. The program extends the concepts of the probability for error detection and of the power function to describe the results of the introduction of error between runs and within a run. The usefulness of this approach is illustrated with some examples.
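The simulation approach described above — repeatedly generating control results under a known amount of error and counting rule rejections — can be sketched briefly. The rule, run size, and error grid below are illustrative assumptions, not the program's actual parameters:

```python
import random

def p_rejection_13s(systematic_error_sd, n_per_run=2, n_runs=10_000, seed=0):
    """Monte Carlo estimate of the probability that a run is rejected by a
    1-3s rule (any control value beyond +/-3 SD) when a systematic error of
    the given size, in SD units, is present. Illustrative sketch only."""
    rng = random.Random(seed)
    rejected = 0
    for _ in range(n_runs):
        # Simulated control values: shifted by the systematic error, unit SD.
        values = [rng.gauss(systematic_error_sd, 1.0) for _ in range(n_per_run)]
        if any(abs(v) > 3.0 for v in values):
            rejected += 1
    return rejected / n_runs

# A power-function table: probability of rejection vs. size of systematic error.
power = {se: p_rejection_13s(se) for se in (0.0, 1.0, 2.0, 3.0, 4.0)}
```

Plotting `power` against the error size gives the power function graph; the value at zero error is the probability of false rejection.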


1985 ◽  
Vol 31 (2) ◽  
pp. 206-212 ◽  
Author(s):  
A S Blum

Abstract I describe a program for definitive comparison of different quality-control statistical procedures. A microcomputer simulates quality-control results generated by repetitive analytical runs. It applies various statistical rules to each result, tabulating rule breaks to evaluate rules as routinely applied by the analyst. The process repeats with increasing amounts of random and systematic error. Rate of false rejection and true error detection for currently popular statistical procedures were comparatively evaluated together with a new multirule procedure described here. The nature of the analyst's response to out-of-control signals was also evaluated. A single-rule protocol that is as effective as the multirule protocol of Westgard et al. (Clin Chem 27:493, 1981) is reported.
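The multirule procedure of Westgard et al. cited above combines several simple counting rules on recent control values. A minimal sketch of a subset of those rules (1-3s, 2-2s, R-4s, 10x), applied to values expressed in SD units from the target, might look like this; the function name and the rule subset are illustrative, not Blum's program:

```python
def westgard_reject(history):
    """Apply a subset of the Westgard multirule procedure to a list of
    control values in SD units (most recent last). Returns the name of
    the first violated rule, or None. A sketch, not the full protocol."""
    if not history:
        return None
    if abs(history[-1]) > 3.0:           # 1-3s: one value beyond 3 SD
        return "1-3s"
    if len(history) >= 2:
        a, b = history[-2], history[-1]
        if abs(a) > 2.0 and abs(b) > 2.0 and a * b > 0:
            return "2-2s"                # two consecutive beyond 2 SD, same side
        if (a > 2.0 and b < -2.0) or (a < -2.0 and b > 2.0):
            return "R-4s"                # range of two consecutive exceeds 4 SD
    if len(history) >= 10:
        window = history[-10:]
        if all(v > 0 for v in window) or all(v < 0 for v in window):
            return "10x"                 # ten consecutive on the same side
    return None
```

Feeding simulated runs with increasing random and systematic error through such a function, and tabulating the rule breaks, is the comparison strategy the abstract describes.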


2011 ◽  
Vol 12 (1) ◽  
pp. 25 ◽  
Author(s):  
Plamen Dragiev ◽  
Robert Nadon ◽  
Vladimir Makarenkov

2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Hikmet Can Çubukçu

Abstract Objectives: The present study set out to build a machine learning model that incorporates conventional quality control (QC) rules, the exponentially weighted moving average (EWMA), and the cumulative sum (CUSUM) into a random forest (RF) algorithm to achieve better performance, and to evaluate the models' performances using computer simulation to aid laboratory professionals in QC procedure planning. Methods: Conventional QC rules, EWMA, CUSUM, and RF models were implemented on the simulation data using an in-house algorithm. The models' performances were evaluated on 170,000 simulated QC results using outcome metrics including the probability of error detection (Ped), probability of false rejection (Pfr), average run length (ARL), and power graphs. Results: The highest Pfr (0.0404) belonged to the 1–2s rule. The 1–3s rule could not detect errors with a Ped of 0.9 up to 4 SD of systematic error. The random forest model had the highest Ped for systematic errors lower than 1 SD. However, the model's ARLs require either combining the RF model with conventional QC rules that have lower ARLs or using more than one QC measurement. Conclusions: The RF model presented in this study showed acceptable Ped for most degrees of systematic error. The outcome metrics established in this study will help laboratory professionals in planning internal QC.
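The EWMA and CUSUM schemes named in the abstract are standard control charts on standardized QC values. A minimal sketch of both, returning the index of the first out-of-control signal; the smoothing constant, reference value, and decision limits below are common textbook defaults, not the study's tuning:

```python
import math

def ewma_signal(values, lam=0.2, L=3.0):
    """EWMA chart on standardized QC values; returns the 1-based index of
    the first out-of-control point, or None. Illustrative defaults."""
    z = 0.0
    for i, x in enumerate(values, 1):
        z = lam * x + (1 - lam) * z
        # Exact time-varying control limit for the EWMA statistic.
        limit = L * math.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)))
        if abs(z) > limit:
            return i
    return None

def cusum_signal(values, k=0.5, h=4.0):
    """Tabular (two-sided) CUSUM on standardized QC values; returns the
    1-based index of the first signal, or None."""
    hi = lo = 0.0
    for i, x in enumerate(values, 1):
        hi = max(0.0, hi + x - k)   # accumulates upward shifts beyond k
        lo = max(0.0, lo - x - k)   # accumulates downward shifts beyond k
        if hi > h or lo > h:
            return i
    return None
```

Averaging the returned signal index over many simulated error-affected sequences gives the ARL metric the study uses to compare the models.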

