Concordance rate of a four-quadrant plot for repeated measurements

2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Mayu Hiraishi ◽  
Kensuke Tanioka ◽  
Toshio Shimokawa

Abstract Background To assess the equivalence between a new clinical measurement method and a standard method, the four-quadrant plot and its concordance rate are used in clinical practice, along with Bland-Altman analysis. The conventional concordance rate does not consider the correlation among the data within individual subjects, which may affect its proper evaluation. Methods We propose a new concordance rate for the four-quadrant plot based on the multivariate normal distribution that takes into account the covariance within each individual subject. The proposed concordance rate is formulated as the conditional probability of agreement. It contains a parameter that sets the minimum number of concordant measurements between the two methods regarded as agreement, which allows flexibility in the interpretation of the results. Results In numerical simulations, the AUC value of the proposed method was 0.967, while that of the conventional concordance rate was 0.938. In an application to a real example, the AUC value of the proposed method was 0.999 and that of the conventional concordance rate was 0.964. Conclusion From the results of the numerical simulations and the real example, the proposed concordance rate showed better accuracy and higher diagnosability than the conventional approach.
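For orientation, the following is a minimal sketch of how a conventional four-quadrant concordance rate is typically computed from paired changes. It is not the multivariate-normal estimator proposed in the paper; the `conventional_concordance_rate` helper, the central-square exclusion-zone convention, and the simulated data are all illustrative assumptions.

```python
import numpy as np

def conventional_concordance_rate(delta_ref, delta_new, exclusion_zone=0.5):
    """Conventional four-quadrant concordance rate.

    delta_ref, delta_new : paired changes between consecutive measurements
                           by the reference and the new method.
    exclusion_zone       : half-width of a central square inside which small
                           changes are ignored (same unit as the data);
                           conventions for this zone vary between studies.
    """
    delta_ref = np.asarray(delta_ref, dtype=float)
    delta_new = np.asarray(delta_new, dtype=float)

    # Keep only pairs outside the central exclusion zone.
    keep = (np.abs(delta_ref) > exclusion_zone) | (np.abs(delta_new) > exclusion_zone)
    ref, new = delta_ref[keep], delta_new[keep]

    # Concordant pairs are those in which both changes share the same sign.
    concordant = np.sign(ref) == np.sign(new)
    return concordant.mean()

# Example with simulated paired changes
rng = np.random.default_rng(0)
d_ref = rng.normal(0, 1, 200)
d_new = d_ref + rng.normal(0, 0.5, 200)
print(f"concordance rate: {conventional_concordance_rate(d_ref, d_new):.3f}")
```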

2019 ◽  
Vol 29 (3) ◽  
pp. 778-796 ◽  
Author(s):  
Patrick Taffé

Recently, a new estimation procedure was developed to assess the bias and precision of a new measurement method relative to a reference standard. However, the author did not develop confidence bands around the bias and standard deviation curves. The goal of this paper is therefore to extend that methodology in several important directions: first, by developing simultaneous confidence bands for the various estimated parameters, to allow formal comparisons between different measurement methods; second, by proposing a new index of agreement; and third, by providing a series of new graphs to help the investigator assess bias, precision, and agreement between the two measurement methods. The methodology requires repeated measurements on each individual for at least one of the two measurement methods. It works very well for estimating the differential and proportional biases, even with as few as two to three measurements by one of the two methods and only one by the other. The repeated measurements need not come from the reference standard; they may come from either measurement method. This is a great advantage, as it may sometimes be more feasible to gather repeated measurements with the new measurement method.
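The sketch below is a heavily simplified illustration, not the author's estimator: it approximates each subject's latent true value by the mean of the reference method's repeats and reads the differential and proportional bias off an ordinary least-squares fit. The `bias_sketch` helper and the simulated data are hypothetical.

```python
import numpy as np

def bias_sketch(ref_repeats, new_single):
    """ref_repeats : list of arrays, repeated reference measurements per subject
       new_single  : one measurement by the new method per subject

    Returns (differential bias, proportional bias) under the simplifying
    assumption that the subject mean of the reference repeats is the true value.
    """
    true_hat = np.array([np.mean(r) for r in ref_repeats])
    y = np.asarray(new_single, dtype=float)
    slope, intercept = np.polyfit(true_hat, y, 1)
    return intercept, slope - 1.0

# Simulated example: unbiased reference with 3 repeats, biased new method
rng = np.random.default_rng(1)
truth = rng.uniform(10, 50, 100)
ref = [t + rng.normal(0, 2, 3) for t in truth]
new = truth * 1.1 + 4 + rng.normal(0, 2, 100)   # proportional + differential bias
print(bias_sketch(ref, new))                     # roughly (4, 0.1)
```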


1994 ◽  
Vol 77 (5) ◽  
pp. 1318-1325 ◽  
Author(s):  
Christa Hartmann ◽  
Desiré L Massart

Abstract The use of the plot originally proposed by Bland and Altman (1986, Lancet i, 307-310) for the comparison of two clinical measurement methods was investigated and compared with a new visual display based on principal component analysis. The characteristics of both displays are demonstrated for several computer-simulated situations. For the visual comparison of two methods, it is recommended to use both displays simultaneously, together with a plot of the results of method 2 against method 1.
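As a reference point for the two displays compared here, the sketch below computes the Bland-Altman quantities (mean difference and 95% limits of agreement) and the first principal axis of the paired results. The helper names and simulated data are illustrative only, not the simulations used in the paper.

```python
import numpy as np

def bland_altman_limits(m1, m2):
    """Mean difference and 95% limits of agreement for two paired methods."""
    d = np.asarray(m1, float) - np.asarray(m2, float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

def first_principal_axis(m1, m2):
    """Direction of the first principal component of the paired results
    (the axis a PCA-based display projects onto)."""
    P = np.column_stack([m1, m2]).astype(float)
    X = P - P.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[0]   # unit vector; close to (1, 1)/sqrt(2) when the methods agree

rng = np.random.default_rng(2)
x = rng.normal(100, 15, 80)
m1 = x + rng.normal(0, 3, 80)
m2 = x + 2 + rng.normal(0, 3, 80)    # method 2 reads about 2 units higher
print(bland_altman_limits(m1, m2))
print(first_principal_axis(m1, m2))
```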


2020 ◽  
Vol 9 (7) ◽  
pp. 2205
Author(s):  
Arne Ohlendorf ◽  
Alexander Leube ◽  
Siegfried Wahl

Advancements in the clinical measurement of refractive errors should lead to faster and more reliable measurements of such errors. This study investigated the agreement of the spherocylindrical prescriptions obtained with an objective method of measurement ("Aberrometry" (AR)) and two methods of subjective refinement ("Wavefront Refraction" (WR) and "Standard Refraction" (StdR)). One hundred adults aged 20–78 years participated in the study. Bland–Altman analysis of the right-eye measurements of the spherocylindrical refractive error (M) identified mean differences (±95% limits of agreement) between the measurement types of +0.36 D (±0.76 D) for WR vs. AR (t-test: p < 0.001), +0.35 D (±0.84 D) for StdR vs. AR (t-test: p < 0.001), and 0.0 D (±0.65 D) for StdR vs. WR (t-test: p < 0.001). Monocular visual acuity was 0.0 logMAR in 96% of the tested eyes when the refractive errors were corrected with the measurements from AR, indicating that only small differences between the different types of prescriptions are present.
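A small sketch of the quantities behind these numbers, under the assumption that M is the conventional spherical-equivalent power-vector component, M = sphere + cylinder/2. The prescriptions in the example are hypothetical values, not study data.

```python
import numpy as np

def spherical_equivalent(sphere, cylinder):
    """Power-vector component M (spherical equivalent) in dioptres."""
    return np.asarray(sphere, float) + np.asarray(cylinder, float) / 2.0

def mean_diff_and_loa(a, b):
    """Bland-Altman mean difference and half-width of the 95% limits of agreement."""
    d = np.asarray(a, float) - np.asarray(b, float)
    return d.mean(), 1.96 * d.std(ddof=1)

# Hypothetical prescriptions (D) from a subjective refinement and from aberrometry
wr_m = spherical_equivalent([-1.25, -0.50, -2.00], [-0.50, -0.75, -0.25])
ar_m = spherical_equivalent([-1.75, -0.75, -2.25], [-0.50, -0.50, -0.50])
bias, half_width = mean_diff_and_loa(wr_m, ar_m)
print(f"WR vs. AR: {bias:+.2f} D (±{half_width:.2f} D)")
```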


1994 ◽  
Vol 40 (3) ◽  
pp. 464-471 ◽  
Author(s):  
V M Chinchilli ◽  
W G Miller

Abstract A common procedure for evaluating a test method by comparison with another, well-accepted method has been to use a repeated measurements design, in which several individual subjects' specimens are assayed with both methods. We propose the use of the intrasubject relative mean square error, which is a function of the intrasubject relative bias and the coefficient of variation of the test method, as a measure of total error. We construct for each individual subject a score that is based on how well an individual's estimate of total error compares with a maximum allowable value. If the individual's score is > 100%, then that individual's estimate of total error exceeds the maximum allowable value. We present a distribution-free statistical methodology for evaluating the sample of scores. This involves the construction of an upper tolerance limit to determine whether the test method yields values of the total error that are acceptable for most of the population with some level of confidence. Our definition of total error is very different from that defined in the National Cholesterol Education Program (NCEP) guidelines. The NCEP bound for total error has three main problems: (a) it incorrectly assumes that the standard error of the estimated relative bias is the test coefficient of variation; (b) it incorrectly assumes that the individual estimated relative biases follow gaussian distributions; (c) it is based on requiring the relative bias of the average individual in the population to lie within prescribed limits, whereas we believe it is more important to require the total error for most of the individuals in the population, say 95%, to lie within prescribed limits.
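The sketch below illustrates the general idea under stated assumptions: the abstract does not give the exact score formula, so the form sqrt(relative bias^2 + CV^2) relative to the allowable limit, and the sample-maximum tolerance bound, are plausible stand-ins rather than the authors' procedure.

```python
import numpy as np

def subject_scores(rel_bias, cv, max_allowable):
    """Hypothetical per-subject score (%): estimated relative total error
    (assumed here to be sqrt(relative bias^2 + CV^2)) over the allowable limit."""
    total_err = np.sqrt(np.asarray(rel_bias, float) ** 2 + np.asarray(cv, float) ** 2)
    return 100.0 * total_err / max_allowable

def nonparametric_upper_tolerance(scores, coverage=0.95):
    """Distribution-free upper tolerance limit based on the sample maximum.
    Returns the limit and the confidence with which it covers the requested
    proportion of the population (1 - coverage**n)."""
    scores = np.sort(np.asarray(scores, float))
    n = scores.size
    confidence = 1.0 - coverage ** n
    return scores[-1], confidence

rng = np.random.default_rng(3)
scores = subject_scores(rng.normal(0.02, 0.01, 40), rng.uniform(0.02, 0.05, 40), 0.09)
limit, conf = nonparametric_upper_tolerance(scores)
print(f"upper tolerance limit = {limit:.1f}% "
      f"(covers 95% of subjects with {conf:.0%} confidence)")
# The test method would be judged acceptable if this limit stays below 100%.
```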


Author(s):  
David Vetturi ◽  
Matteo Lancini ◽  
Ileana Bodini

A designer often faces the problem of applying a suitable system of geometrical and dimensional tolerances to an assembly. The right solution is not unique; it depends on the chosen parameters. If the tolerances are to be optimized, several important factors must be taken into account, such as the effectiveness of each prescription, whether it is achievable, whether it can be verified, and how much its realization costs. In the authors' opinion, a statistical approach based on the Monte Carlo method is very useful when the tolerance chains are complex. This paper presents an application of this method to verify the functional alignment between two assemblies, together with a critical analysis of the uncertainty in both the component design and test phases. The study was developed to meet the strict requirements imposed by ESA (European Space Agency) on the components that Thales Alenia Space had to realize for the LISA Pathfinder experiment. The most critical aspect of this work is the mutual alignment of two cylindrical elements belonging to two different assemblies. The specifications require a maximum linear displacement of 100 μm and a maximum angular displacement of 300 μrad; moreover, these prescriptions must also be verified when the two elements move independently. To reach such a strict accuracy level, the components were assembled in an ISO 100 class cleanroom and the workspace was a 3D coordinate-measuring machine (CMM). The cylindrical elements have a diameter of 10 mm, so the measurement uncertainty associated with the alignment check is fundamental. Starting from the different uncertainty sources, the measurability and verifiability of the alignment were considered and evaluated. The overall uncertainty was assessed by numerical simulations that took into account the dimensional, geometrical, and form tolerances as well as the instrumental uncertainty of the 3D CMM; this estimate was then positively validated by a session of repeated measurements. The numerical simulations also allowed a sensitivity analysis, giving information about which sources contribute most to the overall uncertainty.
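A minimal sketch of a Monte Carlo tolerance-chain check of this kind; the individual contributions, their distributions, and their magnitudes are invented for illustration and are not the actual LISA Pathfinder uncertainty budget.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 100_000  # Monte Carlo trials

# Hypothetical tolerance chain: each contribution sampled from its own
# distribution (values illustrative only).
flange_shift  = rng.uniform(-40e-6, 40e-6, N)    # m, positional tolerance
bracket_shift = rng.normal(0, 20e-6, N)          # m, machining scatter
cmm_uncert    = rng.normal(0, 5e-6, N)           # m, CMM instrumental uncertainty
flange_tilt   = rng.uniform(-120e-6, 120e-6, N)  # rad
bracket_tilt  = rng.normal(0, 60e-6, N)          # rad

linear  = flange_shift + bracket_shift + cmm_uncert   # resulting misalignment
angular = flange_tilt + bracket_tilt

within_spec = (np.abs(linear) < 100e-6) & (np.abs(angular) < 300e-6)
print(f"linear  95th percentile: {np.percentile(np.abs(linear), 95) * 1e6:.1f} um")
print(f"angular 95th percentile: {np.percentile(np.abs(angular), 95) * 1e6:.1f} urad")
print(f"fraction of trials within spec: {within_spec.mean():.3f}")
```

Sorting the contributions by their share of the output variance in such a simulation gives the kind of sensitivity ranking described in the abstract.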


2020 ◽  
Vol 19 (1) ◽  
pp. 273-279
Author(s):  
Mohammad Taha Jalali ◽  
Samaneh Salehipour Bavarsad ◽  
Saeed Hesam ◽  
Mohammad Reza Afsharmanesh ◽  
Narges Mohammadtaghvaei

2008 ◽  
Vol 19 (10) ◽  
pp. 748-757 ◽  
Author(s):  
Todd Ricketts ◽  
Earl Johnson ◽  
Jeremy Federman

Background: New and improved methods of feedback suppression are routinely introduced in hearing aids; however, comparisons of additional gain before feedback (AGBF) values across instruments are complicated by potential variability across subjects and measurement methods. Purpose: To examine the variability in AGBF values across individual listeners and an acoustic manikin. Research Design: A descriptive study of the reliability and variability of the AGBF measured within six commercially available feedback suppression (FS) algorithms using probe microphone techniques. Study Sample: Sixteen participants and an acoustic manikin. Results: The range of AGBF across the six FS algorithms was 0 to 15 dB, consistent with other recent studies. However, measures made in the participants' ears and on the acoustic manikin within the same instrument suggest that across-instrument comparisons of AGBF measured using acoustic manikin techniques may be misleading, especially when differences between hearing aids are small (i.e., less than 6 dB). Individual subject results also revealed considerable variability within the same FS algorithms: the range of AGBF values was as small as 7 dB and as large as 16 dB depending on the specific FS algorithm, suggesting that some models are much more robust than others. Conclusions: These results suggest caution when selecting FS algorithms clinically, since different models can demonstrate similar AGBF when averaged across ears but quite different AGBF values in a single individual ear.
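As a rough illustration of why single-ear and manikin estimates can diverge, the sketch below treats AGBF as the difference in maximum stable gain with the feedback-suppression algorithm on versus off; the numbers are hypothetical, not study data.

```python
import numpy as np

def agbf(max_gain_fs_on, max_gain_fs_off):
    """Additional gain before feedback (dB), assumed here to be the difference
    in maximum stable gain with the FS algorithm active versus inactive."""
    return np.asarray(max_gain_fs_on, float) - np.asarray(max_gain_fs_off, float)

rng = np.random.default_rng(5)
fs_off = rng.normal(25, 3, 16)          # dB, 16 ears, hypothetical values
fs_on = fs_off + rng.normal(9, 4, 16)   # individual AGBF varies widely
per_ear = agbf(fs_on, fs_off)
manikin = 11.0                          # dB, single acoustic-manikin estimate
print(f"mean AGBF {per_ear.mean():.1f} dB, range {np.ptp(per_ear):.1f} dB, "
      f"manikin {manikin:.1f} dB")
```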

