Cluster Analysis of Laboratory Tests Used for the Evaluation of Hand-Arm Vibration Syndrome

1993 ◽  
Vol 12 (3) ◽  
pp. 98-109 ◽  
Author(s):  
P.L. Pelmear ◽  
R. Kusiak ◽  
B. Dembek

Three hundred and sixty-four patients exposed to hand-arm vibration at work were assessed in a clinical laboratory designated for this purpose in Toronto, Canada during the period 1989–92. The assessment included completion of a subjective history questionnaire; a medical examination of the upper torso, cardiovascular and central nervous systems; and multiple vascular, sensorineural and laboratory tests. The test results were used to assess the severity of Hand-arm Vibration Syndrome (HAVS) and grade the subjects according to the Stockholm stages. A statistical clustering algorithm was used to categorise the subjects according to the results of their diagnostic tests. One set of clusters was made from the vascular test results and another from the sensory test results. These clusters were compared with the Stockholm history (SH) and diagnostic (SD) stages. The clusters based upon the vascular test results (the vascular clusters) agreed well with the SD vascular stages (P = 1 × 10⁻⁹), and less well with the SD sensorineural stages (P = 0.04). The clusters based upon the sensory test results (the sensory clusters) agreed well with the SD sensorineural stages (P = 0.0003), and less well with the SD vascular stages (P = 0.03). The mean values for the diagnostic tests within the clusters were compared. The sensory test results differed between the sensory clusters while most vascular test results did not. Likewise, the vascular test results differed between the vascular clusters while most sensory tests did not. A comparison of the vascular and sensory clusters showed that while some men suffered severe sensory effects and others suffered severe vascular effects, few suffered both. Hence this analysis confirms that the severity grading of the sensory and vascular components of HAVS must be evaluated separately, as now practised. This cluster analysis technique (SAS's FASTCLUS procedure) has proved useful for the objective analysis of the results from many diagnostic tests on a large group of individuals. The reference data for the tests within the cluster groupings provide a basis for the objective classification of the severity of HAVS in individual patients.
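SAS's FASTCLUS is a k-means-type procedure. As a rough modern equivalent (not the authors' code; the array shape, test columns and the choice of k = 3 are illustrative assumptions), the vascular clusters could be reproduced along these lines:

```python
# Minimal sketch of FASTCLUS-style (k-means) clustering of vascular test
# results; the column content and k=3 are illustrative assumptions, not
# taken from the paper.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical matrix: one row per subject, one column per vascular test.
vascular = rng.normal(size=(364, 4))

scaled = StandardScaler().fit_transform(vascular)   # put tests on comparable scales
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scaled)

# Cluster membership could then be cross-tabulated against the Stockholm
# diagnostic (SD) stages, as the authors did.
print(np.bincount(clusters))
```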

1990 ◽  
Vol 29 (03) ◽  
pp. 205-212 ◽  
Author(s):  
Johanna Zwetsloot-Schonk

Abstract Test indices are often determined by comparing test results of healthy persons with test results of patients known to have the disease. However, the patient population for which the test is ordered in clinical practice often differs from the study population on which the test indices are based. Hence, these indices are not applicable to clinical practice and should be recalculated using data from daily clinical practice. Two major problems of using routinely collected data are discussed: assessing the final health status and tracing the reason for ordering the test. We first consider the use of hospital information systems (HIS) to sample the desired patient population and to collect the data necessary for calculating test indices. We investigated whether the HIS of Leiden University Hospital (presented here as an example) can be used to calculate the indices of clinical laboratory tests, histopathologic examinations and radiodiagnostic investigations. The results indicate that the registration of diagnoses must be improved and that a way must be found to capture the implicit reasoning behind ordering diagnostic tests.
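The transfer problem described here follows directly from Bayes' rule: sensitivity and specificity may be stable, but predictive values shift with disease prevalence, so indices derived from a balanced case-control study can mislead in routine practice. A small worked illustration (all numbers hypothetical):

```python
# Illustration (hypothetical numbers): the same sensitivity/specificity
# yields very different predictive values in populations with different
# disease prevalence, which is why study-population indices may not
# transfer to daily clinical practice.
def predictive_values(sensitivity, specificity, prevalence):
    tp = sensitivity * prevalence               # true positives (per unit population)
    fp = (1 - specificity) * (1 - prevalence)   # false positives
    fn = (1 - sensitivity) * prevalence         # false negatives
    tn = specificity * (1 - prevalence)         # true negatives
    return tp / (tp + fp), tn / (tn + fn)       # PPV, NPV

for prev in (0.50, 0.05):   # balanced study population vs. routine practice
    ppv, npv = predictive_values(0.90, 0.90, prev)
    print(f"prevalence={prev:.2f}  PPV={ppv:.2f}  NPV={npv:.2f}")
# PPV drops from 0.90 to about 0.32 as prevalence falls from 50% to 5%.
```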


1994 ◽  
Vol 3 (2) ◽  
pp. 97-111 ◽  
Author(s):  
David Mortimer

It is a fundamental principle of laboratory tests that they are never entirely free from error. Understanding the source and extent of such errors is therefore a prerequisite for the correct appreciation and interpretation of test results in the diagnostic process. To evaluate these errors, quality control (QC) has been introduced into clinical laboratory testing and has become routine practice.
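As a concrete illustration of what routine QC looks like in practice (not drawn from the paper), a Levey-Jennings-style check flags control measurements against limits derived from a method's established mean and standard deviation:

```python
# Minimal sketch of a Levey-Jennings-style QC check (illustrative only):
# flag control results outside mean ± 2 SD (warning) or ± 3 SD (rejection),
# two of the classic Westgard-type limits.
def qc_flags(results, mean, sd):
    flags = []
    for x in results:
        z = (x - mean) / sd
        if abs(z) > 3:
            flags.append("reject (1_3s)")
        elif abs(z) > 2:
            flags.append("warn (1_2s)")
        else:
            flags.append("ok")
    return flags

# Hypothetical control material with established mean 5.0 and SD 0.2:
print(qc_flags([5.1, 5.5, 4.3, 5.0], mean=5.0, sd=0.2))
# ['ok', 'warn (1_2s)', 'reject (1_3s)', 'ok']
```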


2017 ◽  
Vol 25 (2) ◽  
pp. 121-126 ◽  
Author(s):  
Ronald George Hauser ◽  
Douglas B Quine ◽  
Alex Ryder

Abstract Objective Clinical laboratories in the United States do not have an explicit result standard to report the 7 billion laboratory test results they produce each year. The absence of standardized test results creates inefficiencies and ambiguities for secondary data users. We developed and tested a tool to standardize the results of laboratory tests in a large, multicenter clinical data warehouse. Methods Laboratory records, each of which consisted of a laboratory result and a test identifier, from 27 diverse facilities were captured from 2000 through 2015. Each record underwent a standardization process to convert the original result into a format amenable to secondary data analysis. The standardization process included the correction of typos, normalization of categorical results, separation of inequalities from numbers, and conversion of numbers represented by words (e.g., “million”) to numerals. Quality control included expert review. Results We obtained 1.266 × 10⁹ laboratory records and standardized 1.252 × 10⁹ records (98.9%). Of the unique unstandardized records (78.887 × 10³), most appeared <5 times (96%; e.g., typos), did not have a test identifier (47%), or belonged to an esoteric test with <100 results (2%). Overall, these 3 reasons accounted for nearly all unstandardized results (98%). Conclusion Current results suggest that the tool is both scalable and generalizable among diverse clinical laboratories. Based on observed trends, the tool will require ongoing maintenance to stay current with new tests and result formats. Future work to develop and implement an explicit standard for test results would reduce the need to retrospectively standardize test results.
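The four standardization steps the authors name lend themselves to a rule-based pipeline. A minimal sketch, assuming hypothetical mappings and patterns rather than the authors' actual tool:

```python
import re

# Illustrative sketch of the kinds of rules described (not the authors' tool):
# typo correction, categorical normalization, separating inequalities from
# numbers, and word-to-numeral conversion.
CATEGORY_MAP = {"postive": "positive", "pos": "positive",
                "negative": "negative", "neg": "negative"}
WORD_NUMBERS = {"million": 1_000_000, "thousand": 1_000}

def standardize(result: str):
    r = result.strip().lower()
    if r in CATEGORY_MAP:                       # normalize categorical results
        return {"value": CATEGORY_MAP[r]}
    m = re.fullmatch(r"([<>]=?)?\s*([\d.]+)\s*(million|thousand)?", r)
    if m:
        ineq, num, word = m.groups()
        out = {"value": float(num) * WORD_NUMBERS.get(word, 1)}
        if ineq:
            out["inequality"] = ineq            # keep inequality separately
        return out
    return None                                 # left unstandardized

print(standardize("Pos"))          # {'value': 'positive'}
print(standardize("<0.5"))         # {'value': 0.5, 'inequality': '<'}
print(standardize("2 million"))    # {'value': 2000000.0}
```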


Author(s):  
Roman Kaminskyy ◽  
Nataliya Shakhovska

Background: The increasing amount of information generated by smart city activity leads to the problem of its accumulation and preprocessing. One type of data preprocessing is clustering. Cluster analysis is an objective method of classification. It supports an appropriate choice of further processing methods as well as the visualization and interpretation of the collected data, which are multidimensional objects. The most valuable feature of cluster analysis is the representation of the result as a dendrogram image that reflects a particular hierarchy of relationships between the selected clusters and their objects. The aim of the paper is to develop a method of 3D visualization of hierarchical clustering for streaming and multidimensional data collected from IoT devices and open databases. Methods: A more detailed interpretation of the dendrogram is obtained by implementing the hypothesis given above. Testing this hypothesis amounts to a procedure for visualizing and interpreting the result of a cluster analysis. The disclosed dendrogram allows full use of the association metric. Since this metric is derived from the values of the proximity matrix in accordance with the chosen object-pooling strategy, the use of the disclosed dendrogram is quite legitimate. In addition, the procedure for opening the dendrogram is specific and unambiguous. The method is built on a hierarchical clustering algorithm, as the simplest and fastest one. The developed algorithm should make it impossible for clusters to cross on a plane. It is also necessary to find the distance not only between objects but also between clusters, represented as complex geometric figures; this allows the nature of the clusters to be explained. Results: The result of the research and verification of the proposed hypothesis is the dendrogram-disclosure algorithm as an extension of the classical methods of cluster analysis. This extension is made by studying and disclosing the resulting image of the dendrogram. The dendrogram visualization thus obtained differs significantly from the classical results. Opening the dendrogram according to the developed algorithm allows 3D visualization of the analysis results, as well as calculation of the area and perimeter of the obtained clusters. Therefore, using analytical-geometry methods, it is easy to isolate and calculate the parameters of the minimum cluster-coverage surfaces and the direct distances between any objects of one or different clusters, as well as between the objects of a given cluster. This, in turn, is a significant complement to cluster analysis. Conclusion: The disclosed dendrogram retains the proportions of the distances between objects. On the basis of these characteristics, it is possible to determine the closeness of the relationship between the clusters themselves by correlating their averaged feature values. Thus, opening the dendrogram makes it possible to clearly identify the set of clusters, each of which has its own distribution of feature-value ranges. The quantitative characteristics of the clusters on both dendrograms are quite simple. In addition, the mean feature values of the objects in a given cluster can be interpreted as generalized characteristics of that cluster, and the cluster itself can be represented as a single integral object.
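The classical machinery the method extends, agglomerative clustering and its 2D dendrogram, is standard; a minimal SciPy sketch follows (the paper's 3D "disclosed" layout is its own contribution and is not reproduced here):

```python
# Minimal sketch of the classical starting point: hierarchical (agglomerative)
# clustering of multidimensional objects and its 2D dendrogram. The data are
# hypothetical.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram, fcluster
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
data = rng.normal(size=(30, 5))          # hypothetical multidimensional objects

Z = linkage(data, method="ward")         # proximity matrix + pooling strategy
labels = fcluster(Z, t=3, criterion="maxclust")  # cut the tree into 3 clusters

dendrogram(Z)
plt.title("Classical 2D dendrogram")
plt.show()
```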


Author(s):  
J B Whitfield ◽  
J K Allen ◽  
M A Adena ◽  
W J Hensley

Several clinical laboratory tests correlate with alcohol consumption, for example, the plasma activities of gamma-glutamyl transpeptidase (GGT) and aspartate aminotransferase (AST), the concentrations of triglycerides (TG) and uric acid (UA) and the erythrocyte mean corpuscular volume (MCV). The correlations of these test results with each other have been studied in a population of men attending a multiphasic health-screening centre. The patterns of correlation were of two types: those between the pairs of variables GGT/TG, GGT/UA, TG/AST, and TG/UA were all unchanged as the level of alcohol consumption increased; the pairs of variables GGT/MCV, UA/AST, AST/MCV, UA/MCV, and GGT/AST all became more highly correlated as the level of alcohol consumption increased.
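The analysis pattern, pairwise correlations recomputed within strata of alcohol consumption, is straightforward to reproduce. A hedged pandas sketch with hypothetical column names and data:

```python
# Sketch of the analysis pattern: correlations between test pairs computed
# within strata of alcohol consumption. The DataFrame contents and column
# names are hypothetical.
import pandas as pd

def stratified_correlations(df: pd.DataFrame, pairs, stratum_col="alcohol_level"):
    rows = []
    for level, grp in df.groupby(stratum_col):
        for a, b in pairs:
            rows.append({"stratum": level, "pair": f"{a}/{b}",
                         "r": grp[a].corr(grp[b])})
    return pd.DataFrame(rows)

# Usage (hypothetical data): look for pairs such as GGT/MCV whose correlation
# rises with consumption, versus pairs such as GGT/TG that stay flat.
# corr_table = stratified_correlations(screening_df,
#                                      pairs=[("GGT", "MCV"), ("GGT", "TG")])
```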


2002 ◽  
pp. 4-7 ◽
Author(s):  
Mária Farkasné Kis

According to publications in foreign scientific journals, ROC (Receiver Operating Characteristic) analysis is a widely used method for analysing the diagnostic utility of clinical laboratory tests. In this paper, we explain the basic principles of ROC analysis, show how ROC curves are produced, and demonstrate some ROC curves that represent the results of diagnostic tests.
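A minimal sketch of producing a ROC curve from a quantitative test and known disease status, using scikit-learn on illustrative data:

```python
# Minimal sketch: producing a ROC curve for a quantitative laboratory test
# against known disease status (data are illustrative).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
status = rng.integers(0, 2, size=200)        # 0 = healthy, 1 = diseased
values = rng.normal(loc=status, scale=1.0)   # diseased values shifted upward

fpr, tpr, thresholds = roc_curve(status, values)
auc = roc_auc_score(status, values)

plt.plot(fpr, tpr, label=f"AUC = {auc:.2f}")
plt.plot([0, 1], [0, 1], linestyle="--")     # chance line
plt.xlabel("1 - specificity (FPR)")
plt.ylabel("sensitivity (TPR)")
plt.legend()
plt.show()
```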


2017 ◽  
Vol 44 (1) ◽  
pp. 191-222 ◽
Author(s):  
Dariusz Ampuła

Abstract A statistical analysis of multiyear laboratory and dynamic test results for a chosen type of artillery fuse is presented in this article, with the aim of assessing the influence of the natural ageing process on quality indicators during long-term service. The influence of service time on the shooting-test decisions concerning lot quality taken after completed laboratory tests, and on the different inconsistency classes that appeared during these tests, was analysed. The influence of service time on the diagnostic decisions taken after completed shooting tests, and on the occurrence of inconsistencies in the established classes, is also presented. The statistical analysis suggests that the assessment module in the test methodology applied until now can be changed. Modifying this assessment module will not negatively affect the quality of further diagnostic tests, nor the correct evaluation of the prediction process for the tested ammunition elements, i.e. the artillery fuses. The performed statistical analysis can therefore be highly relevant to modifying the test methodology for artillery fuses.
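The article's statistical procedure is not specified here; one plain way to test whether decision outcomes depend on service time is a chi-square test of independence on the cross-tabulated counts (a hedged sketch with hypothetical counts):

```python
# Hedged sketch: testing whether test decisions depend on service-time class
# via a chi-square test of independence. The counts are hypothetical; the
# article's actual procedure is not reproduced here.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: service-time classes; columns: decision categories
# (e.g., accept / conditional / reject).
counts = np.array([[120, 15,  5],
                   [100, 25, 10],
                   [ 80, 35, 20]])

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2={chi2:.1f}, dof={dof}, p={p:.3g}")
```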


2018 ◽  
Vol 56 (12) ◽  
pp. 2047-2057 ◽  
Author(s):  
Kasper M. Petersen ◽  
Niklas R. Jørgensen ◽  
Søren Bøgevig ◽  
Tonny S. Petersen ◽  
Thomas B. Jensen ◽  
...  

Abstract Background Intravenous lipid emulsion (ILE) is used to treat drug poisonings. The resultant hyperlipemia may affect laboratory tests but the consequences are poorly characterized. In a clinical trial we therefore investigated the effects of ILE on laboratory tests analyzed on common analytical platforms (Roche® cobas 8000 and SYSMEX® flow cytometry). Methods Ten healthy participants each completed 4 trial days (two with ILE and two with placebo). ILE (5.25 mL/kg) was administered from 12.5 to 30 min from baseline. At 0, 30 and 60 min, blood samples were drawn for measurement of 20 analytes. We investigated the effects of ILE on analyte levels and frequencies of exceedance of predefined analyzer hemolysis (H) or lipemia (L) index cut-offs and test-specific reference change values (RCVs) on ILE days. If the results were blocked due to exceedance of index values, we manually extracted the results. Results Sixteen out of 20 tests were blocked because H- or L-index cut-offs were exceeded on ILE days. Differences in analyte levels between ILE and placebo days above the RCV were observed for aspartate aminotransferase, total calcium, lactate dehydrogenase (LDH), sodium and neutrophils. Mean values outside the normal range after ILE were observed for LDH (219 U/L), sodium (135.3 mmol/L) and total calcium (2.1 mmol/L). Conclusions ILE infusion caused reporting failure of nearly all laboratory tests performed on the cobas 8000 platform, but it was possible to manually retrieve the results. For most test results – particularly alkaline phosphatase, bilirubin, phosphate and carbamide – the consequences of ILE were marginal, and the effects of ILE were reduced at the 60-min timepoint.
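The RCV thresholds used here are conventionally derived from analytical and within-subject biological variation. A sketch of the standard formula (the CV values below are placeholders, not those of the study):

```python
# Sketch of the conventional reference change value (RCV) formula:
# RCV = sqrt(2) * z * sqrt(CV_analytical^2 + CV_within_subject^2).
# The CV values below are placeholders, not those used in the study.
import math

def rcv(cv_analytical: float, cv_within: float, z: float = 1.96) -> float:
    """Return the RCV in percent; z = 1.96 gives ~95% two-sided significance."""
    return math.sqrt(2) * z * math.sqrt(cv_analytical**2 + cv_within**2)

# Example: analytical CV 2%, within-subject biological CV 5%:
print(f"RCV = {rcv(2.0, 5.0):.1f}%")   # ≈ 14.9%: changes larger than this
                                       # are unlikely to be noise alone.
```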


2015 ◽  
pp. 125-138 ◽  
Author(s):  
I. V. Goncharenko

In this article we propose a new method of non-hierarchical cluster analysis using the k-nearest-neighbor graph and discuss it with respect to vegetation classification. The method of k-nearest-neighbor (k-NN) classification was originally developed in 1951 (Fix, Hodges, 1951). Later the term "k-NN graph" and a few algorithms for k-NN clustering appeared (Cover, Hart, 1967; Brito et al., 1997). In biology, k-NN is used in the analysis of protein structures and genome sequences. Most k-NN clustering algorithms first build an "excessive" graph, a so-called hypergraph, and then truncate it to subgraphs by partitioning and coarsening the hypergraph. We developed a different strategy: "upward" clustering, which forms (assembles sequentially) one cluster after another. Until now, graph-based cluster analysis has not been considered for the classification of vegetation datasets.
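Constructing the k-NN graph such methods start from is routine; a minimal sketch follows (the authors' "upward" one-cluster-at-a-time assembly strategy is their own contribution and is not reproduced):

```python
# Minimal sketch: building a k-nearest-neighbor graph from a samples-by-
# features matrix and extracting its connected components as a crude
# partition. The data are hypothetical.
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(3)
data = rng.normal(size=(50, 10))        # hypothetical vegetation relevés

G = kneighbors_graph(data, n_neighbors=5, mode="connectivity")
G = G.maximum(G.T)                      # symmetrize the directed k-NN edges
n_clusters, labels = connected_components(G, directed=False)
print(n_clusters, np.bincount(labels))
```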

