Machine Learning as a Knowledge Acquisition Tool: Application in the Domain of the Interpretation of Test Results

Author(s):  
R. A. J. Schijven ◽  
J. L. Talmon ◽  
E. Ermers ◽  
R. Penders ◽  
P. J. E. H. M. Kitslaar


2021 ◽
Author(s):  
Camilo E. Valderrama ◽  
Daniel J. Niven ◽  
Henry T. Stelfox ◽  
Joon Lee

BACKGROUND: Redundancy in laboratory blood tests is common in intensive care units (ICUs), affecting patients' health and increasing healthcare expenses. Medical communities have recommended ordering laboratory tests more judiciously. Judicious selection can rely on modern data-driven approaches, which have been shown to help identify redundant laboratory blood tests in ICUs. However, most of these works were developed for highly selected clinical conditions such as gastrointestinal bleeding. Moreover, features based on conditional entropy and conditional probability distributions have not been used to inform the need for performing a new test.
OBJECTIVE: We aimed to address the limitations of previous works by adapting conditional entropy and conditional probability to extract features for predicting abnormal laboratory blood test results.
METHODS: We used an ICU dataset collected across Alberta, Canada, which included 55,689 ICU admissions from 48,672 patients with different diagnoses. We investigated the conditional entropy and conditional probability-based features by comparing the performance of two machine learning approaches in predicting normal and abnormal results for 18 laboratory blood tests. Approach 1 used patients' vitals, age, sex, admission diagnosis, and other laboratory blood test results as features. Approach 2 used the same features plus the new conditional entropy and conditional probability-based features.
RESULTS: Across the 18 laboratory blood tests, both Approach 1 and Approach 2 achieved a median F1-score, AUC, precision-recall AUC, and G-mean above 80%. Including the new features yielded a statistically significant improvement in predicting abnormal laboratory blood test results for between 10 and 15 of the tests, depending on the machine learning model.
CONCLUSIONS: Our novel approach, with promising prediction results, can help reduce over-testing in ICUs, as well as risks for patients and healthcare systems.
CLINICALTRIAL: N/A
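The abstract does not specify how the conditional entropy and conditional probability features are computed; as a rough sketch only, the Python snippet below estimates P(abnormal | previous result bin) and the conditional entropy H(abnormal | previous result bin) from historical test pairs, which is one plausible way such features could be derived. The function, binning, and toy data are assumptions, not the authors' implementation.

from collections import Counter, defaultdict
import math

def conditional_features(pairs):
    """Estimate P(abnormal | previous bin) and H(abnormal | previous bin)
    from (previous_result_bin, is_abnormal) training pairs. Illustrative only."""
    joint = Counter(pairs)                      # counts of (bin, abnormal) pairs
    marginal = Counter(b for b, _ in pairs)     # counts of each previous-result bin
    n = len(pairs)

    cond_prob = defaultdict(dict)               # P(abnormal | bin)
    for (b, y), c in joint.items():
        cond_prob[b][y] = c / marginal[b]

    # Conditional entropy H(Y | X) = -sum_x P(x) sum_y P(y|x) log2 P(y|x)
    cond_entropy = 0.0
    for b, dist in cond_prob.items():
        p_b = marginal[b] / n
        cond_entropy -= p_b * sum(p * math.log2(p) for p in dist.values() if p > 0)
    return cond_prob, cond_entropy

# Toy history: previous result bin ("low"/"normal"/"high") vs. next result abnormal?
history = [("high", 1), ("high", 1), ("high", 0), ("normal", 0), ("normal", 0), ("low", 1)]
probs, h = conditional_features(history)
print(probs["high"].get(1, 0.0), h)             # P(abnormal | high) and H(Y | X)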


Author(s):  
Miss. Aakansha P. Tiwari

Abstract: Effective contact tracing of SARS-CoV-2 enables quick and efficient diagnosis of COVID-19 and can mitigate the burden on healthcare systems. Prediction models that combine several features to estimate the risk of infection have been developed. These aim to help medical staff worldwide in the treatment of patients, especially in the context of limited healthcare resources. They established a machine learning approach trained on records from 51,831 tested individuals (of whom 4,769 were confirmed to have COVID-19). The test set contained data from the subsequent week (47,401 tested individuals, of whom 3,624 were confirmed to have COVID-19). Their model predicted COVID-19 test results with high accuracy using only eight binary features: sex, age ≥60 years, known contact with an infected individual, and the appearance of five initial clinical symptoms. Based on the nationwide data publicly reported by the Israeli Ministry of Health, they developed a model that detects COVID-19 cases from simple features obtained by asking the patient basic questions. Their framework may be used, among other considerations, to prioritize testing for COVID-19 when testing resources are limited. Keywords: Machine Learning, SARS-CoV-2, COVID-19, Coronavirus.
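The abstract does not name the underlying model; purely as an illustration of the setup it describes (eight binary features, training on one period and testing on the following week), the sketch below trains a generic scikit-learn classifier on synthetic stand-in data. The feature names, model choice, and data are assumptions, not the study's pipeline.

# Illustrative only: a generic classifier on eight binary features of the kind
# the abstract describes. Feature names and labels here are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

FEATURES = ["male_sex", "age_60_plus", "known_contact",
            "cough", "fever", "sore_throat", "shortness_of_breath", "headache"]

rng = np.random.default_rng(0)
X_train = rng.integers(0, 2, size=(1000, len(FEATURES)))                 # stand-in binary records
y_train = (X_train[:, 2] | X_train[:, 4]) & rng.integers(0, 2, 1000)     # synthetic labels

model = GradientBoostingClassifier().fit(X_train, y_train)

X_week = rng.integers(0, 2, size=(200, len(FEATURES)))                   # "following week" test set
y_week = (X_week[:, 2] | X_week[:, 4]) & rng.integers(0, 2, 200)
print("AUC on held-out week:", roc_auc_score(y_week, model.predict_proba(X_week)[:, 1]))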


2021 ◽  
Vol 30 (2) ◽  
pp. 354-364
Author(s):  
Firas Al-Mashhadani ◽  
Ibrahim Al-Jadir ◽  
Qusay Alsaffar

In this paper, the proposed method aims to improve optimization of the classification problem in machine learning. The EKH, a global search optimization method, locates the best representation of the solution (a krill individual), while simulated annealing (SA) is used to modify the generated krill individuals (each individual represents a set of bits). The test results showed that the EKH outperformed other methods on both external and internal evaluation measures.
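The abstract only outlines the hybrid at a high level; as a minimal sketch of the SA step it describes (perturbing a bit-string krill individual and occasionally accepting worse solutions according to a temperature schedule), consider the snippet below. The fitness function, cooling schedule, and parameters are placeholders, not the paper's.

import math
import random

def sa_refine(individual, fitness, t0=1.0, cooling=0.95, steps=200):
    """Refine one krill individual (a list of bits) with simulated annealing.
    `fitness` is a placeholder scoring function (higher is better)."""
    current = individual[:]
    best = current[:]
    temp = t0
    for _ in range(steps):
        candidate = current[:]
        i = random.randrange(len(candidate))
        candidate[i] ^= 1                          # flip one bit
        delta = fitness(candidate) - fitness(current)
        # Accept improvements; accept worse moves with probability exp(delta / temp)
        if delta >= 0 or random.random() < math.exp(delta / temp):
            current = candidate
            if fitness(current) > fitness(best):
                best = current[:]
        temp *= cooling                            # geometric cooling schedule
    return best

# Toy usage: fitness counts set bits, so SA should drive the string toward all ones.
print(sa_refine([random.randint(0, 1) for _ in range(16)], fitness=sum))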


Author(s):  
Prof. Dr. R. Sandhiya

In recent times, the diagnosis of heart disease has become a very critical task in the medical field. In the modern age, one person dies every minute due to heart disease. Data science has an important role in processing large amounts of data in the field of health sciences. Since the diagnosis of heart disease is a complex task, the assessment process should be automated to avoid the associated risks and alert the patient in advance. This paper uses the heart disease dataset available in the UCI Machine Learning Repository. The proposed work assesses the risk of heart disease in a patient by applying various data mining methods such as Naive Bayes, Decision Tree, KNN, Linear SVM, RBF SVM, Gaussian Process, Neural Network, AdaBoost, QDA, and Random Forest. This paper provides a comparative study by analyzing the performance of various machine learning algorithms. Test results show that the KNN algorithm achieved the highest accuracy, 97%, compared with the other implemented ML algorithms.
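As an illustration of the kind of comparison the paper describes, the sketch below cross-validates several of the listed classifiers with scikit-learn; the CSV path, column names, and preprocessing are assumptions, not the paper's exact setup, and the reported 97% KNN accuracy is not reproduced here.

# Illustrative comparison of some of the listed classifiers; "heart.csv" and the
# "target" column are hypothetical stand-ins for a local copy of the UCI data.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier

df = pd.read_csv("heart.csv")
X, y = df.drop(columns="target"), df["target"]

models = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "Naive Bayes": GaussianNB(),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Linear SVM": SVC(kernel="linear"),
    "RBF SVM": SVC(kernel="rbf"),
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "Random Forest": RandomForestClassifier(random_state=0),
}

for name, model in models.items():
    scores = cross_val_score(make_pipeline(StandardScaler(), model), X, y, cv=5)
    print(f"{name:15s} mean accuracy: {scores.mean():.3f}")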


1992 ◽  
Vol 5 (1) ◽  
pp. 19-24 ◽  
Author(s):  
F. Bergadano ◽  
Y. Kodratoff ◽  
K. Morik
