Involvement of Machine Learning Tools in Healthcare Decision Making

2021, Vol. 2021, pp. 1-20
Author(s):  
Senerath Mudalige Don Alexis Chinthaka Jayatilake, Gamage Upeksha Ganegoda

In the present day, there are many diseases that need to be identified at their early stages so that relevant treatments can be started; otherwise, they can become incurable and deadly. For this reason, there is a need to analyse complex medical data, medical reports, and medical images in less time but with greater accuracy. There are even instances where certain abnormalities cannot be directly recognized by humans. For computational decision making in healthcare, machine learning approaches are used in these situations, where crucial analyses need to be performed on medical data to reveal hidden relationships or abnormalities that are not visible to humans. Implementing algorithms to perform such tasks is itself difficult, but what makes it even more challenging is increasing the accuracy of an algorithm while decreasing its execution time. In the early days, processing large amounts of medical data was an important task, which resulted in machine learning being adopted in the biological domain. Since then, the biology and biomedical fields have advanced by exploring more knowledge and identifying relationships that had never been observed before. Attention is now turning towards treating patients based not only on the type of disease but also on their genetics, an approach known as precision medicine. Modifications to machine learning algorithms are made and tested daily to improve their performance in analysing data and presenting more accurate information. In the healthcare field, machine learning is involved in everything from information extraction from medical documents to the prediction or diagnosis of disease. Medical imaging is an area that has been greatly improved by the integration of machine learning algorithms into computational biology, and many disease diagnoses are now performed by processing medical images with machine learning algorithms. In addition, patient care, resource allocation, and research on treatments for various diseases also rely on machine learning-based computational decision making. Throughout this paper, the various machine learning algorithms and approaches used for decision making in the healthcare sector are discussed, along with the involvement of machine learning in healthcare applications in the current context. From the explored knowledge, it is evident that neural network-based deep learning methods have performed extremely well in computational biology, supported by the high processing power of modern computers, and are being extensively applied because of their high predictive accuracy and reliability. Taking these observations together, it is noticeable that computational biology and biomedicine-based decision making in healthcare have become dependent on machine learning algorithms and thus cannot be separated from the field of artificial intelligence.
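As a concrete illustration of the kind of neural-network-based decision support described above, the following minimal sketch trains a small multilayer perceptron on tabular medical features. The data is synthetic and stands in for a real, de-identified clinical dataset; the feature count, architecture, and hyperparameters are illustrative assumptions, not the methods of any study discussed in this paper.

```python
# Minimal sketch: a neural-network classifier for disease prediction from
# tabular medical features. Synthetic data stands in for real clinical data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in: 1,000 "patients", 30 numeric features, binary outcome.
X, y = make_classification(n_samples=1000, n_features=30, n_informative=10,
                           random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Feature scaling matters for neural networks; a pipeline keeps it leak-free.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=42),
)
model.fit(X_train, y_train)
print("Held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```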

Author(s):  
Peter Kokol, Jan Jurman, Tajda Bogovič, Tadej Završnik, Jernej Završnik, ...

Cardiovascular diseases are one of the leading global causes of death. Following positive experiences with machine learning in medicine, we performed a study in which we assessed how machine learning can support decision making regarding coronary artery diseases. While a plethora of studies have reported high accuracy rates for machine learning algorithms (MLAs) in medical applications, the majority of these studies used cleansed medical databases without the presence of "real-world noise." In contrast, the aim of our study was to perform machine learning on the routinely collected Anonymous Cardiovascular Database (ACD), extracted directly from the hospital information system of the University Medical Centre Maribor. Many studies have used tens of different machine learning approaches with substantially varying results regarding accuracy (ACU), so they could not serve as a baseline against which to validate our results. We therefore decided to perform our study in two phases. During the first phase, we trained the different MLAs on the comparable University of California Irvine (UCI) Heart Disease Dataset. The aim of this phase was, first, to define the "standard" ACU values and, second, to reduce the set of all MLAs to the most appropriate candidates to be used on the ACD during the second phase. Seven MLAs were selected, and the standard ACUs for the two-class diagnosis were 0.85. Surprisingly, the same MLAs achieved ACUs of around 0.96 on the ACD. A general comparison of the two databases revealed that the performance of different machine learning algorithms differs significantly between them. Accuracy on the ACD reached its highest levels with decision trees and neural networks, while linear regression and AdaBoost performed best on the UCI database. This might indicate that decision tree-based algorithms and neural networks are better at coping with real-world, not "noise-free," clinical data and could successfully support decision making concerned with coronary diseases.
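The first phase described above could look roughly like the following sketch: several candidate algorithms are scored with cross-validation on the UCI Heart Disease data (processed Cleveland subset) and the strongest are retained for the ACD phase. The file name, column handling, and the use of logistic regression in place of the paper's "linear regression" are assumptions for illustration; the ACD itself is not publicly available.

```python
# Sketch of a phase-one algorithm comparison on the UCI Heart Disease data.
# Assumes the processed Cleveland file has been downloaded from the UCI
# repository into the working directory.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.neural_network import MLPClassifier

COLS = ["age", "sex", "cp", "trestbps", "chol", "fbs", "restecg", "thalach",
        "exang", "oldpeak", "slope", "ca", "thal", "num"]
df = pd.read_csv("processed.cleveland.data", names=COLS, na_values="?").dropna()
X = df.drop(columns="num")
y = (df["num"] > 0).astype(int)  # collapse to the 2-class diagnosis

candidates = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "neural network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                                    random_state=0),
}
for name, clf in candidates.items():
    pipe = make_pipeline(StandardScaler(), clf)
    scores = cross_val_score(pipe, X, y, cv=10, scoring="accuracy")
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```

The algorithms whose cross-validated accuracy meets the "standard" threshold would then be refit and evaluated on the routinely collected clinical database.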


2021, Vol. 21 (1)
Author(s):  
Alan Brnabic, Lisa M. Hess

Background: Machine learning is a broad term encompassing a number of methods that allow the investigator to learn from the data. These methods may permit large real-world databases to be more rapidly translated into applications that inform patient-provider decision making. Methods: This systematic literature review was conducted to identify published observational research that employed machine learning to inform decision making at the patient-provider level. The search strategy was implemented, and studies meeting the eligibility criteria were evaluated by two independent reviewers. Relevant data related to study design, statistical methods, and strengths and limitations were identified; study quality was assessed using a modified version of the Luo checklist. Results: A total of 34 publications from January 2014 to September 2020 were identified and evaluated for this review. Diverse methods, statistical packages, and approaches were used across the identified studies. The most common methods were decision tree and random forest approaches. Most studies applied internal validation, but only two conducted external validation. Most studies utilized one algorithm, and only eight applied multiple machine learning algorithms to the data. Seven items on the Luo checklist failed to be met by more than 50% of the published studies. Conclusions: A wide variety of approaches, algorithms, statistical software, and validation strategies were employed in the application of machine learning methods to inform patient-provider decision making. There is a need to ensure that multiple machine learning approaches are used, that the model selection strategy is clearly defined, and that both internal and external validation are performed, so that decisions for patient care are made with the highest-quality evidence. Future work should routinely employ ensemble methods incorporating multiple machine learning algorithms.
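A hedged sketch of the review's closing recommendation, combining several learners in an ensemble and validating internally before any external check on an independent cohort, is given below. The data is synthetic and the specific estimators and settings are illustrative choices, not those of any reviewed study.

```python
# Sketch: apply more than one learner, combine them in a voting ensemble,
# and validate internally before external validation on an independent cohort.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=25, n_informative=8,
                           random_state=7)

ensemble = VotingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(max_depth=5, random_state=7)),
        ("forest", RandomForestClassifier(n_estimators=200, random_state=7)),
        ("logit", LogisticRegression(max_iter=1000)),
    ],
    voting="soft",  # average predicted probabilities across the three models
)

# Internal validation: repeated train/test splits within the development data.
scores = cross_val_score(ensemble, X, y, cv=5, scoring="roc_auc")
print("Internal 5-fold AUC:", scores.mean())
# External validation would refit on all development data and then score the
# frozen model once on data collected at a different site or time period.
```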


Author(s):  
Magdalena Kukla-Bartoszek, Paweł Teisseyre, Ewelina Pośpiech, Joanna Karłowska-Pik, Piotr Zieliński, ...

Increasing understanding of human genome variability allows for better use of the predictive potential of DNA. An obvious direct application is the prediction of physical phenotypes. Significant success has been achieved, especially in predicting pigmentation characteristics, but the inference of some phenotypes is still challenging. In search of further improvements in predicting human eye colour, we conducted whole-exome (enriched in regulome) sequencing of 150 Polish samples to discover new markers. For this, we adopted quantitative characterization of eye colour phenotypes using high-resolution photographic images of the iris in combination with DIAT software analysis. An independent set of 849 samples was used for subsequent predictive modelling. The newly identified candidates, 114 additional literature-based SNPs previously associated with pigmentation, and advanced machine learning algorithms were used. Whole-exome sequencing analysis found 27 previously unreported candidate SNP markers for eye colour. The highest overall prediction accuracies were achieved with LASSO-regularized and BIC-based selected regression models. A new candidate variant, rs2253104, located in the ARFIP2 gene and identified with the HyperLasso method, revealed predictive potential and was included in the best-performing regression models. Advanced machine learning approaches showed a significant increase in the sensitivity of intermediate eye colour prediction (up to 39%) compared to the 0% obtained with the original IrisPlex model. We identified a new potential predictor of eye colour and evaluated several widely used advanced machine learning algorithms in predictive analysis of this trait. Our results provide useful hints for developing future predictive models for eye colour in forensic and anthropological studies.
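To make the modelling approach concrete, the following sketch fits an L1-regularized (LASSO-style) multinomial logistic regression to SNP genotypes coded as minor-allele counts and reports per-class sensitivity. The genotypes and phenotype labels are simulated noise, so performance will be near chance; the sample size, marker count, and regularization strength are assumptions loosely echoing the study's numbers rather than its actual pipeline.

```python
# Sketch: LASSO-regularised prediction of a three-class eye-colour phenotype
# from SNP genotypes (0/1/2 minor-allele counts). All data is simulated noise,
# so the report below is for illustration only, not a performance claim.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n_samples, n_snps = 849, 141            # 27 new + 114 literature-based markers
X = rng.integers(0, 3, size=(n_samples, n_snps)).astype(float)
y = rng.choice(["blue", "intermediate", "brown"], size=n_samples,
               p=[0.45, 0.15, 0.40])

# The L1 penalty drives uninformative SNP coefficients to zero, acting as
# an embedded feature-selection step.
lasso_logit = LogisticRegression(penalty="l1", solver="saga", C=0.1,
                                 max_iter=5000)
pred = cross_val_predict(lasso_logit, X, y, cv=5)
print(classification_report(y, pred))   # per-class sensitivity ("recall")
```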


Author(s):  
Pragya Paudyal, B.L. William Wong

In this paper we introduce the problem of algorithmic opacity and the challenges it presents to ethical decision-making in criminal intelligence analysis. Machine learning algorithms have played important roles in the decision-making process over the past decades. Intelligence analysts are increasingly being presented with smart black-box automation that uses machine learning algorithms to find patterns or interesting and unusual occurrences in big data sets. Algorithmic opacity is the lack of visibility into computational processes, such that humans are not able to inspect their inner workings to ascertain for themselves how the results and conclusions were computed. This problem leads to several ethical issues. In the VALCRI project, we developed an abstraction hierarchy and abstraction decomposition space to identify important functional relationships and system invariants in relation to ethical goals. Such explanatory relationships can be valuable for making algorithmic processes transparent during criminal intelligence analysis.


2020, Vol. 110, pp. 91-95
Author(s):  
Ashesh Rambachan, Jon Kleinberg, Jens Ludwig, Sendhil Mullainathan

There are widespread concerns that the growing use of machine learning algorithms in important decisions may reproduce and reinforce existing discrimination against legally protected groups. Most of the attention to date on issues of "algorithmic bias" or "algorithmic fairness" has come from computer scientists and machine learning researchers. We argue that concerns about algorithmic fairness are at least as much about questions of how discrimination manifests itself in data, decision-making under uncertainty, and optimal regulation. To fully answer these questions, an economic framework is necessary, and as a result economists have much to contribute.


Author(s):  
Gowri Prasad, Vrinda Raveendran, Vidya B M, Tejavati Hedge

Diabetic retinopathy is an eye disorder that develops due to high blood sugar, which affects the neurons in the retina. A dangerous aspect of this disease is that it can lead to blindness. The best prospect for a cure lies in detecting the disease at an early stage, which can be done using different machine learning algorithms. This paper presents a comparative study of different machine learning algorithms that can be used for early detection of diabetic retinopathy, with the aim of identifying the most efficient algorithm for the task and of improving that algorithm's efficiency.
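A comparative study of this kind might be organized as in the following sketch, which scores several common classifiers on pre-extracted retinal image features with cross-validation. The feature matrix here is synthetic (sized roughly like the public UCI Diabetic Retinopathy Debrecen feature set); the choice of classifiers and settings is an assumption, not the paper's protocol.

```python
# Sketch: compare several classifiers for diabetic retinopathy detection on
# pre-extracted retinal features. Synthetic data stands in for real features
# derived from fundus images.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1151, n_features=19, n_informative=8,
                           random_state=1)   # sized like the Debrecen set

classifiers = {
    "SVM (RBF)": SVC(),
    "random forest": RandomForestClassifier(n_estimators=300, random_state=1),
    "k-nearest neighbours": KNeighborsClassifier(n_neighbors=7),
    "logistic regression": LogisticRegression(max_iter=1000),
}
for name, clf in classifiers.items():
    pipe = make_pipeline(StandardScaler(), clf)
    acc = cross_val_score(pipe, X, y, cv=10, scoring="accuracy").mean()
    print(f"{name}: mean 10-fold accuracy = {acc:.3f}")
```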

