A Review of Statistical and Machine Learning Techniques for Microvascular Complications in Type 2 Diabetes

2020 ◽  
Vol 16 ◽  
Author(s):  
Nitigya Sambyal ◽  
Poonam Saini ◽  
Rupali Syal

Background and Introduction: Diabetes mellitus (DM) is a metabolic disorder that has emerged as a serious public health issue worldwide. According to the World Health Organization (WHO), without interventions, the number of people living with diabetes is expected to reach at least 629 million by 2045. Uncontrolled diabetes gradually leads to progressive damage to the eyes, heart, kidneys, blood vessels and nerves. Method: The paper presents a critical review of existing statistical and Artificial Intelligence (AI) based machine learning techniques with respect to DM complications, namely retinopathy, neuropathy and nephropathy. The statistical and machine learning analytic techniques are used to structure the subsequent content review. Result: It has been inferred that statistical analysis can help only in inferential and descriptive analysis, whereas AI-based machine learning models can additionally provide actionable prediction models for faster and more accurate diagnosis of the complications associated with DM. Conclusion: The integration of AI-based analytics techniques such as machine learning and deep learning into clinical medicine will result in improved disease management through faster disease detection and reduced treatment costs.
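To make the contrast drawn in this review concrete, the sketch below (not code from the paper) places a descriptive/inferential analysis next to a cross-validated predictive model for a hypothetical retinopathy outcome. The feature names (hba1c, diabetes_duration, systolic_bp) and the synthetic data are assumptions for illustration only.

```python
# Illustrative sketch: descriptive statistics vs. a predictive ML model on a
# hypothetical diabetic-retinopathy dataset. All feature names and data are
# assumptions, not material from the reviewed studies.
import numpy as np
import pandas as pd
from scipy import stats
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "hba1c": rng.normal(7.5, 1.2, n),
    "diabetes_duration": rng.normal(10, 5, n),
    "systolic_bp": rng.normal(135, 15, n),
})
# Synthetic outcome: retinopathy risk increases with HbA1c and disease duration.
risk = 0.4 * df["hba1c"] + 0.1 * df["diabetes_duration"] - 3.5
df["retinopathy"] = (rng.random(n) < 1 / (1 + np.exp(-risk))).astype(int)

# Descriptive/inferential analysis: compare mean HbA1c between outcome groups.
with_dr = df.loc[df["retinopathy"] == 1, "hba1c"]
without_dr = df.loc[df["retinopathy"] == 0, "hba1c"]
res = stats.ttest_ind(with_dr, without_dr)
print(f"HbA1c group difference: t = {res.statistic:.2f}, p = {res.pvalue:.4f}")

# Predictive ML model: cross-validated AUC for individual-level prediction.
X, y = df[["hba1c", "diabetes_duration", "systolic_bp"]], df["retinopathy"]
auc = cross_val_score(RandomForestClassifier(random_state=0), X, y,
                      cv=5, scoring="roc_auc")
print(f"Cross-validated AUC: {auc.mean():.2f}")
```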

2018 ◽  
Author(s):  
Sandip S Panesar ◽  
Rhett N D’Souza ◽  
Fang-Cheng Yeh ◽  
Juan C Fernandez-Miranda

Abstract Background: Machine learning (ML) is the application of specialized algorithms to datasets for trend delineation, categorization or prediction. ML techniques have traditionally been applied to large, highly dimensional databases. Gliomas are a heterogeneous group of primary brain tumors, traditionally graded using histopathological features. Recently the World Health Organization proposed a novel grading system for gliomas incorporating molecular characteristics. We aimed to study whether ML could achieve accurate prognostication of 2-year mortality in a small, highly dimensional database of glioma patients. Methods: We applied three machine learning techniques, artificial neural networks (ANN), decision trees (DT) and support vector machines (SVM), as well as classical logistic regression (LR), to a dataset consisting of 76 glioma patients of all grades. We compared the effect of applying the algorithms to the raw database versus a database where only statistically significant features were included in the algorithmic inputs (feature selection). Results: Raw input consisted of 21 variables and achieved performance of (accuracy/AUC): 70.7%/0.70 for ANN, 68%/0.72 for SVM, 66.7%/0.64 for LR and 65%/0.70 for DT. Feature-selected input consisted of 14 variables and achieved performance of 73.4%/0.75 for ANN, 73.3%/0.74 for SVM, 69.3%/0.73 for LR and 65.2%/0.63 for DT. Conclusions: We demonstrate that these techniques can also be applied to small, yet highly dimensional datasets. Our ML techniques achieved reasonable performance compared to similar studies in the literature. Though local databases may be small compared with larger cancer repositories, we demonstrate that ML techniques can still be applied to their analysis, though traditional statistical methods are of similar benefit.
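A minimal sketch of this workflow with scikit-learn on synthetic stand-in data (76 samples, 21 variables): the four classifiers are compared by accuracy and AUC with and without a feature-selection step. The univariate F-test filter keeping 14 features is an assumption standing in for the authors' selection of statistically significant variables; none of the data here is theirs.

```python
# Sketch: raw vs. feature-selected inputs for ANN, SVM, LR and DT classifiers.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in: 76 "patients", 21 raw variables.
X, y = make_classification(n_samples=76, n_features=21, n_informative=8,
                           random_state=0)
models = {
    "ANN": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    "SVM": SVC(probability=True, random_state=0),
    "LR": LogisticRegression(max_iter=1000),
    "DT": DecisionTreeClassifier(random_state=0),
}
for selected in (False, True):
    for name, model in models.items():
        steps = [StandardScaler()]
        if selected:  # univariate filter as a stand-in for feature selection
            steps.append(SelectKBest(f_classif, k=14))
        steps.append(model)
        pipe = make_pipeline(*steps)
        scores = cross_validate(pipe, X, y, cv=5, scoring=("accuracy", "roc_auc"))
        print(f"{'selected' if selected else 'raw':8s} {name}: "
              f"acc={scores['test_accuracy'].mean():.2f} "
              f"AUC={scores['test_roc_auc'].mean():.2f}")
```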


2020 ◽  
Author(s):  
Akshay Kumar ◽  
Farhan Mohammad Khan ◽  
Rajiv Gupta ◽  
Harish Puppala

Abstract: The outbreak of COVID-19 was first identified in China; it later spread to various parts of the globe and was declared a pandemic by the World Health Organization (WHO). The transmissible person-to-person pneumonia caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the virus responsible for COVID-19, has sparked global alarm. Thermal screening, quarantining, and later lockdowns were methods employed by various nations to contain the spread of the virus. Though exercising such containment plans helps in mitigating the effect of COVID-19, projecting the rise in cases and preparing to face the crisis would help in minimizing its impact. In this scenario, this study attempts to use machine learning tools to forecast the possible rise in the number of cases by considering the data of daily new cases. To capture the uncertainty, three different techniques are used to project the data and capture the possible deviation: (i) the Decision Tree algorithm, (ii) the Support Vector Machine algorithm, and (iii) Gaussian process regression. Based on the projection of new cases, the recovered cases, deceased cases, medical facilities, population density, number of tests conducted, and availability of services are considered to define a criticality index (CI). The CI is used to classify all the districts of the country into regions of high risk, low risk, and moderate risk. An online dashboard is created, which updates the data on a daily basis for the next four weeks. The suggestions of this study would aid in planning lockdown or other containment strategies for any country, which can also take other parameters into account to define the CI.
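As a hedged illustration of one of the three projection techniques named above, the sketch below fits a Gaussian process regression to a synthetic daily new-case series and extrapolates two weeks ahead with uncertainty bands. The case numbers and kernel choices are assumptions, not the study's data or model.

```python
# Sketch: Gaussian process regression projection of a daily new-case series.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

days = np.arange(60).reshape(-1, 1)                 # day index
cases = 50 * np.exp(0.05 * days.ravel())            # synthetic growth curve
cases = cases + np.random.default_rng(0).normal(0, 20, 60)

kernel = 1.0 * RBF(length_scale=10.0) + WhiteKernel(noise_level=100.0)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gpr.fit(days, cases)

future = np.arange(60, 74).reshape(-1, 1)            # next two weeks
mean, std = gpr.predict(future, return_std=True)     # projection + uncertainty
for d, m, s in zip(future.ravel(), mean, std):
    print(f"day {d}: {m:.0f} +/- {1.96 * s:.0f}")
```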


2018 ◽  
Author(s):  
Roberto Acuña

BACKGROUND According to the World Health Organization (WHO), close to 800,000 people worldwide die by suicide each year. Many more attempt it. In consequence, the WHO recognizes suicide as a global public health priority, which affects not only rich countries but poor and middle-income countries as well. OBJECTIVE The aim of this study is to evaluate several supervised classifiers for detecting messages with suicidal ideation, in order to know whether these systems can be used in automatic suicide prevention systems. METHODS We used machine learning techniques to make a systematic analysis of 28 supervised classification algorithms with default parameters. The Life Corpus, used in this research, is a bilingual corpus (English and Spanish) oriented to suicide. The corpus was constructed by two annotation experts, retrieving texts from several social networks. The corpus quality was measured using inter-annotator agreement. RESULTS The experiments determined that the classifier with the best performance was KStar, with the corpus version POS-SYNSETS-NUM; the configuration with the two classes Urgent and No Risk achieved the best results, with a PRC-Area of 0.81036 and an F-measure of 0.7148. CONCLUSIONS The present research fulfilled the objective of discovering which characteristics are the most suitable for the automatic classification of messages with suicidal ideation using the Life Corpus. The results of this evaluation demonstrate that the Life Corpus and machine learning techniques could be suitable for detecting suicidal ideation messages.
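The study evaluated Weka classifiers (KStar performed best); as a rough Python analogue only, the sketch below runs a k-nearest-neighbour classifier over TF-IDF features and reports the same metrics (F-measure and PRC area). The example texts and the Urgent/No Risk labels are invented placeholders, not Life Corpus data.

```python
# Sketch: text-classification baseline for suicidal-ideation detection.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import average_precision_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

texts = ["I can't take this anymore, I want to end it all",
         "Had a great run this morning, feeling good",
         "Nobody would miss me if I was gone",
         "Looking forward to the weekend with friends"] * 25
labels = [1, 0, 1, 0] * 25   # 1 = Urgent, 0 = No Risk (toy labels)

X_tr, X_te, y_tr, y_te = train_test_split(texts, labels, test_size=0.3,
                                          stratify=labels, random_state=0)
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    KNeighborsClassifier(n_neighbors=5))
clf.fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]
print("F-measure:", f1_score(y_te, clf.predict(X_te)))
print("PRC area :", average_precision_score(y_te, proba))
```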


Education is an important resource that has to be given to all children; the education given to children is said to be one of the biggest assets of the future generation. Many children are not able to continue their education for a variety of reasons. The prediction of student dropout plays a very important role in identifying the students who are on the edge of dropping out of their education. By predicting this, we can try to solve their problems and help them continue their education. In this paper, we have proposed a model for predicting whether students will drop out or not using several machine learning techniques. We make use of decision trees, which make a decision based on many factors. The prediction involves determining which data attributes are used for prediction, such as correlations, similarity measures, frequent patterns, and association rule mining. The proposed work is evaluated using various parameters and is shown to work efficiently in predicting dropout students compared with alternatives.
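A minimal sketch of the kind of decision-tree dropout predictor described above. The attribute names (attendance, average_grade, family_income) and the synthetic records are illustrative assumptions, not data from the paper.

```python
# Sketch: decision-tree classifier for student-dropout prediction on toy data.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier, export_text

data = pd.DataFrame({
    "attendance":    [0.95, 0.60, 0.88, 0.45, 0.92, 0.55, 0.70, 0.98] * 10,
    "average_grade": [78,   52,   81,   40,   69,   48,   58,   90] * 10,
    "family_income": [3,    1,    2,    1,    3,    2,    1,    3] * 10,
    "dropout":       [0,    1,    0,    1,    0,    1,    1,    0] * 10,
})
X, y = data.drop(columns="dropout"), data["dropout"]

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))  # inspect the learned rules
print("CV accuracy:", cross_val_score(tree, X, y, cv=5).mean())
```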


A survey by the World Health Organization has revealed that the retinal eye disease glaucoma is the second leading cause of blindness worldwide. It is a disease that can steal the vision of the patient without any warning or symptoms. About half of the world's glaucoma patients are estimated to be in Asia. Hence, for social and economic reasons, glaucoma detection is necessary for preventing blindness and reducing the cost of surgical treatment of the disease. The objective of the paper is to predict and detect glaucoma efficiently using image processing and machine learning based classification techniques. A unique template approach for segmentation, a Gray Level Co-occurrence Matrix (GLCM) based feature extraction approach and a wavelet transform based approach are used to extract structure- and texture-based features. Combining structure-based and texture-based techniques with machine learning improves the efficiency of the system. The developed computer-aided glaucoma detection system classifies a fundus image as either a normal or a glaucomatous image based on structural features of the fundus image such as the Cup-to-Disc Ratio (CDR), Rim-to-Disc Ratio (RDR), superior and inferior neuro-retinal rim thicknesses, vessel structure based features, and the distribution of texture features in the fundus images.
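The texture side of such a pipeline might look like the sketch below: GLCM descriptors computed from a grayscale fundus patch are concatenated with precomputed structural values (CDR, RDR) and passed to a classifier. Function names assume a recent scikit-image; the patches, ratios and labelling rule are synthetic placeholders, not the system described in the paper.

```python
# Sketch: GLCM texture features combined with structural measurements.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

def fundus_features(patch, cdr, rdr):
    """Concatenate GLCM texture descriptors with structural measurements."""
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    texture = [graycoprops(glcm, prop).mean()
               for prop in ("contrast", "homogeneity", "energy", "correlation")]
    return np.array(texture + [cdr, rdr])

rng = np.random.default_rng(0)
patches = rng.integers(0, 256, size=(40, 64, 64), dtype=np.uint8)  # toy patches
cdrs = rng.uniform(0.2, 0.8, 40)          # cup-to-disc ratio per image
rdrs = 1.0 - cdrs                         # crude stand-in for rim-to-disc ratio
labels = (cdrs > 0.5).astype(int)         # 1 = glaucomatous (toy rule)

X = np.vstack([fundus_features(p, c, r) for p, c, r in zip(patches, cdrs, rdrs)])
clf = RandomForestClassifier(random_state=0).fit(X, labels)
print("Training accuracy:", clf.score(X, labels))
```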


2020 ◽  
Author(s):  
Esra Ay ◽  
Burak Eken ◽  
Tuğba Önal-Süzek

Abstract: According to the World Health Organization (WHO) 2016 report, there are over 650 million obese adults and more than 2 billion overweight individuals in the world, and it is estimated that this number will reach 2.7 billion by 2025 [1]. A sedentary lifestyle with low physical activity is considered to be one of the most influential environmental factors leading to various chronic disease phenotypes such as obesity and metabolic syndrome. On average, 1 out of every 3 people over the age of 20 in Turkey is known to have struggled with the metabolic syndrome [2]. Our project aims to apply the concept of "serious gaming" so that people are entertained, play games, socialize and exercise in parallel, in order to increase the proportion of healthy individuals in our society. In this project, we applied machine learning techniques to integrate real-life accelerometer and gyroscope sensor data obtained from mobile phones into an interactive mobile-based exercise game that does not require any external device such as a smart watch. To the best of our knowledge, our game is the first mobile-only interactive serious game that integrates machine learning techniques and an encouraging virtual environment for individuals in need of exercise.
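A hedged sketch of the sensor-classification step such a game needs: fixed-length windows of accelerometer/gyroscope readings are summarized into simple statistical features and classified into movement labels. The sensor streams and the labels ("squat", "rest") are synthetic assumptions, not the project's data.

```python
# Sketch: activity recognition from windowed accelerometer/gyroscope data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def make_window(active):
    # 50 samples x 6 channels (ax, ay, az, gx, gy, gz); larger motion if active.
    amp = 3.0 if active else 0.3
    return rng.normal(0, amp, size=(50, 6))

windows = [make_window(active=i % 2 == 0) for i in range(200)]
labels = np.array(["squat" if i % 2 == 0 else "rest" for i in range(200)])

def features(window):
    # Per-channel mean, standard deviation, and peak-to-peak amplitude.
    return np.concatenate([window.mean(axis=0),
                           window.std(axis=0),
                           window.max(axis=0) - window.min(axis=0)])

X = np.vstack([features(w) for w in windows])
clf = GradientBoostingClassifier(random_state=0)
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```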


2020 ◽  
Author(s):  
Georgios Kantidakis ◽  
Hein Putter ◽  
Carlo Lancia ◽  
Jacob de Boer ◽  
Andries E Braat ◽  
...  

Abstract Background: Predicting the survival of recipients after liver transplantation is regarded as one of the most important challenges in contemporary medicine. Hence, improving on current prediction models is of great interest. Nowadays, there is a strong discussion in the medical field about machine learning (ML) and whether it has greater potential than traditional regression models when dealing with complex data. Criticism of ML relates to unsuitable performance measures and a lack of interpretability, which is important for clinicians. Methods: In this paper, ML techniques such as random forests and neural networks are applied to a large dataset of 62,294 patients from the United States, with 97 predictors selected on clinical/statistical grounds out of more than 600, to predict survival after transplantation. Of particular interest is also the identification of potential risk factors. A comparison is performed between 3 different Cox models (with all variables, backward selection and LASSO) and 3 machine learning techniques: a random survival forest and 2 partial logistic artificial neural networks (PLANNs). For PLANNs, novel extensions to their original specification are tested. Emphasis is placed on the advantages and pitfalls of each method and on the interpretability of the ML techniques. Results: Well-established predictive measures from the survival field are employed (C-index, Brier score and Integrated Brier Score) and the strongest prognostic factors are identified for each model. The clinical endpoint is overall graft survival, defined as the time between transplantation and the date of graft failure or death. The random survival forest shows slightly better predictive performance than the Cox models based on the C-index. The neural networks show better performance than both the Cox models and the random survival forest based on the Integrated Brier Score at 10 years. Conclusion: In this work, it is shown that machine learning techniques can be a useful tool for both prediction and interpretation in the survival context. Of the ML techniques examined here, the PLANN with 1 hidden layer predicts survival probabilities the most accurately, while being as well calibrated as the Cox model with all variables.
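A small scikit-survival sketch of the comparison described above, on synthetic toy data rather than the transplant registry: a Cox model and a random survival forest are fitted and compared by Harrell's C-index. The data-generating process and model settings are assumptions.

```python
# Sketch: Cox model vs. random survival forest compared by C-index.
import numpy as np
from sksurv.ensemble import RandomSurvivalForest
from sksurv.linear_model import CoxPHSurvivalAnalysis
from sksurv.metrics import concordance_index_censored
from sksurv.util import Surv

rng = np.random.default_rng(0)
n = 400
X = rng.normal(size=(n, 5))                       # 5 toy predictors
risk = X[:, 0] + 0.5 * X[:, 1]
time = rng.exponential(scale=np.exp(-risk))       # event times depend on risk
event = rng.random(n) < 0.7                       # ~30% right-censoring
y = Surv.from_arrays(event=event, time=time)

train, test = np.arange(300), np.arange(300, n)
for name, model in [("Cox", CoxPHSurvivalAnalysis()),
                    ("RSF", RandomSurvivalForest(n_estimators=200, random_state=0))]:
    model.fit(X[train], y[train])
    pred = model.predict(X[test])                 # higher value = higher risk
    cindex = concordance_index_censored(event[test], time[test], pred)[0]
    print(f"{name}: C-index = {cindex:.3f}")
```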


2021 ◽  
Vol 297 ◽  
pp. 01073
Author(s):  
Sabyasachi Pramanik ◽  
K. Martin Sagayam ◽  
Om Prakash Jena

Cancer has been described as a diverse illness with several distinct subtypes that may occur simultaneously. As a result, early detection and prognosis of cancer types have become essential in cancer research, since they may help to improve the clinical management of cancer survivors. The significance of categorizing cancer patients into higher- or lower-risk categories has prompted numerous research groups from the bioscience and genomics fields to investigate the utilization of machine learning (ML) algorithms in cancer diagnosis and treatment. Because of this, these methods have been used with the goal of modeling the development and treatment of malignant diseases in humans. Furthermore, the capacity of machine learning techniques to identify important characteristics from complicated datasets demonstrates the significance of these technologies. These technologies include Bayesian networks and artificial neural networks, along with a number of other approaches. Decision Trees and Support Vector Machines, which have already been extensively used in cancer research for the creation of predictive models, also lead to accurate decision making. The application of machine learning techniques may undoubtedly enhance our knowledge of cancer development; nevertheless, a sufficient degree of validation is required before these approaches can be considered for use in daily clinical practice. An overview of current machine learning approaches utilized in the modeling of cancer development is presented in this paper. All of the supervised machine learning approaches described here, along with a variety of input characteristics and data samples, are used to build the prediction models. In light of the increasing trend towards the use of machine learning methods in biomedical research, we present the most current papers that have used these approaches to predict the risk of cancer or patient outcomes, in order to better understand cancer.
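As a generic example of the supervised model-building this review surveys (not an analysis from the paper), the sketch below cross-validates an SVM and a decision tree on the public Wisconsin breast-cancer dataset bundled with scikit-learn, reporting accuracy and AUC. The dataset and settings are illustrative choices only.

```python
# Sketch: cross-validated SVM and decision-tree cancer prediction models.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
models = {
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "Decision tree": DecisionTreeClassifier(max_depth=4, random_state=0),
}
for name, model in models.items():
    scores = cross_validate(model, X, y, cv=10, scoring=("accuracy", "roc_auc"))
    print(f"{name}: accuracy={scores['test_accuracy'].mean():.3f} "
          f"AUC={scores['test_roc_auc'].mean():.3f}")
```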


2020 ◽  
Vol 20 (1) ◽  
Author(s):  
Georgios Kantidakis ◽  
Hein Putter ◽  
Carlo Lancia ◽  
Jacob de Boer ◽  
Andries E. Braat ◽  
...  

Abstract Background Predicting the survival of recipients after liver transplantation is regarded as one of the most important challenges in contemporary medicine. Hence, improving on current prediction models is of great interest. Nowadays, there is a strong discussion in the medical field about machine learning (ML) and whether it has greater potential than traditional regression models when dealing with complex data. Criticism of ML relates to unsuitable performance measures and a lack of interpretability, which is important for clinicians. Methods In this paper, ML techniques such as random forests and neural networks are applied to a large dataset of 62,294 patients from the United States, with 97 predictors selected on clinical/statistical grounds out of more than 600, to predict survival after transplantation. Of particular interest is also the identification of potential risk factors. A comparison is performed between 3 different Cox models (with all variables, backward selection and LASSO) and 3 machine learning techniques: a random survival forest and 2 partial logistic artificial neural networks (PLANNs). For PLANNs, novel extensions to their original specification are tested. Emphasis is placed on the advantages and pitfalls of each method and on the interpretability of the ML techniques. Results Well-established predictive measures from the survival field are employed (C-index, Brier score and Integrated Brier Score) and the strongest prognostic factors are identified for each model. The clinical endpoint is overall graft survival, defined as the time between transplantation and the date of graft failure or death. The random survival forest shows slightly better predictive performance than the Cox models based on the C-index. The neural networks show better performance than both the Cox models and the random survival forest based on the Integrated Brier Score at 10 years. Conclusion In this work, it is shown that machine learning techniques can be a useful tool for both prediction and interpretation in the survival context. Of the ML techniques examined here, the PLANN with 1 hidden layer predicts survival probabilities the most accurately, while being as well calibrated as the Cox model with all variables. Trial registration Retrospective data were provided by the Scientific Registry of Transplant Recipients under Data Use Agreement number 9477 for analysis of risk factors after liver transplantation.
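A complementary scikit-survival sketch for the evaluation side of this study: the Integrated Brier Score is computed from predicted survival curves of a random survival forest on synthetic toy data. The data, model and evaluation time grid are assumptions, not the registry analysis.

```python
# Sketch: Integrated Brier Score from predicted survival curves.
import numpy as np
from sksurv.ensemble import RandomSurvivalForest
from sksurv.metrics import integrated_brier_score
from sksurv.util import Surv

rng = np.random.default_rng(1)
n = 400
X = rng.normal(size=(n, 5))
time = rng.exponential(scale=np.exp(-X[:, 0]))    # toy event/censoring times
event = rng.random(n) < 0.7
y = Surv.from_arrays(event=event, time=time)
train, test = np.arange(300), np.arange(300, n)

rsf = RandomSurvivalForest(n_estimators=200, random_state=0).fit(X[train], y[train])
# Evaluation grid: quantiles of the training event times, inside the follow-up range.
times = np.quantile(time[train][event[train]], np.linspace(0.1, 0.6, 8))
surv_funcs = rsf.predict_survival_function(X[test])
preds = np.vstack([fn(times) for fn in surv_funcs])   # survival probabilities
ibs = integrated_brier_score(y[train], y[test], preds, times)
print(f"Integrated Brier Score: {ibs:.3f}")
```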

