Survival prediction among heart patients using machine learning techniques

2021 ◽  
Vol 19 (1) ◽  
pp. 134-145
Author(s):  
Abdulwahab Ali Almazroi ◽  

<abstract><p>Cardiovascular diseases are regarded as the most common cause of death worldwide. According to the World Health Organization, nearly 17.9 million people die of heart-related diseases each year. The high share of cardiovascular disease in total worldwide deaths has motivated researchers to focus on ways to reduce these numbers. In this regard, several works have focused on developing machine learning techniques/algorithms for early detection, diagnosis, and subsequent treatment of cardiovascular diseases. These works address a variety of issues, such as identifying important features for effectively predicting the occurrence of heart-related diseases and calculating survival probability. This research contributes to the body of literature by selecting a standard, well-defined, and well-curated dataset, as well as a set of standard benchmark algorithms, to independently verify their performance on a set of different performance evaluation metrics. From our experimental evaluation, the decision tree was the best-performing algorithm in comparison to logistic regression, support vector machines, and artificial neural networks. Decision trees achieved 14% better accuracy than the average performance of the remaining techniques. In contrast to other studies, this research observed that artificial neural networks are not as competitive as the decision tree or the support vector machine.</p></abstract>
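The four-way comparison described in the abstract can be sketched in a few lines of scikit-learn. This is an illustrative example on synthetic data, not the paper's heart-failure dataset; the model settings are assumptions for the sketch.

```python
# Illustrative comparison of the four model families from the abstract:
# decision tree, logistic regression, SVM, and an ANN (synthetic data).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Stand-in for a tabular clinical dataset with a binary survival label.
X, y = make_classification(n_samples=600, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "decision_tree": DecisionTreeClassifier(max_depth=4, random_state=0),
    "logistic_regression": LogisticRegression(max_iter=1000),
    "svm": SVC(),
    "ann": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
}
scores = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    scores[name] = accuracy_score(y_te, model.predict(X_te))
for name, acc in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {acc:.3f}")
```

On the real data one would also report the paper's other evaluation metrics (e.g. precision, recall, AUC) rather than accuracy alone.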

Author(s):  
Mehmet Fatih Bayramoglu ◽  
Cagatay Basarir

Investing in developed markets offers investors the opportunity to diversify internationally by investing in foreign firms; in other words, it offers the possibility of reducing systematic risk. For this reason, investors are very interested in developed markets. However, developed markets are more efficient than emerging markets, so both risk and return tend to be low in them. Developed-market investors therefore often use machine learning techniques to increase their gains while reducing their risks. In this chapter, artificial neural networks (ANNs), one family of machine learning techniques, are tested as a way to improve the performance of an internationally diversified portfolio. The results of the ANNs are also compared with the performances of traditional portfolios and a benchmark portfolio. The portfolios are derived by ANNs from the data of 16 foreign companies quoted on the NYSE and are held for 30 trading days. According to the results, the portfolio derived by the ANNs gained a 10.30% return, while the traditional portfolios gained a 5.98% return.
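One common way an ANN is used for this kind of portfolio construction is to predict each asset's next-period return from its lagged returns and then select the top-ranked assets. A minimal sketch on synthetic data follows; the network size, lag length, and selection of the top 4 assets are assumptions, not the chapter's actual methodology.

```python
# Minimal sketch: rank assets by ANN-predicted next-period return and
# form an equal-weight portfolio of the top picks (synthetic returns).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
n_assets, n_days, lags = 16, 120, 5
returns = rng.normal(0.0005, 0.01, size=(n_assets, n_days))

predicted = []
for r in returns:
    # Lagged returns as features, the following day's return as target.
    X = np.stack([r[i:i + lags] for i in range(n_days - lags)])
    y = r[lags:]
    model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=500, random_state=0)
    model.fit(X[:-1], y[:-1])
    predicted.append(model.predict(X[-1:])[0])   # forecast for the next day

# Equal-weight portfolio of the 4 assets with the highest predicted return.
top4 = np.argsort(predicted)[-4:]
print("selected assets:", sorted(top4.tolist()))
```

In practice one would validate the forecasts out of sample over the 30-day holding period before comparing against the benchmark portfolio.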


2020 ◽  
Vol 10 (17) ◽  
pp. 5734
Author(s):  
Chee Soon Lim ◽  
Edy Tonnizam Mohamad ◽  
Mohammad Reza Motahari ◽  
Danial Jahed Armaghani ◽  
Rosli Saad

To design geotechnical structures efficiently, it is important to examine the soil's physical properties, so classifying soil with respect to geophysical parameters is an advantageous and popular approach. Novel, quick, cost- and time-effective machine learning techniques can facilitate this classification. This study employs three kinds of machine learning models: decision trees, artificial neural networks (ANNs), and Bayesian networks (BNs). The decision tree models included chi-square automatic interaction detection (CHAID), classification and regression trees (CART), the quick, unbiased, and efficient statistical tree (QUEST), and C5; the ANN models included the multi-layer perceptron (MLP) and the radial basis function (RBF) network; and the BN models included Tree Augmented Naïve Bayes (TAN) and the Markov Blanket. These models were employed to predict soil classifications using geophysical investigations and laboratory tests. The performance of each model was assessed through accuracy, stability, and gains. The results showed that while the BAYESIANMARKOV model achieved the highest overall accuracy (100%) in the training phase, it achieved the lowest accuracy (34.21%) in the testing phase and therefore had the worst stability. QUEST had the second-highest overall training accuracy (99.12%) and the highest overall testing accuracy (94.74%); it was thus reasonably stable, with acceptable overall training and testing accuracy for predicting soil characteristics. Future studies can use the findings of this paper as a benchmark for classifying soil characteristics and for selecting the best machine learning technique to perform this classification.
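The stability criterion above (a model that scores 100% in training but 34% in testing is unstable) amounts to measuring the train-test accuracy gap. A small sketch on synthetic data, with assumed tree depths, illustrates the idea:

```python
# Sketch of the stability check: compare training vs. testing accuracy
# of a decision tree; a large gap signals an unstable, overfit model.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Stand-in for a multi-class soil-classification dataset.
X, y = make_classification(n_samples=400, n_features=8, n_classes=3,
                           n_informative=5, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

deep = DecisionTreeClassifier(random_state=1).fit(X_tr, y_tr)  # unconstrained
shallow = DecisionTreeClassifier(max_depth=3, random_state=1).fit(X_tr, y_tr)

for name, m in [("deep", deep), ("shallow", shallow)]:
    gap = m.score(X_tr, y_tr) - m.score(X_te, y_te)
    print(f"{name}: train-test gap = {gap:.3f}")
```

The unconstrained tree typically memorizes the training set (100% training accuracy) while the constrained one trades a little training accuracy for stability, mirroring the BAYESIANMARKOV-versus-QUEST contrast in the abstract.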


2014 ◽  
pp. 126-134
Author(s):  
Akira Imada

This article considers computer network intrusion detection using artificial neural networks or other machine learning techniques. We assume that an intrusion into a network is like a needle in a haystack, not like one species in a family of iris flowers, and we consider how such an attack might be detected in an intelligent way, if at all.
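The "needle in a haystack" framing, as opposed to the balanced-classes setting of the classic iris dataset, suggests unsupervised anomaly detection rather than ordinary classification. A sketch of that idea using an isolation forest, on invented traffic features (the article itself does not prescribe this method):

```python
# "Needle in a haystack" sketch: treat intrusions as rare outliers and
# flag them with an unsupervised detector instead of a class boundary.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(995, 4))    # ordinary traffic features
attacks = rng.normal(7, 1, size=(5, 4))     # a handful of intrusions
X = np.vstack([normal, attacks])            # attacks are rows 995..999

# contamination = assumed fraction of anomalies in the data.
detector = IsolationForest(contamination=0.005, random_state=0).fit(X)
labels = detector.predict(X)                # -1 = anomaly, 1 = normal
flagged = np.where(labels == -1)[0]
print("flagged rows:", flagged.tolist())
```

Unlike a supervised classifier, this approach needs no labeled attacks, which matters precisely because intrusions are too rare to form a well-sampled class.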


2020 ◽  
Author(s):  
Georgios Kantidakis ◽  
Hein Putter ◽  
Carlo Lancia ◽  
Jacob de Boer ◽  
Andries E Braat ◽  
...  

Abstract
Background: Predicting the survival of recipients after liver transplantation is regarded as one of the most important challenges in contemporary medicine; hence, improving on current prediction models is of great interest. Nowadays, there is a strong discussion in the medical field about machine learning (ML) and whether it has greater potential than traditional regression models when dealing with complex data. Criticism of ML relates to unsuitable performance measures and a lack of interpretability, which is important for clinicians.
Methods: In this paper, ML techniques such as random forests and neural networks are applied to a large dataset of 62,294 patients from the United States, with 97 predictors selected on clinical/statistical grounds out of more than 600, to predict survival from transplantation. Of particular interest is also the identification of potential risk factors. A comparison is performed between three different Cox models (with all variables, backward selection, and LASSO) and three machine learning techniques: a random survival forest and two partial logistic artificial neural networks (PLANNs). For the PLANNs, novel extensions to their original specification are tested. Emphasis is given to the advantages and pitfalls of each method and to the interpretability of the ML techniques.
Results: Well-established predictive measures from the survival field are employed (C-index, Brier score, and Integrated Brier Score), and the strongest prognostic factors are identified for each model. The clinical endpoint is overall graft survival, defined as the time between transplantation and the date of graft failure or death. The random survival forest shows slightly better predictive performance than the Cox models based on the C-index. Neural networks show better performance than both the Cox models and the random survival forest based on the Integrated Brier Score at 10 years.
Conclusion: This work shows that machine learning techniques can be a useful tool for both prediction and interpretation in the survival context. Of the ML techniques examined here, the PLANN with one hidden layer predicts survival probabilities the most accurately, while being as well calibrated as the Cox model with all variables.
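The C-index (Harrell's concordance index) used to compare the models above is simple enough to compute from scratch: it is the fraction of usable patient pairs in which the model ranks risk consistently with the observed survival times. A pure-Python sketch on invented data:

```python
# Harrell's concordance index: over all comparable pairs (i had the event
# and a strictly shorter time than j), count how often the model assigns
# i the higher risk; ties in risk count as half-concordant.
def concordance_index(times, events, risk_scores):
    """times: survival/censoring times; events: 1 = event, 0 = censored;
    risk_scores: higher score = higher predicted risk (shorter survival)."""
    concordant, ties, comparable = 0, 0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    ties += 1
    return (concordant + 0.5 * ties) / comparable

# Toy example: risk scores perfectly anti-ordered with survival time.
times = [2, 4, 6, 8, 10]
events = [1, 1, 0, 1, 1]          # third patient is censored
risks = [0.9, 0.7, 0.6, 0.4, 0.2]
print(concordance_index(times, events, risks))  # 1.0: perfect concordance
```

A value of 0.5 corresponds to random ranking and 1.0 to perfect ranking; the Brier-score family, in contrast, additionally measures calibration of the predicted probabilities.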


Author(s):  
Hesham M. Al-Ammal

Detection of anomalies in a given dataset is a vital step in several cybersecurity applications, including intrusion detection, fraud detection, and social network analysis. Many of these techniques detect anomalies by examining graph-based data. Analyzing graphs makes it possible to capture relationships and communities, as well as anomalies. The advantage of using graphs is that many real-life situations can be easily modeled by a graph that captures their structure and interdependencies. Although anomaly detection in graphs dates back to the 1990s, recent research advances have applied machine learning methods to anomaly detection over graphs. This chapter concentrates on static graphs (both labeled and unlabeled) and summarizes some of these recent machine learning studies of anomaly detection in graphs, covering methods such as support vector machines, neural networks, generative neural networks, and deep learning methods. The chapter reflects on the successes and challenges of using these methods in the context of graph-based anomaly detection.


2018 ◽  
Author(s):  
Sandip S Panesar ◽  
Rhett N D’Souza ◽  
Fang-Cheng Yeh ◽  
Juan C Fernandez-Miranda

Abstract
Background: Machine learning (ML) is the application of specialized algorithms to datasets for trend delineation, categorization, or prediction. ML techniques have traditionally been applied to large, highly dimensional databases. Gliomas are a heterogeneous group of primary brain tumors, traditionally graded using histopathological features. Recently, the World Health Organization proposed a novel grading system for gliomas incorporating molecular characteristics. We aimed to study whether ML could achieve accurate prognostication of 2-year mortality in a small, highly dimensional database of glioma patients.
Methods: We applied three machine learning techniques, artificial neural networks (ANN), decision trees (DT), and support vector machines (SVM), as well as classical logistic regression (LR), to a dataset consisting of 76 glioma patients of all grades. We compared the effect of applying the algorithms to the raw database versus a database where only statistically significant features were included as algorithmic inputs (feature selection).
Results: The raw input consisted of 21 variables and achieved performance (accuracy/AUC) of 70.7%/0.70 for ANN, 68%/0.72 for SVM, 66.7%/0.64 for LR, and 65%/0.70 for DT. The feature-selected input consisted of 14 variables and achieved performance of 73.4%/0.75 for ANN, 73.3%/0.74 for SVM, 69.3%/0.73 for LR, and 65.2%/0.63 for DT.
Conclusions: We demonstrate that these techniques can also be applied to small yet highly dimensional datasets. Our ML techniques achieved reasonable performance compared to similar studies in the literature. Although local databases may be small compared with larger cancer repositories, we show that ML techniques can still be applied to their analysis, though traditional statistical methods offer similar benefit.
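The feature-selection step described above (keeping only statistically significant variables, which here reduced 21 inputs to 14) can be sketched with a univariate test filter. The data and the choice of an ANOVA F-test are illustrative assumptions, not the paper's exact procedure:

```python
# Sketch of univariate feature selection before model training: keep only
# the k features that score highest on a statistical test (synthetic data).
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

# 76 samples with 21 raw variables, mirroring the abstract's 21 -> 14.
X, y = make_classification(n_samples=76, n_features=21, n_informative=6,
                           random_state=0)
selector = SelectKBest(score_func=f_classif, k=14).fit(X, y)
X_selected = selector.transform(X)
print("raw features:", X.shape[1], "-> selected:", X_selected.shape[1])
```

On small, highly dimensional datasets like this one, discarding uninformative inputs reduces the variance of the fitted models, which is consistent with the accuracy gains the abstract reports for ANN, SVM, and LR after selection.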


2020 ◽  
Author(s):  
Akshay Kumar ◽  
Farhan Mohammad Khan ◽  
Rajiv Gupta ◽  
Harish Puppala

Abstract
The outbreak of COVID-19 was first identified in China; it later spread to various parts of the globe and was declared a pandemic by the World Health Organization (WHO). The transmissible person-to-person pneumonia caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2, the virus responsible for COVID-19) has sparked global alarm. Thermal screening, quarantining, and later lockdowns were methods employed by various nations to contain the spread of the virus. Although exercising every possible plan to contain the spread helps mitigate the effect of COVID-19, projecting the rise in cases and preparing to face the crisis would help minimize it. In this scenario, this study attempts to use machine learning tools to forecast the possible rise in the number of cases from the data on daily new cases. To capture the uncertainty, three different techniques are used to project the data and capture the possible deviation: (i) the decision tree algorithm, (ii) the support vector machine algorithm, and (iii) Gaussian process regression. Based on the projections of new cases, recovered cases, deceased cases, medical facilities, population density, number of tests conducted, and availability of services, a criticality index (CI) is defined. The CI is used to classify all districts of the country into high-risk, moderate-risk, and low-risk regions. An online dashboard is created, which updates the data on a daily basis for the next four weeks. The prospective suggestions of this study would aid in planning strategies, such as lockdowns or other measures, for any country, which can choose other parameters to define its CI.
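Of the three projection techniques named above, Gaussian process regression is the one that natively reports the "possible deviation" as a predictive standard deviation. A sketch fitted to a short, hypothetical daily-case series (the kernel choice and growth curve are assumptions, not the study's settings):

```python
# Gaussian process regression sketch: point forecast plus uncertainty band
# for the next 7 days, fitted to a hypothetical daily-case series.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

days = np.arange(30, dtype=float).reshape(-1, 1)
cases = 50 * np.exp(0.08 * days.ravel())          # invented growth curve

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                               normalize_y=True, random_state=0)
gpr.fit(days, cases)

future = np.arange(30, 37, dtype=float).reshape(-1, 1)
mean, std = gpr.predict(future, return_std=True)  # forecast + deviation
for d, m, s in zip(future.ravel(), mean, std):
    print(f"day {int(d)}: {m:.0f} cases (±{s:.0f})")
```

The widening standard deviation beyond the observed window is exactly the kind of uncertainty estimate a criticality index would want to incorporate when classifying districts.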


2020 ◽  
Author(s):  
Mohamed El Boujnouni

Abstract
Coronavirus disease 2019 (COVID-19) is a global health crisis caused by a virus officially named severe acute respiratory syndrome coronavirus 2 and widely known by the acronym SARS-CoV-2. This very contagious illness has severely impacted people and businesses all over the world, and scientists are still trying to discover all useful information about it, including its potential origin(s) and intermediate host(s). This study is part of that scientific inquiry; it aims to identify precisely the origin(s) of a large set of SARS-CoV-2 genomes collected from different geographic locations around the world. The research combines five powerful machine learning techniques (Naïve Bayes, K-Nearest Neighbors, Artificial Neural Networks, Decision Trees, and Support Vector Machines) with a widely known language-modeling tool (N-grams). The experimental results showed that the majority of the techniques gave the same global results concerning the origin(s) and intermediate host(s) of SARS-CoV-2: the virus has one zoonotic source, the pangolin.
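The N-gram plus classifier pipeline described above treats a genome as text: character n-grams become count features, which any of the five classifiers can consume. A minimal sketch with Naïve Bayes on toy, invented fragments (real genomes are tens of thousands of bases, and the labels here are purely illustrative):

```python
# Sketch of the N-gram + Naive Bayes pipeline: nucleotide 3-grams as
# count features, then a classifier that assigns a candidate host.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy training fragments labelled by host (illustrative, not real genomes).
sequences = ["ATGCGTACGT", "ATGCGTTCGT", "TTAGGCATCG", "TTAGGCTTCG"]
hosts = ["bat", "bat", "pangolin", "pangolin"]

vectorizer = CountVectorizer(analyzer="char", ngram_range=(3, 3))
X = vectorizer.fit_transform(sequences)   # rows = sequences, cols = 3-grams
clf = MultinomialNB().fit(X, hosts)

query = vectorizer.transform(["TTAGGCATCG"])
print(clf.predict(query)[0])
```

Swapping `MultinomialNB` for k-NN, a decision tree, an SVM, or a small neural network on the same n-gram matrix reproduces the study's five-way comparison in outline.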

