Predictive Models for Differentiation Between Normal and Abnormal EEG Through Cross-Correlation and Machine Learning Techniques

Author(s):  
Jefferson Tales Oliva ◽  
João Luís Garcia Rosa


2020 ◽
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Kumash Kapadia ◽  
Hussein Abdel-Jaber ◽  
Fadi Thabtah ◽  
Wael Hadi

The Indian Premier League (IPL) is one of the most popular cricket tournaments in the world: its financial value increases each season, its viewership has grown markedly, and its betting market expands significantly every year. Because cricket is a very dynamic game that changes ball by ball, bettors and bookies are incentivised to bet on match results. This paper investigates machine learning technology to address the problem of predicting cricket match results from historical IPL match data. Influential features of the dataset were identified using filter-based methods including Correlation-based Feature Selection, Information Gain (IG), ReliefF and Wrapper. More importantly, machine learning techniques including Naïve Bayes, Random Forest, K-Nearest Neighbour (KNN) and Model Trees (classification via regression) were adopted to generate predictive models from the distinctive feature sets derived by the filter-based methods. Two feature subsets were formulated, one based on home-team advantage and the other on the toss decision. The selected machine learning techniques were applied to both feature sets to determine a predictive model. Experimental tests show that tree-based models, particularly Random Forest, performed better in terms of accuracy, precision and recall when compared to probabilistic and statistical models. However, on the toss feature subset, none of the considered machine learning algorithms performed well in producing accurate predictive models.
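As a concrete illustration of the pipeline this abstract describes, here is a minimal Python (scikit-learn) sketch combining a filter-based selection step with a Random Forest classifier; mutual information stands in for the Information Gain filter, and the file name ipl_matches.csv and the winner column are hypothetical placeholders, not the authors' actual data layout.

```python
# Minimal sketch: filter-based feature selection + Random Forest,
# in the spirit of the IPL match-prediction pipeline described above.
# The CSV path and column names are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

df = pd.read_csv("ipl_matches.csv")              # hypothetical historical match data
X = pd.get_dummies(df.drop(columns=["winner"]))  # one-hot encode categorical features
y = df["winner"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

# mutual_info_classif plays the role of the Information Gain filter.
model = Pipeline([
    ("select", SelectKBest(mutual_info_classif, k=10)),
    ("rf", RandomForestClassifier(n_estimators=200, random_state=42)),
])
model.fit(X_train, y_train)

pred = model.predict(X_test)
print("accuracy :", accuracy_score(y_test, pred))
print("precision:", precision_score(y_test, pred, average="weighted"))
print("recall   :", recall_score(y_test, pred, average="weighted"))
```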


2021 ◽  
Author(s):  
Asad Mustafa Elmgerbi ◽  
Clemens Peter Ettinger ◽  
Peter Mbah Tekum ◽  
Gerhard Thonhauser ◽  
Andreas Nascimento

Abstract Over the past decade, several models have been generated to predict Rate of Penetration (ROP) in real time. In general, these models can be classified into two categories: model-driven (analytical) models and data-driven models (based on machine learning techniques), the latter considered cutting-edge technology in terms of predictive accuracy and minimal human intervention. Nevertheless, most existing machine learning models are used mainly for prediction, not optimization. The ROP ahead of the bit for a certain formation layer can be predicted with such methods, but they are limited in finding an optimum set of operating parameters for ROP optimization. In this regard, two data-driven models for ROP prediction have been developed and thereafter merged into an optimizer model. The purpose of the optimization process is to seek the ideal combinations of drilling parameters that would improve ROP in real time for a given formation. This paper focuses on describing the development process of smart data-driven models (built in the MATLAB software environment) for real-time rate of penetration prediction and optimization within a sufficient time span and without disturbing the drilling process, as is typically required by a drill-off test. The models used here can be classified into two groups: two predictive models, an Artificial Neural Network (ANN) and a Random Forest (RF), plus one optimizer, namely a genetic algorithm. The process started with the development, optimization, and validation of the predictive models, which subsequently were linked to the genetic algorithm (GA) for real-time optimization. Automated optimization algorithms were integrated into the development of the predictive models to improve model efficiency and reduce errors. To validate the functionality of the developed ROP optimization model, two different cases were studied. In the first case, historical drilling data from different wells were used, and the results confirmed that of the three known controllable surface drilling parameters, weight on bit (WOB) has the highest impact on ROP, followed by flow rate (FR) and finally rotations per minute (RPM), which has the least impact. In the second case, a laboratory-scale drilling rig, the "CDC miniRig", was utilized to validate the developed model; during the validation only the previously named parameters were used. Several meters were drilled through sandstone cubes at different weights on bit, rotations per minute, and flow rates to develop the predictive models; then the optimizer was activated to propose the optimal set of the used parameters, which would likely maximize the ROP. The proposed parameters were implemented, and the results showed that ROP improved as expected.
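To make the predict-then-optimize loop concrete, here is a minimal Python sketch (the paper's own models were built in MATLAB): a Random Forest ROP model wrapped in a hand-rolled genetic algorithm that searches over WOB, RPM and flow rate. The training data, parameter bounds and GA settings below are synthetic assumptions for illustration only.

```python
# Sketch of the predict-then-optimize loop: a Random Forest ROP model
# wrapped in a simple genetic algorithm over (WOB, RPM, FR).
# All data and bounds are synthetic stand-ins, not the paper's values.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for historical drilling data: ROP rises most with WOB,
# then flow rate, then RPM (mirroring the ranking reported above).
X = rng.uniform([5, 50, 200], [35, 180, 600], size=(500, 3))   # WOB, RPM, FR
rop = 2.0 * X[:, 0] + 0.3 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 5, 500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, rop)

# Minimal genetic algorithm: truncation selection, blend crossover,
# Gaussian mutation, and clipping to the operating bounds.
low, high = np.array([5.0, 50.0, 200.0]), np.array([35.0, 180.0, 600.0])
pop = rng.uniform(low, high, size=(40, 3))
for generation in range(30):
    fitness = model.predict(pop)                      # predicted ROP is the fitness
    parents = pop[np.argsort(fitness)[-20:]]          # keep the best half
    mates = parents[rng.integers(0, 20, size=20)]
    children = 0.5 * (parents + mates)                # blend crossover
    children += rng.normal(0, 0.05 * (high - low), children.shape)  # mutation
    pop = np.clip(np.vstack([parents, children]), low, high)

best = pop[np.argmax(model.predict(pop))]
print("suggested WOB, RPM, FR:", best.round(2))
```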


PLoS ONE ◽  
2018 ◽  
Vol 13 (10) ◽  
pp. e0203928 ◽  
Author(s):  
Leily Farrokhvar ◽  
Azadeh Ansari ◽  
Behrooz Kamali

2019 ◽  
Vol 9 (18) ◽  
pp. 3715 ◽  
Author(s):  
Hai Xu ◽  
Jian Zhou ◽  
Panagiotis G. Asteris ◽  
Danial Jahed Armaghani ◽  
Mahmood Md Tahir

Predicting the penetration rate is a complex and challenging task due to the interaction between the tunnel boring machine (TBM) and the rock mass. Many studies highlight the use of empirical and theoretical techniques in predicting TBM performance. However, reliable performance prediction of a TBM is of crucial importance to mining and civil projects, as it can minimize the risks associated with capital costs. This study presents new applications of supervised machine learning techniques, i.e., k-nearest neighbor (KNN), chi-squared automatic interaction detection (CHAID), support vector machine (SVM), classification and regression trees (CART) and neural network (NN), in predicting the penetration rate (PR) of a TBM. To achieve this aim, an experimental database was set up based on field observations and laboratory tests for a tunneling project in Malaysia. In the database, uniaxial compressive strength, Brazilian tensile strength, rock quality designation, weathering zone, thrust force, and revolutions per minute were utilized as inputs to predict the PR of the TBM. Then, KNN, CHAID, SVM, CART, and NN predictive models were developed in order to select the best one. A simple ranking technique, as well as several performance indices, was calculated for each developed model. According to the obtained results, KNN received the highest ranking value among all five predictive models and was selected as the best predictive model of this study. It can be concluded that KNN is able to provide high performance capacity in predicting TBM PR. The KNN model identified uniaxial compressive strength (importance 0.2) as the most important factor and revolutions per minute (0.14) as the least important factor for predicting the TBM penetration rate.
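A minimal sketch of the winning approach, assuming a hypothetical data file and column names: a scaled k-nearest-neighbour regressor over the six inputs listed above. Scaling is included because KNN is distance-based and these inputs have very different units.

```python
# Sketch: k-nearest-neighbour regression of TBM penetration rate from the
# six inputs named above. The file and column names are hypothetical.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

cols = ["UCS", "BTS", "RQD", "weathering_zone", "thrust_force", "RPM"]
df = pd.read_csv("tbm_tunnel_data.csv")      # hypothetical field/lab database
X, y = df[cols], df["penetration_rate"]

knn = Pipeline([
    ("scale", StandardScaler()),             # put all units on a common scale
    ("knn", KNeighborsRegressor(n_neighbors=5, weights="distance")),
])
print("CV R^2:", cross_val_score(knn, X, y, cv=5, scoring="r2").mean())
```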


2021 ◽  
Vol 22 (6) ◽  
pp. 2903
Author(s):  
Noam Auslander ◽  
Ayal B. Gussow ◽  
Eugene V. Koonin

The exponential growth of biomedical data in recent years has spurred the application of numerous machine learning techniques to address emerging problems in biology and clinical research. By enabling automatic feature extraction, feature selection, and the generation of predictive models, these methods can be used to efficiently study complex biological systems. Machine learning techniques are frequently integrated with bioinformatic methods, as well as curated databases and biological networks, to enhance training and validation, identify the most interpretable features, and enable feature and model investigation. Here, we review recently developed methods that incorporate machine learning within the same framework as techniques from molecular evolution, protein structure analysis, systems biology, and disease genomics. We outline the challenges posed for machine learning, in particular deep learning, in biomedicine, and suggest unique opportunities for machine learning techniques integrated with established bioinformatics approaches to overcome some of these challenges.


Author(s):  
Saranya N ◽  
Karthika Renuka D

Epilepsy is one of the most prevalent neurological disorders: a chronic condition characterized by involuntary, unpredictable, and recurrent seizures that affects millions of individuals worldwide. In this chronic condition, brief alterations in normal brain function occur that affect the health of patients. Detecting epileptic seizures before onset is beneficial. Recent studies have suggested machine learning approaches that automatically execute these diagnostic tasks by integrating statistics and computer science. Machine learning, an application of Artificial Intelligence (AI) technology, allows a machine to learn automatically and thereby improve its output through meaningful data. Machine learning techniques and computational methods are used to predict epileptic seizures from electroencephalogram (EEG) signals. A vast amount of medical data about the disease, its symptoms, its causes, and its effects is available today, but this data is not analyzed properly to predict or study the disease. The objective of this paper is to provide a detailed overview of machine learning predictive models for epileptic seizure detection and to describe several types of predictive models and their applications in the field of healthcare. If seizures can be predicted before they occur, epilepsy patients can improve their safety and quality of life.
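As an illustration of the kind of EEG-based predictive model the paper surveys, here is a minimal Python sketch: spectral band-power features extracted from windowed EEG, fed to a support vector machine. The input files, sampling rate, and label convention are hypothetical placeholders (e.g., segments exported from a public EEG dataset).

```python
# Sketch of a common seizure-detection pipeline: spectral band-power
# features from windowed EEG, fed to an SVM. The EEG array and labels
# are hypothetical placeholders.
import numpy as np
from scipy.signal import welch
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

fs = 256                                   # sampling rate in Hz (assumed)
segments = np.load("eeg_segments.npy")     # hypothetical: (n_windows, n_samples)
labels = np.load("labels.npy")             # hypothetical: 1 = seizure, 0 = normal

bands = [(0.5, 4), (4, 8), (8, 13), (13, 30), (30, 45)]  # delta .. gamma

def band_powers(window):
    """Mean spectral power in each clinical EEG band."""
    freqs, psd = welch(window, fs=fs, nperseg=fs)
    return [psd[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in bands]

X = np.array([band_powers(w) for w in segments])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```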


F1000Research ◽  
2020 ◽  
Vol 8 ◽  
pp. 1810
Author(s):  
Sameera Senanayake ◽  
Adrian Barnett ◽  
Nicholas Graves ◽  
Helen Healy ◽  
Keshwar Baboolal ◽  
...  

Background: A mechanism to predict graft failure before the actual kidney transplantation occurs is crucial to the clinical management of chronic kidney disease patients. Several kidney graft outcome prediction models, developed using machine learning methods, are available in the literature. However, most of those models used small datasets, and none of the machine learning-based prediction models available in the medical literature modelled time-to-event (survival) information; instead they used the binary outcome of failure or not. The objective of this study is to develop two separate machine learning-based predictive models to predict graft failure following live and deceased donor kidney transplant, using time-to-event data in a large national dataset from Australia. Methods: The dataset provided by the Australia and New Zealand Dialysis and Transplant Registry will be used for the analysis. This retrospective dataset contains the cohort of patients who underwent a kidney transplant in Australia from January 1st, 2007, to December 31st, 2017. This included 3,758 live donor transplants and 7,365 deceased donor transplants. Three machine learning methods (survival tree, random survival forest and survival support vector machine) and one traditional regression method, Cox proportional hazards regression, will be used to develop the two predictive models (for live donor and deceased donor transplants). The best predictive model will be selected based on the model's performance. Discussion: This protocol describes the development of two separate machine learning-based predictive models to predict graft failure following live and deceased donor kidney transplant, using a large national dataset from Australia. Furthermore, these two models will be the most comprehensive kidney graft failure predictive models to have used survival data within machine learning techniques. Thus, these models are expected to provide valuable insight into the complex interactions between graft failure and donor and recipient characteristics.
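A minimal sketch of one of the protocol's three named methods, a random survival forest, using the scikit-survival package; the registry data itself is not public, so the covariates, failure times, and event indicators below are synthetic placeholders for illustration.

```python
# Sketch: random survival forest on time-to-event (survival) data,
# one of the three ML methods named in the protocol above.
# All data here is synthetic; the ANZDATA registry data is not public.
import numpy as np
from sksurv.ensemble import RandomSurvivalForest
from sksurv.util import Surv

rng = np.random.default_rng(1)
n = 300
X = rng.normal(size=(n, 5))                  # stand-in donor/recipient covariates
time = rng.exponential(scale=10, size=n)     # years to graft failure or censoring
event = rng.random(n) < 0.6                  # True = failure observed, False = censored

y = Surv.from_arrays(event=event, time=time) # structured survival outcome
rsf = RandomSurvivalForest(n_estimators=100, min_samples_leaf=10, random_state=1)
rsf.fit(X, y)

# Concordance index: probability the model ranks two patients' risks correctly.
print("C-index:", rsf.score(X, y))
```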


2006 ◽  
Author(s):  
Christopher Schreiner ◽  
Kari Torkkola ◽  
Mike Gardner ◽  
Keshu Zhang
