On the Possibility of Machine Learning Application for Diagnosing Dementia by EEG Signals

Author(s):  
I.V. Dorovskih ◽  
O.V. Senko ◽  
V.Ya. Chuchupal ◽  
A.A. Dokukin ◽  
A.V. Kuznetsova

The purpose of this study was to investigate the possibility of using electroencephalography (EEG) for early diagnosis of dementia and for objective assessment of disease severity and the results of neurometabolic treatment. The study was based on the application of machine learning methods to computer diagnosis of dementia from the energy spectra of EEG signals. The effectiveness of various machine learning technologies was investigated for separating groups of patients with varying severity of dementia from healthy subjects and from patients with pre-dementia disorders, based on vectors of spectral indicators. A cross-validation procedure showed that separation of the dementia group from the group with normal physiological aging and from the group of young people reached ROC AUC values of 0.783 and 0.786, respectively. The results suggest that the algorithmic assessment of dementia severity by EEG corresponds to the actual course of the disease: after neurometabolic therapy, the number of cases with algorithmically identified positive dynamics significantly exceeded the number of cases with algorithmically detected negative dynamics in the group with mild dementia, whereas no such increase was observed in the combined group with moderate and severe disease.
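The evaluation setup described above (separating two groups by vectors of spectral indicators and scoring the separation by cross-validated ROC AUC) can be sketched as follows. This is a minimal illustration on synthetic data: the feature values, group sizes, and the choice of logistic regression are assumptions, not the study's actual pipeline.

```python
# Sketch: cross-validated ROC AUC for separating two groups by
# vectors of spectral indicators. Data are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# 40 "patients" vs 40 "controls", 12 spectral-band power features each
X = np.vstack([rng.normal(0.0, 1.0, (40, 12)),
               rng.normal(0.5, 1.0, (40, 12))])
y = np.array([1] * 40 + [0] * 40)

clf = LogisticRegression(max_iter=1000)
# Mean ROC AUC over 5 cross-validation folds
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated ROC AUC: {auc:.3f}")
```

With real EEG spectra, the feature matrix would hold per-band power values for each subject; the scoring call stays the same.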

2021 ◽  
Vol 15 ◽  
Author(s):  
Jinyu Zang ◽  
Yuanyuan Huang ◽  
Lingyin Kong ◽  
Bingye Lei ◽  
Pengfei Ke ◽  
...  

Recently, machine learning techniques have been widely applied in discriminative studies of schizophrenia (SZ) patients with multimodal magnetic resonance imaging (MRI); however, the effects of brain atlases and machine learning methods remain largely unknown. In this study, we collected MRI data from 61 first-episode SZ patients (FESZ), 79 chronic SZ patients (CSZ) and 205 normal controls (NC) and calculated four MRI measurements: regional gray matter volume (GMV), regional homogeneity (ReHo), amplitude of low-frequency fluctuation, and degree centrality. We systematically analyzed the performance of two classifications (SZ vs NC; FESZ vs CSZ) based on combinations of three brain atlases, five classifiers, two cross-validation methods, and three dimensionality reduction algorithms. Our results showed that the groupwise whole-brain atlas with 268 ROIs outperformed the other two brain atlases. In addition, leave-one-out cross-validation was the best cross-validation method for selecting the best hyperparameter set, but the classification performances of different classifiers and dimensionality reduction algorithms were quite similar. Importantly, the contributions of input features to both classifications were higher for the GMV and ReHo features of brain regions in the prefrontal and temporal gyri. Furthermore, an ensemble learning method was used to establish an integrated model, in which classification performance was improved. Taken together, these findings indicate the effects of these factors in constructing effective classifiers for psychiatric diseases and show that the integrated model has the potential to improve the clinical diagnosis and treatment evaluation of SZ.
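The ensemble step mentioned above, combining several base classifiers into one integrated model, can be illustrated with a soft-voting ensemble. The data, the three base learners, and the voting scheme here are assumptions for the sketch; the study's actual classifiers and MRI features are not reproduced.

```python
# Sketch: an integrated model via soft-voting over several classifiers,
# scored by cross-validation. Data are synthetic, not MRI measurements.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=50,
                           n_informative=10, random_state=0)

# Soft voting averages the predicted class probabilities of the members
ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    voting="soft")

acc = cross_val_score(ensemble, X, y, cv=5).mean()
print(f"5-fold CV accuracy of the ensemble: {acc:.3f}")
```

Soft voting tends to help when the member classifiers make partly uncorrelated errors, which matches the abstract's observation that the integrated model improved performance.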


Kardiologiia ◽  
2020 ◽  
Vol 60 (10) ◽  
pp. 38-46
Author(s):  
B. I. Geltser ◽  
K. J. Shahgeldyan ◽  
V. Y. Rublev ◽  
V. N. Kotelnikov ◽  
A. B. Krieger ◽  
...  

Aim. To compare the accuracy of predicting an in-hospital fatal outcome between models based on current machine-learning technologies in patients with ischemic heart disease (IHD) after coronary bypass (CB) surgery.

Material and methods. A retrospective analysis of 866 electronic medical records was performed for patients (685 men and 181 women) who underwent CB surgery for IHD in 2008–2018. Results of clinical, laboratory, and instrumental evaluations obtained prior to the CB surgery were analyzed. Patients were divided into two groups: group 1 included 35 (4%) patients who died within the first 20 days after CB, and group 2 consisted of 831 (96%) patients with a favorable outcome of the surgery. Predictors of the in-hospital fatal outcome were identified by a multistep selection procedure with analysis of statistical hypotheses and calculation of weight coefficients. For construction of models and verification of predictors, machine-learning methods were used, including multifactorial logistic regression (LR), random forest (RF), and artificial neural networks (ANN). Model accuracy was evaluated by three metrics: area under the ROC curve (AUC), sensitivity, and specificity. Cross-validation of the models was performed on test samples, and control validation was performed on a cohort of patients with IHD after CB whose data were not used in development of the models.

Results. The following 7 risk factors for in-hospital fatal outcome with the greatest predictive potential were isolated from the EuroSCORE II scale: ejection fraction (EF) <30%, EF 30–50%, age of patients with recent MI, damage of peripheral arterial circulation, urgency of CB, and functional class III–IV chronic heart failure, together with 5 additional predictors: heart rate, systolic blood pressure, presence of aortic stenosis, posterior left ventricular (LV) wall relative thickness index (RTI), and LV relative mass index (LVRMI). The models developed by the authors using the LR, RF, and ANN methods had higher AUC values and sensitivity than the classical EuroSCORE II scale. The ANN models including the RTI and LVRMI predictors demonstrated the highest prognostic accuracy, with quality-metric values of AUC 93%, sensitivity 90%, and specificity 96%. The predictive robustness of the models was confirmed by the results of the control validation.

Conclusion. The use of current machine-learning technologies allowed the development of a novel algorithm for selection of predictors and highly accurate models for predicting an in-hospital fatal outcome after CB.
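The three quality metrics used in the study (AUC, sensitivity, specificity) can be computed as below. The data here are synthetic with a class imbalance similar to the cohort (about 4% fatal outcomes); the clinical predictors and the random-forest choice are stand-ins, not the authors' models.

```python
# Sketch: evaluating a classifier on an imbalanced cohort by
# AUC, sensitivity, and specificity. Data are synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

# ~4% positive class, mimicking the 35/831 outcome split
X, y = make_classification(n_samples=866, n_features=12,
                           weights=[0.96, 0.04], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]

auc = roc_auc_score(y_te, proba)
tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
sensitivity = tp / (tp + fn)   # true-positive rate
specificity = tn / (tn + fp)   # true-negative rate
print(f"AUC {auc:.2f}, sensitivity {sensitivity:.2f}, specificity {specificity:.2f}")
```

With only ~4% positives, sensitivity at the default 0.5 threshold can be low; studies like this one typically tune the decision threshold, which AUC itself does not depend on.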


2020 ◽  
pp. 5-18
Author(s):  
N. N. Kiselyova ◽  
V. A. Dudarev ◽  
V. V. Ryazanov ◽  
O. V. Sen’ko ◽  
...  

New chalcospinels of the most common compositions were predicted: AIBIIICIVX4 (X is S or Se) and AIIBIIICIIIS4 (A, B, and C are various chemical elements). They are promising in the search for new materials for magneto-optical memory elements, sensors, and anodes in sodium-ion batteries. The values of their crystal-lattice parameter "a" were estimated. Only the values of chemical-element properties were used in the predictions. The calculations were carried out with the machine learning programs of the information-analytical system developed by the authors (various ensembles of algorithms: binary decision trees, the linear machine, search for logical regularities of classes, the support vector machine, Fisher's linear discriminant, k-nearest neighbors, and training of a multilayer perceptron and a neural network) for predicting chalcospinels not yet obtained, together with the extensive family of regression methods in the scikit-learn package for Python and the multilevel machine learning methods proposed by the authors for estimating the lattice parameter of the new chalcospinels. The prediction accuracy for new chalcospinels, according to the cross-validation results, is not lower than 80%, and the accuracy of predicting their crystal-lattice parameter, measured by the mean absolute error under leave-one-out cross-validation, is ±0.1 Å. The effectiveness of multilevel machine learning methods for predicting the physical properties of substances was shown.
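The lattice-parameter evaluation protocol named above (a scikit-learn regressor scored by mean absolute error under leave-one-out cross-validation) can be sketched as follows. The descriptors, the target, and the gradient-boosting regressor are synthetic stand-ins, not the authors' element-property features or multilevel methods.

```python
# Sketch: leave-one-out MAE for a regression on the lattice
# parameter "a". Data are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))   # element-property descriptors (synthetic)
# Lattice parameter in angstroms: a weak linear signal plus noise
a = 10.0 + 0.3 * X[:, 0] + rng.normal(scale=0.05, size=60)

# Each sample is predicted by a model trained on all other samples
pred = cross_val_predict(GradientBoostingRegressor(random_state=0),
                         X, a, cv=LeaveOneOut())
mae = mean_absolute_error(a, pred)
print(f"leave-one-out MAE: {mae:.3f} Å")
```

Leave-one-out is a natural choice for small materials datasets like this one, since it uses all but one sample for training in every fold.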


Risks ◽  
2021 ◽  
Vol 9 (7) ◽  
pp. 133
Author(s):  
Andrey Koltays ◽  
Anton Konev ◽  
Alexander Shelupanov

The need to assess the risks of the trustworthiness of counterparties increases every year. Growing numbers of cases of unfair behavior among counterparties only confirm the relevance of this topic. Existing work in the field of information and economic security does not provide a sound methodology for a comprehensive study and an adequate assessment of a counterparty (for example, a developer company) in the field of software design and development. The purpose of this work is to assess the risks of a counterparty's trustworthiness in the context of the digital transformation of the economy, which in turn will reduce the risk of offenses and crimes that threaten the security of organizations. This article discusses the main methods used in the construction of a mathematical model for assessing the trustworthiness of a counterparty. The main difficulties in assessing the accuracy and completeness of the model are identified, and the use of cross-validation to overcome them is described. The developed model, using machine learning methods, gives an accurate result with a small number of compared counterparties, which corresponds to the scale at which a counterparty is checked in a real system. The results of calculations with this model show that machine learning methods can be used to assess the risks of counterparty trustworthiness.


2021 ◽  
Vol 27 (3) ◽  
pp. 189-199
Author(s):  
Ilias Tougui ◽  
Abdelilah Jilbab ◽  
Jamal El Mhamdi

Objectives: With advances in data availability and computing capabilities, artificial intelligence and machine learning technologies have evolved rapidly in recent years. Researchers have taken advantage of these developments in healthcare informatics and created reliable tools to predict or classify diseases using machine learning-based algorithms. To correctly quantify the performance of those algorithms, the standard approach is to use cross-validation, where the algorithm is trained on a training set and its performance is measured on a validation set. Both datasets should be subject-independent to simulate the expected behavior of a clinical study. This study compares two cross-validation strategies, the subject-wise and the record-wise techniques; the subject-wise strategy correctly mimics the process of a clinical study, while the record-wise strategy does not.

Methods: We started by creating a dataset of smartphone audio recordings of subjects diagnosed with and without Parkinson's disease. This dataset was then divided into training and holdout sets using subject-wise and record-wise divisions. The training set was used to measure the performance of two classifiers (support vector machine and random forest) and to compare six cross-validation techniques that simulated either the subject-wise process or the record-wise process. The holdout set was used to calculate the true error of the classifiers.

Results: The record-wise division and the record-wise cross-validation techniques overestimated the performance of the classifiers and underestimated the classification error.

Conclusions: In a diagnostic scenario, the subject-wise technique is the proper way of estimating a model's performance, and record-wise techniques should be avoided.
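The leakage effect the abstract describes can be reproduced in a few lines: when several records per subject carry a stable subject "fingerprint", record-wise splitting lets the model recognize subjects rather than the disease. This sketch uses synthetic data and a random-forest classifier as assumptions; it is not the study's audio pipeline.

```python
# Sketch: record-wise CV (KFold over records) vs subject-wise CV
# (GroupKFold over subjects) on synthetic data with per-subject
# fingerprints but no true label signal in the features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, KFold, cross_val_score

rng = np.random.default_rng(0)
n_subjects, records_per_subject = 20, 10
subject_ids = np.repeat(np.arange(n_subjects), records_per_subject)
labels = np.repeat(rng.integers(0, 2, n_subjects), records_per_subject)

# Features = a stable per-subject offset plus small record noise;
# they identify the subject but carry no diagnostic information.
subject_offset = rng.normal(size=(n_subjects, 5))[subject_ids]
X = subject_offset + rng.normal(scale=0.3, size=(len(labels), 5))

clf = RandomForestClassifier(random_state=0)
record_wise = cross_val_score(
    clf, X, labels, cv=KFold(5, shuffle=True, random_state=0)).mean()
subject_wise = cross_val_score(
    clf, X, labels, cv=GroupKFold(5), groups=subject_ids).mean()
print(f"record-wise: {record_wise:.2f}  subject-wise: {subject_wise:.2f}")
```

The record-wise score is inflated because records from the same subject appear in both training and validation folds; the subject-wise score stays near chance, which is the honest estimate here.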


2017 ◽  
Vol 53 (1) ◽  
pp. 77-86 ◽  
Author(s):  
Sheshadri Iyengar Raghavan Bhagyashree ◽  
Kiran Nagaraj ◽  
Martin Prince ◽  
Caroline H. D. Fall ◽  
Murali Krishna
