Applying machine learning in motor activity time series of depressed bipolar and unipolar patients

2020 ◽  
Author(s):  
Petter Jakobsen ◽  
Enrique Garcia-Ceja ◽  
Michael Riegler ◽  
Lena Antonsen Stabell ◽  
Tine Nordgreen ◽  
...  

Abstract Current practice of assessing mood episodes in affective disorders largely depends on subjective observations combined with semi-structured clinical rating scales. Motor activity is an objective observation of the inner physiological state expressed in behavior patterns. Alterations of motor activity are essential features of bipolar and unipolar depression. The aim was to investigate whether objective measures of motor activity can aid existing diagnostic practice, by applying machine-learning techniques to analyze activity patterns in depressed patients and healthy controls. Random Forest, Deep Neural Network and Convolutional Neural Network algorithms were used to analyze 14 days of actigraph-recorded motor activity from 23 depressed patients and 32 healthy controls. Statistical features analyzed in the dataset were mean activity, standard deviation of mean activity and proportion of zero activity. Various techniques to handle data imbalance were applied, and to ensure generalizability and avoid overfitting, a Leave-One-User-Out validation strategy was utilized. All outcomes are reported as measures of accuracy for binary tests. A Deep Neural Network combined with a random oversampling class-balancing technique performed best, with a true positive rate of 0.82 (sensitivity) and a true negative rate of 0.84 (specificity). Accuracy was 0.84 and the Matthews Correlation Coefficient 0.65. Misclassifications appear related to overlapping data among the classes, so an appropriate future approach will be to compare mood states within individuals. In summary, machine-learning techniques show promising ability to discriminate between depressed patients and healthy controls in motor activity time series.
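For illustration only (not the authors' code), the leave-one-user-out validation and random-oversampling steps described in the abstract could be sketched roughly as below, using scikit-learn; the arrays `X`, `y` (0 = control, 1 = depressed) and `groups` (user IDs) are hypothetical placeholders for the per-user feature table of mean activity, standard deviation of activity and proportion of zeros.

```python
# Minimal sketch, assuming numpy feature arrays X, labels y and user IDs `groups`.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.utils import resample
from sklearn.metrics import confusion_matrix, matthews_corrcoef

def oversample(X_train, y_train, seed=0):
    """Randomly duplicate minority-class rows until classes are balanced."""
    classes, counts = np.unique(y_train, return_counts=True)
    n_max = counts.max()
    parts_X, parts_y = [], []
    for c in classes:
        Xc, yc = X_train[y_train == c], y_train[y_train == c]
        Xc, yc = resample(Xc, yc, replace=True, n_samples=n_max, random_state=seed)
        parts_X.append(Xc)
        parts_y.append(yc)
    return np.vstack(parts_X), np.concatenate(parts_y)

def leave_one_user_out(X, y, groups):
    """Train on all users but one, test on the held-out user, repeat."""
    y_true, y_pred = [], []
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
        scaler = StandardScaler().fit(X[train_idx])
        X_tr, y_tr = oversample(scaler.transform(X[train_idx]), y[train_idx])
        clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
        clf.fit(X_tr, y_tr)
        y_true.extend(y[test_idx])
        y_pred.extend(clf.predict(scaler.transform(X[test_idx])))
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "mcc": matthews_corrcoef(y_true, y_pred)}
```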

PLoS ONE ◽  
2020 ◽  
Vol 15 (8) ◽  
pp. e0231995

Author(s):  
Kayalvizhi S. ◽  
Thenmozhi D.

Catch phrases are key phrases that concisely capture the content of a document; they represent the context of the whole document. They can also be used by judges and lawyers to retrieve relevant prior cases, helping to assure justice in the domain of law. Currently, catch phrases are extracted using statistical methods, machine learning techniques, and deep learning techniques. The authors propose a sequence-to-sequence (Seq2Seq) deep neural network to extract catch phrases from legal documents. They employ several layers, namely an embedding layer, an encoder-decoder layer, a projection layer, and a loss layer, to build the deep neural network. The methodology is evaluated on the IRLeD@FIRE-2017 dataset, and the method obtains mean average precision and recall scores of 0.787 and 0.607, respectively. Results show that the proposed method outperforms the existing systems.
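A rough sketch (not the authors' implementation) of a Seq2Seq network with the layers named in the abstract — embedding, encoder-decoder, projection, and a cross-entropy loss — might look like the following; vocabulary sizes and dimensions are placeholder assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

src_vocab, tgt_vocab, emb_dim, hidden = 20000, 20000, 128, 256  # assumed sizes

# Encoder: embeds the legal document tokens and summarizes them into a state.
enc_in = layers.Input(shape=(None,), dtype="int32")
enc_emb = layers.Embedding(src_vocab, emb_dim, mask_zero=True)(enc_in)
_, h, c = layers.LSTM(hidden, return_state=True)(enc_emb)

# Decoder: generates catch-phrase tokens conditioned on the encoder state.
dec_in = layers.Input(shape=(None,), dtype="int32")
dec_emb = layers.Embedding(tgt_vocab, emb_dim, mask_zero=True)(dec_in)
dec_out = layers.LSTM(hidden, return_sequences=True)(dec_emb, initial_state=[h, c])

# Projection layer: maps decoder states to a distribution over the target vocabulary.
logits = layers.Dense(tgt_vocab, activation="softmax")(dec_out)

model = Model([enc_in, dec_in], logits)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```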


2019 ◽  
Vol 2019 (3) ◽  
pp. 191-209 ◽  
Author(s):  
Se Eun Oh ◽  
Saikrishna Sunkam ◽  
Nicholas Hopper

Abstract Recent advances in Deep Neural Network (DNN) architectures have received a great deal of attention due to their ability to outperform state-of-the-art machine learning techniques across a wide range of applications, as well as to automate the feature engineering process. In this paper, we broadly study the applicability of deep learning to website fingerprinting. First, we show that unsupervised DNNs can generate low-dimensional informative features that improve the performance of state-of-the-art website fingerprinting attacks. Second, when used as classifiers, we show that they can exceed the performance of existing attacks across a range of application scenarios, including fingerprinting Tor website traces, fingerprinting search engine queries over Tor, defeating fingerprinting defenses, and fingerprinting TLS-encrypted websites. Finally, we investigate which site-level features of a website influence its fingerprintability by DNNs.
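Purely as an illustration of the unsupervised feature-extraction idea (not the paper's code), an autoencoder can compress a fixed-length traffic trace into a low-dimensional vector that is then fed to a conventional classifier; the trace length, latent dimension and variable names below are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
from sklearn.ensemble import RandomForestClassifier

trace_len, latent_dim = 5000, 32  # assumed packet-direction sequence length, feature size

inp = layers.Input(shape=(trace_len,))
encoded = layers.Dense(512, activation="relu")(inp)
encoded = layers.Dense(latent_dim, activation="relu")(encoded)   # low-dimensional features
decoded = layers.Dense(512, activation="relu")(encoded)
decoded = layers.Dense(trace_len, activation="linear")(decoded)  # reconstruction

autoencoder = Model(inp, decoded)
encoder = Model(inp, encoded)
autoencoder.compile(optimizer="adam", loss="mse")

# Hypothetical usage on placeholder arrays:
# autoencoder.fit(X_unlabeled, X_unlabeled, epochs=30, batch_size=64)
# features = encoder.predict(X_labeled)
# RandomForestClassifier(n_estimators=500).fit(features, y_labeled)
```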


Circulation ◽  
2019 ◽  
Vol 140 (Suppl_2) ◽  
Author(s):  
Tomohisa Seki ◽  
Tomoyoshi Tamura ◽  
Kazuhiko Ohe ◽  
Masaru Suzuki

Background: Outcome prediction for patients with out-of-hospital cardiac arrest (OHCA) using prehospital information has been one of the major challenges in resuscitation medicine. Recently, machine learning techniques have been shown to be highly effective in predicting outcomes using clinical registries. In this study, we aimed to establish a prediction model for outcomes of OHCA of presumed cardiac cause using machine learning techniques. Methods: We analyzed data from the All-Japan Utstein Registry of the Fire and Disaster Management Agency between 2005 and 2016. Of 1,423,338 cases, data of OHCA patients aged ≥18 years with presumed cardiac etiology were retrieved and divided into two groups: a training set, n = 584,748 (2005 to 2013), and a test set, n = 223,314 (2014 to 2016). The endpoints were neurologic outcome at 1 month and survival at 1 month. Of 47 variables evaluated during the prehospital course, 19 variables (e.g., sex, age, ECG waveform, and performance of bystander CPR) were used for outcome prediction. The performance of logistic regression, random forest, and deep neural network models was examined in this study. Results: For prediction of neurologic outcomes (cerebral performance category 1 or 2) using the test set, the generated models showed area under the receiver operating characteristic curve (AUROC) values of 0.942 (95% confidence interval [CI] 0.941-0.943), 0.947 (95% CI 0.946-0.948), and 0.948 (95% CI 0.948-0.950) for logistic regression, random forest, and deep neural network, respectively. For survival prediction, the generated models showed AUROC values of 0.901 (95% CI 0.900-0.902), 0.913 (95% CI 0.912-0.914), and 0.912 (95% CI 0.911-0.913) for logistic regression, random forest, and deep neural network, respectively. Conclusions: Machine learning techniques using prehospital variables showed favorable prediction capability for 1-month neurologic outcome and survival in OHCA of presumed cardiac cause.
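A minimal sketch of the model comparison described above (not the registry pipeline itself): three classifiers trained on prehospital variables and compared by AUROC on a chronologically held-out test set; `X_train`, `X_test`, `y_train`, `y_test` are hypothetical placeholders for the Utstein-style feature table and binary outcomes.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=300, n_jobs=-1),
    "deep neural network": MLPClassifier(hidden_layer_sizes=(128, 64, 32), max_iter=500),
}

for name, model in models.items():
    model.fit(X_train, y_train)                # e.g. cases from 2005-2013
    proba = model.predict_proba(X_test)[:, 1]  # e.g. cases from 2014-2016
    print(name, "AUROC =", round(roc_auc_score(y_test, proba), 3))
```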


Vibration ◽  
2021 ◽  
Vol 4 (2) ◽  
pp. 341-356
Author(s):  
Jessada Sresakoolchai ◽  
Sakdirat Kaewunruen

Various techniques have been developed to detect railway defects. One of the popular techniques is machine learning. This unprecedented study applies deep learning, a branch of machine learning techniques, to detect and evaluate the severity of rail combined defects. The combined defects in the study are settlement and dipped joint. The features used to detect and evaluate the severity of the combined defects are axle box accelerations simulated using a verified rolling stock dynamic behavior simulation called D-Track. A total of 1650 simulations are run to generate numerical data. The deep learning techniques used in the study are deep neural network (DNN), convolutional neural network (CNN), and recurrent neural network (RNN). Simulated data are used in two ways: simplified data and raw data. Simplified data are used to develop the DNN model, while raw data are used to develop the CNN and RNN models. For simplified data, features are extracted from the raw data: the weight of rolling stock, the speed of rolling stock, and three peak and bottom accelerations from two wheels of rolling stock. In total, 14 features are used as simplified data for developing the DNN model. For raw data, time-domain accelerations are used directly to develop the CNN and RNN models without processing or data extraction. Hyperparameter tuning, performed via grid search, is used to ensure that the performance of each model is optimized. To detect the combined defects, the study proposes two approaches. The first approach uses one model to detect settlement and dipped joint, and the second approach uses two models to detect settlement and dipped joint separately. The results show that the CNN models of both approaches provide the same accuracy of 99%, so one model is sufficient to detect settlement and dipped joint. To evaluate the severity of the combined defects, the study applies classification and regression concepts. Classification is used to evaluate the severity by categorizing defects into light, medium, and severe classes, and regression is used to estimate the size of defects. From the study, the CNN model is suitable for evaluating dipped joint severity, with an accuracy of 84% and mean absolute error (MAE) of 1.25 mm, and the RNN model is suitable for evaluating settlement severity, with an accuracy of 99% and MAE of 1.58 mm.
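As a rough sketch of how a CNN could work directly on the raw acceleration windows (assumed shapes, not the authors' D-Track pipeline), a 1D convolutional network with two output heads can classify severity (light/medium/severe) and regress defect size at the same time.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

window_len, n_channels = 2000, 2   # assumed samples per window, two wheel channels

inp = layers.Input(shape=(window_len, n_channels))
x = layers.Conv1D(32, 9, activation="relu")(inp)
x = layers.MaxPooling1D(4)(x)
x = layers.Conv1D(64, 9, activation="relu")(x)
x = layers.GlobalAveragePooling1D()(x)
x = layers.Dense(64, activation="relu")(x)

severity = layers.Dense(3, activation="softmax", name="severity")(x)  # classification head
size_mm = layers.Dense(1, name="size_mm")(x)                          # regression head

model = Model(inp, [severity, size_mm])
model.compile(optimizer="adam",
              loss={"severity": "sparse_categorical_crossentropy", "size_mm": "mae"})
```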


2020 ◽  
Vol 8 (10) ◽  
pp. 766
Author(s):  
Dohan Oh ◽  
Julia Race ◽  
Selda Oterkus ◽  
Bonguk Koo

Mechanical damage is recognized as a problem that reduces the performance of oil and gas pipelines and has been the subject of continuous research. Artificial neural networks, which have recently been in the spotlight, are expected to offer another solution to problems relating to pipelines. A deep neural network, which is based on the artificial neural network algorithm and is one among various machine learning methods, is applied in this study. The applicability of machine learning techniques such as deep neural networks for predicting burst pressure has been investigated for dented API 5L X-grade pipelines. To this end, supervised learning is employed; the deep neural network model has four layers, three of which are hidden, and uses fully connected layers. The burst pressure computed by the deep neural network model was compared with the results of a finite element analysis based parametric study and with burst pressures calculated from experimental results, and showed good agreement. Therefore, it is concluded that deep neural networks can be another solution for predicting the burst pressure of API 5L X-grade dented pipelines.
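An illustrative sketch of the topology described above (four layers, three of them hidden, fully connected) follows; the input features (e.g. dent depth, pipe geometry, material grade) and layer widths are assumptions, not the authors' configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

n_features = 6  # assumed number of dent/geometry/material descriptors

model = models.Sequential([
    layers.Input(shape=(n_features,)),
    layers.Dense(64, activation="relu"),   # hidden layer 1
    layers.Dense(64, activation="relu"),   # hidden layer 2
    layers.Dense(32, activation="relu"),   # hidden layer 3
    layers.Dense(1),                       # output layer: burst pressure (regression)
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
# Hypothetical usage: model.fit(X_train, y_burst_pressure, epochs=200, validation_split=0.2)
```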


2018 ◽  
Vol 10 (1) ◽  
pp. 203 ◽  
Author(s):  
Xianming Dou ◽  
Yongguo Yang ◽  
Jinhui Luo

Approximating the complex nonlinear relationships that dominate the exchange of carbon dioxide fluxes between the biosphere and atmosphere is fundamentally important for addressing the issue of climate change. The progress of machine learning techniques has offered a number of useful tools for the scientific community aiming to gain new insights into the temporal and spatial variation of different carbon fluxes in terrestrial ecosystems. In this study, adaptive neuro-fuzzy inference system (ANFIS) and generalized regression neural network (GRNN) models were developed to predict the daily carbon fluxes in three boreal forest ecosystems based on eddy covariance (EC) measurements. Moreover, a comparison was made between the modeled values derived from these models and those of traditional artificial neural network (ANN) and support vector machine (SVM) models. These models were also compared with multiple linear regression (MLR). Several statistical indicators, including the coefficient of determination (R2), Nash-Sutcliffe efficiency (NSE), bias error (Bias) and root mean square error (RMSE), were utilized to evaluate the performance of the applied models. The results showed that the developed machine learning models were able to account for most of the variance in the carbon fluxes at both daily and hourly time scales in the three stands, and they consistently and substantially outperformed the MLR model for both daily and hourly carbon flux estimates. The ANFIS and ANN models provided similar estimates in the testing period, with approximate values of R2 = 0.93, NSE = 0.91, Bias = 0.11 g C m−2 day−1 and RMSE = 1.04 g C m−2 day−1 for daily gross primary productivity; 0.94, 0.82, 0.24 g C m−2 day−1 and 0.72 g C m−2 day−1 for daily ecosystem respiration; and 0.79, 0.75, 0.14 g C m−2 day−1 and 0.89 g C m−2 day−1 for daily net ecosystem exchange, and they slightly outperformed the GRNN and SVM models. In practical terms, however, the newly developed models (ANFIS and GRNN) are more robust and flexible, and have fewer parameters that need to be selected and optimized in comparison with traditional ANN and SVM models. Consequently, they can be used as valuable tools to estimate forest carbon fluxes and fill the missing carbon flux data during long-term EC measurements.
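A small sketch of the evaluation metrics named above (R2, NSE, Bias, RMSE), applicable to any model's predicted versus observed daily fluxes; `obs` and `pred` are placeholder arrays in g C m−2 day−1, and R2 is taken here as the squared correlation, one common convention.

```python
import numpy as np

def evaluate(obs, pred):
    """Return R2, NSE, Bias and RMSE for observed vs. predicted fluxes."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    r2 = np.corrcoef(obs, pred)[0, 1] ** 2                         # coefficient of determination
    nse = 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)  # Nash-Sutcliffe
    bias = np.mean(pred - obs)                                     # mean bias error
    rmse = np.sqrt(np.mean((pred - obs) ** 2))                     # root mean square error
    return {"R2": r2, "NSE": nse, "Bias": bias, "RMSE": rmse}
```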


2021 ◽  
Author(s):  
Hugo Abreu Mendes ◽  
João Fausto Lorenzato Oliveira ◽  
Paulo Salgado Gomes Mattos Neto ◽  
Alex Coutinho Pereira ◽  
Eduardo Boudoux Jatoba ◽  
...  

Within the context of clean energy generation, solar radiation forecasting is applied to photovoltaic plants to increase maintainability and reliability. Statistical time series models like ARIMA and machine learning techniques help to improve the results, and hybrid statistical + ML models are found in all sorts of time series forecasting applications. This work presents a new way to automate SARIMAX modeling by nesting PSO and ACO optimization algorithms; differently from R's AutoARIMA, it searches for the optimal seasonality parameter and the best combination of the available exogenous variables. The work also presents two distinct hybrid models that have MLPs as their main elements, with the architecture optimized by a genetic algorithm. The results were compared to LSTM, CLSTM, MMFF and NARNN-ARMAX topologies found in recent works. The results obtained for the presented models are promising for use in automatic radiation forecasting systems, since they outperformed the compared models on at least two metrics.
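A rough sketch of the inner step such a search automates (not the paper's code): fitting a SARIMAX model for one candidate seasonal period and one candidate subset of exogenous variables, and scoring it so an optimizer like PSO or ACO can compare candidates. The column names and the scoring criterion (AIC) are assumptions.

```python
from statsmodels.tsa.statespace.sarimax import SARIMAX

def score_candidate(y, exog_df, exog_subset, order, seasonal_period):
    """Fit one SARIMAX candidate and return a score for the optimizer to minimize."""
    exog = exog_df[list(exog_subset)] if exog_subset else None
    model = SARIMAX(y, exog=exog,
                    order=order,
                    seasonal_order=(1, 0, 1, seasonal_period),
                    enforce_stationarity=False,
                    enforce_invertibility=False)
    result = model.fit(disp=False)
    return result.aic  # lower is better

# Hypothetical usage:
# score_candidate(radiation_series, exog_df,
#                 exog_subset=("temperature", "humidity"),
#                 order=(1, 0, 1), seasonal_period=24)
```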


2020 ◽  
Vol 80 (8) ◽  
Author(s):  
Nana Cabo Bizet ◽  
Cesar Damian ◽  
Oscar Loaiza-Brito ◽  
Damián Kaloni Mayorga Peña ◽  
J. A. Montañez-Barrera

Abstract We consider Type IIB compactifications on an isotropic torus $$T^6$$ threaded by geometric and non-geometric fluxes. For this particular setup we apply supervised machine learning techniques, namely an artificial neural network coupled to a genetic algorithm, in order to obtain more than sixty thousand flux configurations yielding a scalar potential with at least one critical point. We observe that both stable AdS vacua with large moduli masses and small vacuum energy, as well as unstable dS vacua with small tachyonic mass and large energy, are absent, in accordance with the refined de Sitter conjecture. Moreover, by considering a hierarchy among fluxes, we observe that perturbative solutions with small values for the vacuum energy and moduli masses are favored, as well as scenarios in which the lightest modulus mass is much smaller than the corresponding AdS vacuum scale. Finally, we apply some results from random matrix theory to conclude that the most probable mass spectrum derived from this string setup is that satisfying the refined de Sitter and AdS scale conjectures.
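A highly schematic sketch (not the authors' code) of coupling a genetic algorithm to a neural-network fitness function: integer flux vectors are recombined and mutated, and a pre-trained network scores how likely each candidate is to yield a potential with a critical point. The encoding, dimensions and surrogate network below are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_fluxes, pop_size, n_generations = 12, 200, 50   # assumed dimensions

def fitness(candidate, surrogate_nn):
    """Score from an assumed pre-trained classifier estimating P(critical point)."""
    return float(surrogate_nn.predict_proba(candidate.reshape(1, -1))[0, 1])

def evolve(surrogate_nn):
    pop = rng.integers(-4, 5, size=(pop_size, n_fluxes))          # integer flux quanta
    for _ in range(n_generations):
        scores = np.array([fitness(c, surrogate_nn) for c in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]        # keep the fittest half
        cut = rng.integers(1, n_fluxes, size=pop_size // 2)
        children = np.array([np.concatenate([parents[i % len(parents)][:c],
                                             parents[(i + 1) % len(parents)][c:]])
                             for i, c in enumerate(cut)])         # one-point crossover
        mutate = rng.random(children.shape) < 0.05
        children[mutate] += rng.integers(-1, 2, size=mutate.sum())  # small mutations
        pop = np.vstack([parents, children])
    return pop
```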

