Recognition of Sedentary Behavior by Machine Learning Analysis of Wearable Sensors during Activities of Daily Living for Telemedical Assessment of Cardiovascular Risk

Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3219 ◽  
Author(s):  
Eliasz Kańtoch

With the recent advancement in wearable computing, sensor technologies, and data processing approaches, it is possible to develop smart clothing that integrates sensors into garments. The main objective of this study was to develop a method for the automatic recognition of sedentary behavior related to cardiovascular risk, based on quantitative measurement of physical activity. The solution is based on a designed prototype of a smart shirt equipped with a processor, wearable sensors, a power supply, and a telemedical interface. The data derived from the wearable sensors were used to create a feature vector that consisted of an estimate of the user-specific relative intensity and the variance of the filtered accelerometer data. The method was validated using an experimental protocol which was designed to be safe for the elderly and was based on the clinically validated short physical performance battery (SPPB) test tasks. To obtain the recognition model, six classifiers were examined and compared: Linear Discriminant Analysis, Support Vector Machines, K-Nearest Neighbors, Naive Bayes, Binary Decision Trees, and Artificial Neural Networks. The classification models were able to identify sedentary behavior with an accuracy of 95.00% ± 2.11%. Experimental results suggested that high accuracy can be obtained by estimating the sedentary behavior pattern using the smart shirt and a machine learning approach. The main advantage of the developed method is its ability to continuously monitor patient activities in a free-living environment, and it could potentially be used for early detection of increased cardiovascular risk.
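
A minimal sketch of this kind of pipeline is shown below (it is not the authors' code): it extracts the two feature types named in the abstract, a user-specific relative intensity estimate and the variance of the filtered accelerometer signal, and compares the six listed classifiers with scikit-learn. The sampling rate, filter settings, and placeholder data are assumptions.

```python
# Illustrative sketch (assumed, not the authors' code): simple accelerometer
# features and a comparison of the six classifiers named in the abstract.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

FS = 50  # assumed sampling rate (Hz)

def extract_features(acc_window, resting_intensity):
    """Variance of the low-pass-filtered acceleration magnitude plus a
    user-specific relative intensity estimate (hypothetical definitions)."""
    b, a = butter(4, 5 / (FS / 2), btype="low")
    magnitude = np.linalg.norm(acc_window, axis=1)
    filtered = filtfilt(b, a, magnitude)
    relative_intensity = filtered.mean() / resting_intensity
    return np.array([relative_intensity, filtered.var()])

# Placeholder 5-second windows standing in for real smart-shirt recordings.
rng = np.random.default_rng(0)
windows = [rng.normal(0, s, (FS * 5, 3)) for s in (0.1,) * 50 + (1.0,) * 50]
X = np.vstack([extract_features(w, resting_intensity=1.0) for w in windows])
y = np.array([1] * 50 + [0] * 50)  # 1 = sedentary (low movement), 0 = active

classifiers = {
    "LDA": LinearDiscriminantAnalysis(),
    "SVM": SVC(),
    "kNN": KNeighborsClassifier(),
    "Naive Bayes": GaussianNB(),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "ANN": MLPClassifier(max_iter=1000, random_state=0),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: {scores.mean():.2%} +/- {scores.std():.2%}")
```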

Author(s):  
Nishanth P

Falls are a leading cause of death and are common among the elderly. According to the World Health Organization (WHO), 3 out of 10 elderly people aged 65 and over who live alone tend to fall, and this rate may rise in the coming years. In recent years, the safety of elderly residents living alone has received increased attention in a number of countries. Fall detection systems based on wearable sensors and IoT technology emerged as an early indicator of falls, but they have drawbacks, including high intrusiveness, low accuracy, and poor reliability. This work describes a fall detection approach that does not rely on wearable sensors and is instead based on machine learning and image analysis in Python. The camera's high-frequency images are fed to a network that uses a Convolutional Neural Network to identify the key points of the human body, and a Support Vector Machine then classifies the extracted features to detect a fall. Relatives are notified via mobile message. Rather than modelling individual activities, we use both motion and context information to recognize activities in a scene. This is based on the notion that actions that are spatially and temporally connected rarely occur alone and can serve as context for one another. We propose a hierarchical representation of action segments and activities using a two-layer random field model. The model allows for the simultaneous integration of motion and a variety of context features at multiple levels, as well as the automatic learning of statistics that capture the patterns of these features.
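
The abstract does not name a specific pose-estimation library, so the sketch below only illustrates the second stage it describes: feeding precomputed pose keypoints into an SVM to label a frame as fall or no fall. The keypoint layout, feature definitions, and placeholder data are assumptions.

```python
# Illustrative sketch (assumed, not the paper's code): classifying pose keypoints
# with an SVM to label a frame as "fall" or "no fall".
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

N_KEYPOINTS = 17  # assumed COCO-style skeleton from the CNN pose estimator

def keypoints_to_features(keypoints):
    """Flatten (x, y) keypoints and add a simple geometry cue: the bounding-box
    aspect ratio, which tends to drop sharply when a person is lying down."""
    xs, ys = keypoints[:, 0], keypoints[:, 1]
    width, height = xs.max() - xs.min(), ys.max() - ys.min()
    aspect_ratio = height / (width + 1e-6)
    return np.concatenate([keypoints.ravel(), [aspect_ratio]])

# Placeholder data standing in for keypoints produced by the CNN on video frames.
rng = np.random.default_rng(1)
standing = rng.uniform([0.4, 0.0], [0.6, 1.0], (200, N_KEYPOINTS, 2))  # tall, narrow
fallen = rng.uniform([0.0, 0.6], [1.0, 0.8], (200, N_KEYPOINTS, 2))    # wide, flat
X = np.array([keypoints_to_features(k) for k in np.vstack([standing, fallen])])
y = np.array([0] * 200 + [1] * 200)  # 0 = no fall, 1 = fall

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test), target_names=["no fall", "fall"]))
```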


2019 ◽  
Vol 20 (5) ◽  
pp. 488-500 ◽  
Author(s):  
Yan Hu ◽  
Yi Lu ◽  
Shuo Wang ◽  
Mengying Zhang ◽  
Xiaosheng Qu ◽  
...  

Background: Globally, the number of cancer patients and deaths continues to increase yearly, and cancer has therefore become one of the world's leading causes of morbidity and mortality. In recent years, the study of anticancer drugs has become one of the most popular medical topics. Objective: In this review, in order to study the application of machine learning in predicting anticancer drug activity, several machine learning approaches, such as Linear Discriminant Analysis (LDA), Principal Component Analysis (PCA), Support Vector Machine (SVM), Random Forest (RF), k-Nearest Neighbor (kNN), and Naïve Bayes (NB), were selected, and examples of their applications in anticancer drug design are listed. Results: Machine learning contributes substantially to anticancer drug design and helps researchers save time and cost; however, it can only be an assisting tool for drug design. Conclusion: This paper introduces the application of machine learning approaches in anticancer drug design. Many examples of success in identification and prediction of anticancer drug activity are discussed, and research on anticancer drugs remains in active progress. Moreover, the merits of some web servers related to anticancer drugs are mentioned.


2021 ◽  
Author(s):  
Zhong Zhao ◽  
Haiming Tang ◽  
Xiaobin Zhang ◽  
Xingda Qu ◽  
Jianping Lu

BACKGROUND Abnormal gaze behavior is a prominent feature of autism spectrum disorder (ASD). Previous eye tracking studies had participants watch images (i.e., pictures, videos, and webpages), and the application of machine learning (ML) to these data showed promising results in identifying individuals with ASD. However, given that gaze behavior in face-to-face interaction differs from that in image-viewing tasks, no study has investigated whether natural social gaze behavior could accurately identify ASD. OBJECTIVE The objective of this study was to examine whether, and which, area of interest (AOI)-based features extracted from natural social gaze behavior could identify ASD. METHODS Children with ASD and children with typical development (TD) were eye-tracked while they were engaged in a face-to-face conversation with an interviewer. Four ML classifiers (support vector machine, SVM; linear discriminant analysis, LDA; decision tree, DT; and random forest, RF) were used to determine the maximum classification accuracy and the corresponding features. RESULTS A maximum classification accuracy of 84.62% was achieved with three classifiers (LDA, DT, and RF). Results showed that the mouth AOI, but not the eyes AOI, was a powerful feature for detecting ASD. CONCLUSIONS Natural gaze behavior could be leveraged to identify ASD, suggesting that ASD might be objectively screened with eye tracking technology in everyday social interaction. In addition, the comparison between our findings and previous ones suggests that the eye tracking features that can identify ASD might be culture dependent and context sensitive.
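
As a rough illustration (not the study's code), the sketch below compares the four named classifiers on AOI-based gaze features such as the proportion of looking time on each AOI, using leave-one-out cross-validation; the feature definitions, group sizes, and placeholder data are assumptions.

```python
# Illustrative sketch (assumed): comparing the four classifiers named in the
# abstract on AOI-based gaze features (proportion of looking time per AOI).
import numpy as np
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Hypothetical per-child features: looking-time proportions on eyes, mouth,
# rest-of-face, and body AOIs (placeholder distributions).
rng = np.random.default_rng(2)
n_asd, n_td = 20, 19
X_asd = rng.dirichlet([2, 1, 3, 4], n_asd)
X_td = rng.dirichlet([4, 3, 3, 2], n_td)
# Drop one proportion so the features are not perfectly collinear.
X = np.vstack([X_asd, X_td])[:, :3]
y = np.array([1] * n_asd + [0] * n_td)  # 1 = ASD, 0 = TD

classifiers = {
    "SVM": SVC(),
    "LDA": LinearDiscriminantAnalysis(),
    "DT": DecisionTreeClassifier(random_state=0),
    "RF": RandomForestClassifier(random_state=0),
}
for name, clf in classifiers.items():
    acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
    print(f"{name}: leave-one-out accuracy = {acc:.2%}")
```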


2021 ◽  
Author(s):  
Chen Bai ◽  
Yu-Peng Chen ◽  
Adam Wolach ◽  
Lisa Anthony ◽  
Mamoun Mardini

BACKGROUND Frequent spontaneous facial self-touches, predominantly during outbreaks, have the theoretical potential to be a mechanism for contracting and transmitting diseases. Despite the recent advent of vaccines, behavioral approaches remain an integral part of reducing the spread of COVID-19 and other respiratory illnesses. Real-time biofeedback on face touching can potentially mitigate the spread of respiratory diseases. The gap addressed in this study is the lack of an on-demand platform that utilizes motion data from smartwatches to accurately detect face touching. OBJECTIVE The aim of this study was to utilize the functionality and widespread adoption of smartwatches to develop a smartwatch application that identifies motion signatures accurately mapped to face touching. METHODS Participants (n=10, 50% women, aged 20-83) performed 10 physical activities classified into face-touching (FT) and non-face-touching (NFT) categories in a standardized laboratory setting. We developed a smartwatch application on the Samsung Galaxy Watch to collect raw accelerometer data from participants. Then, data features were extracted from consecutive non-overlapping windows varying from 2 to 16 seconds. We examined the performance of state-of-the-art machine learning methods on face-touching movement recognition (FT vs. NFT) and individual activity recognition (IAR): logistic regression, support vector machine, decision trees, and random forest. RESULTS Machine learning models were accurate in recognizing face-touching categories; logistic regression achieved the best performance across all metrics (Accuracy: 0.93 +/- 0.08, Recall: 0.89 +/- 0.16, Precision: 0.93 +/- 0.08, F1-score: 0.90 +/- 0.11, AUC: 0.95 +/- 0.07) at a window size of 5 seconds. IAR models resulted in lower performance; the random forest classifier achieved the best performance across all metrics (Accuracy: 0.70 +/- 0.14, Recall: 0.70 +/- 0.14, Precision: 0.70 +/- 0.16, F1-score: 0.67 +/- 0.15) at a window size of 9 seconds. CONCLUSIONS Wearable devices, powered by machine learning, are effective in detecting facial touches. This is highly significant during respiratory infection outbreaks, as it has great potential to discourage people from touching their faces and thereby mitigate the transmission of COVID-19 and future respiratory diseases.
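
A minimal sketch of the windowing-and-classification step described above (assumed, not the study's code) is given below: raw accelerometer streams are cut into non-overlapping windows, per-axis summary statistics are extracted, and the logistic regression classifier reported as best for FT vs. NFT is evaluated; the sampling rate, feature set, and placeholder data are assumptions.

```python
# Illustrative sketch (assumed): segmenting raw accelerometer data into
# non-overlapping windows, extracting simple features, and classifying FT vs. NFT.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate

FS = 50             # assumed smartwatch sampling rate (Hz)
WINDOW_SECONDS = 5  # the window size reported as best for FT vs. NFT

def window_features(acc, fs=FS, seconds=WINDOW_SECONDS):
    """Split a (n_samples, 3) accelerometer stream into non-overlapping windows
    and compute per-axis mean and standard deviation for each window."""
    step = fs * seconds
    feats = []
    for i in range(len(acc) // step):
        w = acc[i * step:(i + 1) * step]
        feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0)]))
    return np.array(feats)

# Placeholder streams standing in for the FT and NFT recordings.
rng = np.random.default_rng(3)
ft_stream = rng.normal(0.5, 0.3, (FS * 300, 3))   # 5 min of face-touching-like motion
nft_stream = rng.normal(0.0, 0.1, (FS * 300, 3))  # 5 min of other activity

X = np.vstack([window_features(ft_stream), window_features(nft_stream)])
y = np.array([1] * (X.shape[0] // 2) + [0] * (X.shape[0] // 2))  # 1 = FT, 0 = NFT

scores = cross_validate(LogisticRegression(max_iter=1000), X, y, cv=5,
                        scoring=("accuracy", "recall", "precision", "f1", "roc_auc"))
for metric in ("accuracy", "recall", "precision", "f1", "roc_auc"):
    vals = scores["test_" + metric]
    print(f"{metric}: {vals.mean():.2f} +/- {vals.std():.2f}")
```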


2020 ◽  
Author(s):  
Nazrul Anuar Nayan ◽  
Hafifah Ab Hamid ◽  
Mohd Zubir Suboh ◽  
Noraidatulakma Abdullah ◽  
Rosmina Jaafar ◽  
...  

Background: Cardiovascular disease (CVD) is the leading cause of death worldwide. In 2017, CVD contributed to 13,503 deaths in Malaysia. The current approaches for CVD prediction are usually invasive and costly. Machine learning (ML) techniques allow an accurate prediction by utilizing the complex interactions among relevant risk factors. Results: This study presents a case–control study involving 60 participants from The Malaysian Cohort, a prospective population-based project. Five parameters, namely, the R–R interval and root mean square of successive differences extracted from the electrocardiogram (ECG), systolic and diastolic blood pressures, and total cholesterol level, were statistically significant in predicting CVD. Six ML algorithms, namely, linear discriminant analysis, linear and quadratic support vector machines, decision tree, k-nearest neighbor, and artificial neural network (ANN), were evaluated to determine the most accurate classifier for predicting CVD risk. ANN, which achieved 90% specificity, 90% sensitivity, and 90% accuracy, demonstrated the highest prediction performance among the six algorithms. Conclusions: In summary, by utilizing ML techniques, ECG data can serve as a good parameter for CVD prediction among the Malaysian multiethnic population.
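
The sketch below (assumed, not the authors' code) trains a small neural network, standing in for the ANN reported as best, on the five parameters named in the abstract and reports sensitivity, specificity, and accuracy; the placeholder values and network size are assumptions.

```python
# Illustrative sketch (assumed): a small neural network on the five parameters
# named in the abstract, with sensitivity and specificity from a confusion matrix.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

FEATURES = ["RR_interval", "RMSSD", "systolic_BP", "diastolic_BP", "total_cholesterol"]

# Placeholder case-control data (30 CVD cases, 30 controls) standing in for the
# cohort measurements; the values are invented for illustration only.
rng = np.random.default_rng(4)
cases = rng.normal([0.75, 20, 150, 95, 6.0], [0.1, 8, 15, 10, 1.0], (30, 5))
controls = rng.normal([0.85, 35, 120, 80, 4.8], [0.1, 10, 12, 8, 0.8], (30, 5))
X = np.vstack([cases, controls])
y = np.array([1] * 30 + [0] * 30)  # 1 = CVD, 0 = control

model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=4))
y_pred = cross_val_predict(model, X, y, cv=5)
tn, fp, fn, tp = confusion_matrix(y, y_pred).ravel()
print(f"sensitivity = {tp / (tp + fn):.2%}, specificity = {tn / (tn + fp):.2%}, "
      f"accuracy = {(tp + tn) / len(y):.2%}")
```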


Author(s):  
S. R. Mani Sekhar ◽  
G. M. Siddesh

Machine learning is one of the important areas in the field of computer science. It helps provide optimized solutions to real-world problems by using past knowledge or previous experience data. There are different types of machine learning algorithms in computer science. This chapter provides an overview of selected machine learning algorithms such as linear regression, linear discriminant analysis, support vector machine, naive Bayes classifier, neural networks, and decision trees. Each of these methods is illustrated in detail with an example and R code, which in turn assists readers in generating their own solutions to the given problems.
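
As a minimal Python analogue of the kind of worked example the chapter gives in R, the sketch below fits several of the listed algorithms on a standard dataset and reports cross-validated accuracy; linear regression, being a regression method, would be fit the same way on a continuous target.

```python
# A minimal Python analogue (the chapter's own examples use R) of fitting
# several of the listed algorithms on a standard classification dataset.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
models = {
    "Linear discriminant analysis": LinearDiscriminantAnalysis(),
    "Support vector machine": SVC(),
    "Naive Bayes": GaussianNB(),
    "Neural network": make_pipeline(StandardScaler(),
                                    MLPClassifier(max_iter=2000, random_state=0)),
    "Decision tree": DecisionTreeClassifier(random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {acc:.2%} cross-validated accuracy")
```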


2020 ◽  
Vol 9 (4) ◽  
pp. 252 ◽  
Author(s):  
Kwanele Phinzi ◽  
Dávid Abriha ◽  
László Bertalan ◽  
Imre Holb ◽  
Szilárd Szabó

Gullies reduce both the quality and quantity of productive land, posing a serious threat to sustainable agriculture and, hence, food security. Machine Learning (ML) algorithms are essential tools in the identification of gullies and can assist in strategic decision-making relevant to soil conservation. Nevertheless, accurate identification of gullies is a function of the selected ML algorithms, the image, and the number of classes used, i.e., binary (two classes) or multiclass. We applied Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), and Random Forest (RF) to a Systeme Pour l'Observation de la Terre (SPOT-7) image to extract gullies and investigated whether the multiclass (m) approach can offer better classification accuracy than the binary (b) approach. Using repeated k-fold cross-validation, we generated 36 models. Our findings revealed that, of these models, both RFb (98.70%) and SVMm (98.01%) outperformed LDA in terms of overall accuracy (OA). However, LDAb (99.51%) recorded the highest producer's accuracy (PA) but had a low corresponding user's accuracy (UA) of 18.5%. The binary approach was generally better than the multiclass approach; however, at the class level, the multiclass approach outperformed the binary approach in gully identification. Despite its low spectral resolution, the pan-sharpened SPOT-7 product successfully identified gullies. The proposed methodology is relatively simple but practically sound, and can be used to monitor gullies within and beyond the study region.
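
The sketch below (assumed, not the study's code) illustrates the repeated k-fold cross-validation used to compare LDA, SVM, and RF for the binary gully vs. non-gully case; the per-pixel band values and labels are placeholders, since the SPOT-7 data are not distributed with the abstract.

```python
# Illustrative sketch (assumed): comparing LDA, SVM, and RF with repeated k-fold
# cross-validation on per-pixel spectral features for the binary approach.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

# Placeholder pixels: four pan-sharpened band values per sample (invented values).
rng = np.random.default_rng(5)
gully = rng.normal([0.35, 0.30, 0.28, 0.25], 0.05, (300, 4))
non_gully = rng.normal([0.20, 0.25, 0.30, 0.45], 0.05, (300, 4))
X = np.vstack([gully, non_gully])
y = np.array([1] * 300 + [0] * 300)  # binary approach: 1 = gully, 0 = non-gully

cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=5)
for name, clf in {"LDA": LinearDiscriminantAnalysis(),
                  "SVM": SVC(),
                  "RF": RandomForestClassifier(random_state=5)}.items():
    scores = cross_val_score(clf, X, y, cv=cv)
    print(f"{name}: overall accuracy = {scores.mean():.2%} +/- {scores.std():.2%}")
```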


2019 ◽  
Vol 11 (10) ◽  
pp. 1195 ◽  
Author(s):  
Minsang Kim ◽  
Myung-Sook Park ◽  
Jungho Im ◽  
Seonyoung Park ◽  
Myong-In Lee

This study compared detection skill for tropical cyclone (TC) formation using models based on three different machine learning (ML) algorithms, namely decision trees (DT), random forest (RF), and support vector machines (SVM), and a model based on Linear Discriminant Analysis (LDA). Eight predictors were derived from WindSat satellite measurements of ocean surface wind and precipitation over the western North Pacific for 2005–2009. All of the ML approaches performed better, with significantly higher hit rates ranging from 94 to 96% compared with the LDA performance (~77%), although the false alarm rates of the ML models were slightly higher (21–28%) than that of LDA (~13%). In addition, the ML models could detect TC formation as early as 26–30 h before the time first diagnosed as a tropical depression in the JTWC best track, which was also 5 to 9 h earlier than LDA. The skill differences among the ML models were smaller than the difference between the ML models and LDA. Large yearly variation in forecast lead time was common to all models due to the limited sampling from the orbiting satellite. This study highlights that ML approaches provide improved skill for detecting TC formation compared with conventional linear approaches.
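
The sketch below (assumed, not the study's code) shows how a hit rate and false alarm rate can be computed from a binary contingency table for such a comparison; the eight placeholder predictors stand in for the WindSat-derived variables, and since the abstract does not specify which false-alarm convention is used, both common definitions are shown.

```python
# Illustrative sketch (assumed): hit rate and false alarm statistics for a
# TC-formation detection model, from a binary contingency table.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# Placeholder predictors standing in for the eight WindSat-derived variables
# (e.g., surface wind and precipitation statistics around each candidate system).
rng = np.random.default_rng(6)
developing = rng.normal(1.0, 1.0, (150, 8))      # systems that became TCs
non_developing = rng.normal(0.0, 1.0, (150, 8))  # systems that did not
X = np.vstack([developing, non_developing])
y = np.array([1] * 150 + [0] * 150)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=6)
clf = RandomForestClassifier(random_state=6).fit(X_train, y_train)
tn, fp, fn, tp = confusion_matrix(y_test, clf.predict(X_test)).ravel()

hit_rate = tp / (tp + fn)  # hits / (hits + misses), i.e., probability of detection
far = fp / (fp + tp)       # false-alarm ratio: false alarms / all positive forecasts
pofd = fp / (fp + tn)      # probability of false detection: false alarms / all negatives
print(f"hit rate = {hit_rate:.1%}, FAR = {far:.1%}, POFD = {pofd:.1%}")
```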


Sensors ◽  
2020 ◽  
Vol 20 (4) ◽  
pp. 1235 ◽ 
Author(s):  
Giuseppe Varone ◽  
Sara Gasparini ◽  
Edoardo Ferlazzo ◽  
Michele Ascoli ◽  
Giovanbattista Gaspare Tripodi ◽  
...  

The diagnosis of psychogenic nonepileptic seizures (PNES) by means of electroencephalography (EEG) is not a trivial task during clinical practice for neurologists. No clear PNES electrophysiological biomarker has yet been found, and the only tools available for diagnosis are video EEG monitoring with recording of a typical episode and the clinical history of the subject. In this paper, a data-driven machine learning (ML) pipeline for classifying EEG segments (i.e., epochs) of PNES patients and healthy controls (CNT) is introduced. This software pipeline consists of a semiautomatic signal processing technique and a supervised ML classifier to aid the clinical discriminative diagnosis of PNES by means of an EEG time series. In our ML pipeline, statistical features such as the mean, standard deviation, kurtosis, and skewness are extracted from a power spectral density (PSD) map split into five conventional EEG rhythms (delta, theta, alpha, beta, and the whole band, i.e., 1–32 Hz). Then, the feature vector is fed into three different supervised ML algorithms, namely, the support vector machine (SVM), linear discriminant analysis (LDA), and Bayesian network (BN), to perform EEG segment classification tasks for CNT vs. PNES. The performance of the pipeline algorithm was evaluated on a dataset of 20 EEG signals (10 PNES and 10 CNT) recorded in the eyes-closed resting condition at the Regional Epilepsy Centre, Great Metropolitan Hospital of Reggio Calabria, University of Catanzaro, Italy. The experimental results showed that PNES vs. CNT discrimination tasks performed via the ML algorithms and validated with random split (RS) achieved an average accuracy of 0.97 ± 0.013 (RS-SVM), 0.99 ± 0.02 (RS-LDA), and 0.82 ± 0.109 (RS-BN). Meanwhile, with leave-one-out (LOO) validation, an average accuracy of 0.98 ± 0.0233 (LOO-SVM), 0.98 ± 0.124 (LOO-LDA), and 0.81 ± 0.109 (LOO-BN) was achieved. Our findings showed that BN was outperformed by SVM and LDA. The promising results of the proposed software pipeline suggest that it may be a valuable tool to support existing clinical diagnosis.
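
The sketch below (assumed, not the authors' pipeline) illustrates the per-band PSD feature extraction the abstract describes, using Welch's method on a single channel; the sampling rate, band edges, and Welch parameters are assumptions.

```python
# Illustrative sketch (assumed): extracting the per-band PSD statistics described
# in the abstract from one EEG channel using Welch's method.
import numpy as np
from scipy.signal import welch
from scipy.stats import kurtosis, skew

FS = 256  # assumed EEG sampling rate (Hz)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 32), "whole": (1, 32)}

def epoch_features(epoch, fs=FS):
    """Mean, standard deviation, skewness, and kurtosis of the PSD within each
    EEG rhythm, concatenated into one feature vector per epoch."""
    freqs, psd = welch(epoch, fs=fs, nperseg=fs * 2)
    feats = []
    for low, high in BANDS.values():
        band = psd[(freqs >= low) & (freqs < high)]
        feats += [band.mean(), band.std(), skew(band), kurtosis(band)]
    return np.array(feats)

# Example: a 10-second placeholder epoch in place of a real EEG segment.
rng = np.random.default_rng(7)
epoch = rng.normal(0, 1, FS * 10)
print(epoch_features(epoch).shape)  # 5 bands x 4 statistics = (20,)
```

In a full pipeline, the feature vectors for all epochs would then be fed to the SVM, LDA, and BN classifiers and validated with random-split and leave-one-out schemes, as the abstract describes.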


Sensors ◽  
2020 ◽  
Vol 20 (6) ◽  
pp. 1557 ◽  
Author(s):  
Ilaria Conforti ◽  
Ilaria Mileti ◽  
Zaccaria Del Prete ◽  
Eduardo Palermo

Ergonomics evaluation through measurements of biomechanical parameters in real time has great potential for reducing non-fatal occupational injuries, such as work-related musculoskeletal disorders. Adopting a correct posture helps avoid high stress on the back and on the lower extremities, while an incorrect posture increases spinal stress. Here, we propose a solution for the recognition of postural patterns through wearable sensors and machine-learning algorithms fed with kinematic data. Twenty-six healthy subjects equipped with eight wireless inertial measurement units (IMUs) performed manual material handling tasks, such as lifting and releasing small loads, with two postural patterns: correct and incorrect. Kinematic parameters, such as the range of motion of the lower limb and lumbosacral joints, along with the displacement of the trunk with respect to the pelvis, were estimated from the IMU measurements through a biomechanical model. Statistical differences were found for all kinematic parameters between the correct and the incorrect postures (p < 0.01). Moreover, as the load weight increased in the lifting task, changes in hip and trunk kinematics were observed (p < 0.01). To automatically identify the two postures, a supervised machine-learning algorithm, a support vector machine, was trained, and an accuracy of 99.4% (specificity of 100%) was reached by using the measurements of all kinematic parameters as features. Meanwhile, an accuracy of 76.9% (specificity of 76.9%) was reached by using only the kinematic parameters related to the trunk body segment.
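
A minimal sketch of the classification step (assumed, not the study's code) is shown below: an SVM is trained on kinematic features such as joint ranges of motion and trunk-pelvis displacement, and accuracy and specificity are computed from the cross-validated confusion matrix; the feature values are placeholders since the IMU dataset is not included with the abstract.

```python
# Illustrative sketch (assumed): an SVM labelling a lifting trial as correct or
# incorrect posture from kinematic features, with accuracy and specificity.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

FEATURES = ["hip_ROM", "knee_ROM", "ankle_ROM", "lumbosacral_ROM", "trunk_pelvis_displacement"]

# Placeholder trials standing in for the IMU-derived kinematics (invented values).
rng = np.random.default_rng(8)
correct = rng.normal([90, 100, 25, 15, 0.05], [10, 10, 5, 5, 0.02], (100, 5))
incorrect = rng.normal([40, 30, 15, 45, 0.20], [10, 10, 5, 8, 0.05], (100, 5))
X = np.vstack([correct, incorrect])
y = np.array([0] * 100 + [1] * 100)  # 0 = correct posture, 1 = incorrect posture

model = make_pipeline(StandardScaler(), SVC())
y_pred = cross_val_predict(model, X, y, cv=10)
tn, fp, fn, tp = confusion_matrix(y, y_pred).ravel()
print(f"accuracy = {(tp + tn) / len(y):.1%}, specificity = {tn / (tn + fp):.1%}")
```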

