Sensor-Based Prediction of Mental Effort during Learning from Physiological Data: A Longitudinal Case Study

Signals ◽  
2021 ◽  
Vol 2 (4) ◽  
pp. 886-901
Author(s):  
Ankita Agarwal ◽  
Josephine Graft ◽  
Noah Schroeder ◽  
William Romine

Trackers for activity and physical fitness have become ubiquitous. Although recent work has demonstrated significant relationships between mental effort and physiological data such as skin temperature, heart rate, and electrodermal activity, we have yet to demonstrate their efficacy for the forecasting of mental effort such that a useful mental effort tracker can be developed. Given prior difficulty in extracting relationships between mental effort and physiological responses that are repeatable across individuals, we make the case that fusing self-report measures with physiological data within an internet or smartphone application may provide an effective method for training a useful mental effort tracking system. In this case study, we utilized over 90 h of data from a single participant over the course of a college semester. By fusing the participant’s self-reported mental effort in different activities over the course of the semester with concurrent physiological data collected with the Empatica E4 wearable sensor, we explored questions around how much data were needed to train such a device, and which types of machine-learning algorithms worked best. We concluded that although baseline models such as logistic regression and Markov models provided useful explanatory information on how the student’s physiology changed with mental effort, deep-learning algorithms were able to generate accurate predictions using the first 28 h of data for training. A system that combines long short-term memory and convolutional neural networks is recommended in order to generate smooth predictions while also being able to capture transitions in mental effort when they occur in the individual using the device.
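
As a concrete sketch of the fusion step described above, pairing self-reported mental effort with concurrent wearable samples, the function below windows a sensor stream and attaches the self-report label active at each window's midpoint. The sample rate, window length, and data layout are hypothetical and not taken from the study.

```python
# Sketch: align self-reported mental-effort labels with concurrent wearable
# samples by windowing. Sample rate, window length, and field layout are
# hypothetical illustrations, not the study's actual pipeline.

def make_windows(samples, labels, window_s=60, rate_hz=4):
    """Pair each fixed-length window of sensor samples with the self-report
    label active at the window's midpoint.

    samples: list of (timestamp_s, value) tuples, sorted by time
    labels:  list of (start_s, end_s, effort) activity-log entries
    """
    win = window_s * rate_hz
    out = []
    for i in range(0, len(samples) - win + 1, win):
        chunk = samples[i:i + win]
        mid_t = chunk[len(chunk) // 2][0]
        for start, end, effort in labels:
            if start <= mid_t < end:
                out.append(([v for _, v in chunk], effort))
                break  # windows without a logged activity are dropped
    return out
```

Windowed pairs like these are what a sequence model (e.g. the recommended LSTM/CNN combination) would be trained on.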

BMJ Open ◽  
2021 ◽  
Vol 11 (1) ◽  
pp. e039292
Author(s):  
Jean-Michel Roué ◽  
Iris Morag ◽  
Wassim M Haddad ◽  
Behnood Gholami ◽  
Kanwaljeet J S Anand

Introduction: Objective pain assessment in non-verbal populations is clinically challenging due to their inability to express their pain via self-report. Repetitive exposures to acute or prolonged pain lead to clinical instability, with long-term behavioural and cognitive sequelae in newborn infants. Strong analgesics are also associated with medical complications, potential neurotoxicity and altered brain development. Pain scores performed by bedside nurses provide subjective, observer-dependent assessments rather than objective data for infant pain management; the required observations are labour intensive, difficult to perform by a nurse who is concurrently performing the procedure, and increase the nursing workload. Multimodal pain assessment, using sensor fusion and machine-learning algorithms, can provide a patient-centred, context-dependent, observer-independent and objective pain measure.
Methods and analysis: In newborns undergoing painful procedures, we use facial electromyography to record facial muscle activity related to infant pain, ECG to examine heart rate (HR) changes and HR variability, electrodermal activity (skin conductance) to measure catecholamine-induced palmar sweating, changes in oxygen saturation and skin perfusion, and electroencephalography using active electrodes to assess brain activity in real time. This multimodal approach has the potential to improve the accuracy of pain assessment in non-verbal infants and may even allow continuous pain monitoring at the bedside. The feasibility of this approach will be evaluated in an observational prospective study of clinically required painful procedures in 60 preterm and term newborns, and infants aged 6 months or less.
Ethics and dissemination: The Institutional Review Board of Stanford University approved the protocol. Study findings will be published in peer-reviewed journals, presented at scientific meetings, taught via webinars, podcasts and video tutorials, and listed on academic/scientific websites. Future studies will validate and refine this approach using the minimum number of sensors required to assess neonatal/infant pain.
Trial registration number: ClinicalTrials.gov Registry (NCT03330496).
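
The ECG channel in the protocol above is used to examine heart rate changes and heart-rate variability. As a minimal sketch, the function below computes RMSSD, one standard time-domain HRV metric, from ECG-derived RR intervals; the protocol does not specify which HRV metrics it computes, so the choice of metric is an assumption.

```python
# Sketch: RMSSD (root mean square of successive differences) over RR
# intervals in milliseconds. Which HRV metrics the protocol actually
# computes is not stated; RMSSD is shown purely as an illustration.
import math

def rmssd(rr_ms):
    """RMSSD over a sequence of RR intervals (ms); needs >= 2 intervals."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))
```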


2021 ◽  
Author(s):  
William Romine ◽  
Noah Schroeder ◽  
Anjali Edwards ◽  
Tanvi Banerjee

Recent studies show that physiological data can detect changes in mental effort, paving the way for the development of wearable sensors to monitor mental effort at school, at work, and at home. We have yet to explore how such a device would perform with a single participant over an extended duration. We used a longitudinal case study design with ~38 hours of data to explore the efficacy of electrodermal activity, skin temperature, and heart rate for classifying mental effort. We utilized a 2-state Markov switching regression model to assess how well these physiological measures predict self-reported mental effort during logged activities. On average, a model with state-dependent relationships predicted within one unit of reported mental effort (training RMSE = 0.4, testing RMSE = 0.7). This automated sensing of mental effort has applications in various domains, including student engagement detection and cognitive state assessment in drivers, pilots, and caregivers.
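
A minimal sketch of how a 2-state Markov switching regression produces predictions: each hidden state has its own linear relationship between the physiological features and effort, and state probabilities are propagated through a transition matrix. The coefficients and transition matrix below are invented for illustration and are not the fitted values from the study.

```python
# Sketch of a 2-state Markov switching regression prediction step. All
# numbers passed in are hypothetical, not the study's fitted parameters.

def predict_effort(features, betas, trans, p0=(0.5, 0.5)):
    """features: list of feature vectors (e.g. [EDA, skin_temp, HR]) per step
    betas: per-state (intercept, coefficient-vector) pairs
    trans: 2x2 transition matrix, trans[i][j] = P(next state j | state i)
    Returns the probability-weighted effort prediction at each step."""
    p = list(p0)
    preds = []
    for x in features:
        # state-conditional linear predictions
        mu = [b0 + sum(w * xi for w, xi in zip(b, x)) for b0, b in betas]
        preds.append(p[0] * mu[0] + p[1] * mu[1])
        # propagate state probabilities one step through the chain
        p = [p[0] * trans[0][0] + p[1] * trans[1][0],
             p[0] * trans[0][1] + p[1] * trans[1][1]]
    return preds
```

The state-dependent intercepts and slopes are what give the model its explanatory value: each regime describes a different physiology-to-effort relationship.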


2020 ◽  
Author(s):  
Seyed Amir Hossein Aqajari ◽  
Rui Cao ◽  
Emad Kasaeyan Naeini ◽  
Michael-David Calderon ◽  
Kai Zheng ◽  
...  

BACKGROUND Accurate objective pain assessment is required in healthcare and clinical settings for appropriate pain management. Automated objective pain detection from physiological data provides valuable information to hospital staff and caregivers for better pain management, in particular for patients who are unable to self-report. Galvanic skin response (GSR) is a physiological signal that reflects changes in sweat gland activity and can capture features of the emotional states and anxiety induced by varying pain levels. In this study, we used statistical features extracted from GSR data collected from postoperative patients to detect their pain intensity. To the best of our knowledge, this is the first work to build pain models using postoperative adult patients rather than healthy subjects. OBJECTIVE The goal of this paper is to present an automatic pain assessment tool that uses GSR signals to predict different pain intensities in non-communicative postoperative patients. METHODS The study was designed to collect biomedical data from postoperative patients reporting moderate to high pain levels. Twenty-five subjects aged 23 to 89 years were recruited. First, a transcutaneous electrical nerve stimulation (TENS) unit was employed to obtain each patient's baseline. Second, the Empatica E4 wristband was attached to patients while they performed low-intensity activities. Patient self-report based on the Numerical Rating Scale (NRS) was used to record pain intensities for correlation with the objectively measured data. The labels were downsampled from 11 pain levels to 5 pain intensities, including the baseline. Two machine learning algorithms were used to construct the models, and the mean decrease impurity method was used to identify the most important features for pain prediction and improve accuracy.
We compared our results with a previously published study to estimate the true performance of our models. RESULTS Four binary classification models were constructed with each machine learning algorithm to distinguish the baseline from the other pain intensities (Baseline (BL) vs. Pain Level (PL) 1, BL vs. PL2, BL vs. PL3, and BL vs. PL4). Despite the challenges of analyzing real patient data, our models achieved higher accuracy than the BioVid-based approach for the first three pain models. For BL vs. PL1, BL vs. PL2, and BL vs. PL4, the highest prediction accuracies were achieved with a random forest classifier (86.0%, 70.0%, and 61.5%, respectively). For BL vs. PL3, we achieved an accuracy of 72.1% with a k-nearest neighbors classifier. CONCLUSIONS We are the first to propose and validate a pain assessment tool that predicts different pain levels in real postoperative adult patients using GSR signals. We also used feature selection algorithms to identify the most important features related to different pain intensities. INTERNATIONAL REGISTERED REPORT RR2-10.2196/17783
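
The abstract mentions statistical features extracted from GSR windows without listing them. The sketch below computes one plausible feature set (mean, standard deviation, range, and a crude slope); this specific set is an assumption for illustration, not the paper's actual feature list.

```python
# Sketch: statistical features over a window of GSR samples. The exact
# features used in the study are not listed in the abstract; this set is
# a common, illustrative choice.
import statistics

def gsr_features(window):
    """window: list of GSR samples (microsiemens) at a fixed sample rate."""
    n = len(window)
    slope = (window[-1] - window[0]) / (n - 1)  # crude per-sample trend
    return {
        "mean": statistics.fmean(window),
        "std": statistics.pstdev(window),
        "range": max(window) - min(window),
        "slope": slope,
    }
```

Feature vectors like this one would then feed the binary classifiers (e.g. random forest) that separate the baseline from each pain level.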



Water ◽  
2020 ◽  
Vol 12 (10) ◽  
pp. 2927
Author(s):  
Jiyeong Hong ◽  
Seoro Lee ◽  
Joo Hyun Bae ◽  
Jimin Lee ◽  
Woon Ji Park ◽  
...  

Predicting dam inflow is necessary for effective water management. This study built machine learning models to predict inflow into the Soyang River Dam in South Korea using 40 years of weather and dam inflow data. Six algorithms were used: decision tree (DT), multilayer perceptron (MLP), random forest (RF), gradient boosting (GB), recurrent neural network–long short-term memory (RNN–LSTM), and convolutional neural network–LSTM (CNN–LSTM). Among these, the multilayer perceptron performed best, with a Nash–Sutcliffe efficiency (NSE) of 0.812, root mean squared error (RMSE) of 77.218 m3/s, mean absolute error (MAE) of 29.034 m3/s, correlation coefficient (R) of 0.924, and coefficient of determination (R2) of 0.817. However, when dam inflow was below 100 m3/s, the ensemble models (random forest and gradient boosting) outperformed the MLP. Therefore, two combined machine learning (CombML) models (RF_MLP and GB_MLP) were developed that use the ensemble methods (RF and GB) when precipitation is below 16 mm and the MLP when precipitation is 16 mm or above; 16 mm is the average daily precipitation observed when inflow reaches 100 m3/s or more. Accuracy verification gave NSE 0.857, RMSE 68.417 m3/s, MAE 18.063 m3/s, R 0.927, and R2 0.859 for RF_MLP, and NSE 0.829, RMSE 73.918 m3/s, MAE 18.093 m3/s, R 0.912, and R2 0.831 for GB_MLP, indicating that the combined models predict dam inflow most accurately. The CombML results show that inflow can be predicted more accurately by combining several machine learning algorithms in a way that accounts for flow characteristics such as flow regime.
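
The CombML routing rule described above can be sketched directly: days with precipitation below the 16 mm threshold are routed to the ensemble model, and wetter days to the MLP. The model objects below are stand-ins for the trained models, not the study's actual implementations.

```python
# Sketch of the CombML routing rule: below the 16 mm daily precipitation
# threshold the ensemble model (RF or GB) predicts inflow; at or above it,
# the MLP does. Model objects are hypothetical stand-ins.

PRECIP_THRESHOLD_MM = 16.0  # avg daily precipitation at inflow >= 100 m3/s

def combml_predict(precip_mm, features, ensemble_model, mlp_model):
    """Route a day's features to the model suited to its flow regime."""
    if precip_mm < PRECIP_THRESHOLD_MM:
        return ensemble_model.predict(features)  # low-flow regime
    return mlp_model.predict(features)           # high-flow regime
```

The design choice here is a hard regime switch on an observable input (precipitation) rather than on the quantity being predicted (inflow), which keeps the rule usable at prediction time.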


2019 ◽  
Vol 63 (6) ◽  
pp. 60413-1-60413-11
Author(s):  
Yunfang Niu ◽  
Danli Wang ◽  
Ziwei Wang ◽  
Fan Sun ◽  
Kang Yue ◽  
...  

Abstract At present, research on emotion in virtual environments relies largely on subjective measures, and very few studies are based on objective physiological signals. In this article, the authors conducted a user experiment to study the emotional experience of virtual reality (VR) by comparing subjective reports and physiological data in VR and two-dimensional (2D) display environments. First, they analyzed self-report questionnaires, including the Self-Assessment Manikin (SAM), the Positive and Negative Affect Schedule (PANAS) and the Simulator Sickness Questionnaire (SSQ). The results indicated that VR causes a higher level of arousal than 2D and more readily evokes positive emotions. Both 2D and VR environments are prone to causing eye fatigue, but VR is more likely to cause symptoms of dizziness and vertigo. Second, they compared electrocardiogram (ECG), skin temperature (SKT) and electrodermal activity (EDA) signals in the two conditions. Statistical analysis showed significant differences in all three signals: participants in the VR environment had a higher degree of excitement, and their mood fluctuations were more frequent and more intense. In addition, the authors used different machine learning models for emotion detection and compared their accuracies on the VR and 2D datasets. All algorithms achieved higher accuracy in the VR environment than in 2D, corroborating that volunteers in VR produced more pronounced electrodermal signals and had a stronger sense of immersion. This article addresses gaps in existing work: the authors used objective physiological signals for experience evaluation alongside different types of subjective measures for comparison. They hope their study can provide helpful guidance for the engineering practice of virtual reality.

