ADAM-sense: Anxiety-displaying activities recognition by motion sensors

2021, pp. 101485
Author(s):  
Nida Saddaf Khan ◽  
Muhammad Sayeed Ghani ◽  
Gulnaz Anjum

Author(s):  
Giuseppe Placidi ◽  
Danilo Avola ◽  
Luigi Cinque ◽  
Matteo Polsinelli ◽  
Eleni Theodoridou ◽  
...  

Virtual Glove (VG) is a low-cost computer vision system that uses two orthogonal LEAP Motion sensors to provide detailed 4D hand tracking in real time. VG has many potential applications in human-system interaction, such as remote control of machines or tele-rehabilitation. An efficient data-integration strategy for VG is proposed, based on velocity calculation, for selecting data from one of the two LEAP sensors at each time instant. When a joint of the hand model is occluded from a sensor's view, its position is estimated and tends to flicker. Since VG uses two LEAP sensors, two spatial representations are available for each joint at every moment: the method selects the one with the lower velocity at each time instant. Choosing the smoother trajectory stabilizes VG and improves precision, mitigates occlusions (parts of the hand, or handled objects, obscuring other hand parts) and, when both sensors see the same joint, reduces the number of outliers produced by hardware instabilities. The strategy is evaluated experimentally in terms of outlier reduction relative to a previously used data-selection strategy for VG, and the results are reported and discussed. In the future, an objective test set will be designed and realized, with the help of external high-precision positioning equipment, to allow a quantitative and objective evaluation of the gain in precision and of the intrinsic limitations of the proposed strategy. Moreover, advanced Artificial Intelligence-based (AI-based) real-time data-integration strategies, specific to VG, will be designed and tested on the resulting dataset.
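
A minimal sketch of the lower-velocity selection rule described above (the function name, array shapes, and frame rate are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def select_joint_positions(prev_pos, pos_a, pos_b, dt):
    """Per-joint selection between two LEAP estimates: keep the one whose
    implied velocity (displacement since the previous frame) is lower.

    prev_pos, pos_a, pos_b: (n_joints, 3) arrays of joint coordinates.
    dt: time elapsed since the previous frame, in seconds.
    """
    vel_a = np.linalg.norm(pos_a - prev_pos, axis=1) / dt
    vel_b = np.linalg.norm(pos_b - prev_pos, axis=1) / dt
    choose_a = vel_a <= vel_b                      # boolean mask, one per joint
    return np.where(choose_a[:, None], pos_a, pos_b)

# Example: 21-joint hand model, two simultaneous sensor readings.
rng = np.random.default_rng(0)
prev = rng.normal(size=(21, 3))
a = prev + rng.normal(scale=0.01, size=(21, 3))   # smooth estimate
b = prev + rng.normal(scale=0.20, size=(21, 3))   # flickering estimate
fused = select_joint_positions(prev, a, b, dt=1 / 60)
```

Selecting per joint, rather than per sensor, lets the fused hand model mix both views in a single frame, which is what makes the rule effective under partial occlusion.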


Author(s):  
Kirti Sundar Sahu ◽  
Arlene Oetomo ◽  
Niloofar Jalali ◽  
Plinio P. Morita

The World Health Organization declared the coronavirus outbreak a pandemic on March 11, 2020. To inhibit the spread of COVID-19, governments around the globe, including Canada's, implemented physical distancing and lockdown measures, including work-from-home policies. In 2020, Canada developed a 24-Hour Movement Guideline for all ages, providing guidance on the ideal daily amounts of physical activity, sedentary behaviour, and sleep (PASS). The purpose of this study was to investigate household- and population-level changes in lifestyle behaviours (PASS) and in time spent indoors, following the implementation of physical distancing protocols and stay-at-home guidelines. We used 2019 and 2020 data from ecobee, a Canadian smart Wi-Fi thermostat company, obtained through its Donate Your Data (DYD) program. Using motion-sensor data, we quantified sleep duration from the absence of movement and, similarly, used increased sensor activation to indicate longer household occupancy. The key findings were: during the COVID-19 pandemic, overall household-level activity increased significantly compared with pre-pandemic times; there was no significant difference in household-level behaviours between weekdays and weekends during the pandemic; average sleep duration did not change, but sleep patterns changed significantly, with bedtime and wake-up time both delayed; and time spent indoors increased while time spent outdoors was significantly reduced. Our analysis demonstrates the feasibility of using big data to monitor the impact of the COVID-19 pandemic on household- and population-level behaviours and patterns of change.
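
A minimal sketch of the absence-of-movement heuristic described above, assuming motion events arrive as per-household timestamps (the window boundaries and data layout are illustrative assumptions, not the DYD pipeline):

```python
from datetime import datetime, timedelta

def longest_still_period(motion_events, day_start, day_end):
    """Estimate sleep as the longest gap between motion-sensor activations
    within an observation window (absence of movement ~ sleep)."""
    events = sorted(t for t in motion_events if day_start <= t <= day_end)
    edges = [day_start] + events + [day_end]
    gaps = [(b - a, a, b) for a, b in zip(edges, edges[1:])]
    return max(gaps)  # (duration, gap_start, gap_end)

# Example: one evening-to-morning window with sparse activations.
start = datetime(2020, 4, 1, 20, 0)
end = start + timedelta(hours=12)
events = [start + timedelta(hours=h) for h in (0.5, 1.2, 2.0, 10.5, 11.0)]
duration, bed, wake = longest_still_period(events, start, end)
print(f"estimated sleep: {duration}, bedtime ~{bed}, wake ~{wake}")
```

The gap start and end double as proxies for bedtime and wake-up time, which is how shifts in sleep pattern (rather than duration) can be detected.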


2010, Vol. 7(4), pp. 1558-1576
Author(s):  
Roman Cuberek ◽  
Walid El Ansari ◽  
Karel Frömel ◽  
Krzysztof Skalik ◽  
Erik Sigmund

Sensors, 2020, Vol. 20(7), pp. 1877
Author(s):  
Rieke Trumpf ◽  
Wiebren Zijlstra ◽  
Peter Haussermann ◽  
Tim Fleiner

Applicable and accurate assessment methods are required for a clinically relevant quantification of habitual physical activity (PA) levels and sedentariness in older adults. The aim of this study is to compare habitual PA and sedentariness as assessed with (1) a wrist-worn actigraph, (2) a hybrid motion sensor attached to the lower back, and (3) self-estimation based on a questionnaire. Over the course of one week, the PA of 58 community-dwelling, subjectively healthy older adults was recorded. The results indicate that, compared with the hybrid motion sensor approach, actigraphy overestimates PA levels in older adults and underestimates sedentariness. Significantly longer durations (hh:mm/day) for all PA intensities were recorded with the actigraph (light: 04:19; moderate to vigorous: 05:08) than with the hybrid motion sensor (light: 01:24; moderate to vigorous: 02:21) or by self-estimation (light: 02:33; moderate to vigorous: 03:04). Actigraphy-assessed durations of sedentariness (14:32 hh:mm/day) were significantly shorter than those assessed with the hybrid motion sensor (20:15 hh:mm/day). The self-estimated duration of light-intensity activity was significantly shorter than that measured by the hybrid motion sensor. These results highlight the importance of an accurate quantification of habitual PA levels and sedentariness in older adults. Hybrid motion sensors can offer important insights into PA levels and types (e.g., sitting, lying) and can increase knowledge about mobility-related PA and patterns of sedentariness, whereas actigraphy appears not to be recommendable for this purpose.
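
A minimal sketch of how per-epoch intensity classifications might be aggregated into daily durations of the kind reported above (the cut-points, epoch length, and function name are illustrative assumptions, not the study's method):

```python
import numpy as np

EPOCH_SEC = 60  # 1-minute epochs (assumption)

def daily_durations(counts, light_cut=100, mvpa_cut=2020):
    """Classify accelerometer counts per epoch into sedentary / light /
    moderate-to-vigorous and return hh:mm totals per category."""
    counts = np.asarray(counts)
    sed = counts < light_cut
    mvpa = counts >= mvpa_cut
    light = ~sed & ~mvpa
    to_hhmm = lambda n: f"{n * EPOCH_SEC // 3600:02d}:{n * EPOCH_SEC % 3600 // 60:02d}"
    return {
        "sedentary": to_hhmm(sed.sum()),
        "light": to_hhmm(light.sum()),
        "moderate_to_vigorous": to_hhmm(mvpa.sum()),
    }

# Example: a simulated day of 1440 one-minute epochs.
rng = np.random.default_rng(1)
print(daily_durations(rng.integers(0, 4000, size=1440)))
```

Because every epoch falls into exactly one category, device disagreements of the kind the study reports show up directly as shifted totals between the three bins.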


Sensors, 2021, Vol. 21(13), pp. 4482
Author(s):  
Rodrigo Colnago Contreras ◽  
Avinash Parnandi ◽  
Bruno Gomes Coelho ◽  
Claudio Silva ◽  
Heidi Schambra ◽  
...  

A large number of stroke survivors suffer a significant decrease in upper extremity (UE) function, requiring rehabilitation therapy to boost recovery of UE motion. Assessing the efficacy of treatment strategies is a challenging problem in this context and is typically accomplished by observing the performance of patients during their execution of daily activities. A more detailed assessment of UE impairment can be undertaken with a clinical bedside test, the UE Fugl-Meyer Assessment, but it fails to examine compensatory movements of functioning body segments that are used to bypass impairment. In this work, we use a graph learning method to build a visualization tool tailored to support the analysis of stroke patients. Called NE-Motion, or Network Environment for Motion Capture Data Analysis, the proposed analytic tool processes sets of time series captured by motion sensors worn by patients, providing visual-analytic resources for identifying abnormalities in movement patterns. Developed in close collaboration with domain experts, NE-Motion is capable of uncovering important phenomena, such as compensation, while revealing differences between stroke patients and healthy individuals. The effectiveness of NE-Motion is shown in two case studies designed to analyze particular patients and to compare groups of subjects.
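
As a generic illustration of building a graph from multivariate sensor time series (the abstract does not detail NE-Motion's actual graph learning method; the thresholding scheme here is an assumption), a minimal sketch that links sensor channels whose signals are strongly correlated:

```python
import numpy as np

def correlation_graph(series, threshold=0.7):
    """Build an undirected graph over sensor channels: connect two channels
    when the absolute Pearson correlation of their signals exceeds threshold.

    series: (n_channels, n_samples) array of motion-sensor signals.
    Returns a list of (i, j, correlation) edges.
    """
    corr = np.corrcoef(series)
    n = corr.shape[0]
    return [(i, j, corr[i, j])
            for i in range(n) for j in range(i + 1, n)
            if abs(corr[i, j]) >= threshold]

# Example: 6 simulated channels, two of them nearly identical.
rng = np.random.default_rng(2)
x = rng.normal(size=(6, 500))
x[1] = x[0] + 0.1 * rng.normal(size=500)  # strongly correlated pair
print(correlation_graph(x))
```

In a visual-analytics setting, edges that appear in patient recordings but not in healthy controls (or vice versa) are natural candidates for flagging abnormal coordination patterns such as compensation.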


Mathematics, 2021, Vol. 9(6), pp. 634
Author(s):  
Tarek Frahi ◽  
Francisco Chinesta ◽  
Antonio Falcó ◽  
Alberto Badias ◽  
Elias Cueto ◽  
...  

We are interested in evaluating the state of drivers, i.e., determining whether or not they are attentive to the road, using motion sensor data collected from car driving experiments. Our goal is to design a predictive model that can estimate the state of a driver given the data collected from motion sensors. For that purpose, we leverage recent developments in topological data analysis (TDA) to analyze and transform the data coming from the sensor time series, and we build a machine learning model based on the topological features extracted with TDA. We present experiments showing that our model accurately identifies the state of the user, predicting whether they are relaxed or tense.
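
A minimal sketch of a TDA feature pipeline in this spirit (delay embedding plus persistent homology via the ripser package; the window size, summary statistics, and classifier are illustrative assumptions, not the authors' exact model):

```python
import numpy as np
from ripser import ripser                        # pip install ripser
from sklearn.ensemble import RandomForestClassifier

def delay_embed(signal, dim=3, tau=5):
    """Takens delay embedding: turn a 1D time series into a point cloud."""
    n = len(signal) - (dim - 1) * tau
    return np.column_stack([signal[i * tau:i * tau + n] for i in range(dim)])

def topo_features(signal):
    """Summarize H0/H1 persistence diagrams with simple lifetime statistics."""
    dgms = ripser(delay_embed(signal), maxdim=1)['dgms']
    feats = []
    for dgm in dgms:
        finite = np.isfinite(dgm[:, 1])
        life = dgm[finite, 1] - dgm[finite, 0]   # persistence lifetimes
        feats += [life.max(initial=0.0), life.sum()]
    return np.array(feats)

# Example: classify simulated 'relaxed' (smooth) vs 'tense' (jittery) windows.
rng = np.random.default_rng(3)
t = np.linspace(0, 8 * np.pi, 400)
X = [topo_features(np.sin(t) + 0.05 * rng.normal(size=t.size)) for _ in range(20)]
X += [topo_features(np.sin(t) + 0.5 * rng.normal(size=t.size)) for _ in range(20)]
y = [0] * 20 + [1] * 20
clf = RandomForestClassifier(random_state=0).fit(X, y)
```

The appeal of topological features for this task is their robustness: a smooth quasi-periodic signal yields a prominent loop (long-lived H1 class) in the embedded point cloud, while jittery driving data blurs it.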


Author(s):  
Ahmed Ezzat ◽  
Alexandros Kogkas ◽  
Josephine Holt ◽  
Rudrik Thakkar ◽  
Ara Darzi ◽  
...  

Background: Within surgery, assistive robotic devices (ARD) have been associated with improved patient outcomes. ARD can offer the surgical team a "third hand" to perform a wider range of tasks with more degrees of motion than conventional laparoscopy. We test an eye-tracking-based robotic scrub nurse (RSN) in a simulated operating room, based on a novel real-time framework for theatre-wide 3D gaze localization in a mobile fashion.

Methods: Surgeons performed segmental resection of pig colon and handsewn end-to-end anastomosis while wearing eye-tracking glasses (ETG) assisted by distributed RGB-D motion sensors. To select instruments, surgeons (ST) fixed their gaze on a screen, initiating the RSN to pick up and transfer the item. The task performed with the assistance of a human scrub nurse (HSNt) was compared against the task performed with the assistance of both robotic and human scrub nurses (R&HSNt). Task load (NASA-TLX), technology acceptance (Van der Laan's scale), performance metrics, and team communication were measured.

Results: Overall, 10 ST participated. NASA-TLX feedback for ST on HSNt vs. R&HSNt usage revealed no significant difference in mental, physical, or temporal demands and no change in task performance. ST reported a significantly higher frustration score with R&HSNt. Van der Laan's scores showed positive usefulness and satisfaction ratings for the RSN. No significant difference in operating time was observed.

Conclusions: We report initial findings on our eye-tracking-based RSN. It enables mobile, unrestricted, hands-free human–robot interaction intra-operatively. Importantly, the platform was deemed non-inferior to HSNt and was accepted by ST and HSN test users.
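
A minimal sketch of a gaze dwell-time trigger of the kind such an interface might use to confirm an instrument selection (the dwell threshold, class name, and screen-region layout are illustrative assumptions, not the authors' framework):

```python
import time

DWELL_SEC = 1.0  # fixation time required to confirm a selection (assumption)

class GazeSelector:
    """Trigger an instrument request when gaze stays in one screen region."""

    def __init__(self, regions):
        self.regions = regions        # {name: (x0, y0, x1, y1)}
        self._current = None
        self._since = None

    def update(self, x, y, now=None):
        """Feed one gaze sample; return a region name once dwell completes."""
        now = time.monotonic() if now is None else now
        hit = next((name for name, (x0, y0, x1, y1) in self.regions.items()
                    if x0 <= x <= x1 and y0 <= y <= y1), None)
        if hit != self._current:                  # gaze moved to a new region
            self._current, self._since = hit, now
            return None
        if hit is not None and now - self._since >= DWELL_SEC:
            self._since = float('inf')            # fire once per fixation
            return hit
        return None

# Example: gaze settles on the 'scissors' region for over a second.
sel = GazeSelector({"scissors": (0, 0, 100, 100), "forceps": (120, 0, 220, 100)})
for t in (0.0, 0.4, 0.8, 1.2):
    fired = sel.update(50, 50, now=t)
    if fired:
        print("request instrument:", fired)
```

Requiring a sustained fixation rather than a single gaze sample is the usual way such interfaces avoid the "Midas touch" problem of triggering on every glance.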

