Approaching the Real-World

Author(s):  
Hyeokhyen Kwon ◽  
Bingyao Wang ◽  
Gregory D. Abowd ◽  
Thomas Plötz

Recently, IMUTube introduced a paradigm change for bootstrapping human activity recognition (HAR) systems for wearables. The key idea is to utilize videos of activities to support training activity recognizers based on inertial measurement units (IMUs). The system retrieves videos from public repositories and subsequently generates virtual IMU data from them. The ultimate vision for such a system is to make large amounts of weakly labeled videos accessible for model training in HAR and, as such, to overcome one of the most pressing issues in the field: the lack of significant amounts of labeled sample data. In this paper, we present the first in-detail exploration of IMUTube in a realistic assessment scenario: the analysis of free-weight gym exercises. We make significant progress towards a flexible, fully functional IMUTube system by extending it to handle a range of artifacts that are common in unrestricted online videos, including various forms of video noise, non-human poses, body-part occlusions, and extreme camera and human motion. By overcoming these real-world challenges, we are able to generate high-quality virtual IMU data, which allows us to employ IMUTube for practical analysis tasks. We show that HAR systems trained by incorporating virtual sensor data generated by IMUTube significantly outperform baseline models trained only with real IMU data. In doing so, we demonstrate the practical utility of IMUTube and the progress made towards the final vision of the new bootstrapping paradigm.
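As a rough illustration of the virtual-IMU idea, accelerometer readings can be approximated by twice-differentiating a 3D joint trajectory extracted from video. The sketch below is plain NumPy; the function name, frame rate, and toy trajectory are illustrative assumptions, not IMUTube's actual pipeline (which additionally models sensor orientation and noise).

```python
import numpy as np

def virtual_accel(joint_xyz, fps=30.0):
    """Approximate accelerometer readings from a 3D joint trajectory.

    joint_xyz: (T, 3) array of joint positions in meters, one row per video frame.
    Returns (T-2, 3) linear accelerations via second-order central differences.
    """
    dt = 1.0 / fps
    # a[t] = (p[t+1] - 2*p[t] + p[t-1]) / dt^2
    return (joint_xyz[2:] - 2 * joint_xyz[1:-1] + joint_xyz[:-2]) / dt**2

# Toy example: a wrist moving along x with position t^2, i.e. constant 2 m/s^2.
t = np.arange(10) / 30.0
traj = np.stack([t**2, np.zeros_like(t), np.zeros_like(t)], axis=1)
a = virtual_accel(traj)  # every x-acceleration sample is approximately 2.0
```

The finite-difference step is only the kinematic core; mapping such signals to a realistic virtual sensor also requires handling the camera and motion artifacts discussed above.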

Sensors ◽  
2021 ◽
Vol 21 (1) ◽  
pp. 194
Author(s):  
Sarah Gonzalez ◽  
Paul Stegall ◽  
Harvey Edwards ◽  
Leia Stirling ◽  
Ho Chit Siu

The field of human activity recognition (HAR) often utilizes wearable sensors and machine learning techniques to identify the actions of a subject. This paper considers the recognition of walking and running using a support vector machine (SVM) trained on principal components derived from wearable sensor data. An ablation analysis is performed to select the subset of sensors that yields the highest classification accuracy. The paper also compares principal components across trials to assess the similarity of the trials. Five subjects were instructed to perform standing, walking, running, and sprinting on a self-paced treadmill, and the data were recorded using surface electromyography (sEMG) sensors, inertial measurement units (IMUs), and force plates. When all of the sensors were included, the SVM achieved over 90% classification accuracy using only the first three principal components of the data with the classes stand, walk, and run/sprint (combined run and sprint class). Sensors placed only on the lower leg produced higher accuracies than sensors placed on the upper leg. There was a small decrease in accuracy when the force plates were ablated, but the difference may not be operationally relevant. Using only accelerometers without sEMG sensors was shown to decrease the accuracy of the SVM.
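A minimal sketch of the PCA stage described above, in plain NumPy on synthetic stand-in features (the real study uses recorded sEMG/IMU/force-plate windows; an SVM, e.g. scikit-learn's `SVC`, would then be trained on the resulting component scores):

```python
import numpy as np

def top_principal_components(X, k=3):
    """Project feature windows onto the first k principal components via SVD."""
    Xc = X - X.mean(axis=0)                        # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:k].T                         # projections onto top-k PCs
    explained = (S[:k] ** 2).sum() / (S ** 2).sum()  # variance fraction retained
    return scores, explained

rng = np.random.default_rng(0)
# Synthetic stand-in for windowed sensor features from three classes
# (stand / walk / run-sprint), each class shifted along a shared direction.
X = np.vstack([rng.normal(loc=c * 3.0, scale=1.0, size=(100, 12)) for c in range(3)])
scores, explained = top_principal_components(X, k=3)
```

Because the class separation dominates the variance here, the first three components retain most of it, mirroring the paper's observation that three components suffice for over 90% accuracy.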


Electronics ◽  
2021 ◽  
Vol 10 (2) ◽  
pp. 111
Author(s):  
Pengjia Tu ◽  
Junhuai Li ◽  
Huaijun Wang ◽  
Ting Cao ◽  
Kan Wang

Human activity recognition (HAR) has vital applications in human–computer interaction, somatosensory games, motion monitoring, etc. Based on acceleration sensor data of human motion and a nonlinear analysis of the human motion time series, a novel HAR method based on nonlinear chaotic features is proposed in this paper. First, the C-C method and the G-P algorithm are used to compute the optimal delay time and embedding dimension, respectively, and a reconstructed phase space (RPS) is formed from the accelerometer data using time-delay embedding. Subsequently, a two-dimensional chaotic feature matrix is constructed, where the chaotic features are the correlation dimension and the largest Lyapunov exponent (LLE) of the attractor trajectory in the RPS. Finally, classification algorithms are used to recognize two different activity classes, i.e., basic and transitional activities. The experimental results show that the chaotic features achieve higher accuracy than traditional time- and frequency-domain features.
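The time-delay embedding step can be sketched as follows. The `delay_embed` helper and the sinusoid standing in for one accelerometer axis are illustrative; estimating the correlation dimension (G-P) and the LLE from the reconstructed phase space would be further steps not shown.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Reconstruct a phase space from a scalar series via time-delay embedding.

    Each row is the Takens-style vector [x[t], x[t+tau], ..., x[t+(dim-1)*tau]].
    """
    n = len(x) - (dim - 1) * tau          # number of complete embedding vectors
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Toy acceleration series: a sinusoid standing in for one axis of motion data.
t = np.linspace(0, 8 * np.pi, 800)
x = np.sin(t)
rps = delay_embed(x, dim=3, tau=10)       # (780, 3) trajectory in the RPS
```

In the paper's pipeline, `dim` and `tau` are not chosen by hand but by the G-P algorithm and the C-C method, respectively.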


Author(s):  
Dibyanshu Jaiswal ◽  
Debatri Chatterjee ◽  
Rahul Gavas ◽  
Ramesh Kumar Ramakrishnan ◽  
Arpan Pal

10.2196/13961 ◽  
2020 ◽  
Vol 22 (4) ◽  
pp. e13961
Author(s):  
Kim Sarah Sczuka ◽  
Lars Schwickert ◽  
Clemens Becker ◽  
Jochen Klenk

Background Falls are a common health problem, which in the worst cases can lead to death. To develop reliable fall detection algorithms as well as suitable prevention interventions, it is important to understand circumstances and characteristics of real-world fall events. Although falls are common, they are seldom observed, and reports are often biased. Wearable inertial sensors provide an objective approach to capture real-world fall signals. However, it is difficult to directly derive visualization and interpretation of body movements from the fall signals, and corresponding video data is rarely available. Objective The re-enactment method uses available information from inertial sensors to simulate fall events, replicate the data, validate the simulation, and thereby enable a more precise description of the fall event. The aim of this paper is to describe this method and demonstrate the validity of the re-enactment approach. Methods Real-world fall data, measured by inertial sensors attached to the lower back, were selected from the Fall Repository for the Design of Smart and Self-Adaptive Environments Prolonging Independent Living (FARSEEING) database. We focused on well-described fall events such as stumbling to be re-enacted under safe conditions in a laboratory setting. For the purposes of exemplification, we selected the acceleration signal of one fall event to establish a detailed simulation protocol based on identified postures and trunk movement sequences. The subsequent re-enactment experiments were recorded with comparable inertial sensor configurations as well as synchronized video cameras to analyze the movement behavior in detail. The re-enacted sensor signals were then compared with the real-world signals to adapt the protocol and repeat the re-enactment method if necessary. 
The similarity between the simulated and the real-world fall signals was analyzed with a dynamic time warping algorithm, which enables the comparison of two temporal sequences varying in speed and timing. Results A fall example from the FARSEEING database was used to show the feasibility of producing a similar sensor signal with the re-enactment method. Although fall events were heterogeneous concerning chronological sequence and curve progression, it was possible to reproduce a good approximation of the motion of a person’s center of mass during fall events based on the available sensor information. Conclusions Re-enactment is a promising method to understand and visualize the biomechanics of inertial sensor-recorded real-world falls when performed in a suitable setup, especially if video data is not available.
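The dynamic time warping comparison of re-enacted and real-world signals can be illustrated with a textbook dynamic-programming implementation; this is a minimal sketch, not the authors' exact configuration.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D signals.

    Aligns sequences that vary in speed and timing, as used to compare
    re-enacted and real-world fall signals.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible alignment moves
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A signal stays close to a time-stretched copy of itself under DTW,
# while a structurally different signal does not.
sig = np.sin(np.linspace(0, 2 * np.pi, 50))
stretched = np.sin(np.linspace(0, 2 * np.pi, 75))
d_same = dtw_distance(sig, stretched)
d_diff = dtw_distance(sig, np.zeros(50))
```

This tolerance to differences in speed and timing is exactly why DTW suits comparing a laboratory re-enactment with the original real-world fall.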


2018 ◽  
Vol 5 (2) ◽  
pp. 248-257 ◽  
Author(s):  
Ari Muzakir ◽  
Christofora Desi Kusmindari

The push-up is one of the simplest and most widely performed exercises. Although simple, it carries a high risk of injury if not performed correctly. A push-up detector is a useful push-up motion monitoring solution: nonstandard movements can be detected and corrected immediately. The device has two motion sensors integrated with an Arduino-based microcontroller, and the push-up data from the mounted sensors are displayed in an application in real time. Quality function deployment was used to determine user requirements. Testing involved 200 participants, 90% of whom could perform the push-up correctly; influencing factors include height, age, and weight. Tests were conducted on young men aged 18–23 years. The result of this study is an application capable of monitoring each push-up movement so that it conforms to the correct position, minimizing injuries resulting from movement errors.
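A hypothetical sketch of how such a detector might count repetitions from a normalized motion signal; the signal encoding (0 = top position, 1 = bottom) and the thresholds are assumptions for illustration, not taken from the paper.

```python
def count_pushups(depth_signal, down_thresh=0.6, up_thresh=0.2):
    """Count push-up repetitions from a normalized depth signal (0 = up, 1 = down).

    Hysteresis between the two thresholds prevents double-counting noisy samples
    and rejects shallow, nonstandard dips that never reach the bottom position.
    """
    reps, going_down = 0, False
    for d in depth_signal:
        if not going_down and d > down_thresh:
            going_down = True            # reached the bottom of a rep
        elif going_down and d < up_thresh:
            going_down = False           # returned to the top: one full rep
            reps += 1
    return reps

# Three clean reps followed by a shallow (non-counting) dip to 0.4.
signal = [0.0, 0.7, 0.1, 0.8, 0.05, 0.9, 0.1, 0.4, 0.1]
count = count_pushups(signal)            # counts 3 repetitions
```

The same hysteresis idea is what lets a detector flag incomplete repetitions as nonstandard movements rather than silently counting them.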


Author(s):  
Takeshi Okadome ◽  
Yasue Kishino ◽  
Takuya Maekawa ◽  
Koji Kamei ◽  
Yutaka Yanagisawa ◽  
...  

In a remote or local environment in which a sensor network continuously collects data produced by sensors attached to physical objects, the engine presented here stores the data sent over the Internet and searches for data segments that correspond to real-world events, using natural language (NL) words in a query entered in a web browser. The engine translates each query into a physical-quantity representation, searches for a sensor data segment that satisfies the representation, and sends back the event occurrence time, place, or related objects as a reply to the query, which the web browser in the remote or local environment then displays. The engine, which we expect to become one of the upcoming Internet services, exemplifies a concept of symbiosis that bridges the gap between real space and digital space.
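A toy sketch of the query-to-physical-quantity idea: a word maps to a predicate over sensor segments, and the engine scans stored windows for segments satisfying it, returning occurrence times. The word-to-predicate table and the window search below are entirely hypothetical, since the engine's actual translation rules are not described here.

```python
# Hypothetical word -> physical-quantity predicate table (illustrative only).
PREDICATES = {
    "moved": lambda seg: max(abs(v) for v in seg) > 0.5,   # large excursion
    "still": lambda seg: max(abs(v) for v in seg) <= 0.1,  # near-zero signal
}

def search_events(word, samples, timestamps, window=4):
    """Return start times of windows whose sensor data satisfies the word's predicate."""
    pred = PREDICATES[word]
    hits = []
    for i in range(0, len(samples) - window + 1, window):
        if pred(samples[i : i + window]):
            hits.append(timestamps[i])
    return hits

# A stored acceleration trace: quiet, a burst of motion, then quiet again.
samples = [0.0, 0.0, 0.0, 0.0, 0.9, 1.2, 0.7, 0.1, 0.0, 0.0, 0.0, 0.0]
times = list(range(12))
moved_at = search_events("moved", samples, times)   # the burst's start time
```

Replying with `moved_at` (plus place and related objects from sensor metadata) corresponds to the engine's answer to an NL query about when an object moved.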


2019 ◽  
Vol 8 (1) ◽  
pp. 4 ◽  
Author(s):  
Saleh Altowaijri ◽  
Mohamed Ayari ◽  
Yamen El Touati

By nature, some jobs are carried out in closed environments where employees may stay for long periods. This is the case for many professional activities, such as military watch tours of borders and of civilian buildings and facilities that need efficient control processes. The role assigned to personnel in such environments is usually sensitive and of high importance, especially in terms of security and protection. With this in mind, we propose in our research a novel approach using multi-sensor technology to monitor many safety and security parameters, including the health status of indoor workers such as those in watchtowers and at guard posts. In addition, the data gathered for these employees (heart rate, temperature, eye movement, human motion, etc.), combined with the room's sensor data (temperature, oxygen ratio, toxic gases, air quality, etc.), are saved by appropriate cloud services, which ensure easy access to the data without neglecting the privacy protection of such critical material. This information can later be used by specialists to monitor the evolution of a worker's health status as well as its cost-effectiveness, which offers the possibility of improving workplace productivity and general employee health.


2019 ◽  
Vol 3 (Supplement_1) ◽  
pp. S9-S9
Author(s):  
Neil Alexander ◽  
Shirley Handelzalts-Pereg ◽  
Linda Nyquist ◽  
Debbie Strasburg ◽  
Nicholas Mastruserio ◽  
...  

Abstract Losses of balance (LOBs) such as trips can lead to falls in older adults, yet what actually happens during real-world LOBs is unclear. With 4 wearable inertial measurement units (IMUs), we recorded feet, trunk, and wrist movements over 2 weeks. Using a wrist voice recorder to report the LOBs, we applied our IMU processing algorithms and reconstructed the full-body LOB and recovery motions. We recruited 7 at-risk older adults (mean age 76 years) who reported 114 LOBs, of which we reconstructed over 90%. Using a rating system, we found that 52% of the LOBs involved a significant trip, stumble, recovery step, and/or large trunk motion; 25% involved double or stutter steps and smaller trunk motions; the other 23% had less striking associated motions. These data suggest that most, but not all, self-reported real-world LOBs involve substantial postural destabilization and near falls. Analyses of the voice-recorded context under which the LOBs occurred are ongoing.


Biosensors ◽  
2020 ◽  
Vol 10 (9) ◽  
pp. 109
Author(s):  
Binbin Su ◽  
Christian Smith ◽  
Elena Gutierrez Farewik

Gait phase recognition is of great importance in the development of assistance-as-needed robotic devices such as exoskeletons. For a powered exoskeleton with phase-based control to determine and provide proper assistance to the wearer during gait, the user's current gait phase must first be identified accurately. Gait phase recognition can potentially be achieved through input from wearable sensors. Deep convolutional neural networks (DCNNs) are a machine learning approach widely used in image recognition. User kinematics, measured from inertial measurement unit (IMU) output, can be considered as an 'image', since the data exhibit local 'spatial' patterns when arranged in sequence. We propose a specialized DCNN to distinguish five phases in a gait cycle, based on IMU data labeled with foot-switch information. The DCNN showed approximately 97% accuracy in an offline evaluation of gait phase recognition. Accuracy was highest in the swing phase and lowest in terminal stance.
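To illustrate treating an IMU window as an 'image', the sketch below runs a single untrained 1-D convolution layer over a 6-channel window (3-axis accelerometer plus 3-axis gyroscope, an assumed layout) and maps pooled features to five phase logits. Weights are random, so this shows only data flow, not the authors' architecture or its 97% accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w):
    """Valid 1-D convolution: x is (channels, T), w is (filters, channels, k)."""
    f, c, k = w.shape
    T = x.shape[1] - k + 1
    out = np.empty((f, T))
    for j in range(f):
        for t in range(T):
            out[j, t] = np.sum(w[j] * x[:, t : t + k])
    return out

# One IMU window: 6 channels x 100 samples, treated like an 'image'
# whose local temporal patterns the filters can pick up.
window = rng.normal(size=(6, 100))
w1 = rng.normal(scale=0.1, size=(8, 6, 5))      # untrained demo filters
h = np.maximum(conv1d(window, w1), 0.0)         # ReLU feature maps, (8, 96)
pooled = h.mean(axis=1)                         # global average pooling, (8,)
w_out = rng.normal(scale=0.1, size=(5, 8))      # 5 gait-phase classes
logits = w_out @ pooled
phase = int(np.argmax(logits))                  # predicted phase index, 0..4
```

In training, the foot-switch labels mentioned above would supply the targets for these five logits.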

