Validation of inertial-magnetic wearable sensors for full-body motion tracking of automotive manufacturing operations

2020 ◽  
Vol 79 ◽  
pp. 103005
Author(s):  
Xiaoxu Ji ◽  
Davide Piovesan


Author(s):  
Simon Biggs

This paper discusses the immersive full-body motion tracking installation Dark Matter, developed by the author and completed in early 2016. The paper outlines the conceptual focus of the project, including the use of the metaphor of dark matter to explore questions around interactive systems and assemblage. The primary technical considerations involved in the project are also outlined. ‘Co-reading’ is proposed as a framework for a generative ontology, within the context of assemblage theory, deployed within a multimodal, multi-agent interactive system.


PLoS ONE ◽  
2021 ◽  
Vol 16 (1) ◽  
pp. e0245717
Author(s):  
Shlomi Haar ◽  
Guhan Sundar ◽  
A. Aldo Faisal

Motor-learning literature focuses on simple laboratory tasks because they are well controlled and manipulations that induce learning and adaptation are easy to apply. Recently, we introduced a billiards paradigm and demonstrated the feasibility of real-world neuroscience using wearables for naturalistic full-body motion tracking and mobile brain imaging. Here we developed an embodied virtual-reality (VR) environment for our real-world billiards paradigm, which allows us to control the visual feedback for this complex real-world task while maintaining a sense of embodiment. The setup was validated by comparing real-world ball trajectories with the trajectories of the virtual balls calculated by the physics engine. We then ran our short-term motor learning protocol in the embodied VR. Subjects played billiard shots while holding the physical cue and hitting a physical ball on the table, seeing it all in VR. We found short-term motor learning trends in the embodied VR comparable to those we previously reported in the physical real-world task. Embodied VR can be used for learning real-world tasks in a highly controlled environment, enabling visual manipulations common in laboratory tasks and rehabilitation to be applied to a real-world full-body task. Embodied VR makes it possible to manipulate feedback and apply perturbations to isolate and assess interactions between specific motor-learning components, and thus to address current questions of motor learning in real-world tasks. Such a setup can potentially be used for rehabilitation, where VR is gaining popularity but transfer to the real world is currently limited, presumably due to the lack of embodiment.
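The validation step described above compares real-world ball trajectories with their physics-engine counterparts. As a minimal illustrative sketch (the abstract does not name the agreement metric; RMSE over time-aligned 2D table-plane samples is an assumption here, not necessarily the paper's measure):

```python
import numpy as np

def trajectory_rmse(real, virtual):
    """Root-mean-square distance between corresponding samples of a
    real ball trajectory and its physics-engine counterpart.

    real, virtual: (N, 2) arrays of time-aligned table-plane positions."""
    real = np.asarray(real, dtype=float)
    virtual = np.asarray(virtual, dtype=float)
    # Per-sample Euclidean distance, then RMS over the whole trajectory
    d = np.linalg.norm(real - virtual, axis=1)
    return float(np.sqrt(np.mean(d ** 2)))
```

A small per-sample error would support the claim that the virtual balls track the physical ones closely enough to preserve embodiment.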


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2108
Author(s):  
Maik Boltes ◽  
Juliane Adrian ◽  
Anna-Katharina Raytarowski

For our understanding of the dynamics inside crowds, reliable empirical data are needed; such data could increase safety and comfort for pedestrians and support the design of models reflecting the real dynamics. A well-calibrated camera system can extract absolute head positions with high accuracy. Adding inertial sensors or even self-contained full-body motion capture systems allows relative tracking of occluded people or body parts, and can capture the locomotion of the whole body even in dense crowds. The newly introduced hybrid system maps the trajectory of the top of the head, obtained from a full-body motion tracking system, onto the head trajectory of a camera system in global space. The fused data enable the analysis of possible correlations between all observables. In this paper we present an experiment of people passing through a bottleneck and show by example the influences of bottleneck width and motivation on the overall movement, velocity, stepping locomotion, and rotation of the pelvis. The hybrid tracking system opens up new possibilities for analyzing pedestrian dynamics inside crowds, such as the space requirement while passing through a bottleneck. The system allows linking any body motion to characteristics describing the situation of a person inside a crowd, such as the density or the movements of other participants nearby.
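The core step of such a hybrid system is registering the top-of-head trajectory from the body-worn motion capture system onto the camera-derived head trajectory in global space. A minimal sketch of one way this registration could be done, assuming time-synchronized 2D ground-plane samples and a least-squares similarity alignment (Umeyama method); the paper's actual fusion procedure may differ:

```python
import numpy as np

def fit_similarity_2d(src, dst):
    """Least-squares similarity transform (scale s, rotation R, translation t)
    mapping src onto dst, i.e. dst_i ~ s * R @ src_i + t (Umeyama method).

    src, dst: (N, 2) arrays of time-synchronized trajectory samples."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)          # cross-covariance matrix
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(2)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[1, 1] = -1.0                        # guard against reflections
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / src_c.var(axis=0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t
```

With s, R, and t in hand, the relative mocap trajectory can be mapped into the camera system's global frame and fused with the absolute head positions.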


2020 ◽  
Author(s):  
Karl K. Kopiske ◽  
Daniel Koska ◽  
Thomas Baumann ◽  
Christian Maiwald ◽  
Wolfgang Einhäuser

Most humans can walk effortlessly across uniform terrain, even without paying much attention to it. However, most natural terrain is far from uniform, and we need visual information to maintain a stable gait. In a controlled yet naturalistic environment, we simulated terrain difficulty through slip-like perturbations that were either unpredictable (experiment 1) or sometimes preceded by visual cues (experiment 2), while recording eye and body movements using mobile eye tracking and full-body motion tracking. We quantified the distinct roles of eye and head movements in adjusting gaze on different time scales. While motor perturbations mainly influenced head movements, eye movements were primarily affected by visual cues, both immediately following slips and, to a lesser extent, over 5-minute blocks. Gaze parameters adapted after the first perturbation in each block, with little transfer between blocks. In conclusion, gaze-gait interactions in experimentally perturbed naturalistic walking are adaptive, flexible, and effector-specific.


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Ramon J. Boekesteijn ◽  
José M. H. Smolders ◽  
Vincent J. J. F. Busch ◽  
Alexander C. H. Geurts ◽  
Katrijn Smulders

Background Although it is well established that osteoarthritis (OA) impairs daily-life gait, objective gait assessments are not part of routine clinical evaluation. Wearable inertial sensors provide an easily accessible and fast way to routinely evaluate gait quality in clinical settings. However, during these assessments, more complex and meaningful aspects of daily-life gait, including turning, dual-task performance, and upper body motion, are often overlooked. The aim of this study was therefore to investigate turning, dual-task performance, and upper body motion in individuals with knee or hip OA, in addition to more commonly assessed spatiotemporal gait parameters, using wearable sensors. Methods Gait was compared between individuals with unilateral knee (n = 25) or hip OA (n = 26) scheduled for joint replacement and healthy controls (n = 27). For 2 min, participants walked back and forth along a 6-m trajectory making 180° turns, with and without a secondary cognitive task. Gait parameters were collected using four inertial measurement units on the feet and trunk. To test whether dual-task gait, turning, and upper body motion had added value beyond spatiotemporal parameters, a factor analysis was conducted. Effect sizes were computed as standardized mean differences between the OA groups and healthy controls to identify parameters from these gait domains that were sensitive to knee or hip OA. Results Four independent domains of gait were obtained: speed-spatial, speed-temporal, dual-task cost, and upper body motion. Turning parameters constituted a gait domain together with cadence. Of the domains obtained, stride length (speed-spatial) and cadence (speed-temporal) had the strongest effect sizes for both knee and hip OA. Upper body motion (lumbar sagittal range of motion) showed a strong effect size when comparing hip OA with healthy controls. Parameters reflecting dual-task cost were not sensitive to knee or hip OA.
Conclusions Besides the more commonly reported spatiotemporal parameters, only upper body motion provided non-redundant and sensitive parameters representing gait adaptations in individuals with hip OA. Turning parameters were sensitive to knee and hip OA but were not independent from speed-related gait parameters. Dual-task parameters had limited additional value for evaluating gait in knee and hip OA, although dual-task cost constituted a separate gait domain. Future steps should include testing the responsiveness of these gait domains to interventions aiming to improve mobility.
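The effect sizes above are described as standardized mean differences between OA groups and healthy controls. A minimal sketch of such a computation, assuming Cohen's d with a pooled standard deviation (the abstract does not specify the exact estimator used):

```python
import numpy as np

def cohens_d(group, control):
    """Standardized mean difference (Cohen's d) with pooled SD."""
    group, control = np.asarray(group, float), np.asarray(control, float)
    n1, n2 = len(group), len(control)
    # Pool the two sample variances, weighted by degrees of freedom
    pooled_var = ((n1 - 1) * group.var(ddof=1) +
                  (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2)
    return (group.mean() - control.mean()) / np.sqrt(pooled_var)
```

Applied per gait parameter (e.g. stride length in the knee-OA group vs. controls), the magnitude of d indicates which parameters are sensitive to OA.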


Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2331
Author(s):  
Stefano Di Paolo ◽  
Nicola Francesco Lopomo ◽  
Francesco Della Villa ◽  
Gabriele Paolini ◽  
Giulio Figari ◽  
...  

The aim of the present study was to quantify joint kinematics through a wearable sensor system during multidirectional high-speed complex movements used in a protocol for rehabilitation and return-to-sport assessment after Anterior Cruciate Ligament (ACL) injury, and to validate it against a gold-standard optoelectronic marker-based system. Thirty-four healthy athletes were evaluated with a full-body wearable sensor system (MTw Awinda, Xsens) and a marker-based optoelectronic system (Vicon Nexus, Vicon) during the execution of three tasks: drop jump, forward sprint, and 90° change of direction. Clinically relevant joint angles of the lower limbs and trunk were compared through Pearson's correlation coefficient (r) and the Coefficient of Multiple Correlation (CMC). Excellent agreement (r > 0.94, CMC > 0.96) was found for knee and hip sagittal-plane kinematics in all the movements. Fair-to-excellent agreement was found for frontal-plane (r = 0.55–0.96, CMC = 0.63–0.96) and transverse-plane (r = 0.45–0.84, CMC = 0.59–0.90) kinematics. Movement complexity slightly affected the agreement between the systems. The wearable sensor system showed fair-to-excellent concurrent validity in the evaluation of the specific joint parameters commonly used in rehabilitation and return-to-sport assessment after ACL injury for complex movements. ACL professionals could benefit from full-body wearable technology in the on-field rehabilitation of athletes.
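Agreement between the two systems is reported via Pearson's r and the Coefficient of Multiple Correlation (CMC). A sketch for two time-aligned joint-angle waveforms, assuming a Kadaba-style CMC formulation for two waveforms (one common variant; the paper's exact definition may differ):

```python
import numpy as np

def waveform_agreement(y1, y2):
    """Pearson's r and CMC for two joint-angle waveforms sampled at the
    same T time points (e.g. wearable vs. marker-based knee flexion).

    CMC here follows a Kadaba-style formulation for M = 2 waveforms;
    it can be undefined (NaN) when the waveforms are very dissimilar."""
    Y = np.stack([np.asarray(y1, float), np.asarray(y2, float)])  # (M, T)
    M, T = Y.shape
    r = np.corrcoef(Y[0], Y[1])[0, 1]
    mean_t = Y.mean(axis=0)        # mean across waveforms at each instant
    grand = Y.mean()               # grand mean over all samples
    within = ((Y - mean_t) ** 2).sum() / (T * (M - 1))
    total = ((Y - grand) ** 2).sum() / (M * T - 1)
    cmc = np.sqrt(1.0 - within / total)
    return r, cmc
```

Note that r is insensitive to a constant offset between the two systems, whereas CMC penalizes it; reporting both, as the study does, captures shape similarity and absolute agreement.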

