Interaction Preference Differences between Elderly and Younger Exergame Users

Author(s):  
Ying Wang ◽  
Yuanyuan Huang ◽  
Junjie Xu ◽  
Defu Bao

Existing motion capture technology can efficiently track whole-body motion and has been applied in many areas. Whole-body interaction design has therefore gained the attention of many researchers, yet few have studied its suitability for elderly users. We were interested in exercise-based whole-body interactive games (exergames), which can provide mental and physical exercise for elderly users. We used heuristic evaluation to measure participants’ actions during exergame tasks and analyzed preference differences between elderly and younger users through the distribution of actions across four dimensions. We found that age affected the actions users performed in exergame tasks. We discuss the mental model of elderly users while performing these tasks and put forward suggestions for interactive actions. The model and suggestions offer theoretical guidance for the research and application of exergame design for elderly users and may help designers develop more effective exergames and other whole-body interaction interfaces suited to elderly users.
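
To make the kind of analysis described above concrete, here is a minimal sketch of comparing how two age groups distribute their actions across four interaction dimensions, using a chi-square test of independence. The dimension names and counts are invented placeholders, not data from the study, and the statistical test is an assumed stand-in for the paper's analysis.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical action counts per interaction dimension (columns)
# for elderly (row 0) and younger (row 1) participants.
dimensions = ["hand", "arm", "trunk", "leg"]  # placeholder dimension names
counts = np.array([
    [34, 51, 12, 8],   # elderly
    [22, 40, 29, 31],  # younger
])

# Test whether the action distribution depends on age group.
chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")

# Per-dimension proportions make the preference difference visible.
props = counts / counts.sum(axis=1, keepdims=True)
for name, old, young in zip(dimensions, props[0], props[1]):
    print(f"{name:>6}: elderly {old:.2f} vs younger {young:.2f}")
```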

1994 ◽  
Vol 6 (2) ◽  
pp. 99-116 ◽  
Author(s):  
M. W. Oram ◽  
D. I. Perrett

Cells have been found in the superior temporal polysensory area (STPa) of the macaque temporal cortex that are selectively responsive to the sight of particular whole-body movements (e.g., walking) under normal lighting. These cells typically discriminate the direction of walking and the view of the body (e.g., left profile walking left). We investigated the extent to which these cells remain responsive under “biological motion” conditions, where the form of the body is defined only by the movement of light patches attached to the points of limb articulation. One-third of the cells (25/72) selective for the form and motion of walking bodies showed sensitivity to the moving light displays. Seven of these cells showed only partial sensitivity to form from motion, insofar as they responded more to moving light displays than to moving controls but failed to discriminate body view. These seven cells exhibited directional selectivity. Eighteen cells showed statistical discrimination of both direction of movement and body view under biological motion conditions. Most of these cells showed reduced responses to the impoverished moving light stimuli compared with full lighting conditions. The 18 cells were thus sensitive to detailed form information (body view) from the pattern of articulating motion. Cellular processing of the global pattern of articulation was indicated by the observations that none of these cells was sensitive to the movement of individual limbs and that jumbling the pattern of moving limbs reduced response magnitude. A further 10 cells were tested for sensitivity to moving light displays of whole-body actions other than walking. Of these cells, 5/10 showed selectivity for form displayed by biological motion stimuli that paralleled their selectivity under normal lighting conditions. The cell responses thus provide direct evidence for neural mechanisms computing form from nonrigid motion. The cells’ selectivity was for body view, specific direction, and specific type of body motion presented by moving light displays, and it is not predicted by many current computational approaches to the extraction of form from motion.
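
The “biological motion” conditions refer to point-light displays: dots at the points of limb articulation on an otherwise invisible body. A rough sketch of how such a stimulus, and the “jumbled” control that reduced the cells’ responses, might be generated; the joint layout and gait parameterization are simplified assumptions, not the stimuli used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def walker_frame(t, n_joints=12, freq=1.0):
    """Dot positions for a crude point-light walker at time t (seconds).
    Joints swing sinusoidally about fixed body heights; alternate joints
    move in antiphase, as left/right limbs do during gait."""
    body_y = np.linspace(0.0, 1.7, n_joints)             # joint heights (m)
    phase = np.where(np.arange(n_joints) % 2, 0.0, np.pi)  # antiphase limbs
    swing = 0.15 * np.sin(2 * np.pi * freq * t + phase)  # limb articulation
    body_x = 1.0 * t + swing                             # 1 m/s translation
    return np.column_stack([body_x, body_y])

def jumbled_frame(t, offsets):
    """Control stimulus: identical dot motions, spatial layout scrambled."""
    return walker_frame(t) + offsets                     # fixed random offsets

offsets = rng.uniform(-0.5, 0.5, size=(12, 2))
for t in (0.0, 0.25, 0.5):
    print(walker_frame(t)[:3])  # first three dots of each frame
```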


2017 ◽  
Vol 118 (4) ◽  
pp. 2499-2506 ◽  
Author(s):  
A. Pomante ◽  
L. P. J. Selen ◽  
W. P. Medendorp

The vestibular system provides information for spatial orientation. However, this information is ambiguous: because the otoliths sense the gravitoinertial force, they cannot distinguish gravitational and inertial components. As a consequence, prolonged linear acceleration of the head can be interpreted as tilt, referred to as the somatogravic effect. Previous modeling work suggests that the brain disambiguates the otolith signal according to the rules of Bayesian inference, combining noisy canal cues with the a priori assumption that prolonged linear accelerations are unlikely. Within this modeling framework, the noise of the vestibular signals affects the dynamic characteristics of the tilt percept during linear whole-body motion. To test this prediction, we devised a novel paradigm to psychometrically characterize the dynamic visual vertical—as a proxy for the tilt percept—during passive sinusoidal linear motion along the interaural axis (0.33 Hz motion frequency, 1.75 m/s² peak acceleration, 80 cm displacement). While subjects (n = 10) kept fixation on a central body-fixed light, a line was briefly flashed (5 ms) at different phases of the motion, the orientation of which had to be judged relative to gravity. Consistent with the model’s prediction, subjects showed a phase-dependent modulation of the dynamic visual vertical, with a subject-specific phase shift with respect to the imposed acceleration signal. The magnitude of this modulation was smaller than predicted, suggesting a contribution of nonvestibular signals to the dynamic visual vertical. Despite their dampening effect, our findings may point to a link between the noise components in the vestibular system and the characteristics of the dynamic visual vertical.

NEW & NOTEWORTHY A fundamental question in neuroscience is how the brain processes vestibular signals to infer the orientation of the body and objects in space. We show that, under sinusoidal linear motion, systematic error patterns appear in the disambiguation of linear acceleration and spatial orientation. We discuss the dynamics of these illusory percepts in terms of a dynamic Bayesian model that combines uncertainty in the vestibular signals with priors based on the natural statistics of head motion.
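
The Bayesian disambiguation idea can be illustrated with a toy one-dimensional estimator: the otoliths report only the gravitoinertial sum, and a prior that sustained accelerations are unlikely shifts the interpretation toward tilt. This is only a sketch of the modeling framework; the noise and prior parameters below are illustrative, not the values fitted in the study.

```python
import numpy as np

g = 9.81           # gravity (m/s^2)
sigma_f = 0.3      # otolith measurement noise, illustrative
sigma_a = 1.0      # prior: sustained linear accelerations are unlikely
sigma_t = 0.5      # weak prior on tilt (rad)

def tilt_estimate(f_meas):
    """MAP tilt (rad) given a gravitoinertial measurement f = g*sin(tilt) + a."""
    tilt = np.linspace(-np.pi / 2, np.pi / 2, 721)
    acc = np.linspace(-5.0, 5.0, 801)
    T, A = np.meshgrid(tilt, acc, indexing="ij")
    loglik = -((f_meas - (g * np.sin(T) + A)) ** 2) / (2 * sigma_f**2)
    logprior = -(A**2) / (2 * sigma_a**2) - (T**2) / (2 * sigma_t**2)
    post = np.exp(loglik + logprior)
    marginal = post.sum(axis=1)        # marginalize over acceleration
    return tilt[np.argmax(marginal)]

# A sustained 1.75 m/s^2 interaural acceleration (the peak value of the
# paper's motion profile) is partly read as tilt: the somatogravic effect.
print(np.degrees(tilt_estimate(1.75)))
```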


Author(s):  
Pyeong-Gook Jung ◽  
Sehoon Oh ◽  
Gukchan Lim ◽  
Kyoungchul Kong

Motion capture systems play an important role in health-care and sport-training systems. In particular, there is great demand for a mobile motion capture system that enables people to monitor their health condition and practice sport postures anywhere, at any time. Motion capture systems based on infrared or vision cameras, however, require a special setup, which hinders their application in a mobile system. In this paper, a mobile three-dimensional motion capture system is developed based on inertial sensors and smart shoes. Sensor signals are measured and processed by a mobile computer; thus, the proposed system enables the analysis and diagnosis of postures during outdoor sports as well as indoor activities. The measured signals are transformed into quaternions to avoid gimbal lock. To improve the precision of the proposed motion capture system in open, outdoor spaces, a frequency-adaptive sensor fusion method and a kinematic model are utilized to reconstruct whole-body motion in real time. The reference point is continuously updated by the smart shoes, which measure the ground reaction forces.
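
A minimal sketch of quaternion-based sensor fusion in the spirit described: gyroscope rates are integrated in quaternion form (avoiding gimbal lock), and an accelerometer tilt correction is blended in with a gain that shrinks when the measured acceleration departs from 1 g. The gain rule is a crude stand-in for the paper's frequency-adaptive method, not the authors' actual algorithm.

```python
import numpy as np

G = 9.81  # gravity magnitude (m/s^2)

def quat_mul(q, r):
    """Hamilton product of two quaternions [w, x, y, z]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rotate(q, v):
    """Rotate vector v by quaternion q."""
    qc = q * np.array([1.0, -1.0, -1.0, -1.0])          # conjugate
    return quat_mul(quat_mul(q, np.array([0.0, *v])), qc)[1:]

def fuse_step(q, gyro, accel, dt, base_gain=0.02):
    """One fusion step: integrate gyro in quaternion form, then nudge the
    estimate toward the accelerometer's gravity direction."""
    q = q + 0.5 * quat_mul(q, np.array([0.0, *gyro])) * dt
    q /= np.linalg.norm(q)
    # Adaptive gain: trust the accelerometer less when ||accel|| != 1 g,
    # i.e., during dynamic motion (a simple stand-in for frequency adaptation).
    gain = base_gain * np.exp(-abs(np.linalg.norm(accel) - G))
    a_hat = accel / np.linalg.norm(accel)               # measured "up"
    g_body = rotate(q * np.array([1.0, -1.0, -1.0, -1.0]),
                    np.array([0.0, 0.0, 1.0]))          # expected "up"
    err = np.cross(a_hat, g_body)                       # small-angle tilt error
    q = quat_mul(q, np.array([1.0, *(0.5 * gain * err)]))
    return q / np.linalg.norm(q)

q = np.array([1.0, 0.0, 0.0, 0.0])                      # identity orientation
for _ in range(1000):                                   # 10 s at 100 Hz
    q = fuse_step(q, gyro=np.array([0.0, 0.0, 0.1]),
                  accel=np.array([0.0, 0.0, G]), dt=0.01)
print(q)  # slow yaw about the vertical, gravity-aligned
```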


2019 ◽  
Vol 121 (6) ◽  
pp. 2392-2400 ◽  
Author(s):  
Romy S. Bakker ◽  
Luc P. J. Selen ◽  
W. Pieter Medendorp

In daily life, we frequently reach toward objects while our body is in motion. We have recently shown that body accelerations influence the decision of which hand to use for the reach, possibly by modulating the body-centered computations of the expected reach costs. However, head orientation relative to the body was not manipulated, and hence it remains unclear whether vestibular signals contribute to these cost calculations in their head-based sensory frame or in a transformed, body-centered reference frame. To test this, subjects performed a preferential reaching task to targets at various directions while they were sinusoidally translated along the lateral body axis, with their head either aligned with the body (straight ahead) or rotated 18° to the left. As a measure of hand preference, we determined the target direction that resulted in equiprobable right/left-hand choices. Results show that head orientation affects this balanced target angle when the body is stationary but does not further modulate hand preference when the body is in motion. Furthermore, reaction and movement times were longer for reaches to the balanced target angle, resembling a competitive selection process, and were modulated by head orientation when the body was stationary. During body translation, reaction and movement times depended on the phase of the motion, but this phase-dependent modulation showed no interaction with head orientation. We conclude that the brain transforms vestibular signals to body-centered coordinates at the early stage of reach planning, when the decision of hand choice is computed.

NEW & NOTEWORTHY The brain takes inertial acceleration into account in computing the anticipated biomechanical costs that guide hand selection during whole-body motion. Whereas these costs are defined in a body-centered, muscle-based reference frame, the otoliths detect inertial acceleration in head-centered coordinates. By systematically manipulating head position relative to the body, we show that the brain transforms otolith signals into body-centered coordinates at an early stage of reach planning, i.e., before the decision of hand choice is computed.
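
The transformation at stake can be written in a few lines: the otoliths sense acceleration in head-fixed coordinates, and rotating by the head-on-body angle expresses it in body-fixed coordinates. A 2-D sketch using the 18° head rotation from the experiment; the axis conventions and signs are simplified assumptions.

```python
import numpy as np

def rotate2d(v, deg):
    """Rotate a 2-D vector by deg degrees (x: interaural, y: naso-occipital)."""
    t = np.radians(deg)
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    return R @ v

head_yaw = 18.0                    # head rotated 18 deg left of the body
a_body = np.array([1.75, 0.0])     # lateral body acceleration (m/s^2)

# What the otoliths sense (head frame): the lateral acceleration projects
# onto both the interaural and naso-occipital axes once the head is turned.
a_head = rotate2d(a_body, -head_yaw)

# The transformation the brain must apply for body-centered cost
# computations: rotate the otolith signal back by the neck angle.
print(rotate2d(a_head, head_yaw))  # recovers [1.75, 0.0]
```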


2021 ◽  
Author(s):  
Omid A Zobeiri ◽  
Kathleen E Cullen

The ability to accurately control our posture and perceive spatial orientation during self-motion requires knowledge of the motion of both the head and body. However, whereas the vestibular sensors and nuclei directly encode head motion, no sensors directly encode body motion. Instead, the integration of vestibular and neck proprioceptive inputs is necessary to transform vestibular information into the body-centric reference frame required for postural control. The anterior vermis of the cerebellum is thought to play a key role in this transformation, yet how its Purkinje cells integrate these inputs or what information they dynamically encode during self-motion remains unknown. Here we recorded the activity of individual anterior vermis Purkinje cells in alert monkeys during passively applied whole-body, body-under-head, and head-on-body rotations. Most neurons dynamically encoded an intermediate representation of self-motion between head and body motion. Notably, these neurons responded to both vestibular and neck proprioceptive stimulation and showed considerable heterogeneity in their response dynamics. Furthermore, their vestibular responses demonstrated tuning in response to changes in head-on-body position. In contrast, a small remaining percentage of neurons sensitive only to vestibular stimulation unambiguously encoded head-in-space motion across conditions. Using a simple population model, we establish that combining responses from 40 Purkinje cells can explain the responses of their target neurons in deep cerebellar nuclei across all self-motion conditions. We propose that the observed heterogeneity in Purkinje cells underlies the cerebellum's capacity to compute the dynamic representation of body motion required to ensure accurate postural control and perceptual stability in our daily lives.
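
In its simplest form, the population model described, which combines the responses of about 40 Purkinje cells to explain a deep cerebellar nuclei target neuron, is a linear readout fit across motion conditions. A toy sketch on synthetic firing rates; all signals here are simulated, not the recorded data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_cells, n_time = 40, 500

# Simulated head-in-space and body-in-space motion across conditions.
t = np.linspace(0, 10, n_time)
head = np.sin(2 * np.pi * 0.5 * t)
body = np.sin(2 * np.pi * 0.5 * t + 0.8)

# Each Purkinje cell encodes a heterogeneous intermediate mix of head and
# body motion (vestibular + neck proprioceptive integration), plus noise.
mix = rng.uniform(0, 1, n_cells)
pc = np.outer(mix, head) + np.outer(1 - mix, body)
pc += 0.2 * rng.standard_normal(pc.shape)

# Hypothetical target neuron in the deep cerebellar nuclei encodes body
# motion; fit a fixed linear readout over the whole population.
w, *_ = np.linalg.lstsq(pc.T, body, rcond=None)
pred = pc.T @ w
print("readout R^2:", 1 - np.var(body - pred) / np.var(body))
```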


2018 ◽  
pp. 60-73
Author(s):  
Marina Castán ◽  
Daniel Suárez

This research aims to contribute to the current field of architectural design by offering evidence of how a collaborative and embodied approach to soft architecture can inform a new physical-digital design process. Current design technologies (e.g., sensors, 3D scanners, procedural modelling software), together with the use of the body as a source for designing a space, offer new methods and tools for designing architecture (Hirschberg, Sayegh, Frühwirth and Zedlacher 2006). However, the potential for experiencing and digitally capturing a soft and elastic material interaction through the body as a dynamic system capable of informing soft architectural design has not yet been widely explored. By using the felt experience as a tool for design, we allow the material to express its qualities when activated by the body, revealing its form instead of having a form imposed from outside (DeLanda 2015). Taking an embodied approach used in interaction design and fashion design (Loke and Robertson 2011; Wilde, Vallgårda, and Tomico 2017), this research proposes a hybrid method to explore a textile-body ontology as an entity that has the potential to design a space, along with the use of motion capture technology, in an effort to reconnect the experiential (the body) with the architecture (the space). Through a custom-made interface of soft and hard materials, we explored the dynamic and spatial qualities of material elasticity through choreographed body movements. The interface acts as a deformable space that can be shaped by the body, producing a collection of form expressions ranging from subtle surface modifications to more prominent deformations. These form-giving processes were captured in real time by three Kinect sensors, offering a distinct digital raw material that can be conveniently manipulated and translated into architectural simulations, validating the method as a new way to inform soft architectural design processes. The findings showed that: 1) the direct experience of collaboratively interacting with a soft and elastic interface allows the identification of the dynamic qualities of the material in relation to oneself and others, facilitating an immediate spatial meaning-making process; 2) exploring the design of a soft and elastic space through choreography and motion capture technology contributes to the creation of augmented relational scales across physical and digital realms, proposing a new hybrid design method; and 3) the soft and elastic interface becomes a new entity when shaped by the body (textile-body ontology), affording a variety of formal expressions and offering a source of digital raw material for architectural design.
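
A sketch of the capture step: each depth sensor delivers a point cloud in its own coordinate frame, and known extrinsic transforms merge the three clouds into one for later manipulation. The calibration matrices and cloud data below are placeholders for those obtained by registering the actual sensors; this is not the authors' pipeline.

```python
import numpy as np

def to_world(points, R, t):
    """Map an (N, 3) point cloud from sensor to world coordinates."""
    return points @ R.T + t

# Placeholder extrinsics for three sensors ringed around the interface.
extrinsics = []
for angle in (0.0, 2 * np.pi / 3, 4 * np.pi / 3):
    R = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                  [np.sin(angle),  np.cos(angle), 0.0],
                  [0.0, 0.0, 1.0]])
    t = -R @ np.array([2.0, 0.0, 1.0])   # sensor 2 m out, 1 m up (assumed)
    extrinsics.append((R, t))

rng = np.random.default_rng(2)
clouds = [rng.uniform(-1, 1, size=(1000, 3)) for _ in extrinsics]  # fake frames

merged = np.vstack([to_world(c, R, t) for c, (R, t) in zip(clouds, extrinsics)])
print(merged.shape)  # one combined cloud per captured frame
```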


2021 ◽  
Vol 33 (5) ◽  
pp. 1029-1042
Author(s):  
Sho Sakurai ◽  
Takumi Goto ◽  
Takuya Nojima ◽  
Koichi Hirota

People infer the internal characteristics of others (attitude, intent, thoughts, ability, relationships, etc.), a process called interpersonal cognition (IC), from the impressions they form of those others’ personality or attributes (impression formation). Studies premised on seated interpersonal communication have confirmed that, whether the communication takes place in the real world or in a media environment, the appearance of the other person affects IC and the outcome of the communication. People also develop relationships based on impressions or images of the other party, and this psychological relationship manifests in physical relationships, that is, in the relative positions and movements of the body. In this study, we evaluate how the appearance of the opponent’s avatar affects players’ IC during whole-body interaction in a virtual reality (VR) space. Moreover, we examine the feasibility of changing players’ relationships in interpersonal interactions that involve control and interference of the entire body (“whole-body interaction”) by manipulating avatar appearance. We selected the party game Twister as a case model of whole-body interaction and developed a system that allows users to play Twister in VR space. Using this system, we conducted an experiment to evaluate players’ IC as a function of the gender and realism of the opponent’s avatar. The results showed that differences in the appearance of the opponent’s avatar affected the IC of male players. We also found that the changes in IC observed in the experiment can affect the players’ relationship, and we identify issues that must be resolved to realize the proposed method.


2013 ◽  
Vol 722 ◽  
pp. 454-458
Author(s):  
Shu Ai Li ◽  
Yong Sheng Wang ◽  
Rui Pai Xiang

To address the bottleneck of defining motion trajectories for virtual characters in the animation creation process, this paper presents a mechanical human-body motion capture solution, mainly involving inertial sensing technology, Bluetooth, the design of sensor network nodes, and the development of software for reconstructing the human body motion model. The system uses a sensor network to collect motion data from the body’s key joints; the data are delivered to a workstation through Bluetooth, where software applies an analytical inverse kinematics algorithm to analyze the motion data. The system therefore offers low cost and high precision. The paper also provides a solid foundation for research on multiplayer real-time motion capture technology.
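
The analytical inverse kinematics step can be illustrated with the classic closed-form solution for a two-link limb: given an end-point position relative to the limb root, the two joint angles follow from the law of cosines. This 2-D sketch is a simplification; the described system solves the full articulated body from the sensor data.

```python
import numpy as np

def two_link_ik(x, y, l1, l2):
    """Closed-form 2-D inverse kinematics for a two-link chain.
    Returns (shoulder_angle, elbow_angle) in radians, elbow-down solution."""
    d2 = x * x + y * y
    # Law of cosines gives the elbow angle from the target distance.
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    elbow = np.arccos(c2)
    # Shoulder angle: target bearing minus the offset due to elbow flexion.
    shoulder = np.arctan2(y, x) - np.arctan2(l2 * np.sin(elbow),
                                             l1 + l2 * np.cos(elbow))
    return shoulder, elbow

# Example: upper arm 0.3 m, forearm 0.25 m, wrist measured at (0.4, 0.2).
s, e = two_link_ik(0.4, 0.2, 0.3, 0.25)
print(np.degrees(s), np.degrees(e))
```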

