Multimodal and Motor Influences on Orientation: Implications for Adapting to Weightless and Virtual Environments

1992 · Vol 2 (4) · pp. 307-322
Author(s): James R. Lackner

Human sensory-motor control and orientation involve the correlation of sensory information from many modalities with motor information about ongoing patterns of voluntary and reflexive activation of the body musculature. The vestibular system represents only one of the acceleration-sensitive receptor systems of the body conveying spatial information. Touch- and pressure-dependent receptors, somatosensory and interoceptive, as well as proprioceptive receptors contribute, along with visual and auditory signals specifying relative motion between self and surround. Control of body movement and orientation is dynamically adapted to the 1G force background of Earth. Exposure to non-1G environments such as in space travel produces a variety of sensory-motor disturbances, and often motion sickness, until adaptation is achieved. Exposure to virtual environments in which body movements are not accompanied by normal patterns of inertial and sensory feedback can also lead to control errors and elicit motion sickness.

2004 · Vol 27 (3) · pp. 377-396
Author(s): Rick Grush

The emulation theory of representation is developed and explored as a framework that can revealingly synthesize a wide variety of representational functions of the brain. The framework is based on constructs from control theory (forward models) and signal processing (Kalman filters). The idea is that in addition to simply engaging with the body and environment, the brain constructs neural circuits that act as models of the body and environment. During overt sensorimotor engagement, these models are driven by efference copies in parallel with the body and environment, in order to provide expectations of the sensory feedback, and to enhance and process sensory information. These models can also be run off-line in order to produce imagery, estimate outcomes of different actions, and evaluate and develop motor plans. The framework is initially developed within the context of motor control, where it has been shown that inner models running in parallel with the body can reduce the effects of feedback delay problems. The same mechanisms can account for motor imagery as the off-line driving of the emulator via efference copies. The framework is extended to account for visual imagery as the off-line driving of an emulator of the motor-visual loop. I also show how such systems can provide for amodal spatial imagery. Perception, including visual perception, results from such models being used to form expectations of, and to interpret, sensory input. I close by briefly outlining other cognitive functions that might also be synthesized within this framework, including reasoning, theory of mind phenomena, and language.
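
As a rough illustration of the machinery the framework draws on, here is a minimal sketch of a forward model (emulator) combined with a Kalman-style correction step, assuming a scalar linear plant; the plant coefficients, noise variances, and control law are illustrative assumptions, not values from the article.

```python
import numpy as np

# Minimal sketch: an emulator (forward model) is driven by the efference
# copy of each motor command in parallel with the plant, and its prediction
# is corrected by noisy sensory feedback via a Kalman gain.
rng = np.random.default_rng(0)

a, b = 0.95, 0.1      # assumed scalar plant: x' = a*x + b*u + process noise
q, r = 0.01, 0.05     # assumed process and measurement noise variances

x = 0.0               # true state of the body/environment
x_hat, p = 0.0, 1.0   # emulator's estimate and its uncertainty

for t in range(50):
    u = 1.0 - x_hat                        # motor command based on the estimate

    # Overt engagement: the real plant responds to the command
    x = a * x + b * u + rng.normal(0.0, np.sqrt(q))
    z = x + rng.normal(0.0, np.sqrt(r))    # noisy sensory feedback

    # Emulator driven in parallel by the efference copy of u
    x_hat = a * x_hat + b * u
    p = a * p * a + q

    # Kalman-style correction with the actual sensory feedback
    k = p / (p + r)
    x_hat += k * (z - x_hat)
    p = (1.0 - k) * p

print(f"true state {x:.3f}, emulator estimate {x_hat:.3f}")
```

Running the same emulator without the correction step, driven only by efference copies, corresponds to the off-line mode the framework uses to account for motor and visual imagery.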


2021 · Vol 7 (1) · pp. a1en
Author(s): Ana D'Arc Martins de Azevedo, Camila Rodrigues Neiva, Edgar Monteiro Chagas Junior, Maria Betânia de Carvalho Fidalgo Arroyo

This article highlights Carimbó as a symbol of traditional culture and of Pará state identity, and as an instrument for working on sensory-motor development according to Wallon's theory. The question addressed is how Carimbó contributes to students' sensory-motor development in Early Childhood Education (toddlers) during Physical Education classes. This is a qualitative case study that used participatory observation and an open interview with one teacher and six students; the data were analysed through triangulation. The results indicate that Carimbó supported the students' sensory-motor development, making it possible to observe progress in the aspects investigated.


Author(s): Christopher J. Rich, Curt C. Braun

Virtual reality (VR) users are frequently limited by motion sickness-like symptoms. One factor that might influence sickness in VR is the level of control one has in a virtual environment. Reason's Sensory Conflict Theory suggests that motion sickness occurs when incompatibilities exist between four sensory inputs. It is possible that control and sensory compatibility are positively related; if so, increasing control in a virtual environment should result in decreased symptomology. To test this, the present study used the Simulator Sickness Questionnaire to measure the symptomology of 163 participants after exposure to a virtual environment. Three levels of control and compatibility were assessed. It was hypothesized that participants with control and compatible sensory information would experience fewer symptoms than participants in either the control/incompatible or no-control/incompatible conditions. Although significant main effects were found for both gender and condition, the findings were the opposite of those hypothesized. Possible explanations for this finding are discussed.


2021 · pp. 117-132
Author(s): Daniele Romano, Angelo Maravita

The ability of humans to manufacture objects and represent physical causality makes them the master species in the use of tools. What is the impact of such a specific skill on the processing of body-related spatial information? To what extent does the skilful manipulation of tools require specific embodiment of the device into one's body representation? The present chapter reviews the effect of tool use on the representation of the body and the space surrounding it, by analysing the cognitive effects of tool use and its neural representations. Studies on animals, healthy humans, and neuropsychological patients suggest that multisensory integration of stimuli far from the body is enhanced when a tool can reach those stimuli. Such spatial remapping indicates that the body schema may adapt to include the device into one's body representation. Notably, tool-use-related changes are not limited to spatial processing but extend to the processing of body-related sensory-motor information. Understanding the cognitive mechanisms underlying tool use and its effects on the representation of the space around us is a paramount challenge for the understanding of body representation, especially considering that modern and increasingly sophisticated technological tools, such as functional prostheses, robotic interfaces, and virtual reality devices, continually reshape the central role of the body in human–environment interactions.


2018 · Vol 120 (5) · pp. 2164-2181
Author(s): Kristin M. Quick, Jessica L. Mischel, Patrick J. Loughlin, Aaron P. Batista

Everyday behaviors require that we interact with the environment, using sensory information in an ongoing manner to guide our actions. Yet, by design, many of the tasks used in primate neurophysiology laboratories can be performed with limited sensory guidance. As a consequence, our knowledge about the neural mechanisms of motor control is largely limited to the feedforward aspects of the motor command. To study the feedback aspects of volitional motor control, we adapted the critical stability task (CST) from the human performance literature (Jex H, McDonnell J, Phatak A. IEEE Trans Hum Factors Electron 7: 138–145, 1966). In the CST, our monkey subjects interact with an inherently unstable (i.e., divergent) virtual system and must generate sensory-guided actions to stabilize it about an equilibrium point. The difficulty of the CST is determined by a single parameter, which allows us to quantitatively establish the limits of performance in the task for different sensory feedback conditions. Two monkeys learned to perform the CST with visual or vibrotactile feedback. Performance was better under visual feedback, as expected, but both monkeys were able to use vibrotactile feedback alone to perform the CST successfully. We also observed changes in behavioral strategy as the task became more challenging. The CST will have value for basic science investigations of the neural basis of sensory-motor integration during ongoing actions, and it may also prove useful for the design and testing of bidirectional brain-computer interface systems. NEW & NOTEWORTHY Currently, most behavioral tasks used in motor neurophysiology studies require primates to make short-duration, stereotyped movements that do not necessitate sensory feedback. To improve our understanding of sensorimotor integration, and to engineer meaningful artificial sensory feedback systems for brain-computer interfaces, it is crucial to have a task that requires sensory feedback for good control. The critical stability task demands that sensory information be used to guide long-duration movements.
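
For readers unfamiliar with the task, the sketch below simulates a first-order divergent plant of the kind used in critical tracking/stability tasks, with difficulty set by a single instability parameter; the discretization, feedback delay, controller gain, and failure bound are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def run_cst(lam, dt=0.01, duration=10.0, delay_steps=20, gain=1.2, noise=0.02):
    """Simulate a proportional controller acting on delayed feedback from a
    divergent first-order plant; return True if the state stays in bounds."""
    rng = np.random.default_rng(1)
    n = int(duration / dt)
    x = 0.01                       # small initial offset from equilibrium
    history = [0.0] * delay_steps  # delayed observations available to the controller
    for _ in range(n):
        u = -gain * history[0]     # act on stale (delayed) feedback
        x += dt * lam * (x + u) + rng.normal(0.0, noise) * dt
        history = history[1:] + [x]
        if abs(x) > 1.0:           # lost control: state left the workspace
            return False
    return True

# Increase lam until control is lost; the largest stabilizable lam is a
# rough analogue of the "critical" instability used to quantify performance
# under a given feedback condition.
for lam in np.arange(1.0, 8.0, 0.5):
    if not run_cst(lam):
        print(f"control lost near lambda = {lam:.1f}")
        break
```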


eLife · 2017 · Vol 6
Author(s): Mohamed Khateb, Jackie Schiller, Yitzhak Schiller

The primary vibrissae motor cortex (vM1) is responsible for generating whisking movements. In parallel, vM1 also sends information directly to the sensory barrel cortex (vS1). In this study, we investigated the effects of vM1 activation on processing of vibrissae sensory information in vS1 of the rat. To dissociate the vibrissae sensory-motor loop, we optogenetically activated vM1 and independently passively stimulated principal vibrissae. Optogenetic activation of vM1 supra-linearly amplified the response of vS1 neurons to passive vibrissa stimulation in all cortical layers measured. Maximal amplification occurred when onset of vM1 optogenetic activation preceded vibrissa stimulation by 20 ms. In addition to amplification, vM1 activation also sharpened angular tuning of vS1 neurons in all cortical layers measured. Our findings indicated that in addition to output motor signals, vM1 also sends preparatory signals to vS1 that serve to amplify and sharpen the response of neurons in the barrel cortex to incoming sensory input signals.


2021 · Vol 12
Author(s): Irene Valori, Phoebe E. McKenna-Plumley, Rena Bayramova, Teresa Farroni

Atypical sensorimotor developmental trajectories greatly contribute to the profound heterogeneity that characterizes Autism Spectrum Disorders (ASD). Individuals with ASD manifest deviations in sensorimotor processing with early markers in the use of sensory information coming from both the external world and the body, as well as motor difficulties. The cascading effect of these impairments on the later development of higher-order abilities (e.g., executive functions and social communication) underlines the need for interventions that focus on the remediation of sensorimotor integration skills. One of the promising technologies for such stimulation is Immersive Virtual Reality (IVR). In particular, head-mounted displays (HMDs) have unique features that fully immerse the user in virtual realities which disintegrate and otherwise manipulate multimodal information. The contribution of each individual sensory input and of multisensory integration to perception and motion can be evaluated and addressed according to a user’s clinical needs. HMDs can therefore be used to create virtual environments aimed at improving people’s sensorimotor functioning, with strong potential for individualization for users. Here we provide a narrative review of the sensorimotor atypicalities evidenced by children and adults with ASD, alongside some specific relevant features of IVR technology. We discuss how individuals with ASD may interact differently with IVR versus real environments on the basis of their specific atypical sensorimotor profiles and describe the unique potential of HMD-delivered immersive virtual environments to this end.


2021
Author(s): Elena Fuehrer, Dimitris Voudouris, Alexandra Lezkan, Knut Drewing, Katja Fiehler

The ability to sample sensory information with our hands is crucial for smooth and efficient interactions with the world. Despite this important role of touch, tactile sensations on a moving hand are perceived as weaker than when presented on the same but stationary hand [1-3]. This phenomenon of tactile suppression has been explained by predictive mechanisms, such as forward models, that estimate future sensory states of the body on the basis of the motor command and suppress the associated predicted sensory feedback [4]. The origins of tactile suppression have sparked a lot of debate, with contemporary accounts claiming that suppression is independent of predictive mechanisms and is instead akin to unspecific gating [5]. Here, we target this debate and provide evidence for sensation-specific tactile suppression due to sensorimotor predictions. Participants stroked with their finger over textured surfaces that caused predictable vibrotactile feedback signals on that finger. Shortly before touching the texture, we applied external vibrotactile probes on the moving finger that either matched or mismatched the frequency generated by the stroking movement. We found stronger suppression of the probes that matched the predicted sensory feedback. These results show that tactile suppression is not limited to unspecific gating but is specifically tuned to the predicted sensory states of a movement.


2020 · Vol 2020 (17) · pp. 2-1-2-6
Author(s): Shih-Wei Sun, Ting-Chen Mou, Pao-Chi Chang

To improve workout efficiency and to provide body-movement suggestions to users in a "smart gym" environment, we propose using a depth camera to capture a user's body parts and mounting multiple inertial sensors on those body parts, building deadlift behavior models with a recurrent neural network structure. The contribution of this paper is threefold: 1) the multimodal sensing signals obtained from multiple devices are fused to generate the deadlift behavior classifiers, 2) the recurrent neural network structure can analyze the information from the synchronized skeletal and inertial sensing data, and 3) a Vaplab dataset is generated for evaluating the deadlift behavior recognition capability of the proposed method.
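
A minimal sketch of the fusion idea described above, assuming frame-synchronized skeletal (depth-camera) and inertial streams that are concatenated per time step and fed to a recurrent classifier; the PyTorch implementation, layer sizes, feature dimensions, and number of deadlift-behavior classes are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class DeadliftRNN(nn.Module):
    """Fuse synchronized skeletal and inertial time series with an LSTM."""

    def __init__(self, skel_dim=75, imu_dim=24, hidden=128, n_classes=5):
        super().__init__()
        self.rnn = nn.LSTM(skel_dim + imu_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, skel, imu):
        # skel: (batch, time, skel_dim), imu: (batch, time, imu_dim),
        # already synchronized frame by frame
        x = torch.cat([skel, imu], dim=-1)   # per-frame feature-level fusion
        _, (h, _) = self.rnn(x)              # final hidden state summarizes the lift
        return self.head(h[-1])              # class scores per sequence

model = DeadliftRNN()
skel = torch.randn(8, 120, 75)   # e.g., 25 joints x 3 coordinates per frame
imu = torch.randn(8, 120, 24)    # e.g., 4 sensors x 6 channels per frame
print(model(skel, imu).shape)    # torch.Size([8, 5])
```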


Sensors · 2021 · Vol 21 (11) · pp. 3771
Author(s): Alexey Kashevnik, Walaa Othman, Igor Ryabchikov, Nikolay Shilov

Meditation practice is mental health training. It helps people to reduce stress and suppress negative thoughts. In this paper, we propose a camera-based meditation evaluation system that helps meditators to improve their performance. We rely on two main criteria to measure focus: breathing characteristics (respiratory rate, breathing rhythmicity and stability) and body movement. We introduce a contactless sensor that measures the respiratory rate from a smartphone camera by detecting the chest keypoint in each frame, using an optical-flow-based algorithm to calculate the displacement between frames, filtering and de-noising the chest movement signal, and counting the real peaks in this signal. We also present an approach to detecting the movement of different body parts (head, thorax, shoulders, elbows, wrists, stomach and knees). We collected a non-annotated dataset of ninety meditation practice videos and an annotated dataset of eight videos. The non-annotated dataset was categorized into beginner and professional meditators and was used for developing the algorithm and tuning its parameters. The annotated dataset was used for evaluation and showed that human activity during meditation practice can be correctly estimated by the presented approach, with a mean absolute error for the respiratory rate of around 1.75 BPM, which can be considered tolerable for the meditation application.
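
A minimal sketch of the final stages of the respiratory-rate pipeline described above, assuming a per-frame chest-displacement signal is already available (e.g., from optical flow at the detected chest keypoint); the band-pass cut-offs and peak-selection thresholds are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def respiratory_rate(chest_displacement, fps=30.0):
    """Estimate breaths per minute from a per-frame chest-displacement signal."""
    # Band-pass roughly around plausible breathing frequencies (0.1-0.7 Hz)
    b, a = butter(2, [0.1, 0.7], btype="bandpass", fs=fps)
    smooth = filtfilt(b, a, chest_displacement)

    # Count real peaks: one peak per breath, with minimum spacing and prominence
    peaks, _ = find_peaks(smooth,
                          distance=int(fps / 0.7),
                          prominence=0.25 * np.std(smooth))
    duration_min = len(chest_displacement) / fps / 60.0
    return len(peaks) / duration_min

# Synthetic one-minute example: ~15 breaths per minute plus noise
t = np.arange(0, 60, 1 / 30.0)
signal = np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.random.randn(len(t))
print(f"estimated rate: {respiratory_rate(signal):.1f} BPM")
```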

