visual delay
Recently Published Documents


TOTAL DOCUMENTS: 21 (five years: 2)
H-INDEX: 5 (five years: 0)

Author(s): Raul Rodriguez, Benjamin Thomas Crane

Heading direction is perceived based on visual and inertial cues. The current study examined the effect of their relative timing on the ability of offset visual headings to influence inertial heading perception. Seven healthy human subjects experienced 2 s of translation along a heading of 0°, ±35°, ±70°, ±105°, or ±140°. These inertial headings were paired with visual headings of 2 s duration presented at relative offsets of 0°, ±30°, ±60°, ±90°, or ±120°. The visual stimuli were also presented at 17 temporal delays ranging from -500 ms (visual lead) to 2,000 ms (visual delay) relative to the inertial stimulus. After each stimulus, subjects reported the direction of the inertial stimulus using a dial. The bias of the perceived inertial heading towards the visual heading was robust for temporal misalignments within ±250 ms: across subjects, it was 8.0 ± 0.5° with a 30° offset, 12.2 ± 0.5° with a 60° offset, 11.7 ± 0.6° with a 90° offset, and 9.8 ± 0.7° with a 120° offset (mean bias towards visual ± SE). The bias was much diminished with temporal misalignments of ±500 ms, and there was no longer any visual influence on the inertial heading when the visual stimulus was delayed by 1,000 ms or more. Although the amount of bias varied between subjects, the effect of delay was similar.
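A minimal sketch (not the authors' analysis) of how this delay-dependent bias could be summarized: the bias toward the visual heading is modeled as a fraction of the visual offset, attenuated by a temporal window that decays with visual-inertial misalignment. The Gaussian window and all parameter values below are illustrative assumptions, in Python:

    # Hypothetical model: visual bias on perceived inertial heading as a
    # function of visual-inertial delay. All parameters are illustrative.
    import numpy as np

    def visual_bias(offset_deg, delay_ms, peak_gain=0.2,
                    center_ms=0.0, width_ms=350.0):
        """Bias (deg) toward the visual heading: a fraction of the visual
        offset, scaled by a Gaussian temporal-alignment window."""
        window = np.exp(-((delay_ms - center_ms) ** 2) / (2 * width_ms ** 2))
        return peak_gain * offset_deg * window

    for d in [-500, -250, 0, 250, 500, 1000, 2000]:  # ms, visual re inertial
        print(f"delay {d:5d} ms -> bias {visual_bias(60, d):4.1f} deg (60 deg offset)")

Consistent with the reported pattern, this window yields a near-maximal bias within ±250 ms, a much smaller bias at ±500 ms, and essentially none at delays of 1,000 ms or more.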



Author(s): Jian Li, Shaokai Xu, Yanmin Liu, Xiangdong Liu, Zhen Li, ...


2019
Author(s): Marte Roel Lesur, Marieke Lieve Weijs, Colin Simon, Oliver Alan Kannape, Bigna Lenggenhager

The loss of body ownership, the feeling that one's body and its limbs no longer belong to oneself, is a severe clinical condition that has proven difficult to study directly. Here we propose a novel paradigm that uses mixed reality to interfere with natural embodiment through temporally conflicting sensory signals from one's own hand. In Experiment 1 we investigated how such a mismatch affects phenomenological and physiological aspects of embodiment, and identified its most important dimensions using a principal component analysis. The results suggest that such a mismatch induces a strong reduction in embodiment accompanied by an increase in feelings of disownership and deafference, which was, however, not reflected in physiological changes. In Experiment 2 we refined the paradigm to measure perceptual thresholds for temporal mismatches and compared how different multimodal, mismatching information alters the sense of embodiment. The results showed that while visual delay decreased embodiment both during active movement and during passive touch, the effect was stronger for the former. Our results extend previous findings in demonstrating that a sense of disembodiment can be induced through controlled multimodal mismatches about one's own body, and more so during active movement than during passive touch. Based on the ecologically more valid protocol we propose here, we argue that such a sense of disembodiment may fundamentally differ from the disownership sensations discussed in the rubber hand illusion literature, and we emphasize its clinical relevance. This might importantly advance the current debate on the relative contribution of different modalities to our sense of body and its plasticity.
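The perceptual thresholds measured in Experiment 2 suggest a standard psychometric-function analysis. The sketch below shows one common way such a visuo-tactile delay-detection threshold could be estimated, by fitting a cumulative Gaussian to the proportion of trials on which a delay was reported; the delay levels, response proportions, and the 75% criterion are placeholder assumptions, not data from the study:

    # Sketch: estimate a delay-detection threshold by fitting a cumulative
    # Gaussian psychometric function. All numbers are placeholders.
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    delays_ms = np.array([0, 50, 100, 150, 200, 300, 400])            # tested delays
    p_report  = np.array([0.05, 0.10, 0.30, 0.55, 0.80, 0.95, 1.00])  # "delayed" responses

    def psychometric(x, mu, sigma):
        # Cumulative Gaussian: mu = 50% point, sigma = slope parameter
        return norm.cdf(x, loc=mu, scale=sigma)

    (mu, sigma), _ = curve_fit(psychometric, delays_ms, p_report, p0=[150, 50])
    print(f"50% point: {mu:.0f} ms")
    print(f"75% detection threshold: {norm.ppf(0.75, loc=mu, scale=sigma):.0f} ms")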



2019
Vol. 121 (4), pp. 1398-1409
Author(s): Vonne van Polanen, Robert Tibold, Atsuo Nuruki, Marco Davare

Lifting an object requires precise scaling of fingertip forces based on a prediction of object weight. At object contact, a series of tactile and visual events arise that need to be rapidly processed online to fine-tune the planned motor commands for lifting the object. The brain mechanisms underlying multisensory integration serially at transient sensorimotor events, a general feature of actions requiring hand-object interactions, are not yet understood. In this study we tested the relative weighting between haptic and visual signals when they are integrated online into the motor command. We used a new virtual reality setup to desynchronize visual feedback from haptics, which allowed us to probe the relative contribution of haptics and vision in driving participants’ movements when they grasped virtual objects simulated by two force-feedback robots. We found that visual delay changed the profile of fingertip force generation and led participants to perceive objects as heavier than when lifts were performed without visual delay. We further modeled the effect of vision on motor output by manipulating the extent to which delayed visual events could bias the force profile, which allowed us to determine the specific weighting the brain assigns to haptics and vision. Our results show for the first time how visuo-haptic integration is processed at discrete sensorimotor events for controlling object-lifting dynamics and further highlight the organization of multisensory signals online for controlling action and perception.

NEW & NOTEWORTHY: Dexterous hand movements require rapid integration of information from different senses, in particular touch and vision, at different key time points as movement unfolds. The relative weighting between vision and haptics for object manipulation is unknown. We used object lifting in virtual reality to desynchronize visual and haptic feedback and determine their relative weightings. Our findings shed light on how rapid multisensory integration is processed over a series of discrete sensorimotor control points.
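The relative weighting the authors investigate is often formalized, in the multisensory-integration literature, as reliability-weighted (maximum-likelihood) cue combination, in which each cue is weighted by its inverse variance. A minimal sketch of that standard model follows; it is not the paper's specific model, and the variances are assumed for illustration:

    # Reliability-weighted cue combination (standard MLE model, not the
    # authors' fitted model). Variances below are assumed for illustration.
    def combine(haptic_est, visual_est, var_h, var_v):
        """Weight each cue estimate by its inverse variance (reliability)."""
        w_h = (1 / var_h) / (1 / var_h + 1 / var_v)
        w_v = 1 - w_h
        combined = w_h * haptic_est + w_v * visual_est
        combined_var = 1 / (1 / var_h + 1 / var_v)
        return combined, combined_var

    # Example: haptics signals object contact at t = 0 ms, delayed vision at
    # t = 100 ms; the more reliable haptic cue pulls the estimate toward 0.
    est, var = combine(0.0, 100.0, var_h=20.0 ** 2, var_v=40.0 ** 2)
    print(f"combined contact time: {est:.0f} ms (sd {var ** 0.5:.0f} ms)")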



2018
Author(s): Stephanie Hu, Raz Leib, Ilana Nisky

Our sensorimotor system estimates stiffness to form stiffness perception, such as for choosing a ripe fruit, and to generate actions, such as adjusting grip force to avoid slippage of a scalpel during surgery. We examined how temporal manipulation of haptic and visual feedback affects stiffness perception and grip force adjustment during a stiffness discrimination task. We used delayed force feedback and delayed visual feedback to break the natural relations between these modalities while participants tried to choose the harder spring between pairs of springs. We found that visual delay caused participants to slightly overestimate stiffness, while force feedback delay had a mixed effect on perception, causing underestimation of stiffness in some participants and overestimation in others. Interestingly, and in contrast to previous findings without vision, we found that participants increased the magnitude of their applied grip force in all conditions. We propose a model suggesting that this increase resulted from coupling the grip force adjustment to the proprioceptive hand position, the only modality we could not delay. Our findings shed light on how the sensorimotor system combines information from different sensory modalities for perception and action. These results are important for the design of improved teleoperation systems, which suffer from unavoidable delays.
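The proposed coupling can be sketched as a simple feedforward rule: grip force tracks a load estimate computed from the undelayed proprioceptive hand position compressing a virtual spring, so delaying vision or force feedback leaves the grip profile tied to the hand. The stiffness, gain, and safety margin below are hypothetical values, not the authors' model parameters:

    # Hypothetical sketch of grip force coupled to proprioceptive hand
    # position: load is estimated from spring compression, and grip force
    # keeps a safety margin above it. All parameters are assumptions.
    import numpy as np

    def grip_force(compression_m, spring_k=200.0, gain=1.5, margin_n=1.0):
        """Grip force (N) from an internal load estimate: load = k * x."""
        load_estimate = spring_k * np.maximum(compression_m, 0.0)
        return gain * load_estimate + margin_n

    for x_mm in [0, 5, 10, 20]:  # spring compression in millimeters
        print(f"{x_mm:2d} mm -> grip {grip_force(x_mm / 1000):5.2f} N")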



2014
Vol. 14 (10), p. 1138
Author(s): P. Jaekl, D. Tadin


2013
Author(s): Bern Stegeman, Herman J. Damveld, Olaf Stroosma, Marinus van Paassen, Max Mulder


2012
Vol. 85 (3), p. 371
Author(s): L. Wombacher, N. Bischof, O. Christ

