Learned rather than online relative weighting of visual-proprioceptive sensory cues

2018 ◽  
Vol 119 (5) ◽  
pp. 1981-1992 ◽  
Author(s):  
Laura Mikula ◽  
Valérie Gaveau ◽  
Laure Pisella ◽  
Aarlenne Z. Khan ◽  
Gunnar Blohm

When reaching to an object, information about the target location as well as the initial hand position is required to program the motor plan for the arm. The initial hand position can be determined by proprioceptive information as well as visual information, if available. Bayes-optimal integration posits that we utilize all information available, with greater weighting on the sense that is more reliable, thus generally weighting visual information more than the usually less reliable proprioceptive information. The criterion by which information is weighted has not been explicitly investigated; it has been assumed that the weights are based on task- and effector-dependent sensory reliability requiring an explicit neuronal representation of variability. However, the weights could also be determined implicitly through learned modality-specific integration weights and not based on effector-dependent reliability. While the former hypothesis predicts different proprioceptive weights for left and right hands, e.g., due to different reliabilities of dominant vs. nondominant hand proprioception, we would expect the same integration weights if the latter hypothesis were true. We found that the proprioceptive weights for the left and right hands were extremely consistent regardless of differences in sensory variability for the two hands as measured in two separate complementary tasks. Thus we propose that proprioceptive weights during reaching are learned across both hands, with high interindividual range but independent of each hand’s specific proprioceptive variability.

NEW & NOTEWORTHY How visual and proprioceptive information about the hand are integrated to plan a reaching movement is still debated. The goal of this study was to clarify how the weights assigned to vision and proprioception during multisensory integration are determined. We found evidence that the integration weights are modality specific rather than based on the sensory reliabilities of the effectors.
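
As a rough illustration of the Bayes-optimal rule described above, the sketch below implements standard inverse-variance (reliability) weighting of a visual and a proprioceptive estimate of hand position. All variable names and numbers are illustrative and are not taken from the study.

```python
import numpy as np

def optimal_weights(sigma_vis, sigma_prop):
    """Inverse-variance (reliability) weights for two independent cues."""
    r_vis, r_prop = 1.0 / sigma_vis**2, 1.0 / sigma_prop**2
    w_vis = r_vis / (r_vis + r_prop)
    return w_vis, 1.0 - w_vis

def fuse(x_vis, x_prop, sigma_vis, sigma_prop):
    """Reliability-weighted estimate of hand position and its predicted SD."""
    w_vis, w_prop = optimal_weights(sigma_vis, sigma_prop)
    x_hat = w_vis * x_vis + w_prop * x_prop
    sigma_hat = np.sqrt(1.0 / (1.0 / sigma_vis**2 + 1.0 / sigma_prop**2))
    return x_hat, sigma_hat

# Example: vision has half the SD of proprioception, so it receives weight 0.8
print(fuse(x_vis=0.0, x_prop=1.0, sigma_vis=0.5, sigma_prop=1.0))  # (0.2, ~0.45)
```

Under the learned-weights account favoured by the authors, w_vis would instead be a fixed, modality-specific value rather than being recomputed from each hand's current sensory variability.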

2012 ◽  
Vol 25 (0) ◽  
pp. 58
Author(s):  
Katrina Quinn ◽  
Francia Acosta-Saltos ◽  
Jan W. de Fockert ◽  
Charles Spence ◽  
Andrew J. Bremner

Information about where our hands are arises from different sensory modalities, chiefly proprioception and vision. These inputs differ in variability from situation to situation (or task to task). According to the idea of ‘optimal integration’, the information provided by different sources is combined in proportion to their relative reliabilities, thus maximizing the reliability of the combined estimate. It is uncertain whether optimal integration of multisensory contributions to limb position requires executive resources. If so, then it should be possible to observe effects of secondary task performance and/or working memory load (WML) on the relative weighting of the senses under conditions of crossmodal sensory conflict. Alternatively, an integrated signal may be affected by upstream influences of WML or a secondary task on the reliabilities of the individual sensory inputs. We examined these possibilities in two experiments examining the effects of WML on reaching tasks in which bisensory visual-proprioceptive (Exp. 1) and unisensory proprioceptive (Exp. 2) cues to hand position were provided. WML increased visual capture under conditions of visual-proprioceptive conflict, regardless of the direction of the visual-proprioceptive conflict and the degree of load imposed. This indicates that task-switching (rather than WML per se) leads to an increased reliance on visual information regardless of its task-specific reliability (Exp. 1). This could not be explained by an increase in the variability of proprioception under secondary working memory task conditions (Exp. 2). We conclude that executive resources are involved in the relative weighting of visual and proprioceptive cues to hand position.
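
The 'visual capture' measure discussed above can be illustrated with a small sketch: the empirical visual weight is the proportion of the imposed visual-proprioceptive conflict by which reach endpoints shift toward the seen hand. The data values and names below are hypothetical, not from the experiments.

```python
import numpy as np

def visual_weight(endpoints, prop_pos, vis_pos):
    """Empirical visual weight: how far reach endpoints shift from the felt
    (proprioceptive) hand position toward the conflicting visual position.
    0 = pure proprioception, 1 = full visual capture."""
    endpoints = np.asarray(endpoints, dtype=float)
    return float(np.mean((endpoints - prop_pos) / (vis_pos - prop_pos)))

# Illustration: the seen hand is displaced +2 cm relative to the felt hand
baseline  = visual_weight([1.2, 1.4, 1.3, 1.1], prop_pos=0.0, vis_pos=2.0)  # ~0.62
with_load = visual_weight([1.6, 1.7, 1.5, 1.8], prop_pos=0.0, vis_pos=2.0)  # ~0.82
print(baseline, with_load)  # a larger weight under load = increased visual capture
```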


Perception ◽  
1997 ◽  
Vol 26 (1_suppl) ◽  
pp. 127-127
Author(s):  
M Desmurget ◽  
Y Rossetti ◽  
C Prablanc

The question of whether movement accuracy is better in the full open-loop condition (FOL, hand never visible) than in the static closed-loop condition (SCL, hand visible only prior to movement onset) remains widely debated. To investigate this controversial question, we studied conditions in which the visual information available to the subject prior to movement onset was strictly controlled. The results of our investigation showed that the accuracy improvement observed when human subjects were allowed to see their hand, in the peripheral visual field, prior to movement: (1) concerned only the variable errors; (2) did not depend on the simultaneous vision of the hand and target (hand and target viewed simultaneously vs sequentially); (3) remained significant when pointing to proprioceptive targets; and (4) was not suppressed when the visual information was temporally (visual presentation for less than 300 ms) or spatially (vision of only the index fingertip) restricted. In addition, dissociating vision and proprioception with wedge prisms showed that a weighted hand position was used to program hand trajectory. When considered together, these results suggest that: (i) knowledge of the initial upper limb configuration or position is necessary to plan accurately goal-directed movements; (ii) static proprioceptive receptors are partially ineffective in providing an accurate estimate of the limb posture and/or hand location relative to the body; and (iii) visual and proprioceptive information is not used in an exclusive way, but combined to furnish an accurate representation of the state of the effector prior to movement.
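
Finding (1) hinges on the distinction between constant and variable pointing errors. Below is a minimal sketch of how the two are typically computed from 2-D endpoints; the numbers are made up and are not the study's data.

```python
import numpy as np

def pointing_errors(endpoints, target):
    """Constant error = mean 2-D bias of the endpoints from the target;
    variable error = dispersion of the endpoints about their own mean."""
    endpoints = np.asarray(endpoints, dtype=float)          # shape (n_trials, 2)
    constant = endpoints.mean(axis=0) - np.asarray(target, dtype=float)
    variable = np.sqrt(((endpoints - endpoints.mean(axis=0)) ** 2).sum(axis=1).mean())
    return constant, variable

# Illustration (cm): similar bias in both conditions, but tighter scatter in SCL
scl = [[1.0, 0.5], [1.2, 0.4], [0.9, 0.6], [1.1, 0.5]]
fol = [[1.0, 0.5], [1.8, -0.2], [0.2, 1.3], [1.0, 0.4]]
print(pointing_errors(scl, target=(0, 0)))
print(pointing_errors(fol, target=(0, 0)))
```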


2021 ◽  
Vol 15 ◽  
Author(s):  
Natasha Ratcliffe ◽  
Katie Greenfield ◽  
Danielle Ropar ◽  
Ellen M. Howard ◽  
Roger Newport

Forming an accurate representation of the body relies on the integration of information from multiple sensory inputs. Both vision and proprioception are important for body localization. Whilst adults have been shown to integrate these sources in an optimal fashion, few studies have investigated how children integrate visual and proprioceptive information when localizing the body. The current study used a mediated reality device called MIRAGE to explore how the brain weighs visual and proprioceptive information in a hand localization task across early childhood. Sixty-four children aged 4–11 years estimated the position of their index finger after viewing congruent or incongruent visuo-proprioceptive information regarding hand position. A developmental trajectory analysis was carried out to explore the effect of age on condition. An age effect was found only in the incongruent condition, in which mislocalization of the hand toward the visual representation increased with age. Estimates by younger children were closer to the true location of the hand than those by older children, indicating less weighting of visual information. Regression analyses showed that localization errors in the incongruent seen condition could not be explained by proprioceptive accuracy or by general attention or social differences. This suggests that the weighting of visual and proprioceptive information continues to be optimized throughout development, with the bias toward visual information increasing with age.
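
A developmental trajectory analysis of this kind can be sketched as a per-condition regression of localization error on age. The data below are invented purely to illustrate the expected pattern (a positive slope only in the incongruent condition); they are not the study's results.

```python
import numpy as np

# Hypothetical data: age (years) and mislocalization toward the seen hand (cm)
age             = np.array([4.5, 5.2, 6.1, 7.0, 7.8, 8.6, 9.4, 10.3, 11.0])
err_incongruent = np.array([0.8, 1.1, 1.4, 1.9, 2.2, 2.6, 2.9, 3.3, 3.6])
err_congruent   = np.array([0.5, 0.6, 0.4, 0.6, 0.5, 0.7, 0.5, 0.6, 0.5])

# Ordinary least-squares slope of error on age, fitted separately per condition
for name, err in [("incongruent", err_incongruent), ("congruent", err_congruent)]:
    slope, intercept = np.polyfit(age, err, 1)
    print(f"{name}: {slope:.2f} cm/year (intercept {intercept:.2f} cm)")
# A positive slope only in the incongruent condition would mirror the
# age-related increase in visual weighting described above.
```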


1995 ◽  
Vol 74 (1) ◽  
pp. 457-463 ◽  
Author(s):  
Y. Rossetti ◽  
M. Desmurget ◽  
C. Prablanc

1. Subjects were asked to point toward visual targets without visual reafference from the moving hand in two conditions. In both conditions the pointing fingertip was viewed only before movement onset. 2. In one condition, the pointing fingertip was viewed through prisms that created a visual displacement without altering the view of the target. In another experimental condition, vision of the fingertip was not displaced. Comparison of these two conditions showed that virtually shifting finger position before movement through prisms induced a pointing bias in the direction opposite to the shift. The extent of this pointing bias was about one third of the prismatic shift applied to the fingertip. 3. Analysis of movement initial direction demonstrated that it was also less deviated than predicted from the prismatic shift. In addition, the reaction time and movement time of the reaching movement were increased. 4. This result is interpreted in the framework of the vectorial coding of reaching movement. Proprioception and vision provide two possible sources of information about initial hand position, i.e., the origin of the movement vector. The question remains as to how these two sources of information interact in specifying initial hand position when they are simultaneously available. 5. Our results are thus discussed with respect to a visual-to-visual movement vector hypothesis and a proprioceptive-to-visual vector hypothesis. It is argued that the origin of the putative movement vector is encoded by weighted fusion of the visual and the proprioceptive information about hand initial position.
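
Point 2 above (a bias of about one third of the prismatic shift, opposite in direction) is what the weighted-fusion account in point 5 predicts if vision contributes roughly one third of the estimate of the movement-vector origin. A hedged sketch of that inference, with purely illustrative numbers:

```python
def visual_weight_from_prism(bias_deg, prism_shift_deg):
    """Weighted-fusion account: if the seen fingertip is displaced by the prism
    by an amount 'shift', the fused starting-position estimate moves by
    w_vis * shift, and the executed movement lands roughly w_vis * shift away
    from the target, opposite the shift. Hence w_vis ~= |bias| / |shift|."""
    return abs(bias_deg) / abs(prism_shift_deg)

# Numbers chosen to be consistent with a bias of about one third of the shift
print(visual_weight_from_prism(bias_deg=-3.0, prism_shift_deg=9.0))  # ~0.33
```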


2020 ◽  
Vol 7 (8) ◽  
pp. 192056
Author(s):  
Nienke B. Debats ◽  
Herbert Heuer

Successful computer use requires the operator to link the movement of the cursor to that of his or her hand. Previous studies suggest that the brain establishes this perceptual link through multisensory integration, whereby the causality evidence that drives the integration is provided by the correlated hand and cursor movement trajectories. Here, we explored the temporal window during which this causality evidence is effective. We used a basic cursor-control task, in which participants performed out-and-back reaching movements with their hand on a digitizer tablet. A corresponding cursor movement could be shown on a monitor, yet slightly rotated by an angle that varied from trial to trial. Upon completion of the backward movement, participants judged the endpoint of the outward hand or cursor movement. The mutually biased judgements that typically result reflect the integration of the proprioceptive information on hand endpoint with the visual information on cursor endpoint. We here manipulated the time period during which the cursor was visible, thereby selectively providing causality evidence either before or after sensory information regarding the to-be-judged movement endpoint was available. Specifically, the cursor was visible either during the outward or backward hand movement (conditions Out and Back, respectively). Our data revealed reduced integration in the condition Back compared with the condition Out, suggesting that causality evidence available before the to-be-judged movement endpoint is more powerful than later evidence in determining how strongly the brain integrates the endpoint information. This finding further suggests that sensory integration is not delayed until a judgement is requested.
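
Integration strength in this kind of cursor-control task is commonly quantified by how far endpoint judgements of each modality are pulled toward the other. The sketch below, with hypothetical numbers, shows one such index; a smaller value in the Back condition would correspond to the reduced integration reported above.

```python
import numpy as np

def bias_toward(judged, own, other):
    """Proportional shift of endpoint judgements toward the other modality."""
    return float(np.mean((np.asarray(judged, dtype=float) - own) / (other - own)))

# Illustration: hand endpoint at 0 deg, cursor endpoint rotated to +10 deg.
# Mutual biases: hand judged toward the cursor, cursor judged toward the hand.
hand_bias   = bias_toward([3.9, 4.2, 4.0, 4.3], own=0.0,  other=10.0)  # ~0.41
cursor_bias = bias_toward([7.1, 6.8, 7.0, 6.9], own=10.0, other=0.0)   # ~0.31
integration_index = hand_bias + cursor_bias  # 1.0 would indicate complete fusion
print(hand_bias, cursor_bias, integration_index)
```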


2020 ◽  
Vol 6 (2) ◽  
pp. eaay6036 ◽  
Author(s):  
R. C. Feord ◽  
M. E. Sumner ◽  
S. Pusdekar ◽  
L. Kalra ◽  
P. T. Gonzalez-Bellido ◽  
...  

The camera-type eyes of vertebrates and cephalopods exhibit remarkable convergence, but it is currently unknown whether the mechanisms for visual information processing in these brains, which exhibit wildly disparate architecture, are also shared. To investigate stereopsis in a cephalopod species, we affixed “anaglyph” glasses to cuttlefish and used a three-dimensional perception paradigm. We show that (i) cuttlefish have also evolved stereopsis (i.e., the ability to extract depth information from the disparity between left and right visual fields); (ii) when stereopsis information is intact, the time and distance covered before striking at a target are shorter; (iii) stereopsis in cuttlefish works differently to vertebrates, as cuttlefish can extract stereopsis cues from anticorrelated stimuli. These findings demonstrate that although there is convergent evolution in depth computation, cuttlefish stereopsis is likely afforded by a different algorithm than in humans, and not just a different implementation.
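
For readers unfamiliar with the term, the parenthetical definition of stereopsis corresponds to the textbook disparity-depth relation Z = fB/d. The toy function below only illustrates that geometry; it says nothing about the algorithm cuttlefish or humans actually use, and the focal-length and baseline values are arbitrary.

```python
def depth_from_disparity(focal_length_mm, baseline_mm, disparity_mm):
    """Pinhole-camera stereo geometry: depth Z = f * B / d, with focal length f,
    inter-eye baseline B, and disparity d between corresponding image points."""
    return focal_length_mm * baseline_mm / disparity_mm

# Arbitrary example values: a larger disparity implies a nearer object
print(depth_from_disparity(focal_length_mm=8.0, baseline_mm=65.0, disparity_mm=0.5))  # 1040.0 mm
print(depth_from_disparity(focal_length_mm=8.0, baseline_mm=65.0, disparity_mm=1.0))  # 520.0 mm
```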


2001 ◽  
Vol 31 (5) ◽  
pp. 915-922 ◽  
Author(s):  
S. KÉRI ◽  
O. KELEMEN ◽  
G. BENEDEK ◽  
Z. JANKA

Background. The aim of this study was to assess visual information processing and cognitive functions in unaffected siblings of patients with schizophrenia or bipolar disorder, and in control subjects with a negative family history. Methods. The siblings of patients with schizophrenia (N = 25), bipolar disorder (N = 20) and the control subjects (N = 20) were matched for age, education, IQ, and psychosocial functioning, as indexed by the Global Assessment of Functioning scale. Visual information processing was measured using two visual backward masking (VBM) tests (target location and target identification). The evaluation of higher cognitive functions included spatial and verbal working memory, Wisconsin Card Sorting Test, letter fluency, short/long delay verbal recall and recognition. Results. The relatives of schizophrenia patients were impaired in the VBM procedure, more pronouncedly at short interstimulus intervals (14, 28, 42 ms) and in the target location task. Marked dysfunctions were also found in the spatial working memory task and in the long delay verbal recall test. In contrast, the siblings of patients with bipolar disorder exhibited spared performance, with the exception of a deficit in the long delay recall task. Conclusions. Dysfunctions of sensory-perceptual analysis (VBM) and working memory for spatial information distinguished the siblings of schizophrenia patients from the siblings of individuals with bipolar disorder. Verbal recall deficit was present in both groups, suggesting a common impairment of the fronto-hippocampal system.
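
For orientation, a visual-backward-masking trial presents a brief target followed, after a variable interstimulus interval (ISI), by a mask; performance typically drops at short ISIs. The schematic below is generic, with only the ISI values (14, 28, 42 ms) taken from the abstract; all other durations are placeholders.

```python
# Generic timeline of a visual-backward-masking trial (illustrative values only)
def vbm_trial(isi_ms, target_ms=14, mask_ms=100, fixation_ms=500):
    return [("fixation", fixation_ms), ("target", target_ms),
            ("blank ISI", isi_ms), ("mask", mask_ms)]

for isi in (14, 28, 42):
    print(vbm_trial(isi))
```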


2021 ◽  
Author(s):  
Daniel Jenkins

Multisensory integration describes the cognitive processes by which information from various perceptual domains is combined to create coherent percepts. For consciously aware perception, multisensory integration can be inferred when information in one perceptual domain influences subjective experience in another. Yet the relationship between integration and awareness is not well understood. One current question is whether multisensory integration can occur in the absence of perceptual awareness. Because there is no subjective experience for unconscious perception, researchers have had to develop novel tasks to infer integration indirectly. For instance, Palmer and Ramsey (2012) presented auditory recordings of spoken syllables alongside videos of faces speaking either the same or different syllables, while masking the videos to prevent visual awareness. The conjunction of matching voices and faces predicted the location of a subsequent Gabor grating (target) on each trial. Participants indicated the location/orientation of the target more accurately when it appeared in the cued location (80% chance); thus the authors inferred that auditory and visual speech events were integrated in the absence of visual awareness. In this thesis, I investigated whether these findings generalise to the integration of auditory and visual expressions of emotion. In Experiment 1, I presented spatially informative cues in which congruent facial and vocal emotional expressions predicted the target location, with and without visual masking. I found no evidence of spatial cueing in either awareness condition. To investigate the lack of spatial cueing, in Experiment 2, I repeated the task with aware participants only, and had half of those participants explicitly report the emotional prosody. A significant spatial-cueing effect was found only when participants reported emotional prosody, suggesting that audiovisual congruence can cue spatial attention during aware perception. It remains unclear whether audiovisual congruence can cue spatial attention without awareness, and whether such effects genuinely imply multisensory integration.
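
The spatial-cueing effect referred to here is simply the performance advantage for targets appearing at the location predicted by audiovisual congruence. A toy computation with invented per-participant accuracies:

```python
import numpy as np

def cueing_effect(accuracy_cued, accuracy_uncued):
    """Spatial-cueing effect: mean accuracy advantage for targets at the
    location predicted by audiovisual congruence (hypothetical data)."""
    return float(np.mean(accuracy_cued) - np.mean(accuracy_uncued))

acc_cued   = [0.84, 0.79, 0.88, 0.81]  # proportion correct, cued location
acc_uncued = [0.76, 0.75, 0.80, 0.78]  # proportion correct, uncued location
print(cueing_effect(acc_cued, acc_uncued))  # a positive value = cueing by congruence
```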


2006 ◽  
Vol 96 (1) ◽  
pp. 352-362 ◽  
Author(s):  
Sabine M. Beurze ◽  
Stan Van Pelt ◽  
W. Pieter Medendorp

At some stage in the process of a sensorimotor transformation for a reaching movement, information about the current position of the hand and information about the location of the target must be encoded in the same frame of reference to compute the hand-to-target difference vector. Two main hypotheses have been proposed regarding this reference frame: an eye-centered and a body-centered frame. Here we evaluated these hypotheses using the pointing errors that subjects made when planning and executing arm movements to memorized targets starting from various initial hand positions while keeping gaze fixed in various directions. One group of subjects (n = 10) was tested without visual information about hand position during movement planning (unseen-hand condition); another group (n = 8) was tested with hand and target position simultaneously visible before movement onset (seen-hand condition). We found that both initial hand position and gaze fixation direction had a significant effect on the magnitude and direction of the pointing error. Errors were significantly smaller in the seen-hand condition. For both conditions, though, a reference frame analysis showed that the errors arose at an eye- or hand-centered stage or both, but not at a body-centered stage. As a common reference frame is required to specify a movement vector, these results suggest that an eye-centered mechanism is involved in integrating target and hand position in programming reaching movements. We discuss how simple gain elements modulating the eye-centered target and hand-position signals can account for these results.
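
As a sketch of the difference-vector computation and of the gain-modulation account mentioned at the end, the code below expresses target and hand positions relative to gaze (an eye-centered frame), scales each signal by a gain, and subtracts; gains different from 1 produce errors that depend on gaze direction and initial hand position. All values and the specific parameterization are illustrative, not the authors' model.

```python
import numpy as np

def movement_vector_eye_centered(target_xy, hand_xy, gaze_xy,
                                 gain_target=1.0, gain_hand=1.0):
    """Express target and hand relative to gaze, scale each by a gain,
    and subtract to obtain the hand-to-target difference vector."""
    target_eye = gain_target * (np.asarray(target_xy, float) - np.asarray(gaze_xy, float))
    hand_eye   = gain_hand * (np.asarray(hand_xy, float) - np.asarray(gaze_xy, float))
    return target_eye - hand_eye

# Veridical gains: the planned vector is simply target - hand
print(movement_vector_eye_centered((10, 5), (2, 0), gaze_xy=(4, 0)))                   # [8. 5.]
# Underweighted hand signal: the error now depends on where the hand is relative to gaze
print(movement_vector_eye_centered((10, 5), (2, 0), gaze_xy=(4, 0), gain_hand=0.8))    # [7.6 5. ]
```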

