Visuo-proprioceptive integration and recalibration with multiple visual stimuli

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Nienke B. Debats ◽  
Herbert Heuer ◽  
Christoph Kayser

Abstract To organize the plethora of sensory signals from our environment into a coherent percept, our brain relies on the processes of multisensory integration and sensory recalibration. We here asked how visuo-proprioceptive integration and recalibration are shaped by the presence of more than one visual stimulus, hence paving the way to study multisensory perception under more naturalistic settings with multiple signals per sensory modality. We used a cursor-control task in which proprioceptive information on the endpoint of a reaching movement was complemented by two visual stimuli providing additional information on the movement endpoint. The visual stimuli were briefly shown, one synchronously with the hand reaching the movement endpoint, the other delayed. In Experiment 1, the judgments of hand movement endpoint revealed integration and recalibration biases oriented towards the position of the synchronous stimulus and away from the delayed one. In Experiment 2 we contrasted two alternative accounts: that only the temporally more proximal visual stimulus enters integration similar to a winner-takes-all process, or that the influences of both stimuli superpose. The proprioceptive biases revealed that integration—and likely also recalibration—are shaped by the superposed contributions of multiple stimuli rather than by only the most powerful individual one.
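
The contrast between the two accounts can be made concrete with a toy reliability-weighted fusion model. This is an illustrative sketch only: the positions and weights below are hypothetical, not the authors' fitted values, and temporal proximity is simply assumed to scale a stimulus's weight.

```python
import numpy as np

def fuse(positions, weights):
    """Reliability-weighted average of 1-D position estimates."""
    w = np.asarray(weights, float)
    x = np.asarray(positions, float)
    return float(np.sum(w * x) / np.sum(w))

# Hand (proprioception) at 0; synchronous visual stimulus at +2, delayed at -2.
hand, v_sync, v_delay = 0.0, 2.0, -2.0

# Winner-takes-all: only the synchronous stimulus is integrated.
wta = fuse([hand, v_sync], [1.0, 1.0])

# Superposition: both visual stimuli contribute, the delayed one with a
# smaller (hypothetical) weight reflecting its lower temporal proximity.
superposed = fuse([hand, v_sync, v_delay], [1.0, 1.0, 0.4])
```

Both accounts predict a bias toward the synchronous stimulus; they differ in whether the delayed stimulus pulls the estimate back, which is the signature the proprioceptive biases can reveal.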

2021 ◽  
Author(s):  
Nienke B Debats ◽  
Herbert Heuer ◽  
Christoph Kayser

To organize the plethora of sensory signals from our environment into a coherent percept, our brain relies on the processes of multisensory integration and sensory recalibration. We here asked how visuo-proprioceptive integration and recalibration are shaped by the presence of more than one potentially relevant visual stimulus, hence paving the way to studying multisensory perception under more naturalistic settings with multiple signals per sensory modality. By manipulating the spatio-temporal correspondence between the hand position and two visual stimuli during a cursor-control task, we contrasted two alternative accounts: that only the temporally more proximal signal enters integration and recalibration similar to a winner-takes-all process, or that the influences of both visual signals superpose. Our results show that integration - and likely also recalibration - are shaped by the superposed contributions of multiple stimuli rather than by only individual ones.


2021 ◽  
Vol 12 ◽  
Author(s):  
LomaJohn T. Pendergraft ◽  
John M. Marzluff ◽  
Donna J. Cross ◽  
Toru Shimizu ◽  
Christopher N. Templeton

Social interaction among animals can occur under many contexts, such as during foraging. Our knowledge of the regions within an avian brain associated with social interaction is limited to the regions activated by a single context or sensory modality. We used 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) to examine American crow (Corvus brachyrhynchos) brain activity in response to conditions associated with communal feeding. Using a paired approach, we exposed crows to either a visual stimulus (the sight of food), an audio stimulus (the sound of conspecifics vocalizing while foraging) or both audio/visual stimuli presented simultaneously, and compared their brain activity in each condition with that in response to a control stimulus (an empty stage). We found two regions, the nucleus taenia of the amygdala (TnA) and a medial portion of the caudal nidopallium, that showed increased activity in response to the multimodal combination of stimuli but not in response to either stimulus when presented unimodally. We also found significantly increased activity in the lateral septum and medially within the nidopallium in response to both the audio-only and the combined audio/visual stimuli. We did not find any differences in activation in response to the visual stimulus by itself. We discuss how these regions may be involved in the processing of multimodal stimuli in the context of social interaction.


2020 ◽  
Vol 7 (8) ◽  
pp. 192056
Author(s):  
Nienke B. Debats ◽  
Herbert Heuer

Successful computer use requires the operator to link the movement of the cursor to that of his or her hand. Previous studies suggest that the brain establishes this perceptual link through multisensory integration, whereby the causality evidence that drives the integration is provided by the correlated hand and cursor movement trajectories. Here, we explored the temporal window during which this causality evidence is effective. We used a basic cursor-control task, in which participants performed out-and-back reaching movements with their hand on a digitizer tablet. A corresponding cursor movement could be shown on a monitor, yet slightly rotated by an angle that varied from trial to trial. Upon completion of the backward movement, participants judged the endpoint of the outward hand or cursor movement. The mutually biased judgements that typically result reflect the integration of the proprioceptive information on hand endpoint with the visual information on cursor endpoint. We here manipulated the time period during which the cursor was visible, thereby selectively providing causality evidence either before or after sensory information regarding the to-be-judged movement endpoint was available. Specifically, the cursor was visible either during the outward or backward hand movement (conditions Out and Back, respectively). Our data revealed reduced integration in the condition Back compared with the condition Out, suggesting that causality evidence available before the to-be-judged movement endpoint is more powerful than later evidence in determining how strongly the brain integrates the endpoint information. This finding further suggests that sensory integration is not delayed until a judgement is requested.
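
Mutually biased judgements of this kind are commonly quantified as a proportional bias of one modality's judged endpoint toward the other modality's endpoint. The sketch below uses hypothetical numbers, and the exact measure used in the paper may differ:

```python
def integration_bias(judged, actual, other):
    """Proportional bias of a judged endpoint toward the other modality's
    endpoint: 0 = no integration, 1 = complete capture."""
    return (judged - actual) / (other - actual)

# Hand endpoint at 0 deg, cursor endpoint rotated to 10 deg; a hand judgment
# landing at 3 deg corresponds to a bias of 0.3 toward the cursor.
b = integration_bias(judged=3.0, actual=0.0, other=10.0)
```

Comparing this bias between the Out and Back conditions is one way to express "reduced integration" as a single number per condition.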


1992 ◽  
Vol 67 (6) ◽  
pp. 1447-1463 ◽  
Author(s):  
K. Nakamura ◽  
A. Mikami ◽  
K. Kubota

1. The activity of single neurons was recorded extracellularly from the monkey amygdala while monkeys performed a visual discrimination task. The monkeys were trained to remember a visual stimulus during a delay period (0.5-3.0 s), to discriminate a new visual stimulus from the remembered stimulus, and to release a lever when the new stimulus was presented. Colored photographs (human faces, monkeys, foods, and nonfood objects) or computer-generated two-dimensional shapes (a yellow triangle, a red circle, etc.) were used as visual stimuli. 2. The activity of 160 task-related neurons was studied. Of these, 144 (90%) responded to visual stimuli, 13 (8%) showed firing during the delay period, and 9 (6%) responded to the reward. 3. Task-related neurons were categorized according to the way in which various stimuli activated the neurons. First, to evaluate the proportion of all tested stimuli that elicited changes in activity of a neuron, selectivity index 1 (SI1) was employed. Second, to evaluate the ability of a neuron to discriminate a stimulus from another stimulus, SI2 was employed. On the basis of the calculated values of SI1 and SI2, neurons were classified as selective and nonselective. Most visual neurons were categorized as selective (131/144), and a few were characterized as nonselective (13/144). Neurons active during the delay period were also categorized as selective visual and delay neurons (6/13) and as nonselective delay neurons (7/13). 4. Responses of selective visual neurons had various temporal and stimulus-selective properties. Latencies ranged widely from 60 to 300 ms. Response durations also ranged widely from 20 to 870 ms. When the natures of the various effective stimuli were studied for each neuron, one-fourth of the responses of these neurons were considered to reflect some categorical aspect of the stimuli, such as human, monkey, food, or nonfood object. 
Furthermore, the responses of some neurons apparently reflected a certain behavioral significance of the stimuli that was separate from the task, such as the face of a particular person, smiling human faces, etc. 5. Nonselective visual neurons responded to a visual stimulus, regardless of its nature. They also responded in the absence of a visual stimulus when the monkey anticipated the appearance of the next stimulus. 6. Selective visual and delay neurons fired in response to particular stimuli and throughout the subsequent delay periods. Nonselective delay neurons increased their discharge rates gradually during the delay period, and the discharge rate decreased after the next stimulus was presented. 7. Task-related neurons were identified in six histologically distinct nuclei of the amygdala.(ABSTRACT TRUNCATED AT 400 WORDS)
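
The abstract names the two selectivity indices without giving formulas, so the following is only one plausible reading, with hypothetical criteria and firing rates: SI1 as the fraction of tested stimuli that change a neuron's firing, SI2 as a best-versus-rest response contrast.

```python
import numpy as np

def si1(mean_changes, criterion):
    """Fraction of tested stimuli whose mean firing-rate change (spikes/s)
    exceeds `criterion` in magnitude; the criterion is hypothetical."""
    c = np.asarray(mean_changes, float)
    return float(np.mean(np.abs(c) > criterion))

def si2(mean_responses):
    """Illustrative discrimination index: contrast between the best stimulus
    and the average of the rest (0 = no discrimination, near 1 = selective)."""
    r = np.sort(np.asarray(mean_responses, float))[::-1]
    best, rest = r[0], r[1:].mean()
    return float((best - rest) / (best + rest))

# A selective neuron responds strongly to one stimulus only; a nonselective
# neuron responds roughly equally to all stimuli.
sel = si2([20.0, 4.0, 3.0, 5.0])
non = si2([10.0, 9.0, 11.0, 10.0])
```

Thresholding a pair of indices like these is one way the selective/nonselective split described in the abstract could be operationalized.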


1995 ◽  
Vol 12 (4) ◽  
pp. 723-741 ◽  
Author(s):  
W. Guido ◽  
S.-M. Lu ◽  
J.W. Vaughan ◽  
Dwayne W. Godwin ◽  
S. Murray Sherman

Abstract Relay cells of the lateral geniculate nucleus respond to visual stimuli in one of two modes: burst and tonic. The burst mode depends on the activation of a voltage-dependent, Ca2+ conductance underlying the low threshold spike. This conductance is inactivated at depolarized membrane potentials, but when activated from hyperpolarized levels, it leads to a large, triangular, nearly all-or-none depolarization. Typically, riding its crest is a high-frequency barrage of action potentials. Low threshold spikes thus provide a nonlinear amplification allowing hyperpolarized relay neurons to respond to depolarizing inputs, including retinal EPSPs. In contrast, the tonic mode is characterized by a steady stream of unitary action potentials that more linearly reflects the visual stimulus. In this study, we tested possible differences in detection between response modes of 103 geniculate neurons by constructing receiver operating characteristic (ROC) curves for responses to visual stimuli (drifting sine-wave gratings and flashing spots). Detectability was determined from the ROC curves by computing the area under each curve, known as the ROC area. Most cells switched between modes during recording, evidently due to small shifts in membrane potential that affected the activation state of the low threshold spike. We found that the more often a cell responded in burst mode, the larger its ROC area. This was true for responses to optimal and nonoptimal visual stimuli, the latter including nonoptimal spatial frequencies and low stimulus contrasts. The larger ROC areas associated with burst mode were due to a reduced spontaneous activity and roughly equivalent level of visually evoked response when compared to tonic mode. We performed a within-cell analysis on a subset of 22 cells that switched modes during recording. 
Every cell, whether tested with a low contrast or high contrast visual stimulus exhibited a larger ROC area during its burst response mode than during its tonic mode. We conclude that burst responses better support signal detection than do tonic responses. Thus, burst responses, while less linear and perhaps less useful in providing a detailed analysis of visual stimuli, improve target detection. The tonic mode, with its more linear response, seems better suited for signal analysis rather than signal detection.
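
A spike-count ROC area of this kind can be computed without an explicit threshold sweep, because the area under the curve equals the probability that a randomly drawn stimulus-driven count exceeds a spontaneous one (the Mann-Whitney equivalence). The counts below are hypothetical, chosen only to mirror the burst-versus-tonic contrast the abstract describes:

```python
import numpy as np

def roc_area(driven, spontaneous):
    """Area under the ROC curve for discriminating stimulus-driven from
    spontaneous spike counts: P(driven > spontaneous) + 0.5 * P(tie)."""
    d = np.asarray(driven, float)[:, None]
    s = np.asarray(spontaneous, float)[None, :]
    return float(np.mean((d > s) + 0.5 * (d == s)))

# Burst mode: low spontaneous activity, so the count distributions separate.
burst = roc_area(driven=[8, 9, 7, 10], spontaneous=[0, 1, 0, 2])
# Tonic mode: comparable driven counts but higher spontaneous activity,
# hence more overlap and a smaller ROC area.
tonic = roc_area(driven=[8, 9, 7, 10], spontaneous=[6, 8, 9, 5])
```

With equivalent evoked responses, reducing spontaneous activity alone pushes the area toward 1, which is the mechanism the abstract attributes to burst mode.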


1996 ◽  
Vol 76 (3) ◽  
pp. 1439-1456 ◽  
Author(s):  
P. Mazzoni ◽  
R. M. Bracewell ◽  
S. Barash ◽  
R. A. Andersen

1. The lateral intraparietal area (area LIP) of the monkey's posterior parietal cortex (PPC) contains neurons that are active during saccadic eye movements. These neurons' activity includes visual and saccade-related components. These responses are spatially tuned and the location of a neuron's visual receptive field (RF) relative to the fovea generally overlaps its preferred saccade amplitude and direction (i.e., its motor field, MF). When a delay is imposed between the presentation of a visual stimulus and a saccade made to its location (memory saccade task), many LIP neurons maintain elevated activity during the delay (memory activity, M), which appears to encode the metrics of the next intended saccadic eye movements. Recent studies have alternatively suggested that LIP neurons encode the locations of visual stimuli regardless of where the animal intends to look. We examined whether the M activity of LIP neurons specifically encodes movement intention or the locations of recent visual stimuli, or a combination of both. In the accompanying study, we investigated whether the intended-movement activity reflects changes in motor plan. 2. We trained monkeys (Macaca mulatta) to memorize the locations of two visual stimuli and plan a sequence of two saccades, one to each remembered target, as we recorded the activity of single LIP neurons. Two targets were flashed briefly while the monkey maintained fixation; after a delay the fixation point was extinguished, and the monkey made two saccades in sequence to each target's remembered location, in the order in which the targets were presented. This "delayed double saccade" (DDS) paradigm allowed us to dissociate the location of visual stimulation from the direction of the planned saccade and thus distinguish neuronal activity related to the target's location from activity related to the saccade plan. 
By imposing a delay, we eliminated the confounding effect of any phasic responses coincident with the appearance of the stimulus and with the saccade. 3. We arranged the two visual stimuli so that in one set of conditions at least the first one was in the neuron's visual RF, and thus the first saccade was in the neuron's motor field (MF). M activity should be high in these conditions according to both the sensory memory and motor plan hypotheses. In another set of conditions, the second stimulus appeared in the RF but the first one was presented outside the RF, instructing the monkey to plan the first saccade away from the neuron's MF. If the M activity encodes the motor plan, it should be low in these conditions, reflecting the plan for the first saccade (away from the MF). If it is a sensory trace of the stimulus' location, it should be high, reflecting stimulation of the RF by the second target. 4. We tested 49 LIP neurons (in 3 hemispheres of 2 monkeys) with M activity on the DDS task. Of these, 38 (77%) had M activity related to the next intended saccade. They were active in the delay period, as expected, if the first saccade was in their preferred direction. They were less active or silent if the next saccade was not in their preferred direction, even when the second stimulus appeared in their RF. 5. The M activity of 8 (16%) of the remaining neurons specifically encoded the location of the most recent visual stimulus. Their firing rate during the delay reflected stimulation of the RF independently of the saccade being planned. The remaining 3 neurons had M activity that did not consistently encode either the next saccade or the stimulus' location. 6. We also recorded the activity of a subset of neurons (n = 38) in a condition in which no stimulus appeared in a neuron's RF, but the second saccade was in the neuron's MF. 
In this case the majority of neurons tested (23/38, 60%) became active in the period between the first and second saccade, even if neither stimulus had appeared in their RF. Moreover, this activity appeared only after the first saccade had started in all but two of


2021 ◽  
pp. 1-12
Author(s):  
Anna Borgolte ◽  
Ahmad Bransi ◽  
Johanna Seifert ◽  
Sermin Toto ◽  
Gregor R. Szycik ◽  
...  

Abstract Synaesthesia is a multimodal phenomenon in which the activation of one sensory modality leads to an involuntary additional experience in another sensory modality. To date, normal multisensory processing has hardly been investigated in synaesthetes. In the present study we examine processes of audiovisual separation in synaesthesia by using a simultaneity judgement task. Subjects were asked to indicate whether an acoustic and a visual stimulus occurred simultaneously or not. Stimulus onset asynchronies (SOA) as well as the temporal order of the stimuli were systematically varied. Our results demonstrate that synaesthetes are better in separating auditory and visual events than control subjects, but only when vision leads.
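
Simultaneity judgements of this kind are often summarized by a temporal binding window. A toy model with hypothetical, asymmetric Gaussian widths (not fitted to the study's data) illustrates why a narrower window on the vision-leads side means better audiovisual separation when vision leads:

```python
import numpy as np

def p_simultaneous(soa_ms, width_vis_leads=150.0, width_aud_leads=200.0):
    """Probability of a 'simultaneous' response as a Gaussian of SOA.
    Positive SOA = vision leads; the widths are hypothetical."""
    w = width_vis_leads if soa_ms >= 0 else width_aud_leads
    return float(np.exp(-0.5 * (soa_ms / w) ** 2))

# At the same absolute SOA, the narrower vision-leads window yields fewer
# 'simultaneous' responses, i.e. better separation of the two events.
vis_leads = p_simultaneous(100.0)
aud_leads = p_simultaneous(-100.0)
```

In this framing, the synaesthetes' advantage would correspond to a narrower vision-leads half of the window relative to controls.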


2018 ◽  
Vol 7 ◽  
pp. 172-177
Author(s):  
Łukasz Tyburcy ◽  
Małgorzata Plechawska-Wójcik

The paper compares reaction times to visual and auditory stimuli using EEG evoked potentials. Two experiments were conducted: the first measured reaction times to a visual stimulus, the second to an auditory stimulus. Analysis of the data showed that visual stimuli evoke faster reactions than auditory stimuli.
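
A comparison like this typically reduces to mean reaction times per modality plus a two-sample test. The sketch below uses hypothetical RT samples (the paper's EEG-derived values are not reproduced here) and Welch's t statistic:

```python
import numpy as np

def welch_t(a, b):
    """Welch's t statistic for two independent reaction-time samples (ms)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    va, vb = a.var(ddof=1) / a.size, b.var(ddof=1) / b.size
    return float((a.mean() - b.mean()) / np.sqrt(va + vb))

# Hypothetical per-trial reaction times in milliseconds.
visual_rt = [265, 250, 270, 255, 260]
auditory_rt = [290, 300, 285, 295, 280]
t_stat = welch_t(visual_rt, auditory_rt)  # negative: visual is faster
```

The sign of the statistic encodes the direction of the effect reported in the abstract; its magnitude would then be referred to a t distribution for significance.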


2021 ◽  
Author(s):  
Constantinos Eleftheriou

The goal of this protocol is to assess visuomotor learning and motor flexibility in freely-moving mice, using the Visiomode touchscreen platform. Water-restricted mice first learn to associate touching a visual stimulus on the screen with a water reward. They then learn to discriminate between different visual stimuli on the touchscreen by nose-poking, before being asked to switch their motor strategy to forelimb reaching. Version 1 of the protocol uses traditional water deprivation and water rewards as a means of motivating mice to perform the task. Version 2 of the protocol instead uses citric acid for water restriction and sucrose as rewards in the task.


2020 ◽  
Vol 46 (Supplement_1) ◽  
pp. S241-S241
Author(s):  
Martin Roy ◽  
Elsa Gilbert ◽  
Michel Maziade ◽  
Pierre Marquet

Abstract Background Major psychiatric disorders (MPD) such as schizophrenia, bipolar disorder and recurrent major depression share a neurodevelopmental vulnerability due to early neuronal and sensory defects, as revealed by sensory and cognitive endophenotypes observed in our cohorts (e.g. Gagné et al., Schizophr. Res., 2019). There is considerable evidence that harmonious self-development - known to be disrupted in MPDs - requires synchronized multisensory perception and adequate integration of sensory afferences (e.g. tactile, visual, auditory and proprio-/interoception) with cognition. Early impairment in intermodal transfer (IMT) and multisensory integration (MSI) may jeopardize a stable and unified representation of the self and the world, and would thereby undermine self-development and represent a risk factor for MPD. IMT is the capability to transfer a percept coming exclusively from one sensory modality (e.g. tactile) to another modality (e.g. visual). MSI is the ability to integrate sensory inputs from different modalities (e.g. visual and auditory) to improve information processing. This study shows that impairment in IMT/MSI may be a vulnerability marker in genetically at-risk children. Methods Sample: Forty-four offspring (21 girls) of patients suffering from a MPD, and thus genetically at-risk for MPD (GatR), aged 9-15 years (mean age = 12.06), were recruited from the cohort study INTERCEPT through the HoPE program of the CIUSSS de la Capitale-Nationale. Twenty-five controls (19 girls) with no family history of MPD and no DSM-V disorder, aged 9-15 years (mean age = 12.87), were recruited through advertisements or a control bank. IMT task: each condition has 12 trials and the shapes are hidden from sight during palpation. MSI task: a simple reaction time (RT) task comprising 80 trials with unimodal stimuli (auditory OR visual) and 40 trials with AV (auditory and visual simultaneously) multimodal stimuli, presented randomly. 
Results IMT task: When compared to controls, GatR were impaired in the three conditions (T-T: 9.77 vs. 10.32, T-V: 9.89 vs. 9.96, V-T: 9.11 vs. 9.92), with significant impairments both for the T-T (t(60.53) = 2.18, p = 0.017) and V-T (t(57.28) = 2.33, p = 0.012) conditions. MSI task: GatR showed a deficit in MSI for almost all RT ranges (except for a peak at 185 ms), while control participants showed MSI facilitation for ranges from 150 to 200 ms. Discussion Children at developmental genetic high risk thus showed significant impairments both in IMT and MSI, which might join the group of indicators of brain dysfunction, or risk endophenotypes, that both children at risk and adult patients carry (Paccalet et al., Schizophr. Res., 2016; Maziade, New Eng J Medicine, 2017). In addition, the two tasks appear valid and sensitive to the early sensory alterations in self-development. Finally, the battery is brief, user-friendly and playful for children.
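
MSI facilitation analyzed "across RT ranges" is commonly implemented as a test of Miller's race-model inequality; whether this matches the exact analysis used here is an assumption, and the RTs below are hypothetical.

```python
import numpy as np

def race_model_violation(rt_av, rt_a, rt_v, t_ms):
    """Miller's race-model inequality at time t: positive values indicate
    multisensory facilitation beyond probability summation, i.e.
    P(RT_AV <= t) > P(RT_A <= t) + P(RT_V <= t)."""
    cdf = lambda rts, t: float(np.mean(np.asarray(rts, float) <= t))
    return cdf(rt_av, t_ms) - min(1.0, cdf(rt_a, t_ms) + cdf(rt_v, t_ms))

# Hypothetical RTs (ms): the bimodal (AV) condition is fast enough at
# t = 185 ms that the race-model bound is exceeded, i.e. MSI facilitation.
violation = race_model_violation(
    rt_av=[150, 160, 170, 180],
    rt_a=[200, 210, 220, 230],
    rt_v=[190, 205, 215, 225],
    t_ms=185,
)
```

Evaluating this quantity over a grid of t values yields exactly the kind of per-RT-range facilitation profile contrasted between groups in the abstract.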

