Dissociable Contribution of Extrastriate Responses to Representational Enhancement of Gaze Targets

2021 ◽  
pp. 1-14
Author(s):  
Yaser Merrikhi ◽  
Mohammad Shams-Ahmar ◽  
Hamid Karimi-Rouzbahani ◽  
Kelsey Clark ◽  
Reza Ebrahimpour ◽  
...  

Abstract Before saccadic eye movements, our perception of the saccade targets is enhanced. Changes in the visual representation of saccade targets, which presumably underlie this perceptual benefit, emerge even before the eye begins to move. This perisaccadic enhancement has been shown to involve changes in the response magnitude, selectivity, and reliability of visual neurons. In this study, we quantified multiple aspects of perisaccadic changes in the neural response, including gain, feature tuning, contrast response function, reliability, and correlated activity between neurons. We then assessed the contributions of these various perisaccadic modulations to the population's enhanced perisaccadic representation of saccade targets. We found a partial dissociation between the motor information, carried entirely by gain changes, and visual information, which depended on all three types of modulation. These findings expand our understanding of the perisaccadic enhancement of visual representations and further support the existence of multiple sources of motor modulation and visual enhancement within extrastriate visual cortex.

2019 ◽  
Vol 30 (3) ◽  
pp. 1068-1086 ◽  
Author(s):  
Bruno Oliveira Ferreira de Souza ◽  
Nelson Cortes ◽  
Christian Casanova

Abstract The pulvinar is the largest extrageniculate visual nucleus in mammals. Given its extensive reciprocal connectivity with the visual cortex, it allows the cortico-thalamocortical transfer of visual information. Nonetheless, knowledge of the nature of the pulvinar inputs to the cortex remains elusive. We investigated the impact of silencing the pulvinar on the contrast response function of neurons in 2 distinct hierarchical cortical areas in the cat (areas 17 and 21a). Pulvinar inactivation altered the response gain in both areas, but with larger changes observed in area 21a. A theoretical model was proposed, simulating the pulvinar contribution to cortical contrast responses by modifying the excitation-inhibition balanced state of neurons across the cortical hierarchy. Our experimental and theoretical data showed that the pulvinar exerts a greater modulatory influence on neuronal activity in area 21a than in the primary visual cortex, indicating that the pulvinar impact on cortical visual neurons varies along the cortical hierarchy.
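The response-gain change reported above can be made concrete with a small sketch. The abstract does not specify the functional form, so this assumes a Naka-Rushton contrast-response function and an illustrative gain factor; `naka_rushton` and the parameter values are hypothetical stand-ins, not the authors' model.

```python
import numpy as np

def naka_rushton(c, r_max=1.0, c50=0.3, n=2.0):
    """Hypothetical Naka-Rushton contrast-response function."""
    return r_max * c**n / (c**n + c50**n)

contrasts = np.linspace(0.0, 1.0, 11)

# Intact pulvinar: baseline responses across contrast.
baseline = naka_rushton(contrasts)

# Pulvinar silenced: modeled here as a pure response-gain reduction,
# i.e. a vertical scaling of the whole curve (illustrative value).
silenced = naka_rushton(contrasts, r_max=0.6)

# For a response-gain change, the ratio is constant across contrast
# (index 0 is skipped because both responses are zero at zero contrast).
ratio = silenced[1:] / baseline[1:]
```

A larger gain reduction in area 21a than in area 17 would correspond, in this toy picture, to a smaller `r_max` scaling for the higher-order area.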


F1000Research ◽  
2013 ◽  
Vol 2 ◽  
pp. 58 ◽  
Author(s):  
J Daniel McCarthy ◽  
Colin Kupitz ◽  
Gideon P Caplovitz

Our perception of an object’s size arises from the integration of multiple sources of visual information, including retinal size, perceived distance, and size relative to other objects in the visual field. This constructive process is revealed through a number of classic size illusions, such as the Delboeuf Illusion, the Ebbinghaus Illusion, and others illustrating size constancy. Here we present a novel variant of the Delboeuf and Ebbinghaus size illusions that we have named the Binding Ring Illusion. In this illusion, the perceived size of a circular array of elements is underestimated when a circular contour (a binding ring) is superimposed on the array, and overestimated when the binding ring slightly exceeds the overall size of the array. Here we characterize the stimulus conditions that lead to the illusion, and the perceptual principles that underlie it. Our findings indicate that the perceived size of an array is susceptible to the assimilation of an explicitly defined superimposed contour. Our results also indicate that the assimilation process takes place at a relatively high level in the visual processing stream, after different spatial frequencies have been integrated and global shape has been constructed. We hypothesize that the Binding Ring Illusion arises because the size of an array of elements is not explicitly defined and therefore can be influenced (through a process of assimilation) by the presence of a superimposed object that does have an explicit size.


2020 ◽  
Vol 31 (01) ◽  
pp. 030-039 ◽  
Author(s):  
Aaron C. Moberly ◽  
Kara J. Vasil ◽  
Christin Ray

Abstract Adults with cochlear implants (CIs) are believed to rely more heavily on visual cues during speech recognition tasks than their normal-hearing peers. However, the relationship between auditory and visual reliance during audiovisual (AV) speech recognition is unclear and may depend on an individual’s auditory proficiency, duration of hearing loss (HL), age, and other factors. The primary purpose of this study was to examine whether visual reliance during AV speech recognition depends on auditory function for adult CI candidates (CICs) and adult experienced CI users (ECIs). Participants included 44 ECIs and 23 CICs. All participants were postlingually deafened and had met clinical candidacy requirements for cochlear implantation. Participants completed City University of New York sentence recognition testing. Three separate lists of twelve sentences each were presented: the first in the auditory-only (A-only) condition, the second in the visual-only (V-only) condition, and the third in combined AV fashion. Each participant’s “visual enhancement” (VE) and “auditory enhancement” (AE) were computed (i.e., the benefit to AV speech recognition of adding visual or auditory information, respectively, relative to what could potentially be gained). The relative reliance of VE versus AE was also computed as a VE/AE ratio. The VE/AE ratio was inversely predicted by A-only performance. Visual reliance was not significantly different between ECIs and CICs. Duration of HL and age did not account for additional variance in the VE/AE ratio. A shift toward visual reliance may be driven by poor auditory performance in ECIs and CICs. The restoration of auditory input through a CI does not necessarily facilitate a shift back toward auditory reliance.
Findings suggest that individual listeners with HL may rely on both auditory and visual information during AV speech recognition, to varying degrees based on their own performance and experience, to optimize communication performance in real-world listening situations.
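The enhancement measures described above can be sketched numerically. The abstract does not spell out the normalization, so this assumes the standard headroom-normalized form (gain divided by what could still be gained); the function names and the scores are hypothetical illustrations.

```python
def visual_enhancement(av, a):
    """VE: benefit of adding vision, normalized by headroom above A-only."""
    return (av - a) / (1.0 - a)

def auditory_enhancement(av, v):
    """AE: benefit of adding audition, normalized by headroom above V-only."""
    return (av - v) / (1.0 - v)

# Hypothetical proportion-correct sentence scores for one listener.
a_only, v_only, av = 0.40, 0.30, 0.70

ve = visual_enhancement(av, a_only)    # (0.70 - 0.40) / 0.60 = 0.50
ae = auditory_enhancement(av, v_only)  # (0.70 - 0.30) / 0.70 ≈ 0.57
ratio = ve / ae                        # VE/AE < 1: relatively more auditory reliance
```

Under this scheme, a listener with worse A-only performance has more headroom for visual benefit, which is consistent with the reported inverse relationship between the VE/AE ratio and A-only scores.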


1997 ◽  
Vol 14 (3) ◽  
pp. 577-587 ◽  
Author(s):  
Jonathan D. Victor ◽  
Mary M. Conte ◽  
Keith P. Purpura

Abstract We recorded visual evoked potentials in response to square-wave contrast-reversal checkerboards undergoing a transition in the mean contrast level. Checkerboards were modulated at 4.22 Hz (8.45-Hz reversal rate). After each set of 16 cycles of reversals, stimulus contrast abruptly switched between a “high” contrast level (0.06 to 1.0) and a “low” contrast level (0.03 to 0.5). Higher contrasts attenuated responses to lower contrasts by up to a factor of 2 during the period immediately following the contrast change. Contrast-response functions derived from the initial second following a conditioning contrast shifted by a factor of 2–4 along the contrast axis. For low-contrast stimuli, response phase was an advancing function of the contrast level in the immediately preceding second. For high-contrast stimuli, response phase was independent of the prior contrast history. Steady stimulation for periods as long as 1 min produced only minor effects on response amplitude, and no detectable effects on response phase. These observations delineate the dynamics of a contrast gain control in human vision.
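A shift of the contrast-response function along the contrast axis is distinct from a vertical (response-gain) scaling, and can be sketched as follows. The abstract reports only the factor-of-2–4 shift, so the Naka-Rushton form and the specific semisaturation contrasts below are illustrative assumptions.

```python
import numpy as np

def crf(c, c50, r_max=1.0, n=2.0):
    """Illustrative contrast-response function (Naka-Rushton form)."""
    return r_max * c**n / (c**n + c50**n)

contrasts = np.array([0.03, 0.06, 0.125, 0.25, 0.5, 1.0])

# Conditioning by low contrast leaves the curve at its baseline position;
# conditioning by high contrast shifts it rightward along the contrast
# axis, here by a factor of 3 in the semisaturation contrast c50.
low_conditioned = crf(contrasts, c50=0.1)
high_conditioned = crf(contrasts, c50=0.3)

# A rightward shift attenuates responses to low contrasts proportionally
# more than to high contrasts, unlike a uniform vertical scaling.
attenuation = high_conditioned / low_conditioned
```

In this toy picture, `attenuation` is smallest (strongest suppression) at the lowest test contrasts and approaches 1 at full contrast, matching the qualitative signature of a contrast (rather than response) gain control.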


1981 ◽  
Vol 36 (9-10) ◽  
pp. 910-912 ◽  
Author(s):  
Simon Laughlin

Abstract The contrast-response function of a class of first-order interneurons in the fly’s compound eye approximates the cumulative probability distribution of contrast levels in natural scenes. Elementary information theory shows that this matching enables the neurons to encode contrast fluctuations most efficiently.
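The matching principle above is histogram equalization: if the response function equals the cumulative distribution of stimulus contrasts, every response level is used equally often, maximizing information for a limited response range. A minimal sketch, assuming an arbitrary stand-in contrast distribution (the exponential scale below is illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for contrast levels sampled from natural scenes (assumed distribution).
contrasts = rng.exponential(scale=0.2, size=10_000)

# Laughlin's principle: the most efficient contrast-response function is the
# cumulative probability distribution of the stimulus contrasts.
sorted_c = np.sort(contrasts)
responses = np.searchsorted(sorted_c, contrasts) / contrasts.size

# With this mapping, the response histogram is approximately uniform:
# every part of the response range is used equally often.
counts, _ = np.histogram(responses, bins=10, range=(0.0, 1.0))
```

Mapping each contrast through the empirical CDF assigns it its normalized rank, so the flat response histogram falls out by construction; the biological claim is that the fly's interneurons implement roughly this mapping.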


Author(s):  
Martin V. Butz ◽  
Esther F. Kutter

While bottom-up visual processing is important, the brain integrates this information with top-down, generative expectations from very early on in the visual processing hierarchy. Indeed, our brain should not be viewed as a classification system, but rather as a generative system, which perceives something by integrating sensory evidence with the available, learned, predictive knowledge about that thing. The involved generative models continuously produce expectations over time, across space, and from abstract encodings to more concrete encodings. Bayesian information processing is the key to understanding how information integration must work computationally, at least in approximation, in the brain. Bayesian networks in the form of graphical models allow the modularization of information and the factorization of interactions, which can strongly improve the efficiency of generative models. The resulting generative models essentially produce state estimations in the form of probability densities, which are very well suited to integrating multiple sources of information, including top-down and bottom-up ones. A hierarchical neural visual processing architecture illustrates this point even further. Finally, some well-known visual illusions are shown and the percepts are explained by means of generative, information-integrating perceptual processes, which in all cases combine top-down prior knowledge and expectations about objects and environments with the available bottom-up visual information.
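The simplest instance of this Bayesian integration is precision-weighted fusion of two Gaussian sources: a top-down prior and a bottom-up sensory measurement. The function and the numbers below are an illustrative sketch, not the chapter's model.

```python
def fuse(mu_prior, var_prior, mu_sense, var_sense):
    """Posterior of two Gaussian estimates of the same latent variable."""
    w = var_sense / (var_prior + var_sense)        # weight on the prior mean
    mu_post = w * mu_prior + (1.0 - w) * mu_sense  # precision-weighted average
    var_post = (var_prior * var_sense) / (var_prior + var_sense)
    return mu_post, var_post

# Top-down expectation (learned knowledge) vs. a noisy sensory measurement.
mu, var = fuse(mu_prior=0.0, var_prior=4.0, mu_sense=2.0, var_sense=1.0)
# The posterior mean is pulled toward the more reliable (lower-variance)
# source, and the posterior is more precise than either source alone.
```

Each source contributes in proportion to its precision (inverse variance), which is why a strong prior dominates weak sensory evidence and vice versa, the mechanism the chapter invokes to explain visual illusions.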


2011 ◽  
Vol 29 (1) ◽  
pp. 61-71 ◽  
Author(s):  
KEVIN J. FORD ◽  
MARLA B. FELLER

Abstract In the few weeks prior to the onset of vision, the retina undergoes a dramatic transformation. Neurons migrate into position and target appropriate synaptic partners to assemble the circuits that mediate vision. During this period of development, the retina is not silent but rather assembles and disassembles a series of transient circuits that use distinct mechanisms to generate spontaneous correlated activity called retinal waves. During the first postnatal week, this transient circuit is comprised of reciprocal cholinergic connections between starburst amacrine cells. A few days before the eyes open, these cholinergic connections are eliminated as the glutamatergic circuits involved in processing visual information are formed. Here, we discuss the assembly and disassembly of this transient cholinergic network and the role it plays in various aspects of retinal development.


2008 ◽  
Vol 48 (16) ◽  
pp. 1726-1734 ◽  
Author(s):  
Patrick H.W. Chu ◽  
Henry H.L. Chan ◽  
Yiu-fai Ng ◽  
Brian Brown ◽  
Andrew W. Siu ◽  
...  

2009 ◽  
Vol 276 (1662) ◽  
pp. 1545-1554 ◽  
Author(s):  
Karin Nordström ◽  
David C O'Carroll

Motion adaptation is a widespread phenomenon analogous to peripheral sensory adaptation, presumed to play a role in matching responses to prevailing stimulus parameters and thus to maximize the efficiency of motion coding. While several components of motion adaptation (contrast gain reduction, output range reduction and motion after-effect) have been described, previous work is inconclusive as to whether these are separable phenomena and whether they are locally generated. We used intracellular recordings from single horizontal system neurons in the fly to test the effect of local adaptation on the full contrast-response function for stimuli at an unadapted location. We show that contrast gain and output range reductions are primarily local phenomena and are probably associated with spatially distinct synaptic changes, while the antagonistic after-potential operates globally by transferring to previously unadapted locations. Using noise analysis and signal processing techniques to remove ‘spikelets’, we also characterize a previously undescribed alternating current component of adaptation that can explain several phenomena observed in earlier studies.

