Robust effects of corticothalamic feedback during naturalistic visual stimulation

2019
Author(s): Martin A. Spacek, Gregory Born, Davide Crombie, Yannik Bauer, Xinyu Liu, ...

Abstract: Neurons in the dorsolateral geniculate nucleus (dLGN) of the thalamus are contacted by a large number of feedback synapses from cortex, whose role in visual processing is poorly understood. Past studies investigating this role have mostly used simple visual stimuli and anesthetized animals, but corticothalamic (CT) feedback might be particularly relevant during processing of complex visual stimuli, and its effects might depend on behavioral state. Here, we find that CT feedback robustly modulates responses to naturalistic movie clips by increasing response gain and promoting tonic firing mode. Compared to these robust effects for naturalistic movies, CT feedback effects on firing rates were less consistent for simple grating stimuli, likely related to differences in spatial context. Finally, while CT feedback and locomotion affected dLGN responses in similar ways, we found their effects to be largely independent. We propose that CT feedback and behavioral state use separate circuits to modulate visual information on its way to cortex in a context-dependent manner.

2017, Vol 118 (3), pp. 1542-1555
Author(s): Bastian Schledde, F. Orlando Galashan, Magdalena Przybyla, Andreas K. Kreiter, Detlef Wegener

Nonspatial selective attention is based on the notion that specific features or objects in the visual environment are effectively prioritized in cortical visual processing. Feature-based attention (FBA), in particular, is a well-studied process that dynamically and selectively addresses neurons preferentially processing the attended feature attribute (e.g., leftward motion). In everyday life, however, behavior may require high sensitivity for an entire feature dimension (e.g., motion), but experimental evidence for a feature dimension-specific attentional modulation on a cellular level is lacking. Therefore, we investigated neuronal activity in macaque motion-selective mediotemporal area (MT) in an experimental setting requiring the monkeys to detect either a motion change or a color change. We hypothesized that neural activity in MT is enhanced when the task requires perceptual sensitivity to motion. In line with this, we found that mean firing rates were higher in the motion task and that response variability and latency were lower compared with values in the color task, despite identical visual stimulation. This task-specific, dimension-based modulation of motion processing emerged even in the absence of visual input, was independent of the relation between the attended and stimulating motion direction, and was accompanied by a spatially global reduction of neuronal variability. The results provide single-cell support for the hypothesis of a feature dimension-specific top-down signal emphasizing the processing of an entire feature class. NEW & NOTEWORTHY Cortical processing serving visual perception prioritizes information according to current task requirements. We provide evidence in favor of a dimension-based attentional mechanism addressing all neurons that process visual information in the task-relevant feature domain.
Behavioral tasks required monkeys to attend either color or motion, causing modulations of response strength, variability, latency, and baseline activity of neurons in motion-selective monkey area MT, irrespective of the attended motion direction but specific to the attended feature dimension.


2021
Author(s): Kimberly Reinhold, Arbora Resulaj, Massimo Scanziani

The behavioral state of a mammal impacts how the brain responds to visual stimuli as early as in the dorsolateral geniculate nucleus of the thalamus (dLGN), the primary relay of visual information to the cortex. A clear example of this is the markedly stronger response of dLGN neurons to higher temporal frequencies of the visual stimulus in alert as compared to quiescent animals. The dLGN receives strong feedback from the visual cortex, yet whether this feedback contributes to these state-dependent responses to visual stimuli is poorly understood. Here we show that in mice, silencing cortico-thalamic feedback abolishes state-dependent differences in the response of dLGN neurons to visual stimuli. This holds true for dLGN responses to both temporal and spatial features of the visual stimulus. These results reveal that the state-dependent shift of the response to visual stimuli in an early stage of visual processing depends on cortico-thalamic feedback.


1994, Vol 71 (1), pp. 146-149
Author(s): J. Cudeiro, C. Rivadulla, R. Rodriguez, S. Martinez-Conde, C. Acuna, ...

1. Using an in vivo preparation we have examined the actions of two inhibitors of nitric oxide synthase (NOS), NG-nitro-L-arginine (L-NOArg) and NG-methyl-L-arginine (L-MeArg), in the feline dorsal lateral geniculate nucleus (dLGN). We compared the responses obtained to iontophoretic application of these substances during visual stimulation with those elicited by visual stimulation alone. The effects of concurrent ejection of L-arginine (L-Arg), the normal physiological substrate of NOS, and D-arginine, the inactive isomer, were tested on these responses. 2. Extracellular application of L-NOArg and L-MeArg produced clear and repeatable effects on 94% of tested cells, consisting of a substantial reduction in discharge rate without affecting response selectivity. These effects were prevented by simultaneous application of L-Arg, which when ejected alone produced no change in visually evoked responses. 3. The data suggest that nitric oxide (NO) is necessary for the transmission of the visual input under normal visual stimulation and show a direct involvement of NO in visual information processing at the level of the dLGN, suggesting that its contribution to brain mechanisms is more profound than previously thought.


2021, Vol 15
Author(s): Thorben Hülsdünker, David Riedel, Hannes Käsbauer, Diemo Ruhnow, Andreas Mierau

Although vision is the dominant sensory system in sports, many situations require multisensory integration. Faster processing of auditory information in the brain may facilitate time-critical abilities such as reaction speed; however, previous research was limited by generic auditory and visual stimuli that did not consider audio-visual characteristics of ecologically valid environments. This study investigated reaction speed in response to sport-specific monosensory (visual and auditory) and multisensory (audio-visual) stimulation. Neurophysiological analyses identified the neural processes contributing to differences in reaction speed. Nineteen elite badminton players participated in this study. In a first recording phase, the sound profile and shuttle speed of smash and drop strokes were identified on a badminton court using high-speed video cameras and binaural recordings. The speed and sound characteristics were transferred into auditory and visual stimuli and presented in a lab-based experiment, where participants reacted in response to sport-specific monosensory or multisensory stimulation. Auditory signal presentation was delayed by 26 ms to account for realistic audio-visual signal interaction on the court. N1 and N2 event-related potentials, as indicators of auditory and visual information perception/processing, respectively, were identified using a 64-channel EEG. Despite the 26 ms delay, auditory reactions were significantly faster than visual reactions (236.6 ms vs. 287.7 ms, p < 0.001) but still slower than reactions to multisensory stimulation (224.4 ms, p = 0.002). Across conditions, response times to smashes were faster than to drops (233.2 ms vs. 265.9 ms, p < 0.001). Faster reactions were paralleled by a lower latency and higher amplitude of the auditory N1 and visual N2 potentials. The results emphasize the potential of auditory information to accelerate the reaction time in sport-specific multisensory situations.
This highlights auditory processes as a promising target for training interventions in racquet sports.
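The 26 ms auditory offset corresponds to the acoustic travel time of a sound over a typical on-court distance, since light travel time is negligible. A minimal sketch of that arithmetic (the ~8.9 m source distance is an illustrative assumption, not a figure reported by the study, which derived the delay from binaural court recordings):

```python
# Speed of sound in air at roughly 20 degrees C, in m/s.
SPEED_OF_SOUND = 343.0

def auditory_delay_ms(distance_m: float) -> float:
    """Delay of a sound relative to the simultaneous visual signal
    for a source at distance_m from the observer, in milliseconds."""
    return distance_m / SPEED_OF_SOUND * 1000.0

# A stroke struck ~8.9 m away arrives acoustically ~26 ms late.
print(round(auditory_delay_ms(8.9), 1))
```

This makes clear why matching on-court conditions requires delaying the auditory stimulus relative to the visual one in the lab.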


2019
Author(s): Milton A. V. Ávila, Rafael N. Ruggiero, João P. Leite, Lezio S. Bueno-Junior, Cristina M. Del-Ben

Abstract: Audiovisual integration may improve unisensory perceptual performance and learning. Interestingly, this integration may occur even when one of the sensory modalities is not conscious to the subject; e.g., semantic auditory information may impact nonconscious visual perception. Studies have shown that the flow of nonconscious visual information is mostly restricted to early cortical processing, without reaching higher-order areas such as the parieto-frontal network. Thus, because multisensory cortical interactions may already occur in early stages of processing, we hypothesized that nonconscious visual stimulation might facilitate auditory pitch learning. In this study we used a pitch learning paradigm, in which individuals had to identify six pitches in a scale with constant intervals of 50 cents. Subjects were assigned to one of three training groups: the test group (Auditory + congruent unconscious visual, AV) and two control groups (Auditory only, A, and Auditory + incongruent unconscious visual, AVi). Auditory-only tests were done before and after training in all groups. Electroencephalography (EEG) was recorded throughout the experiment. Results show that the test group (AV, with congruent nonconscious visual stimuli) performed better during training and showed a greater improvement from pre- to post-test. Control groups did not differ from one another. Changes in the AV group were mainly due to performance on the first and last pitches of the scale. We also observed consistent EEG patterns associated with this performance improvement in the AV group, especially maintenance of higher theta-band power after training in central and temporal areas, and stronger theta-band synchrony between visual and auditory cortices.
Therefore, we show that nonconscious multisensory interactions are powerful enough to boost auditory perceptual learning, and that increased functional connectivity between early visual and auditory cortices after training might play a role in this effect. Moreover, we provide a methodological contribution for future studies on auditory perceptual learning, particularly those applied to relative and absolute pitch training.
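A scale with constant 50-cent intervals follows directly from the definition of the cent: 1/1200 of an octave, so each step multiplies frequency by 2^(50/1200). A short sketch of the construction (the 440 Hz base frequency is a hypothetical choice for illustration; the study does not specify its reference pitch here):

```python
def pitch_scale(base_hz: float, n_pitches: int = 6, step_cents: float = 50.0):
    """Frequencies of n_pitches spaced step_cents apart, starting at base_hz.
    A cent is 1/1200 of an octave, so one step multiplies by 2**(cents/1200)."""
    ratio = 2.0 ** (step_cents / 1200.0)
    return [base_hz * ratio ** i for i in range(n_pitches)]

# Six pitches from an assumed 440 Hz base; adjacent pitches differ by ~2.9%.
scale = pitch_scale(440.0)
print([round(f, 1) for f in scale])
```

Note that 50 cents is half a semitone, which is what makes the identification task difficult: adjacent pitches are closer together than any interval on a standard keyboard.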


2001, Vol 85 (3), pp. 1107-1118
Author(s): Theodore G. Weyand, Michael Boudreaux, William Guido

Thalamic neurons can exhibit two distinct firing modes: tonic and burst. In the lateral geniculate nucleus (LGN), the tonic mode appears as a relatively faithful relay of visual information from retina to cortex. The function of the burst mode is less understood. Its prevalence during slow-wave sleep (SWS) and linkage to the synchronous cortical electroencephalogram (EEG) suggest that it has an important role during this form of sleep. Although not nearly as common, bursting can also occur during wakefulness. The goal of this study was to identify conditions that affect burst probability and to compare burst incidence during sleeping and waking. LGN neurons are extraordinarily heterogeneous in the degree to which they burst, during both sleeping and waking. Some LGN neurons never burst under any conditions during wakefulness, and several never burst during slow-wave sleep. During wakefulness, <1% of action potentials were associated with bursting, whereas during sleep this fraction jumped to 18%. Although bursting was most common during slow-wave sleep, more than 50% of the bursting originated from 14% of the LGN cells. Bursting during sleep was largely restricted to episodes lasting 1–5 s, with ∼47% of these episodes being rhythmic and in the delta frequency range (0.5–4 Hz). In wakefulness, although visual stimulation accounted for the greatest number of bursts, it was still a small fraction of the total response (4%, 742 bursts/17,744 cycles in 93 cells). We identified two variables that appeared to influence burst probability: the size of the visual stimuli used to elicit responses and behavioral state. Increased stimulus size increased burst probability. We attribute this to the increased influence large stimuli have on a cell's inhibitory mechanisms. As with sleep, a large fraction of bursting originated from a small number of cells. During visual stimulation, 50% of bursting was generated by 9% of neurons.
Increased vigilance was negatively correlated with burst probability. Visual stimuli presented during active fixation (i.e., when the animal must fixate on an overt fixation point) were less likely to produce bursting than when the same visual stimuli were presented with no fixation point present (“passive” fixation). Such observations suggest that even brief departures from attentive states can hyperpolarize neurons sufficiently to de-inactivate the burst mechanism. Our results provide a new view of the temporal structure of bursting during slow-wave sleep, one that supports episodic rhythmic activity in the intact animal. In addition, because bursting could be tied to specific conditions within wakefulness, we suggest that bursting has a specific function within that state.
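Quantities like "fraction of action potentials associated with bursting" depend on an operational burst criterion. The abstract does not restate it, so the sketch below assumes the criterion commonly used in the LGN literature (a burst begins with a spike preceded by at least 100 ms of silence and continues while interspike intervals stay at or below 4 ms); the function name and the toy spike train are illustrative:

```python
def burst_spike_fraction(spike_times_ms, quiet_ms=100.0, max_isi_ms=4.0):
    """Fraction of spikes belonging to low-threshold bursts, assuming the
    common LGN criterion: >= quiet_ms of silence before the first spike,
    then ISIs <= max_isi_ms. A lone spike after silence is not a burst.
    The first spike of the recording is treated as silence-preceded."""
    n_burst = 0
    in_burst = False
    burst_start = -1
    for i, t in enumerate(spike_times_ms):
        if i == 0 or t - spike_times_ms[i - 1] >= quiet_ms:
            in_burst = True        # candidate burst onset after a quiet period
            burst_start = i
        elif in_burst and t - spike_times_ms[i - 1] <= max_isi_ms:
            if i == burst_start + 1:
                n_burst += 1       # onset spike now qualifies as part of a burst
            n_burst += 1           # this follower spike is in the burst
        else:
            in_burst = False       # ISI too long: burst (or candidate) ends
    return n_burst / len(spike_times_ms) if len(spike_times_ms) else 0.0

# Toy train: a 3-spike burst, a lone silence-preceded spike, a 2-spike burst.
print(burst_spike_fraction([0.0, 2.0, 3.0, 200.0, 300.0, 302.0]))
```

On the toy train, five of six spikes fall within bursts; the lone spike at 200 ms is excluded because a burst requires at least two spikes.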


2016, Vol 10 (1), pp. 51-61
Author(s): Xiaoyuan Li, Qiwei Li, Li Shi, Liucheng Jiao

The response properties of individual neurons in the primary visual cortex (V1) are among the most thoroughly described in the mammalian central nervous system, but they reveal less about higher-order processes like visual perception. Neural activity is highly nonlinear and non-stationary over time, greatly complicating the relationships among the spatiotemporal characteristics of visual stimuli, local field potential (LFP) signal components, and the underlying neuronal activity patterns. We applied discrete wavelet transformation to detect new features of the LFP that may better describe the association between visual input and neural ensemble activity. The relative wavelet energy (RWE), wavelet entropy (WS), and the mean WS were computed from LFPs recorded in rat V1 during three distinct visual stimuli: low ambient light, a uniform grey computer screen, and simple pictures of common scenes. The time evolution of the RWE within the γ band (31-62.5 Hz) was the dominant component over certain periods during visual stimulation. Mean WS decreased with increasing complexity of the visual image, and the time-dependent WS alternated between periods of highly ordered and disordered population activity. In conclusion, these alternating periods of high and low WS may correspond to different aspects of visual processing, such as feature extraction and perception.
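The measures named above have compact definitions: the relative wavelet energy (RWE) is the fraction of total signal energy in each decomposition band, p_j = E_j / E_total, and the wavelet entropy (WS) is the Shannon entropy of that distribution. A minimal sketch using a hand-rolled Haar transform (the study used discrete wavelet transformation but does not specify the wavelet; Haar is chosen here only to keep the example self-contained):

```python
import numpy as np

def haar_band_energies(signal, levels):
    """Energy of Haar detail coefficients at each level, plus the final
    approximation energy. Signal length must be divisible by 2**levels."""
    x = np.asarray(signal, dtype=float)
    energies = []
    for _ in range(levels):
        approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # lowpass half-band
        detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # highpass half-band
        energies.append(float(np.sum(detail ** 2)))
        x = approx
    energies.append(float(np.sum(x ** 2)))            # residual approximation
    return np.array(energies)

def relative_wavelet_energy(energies):
    """RWE: fraction of total energy in each band, p_j = E_j / E_total."""
    return energies / energies.sum()

def wavelet_entropy(rwe):
    """Shannon entropy of the RWE distribution: near zero when energy is
    concentrated in one band (ordered activity), large when spread out."""
    p = rwe[rwe > 0]
    return float(-np.sum(p * np.log(p)))
```

For example, a fast alternating signal concentrates all its energy in the finest detail band, giving RWE ≈ [1, 0, ...] and WS ≈ 0, whereas broadband noise spreads energy across bands and yields high WS, matching the paper's reading of low WS as ordered and high WS as disordered population activity.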


2019
Author(s): Clément Vinauger, Floris Van Breugel, Lauren T. Locke, Kennedy K.S. Tobin, Michael H. Dickinson, ...

Summary: Mosquitoes rely on the integration of multiple sensory cues, including olfactory, visual, and thermal stimuli, to detect, identify and locate their hosts [1–4]. Although the role of chemosensory behaviours in mediating mosquito-host interactions is increasingly well understood [1], the role of visual cues remains comparatively less studied [3], and how olfactory and visual information is integrated in the mosquito brain remains unknown. In the present study, we used a tethered-flight LED arena, which allowed for quantitative control over the stimuli, to show that CO2 exposure affects target-tracking responses, but not responses to large-field visual stimuli. In addition, we show that CO2 modulates behavioural responses to visual objects in a time-dependent manner. To gain insight into the neural basis of this olfactory and visual coupling, we conducted two-photon microscopy experiments in a new GCaMP6s-expressing mosquito line. Imaging revealed that the majority of ROIs in the lobula region of the optic lobe exhibited strong responses to small-field stimuli, but showed little response to a large-field stimulus. Approximately 20% of the neurons we imaged were modulated when an attractive odour preceded the visual stimulus; these same neurons also showed a small response when the odour was presented alone. By contrast, imaging in the antennal lobe revealed no modulation when visual stimuli were presented before or after the olfactory stimulus. Together, our results are the first to reveal the dynamics of olfactory modulation in visually evoked behaviours of mosquitoes, and suggest that coupling between these sensory systems is asymmetrical and time-dependent.


2020
Author(s): Liang Liang, Alex Fratzl, Omar El Mansour, Jasmine D.S. Reggiani, Chinfei Chen, ...

Summary: How sensory information is processed by the brain can depend on behavioral state. In the visual thalamus and cortex, arousal/locomotion is associated with changes in the magnitude of responses to visual stimuli. Here, we asked whether such modulation of visual responses might already occur at an earlier stage in this visual pathway. We measured neural activity of retinal axons using wide-field and two-photon calcium imaging in awake mouse thalamus across arousal states associated with different pupil sizes. Surprisingly, visual responses to drifting gratings in retinal axonal boutons were robustly modulated by arousal level, in a manner that varied across stimulus dimensions and across functionally distinct subsets of boutons. At low and intermediate spatial frequencies, the majority of boutons were suppressed by arousal. In contrast, at high spatial frequencies, the proportions of boutons showing enhancement or suppression were more similar, particularly for boutons tuned to regions of visual space ahead of the mouse. Arousal-related modulation also varied with a bouton’s sensitivity to luminance changes and direction of motion, with greater response suppression in boutons tuned to luminance decrements vs. increments, and in boutons preferring motion along directions or axes of optic flow. Together, our results suggest that differential filtering of distinct visual information channels by arousal state occurs at very early stages of visual processing, before the information is transmitted to neurons in visual thalamus. Such early filtering may provide an efficient means of optimizing central visual processing and perception of state-relevant visual stimuli.


2000, Vol 84 (6), pp. 2984-2997
Author(s): Per Jenmalm, Seth Dahlstedt, Roland S. Johansson

Most objects that we manipulate have curved surfaces. We have analyzed how subjects, during a prototypical manipulatory task, use visual and tactile sensory information to adapt fingertip actions to changes in object curvature. Subjects grasped an elongated object at one end using a precision grip and lifted it while instructed to keep it level. The principal load of the grasp was tangential torque due to the location of the center of mass of the object in relation to the horizontal grip axis joining the centers of the opposing grasp surfaces. The curvature strongly influenced the grip forces required to prevent rotational slips. Likewise, the curvature influenced the rotational yield of the grasp that developed under the tangential torque load due to the viscoelastic properties of the fingertip pulps. Subjects scaled the grip forces parametrically with object curvature for grasp stability. Moreover, in a curvature-dependent manner, subjects twisted the grasp around the grip axis by a radial flexion of the wrist to keep the desired object orientation despite the rotational yield. To adapt these fingertip actions to object curvature, subjects could use both vision and tactile sensibility integrated with predictive control. During combined blindfolding and digital anesthesia, however, the motor output failed to predict the consequences of the prevailing curvature. Subjects used vision to identify the curvature for efficient feedforward retrieval of grip force requirements before executing the motor commands. Digital anesthesia caused little impairment of grip force control when subjects had vision available, but the adaptation of the twist became delayed. Visual cues about the form of the grasp surface obtained before contact were used to scale the grip force, whereas the scaling of the twist depended on visual cues related to object movement. Thus subjects apparently relied on different visuomotor mechanisms for adaptation of grip force and grasp kinematics.
In contrast, blindfolded subjects used tactile cues about the prevailing curvature obtained after contact with the object for feedforward adaptation of both grip force and twist. We conclude that humans use both vision and tactile sensibility for feedforward parametric adaptation of grip forces and grasp kinematics to object curvature. Normal control of the twist action, however, requires digital afferent input, and different visuomotor mechanisms support the control of the grasp twist and the grip force. This differential use of vision may have a bearing on the two-stream model of human visual processing.
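The loads described above follow from basic statics: gravity acting on a centre of mass offset from the grip axis produces a tangential torque m·g·d, while supporting the object's weight sets a lower bound on grip force via friction. A sketch with hypothetical numbers (mass, offset, and friction coefficient are illustrative, not values from the study, and the curvature-dependent rotational slip limit the paper studies is not modeled here):

```python
G = 9.81  # gravitational acceleration, m/s^2

def tangential_torque_nm(mass_kg: float, com_offset_m: float) -> float:
    """Tangential torque about the horizontal grip axis produced by gravity
    acting on a centre of mass com_offset_m from that axis, in N*m."""
    return mass_kg * G * com_offset_m

def min_grip_force_n(mass_kg: float, mu: float) -> float:
    """Minimum normal force per digit to prevent linear slip in a two-digit
    precision grip under Coulomb friction. Preventing rotational slip needs
    more force, in the curvature-dependent way the study characterizes."""
    return mass_kg * G / (2.0 * mu)

# Hypothetical: a 0.3 kg object, centre of mass 0.1 m from the grip axis,
# skin-surface friction coefficient ~0.7.
print(round(tangential_torque_nm(0.3, 0.1), 3))  # torque load, N*m
print(round(min_grip_force_n(0.3, 0.7), 2))      # linear-slip bound, N
```

The point of the comparison is that the torque load, unlike object weight, depends on where the object is grasped, which is why curvature at the grasp surfaces matters so much for rotational stability.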

