Spatio-temporal patterns of population responses in the visual cortex under Isoflurane: from wakefulness to loss of consciousness

Author(s):  
Shany Nivinsky Margalit ◽  
Neta Gery Golomb ◽  
Omer Tsur ◽  
Aeyal Raz ◽  
Hamutal Slovin

Abstract: Anesthetic drugs are widely used in medicine and research to mediate loss of consciousness (LOC). Despite the widespread use of anesthesia, how LOC affects cortical sensory processing and the underlying neural circuitry is not well understood. We measured neuronal population activity in the visual cortices of awake and isoflurane-anesthetized mice and compared the visually evoked responses under different levels of consciousness. We used voltage-sensitive dye imaging (VSDI) to characterize the temporal and spatial properties of cortical responses to visual stimuli over a range of states, from wakefulness to deep anesthesia. VSDI enabled measuring neuronal population responses at high spatial (meso-scale) and temporal resolution from several visual regions (V1, extrastriate-lateral (ESL), and extrastriate-medial (ESM)) simultaneously. We found that isoflurane has multiple effects on the population evoked response that augmented with anesthetic depth, with the largest changes occurring at LOC. Isoflurane reduced the response amplitude and prolonged the response latency in all areas. In addition, the intra-areal spatial spread of the visually evoked activity decreased. During visual stimulation, intra-areal and inter-areal correlations between neuronal populations decreased with increasing doses of isoflurane. Finally, while in V1 the majority of changes occurred at higher doses of isoflurane, higher visual areas showed marked changes already at lower doses. In conclusion, our results demonstrate a reverse-hierarchy shutdown of the visual cortical regions: low-dose isoflurane diminishes visually evoked activity in higher visual areas before lower-order areas and causes a reduction in inter-areal connectivity, leading to a disconnected network.
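To make the reported quantities concrete, here is an illustrative sketch (not the authors' analysis code) of how response amplitude, latency, and inter-areal correlation could be computed from area-averaged VSDI time courses; the array names, shapes, baseline convention, and window choices are assumptions of the example.

```python
# Hedged sketch: response metrics from voltage-sensitive dye imaging traces.
# All shapes, thresholds, and windows are illustrative assumptions.
import numpy as np

def evoked_metrics(trace, t, baseline_end=0.0, frac=0.5):
    """Peak amplitude and latency (time to `frac` of peak) of one ROI trace.

    trace : 1D array of dF/F values; t : matching time axis in seconds,
    with stimulus onset at t = 0.
    """
    baseline = trace[t < baseline_end].mean()
    evoked = trace - baseline
    peak = evoked[t >= 0].max()
    # latency = time of the first post-onset sample crossing frac * peak
    crossing = np.where((t >= 0) & (evoked >= frac * peak))[0]
    latency = t[crossing[0]] if crossing.size else np.nan
    return peak, latency

def interareal_correlation(trace_a, trace_b, t, window=(0.0, 0.3)):
    """Pearson correlation of two area-averaged traces in a response window."""
    sel = (t >= window[0]) & (t <= window[1])
    return np.corrcoef(trace_a[sel], trace_b[sel])[0, 1]
```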

2016 ◽  
Vol 23 (2) ◽  
pp. 220-227 ◽  
Author(s):  
Tal Benoliel ◽  
Noa Raz ◽  
Tamir Ben-Hur ◽  
Netta Levin

Background: We have recently suggested that delayed visual evoked potential (VEP) latencies in the fellow eye (FE) of optic neuritis patients reflect a cortical adaptive process that compensates for the delayed arrival of visual information via the affected eye (AE). Objective: To define the cortical mechanism that underlies this adaptive process. Methods: Cortical activations to moving stimuli and connectivity patterns within the visual network were tested using functional magnetic resonance imaging (fMRI) in 11 recovered optic neuritis patients and 11 matched controls. Results: Reduced cortical activation in early, but not in higher, visual areas was seen for both eyes compared to controls. VEP latencies in the AEs inversely correlated with activation in motion-related visual cortices. Inter-eye differences in VEP latencies inversely correlated with cortical activation following FE stimulation, throughout the visual hierarchy. Functional correlation between visual regions was more pronounced for the FE than for the AE. Conclusion: The different correlation patterns between VEP latencies and cortical activation in the AE and FE support a different pathophysiology of VEP prolongation in each eye. The similar cortical activation patterns in both eyes, and the stronger links between early and higher visual areas found following FE stimulation, suggest a cortical modulatory process in the FE.
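As a toy illustration of the brain-behavior correlations reported here, one could compute a Pearson correlation between per-patient VEP latencies and cortical activation estimates (e.g., GLM betas); the variables below are synthetic stand-ins, not the study's data.

```python
# Hedged sketch: inverse latency-activation correlation on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# synthetic stand-ins: 11 patients, latency in ms and an fMRI activation beta
vep_latency_ms = rng.normal(130.0, 15.0, size=11)
activation_beta = -0.01 * vep_latency_ms + rng.normal(0.0, 0.1, size=11)

r, p = stats.pearsonr(vep_latency_ms, activation_beta)
print(f"r = {r:.2f}, p = {p:.3g}")  # a negative r indicates an inverse correlation
```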


2020 ◽  
Author(s):  
Joshua J. Foster ◽  
William Thyer ◽  
Janna W. Wennberg ◽  
Edward Awh

Abstract: Covert spatial attention has a variety of effects on the responses of individual neurons. However, relatively little is known about the net effect of these changes on sensory population codes, even though perception ultimately depends on population activity. Here, we measured the electroencephalogram (EEG) in human observers (male and female), and isolated stimulus-evoked activity that was phase-locked to the onset of attended and ignored visual stimuli. Using an encoding model, we reconstructed spatially selective population tuning functions from the pattern of stimulus-evoked activity across the scalp. Our EEG-based approach allowed us to measure very early visually evoked responses occurring ~100 ms after stimulus onset. In Experiment 1, we found that covert attention increased the amplitude of spatially tuned population responses at this early stage of sensory processing. In Experiment 2, we parametrically varied stimulus contrast to test how this effect scaled with contrast. We found that the effect of attention on the amplitude of spatially tuned responses increased with stimulus contrast, and was well described by an increase in response gain (i.e., a multiplicative scaling of the population response). Together, our results show that attention increases the gain of spatial population codes during the first wave of visual processing.
Significance Statement: We know relatively little about how attention improves population codes, even though perception is thought to critically depend on population activity. In this study, we used an encoding-model approach to test how attention modulates the spatial tuning of stimulus-evoked population responses measured with EEG. We found that attention multiplicatively scales the amplitude of spatially tuned population responses. Furthermore, this effect was present within 100 ms of stimulus onset. Thus, our results show that attention improves spatial population codes by increasing their gain at this early stage of processing.
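The encoding-model analysis described above is commonly implemented as an inverted encoding model (IEM). Here is a minimal sketch under standard assumptions (a raised-cosine channel basis and least-squares weight estimation); it is a generic implementation, not necessarily the authors' exact pipeline.

```python
# Hedged sketch of an inverted encoding model for spatially tuned EEG responses.
# Basis set, channel count, and data shapes are illustrative assumptions.
import numpy as np

def make_basis(n_channels, n_locations):
    """Channel tuning functions: raised cosines tiling stimulus location."""
    centers = np.linspace(0, 2 * np.pi, n_channels, endpoint=False)
    locs = np.linspace(0, 2 * np.pi, n_locations, endpoint=False)
    # half-wave-rectified, exponentiated cosines -> graded, overlapping channels
    return np.maximum(np.cos(locs[None, :] - centers[:, None]), 0) ** 7

def train_iem(B_train, C_train):
    """Estimate electrode weights W solving B = W @ C in least squares.

    B_train : electrodes x trials (evoked EEG amplitudes)
    C_train : channels x trials (predicted channel responses per trial)
    """
    return B_train @ C_train.T @ np.linalg.pinv(C_train @ C_train.T)

def invert_iem(W, B_test):
    """Reconstruct channel responses from held-out data: C = (W'W)^-1 W'B."""
    return np.linalg.pinv(W.T @ W) @ W.T @ B_test
```

Under this scheme, an attentional gain change shows up as a multiplicative scaling of the reconstructed channel-response profile on attended relative to ignored trials.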


2021 ◽  
Author(s):  
João D. Semedo ◽  
Anna I. Jasper ◽  
Amin Zandvakili ◽  
Amir Aschner ◽  
Christian K. Machens ◽  
...  

Abstract: Brain function relies on the coordination of activity across multiple, recurrently connected, brain areas. For instance, sensory information encoded in early sensory areas is relayed to, and further processed by, higher cortical areas and then fed back. However, the way in which feedforward and feedback signaling interact with one another is incompletely understood. Here we investigate this question by leveraging simultaneous neuronal population recordings in early and midlevel visual areas (V1-V2 and V1-V4). Using a dimensionality reduction approach, we find that population interactions are feedforward-dominated shortly after stimulus onset and feedback-dominated during spontaneous activity. The population activity patterns most correlated across areas were distinct during feedforward- and feedback-dominated periods. These results suggest that feedforward and feedback signaling rely on separate “channels”, such that feedback signaling does not directly affect activity that is fed forward.
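One standard dimensionality-reduction approach to inter-areal interactions of this kind is reduced-rank regression from a source population (e.g., V1) to a target population (e.g., V2 or V4). The sketch below is a generic implementation under that assumption, not necessarily the authors' exact method.

```python
# Hedged sketch: reduced-rank regression between two simultaneously
# recorded populations. Data shapes are illustrative assumptions.
import numpy as np

def reduced_rank_regression(X, Y, rank):
    """Predict Y (trials x target neurons) from X (trials x source neurons)
    through a rank-constrained linear mapping. Returns B_rr with Y ~= X @ B_rr.
    """
    # ordinary least-squares fit, then project the mapping onto the top
    # `rank` principal directions of the OLS-predicted target activity
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
    Y_hat = X @ B_ols
    _, _, Vt = np.linalg.svd(Y_hat, full_matrices=False)
    V = Vt[:rank].T          # rank-dimensional subspace in target space
    return B_ols @ V @ V.T   # rank-constrained weight matrix
```

Comparing cross-validated prediction accuracy across ranks indicates how many activity patterns (the “channels” above) carry the inter-areal interaction in each epoch.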


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Eslam Mounier ◽  
Bassem Abdullah ◽  
Hani Mahdi ◽  
Seif Eldawlatly

Abstract: The Lateral Geniculate Nucleus (LGN) represents one of the major processing sites along the visual pathway. Despite its crucial role in processing visual information and its utility as one target for recently developed visual prostheses, it is much less studied compared to the retina and the visual cortex. In this paper, we introduce a deep learning encoder to predict LGN neuronal firing in response to different visual stimulation patterns. The encoder comprises a deep Convolutional Neural Network (CNN) that incorporates the visual stimulus spatiotemporal representation in addition to LGN neuronal firing history to predict the response of LGN neurons. Extracellular activity was recorded in vivo using multi-electrode arrays from single units in the LGN in 12 anesthetized rats, with a total neuronal population of 150 units. Neural activity was recorded in response to single-pixel, checkerboard and geometrical-shape visual stimulation patterns. Extracted firing rates and the corresponding stimulation patterns were used to train the model. The performance of the model was assessed using different testing data sets and different firing rate windows. Overall mean correlation coefficients between the actual and predicted firing rates of 0.57 and 0.7 were achieved for the 10 ms and the 50 ms firing rate windows, respectively. Results demonstrate that the model is robust to variability in the spatiotemporal properties of the recorded neurons, outperforming other examined models including the state-of-the-art Generalized Linear Model (GLM). The results indicate the potential of deep convolutional neural networks as viable models of LGN firing.
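A minimal PyTorch sketch of this kind of encoder, with a convolutional branch for the stimulus clip and a dense branch for the neuron's firing history, might look as follows; the layer sizes, input shapes, and output nonlinearity are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch: CNN encoder combining stimulus frames and spike history.
import torch
import torch.nn as nn

class LGNEncoder(nn.Module):
    def __init__(self, n_frames=10, hist_bins=20):
        super().__init__()
        # stimulus branch: recent frames treated as input channels
        self.conv = nn.Sequential(
            nn.Conv2d(n_frames, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        # firing-history branch
        self.hist = nn.Sequential(nn.Linear(hist_bins, 32), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(32 * 4 * 4 + 32, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Softplus(),  # non-negative predicted rate
        )

    def forward(self, stim, history):
        z = torch.cat([self.conv(stim).flatten(1), self.hist(history)], dim=1)
        return self.head(z)

# toy forward pass: batch of 8 clips (10 frames of 32x32) + 20 history bins
model = LGNEncoder()
rate = model(torch.randn(8, 10, 32, 32), torch.randn(8, 20))
print(rate.shape)  # torch.Size([8, 1])
```

Trained with a regression loss against binned firing rates, evaluation would then correlate predicted and observed rates per window, as in the abstract.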


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Katrina R. Quinn ◽  
Lenka Seillier ◽  
Daniel A. Butts ◽  
Hendrikje Nienborg

Abstract: Feedback in the brain is thought to convey contextual information that underlies our flexibility to perform different tasks. Empirical and computational work on the visual system suggests this is achieved by targeting task-relevant neuronal subpopulations. We combine two tasks, each resulting in selective modulation by feedback, to test whether the feedback reflects the combination of both selectivities. We used a visual feature-discrimination task specified at one of two possible locations, and uncoupled decision formation from the motor plans used to report it, while recording in macaque mid-level visual areas. Here we show that although the behavior is spatially selective, using only task-relevant information, modulation by decision-related feedback is spatially unselective. Population responses reveal similar stimulus-choice alignments irrespective of stimulus relevance. The results suggest a common mechanism across tasks, independent of the spatial selectivity these tasks demand. This may reflect biological constraints and facilitate generalization across tasks. Our findings also support a previously hypothesized link between feature-based attention and decision-related activity.
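One simple way to quantify the stimulus-choice alignment mentioned above is to fit linear decoding axes for the stimulus and for the animal's choice, then measure the angle between them. The sketch below makes that concrete under assumed data shapes; it is not the authors' analysis.

```python
# Hedged sketch: alignment between stimulus and choice decoding axes.
# Data layout (trials x neurons) and logistic decoders are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def axis(X, labels):
    """Unit-norm weight vector of a linear decoder on X (trials x neurons)."""
    w = LogisticRegression(max_iter=1000).fit(X, labels).coef_.ravel()
    return w / np.linalg.norm(w)

def stimulus_choice_alignment(X, stim_labels, choice_labels):
    """Cosine similarity between the stimulus and choice decoding axes
    (1 = identical axes, 0 = orthogonal)."""
    return float(np.abs(axis(X, stim_labels) @ axis(X, choice_labels)))
```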


1998 ◽  
Vol 06 (03) ◽  
pp. 265-279 ◽  
Author(s):  
Shimon Edelman

The paper outlines a computational approach to face representation and recognition, inspired by two major features of biological perceptual systems: graded-profile overlapping receptive fields, and object-specific responses in the higher visual areas. This approach, according to which a face is ultimately represented by its similarities to a number of reference faces, led to the development of a comprehensive theory of object representation in biological vision, and to its subsequent psychophysical exploration and computational modeling.
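A toy version of this similarity-based scheme is easy to state in code: re-represent each face by its graded similarities to a set of reference faces, then recognize by nearest neighbor in similarity space. Everything below (the feature vectors and the Gaussian similarity profile) is an illustrative assumption.

```python
# Hedged sketch: representation by similarities to reference faces.
import numpy as np

def similarity_code(faces, references, sigma=1.0):
    """Map faces (n x d feature vectors) to similarity vectors (n x n_refs)
    using graded, overlapping Gaussian profiles centered on reference faces."""
    d2 = ((faces[:, None, :] - references[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def recognize(probe_code, gallery_codes):
    """Index of the gallery face whose similarity vector best matches the probe."""
    return int(np.argmin(((gallery_codes - probe_code) ** 2).sum(-1)))
```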


2008 ◽  
Vol 20 (7) ◽  
pp. 1847-1872 ◽  
Author(s):  
Mark C. W. van Rossum ◽  
Matthijs A. A. van der Meer ◽  
Dengke Xiao ◽  
Mike W. Oram

Neurons in the visual cortex receive a large amount of input from recurrent connections, yet the functional role of these connections remains unclear. Here we explore networks with strong recurrence in a computational model and show that short-term depression of the synapses in the recurrent loops implements an adaptive filter. This allows the visual system to respond reliably to deteriorated stimuli yet quickly to high-quality stimuli. For low-contrast stimuli, the model predicts long response latencies, whereas latencies are short for high-contrast stimuli. This is consistent with physiological data showing that in higher visual areas, latencies can increase by more than 100 ms at low contrast compared to high contrast. Moreover, when presented with briefly flashed stimuli, the model predicts stereotypical responses that outlast the stimulus, again consistent with physiological findings. The adaptive properties of the model suggest that the abundant recurrent connections found in visual cortex serve to adapt the network's time constant in accordance with the stimulus and to normalize neuronal signals such that processing is as fast as possible while maintaining reliability.
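Below is a minimal rate-model sketch of this mechanism, assuming a single recurrently connected unit with a Tsodyks-Markram-style depression variable on the feedback synapse; all parameter values are illustrative, not the paper's fitted model.

```python
# Hedged sketch: recurrent rate unit with short-term synaptic depression.
import numpy as np

def simulate(inp, dt=1e-3, tau=0.02, w=5.0, tau_rec=0.5, U=0.5):
    """inp : input drive per time step. Returns the firing-rate trace r(t).

    r : rate, relaxing toward rectified (w * x * r + input)
    x : fraction of available synaptic resources (depression variable)
    """
    r, x = 0.0, 1.0
    rates = np.empty(len(inp))
    for i, I in enumerate(inp):
        drive = w * x * r + I
        r += dt * (-r + max(drive, 0.0)) / tau
        x += dt * ((1.0 - x) / tau_rec - U * x * r)
        rates[i] = r
    return rates

# weak vs strong step inputs (stimulus onset at t = 0.1 s)
t = np.arange(0.0, 1.0, 1e-3)
low = simulate(np.where(t > 0.1, 0.2, 0.0))   # "low contrast"
high = simulate(np.where(t > 0.1, 2.0, 0.0))  # "high contrast"
```

Running the two step inputs shows the signature the abstract describes: the strongly driven loop responds almost immediately, while the weakly driven loop charges up slowly through the depressing recurrence, yielding a longer latency.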


1998 ◽  
Vol 79 (2) ◽  
pp. 1017-1044 ◽  
Author(s):  
Kechen Zhang ◽  
Iris Ginzburg ◽  
Bruce L. McNaughton ◽  
Terrence J. Sejnowski

Zhang, Kechen, Iris Ginzburg, Bruce L. McNaughton, and Terrence J. Sejnowski. Interpreting neuronal population activity by reconstruction: unified framework with application to hippocampal place cells. J. Neurophysiol. 79: 1017–1044, 1998. Physical variables such as the orientation of a line in the visual field or the location of the body in space are coded as activity levels in populations of neurons. Reconstruction or decoding is an inverse problem in which the physical variables are estimated from observed neural activity. Reconstruction is useful first in quantifying how much information about the physical variables is present in the population and, second, in providing insight into how the brain might use distributed representations in solving related computational problems such as visual object recognition and spatial navigation. Two classes of reconstruction methods, namely, probabilistic or Bayesian methods and basis function methods, are discussed. They include important existing methods as special cases, such as population vector coding, optimal linear estimation, and template matching. As a representative example for the reconstruction problem, different methods were applied to multi-electrode spike train data from hippocampal place cells in freely moving rats. The reconstruction accuracy of the trajectories of the rats was compared for the different methods. Bayesian methods were especially accurate when a continuity constraint was enforced, and the best errors were within a factor of two of the information-theoretic limit on how accurate any reconstruction can be and were comparable with the intrinsic experimental errors in position tracking. In addition, the reconstruction analysis uncovered some interesting aspects of place cell activity, such as the tendency for erratic jumps of the reconstructed trajectory when the animal stopped running. In general, the theoretical values of the minimal achievable reconstruction errors quantify how accurately a physical variable is encoded in the neuronal population in the sense of mean square error, regardless of the method used for reading out the information. One related result is that the theoretical accuracy is independent of the width of the Gaussian tuning function only in two dimensions. Finally, all the reconstruction methods considered in this paper can be implemented by a unified neural network architecture, which the brain feasibly could use to solve related problems.
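As a concrete illustration of the probabilistic class of methods, a compact Bayesian decoder under independent-Poisson assumptions might look as follows; the tuning-curve array, position grid, and window length are assumptions of the sketch, not the paper's exact implementation.

```python
# Hedged sketch: Bayesian (MAP) reconstruction from population spike counts,
# assuming independent Poisson spiking and known tuning curves.
import numpy as np

def bayesian_decode(counts, tuning, tau, prior=None):
    """Maximum a posteriori estimate of the encoded variable.

    counts : spike count of each of N cells in a window of length tau (s)
    tuning : N x M array, expected rate of each cell at each of M grid points
    Implements P(x | n) propto P(x) * prod_i f_i(x)**n_i * exp(-tau * sum_i f_i(x)).
    """
    log_post = counts @ np.log(tuning + 1e-12) - tau * tuning.sum(axis=0)
    if prior is not None:
        log_post += np.log(prior + 1e-12)
    return int(np.argmax(log_post))  # index into the grid of candidate positions
```

The continuity constraint that made the Bayesian methods especially accurate can be imposed by passing a prior peaked around the previous time step's estimate (e.g., a Gaussian over the grid), rather than a flat prior.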

