stimulus features
Recently Published Documents


TOTAL DOCUMENTS: 396 (FIVE YEARS: 134)

H-INDEX: 44 (FIVE YEARS: 4)

2022
Author(s): Barna Zajzon, David Dahmen, Abigail Morrison, Renato Duarte

Information from the sensory periphery is conveyed to the cortex via structured projection pathways that spatially segregate stimulus features, providing a robust and efficient encoding strategy. Beyond sensory encoding, this prominent anatomical feature extends throughout the neocortex. However, the extent to which it influences cortical processing is unclear. In this study, we combine cortical circuit modeling with network theory to demonstrate that the sharpness of topographic projections acts as a bifurcation parameter, controlling the macroscopic dynamics and representational precision across a modular network. By shifting the balance of excitation and inhibition, topographic modularity gradually increases task performance and improves the signal-to-noise ratio across the system. We show that this is a robust and generic structural feature that enables a broad range of behaviorally relevant operating regimes, and we provide an in-depth theoretical analysis unravelling the dynamical principles underlying the mechanism.


2022, Vol 18 (1), pp. e1009739
Author(s): Nathan C. L. Kong, Eshed Margalit, Justin L. Gardner, Anthony M. Norcia

Task-optimized convolutional neural networks (CNNs) show striking similarities to the ventral visual stream. However, human-imperceptible image perturbations can cause a CNN to make incorrect predictions. Here we provide insight into this brittleness by investigating the representations of models that are either robust or not robust to image perturbations. Theory suggests that the robustness of a system to these perturbations could be related to the power law exponent of the eigenspectrum of its set of neural responses, where power law exponents closer to and larger than one would indicate a system that is less susceptible to input perturbations. We show that neural responses in mouse and macaque primary visual cortex (V1) obey the predictions of this theory, where their eigenspectra have power law exponents of at least one. We also find that the eigenspectra of model representations decay slowly relative to those observed in neurophysiology and that robust models have eigenspectra that decay slightly faster and have higher power law exponents than those of non-robust models. The slow decay of the eigenspectra suggests that substantial variance in the model responses is related to the encoding of fine stimulus features. We therefore investigated the spatial frequency tuning of artificial neurons and found that a large proportion of them preferred high spatial frequencies and that robust models had preferred spatial frequency distributions more aligned with the measured spatial frequency distribution of macaque V1 cells. Furthermore, robust models were quantitatively better models of V1 than non-robust models. Our results are consistent with other findings that there is a misalignment between human and machine perception. They also suggest that it may be useful to penalize slow-decaying eigenspectra or to bias models to extract features of lower spatial frequencies during task-optimization in order to improve robustness and V1 neural response predictivity.
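The eigenspectrum analysis described above reduces to two steps: diagonalize the response covariance and regress log-eigenvalue on log-rank. A minimal sketch on synthetic data (the decay rate, matrix sizes, and rank window below are arbitrary assumptions, and the paper's cross-validated estimator is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "responses" (trials x neurons) whose column variances decay as a
# power law with ground-truth exponent ~1.1; purely illustrative data.
responses = rng.standard_normal((2000, 100)) * (np.arange(1, 101) ** -0.55)

# Eigenspectrum of the response covariance (the PCA variance spectrum).
cov = np.cov(responses, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]

# Fit eigvals[n] ~ n^(-alpha) by linear regression in log-log coordinates,
# skipping the first few components and the noisy tail.
ranks = np.arange(1, eigvals.size + 1)
keep = slice(4, 50)
slope = np.polyfit(np.log(ranks[keep]), np.log(eigvals[keep]), 1)[0]
alpha = -slope
print("estimated power-law exponent:", round(alpha, 2))
```

In this framing, exponents at or above one correspond to the faster-decaying, perturbation-robust regime discussed above, while slowly decaying spectra (small alpha) leave substantial variance in fine, easily perturbed stimulus features.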


2022, Vol 15
Author(s): Auriane Duchemin, Martin Privat, Germán Sumbre

In the presence of moving visual stimuli, the majority of animals follow the Fourier motion energy (luminance), independently of other stimulus features (edges, contrast, etc.). While the behavioral response to Fourier motion has been studied in the past, how Fourier motion is represented and processed by sensory brain areas remains elusive. Here, we investigated how visual moving stimuli with or without the first Fourier component (square-wave signal or missing-fundamental signal) are represented in the main visual regions of the zebrafish brain. First, we monitored the larva's optokinetic response (OKR) induced by square-wave and missing-fundamental signals. Then, we used two-photon microscopy and GCaMP6f zebrafish larvae to monitor neuronal circuit dynamics in the optic tectum and the pretectum. We observed that both the optic tectum and the pretectum circuits responded to the square-wave gratings; however, only the pretectum responded specifically to the direction of the missing-fundamental signal. In addition, a group of neurons in the pretectum responded to the direction of the behavioral output (OKR), independently of the type of stimulus presented. Our results suggest that the optic tectum responds to the different features of the stimulus (e.g., contrast, spatial frequency, direction) but does not respond to the direction of motion when the motion cues are incoherent, as in the missing-fundamental signal, where the luminance motion and the edge/contrast motion disagree. The pretectum, on the other hand, mainly responds to the motion of the stimulus based on its Fourier energy.
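The square-wave versus missing-fundamental manipulation is easy to make concrete: a square wave is a sum of odd sine harmonics (sin f + sin 3f/3 + sin 5f/5 + ...), and the missing-fundamental stimulus simply omits the first term. The sketch below uses an arbitrary harmonic cutoff and sampling, not the paper's exact stimulus parameters:

```python
import numpy as np

def grating(x, harmonics):
    """Sum of odd sine harmonics of a unit-frequency square wave at positions x."""
    return sum(np.sin(2 * np.pi * k * x) / k for k in harmonics)

x = np.linspace(0, 2, 1000, endpoint=False)  # two spatial periods
square = grating(x, harmonics=[1, 3, 5, 7, 9])
missing_fund = grating(x, harmonics=[3, 5, 7, 9])  # fundamental removed

# Stepping such a grating by a quarter cycle of the fundamental moves the
# edges one way, while the 3rd harmonic (now the strongest Fourier component)
# moves a quarter of *its* cycle the other way; this is why edge motion and
# Fourier motion energy disagree for the missing-fundamental signal.
```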


2021, pp. 1-21
Author(s): Daniel Gurman, Colin R. McCormick, Raymond M. Klein

Crossmodal correspondences are defined as associations between stimuli in different modalities based on seemingly irrelevant stimulus features (e.g., bright shapes being associated with high-pitched sounds). There is a large body of research describing auditory crossmodal correspondences involving pitch and volume, but much less involving auditory timbre, the character or quality of a sound. Adeli and colleagues (2014, Front. Hum. Neurosci. 8, 352) found evidence of correspondences between timbre and visual shape. The present study aimed to replicate Adeli et al.'s findings, as well as to identify novel timbre–shape correspondences. Participants were tested using two computerized tasks: an association task, which involved matching shapes to presented sounds based on best perceived fit, and a semantic task, which involved rating shapes and sounds on a number of scales. The analysis of association matches reveals nonrandom selection, with certain stimulus pairs being selected at a much higher frequency. The harsh/jagged and smooth/soft correspondences observed by Adeli et al. were found to be associated with a high level of consistency. Additionally, the high matching frequency of sounds with unstudied timbre characteristics suggests the existence of novel correspondences. Finally, the semantic task was shown to usefully supplement existing crossmodal correspondence assessments: convergent analysis of the semantic and association data demonstrates that the two datasets are significantly correlated (r = −0.36), meaning that stimulus pairs associated with a high level of consensus were more likely to hold similar perceived meaning. The results of this study are discussed in both theoretical and applied contexts.


PLoS ONE, 2021, Vol 16 (12), pp. e0259517
Author(s): Katerina Dolguikh, Tyrus Tracey, Mark R. Blair

Feedback is essential for many kinds of learning, but the cognitive processes involved in learning from feedback are unclear. Models of category learning incorporate selective attention to stimulus features while a response is being generated, but during the feedback phase of an experiment it is assumed that participants take in complete information about the stimulus features as well as the correct category. The present work examines eye-tracking data from six category learning datasets covering a variety of category complexities and types. We find that selective attention to task-relevant information is pervasive throughout feedback processing, suggesting a role for selective attention in the memory encoding of category exemplars. We also find that error trials elicit additional stimulus processing during the feedback phase. Finally, our data reveal that participants increasingly skip the processing of feedback altogether. Taken together, these three findings show that selective attention operates throughout the entire category learning task, reflecting the importance of certain stimulus features, the helpfulness of extra stimulus encoding during times of uncertainty, and the superfluousness of feedback once the task has been learned. We discuss the implications of these findings both for researchers trying to model the full dynamic interaction of selective attention and learning, and for researchers focused on other issues, such as category representation, who need only simplifications that capture learning reasonably well.


2021
Author(s): Tao Yao, Wim Vanduffel

The interplay between task-relevant and task-irrelevant stimulus features induces conflicts that impair human behavioral performance in many perceptual and cognitive tasks, a phenomenon known as the behavioral congruency effect. The neuronal mechanisms underlying behavioral congruency effects, however, are poorly understood. We recorded single-unit activity in monkey frontal cortex using a novel task-switching paradigm and discovered a neuronal congruency effect carried by both task-relevant and task-irrelevant neurons: the former provide more signal, and the latter less noise, in congruent compared to incongruent conditions. Their relative activity levels determine the neuronal congruency effect and behavioral performance. Although these neuronal congruency signals are sensitive to selective attention, they cannot be entirely explained by selective attention as gauged by response time. We propose that such neuronal congruency effects can explain behavioral congruency effects in general, as well as previous fMRI and EEG results in various conflict paradigms.


eLife, 2021, Vol 10
Author(s): Christian H Poth

Intelligent behavior requires acting in accordance with one's goals despite competing action tendencies triggered by stimuli in the environment. For eye movements, it has recently been discovered that this ability is briefly reduced in urgent situations (Salinas et al., 2019). In a time window before an urgent response, participants could not help but look at a suddenly appearing visual stimulus, even though their goal was to look away from it. Urgency seemed to provoke a new visual–oculomotor phenomenon: a period in which saccadic eye movements are dominated by external stimuli and uncontrollable by current goals. This period was assumed to arise from brain mechanisms controlling eye movements and spatial attention, such as those of the frontal eye field. Here, we show that the phenomenon is more general than previously thought: in well-investigated manual tasks as well, urgency made goal-conflicting stimulus features dominate behavioral responses. This dominance of behavior followed established trial-to-trial signatures of cognitive control mechanisms that replicate across a variety of tasks. Together, these findings reveal that urgency temporarily forces stimulus-driven action by overcoming cognitive control in general, not only in the brain mechanisms controlling eye movements.


2021
Author(s): André Forster, Johannes Hewig, John JB Allen, Johannes Rodrigues, Philipp Ziebell, et al.

The lateral frontal cortex serves an important integrative function for converging information from a number of neural networks, providing context and direction to both stimulus processing and the accompanying responses. In emotion-related processing especially, the right hemisphere has often been described as playing a special role, including a particular sensitivity to stochastic learning and model building. In this study, the right inferior frontal gyrus (rIFG) of 41 healthy participants was targeted via ultrasound neuromodulation to shed light on the involvement of this area in the representation of probabilistic context information and the processing of currently presented emotional faces. Analyses reveal that the rIFG does not directly contribute to the processing of currently depicted emotional stimuli but instead provides information about the estimated likelihood of occurrence of stimulus features.


2021
Author(s): Daniel Bennett, Angela Radulescu, Samuel Zorowitz, Valkyrie Felso, Yael Niv

Positive and negative affective states are respectively associated with optimistic and pessimistic expectations regarding future reward. One mechanism that might underlie these affect-related expectation biases is attention to positive- versus negative-valence stimulus features (e.g., attending to the positive reviews of a restaurant versus its expensive price). Here we tested the effects of experimentally induced positive and negative affect on feature-based attention in 120 participants completing a compound-generalization task with eye-tracking. We found that participants' reward expectations for novel compound stimuli were modulated by the affect induction in an affect-congruent way: positive affect increased reward expectations for compounds, whereas negative affect decreased reward expectations. Computational modelling and eye-tracking analyses each revealed that these effects were driven by affect-congruent changes in participants' allocation of attention to high- versus low-value features of compound stimuli. These results provide mechanistic insight into a process by which affect produces biases in generalized reward expectations.
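A toy model of the attentional mechanism described above: the reward expectation for a compound stimulus is an attention-weighted average of its features' learned values, with the weighting biased toward high- or low-value features depending on induced affect. The softmax weighting rule, the bias parameter, and the numbers below are illustrative assumptions, not the authors' fitted computational model:

```python
import numpy as np

def expected_reward(feature_values, attention_bias=0.0, temperature=1.0):
    """Reward expectation for a compound as an attention-weighted mean of its
    features' values. A positive attention_bias shifts weight toward
    high-value features (positive affect); a negative bias shifts weight
    toward low-value features (negative affect)."""
    v = np.asarray(feature_values, dtype=float)
    weights = np.exp(attention_bias * v / temperature)  # softmax over values
    weights /= weights.sum()
    return float(weights @ v)

# A compound with one high-value and one low-value feature:
compound = [0.9, 0.1]
neutral = expected_reward(compound, attention_bias=0.0)    # plain mean, 0.5
positive = expected_reward(compound, attention_bias=2.0)   # optimistic estimate
negative = expected_reward(compound, attention_bias=-2.0)  # pessimistic estimate
```

With zero bias the model reduces to an unweighted average, so the affect manipulation enters purely through the attention weights, mirroring the paper's claim that the expectation biases are attentional in origin.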


eLife, 2021, Vol 10
Author(s): Evan H Lyall, Daniel P Mossing, Scott R Pluta, Yun Wen Chu, Amir Dudai, et al.

How cortical circuits build representations of complex objects is poorly understood. Individual neurons must integrate broadly over space, yet simultaneously obtain sharp tuning to specific global stimulus features. Groups of neurons identifying different global features must then assemble into a population that forms a comprehensive code for these global stimulus properties. Although the logic for how single neurons summate over their spatial inputs has been well-explored in anesthetized animals, how large groups of neurons compose a flexible population code of higher order features in awake animals is not known. To address this question, we probed the integration and population coding of higher order stimuli in the somatosensory and visual cortices of awake mice using two-photon calcium imaging across cortical layers. We developed a novel tactile stimulator that allowed the precise measurement of spatial summation even in actively whisking mice. Using this system, we found a sparse but comprehensive population code for higher order tactile features that depends on a heterogeneous and neuron-specific logic of spatial summation beyond the receptive field. Different somatosensory cortical neurons summed specific combinations of sensory inputs supra-linearly, but integrated other inputs sub-linearly, leading to selective responses to higher order features. Visual cortical populations employed a nearly identical scheme to generate a comprehensive population code for contextual stimuli. These results suggest that a heterogeneous logic of input-specific supra-linear summation may represent a widespread cortical mechanism for the synthesis of sparse higher order feature codes in neural populations. This may explain how the brain exploits the thalamocortical expansion of dimensionality to encode arbitrary complex features of sensory stimuli.
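The heterogeneous, input-specific summation logic described above can be caricatured in a few lines: a unit that sums one pair of inputs through an expansive nonlinearity and another pair through a compressive one responds most strongly to a specific combination of inputs, i.e., a higher-order feature. The grouping and exponents are illustrative assumptions, not values fitted to the data:

```python
import numpy as np

def response(x):
    """Toy unit with input-specific summation: inputs 0-1 are summed
    supra-linearly (expansive), inputs 2-3 sub-linearly (compressive)."""
    x = np.clip(np.asarray(x, dtype=float), 0.0, None)  # rectified drive
    supra = (x[0] + x[1]) ** 2.0   # pair response exceeds sum of singles
    sub = (x[2] + x[3]) ** 0.5     # pair response saturates below it
    return float(supra + sub)

pair_supra = response([1, 1, 0, 0])                              # 4.0
singles_supra = response([1, 0, 0, 0]) + response([0, 1, 0, 0])  # 2.0
pair_sub = response([0, 0, 1, 1])                                # ~1.41
singles_sub = response([0, 0, 1, 0]) + response([0, 0, 0, 1])    # 2.0
```

The supra-linear pair yields more than the sum of its single-input responses while the sub-linear pair yields less, so the unit is selective for one particular conjunction of inputs rather than for total drive.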

