Repetition suppression for visual actions in the macaque superior temporal sulcus

2016 ◽  
Vol 115 (3) ◽  
pp. 1324-1337 ◽  
Author(s):  
Pradeep Kuravi ◽  
Vittorio Caggiano ◽  
Martin Giese ◽  
Rufin Vogels

In many brain areas, repetition of a stimulus usually weakens the neural response. This “adaptation” or repetition suppression effect has been observed with mass potential measures such as event-related potentials (ERPs), in fMRI BOLD responses, and locally with local field potentials (LFPs) and spiking activity. Recently, it has been reported that macaque F5 mirror neurons do not show repetition suppression of their spiking activity for single repetitions of hand actions, which disagrees with human fMRI adaptation studies. This finding also contrasts with numerous studies showing repetition suppression in macaque inferior temporal cortex, including the rostral superior temporal sulcus (STS). Since the latter studies employed static stimuli, we assessed whether the use of dynamic action stimuli abolishes repetition suppression in the awake macaque STS, employing the same hand action movies used when examining adaptation in F5. The upper bank STS neurons showed repetition suppression during the approaching phase of the hand action, which corresponded to the phase of the action to which these neurons responded most strongly overall. The repetition suppression was present for the spiking activity measured in independent single-unit and multiunit recordings as well as for the LFP power at frequencies > 50 Hz. Together with previous data in F5, these findings suggest that adaptation effects differ between F5 mirror neurons and STS neurons.
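As a rough illustration of how such suppression is often quantified (not the authors' own analysis code), the sketch below computes a standard adaptation index from spike counts for first versus repeated presentations; the counts, trial numbers, and response window are invented for the example.

```python
# Minimal sketch: an adaptation index comparing the response to the first
# and the repeated presentation of the same action movie. Positive values
# indicate repetition suppression. Spike counts below are simulated.
import numpy as np

def adaptation_index(first: np.ndarray, repeat: np.ndarray) -> float:
    """(first - repeat) / (first + repeat), using trial-averaged counts."""
    f, r = first.mean(), repeat.mean()
    return (f - r) / (f + r)

rng = np.random.default_rng(0)
# Hypothetical spike counts during the approaching phase of the hand action
first_presentation = rng.poisson(lam=20, size=50)   # 50 trials
second_presentation = rng.poisson(lam=15, size=50)  # suppressed response

print(f"adaptation index: {adaptation_index(first_presentation, second_presentation):.2f}")
```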

2006 ◽  
Vol 18 (5) ◽  
pp. 818-832 ◽  
Author(s):  
O. Hauk ◽  
K Patterson ◽  
A. Woollams ◽  
L. Watling ◽  
F. Pulvermüller ◽  
...  

Using a speeded lexical decision task, event-related potentials (ERPs), and minimum norm current source estimates, we investigated early spatiotemporal aspects of cortical activation elicited by words and pseudowords that varied in their orthographic typicality, that is, in the frequency of their component letter pairs (bigrams) and triplets (trigrams). At around 100 msec after stimulus onset, the ERP pattern revealed a significant typicality effect, where words and pseudowords with atypical orthography (e.g., yacht, cacht) elicited stronger brain activation than items characterized by typical spelling patterns (cart, yart). At ~200 msec, the ERP pattern revealed a significant lexicality effect, with pseudowords eliciting stronger brain activity than words. The two main factors interacted significantly at around 160 msec, where words showed a typicality effect but pseudowords did not. The principal cortical sources of the effects of both typicality and lexicality were localized in the inferior temporal cortex. Around 160 msec, atypical words elicited stronger source currents in the left anterior inferior temporal cortex, whereas the left perisylvian cortex was the site of greater activation to typical words. Our data support distinct but interactive processing stages in word recognition, with surface features of the stimulus being processed before the word as a meaningful lexical entry. The interaction of typicality and lexicality can be explained by integration of information from the early form-based system and lexicosemantic processes.
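A minimal sketch of the typicality measure described above, using a tiny invented lexicon: typicality is taken here as a word's mean token-frequency-weighted bigram count, which is one plausible reading of the bigram/trigram frequency measure; the lexicon, frequencies, and function names are made up for illustration.

```python
# Sketch: orthographic typicality as the mean frequency of a word's letter
# n-grams, estimated from a (hypothetical) word-frequency list.
from collections import Counter

def ngram_counts(lexicon: dict, n: int) -> Counter:
    """Token-frequency-weighted counts of all letter n-grams in the lexicon."""
    counts = Counter()
    for word, freq in lexicon.items():
        for i in range(len(word) - n + 1):
            counts[word[i:i + n]] += freq
    return counts

def typicality(word: str, counts: Counter, n: int) -> float:
    """Mean n-gram frequency of a word; higher values mean more typical spelling."""
    grams = [word[i:i + n] for i in range(len(word) - n + 1)]
    return sum(counts[g] for g in grams) / len(grams)

# Toy lexicon with made-up token frequencies (assumption, for illustration only)
lexicon = {"cart": 120, "card": 150, "part": 200, "yacht": 15, "night": 300}
bigrams = ngram_counts(lexicon, 2)
for w in ("cart", "yart", "yacht", "cacht"):
    print(w, round(typicality(w, bigrams, 2), 1))
```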


2005 ◽  
Vol 17 (6) ◽  
pp. 954-968 ◽  
Author(s):  
Kimihiro Nakamura ◽  
Stanislas Dehaene ◽  
Antoinette Jobert ◽  
Denis Le Bihan ◽  
Sid Kouider

Recent evidence has suggested that the human occipito-temporal region comprises several subregions, each sensitive to a distinct processing level of visual words. To further explore the functional architecture of visual word recognition, we employed a subliminal priming method with functional magnetic resonance imaging (fMRI) during semantic judgments of words presented in two different Japanese scripts, Kanji and Kana. Each target word was preceded by a subliminal presentation of either the same or a different word, in the same or a different script. Behaviorally, word repetition produced significant priming regardless of whether the words were presented in the same or different script. At the neural level, this cross-script priming was associated with repetition suppression in the left inferior temporal cortex anterior and dorsal to the visual word form area hypothesized for alphabetical writing systems, suggesting that cross-script convergence occurred at a semantic level. fMRI also revealed shared visual occipito-temporal activation for words in the two scripts, with slightly more mesial and right-predominant activation for Kanji and with greater occipital activation for Kana. These results thus allow us to separate script-specific and script-independent regions in the posterior temporal lobe, while demonstrating that both can be activated subliminally.
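For illustration only (not the authors' pipeline), the sketch below computes a behavioral priming effect as the reaction-time cost of a different-word prime relative to a same-word prime, separately for same-script and cross-script pairs; all reaction times and condition labels are simulated assumptions.

```python
# Sketch: repetition priming as RT(different prime) - RT(same prime), in ms,
# computed separately for same-script and cross-script prime-target pairs.
import numpy as np

rng = np.random.default_rng(1)

def priming_effect(rt_different: np.ndarray, rt_same: np.ndarray) -> float:
    """Mean RT to unrelated-prime targets minus mean RT to repeated-prime targets."""
    return rt_different.mean() - rt_same.mean()

# Hypothetical RTs (ms) for semantic judgments on Kanji/Kana targets
same_script_repeated   = rng.normal(620, 60, 100)
same_script_unrelated  = rng.normal(650, 60, 100)
cross_script_repeated  = rng.normal(625, 60, 100)
cross_script_unrelated = rng.normal(648, 60, 100)

print("same-script priming :", round(priming_effect(same_script_unrelated, same_script_repeated), 1), "ms")
print("cross-script priming:", round(priming_effect(cross_script_unrelated, cross_script_repeated), 1), "ms")
```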


Author(s):  
Francesco Fabbrini ◽  
Rufin Vogels

The decrease in response with stimulus repetition is a common property observed in many sensory brain areas. This repetition suppression (RS) is ubiquitous in neurons of macaque inferior temporal (IT) cortex, the end-stage of the ventral visual pathway. The neural mechanisms of RS in IT are still unclear, and one possibility is that it is inherited from areas upstream to IT that also show RS. Since neurons in IT have larger receptive fields compared to earlier visual areas, we examined the inheritance hypothesis by presenting adapter and test stimuli at widely different spatial locations along both vertical and horizontal meridians, and across hemifields. RS was present for distances between adapter and test stimuli up to 22°, and when the two stimuli were presented in different hemifields. Also, we examined the position tolerance of the stimulus selectivity of adaptation by comparing the responses to a test stimulus following the same adapter (repetition trial) or a different adapter (alternation trial) presented at a different position from the test stimulus. Stimulus-selective adaptation was still present and consistently stronger in the later phase of the response for distances up to 18°. Finally, we observed stimulus-selective adaptation in repetition trials even without a measurable excitatory response to the adapter stimulus. To accommodate these and previous data, we propose that at least part of the stimulus-selective adaptation in IT is based on short-term plasticity mechanisms within IT and/or reflects top-down activity from areas downstream to IT.
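The repetition/alternation comparison can be sketched as below with simulated spike counts: stimulus-selective adaptation is taken as the difference between the test response after a different adapter (alternation) and after the same adapter (repetition), computed per adapter-test distance; the distances, rates, and trial counts are illustrative, not the recorded data.

```python
# Sketch: stimulus-selective adaptation = mean(alternation) - mean(repetition)
# test responses, evaluated at several adapter-test separations.
import numpy as np

rng = np.random.default_rng(2)
distances_deg = [0, 6, 12, 18, 22]   # adapter-test separation (example values)

for d in distances_deg:
    # Hypothetical counts: suppression that weakens slightly with distance
    repetition = rng.poisson(lam=max(8, 14 - 0.2 * d), size=60)
    alternation = rng.poisson(lam=16, size=60)
    selective_adaptation = alternation.mean() - repetition.mean()
    print(f"{d:>2} deg: alternation - repetition = {selective_adaptation:.2f} spikes")
```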


2008 ◽  
Vol 20 (7) ◽  
pp. 1235-1249 ◽  
Author(s):  
Roel M. Willems ◽  
Aslı Özyürek ◽  
Peter Hagoort

Understanding language always occurs within a situational context and, therefore, often implies combining streams of information from different domains and modalities. One such combination is that of spoken language and visual information, which are perceived together in a variety of ways during everyday communication. Here we investigate whether and how words and pictures differ in terms of their neural correlates when they are integrated into a previously built-up sentence context. This is assessed in two experiments looking at the time course (measuring event-related potentials, ERPs) and the locus (using functional magnetic resonance imaging, fMRI) of this integration process. We manipulated the ease of semantic integration of a word and/or picture with a previous sentence context to increase the semantic load of processing. In the ERP study, an increased semantic load led to an N400 effect that was similar for pictures and words in terms of latency and amplitude. In the fMRI study, we found overlapping activations to both picture and word integration in the left inferior frontal cortex. Specific activations for the integration of a word were observed in the left superior temporal cortex. We conclude that despite obvious differences in representational format, semantic information coming from pictures and words is integrated into a sentence context in similar ways in the brain. This study adds to the growing insight that the language system incorporates (semantic) information coming from linguistic and extralinguistic domains with the same neural time course and by recruitment of overlapping brain areas.
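A minimal sketch, on simulated data, of how an N400 effect is typically quantified: the mean amplitude difference between hard- and easy-to-integrate conditions in a 300-500 ms window; the sampling rate, window, and waveforms are assumptions made for the example, not the study's recordings.

```python
# Sketch: N400 effect = mean amplitude (anomalous) - mean amplitude (congruent)
# in a 300-500 ms window, on simulated average ERPs.
import numpy as np

rng = np.random.default_rng(3)
fs = 500                                   # sampling rate (Hz), assumed
t = np.arange(-0.2, 0.8, 1 / fs)           # epoch from -200 to 800 ms

def simulated_erp(n400_amp: float) -> np.ndarray:
    """Average ERP with a negative deflection peaking near 400 ms, plus noise."""
    n400 = n400_amp * np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))
    return n400 + rng.normal(0, 0.2, t.size)

erp_anomalous = simulated_erp(-4.0)        # hard-to-integrate word/picture
erp_congruent = simulated_erp(-1.0)        # easy-to-integrate word/picture

window = (t >= 0.3) & (t <= 0.5)
n400_effect = erp_anomalous[window].mean() - erp_congruent[window].mean()
print(f"N400 effect (anomalous - congruent): {n400_effect:.2f} microvolts")
```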


2017 ◽  
Author(s):  
Daniel Feuerriegel ◽  
Owen Churches ◽  
Scott Coussens ◽  
Hannah A. D. Keage

Repeated stimulus presentation leads to complex changes in cortical neuron response properties, commonly known as repetition suppression or stimulus-specific adaptation. Circuit-based models of repetition suppression provide a framework for investigating patterns of repetition effects that propagate through cortical hierarchies. To further develop such models it is critical to determine whether (and if so, when) repetition effects are modulated by top-down influences, such as those related to perceptual expectation. We investigated this by presenting pairs of repeated and alternating face images, and orthogonally manipulating expectations regarding the likelihood of stimulus repetition. Event-related potentials (ERPs) were recorded from n = 39 healthy adults, to map the spatiotemporal progression of stimulus repetition and expectation effects, and interactions between these factors, using mass univariate analyses. We also tested whether the ability to predict unrepeated (compared to repeated) face identities could influence the magnitude of observed repetition effects, by presenting separate blocks with predictable and unpredictable alternating faces. Multiple repetition and expectation effects were identified between 99-800 ms from stimulus onset, which did not statistically interact at any point. Repetition effects in blocks with predictable alternating faces were smaller than in unpredictable alternating face blocks between 117-179 ms and 506-652 ms, and larger between 246-428 ms. ERP repetition effects appear not to be modulated by perceptual expectations, supporting separate mechanisms for repetition and expectation suppression. However, previous studies that aimed to test for repetition effects, in which the repeated (but not unrepeated) stimulus was predictable, are likely to have conflated repetition and stimulus predictability effects.

Highlights
- ERP face image repetition effects were apparent between 99-800 ms from stimulus onset
- Expectations of stimulus image properties did not modulate face repetition effects
- The predictability of unrepeated stimuli influenced repetition effect magnitudes
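As a simplified illustration of the mass univariate approach mentioned above (one simulated electrode, a paired t-test at each time point, no cluster-level or other multiple-comparison correction, so not the authors' exact analysis):

```python
# Sketch: time-point-wise paired t-tests contrasting repeated vs. alternating
# face ERPs across simulated subjects; thresholded at an uncorrected p < .05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_subjects, n_times = 39, 500
times_ms = np.linspace(0, 800, n_times)

# Simulated subject-average ERPs with a repetition effect between ~100-300 ms
effect = np.where((times_ms > 100) & (times_ms < 300), 1.0, 0.0)
repeated    = rng.normal(0, 1, (n_subjects, n_times))
alternating = rng.normal(0, 1, (n_subjects, n_times)) + effect

t_vals, p_vals = stats.ttest_rel(alternating, repeated, axis=0)
significant = p_vals < 0.05                # uncorrected threshold, for illustration
print(f"time points with p < .05: {significant.sum()} / {n_times}")
```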


2021 ◽  
Vol 15 ◽  
Author(s):  
Junbo Wang ◽  
Jiahao Liu ◽  
Kaiyin Lai ◽  
Qi Zhang ◽  
Yiqing Zheng ◽  
...  

The mechanism underlying visual-induced auditory interaction is still under discussion. Here, we provide evidence that the mirror mechanism underlies visual–auditory interactions. In this study, visual stimuli were divided into two major groups—mirror stimuli that were able to activate mirror neurons and non-mirror stimuli that were not able to activate mirror neurons. The two groups were further divided into six subgroups as follows: visual speech-related mirror stimuli, visual speech-irrelevant mirror stimuli, and non-mirror stimuli with four different luminance levels. Participants were 25 children with cochlear implants (CIs) who underwent event-related potential (ERP) recording and a speech recognition task. The main results were as follows: (1) there were significant differences in P1, N1, and P2 ERPs between mirror stimuli and non-mirror stimuli; (2) these ERP differences between mirror and non-mirror stimuli were partly driven by Brodmann areas 41 and 42 in the superior temporal gyrus; (3) ERP component differences between visual speech-related mirror and non-mirror stimuli were partly driven by Brodmann area 39 (visual speech area), which was not observed when comparing the visual speech-irrelevant stimulus and non-mirror groups; and (4) ERPs evoked by visual speech-related mirror stimuli had more components correlated with speech recognition than ERPs evoked by non-mirror stimuli, while ERPs evoked by speech-irrelevant mirror stimuli were not significantly different from those induced by the non-mirror stimuli. These results indicate the following: (1) mirror and non-mirror stimuli differ in their associated neural activation; (2) the visual–auditory interaction possibly led to ERP differences, as Brodmann areas 41 and 42 constitute the primary auditory cortex; (3) mirror neurons could be responsible for the ERP differences, considering that Brodmann area 39 is associated with processing information about speech-related mirror stimuli; and (4) ERPs evoked by visual speech-related mirror stimuli could better reflect speech recognition ability. These results support the hypothesis that a mirror mechanism underlies visual–auditory interactions.
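A hedged sketch of one way to relate ERP components to speech recognition scores, as in point (4) above: mean amplitudes in conventional P1/N1/P2 windows are correlated with per-child scores. The windows, the simulated data, and the built-in P2/score relationship are all assumptions for the example, not the study's measurements.

```python
# Sketch: extract P1/N1/P2 mean amplitudes in fixed time windows and correlate
# each with speech recognition scores (Pearson r), on simulated ERPs.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_children, fs = 25, 500
t = np.arange(0, 0.5, 1 / fs)              # 0-500 ms epoch

windows = {"P1": (0.08, 0.13), "N1": (0.13, 0.20), "P2": (0.20, 0.30)}  # assumed
speech_scores = rng.uniform(40, 95, n_children)       # hypothetical % correct

# Simulated ERPs whose P2 amplitude covaries with the speech score
erps = rng.normal(0, 0.5, (n_children, t.size))
p2_mask = (t >= 0.20) & (t <= 0.30)
erps[:, p2_mask] += (speech_scores[:, None] - 65) / 30

for name, (lo, hi) in windows.items():
    mask = (t >= lo) & (t <= hi)
    amps = erps[:, mask].mean(axis=1)
    r, p = stats.pearsonr(amps, speech_scores)
    print(f"{name}: r = {r:+.2f}, p = {p:.3f}")
```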


2019 ◽  
Vol 122 ◽  
pp. 76-87 ◽  
Author(s):  
Daniel Feuerriegel ◽  
Owen Churches ◽  
Scott Coussens ◽  
Hannah A.D. Keage

2017 ◽  
Vol 79 (8) ◽  
pp. 2396-2411 ◽  
Author(s):  
Flóra Bodnár ◽  
Domonkos File ◽  
István Sulykos ◽  
Krisztina Kecskés-Kovács ◽  
István Czigler

2017 ◽  
Vol 79 (8) ◽  
pp. 2642-2642
Author(s):  
Flóra Bodnár ◽  
Domonkos File ◽  
István Sulykos ◽  
Krisztina Kecskés-Kovács ◽  
István Czigler
