Add a picture for suspense: neural correlates of the interaction between language and visual information in the perception of fear

2010 · Vol 6 (4) · pp. 404–416
Author(s):  
Roel M. Willems ◽  
Krien Clevis ◽  
Peter Hagoort

2018
Author(s):  
Benjamin J Clark ◽  
Christine M. Simmons ◽  
Laura Berkowitz ◽  
Aaron A. Wilber

The retrosplenial cortex is anatomically positioned to integrate sensory, motor, and visual information and is thought to play an important role in processing spatial information and guiding behavior through complex environments. Anatomical and theoretical work has argued that the retrosplenial cortex participates in spatial behavior in concert with its primary input, the parietal cortex. Although the nature of this interaction is unknown, the prevailing view is that the functional connectivity is hierarchical, with egocentric spatial information processed in the parietal cortex and higher-level allocentric mappings generated in the retrosplenial cortex. Here, we review the evidence supporting this proposal. We begin by summarizing the key anatomical features of the retrosplenial-parietal network and then review studies investigating the neural correlates of these regions during spatial behavior. Our summary of this literature suggests that the retrosplenial-parietal circuitry does not represent a strict hierarchical parcellation of function between the two regions, but instead a heterogeneous mixture of egocentric-allocentric coding and integration across frames of reference. We also suggest that this circuitry is better described as a gradient of egocentric-to-allocentric information processing from parietal to retrosplenial cortices, with more specialized encoding of global allocentric frameworks in the retrosplenial cortex and more specialized egocentric and local allocentric representations in the parietal cortex. We conclude by identifying the major gaps in this literature and suggesting new avenues for research.


2018 · Vol 46 (1-2) · pp. 50–59
Author(s):  
Stefan Van der Stigchel ◽  
Jeroen de Bresser ◽  
Rutger Heinen ◽  
Huiberdina L. Koek ◽  
Yael D. Reijmer ◽  
...  

Deficits in copying (“constructional apraxia”) are generally regarded as multifaceted, but the exact neural correlates of the different types of copying errors are unknown. To assess whether the different categories of errors on the pentagon drawing relate to distinct neural correlates, we examined the pentagon drawings of the Mini-Mental State Examination (MMSE) in persons with subjective cognitive complaints, mild cognitive impairment, or early dementia due to Alzheimer’s disease. We adopted a qualitative scoring method for the pentagon copy test (QSPT), which categorizes different possible copying errors rather than scoring performance dichotomously as “correct” or “incorrect.” We then correlated (regional) gray matter volumes with performance on the different QSPT categories. Results showed that the total QSPT score was specifically associated with parietal gray matter volume and not with frontal, temporal, or occipital gray matter volume. A more fine-grained analysis of the errors revealed that the intersection score and the number of angles share their underlying neural correlates and are associated with specific subregions of the parietal cortex. These results are in line with the idea that constructional apraxia can be attributed to a failure to correctly integrate visual information from one fixation to the next, a process called spatial remapping.


2018 · Vol 21
Author(s):  
Jorge Iglesias-Fuster ◽  
Daniela Piña-Novo ◽  
Marlis Ontivero-Ortega ◽  
Agustín Lage-Castellanos ◽  
Mitchell Valdés-Sosa

The attentional selection of different hierarchical levels within compound (Navon) figures has been studied with event-related potentials (ERPs) by contrasting the ERPs obtained during attention to the global or the local level. These studies, using the canonical Navon figures, have produced contradictory results, with doubts remaining about the scalp distribution of the effects. Moreover, the evidence about the temporal evolution of the processing of these two levels is not clear. Here, we unveiled global and local letters at distinct times, which enabled separation of their ERP responses. We combined this approach with the temporal generalization method, a novel multivariate technique that facilitates exploring the temporal structure of these ERPs. Opposite lateralization patterns were obtained for the selection negativities generated when attending to global and local distractors (D statistics, p < .005), with maxima in right and left occipito-temporal scalp regions, respectively (η2 = .111, p < .01; η2 = .042, p < .04). However, both discrimination negativities elicited when comparing targets and distractors at the global or the local level were lateralized to the left hemisphere (η2 = .25, p < .03 and η2 = .142, p < .05, respectively). Recurrent activation patterns were found for both global and local stimuli, with scalp topographies corresponding to early preparatory stages reemerging during the attentional selection process, indicating recursive attentional activation. This implies that selective attention to global and local hierarchical levels recycles similar neural correlates at different time points. These neural correlates appear to be mediated by visual extrastriate areas.
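The temporal generalization method used here can be illustrated with a minimal sketch on synthetic data (this is not the authors' pipeline; the nearest-class-mean decoder and all numbers below are invented for illustration): a decoder is trained on the sensor pattern at each time point and tested at every time point, yielding a train-time × test-time accuracy matrix whose off-diagonal structure reveals neural patterns that re-emerge later in the epoch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "ERP" data: trials x sensors x time points, two stimulus classes.
# The class-specific pattern is injected in two separate time windows to
# mimic an early pattern that re-emerges later (recurrent activation).
n_trials, n_sensors, n_times = 200, 32, 40
X = rng.normal(size=(n_trials, n_sensors, n_times))
y = rng.integers(0, 2, size=n_trials)
pattern = rng.normal(size=n_sensors)
for t in list(range(5, 10)) + list(range(25, 30)):  # early + late windows
    X[y == 1, :, t] += pattern

def temporal_generalization(X, y, split=0.5):
    """Train a nearest-class-mean decoder at each time point and test it at
    every time point, returning a (train_time x test_time) accuracy matrix."""
    cut = int(X.shape[0] * split)
    Xtr, ytr, Xte, yte = X[:cut], y[:cut], X[cut:], y[cut:]
    n_times = X.shape[2]
    acc = np.zeros((n_times, n_times))
    for t_train in range(n_times):
        m0 = Xtr[ytr == 0, :, t_train].mean(axis=0)
        m1 = Xtr[ytr == 1, :, t_train].mean(axis=0)
        for t_test in range(n_times):
            d0 = np.linalg.norm(Xte[:, :, t_test] - m0, axis=1)
            d1 = np.linalg.norm(Xte[:, :, t_test] - m1, axis=1)
            pred = (d1 < d0).astype(int)
            acc[t_train, t_test] = (pred == yte).mean()
    return acc

acc = temporal_generalization(X, y)
# A decoder trained in the early window also decodes the late window
# (off-diagonal generalization), signalling a re-used neural pattern,
# while time points without the pattern stay at chance.
print(acc[7, 7], acc[7, 27], acc[7, 0])
```

Above-chance accuracy off the diagonal (here, training at an early time and testing at a late one) is what licenses the "recycled neural correlates" interpretation in the abstract.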


2011 · Vol 49 (7) · pp. 1730–1740
Author(s):  
Willem Huijbers ◽  
Cyriel M.A. Pennartz ◽  
David C. Rubin ◽  
Sander M. Daselaar

2018 · Vol 30 (2) · pp. 261–272
Author(s):  
Holger Wiese ◽  
Simone C. Tüttenberg ◽  
Brandon T. Ingram ◽  
Chelsea Y. X. Chan ◽  
Zehra Gurbuz ◽  
...  

Humans are remarkably accurate at recognizing familiar faces, whereas their ability to recognize, or even match, unfamiliar faces is much poorer. However, previous research has failed to identify neural correlates of this striking behavioral difference. Here, we found a clear difference in brain potentials elicited by highly familiar faces versus unfamiliar faces. This effect starts 200 ms after stimulus onset and reaches its maximum at 400 to 600 ms. This sustained-familiarity effect was substantially larger than previous candidates for a neural familiarity marker and was detected in almost all participants, representing a reliable index of high familiarity. Whereas its scalp distribution was consistent with a generator in the ventral visual pathway, its modulation by repetition and degree of familiarity suggests an integration of affective and visual information.


2018 · Vol 30 (7) · pp. 951–962
Author(s):  
Sharon Gilad-Gutnick ◽  
Elia Samuel Harmatz ◽  
Kleovoulos Tsourides ◽  
Galit Yovel ◽  
Pawan Sinha

We report here an unexpectedly robust ability of healthy human individuals (n = 40) to recognize extremely distorted, needle-like facial images, challenging the well-entrenched notion that veridical spatial configuration is necessary for extracting facial identity. In face identification tasks of parametrically compressed internal and external features, we found that the sum of performances on each cue falls significantly short of performance on full faces, despite the equal visual information available from both measures (with full faces essentially being a superposition of internal and external features). We hypothesize that this large deficit stems from the use of relational information about how the internal features are positioned relative to the external features. To test this, we systematically changed the relations between internal and external features and found preferential encoding of vertical but not horizontal spatial relationships in facial representations (n = 20). Finally, we employed magnetoencephalography (n = 20) to demonstrate a close mapping between the behavioral psychometric curve and the amplitude of the M250 face-familiarity evoked response field component, but not the M170 face-sensitive component, providing evidence that the M250 can be modulated by faces that are perceptually identifiable, irrespective of extreme distortions to the face's veridical configuration. We theorize that the tolerance to compressive distortions has evolved from the need to recognize faces across varying viewpoints. Our findings help clarify the important, but poorly defined, concept of facial configuration and also enable an association between behavioral performance and previously reported neural correlates of face perception.


2009 · Vol 23 (2) · pp. 63–76
Author(s):  
Silke Paulmann ◽  
Sarah Jessen ◽  
Sonja A. Kotz

The multimodal nature of human communication has been well established. Yet few empirical studies have systematically examined the widely held belief that multimodal perception is facilitated in comparison to unimodal or bimodal perception. In the current experiment we first explored the processing of unimodally presented facial expressions. Auditory (prosodic and/or lexical-semantic) information was then presented together with the visual information to investigate the processing of bimodal (facial and prosodic cues) and multimodal (facial, lexical, and prosodic cues) human communication. Participants engaged in an identity identification task while event-related potentials (ERPs) were recorded to examine early processing mechanisms as reflected in the P200 and N300 components. While the former component has repeatedly been linked to the processing of physical stimulus properties, the latter has been linked to more evaluative, “meaning-related” processing. A systematic relationship between P200 and N300 amplitude and the number of information channels present was found: the multimodal condition elicited the smallest P200 and N300 amplitudes, followed by larger amplitudes in the bimodal condition, with the largest amplitudes observed for the unimodal condition. These data suggest that multimodal information induces clear facilitation in comparison to unimodal or bimodal information. The advantage of multimodal perception, as reflected in the P200 and N300 components, may thus reflect one of the mechanisms allowing for fast and accurate information processing in human communication.


Author(s):  
Weiyu Zhang ◽  
Se-Hoon Jeong ◽  
Martin Fishbein†

This study investigates how multitasking interacts with levels of sexually explicit content to influence an individual’s ability to recognize TV content. A 2 (multitasking vs. non-multitasking) by 3 (low, medium, and high sexual content) between-subjects experiment was conducted. The analyses revealed that multitasking not only impaired task performance, but also decreased TV recognition. An inverted-U relationship between the degree of sexually explicit content and recognition of TV content was found, but only when subjects were multitasking. In addition, multitasking interfered with subjects’ ability to recognize auditory information more than their ability to recognize visual information.
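An inverted-U trend in a design like this one can be detected by fitting a quadratic of recognition on content level within each multitasking condition; a reliably negative quadratic coefficient indicates the inverted U. A minimal sketch with simulated scores (all group means, sample sizes, and noise levels below are invented for illustration and are not the study's data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated recognition scores for the 2 (multitasking) x 3 (content level)
# between-subjects design; cell means are invented for illustration only.
levels = np.array([0, 1, 2])                 # low, medium, high sexual content
means_multi = np.array([0.55, 0.70, 0.50])   # inverted U under multitasking
means_single = np.array([0.75, 0.76, 0.74])  # roughly flat without multitasking
n_per_cell = 40

def quadratic_coef(cell_means, n_per_cell, noise_sd=0.05):
    """Simulate subjects per cell and return the fitted quadratic coefficient
    of recognition regressed on content level."""
    xs, ys = [], []
    for level, m in zip(levels, cell_means):
        xs.append(np.full(n_per_cell, level))
        ys.append(m + rng.normal(0, noise_sd, n_per_cell))
    x = np.concatenate(xs)
    y = np.concatenate(ys)
    b2, b1, b0 = np.polyfit(x, y, 2)  # coefficients, highest degree first
    return b2

b2_multi = quadratic_coef(means_multi, n_per_cell)
b2_single = quadratic_coef(means_single, n_per_cell)
print(b2_multi, b2_single)
```

The multitasking condition yields a clearly negative quadratic coefficient (the inverted U), while the non-multitasking condition's coefficient stays near zero, mirroring the pattern the abstract reports.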

