Eye Can See What You Want: Posterior Intraparietal Sulcus Encodes the Object of an Actor's Gaze

2011 · Vol 23 (11) · pp. 3400-3409
Author(s): Richard Ramsey, Emily S. Cross, Antonia F. de C. Hamilton

In a social setting, seeing Sally look at a clock means something different from seeing her gaze longingly at a slice of chocolate cake. In both cases, her eyes and face might be turned rightward, but the information conveyed is markedly different, depending on the object of her gaze. Numerous studies have examined brain systems underlying the perception of gaze direction, but less is known about the neural basis of perceiving gaze shifts to specific objects. During fMRI, participants observed an actor look toward one of two objects, each occupying a distinct location. Video stimuli were sequenced to obtain repetition suppression (RS) for object identity, independent of spatial location. In a control condition, a spotlight highlighted one of the objects, but no actor was present. Observation of the human actor's gaze compared with the spotlight engaged frontal, parietal, and temporal cortices, consistent with a broad action observation network. RS for the gazed object in the human condition was found in posterior intraparietal sulcus (pIPS). RS for the highlighted object in the spotlight condition was found in middle occipital, inferior temporal, medial fusiform gyri, and superior parietal lobule. These results suggest that human pIPS is specifically sensitive to the type of object that an observed actor looks at (tool vs. food), irrespective of the observed actor's gaze location (left vs. right). A general attention or lower-level object feature processing mechanism cannot account for the findings because a very different response pattern was seen in the spotlight control condition. Our results suggest that, in addition to spatial orienting, human pIPS has an important role in object-centered social orienting.
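The repetition suppression logic can be summarized with a toy analysis: if a region encodes the identity of the gazed-at object, its response should be lower when that object repeats across trials than when it changes. Below is a minimal sketch in Python, assuming participant-wise summary beta estimates for a region such as pIPS are already available; all names and numbers are illustrative, not data from the study.

```python
# Illustrative repetition suppression (RS) contrast on simulated ROI estimates.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated mean betas per participant: trials where the gazed-at object is
# novel vs. trials where it repeats the previous object identity.
n_subjects = 20
novel_betas = rng.normal(loc=0.60, scale=0.25, size=n_subjects)
repeat_betas = rng.normal(loc=0.45, scale=0.25, size=n_subjects)  # suppressed response

# RS is indexed by a reliably lower response to repeated than to novel objects.
rs_effect = novel_betas - repeat_betas
t_stat, p_val = stats.ttest_rel(novel_betas, repeat_betas)

print(f"Mean RS effect (novel - repeated): {rs_effect.mean():.3f}")
print(f"Paired t-test: t({n_subjects - 1}) = {t_stat:.2f}, p = {p_val:.4f}")
```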

2011 · Vol 29 (supplement) · pp. 352-377
Author(s): Seon Hee Jang, Frank E. Pollick

The study of dance has helped advance our understanding of how human brain networks for action observation are influenced by experience. However, previous studies have not examined the effect of extensive visual experience alone: for example, an art critic or dance fan who has rich experience of watching dance but negligible experience performing it. To explore the effect of pure visual experience, we performed a single experiment using functional Magnetic Resonance Imaging (fMRI) to compare the neural processing of dance actions in three groups: a) 14 ballet dancers, b) 10 experienced viewers, and c) 12 novices without any extensive dance or viewing experience. Each of the 36 participants viewed short 2-second displays of ballet derived from motion capture of a professional ballerina. These displays represented the ballerina as only points of light at the major joints. We wished to study the action observation network broadly and thus included two types of display and two tasks for participants to perform. The two displays were: a) brief movies of a ballet action and b) frames from the ballet movies with the points of light connected by lines to show a ballet posture. The two tasks were: a) passively observe the display and b) imagine performing the action depicted in the display. The two levels of display and task were combined factorially to produce four experimental conditions (observe movie, observe posture, motor imagery of movie, motor imagery of posture). The set of stimuli used in the experiment is available for download following this paper. A random effects ANOVA was performed on brain activity, and an effect of experience was obtained in seven brain areas, including right Temporoparietal Junction (TPJ), left Retrosplenial Cortex (RSC), right Primary Somatosensory Cortex (S1), bilateral Primary Motor Cortex (M1), right Orbitofrontal Cortex (OFC), and right Temporal Pole (TP). The patterns of activation were plotted in each of these areas (TPJ, RSC, S1, M1, OFC, TP) to investigate more closely how the effect of experience changed across them. For this analysis, novices were treated as baseline and the relative effect of experience was examined in the dancer and experienced viewer groups. Interpretation of these results suggests that visual and motor experience are equally effective in producing more extensive processing of dance actions at early stages of representation (TPJ and RSC), and we hypothesise that this could be due to the involvement of autobiographical memory processes. The pattern of results found for dancers in S1 and M1 suggests that their perception of dance actions is enhanced by embodied processes; for example, the S1 results are consistent with claims that this brain area shows mirror properties. The pattern of results found for the experienced viewers in OFC and TP suggests that their perception of dance actions is enhanced by cognitive processes, for example aspects of social cognition and hedonic processing: the experienced viewers may find the motor imagery task more pleasant and have richer connections between dance and social memory. While aspects of our interpretation are speculative, the core results clearly show common and distinct aspects of how viewing experience and physical experience shape brain responses to watching dance.
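As a rough illustration of the group-level analysis described above, the sketch below runs a one-way between-subjects ANOVA on a single ROI's summary activation values for the three experience groups, then expresses the dancer and viewer means relative to the novice baseline. The group sizes follow the abstract, but the activation values are simulated, so this is only a schematic of the approach, not the study's pipeline.

```python
# Schematic random-effects group comparison for one ROI (e.g., right TPJ).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# One summary activation value (contrast estimate) per participant; values are simulated.
dancers = rng.normal(loc=0.8, scale=0.3, size=14)
viewers = rng.normal(loc=0.7, scale=0.3, size=10)
novices = rng.normal(loc=0.4, scale=0.3, size=12)

# One-way between-subjects ANOVA: does ROI activation differ by experience group?
f_stat, p_val = stats.f_oneway(dancers, viewers, novices)
print(f"F(2, {14 + 10 + 12 - 3}) = {f_stat:.2f}, p = {p_val:.4f}")

# Express each experienced group relative to the novice baseline, as in the follow-up plots.
baseline = novices.mean()
for name, group in [("dancers", dancers), ("experienced viewers", viewers)]:
    print(f"{name}: activation relative to novices = {group.mean() - baseline:+.3f}")
```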


Author(s): Gloria Pizzamiglio, Zuo Zhang, James Kolasinski, Jane M. Riddoch, Richard E. Passingham, ...

2016 · Vol 28 (1) · pp. 20-40
Author(s): Velia Cardin, Eleni Orfanidou, Lena Kästner, Jerker Rönnberg, Bencie Woll, ...

The study of signed languages allows the dissociation of sensorimotor and cognitive neural components of the language signal. Here we investigated the neurocognitive processes underlying the monitoring of two phonological parameters of sign languages: handshape and location. Our goal was to determine if brain regions processing sensorimotor characteristics of different phonological parameters of sign languages were also involved in phonological processing, with their activity being modulated by the linguistic content of manual actions. We conducted an fMRI experiment using manual actions varying in phonological structure and semantics: (1) signs of a familiar sign language (British Sign Language), (2) signs of an unfamiliar sign language (Swedish Sign Language), and (3) invented nonsigns that violate the phonological rules of British Sign Language and Swedish Sign Language or consist of nonoccurring combinations of phonological parameters. Three groups of participants were tested: deaf native signers, deaf nonsigners, and hearing nonsigners. Results show that the linguistic processing of different phonological parameters of sign language is independent of the sensorimotor characteristics of the language signal. Handshape and location were processed by different perceptual and task-related brain networks but recruited the same language areas. The semantic content of the stimuli did not influence this process, but phonological structure did, with nonsigns being associated with longer RTs and stronger activations in an action observation network in all participants and in the supramarginal gyrus exclusively in deaf signers. These results suggest higher processing demands for stimuli that contravene the phonological rules of a signed language, independently of previous knowledge of signed languages. We suggest that the phonological characteristics of a language may arise as a consequence of more efficient neural processing for its perception and production.


2019 · Vol 121 (4) · pp. 1410-1427
Author(s): Margaret Henderson, John T. Serences

Searching for items that are useful given current goals, or "target" recognition, requires observers to flexibly attend to certain object properties at the expense of others. This could involve focusing on the identity of an object while ignoring identity-preserving transformations, such as changes in viewpoint, or focusing on its current viewpoint while ignoring its identity. To effectively filter out variation due to the irrelevant dimension, performing either type of task is likely to require high-level, abstract search templates. Past work has found target recognition signals in areas of ventral visual cortex and in subregions of parietal and frontal cortex. However, target status in these tasks is typically associated with the identity of an object, rather than identity-orthogonal properties such as object viewpoint. In this study, we used a task that required subjects to identify novel object stimuli as targets according to either identity or viewpoint, neither of which was predictable from low-level properties such as shape. We performed functional MRI in human subjects of both sexes and measured the strength of target-match signals in areas of visual, parietal, and frontal cortex. Our multivariate analyses suggest that the multiple-demand (MD) network, including subregions of parietal and frontal cortex, encodes information about an object's status as a target in the relevant dimension only, across changes in the irrelevant dimension. Furthermore, there was more target-related information in MD regions on correct compared with incorrect trials, suggesting a strong link between MD target signals and behavior. NEW & NOTEWORTHY Real-world target detection tasks, such as searching for a car in a crowded parking lot, require both flexibility and abstraction. We investigated the neural basis of these abilities using a task that required invariant representations of either object identity or viewpoint. Multivariate decoding analyses of our whole brain functional MRI data reveal that invariant target representations are most pronounced in frontal and parietal regions, and the strength of these representations is associated with behavioral performance.
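The cross-generalization logic behind this kind of multivariate analysis can be illustrated with a small decoding sketch: train a classifier to separate target from non-target patterns at one level of the irrelevant dimension (for example, one viewpoint), then test it on patterns from the other level. The voxel patterns and the helper function below are simulated and hypothetical; nothing here reproduces the study's data or pipeline.

```python
# Toy cross-generalized target decoding with simulated voxel patterns.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n_trials, n_voxels = 80, 100

def simulate_patterns(signal=0.5):
    """Simulate trial-by-voxel patterns with a weak target-related signal."""
    X = rng.normal(size=(n_trials, n_voxels))
    y = rng.integers(0, 2, size=n_trials)  # 1 = target, 0 = non-target
    X[y == 1, :10] += signal               # target signal in a subset of voxels
    return X, y

# Train at "viewpoint A", test at "viewpoint B" (the irrelevant dimension changes).
X_train, y_train = simulate_patterns()
X_test, y_test = simulate_patterns()

clf = SVC(kernel="linear").fit(X_train, y_train)
print(f"Cross-viewpoint target decoding accuracy: {clf.score(X_test, y_test):.2f} (chance = 0.50)")
```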


2013 · Vol 35 (1) · pp. 22-28
Author(s): Miyuki Tamura, Yoshiya Moriguchi, Shigekazu Higuchi, Akiko Hida, Minori Enomoto, ...

2011 · Vol 7 (1) · pp. 64-80
Author(s): Daniel J. Shaw, Marie-Helene Grosbras, Gabriel Leonard, G. Bruce Pike, Tomáš Paus

2011 · Vol 22 (3) · pp. 668-679
Author(s): Luca Turella, Federico Tubaldi, Michael Erb, Wolfgang Grodd, Umberto Castiello

2017 · Vol 118 (5) · pp. 2601-2613
Author(s): Claire K. Naughtin, Benjamin J. Tamber-Rosenau, Paul E. Dux

Individuation refers to individuals' use of spatial and temporal properties to register objects as distinct perceptual events relative to other stimuli. Although behavioral studies have examined both spatial and temporal individuation, neuroimaging investigations have been restricted to the spatial domain and to relatively late stages of information processing. Here, we used univariate and multivoxel pattern analyses of functional MRI data to identify brain regions involved in individuating temporally distinct visual items and the neural consequences that arise when this process reaches its capacity limit (repetition blindness, RB). First, we found that regional patterns of blood-oxygen-level-dependent activity across the cortex discriminated between instances where repeated and nonrepeated stimuli were successfully individuated, conditions that placed differential demands on temporal individuation. These results could not be attributed to repetition suppression or other stimulus-related factors, task difficulty, regional activation differences, other capacity-limited processes, or artifacts in the data or analyses. Contrary to current theoretical models, this finding suggests that temporal individuation is supported by a distributed set of brain regions rather than a single neural correlate. Second, conditions that reflect the capacity limit of individuation (instances of RB) led to changes in the spatial patterns within this network, as well as amplitude changes in the left hemisphere premotor cortex, superior medial frontal cortex, anterior cingulate cortex, and bilateral parahippocampal place area. These findings could not be attributed to response conflict/ambiguity and likely reflect the core brain regions and mechanisms that underlie the capacity-limited process that gives rise to RB. NEW & NOTEWORTHY We present novel findings on the neural bases of temporal individuation and repetition blindness (RB), the perceptual deficit that arises when this process reaches its capacity limit. Specifically, we found that temporal individuation is a widely distributed process in the brain and identified a number of candidate brain regions that appear to underpin RB. These findings enhance our understanding of how these fundamental perceptual processes are reflected in the human brain.
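A stripped-down version of the multivoxel pattern analysis described above is sketched below: cross-validated classification of the two individuation conditions (repeated vs. nonrepeated, both successfully individuated) from an ROI's spatial pattern of activity. The patterns are simulated, so the example only shows the form of the analysis; reliably above-chance accuracy is what licenses the claim that a region's pattern discriminates the conditions.

```python
# Toy multivoxel pattern analysis: can an ROI's spatial pattern discriminate
# repeated from nonrepeated (successfully individuated) trials? Data are simulated.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n_per_condition, n_voxels = 60, 150

repeated = rng.normal(size=(n_per_condition, n_voxels))
nonrepeated = rng.normal(size=(n_per_condition, n_voxels))
nonrepeated[:, :15] += 0.4  # weak multivariate difference between conditions

X = np.vstack([repeated, nonrepeated])
y = np.array([0] * n_per_condition + [1] * n_per_condition)

# Cross-validated decoding accuracy above 0.50 indicates that the ROI's
# spatial pattern carries information about the individuation condition.
scores = cross_val_score(SVC(kernel="linear"), X, y, cv=5)
print(f"Mean decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```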


2014 · Vol 26 (10) · pp. 2385-2399
Author(s): Shana A. Hall, David C. Rubin, Amanda Miles, Simon W. Davis, Erik A. Wing, ...

Voluntary episodic memories require an intentional memory search, whereas involuntary episodic memories come to mind spontaneously without conscious effort. Cognitive neuroscience has largely focused on voluntary memory, leaving the neural mechanisms of involuntary memory largely unknown. We hypothesized that, because the main difference between voluntary and involuntary memory is the controlled retrieval processes required by the former, there would be greater frontal activity for voluntary than involuntary memories. Conversely, we predicted that other components of the episodic retrieval network would be similarly engaged in the two types of memory. During encoding, all participants heard sounds, half paired with pictures of complex scenes and half presented alone. During retrieval, paired and unpaired sounds were presented, panned to the left or to the right. Participants in the involuntary group were instructed to indicate the spatial location of the sound, whereas participants in the voluntary group were asked to additionally recall the pictures that had been paired with the sounds. All participants reported the incidence of their memories in a postscan session. Consistent with our predictions, voluntary memories elicited greater activity in dorsal frontal regions than involuntary memories, whereas other components of the retrieval network, including medial-temporal, ventral occipitotemporal, and ventral parietal regions were similarly engaged by both types of memories. These results clarify the distinct role of dorsal frontal and ventral occipitotemporal regions in predicting strategic retrieval and recalled information, respectively, and suggest that, although there are neural differences in retrieval, involuntary memories share neural components with established voluntary memory systems.

