Stable readout of observed actions from format-dependent activity of monkey’s anterior intraparietal neurons

2020
Vol 117 (28)
pp. 16596-16605
Author(s):
Marco Lanzilotto
Monica Maranesi
Alessandro Livi
Carolina Giulia Ferroni
Guy A. Orban
...

Humans accurately identify observed actions despite large dynamic changes in their retinal images and a variety of visual presentation formats. A large network of brain regions in primates participates in the processing of others’ actions, with the anterior intraparietal area (AIP) playing a major role in routing information about observed manipulative actions (OMAs) to the other nodes of the network. This study investigated whether the AIP also contributes to invariant coding of OMAs across different visual formats. We recorded AIP neuronal activity from two macaques while they observed videos portraying seven manipulative actions (drag, drop, grasp, push, roll, rotate, squeeze) in four visual formats. Each format combined one of two actor body postures (standing, sitting) with one of two viewpoints (lateral, frontal). Of the 297 recorded units, 38% were OMA-selective in at least one format. A robust population code for viewpoint and actor body posture emerged shortly after stimulus presentation, followed by OMA selectivity. Although we found no fully invariant OMA-selective neuron, we discovered a population code that allowed us to classify action exemplars irrespective of the visual format. This code relies on a multiplicative mixing of signals about OMA identity and visual format, evidenced in particular by a set of units that maintained relatively stable OMA selectivity across formats despite considerable rescaling of their firing rates with the visual specificities of each format. These findings suggest that the AIP integrates format-dependent information with the visual features of others’ actions, enabling a stable readout of observed manipulative action identity.
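The population-level, cross-format readout described in this abstract can be illustrated with a minimal sketch: train a linear decoder on pseudo-population firing rates from some visual formats and test it on a held-out format. This is not the authors' analysis pipeline; the array shapes, variable names, simulated data, and the choice of a linear SVM are illustrative assumptions.

```python
# Hypothetical sketch: cross-format decoding of observed-action identity
# from pseudo-population firing rates. Data layout and classifier choice
# are assumptions, not the published pipeline.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

n_units, n_actions, n_formats, n_trials = 297, 7, 4, 20

# firing_rates[unit, action, format, trial]: spike counts in a response
# window (simulated placeholder data).
firing_rates = rng.poisson(
    lam=5.0, size=(n_units, n_actions, n_formats, n_trials)
).astype(float)

def make_dataset(formats):
    """Stack pseudo-population vectors (one per action x trial) for the given formats."""
    X, y = [], []
    for f in formats:
        for a in range(n_actions):
            for t in range(n_trials):
                X.append(firing_rates[:, a, f, t])
                y.append(a)
    return np.array(X), np.array(y)

# Leave-one-format-out cross-validation: above-chance generalization to a
# format never seen during training is the signature of a format-tolerant
# population code.
for held_out in range(n_formats):
    train_formats = [f for f in range(n_formats) if f != held_out]
    X_train, y_train = make_dataset(train_formats)
    X_test, y_test = make_dataset([held_out])
    clf = make_pipeline(StandardScaler(), LinearSVC(dual=False))
    clf.fit(X_train, y_train)
    acc = clf.score(X_test, y_test)
    print(f"held-out format {held_out}: accuracy = {acc:.2f} (chance ~ {1/n_actions:.2f})")
```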

2003
Vol 15 (7)
pp. 935-945
Author(s):
Michelle Liou
Hong-Ren Su
Juin-Der Lee
Philip E. Cheng
Chien-Chih Huang
...

Historically, reproducibility has been the sine qua non of experimental findings that are considered to be scientifically useful. Typically, findings from functional magnetic resonance imaging (fMRI) studies are assessed with statistical parametric maps (SPMs) using a p value threshold. However, a smaller p value does not imply that the observed result will be reproducible. In this study, we suggest interpreting SPMs in conjunction with reproducibility evidence. Reproducibility is defined as the extent to which the active status of a voxel remains the same across replicates conducted under the same conditions. We propose a methodology for assessing reproducibility in functional MR images without conducting separate experiments. Our procedures include the empirical Bayes method for estimating effects due to experimental stimuli, the threshold optimization procedure for assigning voxels to the active status, and the construction of reproducibility maps. In an empirical example, we implemented the proposed methodology to construct reproducibility maps based on data from the study by Ishai et al. (2000). The original experiments involved 12 human subjects and investigated the brain regions most responsive to visual presentation of 3 categories of objects: faces, houses, and chairs. The brain regions identified included the occipital, temporal, and fusiform gyri. Using our reproducibility analysis, we found that subjects in one of the experiments employed at least 2 mechanisms in responding to visual objects when alternately performing matching and passive tasks. One mechanism gave activation maps closer to those reported by Ishai et al.; the other involved related regions in the precuneus and posterior cingulate. The patterns of activated regions were reproducible for at least 4 out of the 6 subjects in the experiment. Empirical application of the proposed methodology suggests that human brains exhibit different strategies to accomplish experimental tasks when responding to stimuli. It is important to correlate activations with subjects' behavior, such as reaction time and response accuracy. Also, the latency between stimulus presentation and the peak of the hemodynamic response function varies considerably among individual subjects according to the type of stimulus and experimental task. These variations themselves also deserve scientific inquiry. We conclude by discussing research directions relevant to reproducibility evidence in fMRI.
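The core idea of a reproducibility map can be sketched in a few lines: declare each voxel active or inactive in every replicate and record the proportion of replicates in which it is active. The sketch below is a deliberately simplified toy with a fixed threshold and simulated statistics; it is not the empirical Bayes estimation or threshold-optimization procedure proposed in the paper.

```python
# Toy voxelwise reproducibility map: fraction of replicates in which each
# voxel is declared active. Threshold and data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

n_replicates, n_voxels = 6, 10_000  # e.g., 6 replicates, flattened volume

# t_maps[r, v]: test statistic for voxel v in replicate r (placeholder data).
t_maps = rng.normal(size=(n_replicates, n_voxels))
t_maps[:, :500] += 3.0  # simulate a small set of truly active voxels

threshold = 2.5                        # fixed here; the paper optimizes this per map
active = t_maps > threshold            # binary activation status per replicate
reproducibility = active.mean(axis=0)  # proportion of replicates active, in [0, 1]

# Call a voxel "reproducibly active" if it is active in most replicates,
# e.g., at least 4 out of 6, mirroring the criterion mentioned above.
reproducible_voxels = reproducibility >= 4 / n_replicates
print("voxels active in >= 4/6 replicates:", int(reproducible_voxels.sum()))
```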


This part of the book provides an occasion to combine visual presentation of concepts related to speed, velocity, and acceleration with real-life circumstances (such as car or horse races) and, at the same time, with artistic connotations of motion and artistic responses to it. The goal of this project is to convey acceleration, speed, and velocity by producing an image that looks highly dynamic. For example, dynamic changes of motion can be presented as a scene with racecars or horses. Connotations related to art may enhance both our knowledge about acceleration and the message it evokes.


1971
Vol 28 (1)
pp. 211-215
Author(s):
Richard J. Reynolds
A. C. Bickley
Sharon Champion
Ocie Dekle

Differences in paradigmatic response to oral and visual presentation of word-association tasks were compared at 4 age levels (n = 40). The syntagmatic/paradigmatic shift was investigated as a function of mode of stimulus presentation. Younger Ss produced more paradigmatic responses than older Ss. The oral mode produced more paradigmatic responses than the visual mode for all Ss. The syntagmatic/paradigmatic shift did not occur, nor was the variation across age groups consistent for the two modalities. Evidence indicated that response to word-association tasks was a function of stimulus modality.


2012
Vol 117 (3)
pp. 183-193
Author(s):
Michael Carlin
Michael P. Toglia
Colleen Belmonte
Chiara DiMeglio

In the present study, the effects of visual, auditory, and audio–visual presentation formats on memory for thematically constructed lists were assessed in individuals with intellectual disability and mental age–matched children. The auditory recognition test included target items, unrelated foils, and two types of semantic lures: critical related foils and related foils. The audio–visual format led to better recognition of old items and lower false-alarm rates for all foil types. Those with intellectual disability had higher false-alarm rates for all foil types and experienced particular difficulty discriminating presented items from those most strongly activated internally during acquisition (i.e., critical foils). Results are consistent with the activation-monitoring framework and fuzzy-trace theory and inform best practices for designing visual supports to maximize performance in educational and work environments.


2013
Vol 35 (3)
pp. 260-269
Author(s):
Aïmen Khacharem
Bachir Zoudji
Slava Kalyuga
Hubert Ripoll

A cognitive load perspective was used as a theoretical framework to investigate the effects of expertise and of the type of presentation of interacting elements of information on learning from dynamic visualizations. Soccer players (N = 48) completed a recall reconstruction test and rated their invested mental effort after studying either a concurrent or a sequential presentation of the elements of play. The results provided evidence for an expertise reversal effect: for novice players, the sequential presentation produced better learning outcomes, whereas expert players performed better after studying the concurrent presentation. The findings suggest that the effectiveness of different visual presentation formats depends on the level of learner expertise.


Author(s):  
Davide Albertini
Marco Lanzilotto
Monica Maranesi
Luca Bonini

The neural processing of others' observed actions recruits a large network of brain regions (the action observation network, AON), in which frontal motor areas are thought to play a crucial role. Since the discovery of mirror neurons (MNs) in the ventral premotor cortex, it has been assumed that their activation is conditional upon the presentation of biological rather than nonbiological motion stimuli, supporting a form of direct visuomotor matching. Nonetheless, nonbiological observed movements have rarely been used as control stimuli to evaluate visual specificity, leaving unresolved the question of how similar the neural codes are for executed actions and for biological or nonbiological observed movements. Here, we addressed this issue by recording from two nodes of the AON that are attracting increasing interest, namely the ventro-rostral part of the dorsal premotor area F2 and the mesial pre-supplementary motor area F6, of macaques while they 1) executed a reaching-grasping task, 2) observed an experimenter performing the task, and 3) observed a nonbiological effector moving in the same context. Our findings revealed stronger neuronal responses to the observation of biological than of nonbiological movement, but the two classes of visual stimuli produced highly similar neural dynamics and relied on largely shared neural codes, which in turn differed remarkably from those associated with executed actions. These results indicate that, in highly familiar contexts, visuomotor remapping processes in premotor areas hosting MNs are more complex and flexible than predicted by a direct visuomotor matching hypothesis.


2019
Author(s):
Amanda K. Robinson
Tijl Grootswagers
Thomas A. Carlson

Rapid image presentations combined with time-resolved multivariate analysis methods applied to EEG or MEG (rapid-MVPA) offer unique potential for assessing the temporal limitations of the human visual system. Recent work has shown that multiple visual objects presented sequentially can be simultaneously decoded from M/EEG recordings. Interestingly, object representations reached higher stages of processing at slower image presentation rates than at fast rates. This attenuation at fast rates is probably caused by forward and backward masking from the other images in the stream. Two factors that are likely to influence masking during rapid streams are stimulus duration and stimulus onset asynchrony (SOA). Here, we disentangle these effects by studying the emerging neural representation of visual objects using rapid-MVPA while independently manipulating stimulus duration and SOA. Our results show that longer SOAs enhance the decodability of neural representations, regardless of stimulus presentation duration, suggesting that subsequent images act as effective backward masks. In contrast, image duration does not appear to have a graded influence on object representations. Interestingly, however, decodability was improved when there was a gap between subsequent images, indicating that an abrupt onset or offset of an image enhances its representation. Our study yields insight into the dynamics of object processing in rapid streams, paving the way for future work using this promising approach.
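The time-resolved decoding logic behind rapid-MVPA can be sketched as follows: fit and evaluate a classifier independently at every time sample of the epoch, separately for each SOA condition, and compare the resulting accuracy time courses. The data shapes, condition names, simulated signals, and the LDA classifier below are assumptions for illustration, not the authors' exact pipeline.

```python
# Minimal sketch of time-resolved decoding ("rapid-MVPA") under different
# SOA conditions. Everything here is simulated placeholder data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

n_trials, n_channels, n_times, n_classes = 200, 64, 120, 4

def decode_timecourse(epochs, labels, cv=5):
    """Return cross-validated decoding accuracy at every time sample."""
    accuracy = np.zeros(epochs.shape[-1])
    for t in range(epochs.shape[-1]):
        X_t = epochs[:, :, t]  # trials x channels at time point t
        scores = cross_val_score(LinearDiscriminantAnalysis(), X_t, labels, cv=cv)
        accuracy[t] = scores.mean()
    return accuracy

# Placeholder epochs (trials x channels x time) for two SOA conditions.
labels = rng.integers(0, n_classes, size=n_trials)
epochs_short_soa = rng.normal(size=(n_trials, n_channels, n_times))
epochs_long_soa = rng.normal(size=(n_trials, n_channels, n_times))

acc_short = decode_timecourse(epochs_short_soa, labels)
acc_long = decode_timecourse(epochs_long_soa, labels)

# Higher, longer-lasting above-chance accuracy in the long-SOA stream would
# indicate weaker backward masking by the following image.
print("peak accuracy, short SOA:", round(float(acc_short.max()), 3))
print("peak accuracy, long SOA:", round(float(acc_long.max()), 3))
```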

