The Neural Representation of Kinematics and Dynamics in Multiple Brain Regions: The Use of Force Field Reaching Paradigms in the Primate and Rat

Author(s):  
Joseph T. Francis
2013 ◽  
Vol 31 (2) ◽  
pp. 197-209 ◽  
Author(s):  
Bevil R. Conway

Abstract Explanations for color phenomena are often sought in the retina, lateral geniculate nucleus, and V1, yet it is becoming increasingly clear that a complete account will take us further along the visual-processing pathway. Working out which areas are involved is not trivial. Responses to S-cone activation are often assumed to indicate that an area or neuron is involved in color perception. However, work tracing S-cone signals into extrastriate cortex has challenged this assumption: S-cone responses have been found in brain regions, such as the middle temporal (MT) motion area, not thought to play a major role in color perception. Here, we review the processing of S-cone signals across cortex and present original data on S-cone responses measured with fMRI in alert macaque, focusing on one area in which S-cone signals seem likely to contribute to color (V4/posterior inferior temporal cortex) and on one area in which S-cone signals are unlikely to play a role in color (MT). We advance the hypothesis that S-cone signals in color-computing areas are required to achieve a balanced neural representation of perceptual color space, whereas those in non-color areas provide a cue to illumination (not luminance) and confer sensitivity to the chromatic contrast generated by natural daylight (shadows, illuminated by ambient sky, surrounded by direct sunlight). This sensitivity would facilitate the extraction of shape-from-shadow signals, benefiting global scene analysis and motion perception.


2019 ◽  
Author(s):  
Yarden Cohen ◽  
Elad Schneidman ◽  
Rony Paz

Abstract Primates can quickly and advantageously adopt new behaviors based on changing stimulus relationships. We studied acquisition of a classification task while recording single neurons in the dorsal anterior cingulate cortex (dACC) and the striatum. Monkeys performed trial-by-trial classification on a rich set of multi-cue patterns, allowing de novo learning every few days. To examine neural dynamics during the learning itself, we represent each rule with a spanning set of the space formed by the stimulus features. Because neural preference can be expressed by feature combinations, we can track neural dynamics in geometrical terms in this space, allowing a compact description of neural trajectories through changes in vector magnitude, angle to the rule, or both. We find that a large fraction of cells in both regions follow the behavior during learning. Neurons in the dACC mainly rotate toward the policy, suggesting an increase in selectivity that approximates the rule, whereas in the putamen we also find a prominent magnitude increase, suggesting a strengthening of confidence. Additionally, magnitude increases in the striatum followed rotation in the dACC. Finally, the neural representation at the end of a session predicted next-day behavior. This novel framework enables tracking of neural dynamics during learning and suggests differential yet complementary roles for these brain regions.
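The vector-geometric bookkeeping described in this abstract can be sketched in a few lines. The two-dimensional feature space, the rule vector, and the example preference vectors below are invented purely to illustrate how "angle to the rule" and "vector magnitude" would be computed, and do not come from the study's data:

```python
import numpy as np

def angle_to_rule(pref, rule):
    """Angle (radians) between a neuron's preference vector and the rule vector."""
    cos = np.dot(pref, rule) / (np.linalg.norm(pref) * np.linalg.norm(rule))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Hypothetical 2-feature stimulus space; the rule weights feature 1 only
rule = np.array([1.0, 0.0])
early = np.array([0.5, 0.5])   # early-learning preference: unselective
late = np.array([1.5, 0.2])    # late-learning: rotated toward the rule, larger magnitude

assert angle_to_rule(late, rule) < angle_to_rule(early, rule)  # rotation toward the rule
assert np.linalg.norm(late) > np.linalg.norm(early)            # magnitude increase
```

In this framing, the dACC result corresponds to the first assertion (shrinking angle) and the putamen result to the second (growing magnitude).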


2016 ◽  
Author(s):  
Heeyoung Choo ◽  
Jack Nasar ◽  
Bardia Nikrahei ◽  
Dirk B. Walther

Abstract Images of iconic buildings, such as the CN Tower, instantly transport us to specific places, such as Toronto. Despite the substantial impact of architectural design on people's visual experience of built environments, we know little about its neural representation in the human brain. In the present study, we found patterns of neural activity associated with specific architectural styles in several high-level visual brain regions, but not in primary visual cortex (V1). This finding suggests that the neural correlates of the visual perception of architectural styles stem from style-specific complex visual structure beyond the simple features computed in V1. Surprisingly, the network of brain regions representing architectural styles included the fusiform face area (FFA) in addition to several scene-selective regions. Hierarchical clustering of error patterns further revealed that the FFA participated to a much larger extent in the neural encoding of architectural styles than of entry-level scene categories. We conclude that the FFA is involved in fine-grained neural encoding of scenes at a subordinate level, in our case, the architectural styles of buildings. This study shows for the first time how the human visual system encodes visual aspects of architecture, one of the predominant and longest-lasting artefacts of human culture.


2017 ◽  
Author(s):  
Morteza Dehghani ◽  
Reihane Boghrati ◽  
Kingson Man ◽  
Joseph Hoover ◽  
Sarah Gimbel ◽  
...  

Drawing from a common lexicon of semantic units, humans fashion narratives whose meaning transcends that of their individual utterances. However, while brain regions that represent lower-level semantic units, such as words and sentences, have been identified, questions remain about the neural representation of narrative comprehension, which involves inferring cumulative meaning. To address these questions, we exposed English, Mandarin and Farsi native speakers to native language translations of the same stories during fMRI scanning. Using a new technique in natural language processing, we calculated the distributed representations of these stories (capturing the meaning of the stories in high-dimensional semantic space), and demonstrate that using these representations we can identify the specific story a participant was reading from the neural data. Notably, this was possible even when the distributed representations were calculated using stories in a different language than the participant was reading. Relying on over 44 billion classifications, our results reveal that identification relied on a collection of brain regions most prominently located in the default mode network. These results demonstrate that neuro-semantic encoding of narratives happens at levels higher than individual semantic units and that this encoding is systematic across both individuals and languages.
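The identification scheme (comparing a vector decoded from neural data against candidate story vectors) can be sketched as below. The mean-of-word-vectors representation and the noise-perturbed "decoded" vector are simplified stand-ins for the paper's NLP model and fMRI decoding, not its actual method:

```python
import numpy as np

def story_vector(word_vectors):
    """Distributed story representation: mean of its word vectors (a common simplification)."""
    return np.mean(word_vectors, axis=0)

def identify(pred, candidates):
    """Return the index of the candidate story most cosine-similar to the predicted vector."""
    sims = [np.dot(pred, c) / (np.linalg.norm(pred) * np.linalg.norm(c)) for c in candidates]
    return int(np.argmax(sims))

rng = np.random.default_rng(0)
stories = [rng.normal(size=(50, 300)) for _ in range(3)]  # 3 stories, 50 word vectors each
vecs = [story_vector(s) for s in stories]

noisy = vecs[1] + rng.normal(scale=0.01, size=300)  # stand-in for a decoded neural estimate
assert identify(noisy, vecs) == 1                   # the correct story is recovered
```

Cross-language identification in the paper follows the same logic: the candidate vectors are computed from translations of the stories, yet the nearest candidate still matches the story being read.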


2020 ◽  
Author(s):  
Jordan Ehrman ◽  
Victoria T. Lim ◽  
Caitlin C. Bannan ◽  
Nam Thi ◽  
Daisy Kyu ◽  
...  

Many molecular simulation methods use force fields to help model and simulate molecules and their behavior in various environments. Force fields are sets of functions and parameters used to calculate the potential energy of a chemical system as a function of the atomic coordinates. Despite the widespread use of force fields, their inadequacies are often thought to contribute to systematic errors in molecular simulations. Furthermore, different force fields tend to give varying results on the same systems with the same simulation settings. Here, we present a pipeline for comparing the geometries of small molecule conformers. We aimed to identify molecules or chemistries that are particularly informative for future force field development because they display inconsistencies between force fields. We applied our pipeline to a subset of the eMolecules database, and highlighted molecules that appear to be parameterized inconsistently across different force fields. We then identified over-represented functional groups in these molecule sets. The molecules and moieties identified by this pipeline may be particularly helpful for future force field parameterization.
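As a toy illustration of how a force field maps atomic coordinates to potential energy, consider a single harmonic bond-stretch term. The parameter values below are illustrative only and are not taken from any published force field:

```python
def harmonic_bond_energy(r, r0, k):
    """Bond-stretch term E = 0.5 * k * (r - r0)**2, one typical force field contribution."""
    return 0.5 * k * (r - r0) ** 2

# Two hypothetical force fields parameterize the same bond slightly differently
ff_a = {"r0": 1.09, "k": 700.0}  # equilibrium length (Angstrom), force constant
ff_b = {"r0": 1.11, "k": 650.0}

r = 1.13  # the same bond length observed in an optimized conformer
e_a = harmonic_bond_energy(r, **ff_a)
e_b = harmonic_bond_energy(r, **ff_b)
# Differing energies (and hence differing optimized geometries) for the same
# chemistry is the kind of force field inconsistency the pipeline surfaces.
```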


2020 ◽  
Author(s):  
Jun Liu ◽  
Longnian Lin ◽  
Dong V Wang

SUMMARY Fear of heights is evolutionarily important for survival, yet it is unclear which brain regions encode such height threats, or how. Given the importance of the basolateral amygdala (BLA) in processing both learned and innate fear, we investigated how BLA neurons respond to high-place exposure in freely behaving mice. We found that a discrete set of BLA neurons exhibited robust firing increases when the mouse was exploring or placed on a high place, accompanied by an increased heart rate and freezing. Importantly, these high-place fear neurons were activated only under height threats, not under mildly anxiogenic conditions. Furthermore, after a fear conditioning procedure, these high-place fear neurons developed conditioned responses to the context, but not to the cue, indicating a convergence in the encoding of dangerous or risky contextual information. Our results provide insights into the neural representation of the fear of heights and may have implications for the treatment of excessive fear disorders.


2020 ◽  
Vol 45 (9) ◽  
pp. 833-844
Author(s):  
Ashley Prichard ◽  
Raveena Chhibber ◽  
Jon King ◽  
Kate Athanassiades ◽  
Mark Spivak ◽  
...  

Abstract In working and practical contexts, dogs rely upon their ability to discriminate a target odor from distracting odors and other sensory stimuli. Using awake functional magnetic resonance imaging (fMRI) in 18 dogs, we examined the neural mechanisms underlying odor discrimination between 2 odors and a mixture of the odors. Neural activation was measured during the presentation of a target odor (A) associated with a food reward, a distractor odor (B) associated with nothing, and a mixture of the two odors (A+B). Changes in neural activation during the presentations of the odor stimuli in individual dogs were measured over time within three regions known to be involved with odor processing: the caudate nucleus, the amygdala, and the olfactory bulbs. Average activation within the amygdala showed that dogs maximally differentiated between odor stimuli based on the stimulus-reward associations by the first run, while activation to the mixture (A+B) was most similar to the no-reward (B) stimulus. To clarify the neural representation of odor mixtures in the dog brain, we used a random forest classifier to compare multilabel (elemental) versus multiclass (configural) models. The multiclass model performed much better than the multilabel (weighted-F1 0.44 vs. 0.14), suggesting the odor mixture was processed configurally. Analysis of the subset of high-performing dogs’ brain classification metrics revealed a network of olfactory information-carrying brain regions that included the amygdala, piriform cortex, and posterior cingulate. These results add further evidence for the configural processing of odor mixtures in dogs and suggest a novel way to identify high-performers based on brain classification metrics.
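The elemental-versus-configural comparison can be sketched with scikit-learn. The synthetic "voxel" data below only illustrates how the two model framings are set up (multilabel: which component odors a stimulus contains; multiclass: each stimulus, including the mixture, as its own class) and does not reproduce the dogs' data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(1)
n_per, d = 40, 20
y = np.repeat([0, 1, 2], n_per)  # stimulus classes: A, B, A+B
centers = rng.normal(size=(3, d))
X = centers[y] + rng.normal(scale=0.5, size=(3 * n_per, d))  # synthetic voxel patterns

# Configural (multiclass) framing: the mixture A+B is its own category
configural = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
f1_config = f1_score(y, configural.predict(X), average="weighted")

# Elemental (multilabel) framing: each stimulus is coded by which odors it contains
Y_elemental = np.array([[1, 0], [0, 1], [1, 1]])[y]  # columns: has-A, has-B
elemental = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, Y_elemental)
f1_elem = f1_score(Y_elemental, elemental.predict(X), average="weighted")
```

On the dogs' held-out fMRI data, the paper reports weighted F1 of 0.44 (configural) versus 0.14 (elemental); this sketch shows only how the two labelings differ, not that result.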


Author(s):  
Jeffrey A Brooks ◽  
Ryan M Stolier ◽  
Jonathan B Freeman

Abstract Across multiple domains of social perception - including social categorization, emotion perception, impression formation, and mentalizing - multivariate pattern analysis (MVPA) of fMRI data has permitted a more detailed understanding of how social information is processed and represented in the brain. As in other neuroimaging fields, the neuroscientific study of social perception initially relied on broad structure-function associations derived from univariate fMRI analysis to map neural regions involved in these processes. In this review, we trace the ways that social neuroscience studies using MVPA have built on these neuroanatomical associations to better characterize the computational relevance of different brain regions, and how MVPA allows explicit tests of the correspondence between psychological models and the neural representation of social information. We also describe current and future advances in methodological approaches to multivariate fMRI data and their theoretical value for the neuroscience of social perception.
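A minimal MVPA-style decoding analysis, of the kind this review surveys, can be sketched as cross-validated classification of condition labels from multivoxel patterns. The data below are synthetic, and a linear SVM is used only as one common classifier choice:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(7)
n_per, d = 50, 30                    # trials per condition, voxels in the ROI
y = np.repeat([0, 1], n_per)         # e.g., two social categories

# Patterns = small condition-dependent signal + unit-variance noise
signal = np.where(y[:, None] == 0, 0.6, -0.6)
X = signal + rng.normal(size=(2 * n_per, d))

# Above-chance cross-validated accuracy implies the ROI's multivoxel
# pattern carries information about the social category.
acc = cross_val_score(LinearSVC(), X, y, cv=5).mean()
```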


2020 ◽  
Vol 1 (3) ◽  
pp. 339-364
Author(s):  
David I. Saltzman ◽  
Emily B. Myers

The extent to which articulatory information embedded in incoming speech contributes to the formation of new perceptual categories for speech sounds has been a matter of debate for decades. It has been theorized that the acquisition of new speech sound categories requires a network of sensory and speech motor cortical areas (the "dorsal stream") to successfully integrate auditory and articulatory information. However, it is possible that these brain regions are not sensitive specifically to articulatory information, but are instead sensitive to the abstract phonological categories being learned. We tested this hypothesis by training participants over the course of several days on an articulable non-native speech contrast and on acoustically matched, inarticulable nonspeech analogues. After participants reached comparable levels of proficiency with the two sets of stimuli, activation was measured with fMRI as they passively listened to both sound types. Decoding of category membership for the articulable speech contrast alone revealed a series of left- and right-hemisphere regions outside of the dorsal stream that have previously been implicated in the emergence of non-native speech sound categories, while no regions could successfully decode the inarticulable nonspeech contrast. Activation patterns in the left inferior frontal gyrus, the middle temporal gyrus, and the supplementary motor area did carry better information for decoding the articulable (speech) sounds than the inarticulable (sine-wave) sounds. However, the finding that dorsal stream regions do not emerge as good decoders of the articulable contrast alone suggests that other factors, including the strength and structure of the emerging speech categories, are more likely drivers of dorsal stream activation during novel sound learning.

