Lateral intraparietal area
Recently Published Documents

Total documents: 118 (five years: 7)
H-index: 43 (five years: 1)

2021
Author(s): Wujie Zhang, Jacqueline Gottlieb, Kenneth D. Miller

When monkeys learn to group visual stimuli into arbitrary categories, lateral intraparietal area (LIP) neurons become category-selective. Surprisingly, the representations of learned categories are overwhelmingly biased: nearly all LIP neurons in a given animal prefer the same category over other behaviorally equivalent categories. We propose a model where such biased representations develop through the interplay between Hebbian plasticity and the recurrent connectivity of LIP. In this model, two separable processes of positive feedback unfold in parallel: in one, category selectivity emerges from competition between prefrontal inputs; in the other, bias develops due to lateral interactions among LIP neurons. This model reproduces the levels of category selectivity and bias observed under a variety of conditions, as well as the redevelopment of bias after monkeys learn redefined categories. It predicts that LIP receptive fields would spatially cluster by preferred category, which we experimentally confirm. In summary, our model reveals a mechanism by which LIP learns abstract representations and assigns meaning to sensory inputs.
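The two parallel positive-feedback processes described above can be illustrated with a toy simulation. This is a minimal sketch, not the authors' actual model: the network size, learning rate, lateral-coupling strength, and supralinear activation are all illustrative assumptions. Hebbian updates with divisive normalization make each unit category-selective, while a shared lateral-excitation term tips the whole population toward the same preferred category:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 50       # model LIP units (hypothetical size)
eta = 0.05   # Hebbian learning rate
lam = 0.5    # strength of the shared lateral (recurrent) excitation

# Feedforward weights from two category inputs; rows sum to 1 (competition)
W = 0.5 + 0.01 * rng.standard_normal((N, 2))
W /= W.sum(axis=1, keepdims=True)

for step in range(1000):
    cat = step % 2                              # alternate category A/B trials
    drive = W[:, cat] + lam * W[:, cat].mean()  # feedforward + shared lateral drive
    r = drive ** 2                              # supralinear activation
    W[:, cat] += eta * r                        # Hebbian strengthening of the active input
    W /= W.sum(axis=1, keepdims=True)           # divisive normalization -> competition

pref = W.argmax(axis=1)                         # each unit's preferred category
bias = max((pref == 0).mean(), (pref == 1).mean())
```

In this sketch, setting `lam = 0` leaves units category-selective but splits their preferences roughly evenly; the population-wide bias depends on the shared lateral term amplifying whichever category the population mean initially, and arbitrarily, favors.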


Author(s): Joanita F. D’Souza, Nicholas S. C. Price, Maureen A. Hagan

Abstract: The technology, methodology and models used by visual neuroscientists have provided great insights into the structure and function of individual brain areas. However, complex cognitive functions arise in the brain from networks of multiple interacting cortical areas that are wired together by precise anatomical connections. A prime example is the frontal–parietal network and two key regions within it: the frontal eye fields (FEF) and the lateral intraparietal area (area LIP). Activity in each of these cortical areas has independently been tied to oculomotor control, motor preparation, visual attention and decision-making. Strong, bidirectional anatomical connections have also been traced between FEF and area LIP, suggesting that these visual functions depend on inter-area interactions. However, progress in understanding the interactions between area LIP and FEF has been limited in the main animal model, the rhesus macaque, because both regions are buried in the sulci of the brain. In this review, we propose that the common marmoset is an ideal model for investigating how anatomical connections give rise to complex cognitive visual behaviours, such as those modulated by the frontal–parietal network, because of the homology of its cortical networks with those of humans and macaques, its amenability to transgenic technology, and its rich behavioural repertoire. Furthermore, the lissencephalic structure of the marmoset brain permits powerful techniques, such as array-based electrophysiology and optogenetics, which are critical for bridging the gaps in our knowledge of structure and function in the brain.


2021
Author(s): Danique Jeurissen, S. Shushruth, Yasmine El-Shamayleh, Gregory D. Horwitz, Michael N. Shadlen

Abstract: Perceptual decisions arise through the transformation of samples of evidence into a commitment to a proposition or plan of action. This transformation is thought to involve cortical circuits capable of computation over the time scales associated with working memory, attention, and planning. Neurons in the lateral intraparietal area (LIP) are thought to play a role in all of these functions, and much of what is known about the neurobiology of decision making has been influenced by studies of LIP and its network of cortical and subcortical connections. However, a causal role for neurons in LIP remains controversial. We used pharmacological and chemogenetic methods to inactivate LIP in one hemisphere of four rhesus monkeys. Inactivation produced clear biases in decisions, but the effects dissipated despite the persistence of neural inactivation, implying compensation by other, unaffected areas. Compensation occurs on a rapid time scale, within an experimental session, and more gradually, across sessions. The findings resolve disparate studies and inform the interpretation of focal perturbations of brain function.


2021
Author(s): Yvonne Li, Nabil Daddaoua, Mattias Horan, Jacqueline Gottlieb

Animals are intrinsically motivated to resolve uncertainty and predict future events. This motivation is encoded in cortical and subcortical structures, but a key open question is how it generates concrete policies for attending to informative stimuli. We examined this question with neural recordings in the monkey lateral intraparietal area (LIP), a visual area implicated in attention and gaze, during non-instrumental information demand. We show that the uncertainty resolved by a visual cue enhanced the visuospatial responses of LIP cells independently of reward probability. This enhancement was independent of immediate saccade plans but correlated with the sensitivity to uncertainty in eye movement behavior on longer time scales (across sessions/days). The findings suggest that topographic visual maps receive motivational signals of uncertainty, which enhance the priority of informative stimuli and the likelihood that animals will orient to those stimuli to reduce uncertainty.


2021
Author(s): Joshua A. Seideman, Terrence R. Stanford, Emilio Salinas

The lateral intraparietal area (LIP) contains spatially selective neurons that are partly responsible for determining where to look next, and are thought to serve a variety of sensory, motor planning, and cognitive control functions within this role [1-3]. Notably, according to numerous studies in monkeys [4-12], area LIP implements a fundamental perceptual process, the gradual accumulation of sensory evidence in favor of one choice (e.g., look left) over another (look right), which manifests as a slowly developing spatial signal during a motion discrimination task. However, according to recent inactivation experiments [13,14], this signal is unnecessary for accurate task performance. Here we reconcile these contradictory findings. We designed an urgent version of the motion discrimination task in which there is no systematic lag between the perceptual evaluation and the motor action reporting it, and in which the evolution of the subject’s choice can be tracked millisecond by millisecond [15-18]. We found that while choice accuracy increased steeply with increasing sensory evidence, the spatial signal became progressively weaker, as if it hindered performance. In contrast, in a similarly urgent task in which the discriminated stimuli and the choice targets were spatially coincident, the neural signal seemed to facilitate performance. The data suggest that the ramping activity in area LIP traditionally interpreted as evidence accumulation likely corresponds to a slow, post-decision shift of spatial attention from one location (where the motion occurs) to another (where the eyes land).


2019
Vol. 130 (4), pp. 560-571
Author(s): Li Ma, Wentai Liu, Andrew E. Hudson

Abstract
Background: Frontoparietal functional connectivity, as measured with electrophysiology and functional imaging, decreases under multiple anesthetics, and this decrease has been proposed as a final common functional pathway producing anesthesia. Two alternative measures of long-range cortical interaction are coherence and phase-amplitude coupling. Although phase-amplitude coupling within frontal cortex changes with propofol administration, the effects of propofol on phase-amplitude coupling between different cortical areas have not previously been reported. Based on the phase-amplitude coupling observed within the frontal lobe during the anesthetized period, it was hypothesized that phase-amplitude coupling between frontal and parietal leads should decrease during propofol anesthesia.
Methods: A published monkey electrocorticography data set (N = 2 animals) was used to test for interactions, measured by coherence and interarea phase-amplitude coupling, in the cortical oculomotor circuit, which is robustly interconnected in primates, and in the visual system during propofol anesthesia.
Results: Propofol induces coherent slow oscillations in visual and oculomotor networks made up of cortical areas with strong anatomic projections. Within-area phase-amplitude coupling in the frontal eye field increases with a time course consistent with a bolus response to intravenous propofol (a 12.6-fold increase in modulation index). Contrary to the hypothesis, interareal phase-amplitude coupling also increases with propofol, with the largest increases for frontal eye field low-frequency phase modulating lateral intraparietal area β-power (27-fold) and visual area 2 low-frequency phase modulating visual area 1 β-power (19-fold).
Conclusions: Propofol anesthesia induces coherent oscillations and increases certain frontoparietal interactions in oculomotor cortices. The frontal eye field and lateral intraparietal area show increased coherence and phase-amplitude coupling. Visual areas 2 and 1, which have similar anatomic projection patterns, show similar increases in phase-amplitude coupling, suggesting that the influence of higher-order feedback increases during propofol anesthesia relative to wakefulness. This suggests that functional connectivity between frontal and parietal areas is not uniformly decreased by anesthetics.
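Phase-amplitude coupling of the kind quantified here is commonly summarized with a modulation index. The sketch below is a hedged illustration on synthetic signals, not the study's electrocorticography data: the frequencies, bin count, and noise levels are arbitrary choices. It computes a Tort-style modulation index, the KL divergence of the phase-binned mean-amplitude distribution from uniform, normalized by log of the bin count, for a coupled and an uncoupled signal:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 1000                          # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)       # 10 s of synthetic data

# Slow (1 Hz) phase, and a beta-band amplitude envelope that either is
# or is not modulated by that phase
slow_phase = (2 * np.pi * 1.0 * t) % (2 * np.pi)
amp_coupled = 1 + 0.8 * np.cos(slow_phase) + 0.1 * rng.standard_normal(t.size)
amp_uncoupled = 1 + 0.1 * rng.standard_normal(t.size)

def modulation_index(phase, amp, n_bins=18):
    """Tort-style MI: KL divergence of the phase-binned mean-amplitude
    distribution from uniform, normalized by log(n_bins)."""
    edges = np.linspace(0, 2 * np.pi, n_bins + 1)
    bins = np.digitize(phase, edges) - 1
    mean_amp = np.array([amp[bins == b].mean() for b in range(n_bins)])
    p = mean_amp / mean_amp.sum()
    return float(np.sum(p * np.log(p * n_bins)) / np.log(n_bins))

mi_coupled = modulation_index(slow_phase, amp_coupled)
mi_uncoupled = modulation_index(slow_phase, amp_uncoupled)
```

On real recordings one would first band-pass filter and Hilbert-transform the signal to extract the low-frequency phase and β-amplitude; here both are constructed directly so the example stays dependency-free.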


2018
Author(s): Han Hou, Qihao Zheng, Yuchen Zhao, Alexandre Pouget, Yong Gu

Abstract: Perceptual decisions are often based on multiple sensory inputs whose reliabilities vary rapidly over time, yet little is known about how the brain integrates these inputs to optimize behavior. Here we show that multisensory evidence with time-varying reliability can be accumulated near-optimally, in a Bayesian sense, simply by taking time-invariant linear combinations of neural activity across time and modalities, as long as the neural code for the sensory inputs is close to an invariant linear probabilistic population code (ilPPC). Recordings in the lateral intraparietal area (LIP) while macaques optimally performed a vestibular-visual multisensory decision-making task revealed that LIP population activity reflects an integration process consistent with the ilPPC theory. Moreover, LIP accumulates momentary evidence proportional to vestibular acceleration and visual velocity, which are encoded in sensory areas with a close approximation to ilPPCs. Together, these results provide a remarkably simple and biologically plausible solution to optimal multisensory decision making.
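The ilPPC idea, that near-optimal accumulation reduces to a time-invariant linear combination of population activity, can be sketched with a toy decoder. This is illustrative only: the Gaussian tuning curves, gain profile, and all parameters are assumptions, not the paper's model, and it decodes a single heading rather than a binary choice. Spike counts from a Poisson population with time-varying gain (reliability) are simply summed over time, and a fixed linear readout of the summed counts recovers the stimulus:

```python
import numpy as np

rng = np.random.default_rng(2)

prefs = np.linspace(-40, 40, 33)   # preferred headings (deg), hypothetical
sigma_tc = 15.0                    # tuning-curve width (deg), hypothetical

def rates(s, gain):
    """Poisson rates of a gain-modulated, Gaussian-tuned population."""
    return gain * np.exp(-0.5 * ((s - prefs) / sigma_tc) ** 2)

true_s = 8.0
gains = 5 + 4 * np.sin(np.linspace(0, np.pi, 20))  # time-varying reliability

# Accumulation = plain summation of spike counts over time: the readout
# never changes even though reliability (gain) does
counts = np.zeros(prefs.size)
for g in gains:
    counts = counts + rng.poisson(rates(true_s, g))

# Fixed linear readout: log posterior over s is counts . log f(s) + const
s_grid = np.linspace(-40, 40, 401)
log_post = np.array([counts @ np.log(rates(s, 1.0)) for s in s_grid])
s_hat = float(s_grid[log_post.argmax()])
```

Because the Gaussian log-tuning curves are quadratic in the stimulus, this readout reduces to a spike-count-weighted average of preferred headings, i.e. one time-invariant linear combination of the accumulated counts, which is the ilPPC point.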


2018
Vol. 120 (5), pp. 2614-2629
Author(s): Piercesare Grimaldi, Seong Hah Cho, Hakwan Lau, Michele A. Basso

Recent findings indicate that monkeys can report their confidence in perceptual decisions and that this information is encoded in neurons involved in making decisions, including the lateral intraparietal area (LIP) and the supplementary eye field (SEF). A key issue when studying confidence is that decision accuracy often correlates with confidence reports; when we are performing well, we generally feel more confident. Expanding on work performed in humans, we designed a novel task for monkeys that dissociates the perceptual information leading to decisions from the perceptual information leading to confidence reports. Using this task, we recently showed that decoded ensemble activity recorded from the superior colliculus (SC) reflected decisions rather than confidence reports. However, our previous population-level analysis collapsed over multiple SC neuronal types, leaving open the possibility that, first, individual discharge rates might encode information related to decision confidence and, second, different neuronal cell types within the SC might signal decision confidence independently of decision accuracy. We found that when decision accuracy and decision confidence covaried, modulation occurred primarily in neurons with prelude activity (buildup neurons). However, isolating decision confidence from decision accuracy revealed that only a few neurons, primarily buildup neurons, showed signals correlating uniquely with decision confidence, and the effect sizes were very small. Based on this work and our previous work using decoding methods, we conclude that neuronal signals for decision confidence, independent of decision accuracy, are unlikely to exist at the level of single neurons or populations of neurons in the SC. Our results, together with other recent work, call into question normative models of confidence based on the optimal readout of decision signals.
NEW & NOTEWORTHY Models of decision confidence suggest that our sense of confidence is an optimal readout of perceptual decision signals. Here, we report that a subcortical area, the superior colliculus (SC), contains neurons whose activity signals decisions and confidence in a task in which decision accuracy and confidence covary, similar to the lateral intraparietal area in cortex. These signals occur primarily in neurons with prelude activity (buildup neurons). However, in a task that dissociates decision accuracy from decision confidence, we find that only a few individual neurons express unique signals of confidence. These results call into question normative models of confidence based on the optimal readout of perceptual decision signals.


2018
Vol. 115 (37), pp. E8755-E8764
Author(s): Panagiotis Sapountzis, Sofia Paneri, Georgia G. Gregoriou

When searching for an object in a crowded scene, information about the similarity of stimuli to the target object is thought to be encoded in spatial priority maps, which are subsequently used to guide shifts of attention and gaze to likely targets. Two key cortical areas that have been described as holding priority maps are the frontal eye field (FEF) and the lateral intraparietal area (LIP). However, little is known about their distinct contributions to priority encoding. Here, we compared neuronal responses in FEF and LIP during free-viewing visual search. Although saccade selection signals emerged earlier in FEF, information about the target emerged at similar latencies in distinct populations within the two areas; the effects in FEF, however, were more pronounced. Moreover, LIP neurons encoded the similarity of stimuli to the target independent of saccade selection, whereas in FEF, encoding of target similarity was strongly modulated by saccade selection. Taken together, our findings suggest hierarchical processing of saccade selection signals and parallel processing of feature-based attention signals within the parietofrontal network, with FEF having a more prominent role in priority encoding. Furthermore, they suggest discrete roles of FEF and LIP in the construction of priority maps.

