Learning of biased representations in LIP through interactions between recurrent connectivity and Hebbian plasticity

2021
Author(s): Wujie Zhang, Jacqueline Gottlieb, Kenneth D Miller

When monkeys learn to group visual stimuli into arbitrary categories, lateral intraparietal area (LIP) neurons become category-selective. Surprisingly, the representations of learned categories are overwhelmingly biased: nearly all LIP neurons in a given animal prefer the same category over other behaviorally equivalent categories. We propose a model where such biased representations develop through the interplay between Hebbian plasticity and the recurrent connectivity of LIP. In this model, two separable processes of positive feedback unfold in parallel: in one, category selectivity emerges from competition between prefrontal inputs; in the other, bias develops due to lateral interactions among LIP neurons. This model reproduces the levels of category selectivity and bias observed under a variety of conditions, as well as the redevelopment of bias after monkeys learn redefined categories. It predicts that LIP receptive fields would spatially cluster by preferred category, which we experimentally confirm. In summary, our model reveals a mechanism by which LIP learns abstract representations and assigns meaning to sensory inputs.
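The model's two positive-feedback loops can be illustrated with a toy simulation. This is a minimal sketch assuming a rate-based population, a normalized Hebbian rule, and a single lateral gain term; the sizes, rates, and update rule below are illustrative assumptions, not the authors' fitted model. Weight normalization makes the two category inputs compete within each neuron (selectivity), while scaling each neuron's drive by the population's mean drive couples preferences across neurons (bias):

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 50                 # toy LIP population (illustrative size)
lr, g_lat = 0.05, 1.0          # Hebbian rate and lateral coupling (assumed)

# w[i] = weight from the category-A input onto neuron i; the
# category-B weight is 1 - w[i], so normalization enforces competition.
w = rng.uniform(0.4, 0.6, n_neurons)

for _ in range(2000):
    cat_a = rng.integers(2) == 0           # which category is presented
    ff = w if cat_a else 1.0 - w           # feedforward drive per neuron
    r = ff * (1.0 + g_lat * ff.mean())     # lateral excitation scales the drive
    dw = lr * r                            # Hebbian increment for the active input
    # Renormalize so each neuron's two weights still sum to 1.
    w = (w + dw) / (1.0 + dw) if cat_a else w / (1.0 + dw)

selectivity = np.abs(2.0 * w - 1.0)        # 0 = unselective, 1 = fully selective
bias = max((w > 0.5).mean(), (w < 0.5).mean())  # fraction sharing the majority preference
```

In this toy, the within-neuron normalization tends to push each weight toward 0 or 1, while the shared lateral gain tends to pull the whole population toward the same extreme, so selectivity and bias grow in parallel, mirroring the two separable positive-feedback processes described above.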

1996 · Vol 76 (3) · pp. 1439-1456
Author(s): P. Mazzoni, R. M. Bracewell, S. Barash, R. A. Andersen

1. The lateral intraparietal area (area LIP) of the monkey's posterior parietal cortex (PPC) contains neurons that are active during saccadic eye movements. These neurons' activity includes visual and saccade-related components. These responses are spatially tuned, and the location of a neuron's visual receptive field (RF) relative to the fovea generally overlaps its preferred saccade amplitude and direction (i.e., its motor field, MF). When a delay is imposed between the presentation of a visual stimulus and a saccade made to its location (memory saccade task), many LIP neurons maintain elevated activity during the delay (memory activity, M), which appears to encode the metrics of the next intended saccadic eye movement. Recent studies have alternatively suggested that LIP neurons encode the locations of visual stimuli regardless of where the animal intends to look. We examined whether the M activity of LIP neurons specifically encodes movement intention, the locations of recent visual stimuli, or a combination of both. In the accompanying study, we investigated whether the intended-movement activity reflects changes in motor plan.

2. We trained monkeys (Macaca mulatta) to memorize the locations of two visual stimuli and plan a sequence of two saccades, one to each remembered target, as we recorded the activity of single LIP neurons. Two targets were flashed briefly while the monkey maintained fixation; after a delay the fixation point was extinguished, and the monkey made two saccades in sequence to each target's remembered location, in the order in which the targets were presented. This "delayed double saccade" (DDS) paradigm allowed us to dissociate the location of visual stimulation from the direction of the planned saccade and thus distinguish neuronal activity related to the target's location from activity related to the saccade plan. By imposing a delay, we eliminated the confounding effect of any phasic responses coincident with the appearance of the stimulus and with the saccade.

3. We arranged the two visual stimuli so that in one set of conditions at least the first one was in the neuron's visual RF, and thus the first saccade was in the neuron's MF. M activity should be high in these conditions according to both the sensory-memory and motor-plan hypotheses. In another set of conditions, the second stimulus appeared in the RF but the first one was presented outside the RF, instructing the monkey to plan the first saccade away from the neuron's MF. If the M activity encodes the motor plan, it should be low in these conditions, reflecting the plan for the first saccade (away from the MF). If it is a sensory trace of the stimulus' location, it should be high, reflecting stimulation of the RF by the second target.

4. We tested 49 LIP neurons (in 3 hemispheres of 2 monkeys) with M activity on the DDS task. Of these, 38 (77%) had M activity related to the next intended saccade. They were active in the delay period, as expected, if the first saccade was in their preferred direction. They were less active or silent if the next saccade was not in their preferred direction, even when the second stimulus appeared in their RF.

5. The M activity of 8 (16%) of the remaining neurons specifically encoded the location of the most recent visual stimulus. Their firing rate during the delay reflected stimulation of the RF independently of the saccade being planned. The remaining 3 neurons had M activity that did not consistently encode either the next saccade or the stimulus' location.

6. We also recorded the activity of a subset of neurons (n = 38) in a condition in which no stimulus appeared in a neuron's RF, but the second saccade was in the neuron's MF. In this case the majority of neurons tested (23/38, 60%) became active in the period between the first and second saccade, even if neither stimulus had appeared in their RF. Moreover, this activity appeared only after the first saccade had started in all but two of these neurons.


eLife · 2018 · Vol 7
Author(s): Mulugeta Semework, Sara C Steenrod, Michael E Goldberg

Humans effortlessly establish a gist-like memory of their environment whenever they enter a new place, a memory that can guide action even in the absence of vision. Neurons in the lateral intraparietal area (LIP) of the monkey exhibit a form of this environmental memory. These neurons respond when a monkey makes a saccade that brings the spatial location of a stimulus that appeared on a number of prior trials, but not on the present trial, into their receptive fields (RFs). The stimulus need never have appeared in the neuron’s RF. This memory response is usually weaker, with a longer latency than the neuron’s visual response. We suggest that these results demonstrate that LIP has access to a supraretinal memory of space, which is activated when the spatial location of the vanished stimulus can be described by a retinotopic vector from the center of gaze to the remembered spatial location.


1992 · Vol 9 (3-4) · pp. 399-407
Author(s): Jon H. Kaas, Leah A. Krubitzer

The middle temporal visual area, MT, is one of three major targets of the primary visual cortex, area 17, in primates. We assessed the contribution of area 17 connections to the responsiveness of area MT neurons to visual stimuli by first mapping the representation of the visual hemifield in MT of anesthetized owl monkeys with microelectrodes, ablating an electrophysiologically mapped part of area 17, and then immediately remapping MT. Before the lesions, neurons at recording sites throughout MT responded vigorously to moving slits of light and other visual stimuli. In addition, the relationship of receptive fields to recording sites revealed a systematic representation of the contralateral visual hemifield in MT, as reported previously for owl monkeys and other primates. The immediate effect of removing part of the retinotopic map in area 17 by gentle aspiration was to selectively deactivate the corresponding part of the visuotopic map in MT. Lesions of dorsomedial area 17 representing central and paracentral vision of the lower visual quadrant deactivated neurons in caudomedial MT formerly having receptive fields in the central and paracentral lower visual quadrant. Most neurons at recording sites throughout other parts of MT had normal levels of responsiveness to visual stimuli, and receptive-field locations that closely matched those before the lesion. However, neurons at a few sites along the margin of the deactivated zone of cortex had receptive fields that were slightly displaced from the region of vision affected by the lesion into other parts of the visual field, suggesting some degree of plasticity in the visual hemifield representation in MT. Subsequent histological examination of cortex confirmed that the lesions were confined to area 17 and the recordings were in MT. The results indicate that the visually evoked activity of neurons in MT of owl monkeys is highly dependent on inputs relayed directly or indirectly from area 17.


2017 · Vol 30 (7-8) · pp. 763-781
Author(s): Jenni Heikkilä, Kimmo Alho, Kaisa Tiippana

Audiovisual semantic congruency during memory encoding has been shown to facilitate later recognition memory performance. However, it is still unclear whether this improvement is due to multisensory semantic congruency or just semantic congruency per se. We investigated whether dual visual encoding facilitates recognition memory in the same way as audiovisual encoding. The participants memorized auditory or visual stimuli paired with a semantically congruent, incongruent or non-semantic stimulus in the same modality or in the other modality during encoding. Subsequent recognition memory performance was better when the stimulus was initially paired with a semantically congruent stimulus than when it was paired with a non-semantic stimulus. This congruency effect was observed with both audiovisual and dual visual stimuli. The present results indicate that not only multisensory but also unisensory semantically congruent stimuli can improve memory performance. Thus, the semantic congruency effect is not solely a multisensory phenomenon, as has been suggested previously.


2017 · Vol 17 (10) · pp. 375
Author(s): Sunwoo Kwon, Sergey Savel'ev, Thomas Albright, Sergei Gepshtein

1975 · Vol 40 (1) · pp. 3-7
Author(s): Gerda Smets

Ss take more time to perceive interesting/displeasing stimuli than uninteresting/pleasing ones. This is consistent with the results of former experiments. However, we used a different operationalization of looking time, based on binocular rivalry. Each of six stimulus pairs was presented in a stereoscope. One member of each pair was interesting but displeasing in comparison to the other member. Stimulus complexity was under control. Due to binocular rivalry, Ss perceived only one pattern at a time. 20 Ss were asked to indicate which pattern they actually saw by pushing two buttons. For each stimulus pair, we registered how long each button was pushed during each of six successive minutes. Unlike other operationalizations, this one is less dependent on S's determination of which stimulus will be looked at or for how long. It has the advantage that it is bound up more exclusively with relations of similarity and dissimilarity between stimulus elements. It allows manipulation of exposure time in a systematic and continuous way. There was no significant interaction between looking and exposure time.


1974 · Vol 38 (2) · pp. 417-418
Author(s): Robert Zenhausern, Claude Pompo, Michael Ciaiola

Simple and complex reaction time to visual stimuli was tested under 7 levels of accessory stimulation (white noise). Only the highest level of stimulation (70 dB above threshold) lowered reaction time; the other levels had no effect.


1981 · Vol 94 (1) · pp. 95-104
Author(s): J. N. Sigger, D. A. Dorsett

The buccal ganglia of Philine each contain a group of mechanoreceptors, consisting of 1 large and 3 small cells, with receptive fields in the oesophagus. Synaptic contacts occur between the receptors, with the large cell providing an EIPSP input to its contralateral partner and to the two groups of smaller receptors. The small receptors make weak excitatory contacts with both of the large receptors. The sensory cells synapse with other buccal motoneurones and interneurones, some of which show periodic activity associated with the feeding movements. Protraction-phase neurones are divisible into two groups, one of which receives EPSPs from the receptors, while the other receives IPSPs. Retraction-phase neurones receive a biphasic EIPSP. The receptors provide excitatory synaptic input to a pair of interneurones which ‘gate’ the feeding cycle. A third class of neurones, which are not rhythmically active during feeding, receive a predominantly inhibitory EIPSP.


2020 · Vol 2 (7) · pp. 4-9
Author(s): Shripriya Singh

The olfactory sense is a potent sensory tool that helps us perceive our environment. However, similar smells can have different impacts on different individuals. What makes one odor categorically different from another, and why people have unique, personalized experiences of smell, are questions that need to be addressed. In the present article we discuss research in which neuroscientists decoded and described how the relationships between different odors are encoded in the brain. How the brain transforms information about odor chemistry into the perception of smell is the major highlight of this work. Carefully selected odors with defined molecular structures were delivered to mice and the resulting neural activity was analyzed. Neuronal representations of smell in the cortex were observed to reflect chemical similarities between odors, allowing the brain to categorize scents. The study employed chemoinformatics and multiphoton imaging in the mouse to demonstrate that both the piriform cortex and its sensory inputs from the olfactory bulb represent chemical odor relationships through correlated patterns of activity. The research gives us clues about how the brain translates odor chemistry into neurochemistry and, eventually, the perception of smell.
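The "correlated patterns of activity" logic can be sketched as a representational-similarity comparison: build an odor-by-odor similarity matrix from chemical descriptors, another from population responses, and correlate the two. Everything below is a hypothetical stand-in (random descriptors and a random linear map playing the role of responses, with arbitrary sizes), not the study's data or analysis pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
n_odors, n_descriptors, n_cells = 10, 20, 100   # assumed, illustrative sizes

chem = rng.normal(size=(n_odors, n_descriptors))           # chemical descriptors
neural = chem @ rng.normal(size=(n_descriptors, n_cells))  # toy responses that
neural += 0.5 * rng.normal(size=neural.shape)              # inherit chemistry + noise

# Odor-by-odor similarity matrices (correlations across descriptors / cells).
chem_sim = np.corrcoef(chem)
neural_sim = np.corrcoef(neural)

# Correlate the off-diagonal entries: do chemically similar odors
# evoke similar population activity patterns?
iu = np.triu_indices(n_odors, k=1)
match = np.corrcoef(chem_sim[iu], neural_sim[iu])[0, 1]
```

Because the toy responses are built from the descriptors, `match` comes out clearly positive here; in the study, a positive match between chemical and neural similarity structure is what supports the claim that the olfactory bulb and piriform cortex encode odor relationships.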


2012 · Vol 25 (0) · pp. 65
Author(s): Vanja Kovic, Jovana Pejovic

A number of studies have demonstrated sound-symbolism effects in adults and in children. Moreover, ERP studies have recently shown that sensitivity to sound-symbolic label–object associations occurs within 200 ms of object presentation (Kovic et al., 2010). It was argued that this effect may reflect a more general process of auditory–visual feature integration, in which properties of auditory stimuli facilitate a mapping to specific visual features. Here we demonstrate that the sound-symbolism effect is design-dependent, namely, it occurs only when mapping from auditory to visual stimuli and not vice versa. Two groups of participants were recruited to solve a categorization task. They were presented with 12 visual stimuli, half of which were rounded and the other half angular shapes. One group was trained to classify the angular objects as ‘takete’ and the rounded ones as ‘maluma’, whereas the other group mapped ‘takete’ to rounded and ‘maluma’ to angular shapes. Moreover, half of each group heard the label before seeing the objects, whereas the other half was given the label after perceiving the object. The results revealed the sound-symbolism effect only in the group trained on the auditory–visual mapping and not in the one trained on the visual–auditory mapping. Thus, despite the previous findings, we demonstrate that the sound-symbolism effect is not unconditional but design-dependent, and we argue that a sound brings up a mental image that is more constrained than the sound imagery brought up by a picture.

