Distributed representations for human inference

2021 ◽  
Author(s):  
Zhenglong Zhou ◽  
Dhairyya Singh ◽  
Marlie C Tandoc ◽  
Anna C Schapiro

Neural representations can be characterized as falling along a continuum, from distributed representations, in which neurons are responsive to many related features of the environment, to localist representations, where neurons orthogonalize activity patterns despite any input similarity. Distributed representations support powerful learning in neural network models and have been posited to exist throughout the brain, but it is unclear under what conditions humans acquire these representations and what computational advantages they may confer. In a series of behavioral experiments, we present evidence that interleaved exposure to new information facilitates the rapid formation of distributed representations in humans. As in neural network models with distributed representations, interleaved learning supports fast and automatic recognition of item relatedness, affords efficient generalization, and is especially critical for inference when learning requires statistical integration of noisy information over time. We use the data to adjudicate between several existing computational models of human memory and inference. The results demonstrate the power of interleaved learning and implicate the use of distributed representations in human inference.
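
The contrast between interleaved and blocked exposure can be illustrated with a toy simulation (a sketch only, not the authors' paradigm or code; the two-item stimuli, network size, and learning rate are assumptions). Two related items are trained into a small two-layer network either in alternation or in separate blocks, and the overlap of their hidden-layer representations is then compared:

```python
# Minimal sketch: interleaved vs. blocked training of a tiny two-layer network.
# Stimuli, architecture, and hyperparameters are illustrative assumptions.
import numpy as np

# Two related items: overlapping input features, distinct one-hot targets.
X = np.array([[1., 1., 1., 0.],    # item A
              [0., 1., 1., 1.]])   # item B (shares two features with A)
Y = np.eye(2)

def train(sequence, hidden=8, lr=0.1, seed=0):
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.1, (4, hidden))
    W2 = rng.normal(0, 0.1, (hidden, 2))
    for i in sequence:               # one gradient step per presentation
        h = np.tanh(X[i] @ W1)
        e = h @ W2 - Y[i]            # output error
        W1 -= lr * np.outer(X[i], (e @ W2.T) * (1 - h ** 2))
        W2 -= lr * np.outer(h, e)
    return np.tanh(X @ W1)           # hidden representations of both items

interleaved = [0, 1] * 500           # A B A B ...
blocked = [0] * 500 + [1] * 500      # all A, then all B

for name, seq in [("interleaved", interleaved), ("blocked", blocked)]:
    a, b = train(seq)
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    print(f"{name}: cosine similarity of hidden representations = {cos:.2f}")
```

In networks of this kind, interleaving lets shared structure accumulate in overlapping hidden units, whereas blocked training tends to overwrite the earlier item; the exact similarity values depend on the assumed setup.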

2007 ◽  
Vol 19 (1) ◽  
pp. 194-217 ◽  
Author(s):  
J. V. Stone ◽  
P. E. Jupp

After a language has been learned and then forgotten, relearning some words appears to facilitate spontaneous recovery of other words. More generally, relearning partially forgotten associations induces recovery of other associations in humans, an effect we call free-lunch learning (FLL). Using neural network models, we prove that FLL is a necessary consequence of storing associations as distributed representations. Specifically, we prove that (1) FLL becomes increasingly likely as the number of synapses (connection weights) increases, suggesting that FLL contributes to memory in neurophysiological systems, and (2) the magnitude of FLL is greatest if inactive synapses are removed, suggesting a computational role for synaptic pruning in physiological systems. We also demonstrate that FLL is different from generalization effects conventionally associated with neural network models. As FLL is a generic property of distributed representations, it may constitute an important factor in human memory.
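
The effect is easy to reproduce in a small linear network with distributed weights (a sketch under assumed dimensions and noise levels, not the authors' proofs or simulations): learn a set of associations, perturb all weights to mimic forgetting, relearn only half of the associations, and check the error on the untouched half:

```python
# Sketch of free-lunch learning in a linear network with distributed weights.
# Dimensions and noise level are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out, n_assoc = 20, 10, 8
X = rng.normal(size=(n_in, n_assoc))           # input patterns (columns)
Y = rng.normal(size=(n_out, n_assoc))          # associated target patterns

W0 = Y @ np.linalg.pinv(X)                     # learn all associations exactly
Wp = W0 + rng.normal(scale=0.5, size=W0.shape) # partial forgetting: perturb weights

A, B = slice(0, 4), slice(4, 8)                # relearned vs. never-relearned halves
mse = lambda W, s: np.mean((W @ X[:, s] - Y[:, s]) ** 2)
before = mse(Wp, B)

W = Wp.copy()
for _ in range(2000):                          # relearn subset A only
    W -= 0.01 * (W @ X[:, A] - Y[:, A]) @ X[:, A].T

print(f"error on associations that were never relearned: "
      f"{before:.3f} -> {mse(W, B):.3f}")      # typically drops: free-lunch learning
```

Because gradient descent on subset A converges to the nearest weight setting that restores A, and the original pre-forgetting weights also satisfy A, relearning typically moves the whole distributed weight vector back toward its original state, which is the FLL effect.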


2019 ◽  
Vol 1 ◽  
pp. 1-2
Author(s):  
Jianfeng Huang ◽  
Yuefeng Liu ◽  
Yue Chen ◽  
Chen Jia

Location-based social networks (LBSNs) are playing an increasingly important role in our daily life: through them, users can share their locations and location-related content at any time. This location information implicitly expresses users' behavioural preferences, so LBSNs have been widely explored for Point-of-Interest (POI) recommendation in recent years. Most existing POI recommenders suggest only a single POI, yet successive POI sequence recommendation is sometimes more practical. For example, travellers in an unfamiliar city expect not a single POI recommendation but a POI sequence recommendation that contains a set of POIs and the order in which to visit them. To solve this problem, this paper proposes a novel model called Context-Aware POI Sequence Recommendation (CPSR), which is developed based on an attention-based neural network. Neural networks have achieved great success in a variety of fields because of their powerful learning ability, and many recent works have demonstrated that attention mechanisms can make neural network models more reasonable.
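
The abstract does not spell out CPSR's architecture, so as a generic illustration of the attention mechanism such recommenders build on, here is a minimal scaled dot-product attention step over hypothetical POI embeddings (all names and dimensions are assumptions):

```python
# Generic scaled dot-product attention over hypothetical POI embeddings;
# CPSR's actual architecture is not detailed in the abstract.
import numpy as np

def attention(Q, K, V):
    """Weight candidate POIs by their similarity to the query context."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)              # softmax over candidates
    return w @ V, w

rng = np.random.default_rng(0)
d = 16
visited = rng.normal(size=(3, d))              # POIs visited so far (assumed)
candidates = rng.normal(size=(5, d))           # candidate next POIs (assumed)

query = visited.mean(0, keepdims=True)         # crude summary of the visit context
context, attn = attention(query, candidates, candidates)
print("attention over candidate next POIs:", np.round(attn, 3))
```

Chaining such a step, with the chosen POI appended to the visited context before the next selection, is one simple way a model of this family can emit an ordered POI sequence rather than a single recommendation.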


2020 ◽  
Author(s):  
Matthew G. Perich ◽  
Charlotte Arlt ◽  
Sofia Soares ◽  
Megan E. Young ◽  
Clayton P. Mosher ◽  
...  

Behavior arises from the coordinated activity of numerous anatomically and functionally distinct brain regions. Modern experimental tools allow unprecedented access to large neural populations spanning many interacting regions brain-wide. Yet understanding such large-scale datasets requires both scalable computational models to extract meaningful features of inter-region communication and principled theories to interpret those features. Here, we introduce Current-Based Decomposition (CURBD), an approach for inferring brain-wide interactions using data-constrained recurrent neural network models that directly reproduce experimentally obtained neural data. CURBD leverages the functional interactions inferred by such models to reveal directional currents between multiple brain regions. We first show that CURBD accurately isolates inter-region currents in simulated networks with known dynamics. We then apply CURBD to multi-region neural recordings obtained from mice during running, macaques during Pavlovian conditioning, and humans during memory retrieval, demonstrating the widespread applicability of CURBD for untangling the brain-wide interactions underlying behavior in a variety of neural datasets.
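
The core decomposition is straightforward to sketch: once a data-constrained RNN is trained, its recurrent input J·r splits exactly into per-source-region currents. The following toy version (assumed two regions and a random rather than fitted J) shows the bookkeeping, not the model fitting:

```python
# Sketch of CURBD-style current decomposition on a toy RNN (the interaction
# matrix J here is random; in CURBD it is fit to reproduce recorded data).
import numpy as np

rng = np.random.default_rng(0)
N, T = 40, 200
regions = {"A": np.arange(0, 20), "B": np.arange(20, 40)}
J = rng.normal(scale=1.5 / np.sqrt(N), size=(N, N))

# Simulate rates: tau dx/dt = -x + J r,  with r = tanh(x)
dt, tau = 0.05, 1.0
x = rng.normal(size=N)
R = np.empty((T, N))
for t in range(T):
    R[t] = np.tanh(x)
    x += (dt / tau) * (-x + J @ R[t])

# The current into each target region splits exactly by source region,
# since (J r)_tgt = sum over src of J[tgt, src] @ r_src.
currents = {
    f"{src}->{tgt}": R[:, regions[src]] @ J[np.ix_(regions[tgt], regions[src])].T
    for src in regions for tgt in regions
}
total_into_A = sum(currents[f"{s}->A"] for s in regions)
assert np.allclose(total_into_A, R @ J[regions["A"]].T)   # decomposition is exact
print({k: v.shape for k, v in currents.items()})          # each: (T, n_target_units)
```

The decomposition itself is exact linear bookkeeping; the substantive inference in CURBD lies in constraining J so that the model reproduces the recorded population activity.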


MRS Bulletin ◽  
1988 ◽  
Vol 13 (8) ◽  
pp. 30-35 ◽  
Author(s):  
Dana Z. Anderson

From the time of their conception, holography and holograms have evolved as a metaphor for human memory. Holograms can be made so that the information they contain is distributed throughout the holographic medium: destroy part of the hologram and the stored information remains largely intact, apart from a loss of detail. In this property holograms evidently have something in common with human memory, which is to some extent resilient against physical damage to the brain. But there is much more to the metaphor than the fact that information is stored in a distributed manner.

Research in the optics community is now looking to holography, in particular dynamic holography, not only for information storage but for information processing as well. The ideas are based upon neural network models: models for processing inspired by the apparent architecture of the brain, a paradigm that is new to optics. Within this network paradigm we look to build machines that can store and recall information associatively, play back a chain of recorded events, undergo learning and possibly forgetting, make decisions, adapt to a particular environment, and self-organize to evolve some desirable behavior. We hope that neural network models will give rise to optical machines for memory, speech processing, visual processing, language acquisition, motor control, and so on.
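
The two properties the article highlights, distributed storage and associative recall from a damaged cue, are conventionally illustrated with a Hopfield-style network (a textbook stand-in, not the optical implementation under discussion):

```python
# Classic Hopfield-style associative memory: information about each stored
# pattern is distributed across all weights, and recall survives a damaged cue.
# A standard textbook stand-in, not a model of the optical hardware discussed.
import numpy as np

rng = np.random.default_rng(2)
N, P = 100, 3
patterns = rng.choice([-1, 1], size=(P, N))    # stored memories

W = (patterns.T @ patterns) / N                # Hebbian outer-product storage
np.fill_diagonal(W, 0)

cue = patterns[0].copy()                       # damage the cue: flip 30% of bits
cue[rng.choice(N, size=30, replace=False)] *= -1

s = cue.copy()
for _ in range(10):                            # iterate toward a stored attractor
    s = np.sign(W @ s)
    s[s == 0] = 1

print("overlap with original memory:", (s @ patterns[0]) / N)  # ~1.0 on success
```

As with a partially destroyed hologram, no single weight holds the memory; each pattern is spread across the whole matrix, so moderate damage to the cue (or to the weights) degrades rather than erases recall.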


2019 ◽  
Author(s):  
Gabriele Scheler ◽  
Johann Schumann

The issue of memory is difficult for standard neural network models. Ubiquitous synaptic plasticity introduces the problem of interference, which limits pattern recall and introduces conflation errors. We present a lognormal recurrent neural network, load patterns into it (MNIST), and test the resulting neural representation for information content with an output classifier. We identify neurons which 'compress' the pattern information into their own adjacency network, and by stimulating these we achieve recall. Learning is limited to intrinsic plasticity and the output synapses of these pattern neurons (localist plasticity), which prevents interference.

Our first experiments show that this form of storage and recall is possible, with the caveat of a 'lossy' recall similar to human memory. Comparing our results with a standard Gaussian network model, we find that this effect breaks down for the Gaussian model.
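
As a small illustration of why the weight distribution matters (parameters are assumptions, not those of the paper): lognormal synapses concentrate total weight in a few strong connections, giving candidate 'pattern neurons' something to anchor on, whereas matched Gaussian weights spread it far more evenly:

```python
# Sketch: heavy-tailed (lognormal) vs. matched Gaussian synaptic weights.
# Illustrative parameters only; not the paper's network or MNIST experiment.
import numpy as np

rng = np.random.default_rng(3)
n = 200_000                                        # sample of synaptic weights
w_log = rng.lognormal(mean=-1.0, sigma=1.0, size=n)
w_gau = np.abs(rng.normal(loc=w_log.mean(), scale=w_log.std(), size=n))

for name, w in [("lognormal", w_log), ("gaussian ", w_gau)]:
    top = np.sort(w)[-n // 100:]                   # strongest 1% of synapses
    print(f"{name}: strongest 1% of synapses carry "
          f"{top.sum() / w.sum():.0%} of the total weight")
```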


eLife ◽  
2022 ◽  
Vol 11 ◽  
Author(s):  
Baohua Zhou ◽  
Zifan Li ◽  
Sunnie Kim ◽  
John Lafferty ◽  
Damon A Clark

Animals have evolved sophisticated visual circuits to solve a vital inference problem: detecting whether or not a visual signal corresponds to an object on a collision course. Such events are detected by specific circuits sensitive to visual looming, or objects increasing in size. Various computational models have been developed for these circuits, but how the collision-detection inference problem itself shapes the computational structures of these circuits remains unknown. Here, inspired by the distinctive structures of LPLC2 neurons in the visual system of Drosophila, we build anatomically-constrained shallow neural network models and train them to identify visual signals that correspond to impending collisions. Surprisingly, the optimization arrives at two distinct, opposing solutions, only one of which matches the actual dendritic weighting of LPLC2 neurons. Both solutions can solve the inference problem with high accuracy when the population size is large enough. The LPLC2-like solution reproduces experimentally observed LPLC2 neuron responses for many stimuli and the canonical tuning of loom-sensitive neurons, even though the models are never trained on neural data. Thus, LPLC2 neuron properties and tuning are predicted by optimizing an anatomically-constrained neural network to detect impending collisions. More generally, these results illustrate how optimizing inference tasks that are important for an animal's perceptual goals can reveal and explain computational properties of specific sensory neurons.
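
The outward-selective solution can be caricatured in a few lines (a hand-built sketch of the selectivity, not the paper's trained model): weight local motion by its agreement with radially outward flow, rectify, and pool. Expanding (looming) flow then drives a large response while contracting flow is silenced:

```python
# Hand-built caricature of the outward-selective (LPLC2-like) solution:
# weight local motion by agreement with radially outward flow, rectify, pool.
# Illustrative only; the paper's models are optimized, not hand-set.
import numpy as np

def radial_flow(n=11, outward=True):
    """Unit motion vectors pointing away from (or toward) the field center."""
    c = (n - 1) / 2
    ys, xs = np.mgrid[0:n, 0:n]
    v = np.stack([ys - c, xs - c], axis=-1).astype(float)
    norms = np.linalg.norm(v, axis=-1, keepdims=True)
    norms[norms == 0] = 1.0
    return v / norms if outward else -v / norms

def lplc2_like_response(flow):
    preferred = radial_flow(flow.shape[0], outward=True)   # outward weighting
    outward_component = (flow * preferred).sum(-1)         # radial projection
    return np.maximum(outward_component, 0).sum()          # rectify and pool

print("looming (expanding) stimulus:", lplc2_like_response(radial_flow(outward=True)))
print("receding (contracting) stimulus:", lplc2_like_response(radial_flow(outward=False)))
```

The rectification is what makes the two opposing solutions possible in principle: a unit can signal collisions either by exciting on outward motion, as above, or by the mirror-image inward-weighted scheme, and only the former matches LPLC2 dendrites.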


2020 ◽  
Vol 5 ◽  
pp. 140-147 ◽  
Author(s):  
T.N. Aleksandrova ◽  
E.K. Ushakov ◽  
A.V. Orlova ◽  
...

A series of neural network models used in the development of an aggregated digital twin of equipment as a cyber-physical system is presented. The twins of machining accuracy, chip formation, and tool wear are examined in detail. On their basis, systems for stabilizing the chip formation process during cutting and for diagnosing cutting tool wear are developed.

Keywords: cyber-physical system; neural network model of equipment; big data; digital twin of chip formation; digital twin of tool wear; digital twin of nanostructured coating choice
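
As a toy illustration of such a twin (entirely hypothetical data, wear law, and model; the paper's networks and process variables are not specified in the abstract), a small feed-forward network can be fit to map cutting parameters to tool wear:

```python
# Hypothetical sketch of a tool-wear "digital twin": a small feed-forward
# network mapping cutting parameters to flank wear. Synthetic data; a real
# twin would be fit to sensor measurements from the machine.
import numpy as np

rng = np.random.default_rng(4)

# Inputs: cutting speed (m/min), feed (mm/rev), cutting time (min).
X = rng.uniform([50, 0.05, 1], [300, 0.5, 60], size=(500, 3))
# Assumed wear law (illustrative) plus measurement noise -> flank wear (mm).
y = 1e-4 * X[:, 0] * X[:, 2] ** 0.6 + 0.3 * X[:, 1] + rng.normal(0, 0.01, 500)

Xn = (X - X.mean(0)) / X.std(0)                # normalize inputs
W1 = rng.normal(0, 0.5, (3, 16)); b1 = np.zeros(16)
w2 = rng.normal(0, 0.5, 16); b2 = 0.0

for _ in range(3000):                          # full-batch gradient descent
    H = np.tanh(Xn @ W1 + b1)
    e = H @ w2 + b2 - y                        # prediction error
    dH = np.outer(e, w2) * (1 - H ** 2)        # backprop through tanh
    W1 -= 0.05 * Xn.T @ dH / len(y); b1 -= 0.05 * dH.mean(0)
    w2 -= 0.05 * H.T @ e / len(y);   b2 -= 0.05 * e.mean()

rmse = np.sqrt(np.mean((np.tanh(Xn @ W1 + b1) @ w2 + b2 - y) ** 2))
print(f"twin RMSE on training data: {rmse:.3f} mm")
```

In an aggregated twin of the kind described, several such models (accuracy, chip formation, wear) would run side by side and feed a supervisory system that adjusts cutting conditions online.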

