A Self-Organized Neural Comparator

2013 ◽  
Vol 25 (4) ◽  
pp. 1006-1028 ◽  
Author(s):  
Guillermo A. Ludueña ◽  
Claudius Gros

Learning algorithms generally need the ability to compare several streams of information. Neural learning architectures hence need a unit, a comparator, able to compare several inputs encoding either internal or external information, for instance, predictions and sensory readings. Without the possibility of comparing the values of predictions to actual sensory inputs, reward evaluation and supervised learning would not be possible. Comparators are usually not implemented explicitly. Necessary comparisons are commonly performed by directly comparing the respective activities one-to-one. This implies that the characteristics of the two input streams (like size and encoding) must be provided at the time of designing the system. It is, however, plausible that biological comparators emerge from self-organizing, genetically encoded principles, which allow the system to adapt to the changes in the input and the organism. We propose an unsupervised neural circuitry, where the function of input comparison emerges via self-organization only from the interaction of the system with the respective inputs, without external influence or supervision. The proposed neural comparator adapts in an unsupervised fashion according to the correlations present in the input streams. The system consists of a multilayer feedforward neural network, which follows a local output minimization (anti-Hebbian) rule for adaptation of the synaptic weights. The local output minimization allows the circuit to autonomously acquire the capability of comparing the neural activities received from different neural populations, which may differ in population size and the neural encoding used. The comparator is able to compare objects never encountered before in the sensory input streams and evaluate a measure of their similarity even when differently encoded.
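
The core of the local output-minimization idea can be sketched in a few lines. The toy below is an editorial illustration, not the paper's multilayer architecture: a single linear unit receives two concatenated streams, where the second stream re-encodes the first through a fixed random map `A` (all dimensions and rates are assumed for the example). A normalized anti-Hebbian update drives the weights toward a minimal-variance direction, so the unit's output becomes near zero for matched pairs and stays large for mismatched ones.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 4                          # dimension of the underlying signal (assumed)
A = rng.normal(size=(d, d))    # fixed, unknown encoding of the second stream

def pair(matched):
    """Return a concatenated input [x1; x2]; x2 re-encodes x1 iff matched."""
    x1 = rng.normal(size=d)
    x2 = A @ (x1 if matched else rng.normal(size=d))
    return np.concatenate([x1, x2])

# Normalized anti-Hebbian learning: w <- w - eta * z * x, then renormalize.
# On average this is power iteration on (I - eta*C), converging to the
# minimal-variance direction of the input covariance C -- a direction along
# which matched (correlated) pairs produce near-zero output.
w = rng.normal(size=2 * d)
w /= np.linalg.norm(w)
eta = 0.01
for _ in range(30000):
    x = pair(matched=True)     # learning sees only correlated input pairs
    z = w @ x                  # comparator output
    w -= eta * z * x           # anti-Hebbian (output-minimizing) update
    w /= np.linalg.norm(w)     # keep the weight vector from collapsing to 0

match_rms = np.sqrt(np.mean([(w @ pair(True)) ** 2 for _ in range(2000)]))
mismatch_rms = np.sqrt(np.mean([(w @ pair(False)) ** 2 for _ in range(2000)]))
print(match_rms, mismatch_rms)  # output magnitude is far smaller for matches
```

The output magnitude itself then serves as the similarity measure, with no supervision and no hand-wired one-to-one correspondence between the two streams.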

2019 ◽  
Vol 31 (2) ◽  
pp. 233-269 ◽  
Author(s):  
Christophe Gardella ◽  
Olivier Marre ◽  
Thierry Mora

The principles of neural encoding and computations are inherently collective and usually involve large populations of interacting neurons with highly correlated activities. While theories of neural function have long recognized the importance of collective effects in populations of neurons, only in the past two decades has it become possible to record from many cells simultaneously using advanced experimental techniques with single-spike resolution and to relate these correlations to function and behavior. This review focuses on the modeling and inference approaches that have been recently developed to describe the correlated spiking activity of populations of neurons. We cover a variety of models describing correlations between pairs of neurons, as well as between larger groups, synchronous or delayed in time, with or without the explicit influence of the stimulus, and including or not latent variables. We discuss the advantages and drawbacks of each method, as well as the computational challenges related to their application to recordings of ever larger populations.
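
A canonical member of the model family reviewed here is the pairwise maximum-entropy (Ising) model. As a hedged sketch (population size, target moments, and learning rate are all illustrative, and the 8 spike words are enumerated exactly rather than sampled), fitting reduces to gradient ascent on a concave log-likelihood whose gradient is simply the mismatch between data and model moments:

```python
import itertools
import numpy as np

# All 2^3 binary spike words of a toy 3-neuron population
states = np.array(list(itertools.product([0, 1], repeat=3)), dtype=float)
pairs = [(0, 1), (0, 2), (1, 2)]

def model_probs(h, J):
    """p(s) proportional to exp(sum_i h_i s_i + sum_{i<j} J_ij s_i s_j)."""
    logp = states @ h
    for k, (i, j) in enumerate(pairs):
        logp = logp + J[k] * states[:, i] * states[:, j]
    p = np.exp(logp)
    return p / p.sum()

def moments(p):
    """Firing rates <s_i> and pairwise correlations <s_i s_j>."""
    means = p @ states
    corrs = np.array([p @ (states[:, i] * states[:, j]) for i, j in pairs])
    return means, corrs

# Target moments, generated here from known parameters so the fit is checkable
m_tgt, c_tgt = moments(model_probs(np.array([-1.0, -0.5, -1.5]),
                                   np.array([0.8, 0.0, 0.4])))

# Gradient ascent on the concave log-likelihood (moment matching)
h, J = np.zeros(3), np.zeros(3)
for _ in range(20000):
    m, c = moments(model_probs(h, J))
    h += 0.5 * (m_tgt - m)
    J += 0.5 * (c_tgt - c)

m_fit, c_fit = moments(model_probs(h, J))
print(np.round(m_fit, 4), np.round(c_fit, 4))  # matches the target moments
```

For real recordings the same logic applies, but the exact enumeration must be replaced by Monte Carlo or mean-field estimates once the population grows beyond a few tens of neurons, which is precisely the computational challenge the review discusses.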


2021 ◽  
Author(s):  
Luke Miller ◽  
Cecile Fabio ◽  
Frederique de Vignemont ◽  
Alice Roy ◽  
W. Pieter Medendorp ◽  
...  

It is often claimed that tools are embodied by the user, but whether the brain actually repurposes its body-based computations to perform similar tasks with tools is not known. A fundamental body-based computation used by the somatosensory system is trilateration. Here, the location of touch on a limb is computed by integrating estimates of the distance between sensory input and its boundaries (e.g., elbow and wrist of the forearm). As evidence of this computational mechanism, tactile localization on a limb is most precise near its boundaries and lowest in the middle. If the brain repurposes trilateration to localize touch on a tool, we should observe this computational signature in behavior. In a large sample of participants, we indeed found that localizing touch on a tool produced the signature of trilateration, with highest precision close to the base and tip of the tool. A computational model of trilateration provided a good fit to the observed localization behavior. Importantly, model selection demonstrated that trilateration better explained each participant's behavior than an alternative model of localization. These results have important implications for how trilateration may be implemented by somatosensory neural populations. In sum, the present study suggests that tools are indeed embodied at a computational level, repurposing a fundamental spatial computation.
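
The trilateration signature described above follows from optimal (inverse-variance-weighted) fusion of two distance estimates whose noise grows with distance from each boundary. The following sketch assumes a simple Weber-like noise model with an arbitrary noise floor; it is an illustration of the computational signature, not the paper's fitted model:

```python
import numpy as np

L = 1.0                          # limb (or tool) length, arbitrary units
x = np.linspace(0.0, L, 101)     # touch locations from base to tip

# Assumed noise model: each boundary supplies a distance estimate whose
# standard deviation grows linearly with distance, plus a small floor.
k, sigma0 = 0.2, 0.01
var_near = (sigma0 + k * x) ** 2         # estimate anchored at the base
var_far = (sigma0 + k * (L - x)) ** 2    # estimate anchored at the tip

# Optimal fusion of the two independent estimates
var_fused = 1.0 / (1.0 / var_near + 1.0 / var_far)

# Localization variance is lowest near the boundaries, highest mid-surface
print(var_fused[0], var_fused[50], var_fused[-1])
```

The inverted-U variance profile (highest precision at the base and tip, lowest in the middle) is exactly the behavioral signature the study reports for both limbs and tools.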


2020 ◽  
Vol 11 (1) ◽  
Author(s):  
Sorin A. Pojoga ◽  
Natasha Kharas ◽  
Valentin Dragoi

Our daily behavior is dynamically influenced by conscious and unconscious processes. Although the neural bases of conscious experience have been extensively investigated over the past several decades, how unconscious information impacts neural circuitry and behavior remains unknown. Here, we recorded populations of neurons in macaque primary visual cortex (V1) to find that perceptually unidentifiable stimuli repeatedly presented in the absence of awareness are encoded by neural populations in a way that facilitates their future processing in the context of a behavioral task. Such exposure increases stimulus sensitivity and information encoded in cell populations, even though animals are unaware of stimulus identity. This phenomenon is consistent with a Hebbian mechanism underlying an increase in functional connectivity specifically for the neurons activated by subthreshold stimuli. This form of unsupervised adaptation may constitute a vestigial pre-attention system using the mere frequency of stimulus occurrence to change stimulus representations even when sensory inputs are perceptually invisible.
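
The proposed Hebbian mechanism can be illustrated with a minimal sketch (population size, drive levels, and learning rate are all assumptions for the example, not quantities from the recordings): repeated weak co-activation of the stimulus-driven subset of neurons selectively strengthens the functional coupling among exactly those cells.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
W = np.zeros((n, n))             # functional coupling matrix

driven = np.arange(5)            # neurons weakly driven by the unseen stimulus
eta = 0.01
for _ in range(200):             # repeated subthreshold exposures
    s = 0.1 * rng.random(n)      # background activity
    s[driven] += 0.5             # weak, stimulus-driven extra drive
    W += eta * np.outer(s, s)    # Hebbian update: co-active pairs strengthen
np.fill_diagonal(W, 0.0)

within = W[np.ix_(driven, driven)].mean()                     # driven-driven
rest = np.delete(np.delete(W, driven, 0), driven, 1).mean()   # all other pairs
print(within, rest)   # coupling grows selectively among co-activated cells
```

The selective strengthening requires no supervision or awareness, only repeated co-occurrence, which is the sense in which the abstract calls the adaptation unsupervised.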


2004 ◽  
Vol 91 (2) ◽  
pp. 666-677 ◽  
Author(s):  
Adam S. Bristol ◽  
Michael A. Sutton ◽  
Thomas J. Carew

The tail-elicited siphon withdrawal reflex (TSW) has been a useful preparation in which to study learning and memory in Aplysia. However, comparatively little is known about the neural circuitry that translates tail sensory input (via the P9 nerves to the pleural ganglion) to final reflex output by siphon motor neurons (MNs) in the abdominal ganglion. To address this question, we examined the functional architecture of the TSW circuit by selectively severing nerves of semi-intact preparations and recording either tail-evoked responses in the siphon MNs or measuring siphon withdrawal responses directly. We found that the neural circuit underlying TSW is functionally lateralized. We next tested whether the expression of learning in the TSW reflects the underlying circuit architecture and shows side-specificity. We tested behavioral and physiological correlates of three forms of learning: sensitization, habituation, and dishabituation. Consistent with the circuit architecture, we found that sensitization and habituation of TSW are expressed in a side-specific manner. Unexpectedly, we found that dishabituation was expressed bilaterally, suggesting that a modulatory pathway bridges the two (ipsilateral) input pathways of the circuit, but this path is only revealed for a specific form of learning, dishabituation. These results suggest that the effects of a descending modulatory signal are differentially “gated” during sensitization and dishabituation.


eLife ◽  
2020 ◽  
Vol 9 ◽  
Author(s):  
Neal P Fox ◽  
Matthew Leonard ◽  
Matthias J Sjerps ◽  
Edward F Chang

In speech, listeners extract continuously-varying spectrotemporal cues from the acoustic signal to perceive discrete phonetic categories. Spectral cues are spatially encoded in the amplitude of responses in phonetically-tuned neural populations in auditory cortex. It remains unknown whether similar neurophysiological mechanisms encode temporal cues like voice-onset time (VOT), which distinguishes sounds like /b/ and /p/. We used direct brain recordings in humans to investigate the neural encoding of temporal speech cues with a VOT continuum from /ba/ to /pa/. We found that distinct neural populations respond preferentially to VOTs from one phonetic category, and are also sensitive to sub-phonetic VOT differences within a population’s preferred category. In a simple neural network model, simulated populations tuned to detect either temporal gaps or coincidences between spectral cues captured encoding patterns observed in real neural data. These results demonstrate that a spatial/amplitude neural code underlies the cortical representation of both spectral and temporal speech cues.
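
The gap/coincidence idea can be sketched with two hypothetical detector populations reading out the lag between burst onset and voicing onset (the boundary value and slope below are assumptions, not the paper's fitted parameters): each population prefers one category, yet its response amplitude still grades with VOT within that category, yielding a spatial/amplitude code for a temporal cue.

```python
import numpy as np

# VOT continuum from /ba/ (0 ms) to /pa/ (50 ms)
vot = np.linspace(0.0, 50.0, 11)   # ms

tau = 15.0   # ms, assumed soft boundary between the two categories
# "Gap detector": responds to long burst-to-voicing lags (/pa/-like)
gap_resp = 1.0 / (1.0 + np.exp(-(vot - tau) / 5.0))
# "Coincidence detector": responds to near-simultaneous onsets (/ba/-like)
coinc_resp = 1.0 - gap_resp

print(np.round(gap_resp, 2))     # rises monotonically along the continuum
print(np.round(coinc_resp, 2))   # falls monotonically along the continuum
```

Which population is most active signals the category; how active it is signals the sub-phonetic VOT value within the category.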


2019 ◽  
Author(s):  
Xue-Xin Wei ◽  
Ding Zhou ◽  
Andres Grosmark ◽  
Zaki Ajabi ◽  
Fraser Sparks ◽  
...  

Calcium imaging is a critical tool for measuring the activity of large neural populations. Much effort has been devoted to developing “pre-processing” tools applied to calcium video data, addressing the important issues of e.g., motion correction, denoising, compression, demixing, and deconvolution. However, computational modeling of deconvolved calcium signals (i.e., the estimated activity extracted by a pre-processing pipeline) is just as critical for interpreting calcium measurements. Surprisingly, these issues have to date received significantly less attention. To fill this gap, we examine the statistical properties of the deconvolved activity estimates, and propose several density models for these random signals. These models include a zero-inflated gamma (ZIG) model, which characterizes the calcium responses as a mixture of a gamma distribution and a point mass which serves to model zero responses. We apply the resulting models to neural encoding and decoding problems. We find that the ZIG model outperforms simpler models (e.g., Poisson or Bernoulli models) in the context of both simulated and real neural data, and can therefore play a useful role in bridging calcium imaging analysis methods with tools for analyzing activity in large neural populations.
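
The ZIG density itself is simple to work with: a point mass at zero with probability q, and a gamma distribution otherwise. A minimal sketch (the specific parameter values are arbitrary, and the moment-based recovery below is a simple estimator, not the paper's fitting procedure):

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_zig(q, shape, scale, size):
    """Zero-inflated gamma: zero with probability q, else Gamma(shape, scale)."""
    zero = rng.random(size) < q
    x = rng.gamma(shape, scale, size)
    x[zero] = 0.0
    return x

q_true = 0.6   # probability of a zero (no deconvolved activity in the bin)
x = sample_zig(q_true, shape=2.0, scale=0.5, size=100000)

# Recover the mixture parameters from the samples
q_hat = np.mean(x == 0.0)
nz = x[x > 0]
# Gamma method of moments: shape = mean^2 / var, scale = var / mean
shape_hat = nz.mean() ** 2 / nz.var()
scale_hat = nz.var() / nz.mean()
print(q_hat, shape_hat, scale_hat)
```

The point mass captures the many exactly-zero bins that deconvolution pipelines produce, which is precisely what Poisson or Bernoulli likelihoods fail to represent for continuous-valued, zero-heavy activity estimates.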


eLife ◽  
2017 ◽  
Vol 6 ◽  
Author(s):  
Robert G Rasmussen ◽  
Andrew Schwartz ◽  
Steven M Chase

Neural populations from various sensory regions demonstrate dynamic range adaptation in response to changes in the statistical distribution of their input stimuli. These adaptations help optimize the transmission of information about sensory inputs. Here, we show a similar effect in the firing rates of primary motor cortical cells. We trained monkeys to operate a brain-computer interface in both two- and three-dimensional virtual environments. We found that neurons in primary motor cortex exhibited a change in the amplitude of their directional tuning curves between the two tasks. We then leveraged the simultaneous nature of the recordings to test several hypotheses about the population-based mechanisms driving these changes and found that the results are most consistent with dynamic range adaptation. Our results demonstrate that dynamic range adaptation is neither limited to sensory regions nor to rescaling of monotonic stimulus intensity tuning curves, but may rather represent a canonical feature of neural encoding.
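
Dynamic range adaptation of a cosine-tuned unit amounts to rescaling the tuning-curve amplitude so that a wider input distribution still maps onto the same output firing-rate range. A minimal sketch (the baseline, rate limits, and input ranges are illustrative assumptions, not values from the recordings):

```python
import numpy as np

# Cosine-tuned unit: rate = b + g * (v . p), with p the preferred direction
r_min, r_max = 5.0, 60.0            # fixed output dynamic range (Hz)
b = 0.5 * (r_min + r_max)           # baseline midway through the range
p = np.array([1.0, 0.0])            # preferred direction

def adapted_gain(speed_max):
    """Rescale the gain so extreme inputs span the same output range."""
    return (r_max - r_min) / (2.0 * speed_max)

for speed_max in (1.0, 2.0):        # narrower vs wider input distribution
    g = adapted_gain(speed_max)
    proj = np.linspace(-speed_max, speed_max, 5)   # v . p across the inputs
    rates = b + g * proj
    print(speed_max, g, rates.min(), rates.max())  # range stays [5, 60] Hz
```

When the input range doubles, the gain halves: the tuning-curve amplitude changes while the output range is preserved, which is the signature the study reports between the two- and three-dimensional tasks.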


eLife ◽  
2016 ◽  
Vol 5 ◽  
Author(s):  
Antoine Wystrach ◽  
Konstantinos Lagogiannis ◽  
Barbara Webb

Taxis behaviour in Drosophila larva is thought to consist of distinct control mechanisms triggering specific actions. Here, we support a simpler hypothesis: that taxis results from direct sensory modulation of continuous lateral oscillations of the anterior body, sparing the need for ‘action selection’. Our analysis of larval motion reveals a rhythmic, continuous lateral oscillation of the anterior body, encompassing all head-sweeps, small or large, without breaking the oscillatory rhythm. Further, we show that an agent model that embeds this hypothesis reproduces a surprising number of taxis signatures observed in larvae. Also, by coupling the sensory input to a neural oscillator in continuous time, we show that the mechanism is robust and biologically plausible. The mechanism provides a simple architecture for combining information across modalities, and explaining how learnt associations modulate taxis. We discuss the results in the light of larval neural circuitry and make testable predictions.


2019 ◽  
Vol 31 (5) ◽  
pp. 943-979 ◽  
Author(s):  
Peng Yi ◽  
ShiNung Ching

A key aspect of the neural coding problem is understanding how representations of afferent stimuli are built through the dynamics of learning and adaptation within neural networks. The infomax paradigm is built on the premise that such learning attempts to maximize the mutual information between input stimuli and neural activities. In this letter, we tackle the problem of such information-based neural coding with an eye toward two conceptual hurdles. Specifically, we examine and then show how this form of coding can be achieved with online input processing. Our framework thus obviates the biological incompatibility of optimization methods that rely on global network awareness and batch processing of sensory signals. Central to our result is the use of variational bounds as a surrogate objective function, an established technique that has not previously been shown to yield online policies. We obtain learning dynamics for both linear-continuous and discrete spiking neural encoding models under the umbrella of linear gaussian decoders. This result is enabled by approximating certain information quantities in terms of neuronal activity via pairwise feedback mechanisms. Furthermore, we tackle the problem of how such learning dynamics can be realized with strict energetic constraints. We show that endowing networks with auxiliary variables that evolve on a slower timescale can allow for the realization of saddle-point optimization within the neural dynamics, leading to neural codes with favorable properties in terms of both information and energy.
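
One standard variational lower bound of this kind (whether it is the exact surrogate used in the letter is an assumption here) replaces the intractable posterior in the mutual information between stimulus X and response R with a tractable decoder q:

```latex
I(X;R) \;=\; H(X) - H(X \mid R)
\;\ge\; H(X) + \mathbb{E}_{p(x,r)}\!\bigl[\log q(x \mid r)\bigr],
```

with equality iff $q(x\mid r) = p(x\mid r)$. For a linear Gaussian decoder $q(x\mid r) = \mathcal{N}(x;\, Wr,\, \Sigma)$, as assumed in the letter, the bound becomes an online-computable squared-error objective:

```latex
\mathbb{E}\bigl[\log q(x \mid r)\bigr]
\;=\; -\tfrac{1}{2}\,\mathbb{E}\bigl[(x - Wr)^{\top}\Sigma^{-1}(x - Wr)\bigr]
      \;-\; \tfrac{1}{2}\log\det(2\pi\Sigma),
```

so maximizing the bound over the encoding amounts to minimizing the decoder's reconstruction error, which is what permits sample-by-sample (online) updates without batch processing.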


Author(s):  
Angela Fan ◽  
Claire Gardent ◽  
Chloé Braud ◽  
Antoine Bordes

Various machine learning tasks can benefit from access to external information of different modalities, such as text and images. Recent work has focused on learning architectures with large memories capable of storing this knowledge. We propose augmenting generative Transformer neural networks with KNN-based Information Fetching (KIF) modules. Each KIF module learns a read operation to access fixed external knowledge. We apply these modules to generative dialog modeling, a challenging task where information must be flexibly retrieved and incorporated to maintain the topic and flow of conversation. We demonstrate the effectiveness of our approach by identifying relevant knowledge required for knowledgeable but engaging dialog from Wikipedia, images, and human-written dialog utterances, and show that leveraging this retrieved information improves model performance, measured by automatic and human evaluation.
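
The read operation at the heart of a KIF module is a K-nearest-neighbor lookup into a fixed bank of knowledge embeddings. The sketch below shows only that frozen-memory lookup (in the actual modules the read is learned end-to-end, and all names and sizes here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical fixed external memory: each row embeds one knowledge item
# (e.g., a Wikipedia sentence); real systems use learned text encoders.
memory_keys = rng.normal(size=(1000, 64))
memory_keys /= np.linalg.norm(memory_keys, axis=1, keepdims=True)

def kif_read(query, k=5):
    """KNN read: indices of the k most similar memory entries to the query."""
    q = query / np.linalg.norm(query)
    sims = memory_keys @ q                  # cosine similarity to every entry
    top = np.argpartition(-sims, k)[:k]     # k best, unordered
    return top[np.argsort(-sims[top])]      # sorted by decreasing similarity

# A query built near a known memory entry should retrieve that entry first
target = 42
query = memory_keys[target] + 0.05 * rng.normal(size=64)
idx = kif_read(query)
print(idx)
```

In the full model, the retrieved entries' representations are then fed back into the generative Transformer; the lookup itself stays cheap because the memory is fixed and can be pre-encoded once.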

