Hebbian Learning Rule
Recently Published Documents


TOTAL DOCUMENTS: 53 (five years: 8)

H-INDEX: 13 (five years: 1)

2021 · Author(s): Evgeni Bolotin, Daniel Melamed, Adi Livnat

Cases of parallel or recurrent gene fusions, whether in evolution or in cancer and genetic disease, are difficult to explain, as they require the same or similar breakpoints to recur multiple times. The used-together-fused-together hypothesis holds that genes that are used together repeatedly and persistently in a certain context are more likely than otherwise to undergo a fusion mutation in the course of evolution, reminiscent of the Hebbian learning rule, where neurons that fire together wire together. This mutational hypothesis offers to explain both the evolutionary parallelism of gene fusions and their recurrence in disease under one umbrella. Here, we test this hypothesis using bioinformatic data. Various measures of gene interaction, including co-expression, co-localization, same-TAD presence and semantic similarity of GO terms, show that human genes whose homologs are fused in one or more other organisms are significantly more likely to interact with each other than random genes, controlling for the genomic distance between genes. In addition, we find a statistically significant overlap between pairs of genes that fused in the course of evolution in non-human species and pairs that undergo fusion in human cancers. These results support the used-together-fused-together hypothesis over several alternative hypotheses, including the hypothesis that all gene pairs can fuse by random mutation but that, among pairs that have thus fused, those that have interacted previously are more likely to be favored by selection. Multiple consequences are discussed, including the relevance of mutational mechanisms to exon shuffling, to the distribution of fitness effects of mutation and to parallelism.
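The overlap test lends itself to a simple resampling illustration. The sketch below (Python, with purely illustrative function and variable names) estimates an empirical p-value for the overlap between two sets of gene pairs by comparing it with overlaps obtained from randomly drawn pairs; note that it omits the control for genomic distance applied in the actual analysis.

```python
import random

def overlap_permutation_test(fused_pairs, cancer_pairs, all_genes,
                             n_perm=10_000, seed=0):
    """Empirical p-value for the overlap between two sets of gene pairs.

    fused_pairs, cancer_pairs: sets of frozenset({gene_a, gene_b})
    all_genes: list of gene identifiers from which random pairs are drawn
    """
    rng = random.Random(seed)
    observed = len(fused_pairs & cancer_pairs)

    exceed = 0
    for _ in range(n_perm):
        # Draw as many random pairs as there are evolutionarily fused pairs.
        random_pairs = {frozenset(rng.sample(all_genes, 2))
                        for _ in range(len(fused_pairs))}
        if len(random_pairs & cancer_pairs) >= observed:
            exceed += 1

    # One-sided p-value with the usual +1 correction.
    return (exceed + 1) / (n_perm + 1)
```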


2021 · Vol 15 · Author(s): Shirin Dora, Sander M. Bohte, Cyriel M. A. Pennartz

Predictive coding provides a computational paradigm for modeling perceptual processing as the construction of representations accounting for causes of sensory inputs. Here, we developed a scalable, deep network architecture for predictive coding that is trained using a gated Hebbian learning rule and mimics the feedforward and feedback connectivity of the cortex. After training on image datasets, the models formed latent representations in higher areas that allowed reconstruction of the original images. We analyzed low- and high-level properties such as orientation selectivity, object selectivity and sparseness of neuronal populations in the model. As reported experimentally, image selectivity increased systematically across ascending areas in the model hierarchy. Depending on the strength of regularization factors, sparseness also increased from lower to higher areas. The results suggest a rationale as to why experimental results on sparseness across the cortical hierarchy have been inconsistent. Finally, representations for different object classes became more distinguishable from lower to higher areas. Thus, deep neural networks trained using a gated Hebbian formulation of predictive coding can reproduce several properties associated with neuronal responses along the visual cortical hierarchy.
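A minimal sketch of a gated Hebbian update in a single predictive-coding layer, assuming a simple linear generative model; this is not the authors' exact architecture, and the `gate` parameter merely stands in for whatever signal modulates plasticity. All names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy predictive-coding layer: a higher area holds a latent representation r
# and predicts lower-area activity x through generative weights W.
n_input, n_latent = 64, 16
W = rng.normal(scale=0.1, size=(n_input, n_latent))

def present_stimulus(x, W, n_steps=50, lr_r=0.05, lr_w=0.01, gate=1.0):
    """Settle the latent estimate r, then apply a gated Hebbian weight update.

    `gate` (0..1) scales plasticity; it is an illustrative stand-in for the
    gating signal of a gated Hebbian rule.
    """
    r = np.zeros(n_latent)
    for _ in range(n_steps):
        error = x - W @ r                               # prediction error in the lower area
        r = np.maximum(r + lr_r * (W.T @ error), 0.0)   # feedback drives the latent estimate
    error = x - W @ r
    W = W + gate * lr_w * np.outer(error, r)            # Hebbian: error x latent activity
    return r, W

x = rng.random(n_input)
r, W = present_stimulus(x, W)
```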


Author(s): Thomas Boraud

This chapter reviews the general principles that are necessary for a neural system to make decisions. A glance at the literature shows that the simplest system able to produce an imbalance between two populations of neurons receiving the same activation consists of two interconnected populations of inhibitory neurons. These two populations exert lateral inhibition on each other. For a differential response to emerge, noise is necessary; synaptic noise is considered the main source of noise in the nervous system. The chapter then goes on to look at positive feedback. It also studies learning processes in the nervous system and explores neural plasticity rules, particularly the Hebbian learning rule.
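A toy rate model illustrates the principle: two units receive the same drive, inhibit each other, and independent noise breaks the symmetry so that one of them ends up dominating. All parameter values below are illustrative, not taken from the chapter.

```python
import numpy as np

rng = np.random.default_rng(1)

def winner_take_all(drive=1.0, w_inh=2.0, noise=0.2, dt=1e-3, t_max=5.0):
    """Two rate units with identical drive and mutual inhibition;
    independent noise decides which one wins."""
    a = b = 0.0
    for _ in range(int(t_max / dt)):
        na, nb = noise * rng.normal(size=2)
        a += dt * (-a + max(drive - w_inh * b + na, 0.0))
        b += dt * (-b + max(drive - w_inh * a + nb, 0.0))
    return a, b

a, b = winner_take_all()
print("winner:", "A" if a > b else "B", round(a, 3), round(b, 3))
```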


2019 · Vol 6 (4) · pp. 181098 · Author(s): Le Zhao, Jie Xu, Xiantao Shang, Xue Li, Qiang Li, ...

Non-volatile memristors are promising for future hardware-based neurocomputing applications because they are capable of emulating biological synaptic functions. Various material strategies have been studied in pursuit of better device performance, such as lower energy cost and better biological plausibility. In this work, we present a novel design for a non-volatile memristor based on a CoO/Nb:SrTiO3 heterojunction. We found that the memristor intrinsically exhibits resistive switching behaviour, which can be ascribed to the migration of oxygen vacancies and to charge trapping and detrapping at the heterojunction interface. The carrier trapping/detrapping level can be finely adjusted by regulating the voltage amplitude, so gradual conductance modulation can be realized with appropriate voltage pulse stimulation. Spike-timing-dependent plasticity, an important Hebbian learning rule, has also been implemented in the device. Our results indicate the possibility of achieving artificial synapses with CoO/Nb:SrTiO3 heterojunctions. Compared with filamentary-type synaptic devices, our device has the potential to reduce energy consumption, enable large-scale neuromorphic systems and operate more reliably, since no structural distortion occurs.
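For reference, the standard pair-based form of spike-timing-dependent plasticity looks as follows; the amplitudes and time constants are textbook illustrative values, not measurements from this device.

```python
import numpy as np

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for a pre/post spike pair, dt_ms = t_post - t_pre (ms).

    Pre-before-post (dt_ms > 0) potentiates, post-before-pre depresses,
    each with an exponentially decaying window.
    """
    if dt_ms > 0:
        return a_plus * np.exp(-dt_ms / tau_plus)
    return -a_minus * np.exp(dt_ms / tau_minus)

# In a memristive synapse, this weight change would be written as a conductance
# change by shaping the amplitude/width of the programming voltage pulses.
for dt_ms in (-40, -10, 10, 40):
    print(dt_ms, round(stdp_dw(dt_ms), 5))
```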


2019 · pp. 175-182 · Author(s): Snehashish Chakraverty, Deepti Moyi Sahoo, Nisha Rani Mahato

2017 · Vol 7 (4) · pp. 257-264 · Author(s): Toshifumi Minemoto, Teijiro Isokawa, Haruhiko Nishimura, Nobuyuki Matsui

The Hebbian learning rule is well known as a memory-storage scheme for associative memory models. This scheme is simple and fast; however, its performance degrades when the memory patterns are not orthogonal to each other. Pseudo-orthogonalization is a decorrelating method for memory patterns that uses XNOR masking between the memory patterns and randomly generated patterns. By combining this method with the Hebbian learning rule, the storage capacity of associative memories for non-orthogonal patterns is improved without high computational cost. The memory patterns can also be retrieved by a simulated annealing method using an external stimulus pattern. By utilizing complex numbers and quaternions, the pseudo-orthogonalization can be extended to complex-valued and quaternionic Hopfield neural networks. In this paper, the extended pseudo-orthogonalization methods for associative memories based on complex numbers and quaternions are examined from the viewpoint of correlations in the memory patterns. We show that the method has stable recall performance on highly correlated memory patterns compared to the conventional real-valued method.
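A minimal sketch of the real-valued baseline, assuming bipolar (±1) patterns, for which XNOR masking reduces to elementwise multiplication with a random ±1 mask. Retrieval here uses plain synchronous updates rather than the simulated annealing described in the paper, and the complex-valued and quaternionic extensions are omitted; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def pseudo_orthogonalize(patterns, flip_prob=0.3, seed=0):
    """Decorrelate bipolar (+/-1) patterns by XNOR-masking with random patterns.

    For bipolar patterns, XNOR with a mask is elementwise multiplication;
    flip_prob sets the fraction of positions the mask inverts.
    """
    r = np.random.default_rng(seed)
    masks = np.where(r.random(patterns.shape) < flip_prob, -1, 1)
    return patterns * masks, masks

def hebbian_weights(patterns):
    """Standard Hebbian (outer-product) storage for a Hopfield network."""
    n_units = patterns.shape[1]
    W = patterns.T @ patterns / n_units
    np.fill_diagonal(W, 0.0)
    return W

# Correlated memories: noisy copies of a single base pattern.
n_units = 100
base = np.where(rng.random(n_units) < 0.5, -1, 1)
memories = np.array([base * np.where(rng.random(n_units) < 0.1, -1, 1)
                     for _ in range(5)])

decorrelated, masks = pseudo_orthogonalize(memories)
W = hebbian_weights(decorrelated)

# Recall from a noisy cue of the first (masked) memory via synchronous updates.
state = decorrelated[0] * np.where(rng.random(n_units) < 0.15, -1, 1)
for _ in range(10):
    state = np.where(W @ state >= 0, 1, -1)
print("overlap with stored pattern:", (state == decorrelated[0]).mean())
```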

