Genes used together are more likely to be fused together in evolution by mutational mechanisms: A bioinformatic test of the used-fused hypothesis

2021 ◽  
Author(s):  
Evgeni Bolotin ◽  
Daniel Melamed ◽  
Adi Livnat

Cases of parallel or recurrent gene fusions, whether in evolution or in cancer and genetic disease, are difficult to explain, as they require the same or similar breakpoints to recur multiple times. The used-together-fused-together hypothesis holds that genes that are used together repeatedly and persistently in a certain context are more likely than otherwise to undergo a fusion mutation in the course of evolution, reminiscent of the Hebbian learning rule in which neurons that fire together wire together. This mutational hypothesis offers to explain both evolutionary parallelism and the recurrence of gene fusions in disease under one umbrella. Here, we test this hypothesis using bioinformatic data. Various measures of gene interaction, including co-expression, co-localization, same-TAD presence and semantic similarity of GO terms, show that human genes whose homologs are fused in one or more other organisms are significantly more likely to interact with one another than random genes, controlling for genomic distance between genes. In addition, we find a statistically significant overlap between pairs of genes that fused in the course of evolution in non-human species and pairs that undergo fusion in human cancers. These results support the used-together-fused-together hypothesis over several alternatives, including the hypothesis that all gene pairs can fuse by random mutation but, among pairs that have thus fused, those that have interacted previously are more likely to be favored by selection. Multiple consequences are discussed, including the relevance of mutational mechanisms to exon shuffling, to the distribution of fitness effects of mutation and to parallelism.
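The significance of an overlap between two sets of gene pairs, such as the evolutionary-fusion and cancer-fusion pairs above, is commonly assessed with a one-sided hypergeometric test. A minimal sketch follows; all counts are hypothetical placeholders, not the paper's actual data or its specific statistical procedure.

```python
from math import comb

def overlap_p_value(n_universe, n_a, n_b, k_obs):
    """One-sided hypergeometric tail: probability of seeing at least
    k_obs shared pairs if set B were drawn at random from the universe."""
    tail = sum(comb(n_a, k) * comb(n_universe - n_a, n_b - k)
               for k in range(k_obs, min(n_a, n_b) + 1))
    return tail / comb(n_universe, n_b)

# Hypothetical numbers: 10,000 candidate gene pairs, 120 fused in
# evolution, 200 fused in cancer, 12 in common (~2.4 expected by chance).
p = overlap_p_value(10_000, 120, 200, 12)
```

An observed overlap several times the chance expectation yields a very small tail probability, which is the sense in which such an overlap is "statistically significant".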

2019 ◽  
Vol 6 (4) ◽  
pp. 181098 ◽  
Author(s):  
Le Zhao ◽  
Jie Xu ◽  
Xiantao Shang ◽  
Xue Li ◽  
Qiang Li ◽  
...  

Non-volatile memristors are promising for future hardware-based neurocomputing applications because they are capable of emulating biological synaptic functions. Various material strategies have been studied in pursuit of better device performance, such as lower energy cost and better biological plausibility. In this work, we present a novel design for a non-volatile memristor based on a CoO/Nb:SrTiO3 heterojunction. We found that the memristor intrinsically exhibits resistive switching behaviour, which can be ascribed to the migration of oxygen vacancies and to charge trapping and detrapping at the heterojunction interface. The carrier trapping/detrapping level can be finely adjusted by regulating voltage amplitudes, and gradual conductance modulation can therefore be realized by applying appropriate voltage pulse stimulations. Spike-timing-dependent plasticity, an important Hebbian learning rule, has also been implemented in the device. Our results indicate the possibility of achieving artificial synapses with a CoO/Nb:SrTiO3 heterojunction. Compared with filamentary-type synaptic devices, our device has the potential to reduce energy consumption, enable large-scale neuromorphic systems and work more reliably, since no structural distortion occurs.
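The spike-timing-dependent plasticity the device implements is conventionally summarized by an exponential pair-based window: the weight change depends on the time difference between pre- and postsynaptic spikes. A minimal sketch, with amplitudes and time constant as hypothetical round numbers rather than the device's measured parameters:

```python
from math import exp

def stdp_dw(dt_ms, a_plus=0.1, a_minus=0.12, tau_ms=20.0):
    """Pair-based STDP window: potentiate when the presynaptic spike
    precedes the postsynaptic spike (dt > 0), depress otherwise."""
    if dt_ms > 0:
        return a_plus * exp(-dt_ms / tau_ms)    # LTP branch
    return -a_minus * exp(dt_ms / tau_ms)       # LTD branch

ltp = stdp_dw(10.0)    # pre fires 10 ms before post -> weight increase
ltd = stdp_dw(-10.0)   # pre fires 10 ms after post  -> weight decrease
```

In the device, the role of `dt_ms` is played by the relative timing of voltage pulses applied to the two electrodes, and the conductance change takes the place of the weight change.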


1989 ◽  
Vol 03 (07) ◽  
pp. 555-560 ◽  
Author(s):  
M.V. TSODYKS

We consider the Hopfield model with the simplest form of the Hebbian learning rule, in which only simultaneous activity of pre- and post-synaptic neurons leads to synaptic modification. An extra inhibition proportional to the full network activity is needed. Both symmetric non-diluted and asymmetric diluted networks are considered. The model performs well at extremely low levels of activity, p < K^(-1/2), where K is the mean number of synapses per neuron.
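The rule described above can be sketched directly: with sparse 0/1 patterns, a synapse is strengthened only where pre- and post-synaptic neurons are simultaneously active, and a global inhibition term proportional to the instantaneous network activity is subtracted. A minimal sketch with hypothetical sizes and thresholds (here the network is fully connected, so K = N - 1 and p = 0.05 < K^(-1/2) ≈ 0.071):

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, p = 200, 10, 0.05                        # neurons, patterns, activity
xi = (rng.random((P, N)) < p).astype(float)    # sparse 0/1 patterns

# Simplest Hebbian rule: strengthen a synapse only when pre- and
# post-synaptic neurons are simultaneously active in a stored pattern.
J = xi.T @ xi
np.fill_diagonal(J, 0.0)

def update(s, lam=0.5, theta=0.5):
    """Parallel dynamics with extra inhibition proportional to the
    full network activity (the lam * s.sum() term)."""
    h = J @ s - lam * s.sum() - theta
    return (h > 0).astype(float)

s = xi[0].copy()            # start at a stored pattern
for _ in range(5):
    s = update(s)
overlap = float(np.mean(s == xi[0]))           # fraction of matching neurons
```

At this activity level a stored pattern is (with high probability) a fixed point of the dynamics: the global inhibition cancels the uniform excitation, leaving the pattern-specific signal.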


1996 ◽  
Vol 8 (3) ◽  
pp. 545-566 ◽  
Author(s):  
Christopher W. Lee ◽  
Bruno A. Olshausen

An intrinsic limitation of linear, Hebbian networks is that they are capable of learning only from the linear pairwise correlations within an input stream. To explore what higher forms of structure could be learned with a nonlinear Hebbian network, we constructed a model network containing a simple form of nonlinearity and we applied it to the problem of learning to detect the disparities present in random-dot stereograms. The network consists of three layers, with nonlinear sigmoidal activation functions in the second-layer units. The nonlinearities allow the second layer to transform the pixel-based representation in the input layer into a new representation based on coupled pairs of left-right inputs. The third layer of the network then clusters patterns occurring on the second-layer outputs according to their disparity via a standard competitive learning rule. Analysis of the network dynamics shows that the second-layer units' nonlinearities interact with the Hebbian learning rule to expand the region over which pairs of left-right inputs are stable. The learning rule is neurobiologically inspired and plausible, and the model may shed light on how the nervous system learns to use coincidence detection in general.
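The third layer's "standard competitive learning rule" mentioned above has a conventional winner-take-all form: the unit whose weight vector best matches the input wins and moves its weights toward that input. A minimal sketch with hypothetical dimensions (not the paper's network sizes):

```python
import numpy as np

rng = np.random.default_rng(1)

def competitive_update(W, x, eta=0.1):
    """Standard competitive learning: the best-matching unit wins and
    moves its weight vector a fraction eta toward the input."""
    winner = int(np.argmax(W @ x))
    W[winner] += eta * (x - W[winner])
    return winner

# Hypothetical sizes: 2 disparity-cluster units over a 16-dimensional
# vector of second-layer (coincidence-detector) activities.
W = rng.random((2, 16))
x = rng.random(16)
d_before = [float(np.linalg.norm(W[i] - x)) for i in range(2)]
winner = competitive_update(W, x)
d_after = float(np.linalg.norm(W[winner] - x))
```

Over many presentations, each unit's weight vector drifts toward the centroid of the inputs it wins, which is how the third layer comes to cluster second-layer patterns by disparity.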


1991 ◽  
Vol 3 (2) ◽  
pp. 201-212 ◽  
Author(s):  
Peter J. B. Hancock ◽  
Leslie S. Smith ◽  
William A. Phillips

We show that a form of synaptic plasticity recently discovered in slices of the rat visual cortex (Artola et al. 1990) can support an error-correcting learning rule. The rule increases weights when both pre- and postsynaptic units are highly active, and decreases them when presynaptic activity is high and postsynaptic activation is less than the threshold for weight increment but greater than a lower threshold. We show that this rule corrects false positive outputs in feedforward associative memory, that in an appropriate opponent-unit architecture it corrects misses, and that it performs better than the optimal Hebbian learning rule reported by Willshaw and Dayan (1990).
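The two-threshold structure of the rule can be sketched directly from the description above: weight increment above an upper postsynaptic threshold, decrement in the window between the two thresholds, and no change below the lower threshold. The threshold values and learning rate here are hypothetical placeholders, not the paper's parameters:

```python
def two_threshold_dw(pre, post, theta_plus=0.8, theta_minus=0.4, eta=0.05):
    """Two-threshold plasticity in the style of Artola et al. (1990):
    with active presynaptic input, potentiate above theta_plus,
    depress between theta_minus and theta_plus, do nothing below."""
    if pre <= 0:
        return 0.0                  # no presynaptic activity: no change
    if post > theta_plus:
        return eta                  # weight increment (LTP)
    if post > theta_minus:
        return -eta                 # weight decrement (LTD)
    return 0.0                      # below the lower threshold

ltp = two_threshold_dw(1.0, 0.9)    # strong post activation -> increase
ltd = two_threshold_dw(1.0, 0.6)    # intermediate activation -> decrease
```

The error-correcting character follows from the LTD window: a false-positive output unit sits in the intermediate activation range, so its active input weights are selectively weakened.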


2021 ◽  
Vol 15 ◽  
Author(s):  
Shirin Dora ◽  
Sander M. Bohte ◽  
Cyriel M. A. Pennartz

Predictive coding provides a computational paradigm for modeling perceptual processing as the construction of representations accounting for causes of sensory inputs. Here, we developed a scalable, deep network architecture for predictive coding that is trained using a gated Hebbian learning rule and mimics the feedforward and feedback connectivity of the cortex. After training on image datasets, the models formed latent representations in higher areas that allowed reconstruction of the original images. We analyzed low- and high-level properties such as orientation selectivity, object selectivity and sparseness of neuronal populations in the model. As reported experimentally, image selectivity increased systematically across ascending areas in the model hierarchy. Depending on the strength of regularization factors, sparseness also increased from lower to higher areas. The results suggest a rationale as to why experimental results on sparseness across the cortical hierarchy have been inconsistent. Finally, representations for different object classes became more distinguishable from lower to higher areas. Thus, deep neural networks trained using a gated Hebbian formulation of predictive coding can reproduce several properties associated with neuronal responses along the visual cortical hierarchy.
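The core loop of predictive coding, and the sense in which the weight update is a gated (error-modulated) Hebbian rule, can be sketched in a few lines. This is a generic single-layer illustration under assumed dimensions and learning rates, not the paper's deep architecture:

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_lat = 16, 4
W = 0.1 * rng.standard_normal((n_lat, n_in))   # generative (feedback) weights

def pc_step(x, r, W, eta_r=0.1, eta_w=0.01):
    """One predictive-coding step: higher-area activity r predicts the
    input via W; the prediction error both drives inference on r and
    gates a Hebbian-style update of W (product of r and the error e)."""
    e = x - W.T @ r                  # prediction error in the lower area
    r = r + eta_r * (W @ e)          # inference: adjust r to reduce error
    W = W + eta_w * np.outer(r, e)   # learning: Hebbian in r and e
    return r, W, e

x = rng.random(n_in)                 # a fixed "sensory" input
r = np.zeros(n_lat)
errors = []
for _ in range(200):
    r, W, e = pc_step(x, r, W)
    errors.append(float(np.linalg.norm(e)))
```

Both updates descend the same reconstruction-error objective, so the error norm shrinks as the latent representation comes to account for the input, mirroring the reconstruction property reported for the trained models.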


2005 ◽  
Vol 151 (3) ◽  
pp. 50-60 ◽
Author(s):  
Makoto Motoki ◽  
Tomoki Hamagami ◽  
Seiichi Koakutsu ◽  
Hironori Hirata
