Influence of Autapses on Synchronization in Neural Networks With Chemical Synapses

2020, Vol 14. Author(s): Paulo R. Protachevicz, Kelly C. Iarosz, Iberê L. Caldas, Chris G. Antonopoulos, Antonio M. Batista, ...

A great deal of research has been devoted to the investigation of neural dynamics in various network topologies. However, only a few studies have focused on the influence of autapses, synapses from a neuron onto itself via closed loops, on neural synchronization. Here, we build a random network of adaptive exponential integrate-and-fire neurons coupled with chemical synapses and equipped with autapses to study the effect of the latter on synchronous behavior. We consider time delay in the conductance of the pre-synaptic neuron for excitatory and inhibitory connections. Interestingly, in neural networks consisting of both excitatory and inhibitory neurons, we find that synchronous behavior depends on the synapse type. Our results provide evidence of the synchronized and desynchronized activities that emerge in random neural networks with chemical excitatory and inhibitory synapses where neurons are equipped with autapses.
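As a rough illustration of the kind of model described above, the sketch below simulates a single adaptive exponential integrate-and-fire neuron with a delayed excitatory chemical autapse (the study itself uses a random network of such neurons). All parameter values, the 200 pA constant drive, and the single-exponential conductance are illustrative assumptions, not the authors' setup.

```python
import numpy as np

# Illustrative AdEx parameters (assumptions, not the paper's values)
C, g_L, E_L = 200.0, 10.0, -70.0       # capacitance (pF), leak (nS), rest (mV)
V_T, Delta_T = -50.0, 2.0              # threshold and slope factor (mV)
a, b, tau_w = 2.0, 60.0, 300.0         # adaptation: nS, pA, ms
V_peak, V_reset = 20.0, -58.0          # spike cut-off and reset (mV)
E_syn, tau_s, g_aut = 0.0, 2.7, 0.5    # autapse: reversal (mV), decay (ms), weight (nS)
delay, dt, T = 5.0, 0.1, 1000.0        # autaptic delay, time step, duration (ms)

steps = int(T / dt)
d_steps = int(delay / dt)
V, w, g = E_L, 0.0, 0.0
spikes = []                            # spike times (ms)
pending = np.zeros(steps + d_steps)    # delayed autaptic spike arrivals

for i in range(steps):
    I_syn = g * (E_syn - V)            # chemical (autaptic) synaptic current
    dV = (-g_L * (V - E_L) + g_L * Delta_T * np.exp((V - V_T) / Delta_T)
          - w + I_syn + 200.0) / C     # 200 pA constant drive (assumed)
    dw = (a * (V - E_L) - w) / tau_w
    V, w, g = V + dt * dV, w + dt * dw, g - dt * g / tau_s
    g += g_aut * pending[i]            # delayed conductance kick from the autapse
    if V >= V_peak:                    # spike: reset, adapt, schedule delayed arrival
        spikes.append(i * dt)
        V, w = V_reset, w + b
        pending[i + d_steps] += 1.0

print(f"{len(spikes)} spikes in {T:.0f} ms")
```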

2010, Vol 22 (5), pp. 1383-1398. Author(s): Takashi Kanamaru, Kazuyuki Aihara

The roles of inhibitory neurons in synchronous firing are examined in a network of excitatory and inhibitory neurons with Watts and Strogatz's rewiring. By examining the persistence of the synchronous firing observed in the random network, it was found that there is a critical rewiring probability at which a transition between the synchronous and asynchronous states takes place, and that the dynamics of the inhibitory neurons play an important role in determining this probability.
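For readers unfamiliar with the setup, the sketch below builds only the network substrate: a Watts–Strogatz graph with a tunable rewiring probability and a mixed excitatory/inhibitory population encoded in a signed coupling matrix. The sizes, rewiring probability, and inhibitory fraction are illustrative assumptions, not the values used by Kanamaru and Aihara.

```python
import networkx as nx
import numpy as np

# Watts-Strogatz graph plus an excitatory/inhibitory labelling (illustrative values)
N, k, p_rewire = 200, 8, 0.1           # neurons, nearest neighbours, rewiring probability
frac_inhibitory = 0.2                   # assumed fraction of inhibitory neurons

G = nx.watts_strogatz_graph(N, k, p_rewire, seed=0)
rng = np.random.default_rng(0)
inhibitory = rng.random(N) < frac_inhibitory

# Signed coupling matrix: column i carries +1 (excitatory) or -1 (inhibitory)
# depending on the presynaptic neuron's type.
W = np.zeros((N, N))
for i, j in G.edges():
    W[j, i] = -1.0 if inhibitory[i] else 1.0
    W[i, j] = -1.0 if inhibitory[j] else 1.0

print(f"mean degree: {2 * G.number_of_edges() / N:.1f}, "
      f"inhibitory neurons: {int(inhibitory.sum())}")
```

Sweeping p_rewire from 0 toward 1 moves such a graph from a regular lattice toward a random network, which is the axis along which the synchronous-to-asynchronous transition is reported.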


Electronics, 2021, Vol 10 (3), pp. 230. Author(s): Jaechan Cho, Yongchul Jung, Seongjoo Lee, Yunho Jung

Binary neural networks (BNNs) have attracted significant interest for the implementation of deep neural networks (DNNs) on resource-constrained edge devices, and various BNN accelerator architectures have been proposed to achieve higher efficiency. BNN accelerators can be divided into two categories: streaming and layer accelerators. Although streaming accelerators designed for a specific BNN network topology provide high throughput, they are infeasible for various sensor applications in edge AI because of their complexity and inflexibility. In contrast, layer accelerators with reasonable resources can support various network topologies, but they operate with the same parallelism for all the layers of the BNN, which degrades throughput performance at certain layers. To overcome this problem, we propose a BNN accelerator with adaptive parallelism that offers high throughput performance in all layers. The proposed accelerator analyzes target layer parameters and operates with optimal parallelism using reasonable resources. In addition, this architecture is able to fully compute all types of BNN layers thanks to its reconfigurability, and it can achieve a higher area–speed efficiency than existing accelerators. In performance evaluation using state-of-the-art BNN topologies, the designed BNN accelerator achieved an area–speed efficiency 9.69 times higher than previous FPGA implementations and 24% higher than existing VLSI implementations for BNNs.
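The computational core that BNN accelerators parallelize is the XNOR-popcount dot product. The sketch below is a plain NumPy software reference for that kernel, with sign bits packed into bytes; the vector length and packing scheme are illustrative assumptions and the code is not a model of the proposed hardware.

```python
import numpy as np

# Software reference for the XNOR-popcount binary dot product (illustrative)
def binarize(x):
    """Map real values to signs and pack them as bits (1 for +1, 0 for -1)."""
    return np.packbits((x >= 0).astype(np.uint8))

def xnor_popcount_dot(a_bits, w_bits, n):
    """Binary dot product over {-1, +1}: (#sign matches) - (#mismatches)."""
    agree = np.unpackbits(~(a_bits ^ w_bits))[:n]   # 1 where the signs agree
    matches = int(agree.sum())
    return 2 * matches - n

rng = np.random.default_rng(1)
n = 256
a, w = rng.standard_normal(n), rng.standard_normal(n)
binary_dot = xnor_popcount_dot(binarize(a), binarize(w), n)
reference = int(np.sign(a) @ np.sign(w))             # full-precision check
print(binary_dot, reference)                          # the two values agree
```

In hardware, the unpack-and-sum step is replaced by popcount units, and the adaptive parallelism described in the abstract decides how many such dot products run concurrently for each layer's shape.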


2007, Vol 19 (12), pp. 3226-3238. Author(s): Arnaud Tonnelier, Hana Belmabrouk, Dominique Martinez

Event-driven strategies have been used to simulate spiking neural networks exactly. Previous work is limited to linear integrate-and-fire neurons. In this note, we extend event-driven schemes to a class of nonlinear integrate-and-fire models. Results are presented for the quadratic integrate-and-fire model with instantaneous or exponential synaptic currents. Extensions to conductance-based currents and exponential integrate-and-fire neurons are discussed.
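A minimal sketch of the event-driven idea for quadratic integrate-and-fire neurons with instantaneous (delta) synapses, not the authors' scheme: between spikes the membrane equation dV/dt = V² + I (with I > 0) is solved in closed form, so the simulation jumps directly from spike to spike with no time stepping. The all-to-all coupling, the finite reset value, and all parameters are illustrative assumptions.

```python
import numpy as np

# Event-driven QIF sketch with delta synapses (illustrative parameters)
N, I, w_syn, T = 50, 0.5, 0.02, 200.0
rng = np.random.default_rng(2)
V = rng.uniform(-1.0, 0.5, N)          # membrane potentials
s = np.sqrt(I)
t = 0.0
spike_times = []

def time_to_spike(v):
    """Time for V to blow up to +infinity from v under dV/dt = V^2 + I."""
    return (np.pi / 2 - np.arctan(v / s)) / s

def evolve(v, dt):
    """Closed-form solution of dV/dt = V^2 + I after an interval dt."""
    phase = np.clip(s * dt + np.arctan(v / s), None, np.pi / 2 - 1e-9)
    return s * np.tan(phase)

while True:
    dts = time_to_spike(V)
    k = int(np.argmin(dts))            # next neuron to reach threshold (infinity)
    t += dts[k]
    if t >= T:
        break
    V = evolve(V, dts[k])              # advance every neuron analytically
    V[k] = -10.0                       # reset the spiking neuron (assumed value)
    V += w_syn                         # instantaneous excitatory kick...
    V[k] -= w_syn                      # ...to every neuron except the one that spiked
    spike_times.append((t, k))

print(f"{len(spike_times)} spikes up to t = {T}")
```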


Complexity, 2018, Vol 2018, pp. 1-14. Author(s): Xiuwen Fu, Yongsheng Yang, Haiqing Yao

Previous research on the invulnerability of wireless sensor networks (WSNs) has mainly focused on static topologies, ignoring the cascading process triggered by dynamic changes of load. Therefore, given the realistic features of WSNs, in this paper we investigate the invulnerability of WSNs with respect to cascading failures based on the coupled map lattice (CML). The invulnerability and the cascading process of four types of network topologies (i.e., random network, small-world network, homogeneous scale-free network, and heterogeneous scale-free network) under various attack schemes (i.e., random attack, max-degree attack, and max-status attack) are investigated, respectively. The simulation results demonstrate that increasing the interference R and the coupling coefficient ε raises the risk of cascading failures. Cascading threshold values Rc and εc exist such that cascading failures spread to the entire network when R > Rc or ε > εc. When facing a random attack or a max-status attack, networks with higher heterogeneity tend to have stronger invulnerability towards cascading failures. Conversely, when facing a max-degree attack, networks with higher uniformity tend to perform better. We also show that the spreading speed of cascading failures is inversely proportional to the average path length of the network and that increasing the average degree k improves the network invulnerability.
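The sketch below illustrates a CML-style cascading-failure simulation on an Erdős–Rényi random graph: node states follow coupled logistic maps, an attacked node receives a perturbation R, and any node whose state reaches 1 fails, dragging its neighbours through the coupling term before being removed. The network size, coupling coefficient, clamp, and attack target are illustrative assumptions rather than the paper's settings, with R chosen large enough here to trigger a global cascade.

```python
import networkx as nx
import numpy as np

# CML cascading-failure sketch on a random graph (illustrative parameters)
N, k_avg = 200, 6
eps, R, mu = 0.3, 10.0, 4.0             # coupling, interference, logistic parameter
f = lambda x: mu * x * (1.0 - x)

G = nx.gnm_random_graph(N, N * k_avg // 2, seed=3)
rng = np.random.default_rng(3)
x = rng.uniform(0.0, 1.0, N)
alive = np.ones(N, dtype=bool)

x[0] += R                               # random attack: perturb node 0
for step in range(100):
    newly_failed = alive & (x >= 1.0)
    if not newly_failed.any():
        break                           # cascade has settled
    x_new = x.copy()
    for i in range(N):
        if not alive[i] or newly_failed[i]:
            continue
        nbrs = [j for j in G[i] if alive[j]]        # includes nodes failing now
        coupling = np.mean([f(x[j]) for j in nbrs]) if nbrs else 0.0
        # Clamp to avoid numerical overflow in long cascades
        x_new[i] = min(abs((1.0 - eps) * f(x[i]) + eps * coupling), 1e6)
    alive &= ~newly_failed
    x = x_new

print(f"surviving nodes after the cascade: {int(alive.sum())} / {N}")
```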


Author(s): M. Rossmann, C. Burwick, A. Bühlmeier, G. Manteuffel, K. Goser

2021, Vol 1 (1). Author(s): Alexander P. Christensen

The nature of associations between variables is important for constructing theory about psychological phenomena. In the last decade, this topic has received renewed interest with the introduction of psychometric network models. In psychology, network models are often contrasted with latent variable (e.g., factor) models. Recent research has shown that differences between the two tend to be more substantive than statistical. A recently developed algorithm, the Loadings Comparison Test (LCT), predicts whether data were generated from a factor or a small-world network model. A significant limitation of the current LCT implementation is that it is based on heuristics derived from descriptive statistics. In the present study, we used artificial neural networks to replace these heuristics and develop a more robust and generalizable algorithm. We performed a Monte Carlo simulation study that compared neural networks to the original LCT algorithm as well as to logistic regression models trained on the same data. We found that the neural networks performed as well as or better than both methods at predicting whether data were generated from a factor, small-world network, or random network model. Although the neural networks were trained only on small-world networks, we show that they can reliably predict the data-generating model of random networks, demonstrating generalizability beyond the training data. We echo the call for more formal theories about the relations between variables and discuss the role of the LCT in this process.
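The sketch below illustrates only the classification step: a small feed-forward network that maps summary features to a data-generating-model label (factor, small-world, or random). The randomly generated features are placeholders standing in for the loading-based statistics actually used by the LCT; they are not the LCT's features, and the hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Placeholder features: 4 hypothetical summary statistics with class-dependent means
rng = np.random.default_rng(4)
n_per_class = 500
labels = ["factor", "small-world", "random"]

X, y = [], []
for c, label in enumerate(labels):
    X.append(rng.normal(loc=c, scale=1.0, size=(n_per_class, 4)))
    y += [label] * n_per_class
X = np.vstack(X)

# Small feed-forward classifier trained to recover the data-generating model
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=1000, random_state=0)
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```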


2020. Author(s): Laércio Oliveira Junior, Florian Stelzer, Liang Zhao

Echo State Networks (ESNs) are recurrent neural networks that map an input signal to a high-dimensional dynamical system, called the reservoir, and possess adaptive output weights. The output weights are trained such that the ESN's output signal fits the desired target signal. Classical reservoirs are sparse, randomly connected networks. In this article, we investigate the effect of different network topologies on the performance of ESNs. Specifically, we use two types of networks to construct clustered reservoirs: the clustered Erdős–Rényi and the clustered Barabási–Albert network models. Moreover, we compare the performance of these clustered ESNs (CESNs) and classical ESNs with random reservoirs by applying them to two different tasks: frequency filtering and the reconstruction of chaotic signals. By using a clustered topology, one can achieve a significant increase in the ESN's performance.
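The sketch below builds a toy ESN whose reservoir has a clustered Erdős–Rényi block structure (dense within clusters, sparse between) and trains a ridge-regression readout on a simple noisy-sine filtering task. All hyperparameters and the task itself are illustrative assumptions, not the configurations evaluated in the article.

```python
import numpy as np

# Clustered Erdős-Rényi reservoir and ridge readout (illustrative values)
rng = np.random.default_rng(5)
N, n_clusters = 300, 5
p_in, p_out = 0.2, 0.01                # dense within clusters, sparse between
spectral_radius, leak, ridge = 0.9, 0.3, 1e-6

# Block-structured connection probabilities define the clustered reservoir
cluster = np.repeat(np.arange(n_clusters), N // n_clusters)
prob = np.where(cluster[:, None] == cluster[None, :], p_in, p_out)
W = (rng.random((N, N)) < prob) * rng.uniform(-1, 1, (N, N))
W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, N)

# Toy filtering task: recover a clean sine from a noisy input
T = 2000
t = np.arange(T)
u = np.sin(0.02 * t) + 0.3 * rng.standard_normal(T)
target = np.sin(0.02 * t)

# Collect leaky reservoir states
x = np.zeros(N)
states = np.zeros((T, N))
for i in range(T):
    x = (1 - leak) * x + leak * np.tanh(W @ x + W_in * u[i])
    states[i] = x

# Ridge-regression readout (discard a washout period)
washout = 200
S, y = states[washout:], target[washout:]
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(N), S.T @ y)
print(f"train MSE: {np.mean((S @ W_out - y) ** 2):.4f}")
```

Swapping the block-probability matrix for a clustered Barabási–Albert construction changes only the reservoir-generation step; the state collection and readout training stay the same.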

