sparse connectivity
Recently Published Documents

TOTAL DOCUMENTS: 55 (FIVE YEARS: 17)
H-INDEX: 12 (FIVE YEARS: 2)

2021
Author(s): Sina Tootoonian, Andreas T Schaefer, Peter E Latham

Sensory processing is hard because the variables of interest are encoded in spike trains in a relatively complex way. A major goal in studies of sensory processing is to understand how the brain extracts those variables. Here we revisit a common encoding model in which variables are encoded linearly. Although there are typically more variables than neurons, this problem is still solvable because only a small number of variables appear at any one time (sparse prior). However, previous solutions require all-to-all connectivity, inconsistent with the sparse connectivity seen in the brain. Here we propose an algorithm that provably reaches the MAP (maximum a posteriori) inference solution, but does so using sparse connectivity. Our algorithm is inspired by the circuit of the mouse olfactory bulb, but our approach is general enough to apply to other modalities. In addition, it should be possible to extend it to nonlinear encoding models.
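To make the inference problem concrete, here is a minimal Python sketch of MAP inference for a linear encoding model under a sparse (Laplace) prior, solved with generic iterative soft thresholding (ISTA). It illustrates only the optimization problem, not the sparse-connectivity algorithm proposed in the paper; all variable names and parameter values are assumptions.

import numpy as np

def soft_threshold(v, t):
    # proximal operator of the L1 penalty
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def map_sparse_linear(y, A, lam=0.1, n_iters=500):
    # argmin_x 0.5*||y - A x||^2 + lam*||x||_1, i.e. MAP under a Laplace prior
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)           # gradient of the quadratic data term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# toy usage: more latent variables than neurons, only a few active at a time
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))         # 50 "neurons", 200 latent variables
x_true = np.zeros(200)
x_true[rng.choice(200, 5, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = map_sparse_linear(y, A, lam=0.05)

Note that this reference solver implicitly uses all-to-all connectivity (the full A.T @ A interactions); the paper's contribution is reaching the same MAP solution with sparse connectivity.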


2021
Author(s): Amarpreet Bamrah

Opportunistic networks (OppNets) are a subclass of delay-tolerant networks based on a communication paradigm that transmits messages by exploiting direct contacts among nodes, without the need for a predefined infrastructure. Typical characteristics of OppNets include high mobility, short radio range, intermittent links, unstable topology, and sparse connectivity. Routing in such networks is therefore a challenging task, since it relies on node cooperation. This thesis uses the concept of centrality to alleviate this task: central nodes are more likely than others to act as communication hubs that facilitate message forwarding. A recently proposed History-Based Prediction Routing (HBPR) protocol is redesigned around this concept, yielding a centrality-based HBPR (CHBPR) protocol. CHBPR is evaluated by simulation in the ONE simulator and shows superior performance compared with HBPR without centrality and with the Epidemic protocol with centrality.
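As a rough illustration of how centrality can drive forwarding decisions in an opportunistic network, the Python sketch below hands a message over on contact only when the encountered node has higher degree centrality (has seen more distinct contacts). It is a generic sketch under assumed names, not the CHBPR protocol itself.

class Node:
    def __init__(self, node_id):
        self.node_id = node_id
        self.contacts = set()      # distinct node ids encountered so far
        self.buffer = []           # messages carried, e.g. {"dst": ..., "payload": ...}

    def degree_centrality(self):
        return len(self.contacts)

    def on_contact(self, other):
        # record the encounter on both sides
        self.contacts.add(other.node_id)
        other.contacts.add(self.node_id)
        kept = []
        for msg in self.buffer:
            if msg["dst"] == other.node_id:
                other.buffer.append(msg)    # direct delivery to the destination
            elif other.degree_centrality() > self.degree_centrality():
                other.buffer.append(msg)    # hand over to the better-connected hub
            else:
                kept.append(msg)            # keep carrying the message
        self.buffer = kept

# toy usage
a, b = Node("a"), Node("b")
a.buffer.append({"dst": "c", "payload": "hello"})
b.contacts.update({"c", "d"})               # b is the more central node
a.on_contact(b)                             # the message migrates to b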


Author(s): Yue Qu, Chuanren Liu, Kai Zhang, Keli Xiao, Bo Jin, ...
Keyword(s):

Author(s): Bo Jin, Ke Cheng, Yue Qu, Liang Zhang, Keli Xiao, ...

2020
Vol 30 (20), pp. 4063-4070.e2
Author(s): Denis Burdakov, Mahesh M. Karnani

2020
Vol 117 (40), pp. 25066-25073
Author(s): Ori Maoz, Gašper Tkačik, Mohamad Saleh Esteki, Roozbeh Kiani, Elad Schneidman

The brain represents and reasons probabilistically about complex stimuli and motor actions using a noisy, spike-based neural code. A key building block for such neural computations, as well as the basis for supervised and unsupervised learning, is the ability to estimate the surprise or likelihood of incoming high-dimensional neural activity patterns. Despite progress in statistical modeling of neural responses and deep learning, current approaches either do not scale to large neural populations or cannot be implemented using biologically realistic mechanisms. Inspired by the sparse and random connectivity of real neuronal circuits, we present a model for neural codes that accurately estimates the likelihood of individual spiking patterns and has a straightforward, scalable, efficient, learnable, and realistic neural implementation. This model’s performance on simultaneously recorded spiking activity of >100 neurons in the monkey visual and prefrontal cortices is comparable with or better than that of state-of-the-art models. Importantly, the model can be learned using a small number of samples and using a local learning rule that utilizes noise intrinsic to neural circuits. Slower, structural changes in random connectivity, consistent with rewiring and pruning processes, further improve the efficiency and sparseness of the resulting neural representations. Our results merge insights from neuroanatomy, machine learning, and theoretical neuroscience to suggest random sparse connectivity as a key design principle for neuronal computation.
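The general flavor of such a model can be sketched in a few lines of Python: binary population spike words are passed through sparse, random linear projections, each projection feeds a threshold nonlinearity, and a learned weight per projection contributes to an energy whose negative serves as an unnormalized log-likelihood. This is a hedged illustration of the idea only, not the authors' implementation; the connectivity, thresholds, and weights below are placeholder assumptions (the weights would be fit to data).

import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_proj, fan_in = 100, 300, 5

# sparse random connectivity: each projection reads only a few neurons
A = np.zeros((n_proj, n_neurons))
for i in range(n_proj):
    A[i, rng.choice(n_neurons, fan_in, replace=False)] = 1.0
thresholds = rng.integers(1, fan_in + 1, size=n_proj)  # one threshold per projection
weights = np.zeros(n_proj)                             # would be learned from data

def features(x):
    # threshold features of a binary spike word x (length n_neurons)
    return (A @ x >= thresholds).astype(float)

def unnormalized_log_likelihood(x):
    # negative energy of pattern x; relative values estimate how surprising
    # one activity pattern is compared with another
    return -weights @ features(x)

# toy usage on a random spike word
x = rng.integers(0, 2, size=n_neurons)
print(unnormalized_log_likelihood(x))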


2020
Author(s): Denis Burdakov, Mahesh M. Karnani

The lateral hypothalamus (LH) contains neuronal populations that generate fundamental behavioural actions such as feeding, sleep, movement, attack, and evasion. Their activity is also correlated with various appetitive and consummatory behaviours as well as reward seeking. It is unknown how neural activity within and among these populations is coordinated. One hypothesis postulates that they communicate through local inhibitory and excitatory synapses, forming local microcircuits. We tested this hypothesis using quadruple whole-cell recordings and optogenetics to screen thousands of potential connections in brain slices. In contrast to the neocortex, we found near-zero connectivity within the LH. In line with this ultra-sparse intrinsic connectivity, we found that the LH does not generate local beta and gamma oscillations. This suggests that LH neurons integrate incoming input within individual neurons rather than through local network interactions, and that input from other brain structures is decisive for selecting active populations in the LH.

