Weak, Stochastic Temporal Correlation of Large-Scale Synaptic Input Is a Major Determinant of Neuronal Bandwidth

2000 ◽  
Vol 12 (3) ◽  
pp. 693-707 ◽  
Author(s):  
David M. Halliday

We determine the bandwidth of a model neurone to large-scale synaptic input by assessing the frequency response between the outputs of a two-cell simulation in which the cells share a percentage of the total synaptic input. For temporally uncorrelated inputs, a large percentage of common input is required before the output discharges of the two cells exhibit significant correlation. In contrast, a small percentage (5%) of the total synaptic input, consisting of stochastic spike trains that are weakly correlated over a broad range of frequencies, exerts a clear influence on the output discharge of both cells over this range of frequencies. Inputs that are weakly correlated at a single frequency induce correlation between the output discharges only at that frequency. The strength of temporal correlation required is sufficiently weak that analysis of a sample pair of input spike trains could fail to reveal the presence of correlated input. Weak temporal correlation between inputs is therefore a major determinant of the transmission of frequencies present in presynaptic spike discharges to the output discharge, and hence of neuronal bandwidth.
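The analysis described above rests on estimating frequency-domain correlation between two output spike trains. A minimal sketch of such an estimate, not the paper's own code, is given below: spike times are binned into a 0/1 time series and the coherence between the two trains is computed with SciPy. The bin width, duration, and placeholder spike trains are all illustrative assumptions.

```python
# Minimal sketch (not the paper's code): estimating coherence between two
# simulated output spike trains to probe correlation induced by shared input.
import numpy as np
from scipy.signal import coherence

def binned_train(spike_times, duration, dt=1e-3):
    """Convert spike times (seconds) to a 0/1 series with bin width dt."""
    bins = np.arange(0.0, duration + dt, dt)
    counts, _ = np.histogram(spike_times, bins=bins)
    return counts.astype(float)

dt = 1e-3                       # 1 ms bins -> frequencies up to 500 Hz
duration = 100.0                # seconds of simulated discharge
rng = np.random.default_rng(0)
# Placeholder trains; in the study these come from the two-cell model.
train1 = binned_train(rng.uniform(0, duration, 2000), duration, dt)
train2 = binned_train(rng.uniform(0, duration, 2000), duration, dt)

f, Cxy = coherence(train1, train2, fs=1.0 / dt, nperseg=1024)
print(f[np.argmax(Cxy)], Cxy.max())  # frequency of strongest output correlation
```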

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Stephan Fischer ◽  
Marc Dinh ◽  
Vincent Henry ◽  
Philippe Robert ◽  
Anne Goelzer ◽  
...  

Abstract Detailed whole-cell modeling requires the integration of heterogeneous cell processes with different modeling formalisms, while keeping whole-cell simulation tractable. Here, we introduce BiPSim, an open-source stochastic simulator of template-based polymerization processes, such as replication, transcription and translation. BiPSim combines an efficient abstract representation of reactions with an implementation of Gillespie's Stochastic Simulation Algorithm (SSA) that runs in constant time with respect to the number of reactions, making it highly efficient for stochastic simulation of large-scale polymerization processes. Moreover, multi-level descriptions of polymerization processes can be handled simultaneously, allowing the user to tune the trade-off between simulation speed and model granularity. We evaluated the performance of BiPSim by simulating genome-wide gene expression in bacteria at multiple levels of granularity. Finally, since no cell-type-specific information is hard-coded in the simulator, models can easily be adapted to other organisms. We expect BiPSim to open new perspectives for the genome-wide simulation of stochastic phenomena in biology.
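For readers unfamiliar with the SSA at the core of BiPSim, the sketch below shows Gillespie's basic direct method on a toy transcription/degradation system. BiPSim's constant-time variant is considerably more elaborate; this only illustrates the underlying algorithm, and the rate constants are illustrative.

```python
# Minimal sketch of Gillespie's direct-method SSA for a toy system:
# mRNA production at rate k_tx and first-order decay at rate k_deg.
import numpy as np

rng = np.random.default_rng(1)
k_tx, k_deg = 0.5, 0.1          # illustrative rate constants (1/s)
mrna, t, t_end = 0, 0.0, 100.0

while t < t_end:
    a = np.array([k_tx, k_deg * mrna])   # reaction propensities
    a0 = a.sum()
    if a0 == 0:
        break
    t += rng.exponential(1.0 / a0)       # waiting time to the next reaction
    if rng.uniform() * a0 < a[0]:        # choose which reaction fires
        mrna += 1                        # transcription event
    else:
        mrna -= 1                        # degradation event

print(f"mRNA copies at t={t:.1f}s: {mrna}")
```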


2021 ◽  
Vol 17 (4) ◽  
pp. 1-21
Author(s):  
He Wang ◽  
Nicoleta Cucu Laurenciu ◽  
Yande Jiang ◽  
Sorin Cotofana

Design and implementation of artificial neuromorphic systems able to provide brain-like computation and/or bio-compatible interfacing are crucial for understanding the human brain's complex functionality and unleashing the full potential of brain-inspired computation. To this end, the realization of energy-efficient, low-area, and bio-compatible artificial synapses, which sustain signal transmission between neurons, is of particular interest for any large-scale neuromorphic system. Graphene is a prime candidate material, with excellent electronic properties, atomic dimensions, and a low energy envelope, and it has already proven effective for logic-gate implementations. Furthermore, unlike the materials used in current artificial synapse implementations, graphene is biocompatible, which opens perspectives for neural interfaces. In view of this, we investigate the feasibility of graphene-based synapses to emulate various synaptic plasticity behaviors and examine their potential area and energy consumption for large-scale implementations. In this article, we propose a generic graphene-based synapse structure that can emulate the fundamental synaptic functionalities, i.e., Spike-Timing-Dependent Plasticity (STDP) and Long-Term Plasticity. Additionally, the graphene synapse is programmable by means of a back-gate bias voltage and can exhibit both excitatory and inhibitory behavior. We investigate its capability to obtain different potentiation/depression time scales for STDP with identical synaptic weight-change amplitude when the input spike duration varies. Our simulation results, for various synaptic plasticities, indicate that a maximum 30% synaptic weight change and potentiation/depression time scales ranging from [-1.5 ms, 1.1 ms] to [-32.2 ms, 24.1 ms] are achievable. We further explore the effect of our proposal at the Spiking Neural Network (SNN) level by performing NEST-based simulations of a small SNN implemented with five leaky integrate-and-fire neurons connected via graphene-based synapses. Our experiments indicate that the number of SNN firing events depends strongly on the synaptic plasticity type and varies monotonically with the input spike frequency. Moreover, for graphene-based Hebbian STDP and a spike duration of 20 ms, we obtain SNN behavior closely resembling that of the same SNN with biological STDP. The proposed graphene-based synapse requires a small area (max. 30 nm²), operates at low voltage (200 mV), and can emulate various plasticity types, which makes it an outstanding candidate for implementing large-scale brain-inspired computation systems.
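The pair-based STDP rule the synapse is shown to emulate can be written compactly; a minimal sketch follows, with exponential potentiation/depression windows. The amplitudes and time constants below are illustrative placeholders, not the device values reported above.

```python
# Minimal sketch of a pair-based STDP rule with exponential windows, the
# kind of weight update a plastic synapse model implements.
import numpy as np

A_plus, A_minus = 0.30, 0.30       # max fractional weight change (cf. ~30%)
tau_plus, tau_minus = 20.0, 20.0   # ms; potentiation/depression time scales

def stdp_dw(delta_t_ms):
    """Weight change for a spike-time difference dt = t_post - t_pre."""
    if delta_t_ms > 0:   # pre before post: potentiation
        return A_plus * np.exp(-delta_t_ms / tau_plus)
    else:                # post before (or with) pre: depression
        return -A_minus * np.exp(delta_t_ms / tau_minus)

for dt in (-40.0, -10.0, 5.0, 30.0):
    print(f"dt = {dt:+6.1f} ms -> dw = {stdp_dw(dt):+.3f}")
```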


Author(s):  
Paolo Bergamo ◽  
Conny Hammer ◽  
Donat Fäh

ABSTRACT We address the relation between seismic local amplification and the topographical and geological indicators describing site morphology. We focus on parameters that can be derived from layers of diffuse information (e.g., digital elevation models, geological maps) and do not require in situ surveys; we term these parameters "indirect" proxies, as opposed to "direct" indicators (e.g., f0, VS30) derived from field measurements. We first compiled an extensive database of indirect parameters covering 142 and 637 instrumented sites in Switzerland and Japan, respectively; we collected topographical indicators at various spatial extents and focused on features shared by the geological descriptions of the two countries. We paired this proxy database with a companion dataset of site amplification factors at 10 frequencies within 0.5–20 Hz, empirically measured at the same Swiss and Japanese stations. We then assessed the robustness of the correlation between individual site-condition indicators and local response by means of statistical analyses; we also compared the proxy-amplification relations at Swiss versus Japanese sites. Finally, we tested the prediction of site amplification by feeding ensembles of indirect parameters to a neural network (NN) structure. The main results are: (1) indirect indicators show higher correlation with site amplification in the low-frequency range (0.5–3.33 Hz); (2) topographical parameters relate to local response not because of topographical amplification effects but because topographical features correspond to properties of the subsurface, and hence to stratigraphic amplification; (3) large-scale topographical indicators relate to low-frequency response, smaller-scale indicators to higher-frequency response; (4) site amplification versus indirect-proxy relations show more marked regional variability than those of direct indicators; and (5) the NN-based prediction of site response performs best in the 1.67–5 Hz band, with both geological and topographical proxies provided as input; topographical indicators alone perform better than geological parameters.
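The prediction step described above is a multi-output regression from proxy vectors to amplification factors. A minimal sketch, not the authors' NN, is shown below using scikit-learn; the feature and target arrays are synthetic placeholders standing in for the proxy database and the measured amplification factors.

```python
# Minimal sketch (illustrative, not the authors' network): regressing site
# amplification at 10 frequencies from indirect proxies with a small MLP.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n_sites = 779                       # 142 Swiss + 637 Japanese stations
X = rng.normal(size=(n_sites, 8))   # topographical + geological proxies
y = rng.normal(size=(n_sites, 10))  # amplification factors at 10 frequencies

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_tr)
nn = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
nn.fit(scaler.transform(X_tr), y_tr)
print("R^2 on held-out sites:", nn.score(scaler.transform(X_te), y_te))
```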


Energies ◽  
2018 ◽  
Vol 11 (10) ◽  
pp. 2741 ◽  
Author(s):  
George Lavidas ◽  
Vengatesan Venugopal

In autonomous electricity grids, renewable energy (RE) contributes significantly to energy production. Offshore resources benefit from higher energy density, smaller visual impact, and higher availability. Offshore locations to the west of Crete achieve wind availability of ≈80%; combined with the potential to install modern large-scale wind turbines of high rated power, the expected annual benefits are substantial. Temporal variability of production, however, is a limiting factor for the wider adoption of large offshore farms. To this end, multi-generation with wave energy can alleviate periods of non-generation for wind. The spatio-temporal correlation of wind and wave energy production shows that hybrid wind-wave stations can contribute significant amounts of clean energy while reducing spatial constraints and public acceptance issues. Offshore technologies can be combined as co-located or non-co-located, altering the contribution profile of wave energy during periods when wind turbines do not produce. In this study, a co-located option contributes up to 626 h per annum, while a non-co-located solution is found to cover over 4000 h per annum during which a wind turbine is non-operative. The findings indicate opportunities not only for capital expenditure reduction but also for the ever-important issues of renewable variability and grid stability.
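The complementarity accounting behind figures like "over 4000 h per annum" amounts to counting hours when the wave device produces while the wind turbine is outside its operating envelope. A minimal sketch under stated assumptions follows; the time series, cut-in/cut-out limits, and wave threshold are all synthetic illustrations, not the study's data.

```python
# Minimal sketch: count hours when wave generation covers a non-operative
# wind turbine, given hourly resource series for one year.
import numpy as np

rng = np.random.default_rng(3)
hours = 8760
wind_speed = rng.weibull(2.0, hours) * 8.0   # m/s, synthetic offshore wind
wave_power = rng.gamma(2.0, 15.0, hours)     # kW/m, synthetic wave resource

cut_in, cut_out = 3.0, 25.0                  # typical turbine limits (m/s)
wave_min = 5.0                               # kW/m needed for wave output

wind_off = (wind_speed < cut_in) | (wind_speed > cut_out)
covered = wind_off & (wave_power >= wave_min)
print(f"wind non-operative: {wind_off.sum()} h/yr, "
      f"covered by wave: {covered.sum()} h/yr")
```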


2021 ◽  
Author(s):  
Yijiao Fang ◽  
Jiangwei Zhong

Abstract A novel dual-band conformal surface plasmons (CSPs) waveguide is designed and studied in this paper. Earlier research has established that the electromagnetic fields of CSP waveguides are confined to a sub-wavelength area, giving them strong potential for device design. However, almost all earlier CSP structures focus on the fundamental-mode characteristics, with only a single resonance frequency. Here we propose an innovative dual inverted-L structure with excellent performance not only in the fundamental mode but also in a new upper mode. The structure operates in the microwave frequency regime and shows outstanding frequency tunability. Unlike earlier CSP waveguides, which were typically designed as single-frequency devices, dual-frequency tunability can be obtained via the dual inverted-L bending branches of the periodic CSP structure. In this paper, we also realize a tunable dual-frequency filter by changing the scaling factor of the inverted-L stubs.


Ocean Science ◽  
2016 ◽  
Vol 12 (5) ◽  
pp. 1067-1090 ◽  
Author(s):  
Marie-Isabelle Pujol ◽  
Yannice Faugère ◽  
Guillaume Taburet ◽  
Stéphanie Dupuy ◽  
Camille Pelloquin ◽  
...  

Abstract. The new DUACS DT2014 reprocessed products have been available since April 2014. Numerous innovative changes have been introduced at each step of an extensively revised data processing protocol. The use of a new 20-year altimeter reference period in place of the previous 7-year reference significantly changes the sea level anomaly (SLA) patterns and thus has a strong impact on users. The use of up-to-date altimeter standards and geophysical corrections, reduced smoothing of the along-track data, and refined mapping parameters, including refined spatial and temporal correlation scales and measurement errors, all contribute to an improved high-quality DT2014 SLA data set. Although all of the DUACS products have been upgraded, this paper focuses on the enhancements to the gridded SLA products over the global ocean. As part of this exercise, 21 years of data have been homogenized, allowing us to retrieve accurate large-scale climate signals such as global and regional mean sea level (MSL) trends and interannual signals, as well as better refined mesoscale features. An extensive assessment exercise has been carried out on this data set, which allows us to establish a consolidated error budget. The errors at mesoscale are about 1.4 cm² in low-variability areas, increase to an average of 8.9 cm² in coastal regions, and reach nearly 32.5 cm² in areas of high mesoscale activity. Compared with the previous DT2010 version, the DT2014 products retain signals at wavelengths below ∼250 km, increasing SLA variance and mean eddy kinetic energy (EKE) by +5.1% and +15%, respectively. Comparisons with independent measurements highlight the improved mesoscale representation within this new data set. The error reduction at mesoscale reaches nearly 10% of the error observed with DT2010. DT2014 also presents an improved coastal signal, with a mean error reduction of nearly 2 to 4%. High-latitude areas are also more accurately represented in DT2014, with improved consistency between spatial coverage and sea-ice edge position. The error budget is used to highlight the limitations of the new gridded products, with notable errors in areas with strong internal tides.
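Diagnostics such as the EKE comparison above are typically derived from the gridded SLA via geostrophy. A minimal sketch of that derivation is shown below; the SLA array, grid spacing, and latitude are synthetic placeholders, not DUACS data.

```python
# Minimal sketch: eddy kinetic energy (EKE) from a gridded SLA field via
# geostrophic velocity anomalies, u = -(g/f) dSLA/dy, v = (g/f) dSLA/dx.
import numpy as np

g = 9.81                                       # gravity (m/s^2)
lat = 35.0                                     # latitude of sample grid (deg)
f = 2 * 7.2921e-5 * np.sin(np.radians(lat))    # Coriolis parameter (1/s)
dx = dy = 25e3                                 # grid spacing (m), ~0.25 deg

rng = np.random.default_rng(4)
sla = rng.normal(scale=0.1, size=(50, 50))     # SLA in metres (synthetic)

u = -(g / f) * np.gradient(sla, dy, axis=0)    # zonal velocity anomaly
v = (g / f) * np.gradient(sla, dx, axis=1)     # meridional velocity anomaly
eke = 0.5 * (u**2 + v**2)
print(f"mean EKE: {eke.mean():.4f} m^2/s^2")
```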


Sensor Review ◽  
2019 ◽  
Vol 39 (2) ◽  
pp. 208-217 ◽  
Author(s):  
Jinghan Du ◽  
Haiyan Chen ◽  
Weining Zhang

Purpose
In large-scale monitoring systems, sensors deployed in different locations collect massive amounts of useful time-series data, which can support real-time data analytics and related applications. However, sensor nodes often fail because of hardware faults, so the collected data are commonly incomplete. The purpose of this study is to predict and recover the missing data in sensor networks.

Design/methodology/approach
Considering the spatio-temporal correlation of large-scale sensor data, this paper proposes a data recovery model for sensor networks based on a deep learning method, the deep belief network (DBN). Specifically, when a sensor fails, its own historical time-series data and real-time data from surrounding sensor nodes with high similarity to the failed node, identified using the proposed similarity filter, are collected first. Then, a high-level feature representation of these spatio-temporally correlated data is extracted by the DBN. Moreover, a reconstruction-error-based algorithm is proposed to determine the structure of the DBN model. Finally, the missing data are predicted from these features by a single-layer neural network.

Findings
This paper uses a noise data set collected from an airport monitoring system for experiments. Comparative experiments show that the proposed algorithms are effective. The proposed data recovery model is compared with several classical models, and the experimental results show that the deep learning-based model achieves not only better prediction accuracy but also better training time and model robustness.

Originality/value
A deep learning method is investigated for the data recovery task and proves effective compared with previous methods. This may provide practical experience for applying deep learning methods.
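A minimal sketch of the pipeline's first stage follows: select neighbouring sensors whose historical series correlate strongly with the failed one, then fit a regressor on their readings. A plain MLP stands in here for the paper's DBN (whose layer sizing the authors choose via a reconstruction-error criterion); all arrays and sizes are synthetic assumptions.

```python
# Minimal sketch: similarity-filtered neighbour selection plus a simple
# regressor for recovering a failed sensor's readings.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
history = rng.normal(size=(20, 1000))   # 20 sensors x 1000 past readings
failed = 0                              # index of the failed sensor

# Similarity filter: Pearson correlation with the failed sensor's history.
corr = np.array([np.corrcoef(history[failed], history[i])[0, 1]
                 for i in range(history.shape[0])])
neighbours = np.argsort(-np.abs(corr))[1:6]   # top-5 most similar sensors

X = history[neighbours].T               # neighbours' readings as features
y = history[failed]                     # failed sensor's past readings
model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=1000,
                     random_state=0).fit(X, y)

live = rng.normal(size=(1, len(neighbours)))  # current neighbour readings
print("recovered reading:", model.predict(live)[0])
```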

