An emergent temporal basis set robustly supports cerebellar time-series learning

2022 ◽  
Author(s):  
Jesse I Gilmer ◽  
Michael A Farries ◽  
Zachary P Kilpatrick ◽  
Ioannis Delis ◽  
Abigail L Person

Learning plays a key role in the function of many neural circuits. The cerebellum is considered a learning machine essential for time interval estimation underlying motor coordination and other behaviors. Theoretical work has proposed that the cerebellar input recipient structure, the granule cell layer (GCL), performs pattern separation of inputs that facilitates learning in Purkinje cells (P-cells). However, the relationship between input reformatting and learning outcomes has remained debated, with roles emphasized for pattern separation features from sparsification to decorrelation. We took a novel approach by training a minimalist model of the cerebellar cortex to learn complex time-series data from naturalistic inputs, in contrast to traditional classification tasks. The model robustly produced temporal basis sets from naturalistic inputs, and the resultant GCL output supported learning of temporally complex target functions. Learning favored surprisingly dense granule cell activity, yet the key statistical features in GCL population activity that drove learning differed from those seen previously for classification tasks. Moreover, different cerebellar tasks were supported by diverse pattern separation features that matched the demands of the tasks. These findings advance testable hypotheses for mechanisms of temporal basis set formation and predict that population statistics of granule cell activity may differ across cerebellar regions to support distinct behaviors.
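A minimal sketch of the general idea, not the authors' model: a random mossy-fiber-to-granule-cell expansion with a threshold nonlinearity, followed by a least-squares Purkinje-style readout of a time-varying target. All dimensions, input statistics, and the target function below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_inputs, n_granule = 2000, 10, 500

# Naturalistic-like mossy fiber inputs: smoothed noise over time.
mf = np.cumsum(rng.normal(size=(T, n_inputs)) * 0.05, axis=0)

# Granule cell layer: random expansion followed by a threshold nonlinearity.
J = rng.normal(size=(n_inputs, n_granule)) / np.sqrt(n_inputs)
gc = np.maximum(mf @ J - 0.1, 0.0)     # threshold sets population activity density

# Purkinje-cell-style readout: least-squares weights for a temporally complex target.
target = np.sin(np.linspace(0, 8 * np.pi, T)) * np.cos(np.linspace(0, 3 * np.pi, T))
w, *_ = np.linalg.lstsq(gc, target, rcond=None)
mse = np.mean((gc @ w - target) ** 2)  # how well the emergent basis supports learning
```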

Water ◽  
2021 ◽  
Vol 13 (14) ◽  
pp. 1944
Author(s):  
Haitham H. Mahmoud ◽  
Wenyan Wu ◽  
Yonghao Wang

This work develops a toolbox called WDSchain in MATLAB that can simulate blockchain on water distribution systems (WDS). WDSchain can import data from Excel and from the EPANET water modelling software, and it extends EPANET to enable blockchain simulation of the hydraulic data at any intended nodes. Using WDSchain strengthens network automation and security in WDS. WDSchain can process time-series data in two simulation modes: (1) static blockchain, which takes a snapshot of the data of all nodes in the WDS at one time interval and outputs it as chained blocks, and (2) dynamic blockchain, which takes the full simulated time series of all nodes as input and establishes chained blocks at each simulated time step. Five consensus mechanisms (PoW, PoT, PoV, PoA, and PoAuth) are implemented in WDSchain to validate data at different security levels. Five different sizes of WDS are simulated in WDSchain for performance evaluation. The results show that a trade-off is needed between system complexity and security level for data validation. WDSchain provides a methodology to further explore data validation using blockchain in WDS. As limitations, WDSchain does not consider the selection of blockchain nodes or broadcasting delay, in contrast to commercial blockchain platforms.
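WDSchain itself is a MATLAB toolbox; as a language-neutral illustration of the chaining idea behind the static mode (and, repeated per time step, the dynamic mode), here is a Python sketch in which the node names, readings, and hash scheme are assumptions, not WDSchain's internals.

```python
import hashlib
import json
import time

def make_block(readings, prev_hash):
    """Chain one snapshot of node readings to the previous block (static-mode idea)."""
    block = {
        "timestamp": time.time(),
        "readings": readings,        # e.g. hypothetical node pressures
        "prev_hash": prev_hash,
    }
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

# Dynamic mode would repeat this for every simulated time step.
chain = [make_block({"node_1": 52.3, "node_2": 48.9}, prev_hash="0" * 64)]
chain.append(make_block({"node_1": 51.7, "node_2": 49.2}, chain[-1]["hash"]))
assert chain[1]["prev_hash"] == chain[0]["hash"]  # tamper-evidence check
```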


2020 ◽  
Vol 2020 (1) ◽  
pp. 98-117
Author(s):  
Jyoti U. Devkota

Abstract Nightfires on the Earth's surface are captured by satellite. They are emitted by various sources such as gas flares, biomass burning, volcanoes, and industrial sites such as steel mills. The amount of nightfire in an area is a proxy indicator of fuel consumption and CO2 emission. In this paper, the behavior of radiant heat (RH) data produced by nightfires is analyzed in detail over a period of 75 hours; the geographical coordinates of the energy sources generating these values are not considered. Visible Infrared Imaging Radiometer Suite Day/Night Band (VIIRS DNB) satellite earth observation nightfire data were used. This time series of 28252 RH observations (unit W) spans 75 hours, from 2 September 2018 to 6 September 2018. The dynamics of change in the overall behavior of these data with respect to time, irrespective of geographical occurrence, are studied and presented here. Different statistical methodologies are used to identify hidden groups and patterns that are not obvious from remote sensing alone. Underlying groups and clusters are formed using Cluster Analysis and Discriminant Analysis. The behavior of RH for three consecutive days is studied using Analysis of Variance. Cubic Spline Interpolation and merging were used to create a time series sampled at equal one-minute intervals. The time series is decomposed to study the effect of its various components. The behavior of the data is also analyzed in the frequency domain through the study of period, amplitude, and spectrum.
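As a sketch of the interpolation step described above, the following Python fragment resamples hypothetical irregularly timed RH observations onto an equal one-minute grid with a cubic spline and computes a simple periodogram; the observation times and values are invented for illustration.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical irregular observations: seconds since start, radiant heat in W.
t_obs = np.array([0.0, 47.0, 110.0, 178.0, 251.0])
rh_obs = np.array([3.1e3, 2.8e3, 3.4e3, 3.0e3, 2.6e3])

# Fit a cubic spline and resample on an equal one-minute grid.
spline = CubicSpline(t_obs, rh_obs)
t_grid = np.arange(0.0, t_obs[-1], 60.0)   # every 60 s
rh_grid = spline(t_grid)

# The regular series can then feed classical decomposition or spectral
# analysis, e.g. a periodogram to read off dominant period and amplitude.
freqs = np.fft.rfftfreq(len(rh_grid), d=60.0)
power = np.abs(np.fft.rfft(rh_grid - rh_grid.mean())) ** 2
```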


2019 ◽  
Author(s):  
Girish L

Network and cloud data centers generate a lot of data every second; this data can be collected as time series. A time series is a sequence taken at successive, equally spaced points in time: the values of a metric recorded at a fixed interval up to a specific time form the data of a time series. Such time series data can be collected from system metrics like CPU, memory, and disk utilization. The TICK Stack is an acronym for a platform of open source tools built to make the collection, storage, graphing, and alerting of time series data easy. As data collectors, the authors use both Telegraf and Collectd; for storing and analyzing data, they use the time series database InfluxDB. For plotting and visualization, they use Chronograf along with Grafana. Kapacitor is used for alert refinement: once system metric usage exceeds a specified threshold, an alert is generated and sent to the system administrator.
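The alerting logic can be illustrated with a minimal Python sketch that stands in for a Kapacitor threshold rule; the thresholds are illustrative, and psutil is assumed as the metrics source rather than Telegraf/Collectd.

```python
import psutil  # assumed available for reading system metrics

THRESHOLDS = {"cpu": 90.0, "memory": 85.0, "disk": 80.0}  # percent, illustrative

def sample_metrics():
    """Collect the kind of metrics Telegraf/Collectd would ship to InfluxDB."""
    return {
        "cpu": psutil.cpu_percent(interval=1),
        "memory": psutil.virtual_memory().percent,
        "disk": psutil.disk_usage("/").percent,
    }

def check_alerts(metrics):
    """Mimic a Kapacitor threshold rule: flag any metric above its limit."""
    return [f"ALERT {name}={value:.1f}% exceeds {THRESHOLDS[name]}%"
            for name, value in metrics.items() if value > THRESHOLDS[name]]

for alert in check_alerts(sample_metrics()):
    print(alert)  # in the real stack this would notify the system admin
```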


Mathematics ◽  
2021 ◽  
Vol 9 (17) ◽  
pp. 2146
Author(s):  
Mikhail Zymbler ◽  
Elena Ivanova

Currently, big sensor data arise in a wide spectrum of Industry 4.0, Internet of Things, and Smart City applications. In such subject domains, sensors tend to have a high sampling frequency and produce massive time series in a relatively short time interval. The data collected from the sensors are subject to mining in order to make strategic decisions. In this article, we consider the problem of choosing a Time Series Database Management System (TSDBMS) to provide efficient storage and mining of big sensor data. We overview InfluxDB, OpenTSDB, and TimescaleDB, which are among the most popular state-of-the-art TSDBMSs and represent different categories of such systems, namely native systems, add-ons over NoSQL systems, and add-ons over relational DBMSs (RDBMSs), respectively. Our overview shows that, at present, TSDBMSs offer only a modest built-in toolset for mining big sensor data. This leads to the use of third-party mining systems and unwanted overhead costs due to exporting data outside the TSDBMS, data conversion, and so on. We propose an approach to managing and mining sensor data inside RDBMSs that exploits the Matrix Profile concept. A Matrix Profile is a data structure that annotates a time series with the index of, and the distance to, the nearest neighbor of each subsequence of the time series, and it serves as a basis for discovering motifs, anomalies, and other time-series data mining primitives. This approach is implemented as a PostgreSQL extension that allows an application programmer both to compute matrix profiles and mining primitives and to represent them as relational tables. Experimental case studies show that our approach surpasses the above-mentioned out-of-TSDBMS competitors in terms of performance, since sensor data are mined inside the DBMS with no significant overhead costs.
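For concreteness, a naive O(n^2) reference implementation of the matrix profile follows; the paper's contribution is computing it efficiently inside PostgreSQL, so this Python sketch only illustrates the definition, with an invented test series.

```python
import numpy as np

def matrix_profile(ts, m):
    """Naive matrix profile: for each length-m subsequence, the z-normalized
    Euclidean distance to (and index of) its nearest neighbor, excluding
    trivial matches near the query itself."""
    n = len(ts) - m + 1
    subs = np.array([ts[i:i + m] for i in range(n)])
    subs = (subs - subs.mean(axis=1, keepdims=True)) / subs.std(axis=1, keepdims=True)
    profile = np.full(n, np.inf)
    indices = np.zeros(n, dtype=int)
    excl = m // 2  # exclusion zone around each query
    for i in range(n):
        d = np.linalg.norm(subs - subs[i], axis=1)
        d[max(0, i - excl):i + excl + 1] = np.inf  # mask trivial matches
        indices[i] = np.argmin(d)
        profile[i] = d[indices[i]]
    return profile, indices

# Low profile values flag motifs (repeated patterns); high values flag anomalies.
ts = np.sin(np.linspace(0, 20, 400)) + 0.05 * np.random.randn(400)
ts[200:210] += 2.0  # inject an anomaly
mp, idx = matrix_profile(ts, m=25)
print("most anomalous subsequence starts at", int(np.argmax(mp)))
```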


Author(s):  
Roy Assaf ◽  
Anika Schumann

We demonstrate that CNN deep neural networks can not only be used for making predictions based on multivariate time series data, but also for explaining these predictions. This is important for a number of applications where predictions are the basis for decisions and actions, so confidence in the prediction result is crucial. We design a two-stage convolutional neural network architecture that uses particular kernel sizes. This allows us to utilise gradient-based techniques for generating saliency maps for both the time dimension and the features. These are then used to explain which features during which time intervals are responsible for a given prediction, as well as during which time intervals the joint contribution of all features was most important for that prediction. We demonstrate our approach by predicting the average energy production of photovoltaic power plants and explaining these predictions.
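A hedged PyTorch sketch of gradient-based saliency for a 1D CNN follows; the architecture, kernel sizes, and dimensions are placeholders, not the authors' two-stage design.

```python
import torch
import torch.nn as nn

# Hypothetical 1D CNN for multivariate time series (channels = features).
model = nn.Sequential(
    nn.Conv1d(in_channels=4, out_channels=16, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.Conv1d(16, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 96, 1),  # predict e.g. average energy production
)

x = torch.randn(1, 4, 96, requires_grad=True)  # 4 features, 96 time steps
pred = model(x)
pred.backward()  # single-element output, so no grad argument is needed

# Gradient magnitude w.r.t. the input serves as a saliency map.
saliency = x.grad.abs().squeeze(0)  # shape (features, time)
per_time = saliency.sum(dim=0)      # which intervals mattered jointly
per_feature = saliency.sum(dim=1)   # which features mattered overall
```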


2019 ◽  
Vol 12 (3) ◽  
pp. 82-89
Author(s):  
O. S. Vidmant

The use of new tools for economic data analysis in the last decade has led to significant improvements in forecasting. This is due both to the relevance of the problem and to the development of technologies that allow more complex models to be implemented without significant computing power. The constant volatility of the world indices forces all financial market players to improve risk management models and, at the same time, to revise their capital investment policies. More stringent liquidity and transparency standards in the financial sector also encourage participants to experiment with protective mechanisms and to create predictive algorithms that can not only reduce losses from the volatility of financial instruments but also benefit from short-term investment manipulations. The article discusses the possibility of improving the computational efficiency of volatility prediction with tree-ensemble models using various methods of data analysis. As the key sources of efficiency gains, the author studied aggregating financial time series using several methods of variance calculation and prediction (standard, EWMA, ARCH, GARCH) and analyzed the possibility of simplifying the calculations while reducing the correlation between the series. The methods are demonstrated on an array of historical price data (Open, High, Low, Close) and volume indicators of futures trading on the RTS index, with a five-minute time interval and a year of historical data. The proposed method reduces the computing power and processing time needed to analyze short-term positions in financial markets and identifies risks at a given level of confidence.
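As an example of the simplest of the listed variance estimators, a Python sketch of the recursive EWMA forecast follows; the decay factor lam=0.94 is the common RiskMetrics convention, and the price series here is synthetic.

```python
import numpy as np

def ewma_variance(returns, lam=0.94):
    """Recursive EWMA variance forecast:
    sigma2[t] = lam * sigma2[t-1] + (1 - lam) * returns[t-1]**2."""
    sigma2 = np.empty_like(returns)
    sigma2[0] = returns[:20].var()  # seed with an initial sample variance
    for t in range(1, len(returns)):
        sigma2[t] = lam * sigma2[t - 1] + (1 - lam) * returns[t - 1] ** 2
    return sigma2

# Illustrative log returns from synthetic close prices at a five-minute interval.
close = 100 * np.exp(np.cumsum(0.001 * np.random.randn(500)))
returns = np.diff(np.log(close))
vol_forecast = np.sqrt(ewma_variance(returns))
```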


2019 ◽  
Vol 290 ◽  
pp. 02002
Author(s):  
Crina Narcisa Deac ◽  
Gicu Calin Deac ◽  
Florina Chiscop ◽  
Cicerone Laurentiu Popa

Anomaly detection is a crucial analysis topic in the field of Industry 4.0 data mining, as is estimating the probability that a specific machine will go down due to a component failure in the next time interval. In this article, we used time series data collected from machines, from both classes: time series data leading up to machine failures, as well as data from healthy operational periods. We used telemetry data, error logs from still-operational components, and maintenance records comprising historical breakdowns and component replacements to build and compare several different models. The proposed methods were validated by comparing the actual failures in the test data with the predicted component failures over the test data.
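A minimal Python sketch of the general workflow, windowing telemetry into features and fitting a classifier to failure labels, follows; the data, window length, and model choice are placeholders, not the authors' exact pipeline.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Synthetic hourly telemetry, plus placeholder labels marking 24-hour
# windows that precede a recorded component failure.
rng = np.random.default_rng(0)
telemetry = pd.DataFrame({
    "voltage": rng.normal(170, 10, 1920),
    "vibration": rng.normal(40, 5, 1920),
})
labels = rng.integers(0, 2, 1920 // 24)  # one failure flag per window

# Aggregate raw telemetry into 24-hour windows (mean and std as features).
windows = telemetry.groupby(telemetry.index // 24).agg(["mean", "std"])
windows.columns = ["_".join(c) for c in windows.columns]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(windows, labels)
failure_prob = clf.predict_proba(windows)[:, 1]  # P(failure in next interval)
```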


Author(s):  
Faruk H. Bursal ◽  
Benson H. Tongue

Abstract In this paper, a system identification algorithm based on Interpolated Mapping (IM) that was introduced in a previous paper is generalized to the case of data stemming from arbitrary time series. The motivation for the new algorithm is the need to identify nonlinear dynamics in continuous time from discrete-time data. This approach has great generality and is applicable to problems arising in many areas of science and engineering. In the original formulation, a map defined on a regular grid in the state space of a dynamical system was assumed to be given. For the formulation to become practically viable, however, the requirement of initial conditions being taken from such a regular grid needs to be dropped. In particular, one would like to use time series data, where the time interval between samples is identified with the mapping time step T. This paper is concerned with the resulting complications. Various options for extending the formulation are examined, and a choice is made in favor of a pre-processing algorithm for estimating the FS map based on local fits to the data set. The suggested algorithm also has smoothing properties that are desirable from the standpoint of noise reduction.
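One generic way to realize the local-fit idea, not necessarily the authors' FS-map estimator, is a k-nearest-neighbor linear least-squares fit that predicts each state's image under the time-T map; the Python sketch below uses an invented trajectory.

```python
import numpy as np

def local_linear_map(X, Y, query, k=10):
    """Estimate F(query), where Y[i] = F(X[i]) are successive sampled states,
    by a least-squares affine fit to the k nearest sample points."""
    dists = np.linalg.norm(X - query, axis=1)
    idx = np.argsort(dists)[:k]
    A = np.hstack([X[idx], np.ones((k, 1))])  # affine design matrix
    coef, *_ = np.linalg.lstsq(A, Y[idx], rcond=None)
    return np.append(query, 1.0) @ coef

# Build (state, next state) pairs from a trajectory sampled with step T.
traj = np.cumsum(np.random.randn(500, 2) * 0.1, axis=0)  # placeholder data
X, Y = traj[:-1], traj[1:]
grid_point = np.array([0.0, 0.0])
print(local_linear_map(X, Y, grid_point))  # estimated image of the grid point
```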


2020 ◽  
Vol 12 (17) ◽  
pp. 2726 ◽  
Author(s):  
Yongguang Zhai ◽  
Nan Wang ◽  
Lifu Zhang ◽  
Lei Hao ◽  
Caihong Hao

Accurate and timely information on the spatial distribution of crops is of great significance to precision agriculture and food security. Many cropland mapping methods using satellite image time series are based on expert knowledge to extract phenological features to identify crops, and it remains a challenge to automatically obtain meaningful features from time-series data for crop classification. In this study, we developed an automated method based on satellite image time series to map the spatial distribution of three major crops, maize, rice, and soybean, in northeastern China. The core method is nonlinear dimensionality reduction. However, existing nonlinear dimensionality reduction techniques cannot handle missing data and are not designed for subsequent classification tasks. Therefore, the nonlinear dimensionality reduction algorithm Landmark–Isometric feature mapping (L–ISOMAP) is improved. The advantage of the improved L–ISOMAP is that it does not need to reconstruct time series around missing data, and it automatically obtains meaningful feature metrics for classification. The improved L–ISOMAP was applied to Landsat 8 full-band time-series data over the crop-growing season in the three northeastern provinces of China; the dimensionality-reduced bands were then input into a random forest classifier to produce a crop distribution map. The results show that the mapped crop areas are consistent with official statistics. The 2015 crop distribution map was evaluated against a collected reference dataset, with an overall classification accuracy of 83.68% and a Kappa index of 0.7519. The geographical characteristics of the major crops in the three provinces of northeast China were analyzed. This study demonstrates that the improved L–ISOMAP method can automatically extract features for crop classification. For future work, there is great potential for applying automatic mapping algorithms to other data or classification tasks.
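A hedged sketch of the classification stage follows, using scikit-learn's standard Isomap as a stand-in for the improved L–ISOMAP (standard Isomap cannot handle missing data, which is precisely what the paper's improvement addresses); the feature stack and labels are synthetic.

```python
import numpy as np
from sklearn.manifold import Isomap
from sklearn.ensemble import RandomForestClassifier

# Placeholder stack of per-pixel time-series features:
# rows = pixels, columns = band values across the growing season.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 7 * 12))   # e.g. 7 bands x 12 dates, synthetic
y = rng.integers(0, 3, 1000)          # 0 = maize, 1 = rice, 2 = soybean

# Standard Isomap as a stand-in for the improved Landmark ISOMAP.
embedding = Isomap(n_neighbors=10, n_components=6).fit_transform(X)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(embedding, y)
crop_map = clf.predict(embedding)     # per-pixel crop labels
```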


2020 ◽  
Author(s):  
John Hongyu Meng ◽  
Hermann Riecke

Abstract How animals learn to discriminate between different sensory stimuli is an intriguing question. An important, common step towards discrimination is the enhancement of differences between the representations of relevant stimuli. This can be part of the learning process. In rodents, the olfactory bulb, which is known to contribute to this pattern separation, exhibits extensive structural synaptic plasticity even in adult animals: reciprocal connections between excitatory mitral cells and inhibitory granule cells are persistently formed and eliminated, correlated with mitral cell and granule cell activity. Here we present a Hebbian-type model for this plasticity. It captures the experimental observation that the same learning protocol that enhanced the discriminability of similar stimuli actually reduced that of dissimilar stimuli. The model predicts that the learned bulbar network structure is remembered across training with additional stimuli, unless the new stimuli interfere with the representations of previously learned ones.
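A toy Python sketch of the flavor of such activity-dependent structural plasticity follows; the rates, learning-rule constants, and pruning threshold are invented and do not reproduce the authors' model.

```python
import numpy as np

rng = np.random.default_rng(2)
n_mitral, n_granule = 50, 200
W = (rng.random((n_granule, n_mitral)) < 0.1).astype(float)  # sparse synapses

def hebbian_step(W, mitral_rate, eta=0.01, theta=0.05):
    """Activity-dependent formation/removal of mitral-granule connections:
    co-active pairs strengthen; weights decaying below threshold are pruned."""
    granule_rate = np.tanh(W @ mitral_rate)            # simple rate response
    W = W + eta * np.outer(granule_rate, mitral_rate)  # Hebbian growth
    W = W - eta * 0.5 * W                              # uniform decay
    W[W < theta] = 0.0                                 # structural elimination
    return W

odor = rng.random(n_mitral)  # placeholder stimulus representation
for _ in range(100):         # repeated training with one stimulus
    W = hebbian_step(W, odor)
```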

