A Temporal Pool Learning Algorithm Based on Location Awareness

2021, Vol 2021, pp. 1-12
Author(s): Lei Li, Yuquan Zhu, Tao Cai, Dejiao Niu, Huaji Shi, ...

Hierarchical Temporal Memory is a type of artificial neural network that imitates the structure and information-processing flow of the human brain. It has strong adaptability and fast learning ability, and it has become a focus of current research. Hierarchical Temporal Memory captures and stores the temporal characteristics of input sequences through its temporal pool learning algorithm. However, the current algorithm suffers from low learning efficiency and poor learning quality on time-series data. In this paper, a temporal pool learning algorithm based on location awareness is proposed. Cell selection rules based on location awareness and dendrite updating rules based on adjacent inputs are designed to improve the efficiency and effectiveness of the algorithm. Using a prototype implementation, three different datasets are used to test and analyze the algorithm's performance. The experimental results verify that the algorithm can quickly capture the complete characteristics of the input sequence. Whether or not the sequence contains similar segments, the proposed algorithm achieves higher prediction recall and precision than existing algorithms.
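
As an illustration only, the following Python sketch shows one way the two ideas named above, location-aware cell selection and adjacent-input dendrite updates, could look in an HTM-style temporal pool. The abstract does not give the authors' exact rules, so the position-modulo fallback in `select_cell` and the one-step dendrite update in `learn_sequence` are assumptions, not the paper's algorithm.

```python
# Hypothetical sketch: location-aware cell selection in an HTM-style
# temporal pool. All rules below are assumptions for illustration only.
from collections import defaultdict

CELLS_PER_COLUMN = 8

def select_cell(column, position, predicted):
    """Reuse a cell in this column that the previous step predicted;
    otherwise derive the cell from the input's position in the sequence,
    so the same symbol at different locations activates different cells
    (the 'location awareness' idea)."""
    for cell in range(CELLS_PER_COLUMN):
        if (column, cell) in predicted:
            return cell
    return position % CELLS_PER_COLUMN

def learn_sequence(sequence, active_columns_of, dendrites=None):
    """One pass over a sequence. `active_columns_of` maps a symbol to its
    active columns; `dendrites` maps a cell to the cells it predicts."""
    dendrites = dendrites if dendrites is not None else defaultdict(set)
    prev_active, trace = set(), []
    for pos, symbol in enumerate(sequence):
        predicted = (set().union(*(dendrites[c] for c in prev_active))
                     if prev_active else set())
        active = {(col, select_cell(col, pos, predicted))
                  for col in active_columns_of(symbol)}
        # adjacent-input update: cells active at the previous step learn
        # to predict the cells chosen for the current input
        for c in prev_active:
            dendrites[c] |= active
        trace.append(active)
        prev_active = active
    return trace, dendrites

# toy usage: each symbol activates the single column given by its index
trace, dendrites = learn_sequence("ABAB", lambda s: [ord(s) - ord("A")])
```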

2020, Vol ahead-of-print (ahead-of-print)
Author(s): E.N. Osegi

In this paper, an emerging state-of-the-art machine intelligence technique, Hierarchical Temporal Memory (HTM), is applied to the task of short-term load forecasting (STLF). An HTM Spatial Pooler (HTM-SP) stage is used to continually form sparse distributed representations (SDRs) from univariate load time-series data, a temporal aggregator transforms the SDRs into a sequential bivariate representation space, and an overlap classifier makes temporal classifications from the bivariate SDRs through time. The comparative performance of HTM on several daily electrical load time series, including the Eunite competition dataset and the Polish power system dataset from 2002 to 2004, is presented. The robustness of HTM is further validated using hourly load data from three more recent electricity markets. The results obtained from experiments with the Eunite and Polish datasets indicate that HTM performs better than the existing techniques reported in the literature. In general, the robustness test also shows that the error distribution of the proposed HTM technique is positively skewed for most of the years considered, with kurtosis values mostly below the baseline value of 3, indicating a reasonable level of outlier rejection.
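
To make the classification stage concrete, here is a minimal sketch of the overlap rule between binary SDRs, the nearest-neighbour decision an overlap classifier reduces to. The paper's spatial pooler and temporal aggregator are not reproduced, and `classify_by_overlap` is an illustrative name, not the paper's code.

```python
# Illustrative sketch of overlap-based classification of binary SDRs.
import numpy as np

def sdr_overlap(a, b):
    """Overlap between two binary SDRs: the number of shared active bits."""
    return int(np.sum(np.logical_and(a, b)))

def classify_by_overlap(query, stored, labels):
    """Return the label of the stored SDR with the highest overlap."""
    scores = [sdr_overlap(query, s) for s in stored]
    return labels[int(np.argmax(scores))]

# toy usage with 64-bit SDRs at roughly 6% sparsity
rng = np.random.default_rng(0)
stored = [rng.random(64) < 0.06 for _ in range(10)]
labels = list(range(10))
print(classify_by_overlap(stored[3], stored, labels))  # -> 3
```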


The aim of this research is risk modelling based on sentiment analysis of Twitter posts. We analyze the posts of several users, or of a particular user, to check whether they could be a cause of concern to society. Each sentiment, such as happiness, sadness, and anger, contributes a severity score to the final table to which the machine learning algorithm is applied. The data fed into the machine learning algorithms are monitored over a period of time and relate to a particular topic in a given area.


2020, Vol 10 (5), pp. 1754
Author(s): Pedro Huertas-Leyva, Giovanni Savino, Niccolò Baldanzini, Marco Pierini

The most common evasive maneuver among motorcycle riders, and one of the most complicated to perform in emergency situations, is braking. Because of the inherent instability of motorcycles, motorcycle crashes are frequently caused by loss of control while braking as an evasive maneuver. Understanding the motion conditions that lead riders to start losing control is essential for defining countermeasures capable of minimizing the risk of this type of crash. This paper provides predictive models that classify unsafe loss-of-control braking maneuvers on a straight line before the vehicle becomes irreversibly unstable. We performed braking-maneuver experiments in the field with motorcycle riders facing a simulated emergency scenario: a mock-up intersection in which we generated conflict events between the motorcycle ridden by the participants and an oncoming car driven by trained research staff. The collected data comprise 165 braking trials (including 11 trials identified as loss of control) with 13 riders representing four categories of braking skill, from beginner to expert. Three predictive models of loss-of-control events during braking trials, ranging from a basic model to a more advanced one, were defined using logistic regression as the supervised learning method and the area under the receiver operating characteristic (ROC) curve as the performance indicator. The predictor variables of the models were identified among the parameters of the vehicle kinematics. The best model predicted 100% of the loss-of-control and 100% of the full-control cases. The basic and the more advanced supervised models were adapted for loss-of-control identification on time-series data, and real-time detection of loss-of-control events performed as well as the supervised models. The study showed that expert riders may maintain stability under dynamic conditions that normally lead less skilled riders to loss of control or falls. The best decision thresholds of the most relevant kinematic parameters for predicting loss of control were defined. The thresholds of parameters that typically characterize loss of control, such as the yaw rate and the front-wheel lock duration, depended on rider skill level. The peak-to-root-mean-square ratio of roll acceleration was the most robust parameter for identifying loss of control across all skill levels.
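
As a sketch only, the snippet below shows the kind of pipeline the abstract describes: a logistic regression on kinematic predictors, scored with ROC AUC, and a decision threshold read off the ROC curve. The feature names and the placeholder data are assumptions; the paper selects its own predictors and thresholds.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import cross_val_predict

# Placeholder data: one row per braking trial with three illustrative
# kinematic predictors (e.g. peak yaw rate, front-wheel lock duration,
# peak-to-RMS roll acceleration); y = 1 marks loss of control.
rng = np.random.default_rng(0)
X = rng.normal(size=(165, 3))
y = (X[:, 2] + 0.2 * rng.normal(size=165) > 1.5).astype(int)

model = LogisticRegression()
probs = cross_val_predict(model, X, y, cv=5, method="predict_proba")[:, 1]
print("cross-validated AUC:", roc_auc_score(y, probs))

# pick the decision threshold maximising TPR - FPR (Youden's J)
fpr, tpr, thresholds = roc_curve(y, probs)
best_threshold = thresholds[np.argmax(tpr - fpr)]
```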


2021, Vol 12 (1)
Author(s): Daniel J. Gauthier, Erik Bollt, Aaron Griffith, Wendson A. S. Barbosa

Reservoir computing is a best-in-class machine learning algorithm for processing information generated by dynamical systems using observed time-series data. Importantly, it requires very small training data sets, uses linear optimization, and thus requires minimal computing resources. However, the algorithm uses randomly sampled matrices to define the underlying recurrent neural network and has a multitude of metaparameters that must be optimized. Recent results demonstrate the equivalence of reservoir computing to nonlinear vector autoregression, which requires no random matrices and fewer metaparameters, and provides interpretable results. Here, we demonstrate that nonlinear vector autoregression excels at reservoir computing benchmark tasks and requires even shorter training data sets and training time, heralding the next generation of reservoir computing.
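
A minimal sketch of the nonlinear vector autoregression (NVAR) idea, under assumptions of my own (a one-dimensional series, three delay taps, quadratic monomials, and a ridge readout); the paper's benchmark settings differ.

```python
import numpy as np

def nvar_features(x, k=3):
    """NVAR feature rows for a 1-D series x: a constant, the k most
    recent delayed values, and all unique quadratic products of them."""
    n = len(x) - k + 1
    lin = np.column_stack([x[i:i + n] for i in range(k)][::-1])  # delays
    quad = np.column_stack([lin[:, i] * lin[:, j]
                            for i in range(k) for j in range(i, k)])
    return np.column_stack([np.ones(n), lin, quad])

# one-step-ahead forecasting with a linear (ridge) readout
rng = np.random.default_rng(1)
x = np.sin(0.3 * np.arange(400)) + 0.05 * rng.normal(size=400)
k, ridge = 3, 1e-6
Phi = nvar_features(x[:-1], k)     # features ending at time t
y = x[k:]                          # target: x at time t+1
W = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(Phi.shape[1]), Phi.T @ y)
pred = Phi @ W                     # in-sample one-step predictions
```

Note that the only trained object is the linear readout W, which is why NVAR needs no random matrices and so few metaparameters (here just the delay count k and the ridge strength).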


2020, Vol 9 (2), pp. 135-142
Author(s): Di Mokhammad Hakim Ilmawan, Budi Warsito, Sugito Sugito

Bitcoin is a digital asset that can be used to make a profit, and one way to use it profitably is to trade it. In trading, the decision whether or not to buy is crucial. If we can predict the price of Bitcoin in a future period, we can decide whether to buy Bitcoin or not. An artificial neural network can be used to predict the Bitcoin price, which is time-series data. Among the many learning algorithms for artificial neural networks, the Modified Artificial Bee Colony is an optimization algorithm used to find the optimal weights of the network. In this study, the Bitcoin exchange rate against the Rupiah from September 1, 2017 to January 4, 2019 is used. The training results give a MAPE of 3.12% and the testing results a MAPE of 2.02%. These small MAPE values indicate that the predictions of the artificial neural network optimized by the Modified Artificial Bee Colony algorithm are quite accurate.
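
For illustration, here is a hedged sketch of fitting a tiny neural network's weights with an artificial bee colony (ABC) style search that minimises MAPE. It follows the standard ABC scheme (with the onlooker phase omitted for brevity) rather than the paper's specific modified variant, whose changes the abstract does not detail; all names, sizes, and the placeholder series are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

def forward(w, X, hidden=5):
    """Tiny one-hidden-layer network; w is a flat weight vector."""
    n_in = X.shape[1]
    W1 = w[:n_in * hidden].reshape(n_in, hidden)
    b1 = w[n_in * hidden:n_in * hidden + hidden]
    W2, b2 = w[-hidden - 1:-1], w[-1]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def abc_fit(X, y, dim, food=20, iters=200, limit=20):
    """Employed-bee and scout phases of ABC (onlooker phase omitted)."""
    sources = rng.uniform(-1, 1, (food, dim))
    cost = np.array([mape(y, forward(s, X)) for s in sources])
    stall = np.zeros(food, dtype=int)
    for _ in range(iters):
        for i in range(food):                 # employed bees: local moves
            trial = sources[i].copy()
            j, k = rng.integers(dim), rng.integers(food)
            trial[j] += rng.uniform(-1, 1) * (sources[i, j] - sources[k, j])
            c = mape(y, forward(trial, X))
            if c < cost[i]:
                sources[i], cost[i], stall[i] = trial, c, 0
            else:
                stall[i] += 1
        worn = stall > limit                  # scouts replace stuck sources
        sources[worn] = rng.uniform(-1, 1, (worn.sum(), dim))
        cost[worn] = [mape(y, forward(s, X)) for s in sources[worn]]
        stall[worn] = 0
    return sources[np.argmin(cost)]

# illustrative use: predict the next price from three lagged prices
prices = 100 + np.cumsum(rng.normal(size=300))     # placeholder series
X = np.column_stack([prices[i:i - 3] for i in range(3)])
y = prices[3:]
w = abc_fit(X, y, dim=3 * 5 + 5 + 5 + 1)           # matches hidden=5
print("train MAPE: %.2f%%" % mape(y, forward(w, X)))
```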


Sensors, 2021, Vol 21 (22), pp. 7628
Author(s): Yeon-Wook Kim, Kyung-Lim Joa, Han-Young Jeong, Sangmin Lee

In this study, a wearable inertial measurement unit system was introduced to assess patients via the Berg balance scale (BBS), a clinical test for balance assessment. For this purpose, an automatic scoring algorithm was developed. The principal aim of this study is to improve on the performance of machine-learning-based methods by introducing a deep-learning algorithm. A one-dimensional (1D) convolutional neural network (CNN) and a gated recurrent unit (GRU), which show good performance on multivariate time-series data, were used as model components to find the optimal ensemble model. Various structures were tested, and a stacking ensemble model with a simple meta-learner after two 1D-CNN heads and one GRU head showed the best performance. Additionally, model performance was enhanced by improving the dataset via preprocessing. The data were downsampled to an appropriate sampling rate, which improved the training and evaluation times of the model. An augmentation process solved the data imbalance problem and improved model accuracy. The maximum accuracy across the 14 BBS tasks was 98.4%, which is superior to the results of previous studies.
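
A hedged Keras sketch of the kind of architecture described: two 1D-CNN heads and one GRU head feeding a simple dense meta-learner. For brevity this is a single multi-head model; a strict stacking ensemble would train the heads separately and fit the meta-learner on their held-out predictions. The input shape, layer sizes, and the five-class output (BBS task scores 0-4) are assumptions, not the paper's tuned structure.

```python
import tensorflow as tf
from tensorflow.keras import layers

def cnn_head(inp, filters, kernel):
    """One 1D-CNN head: convolution, pooling, global average."""
    x = layers.Conv1D(filters, kernel, activation="relu")(inp)
    x = layers.MaxPooling1D(2)(x)
    return layers.GlobalAveragePooling1D()(x)

inp = layers.Input(shape=(200, 6))   # assumed: 200 IMU samples x 6 channels
h1 = cnn_head(inp, 32, 5)
h2 = cnn_head(inp, 64, 11)
h3 = layers.GRU(32)(inp)             # recurrent head
merged = layers.Concatenate()([h1, h2, h3])
meta = layers.Dense(32, activation="relu")(merged)  # simple meta-learner
out = layers.Dense(5, activation="softmax")(meta)   # BBS scores 0-4 assumed
model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```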


2020, Vol 10 (12), pp. 4092
Author(s): Sung-Hyun Yoon, Ha-Jin Yu

Recurrent neural networks (RNNs) can model the time dependency of time-series data, and they have been widely used in text-dependent speaker verification to extract speaker-and-phrase-discriminant embeddings. As with other neural networks, RNNs are trained in mini-batch units, and to feed input sequences into an RNN in mini-batch units, all the sequences in each mini-batch must have the same length. However, the sequences have variable lengths, and we have no knowledge of these lengths in advance. Truncation and padding are most commonly used to make all sequences the same length, but they distort the information: some information is lost and/or unnecessary information is added, which can degrade the performance of text-dependent speaker verification. In this paper, we propose a method to handle variable-length sequences for RNNs without this distortion: the output sequence is truncated so that it has the same length as the corresponding original input sequence. The experimental results for the text-dependent speaker verification task in part 2 of RSR 2015 show that our method reduces the relative equal error rate by approximately 1.3% to 27.1%, depending on the task, compared to the baselines, with only a small overhead in execution time.
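
A minimal PyTorch sketch of the idea: pad the inputs only so they fit in one mini-batch, run the RNN, then cut each output sequence back to its true length so the padded steps never enter the embedding. The dimensions and the mean-pooling readout are assumptions for illustration.

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pad_sequence

rnn = nn.GRU(input_size=40, hidden_size=64, batch_first=True)

# three utterances of different lengths (frames x features), assumed sizes
seqs = [torch.randn(n, 40) for n in (95, 120, 80)]
lengths = [s.shape[0] for s in seqs]

padded = pad_sequence(seqs, batch_first=True)   # (3, 120, 40)
outputs, _ = rnn(padded)                        # (3, 120, 64)

# truncate each output back to its original input length, then average
# only the valid frames into a fixed-size embedding
embeddings = torch.stack([outputs[i, :n].mean(dim=0)
                          for i, n in enumerate(lengths)])
```

Because the recurrence runs left to right, the right-padded frames cannot influence the outputs that are kept, so slicing them away removes the padding's effect without truncating any real input.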


Author(s): Tarik A. Rashid, Mohammad K. Hassan, Mokhtar Mohammadi, Kym Fraser

Recently, the world's population has increased, and health problems along with it. Diabetes mellitus, as an example, harms the health of many patients globally. The task of this chapter is to develop a dynamic and intelligent decision support system for patients with different diseases, and it aims at examining machine-learning techniques supported by optimization techniques. Artificial neural networks have been used in healthcare for several decades. Most research works use a multilayer perceptron (MLP) trained with the backpropagation (BP) learning algorithm for diabetes mellitus classification. Nonetheless, MLP has some drawbacks: convergence can be slow, local minima can affect the training process, it is hard to scale, and it cannot be used with time-series datasets. To overcome these drawbacks, long short-term memory (LSTM), a more advanced form of recurrent neural network, is suggested. In this chapter, an adaptable LSTM trained with two optimization algorithms instead of the backpropagation learning algorithm is presented. The optimization algorithms are biogeography-based optimization (BBO) and the genetic algorithm (GA). One dataset is collected locally, and a benchmark dataset is used as well. Finally, the datasets are fed into the adaptable models, LSTM with BBO (LSTMBBO) and LSTM with GA (LSTMGA), for classification. The experimental and testing results are compared and are promising. This system helps physicians and doctors provide proper treatment for patients with diabetes mellitus. Details of the source code and implementation of our system are available at https://github.com/hamakamal/LSTM.
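
To illustrate the gradient-free training idea, here is a hedged sketch of a plain genetic algorithm searching a flat weight vector to minimise a loss, standing in for the LSTMGA variant; for the authors' actual implementation, see the GitHub link above. `evaluate` is a placeholder for a forward pass of the LSTM on the diabetes data, and all operator choices below are assumptions.

```python
# Hypothetical sketch: GA over a flat weight vector; `evaluate` maps
# weights to a classification loss (lower is better).
import numpy as np

rng = np.random.default_rng(0)

def ga_optimize(evaluate, dim, pop=30, gens=100, mut=0.1, elite=2):
    """Minimise evaluate(weights) over flat weight vectors of length dim."""
    population = rng.normal(size=(pop, dim))
    for _ in range(gens):
        fitness = np.array([evaluate(ind) for ind in population])
        order = np.argsort(fitness)                 # lower loss first
        parents = population[order[:pop // 2]]
        children = []
        for _ in range(pop - elite):
            a, b = parents[rng.integers(len(parents), size=2)]
            mask = rng.random(dim) < 0.5            # uniform crossover
            child = np.where(mask, a, b)
            child = child + mut * rng.normal(size=dim)  # Gaussian mutation
            children.append(child)
        population = np.vstack([population[order[:elite]], children])
    fitness = np.array([evaluate(ind) for ind in population])
    return population[int(np.argmin(fitness))]

# toy usage: minimise a quadratic in place of a real LSTM loss
best = ga_optimize(lambda w: float(np.sum(w ** 2)), dim=10)
```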

