A Cross-Correlated Delay Shift Supervised Learning Method for Spiking Neurons with Application to Interictal Spike Detection in Epilepsy

2017 ◽  
Vol 27 (03) ◽  
pp. 1750002 ◽  
Author(s):  
Lilin Guo ◽  
Zhenzhong Wang ◽  
Mercedes Cabrerizo ◽  
Malek Adjouadi

This study introduces a novel learning algorithm for spiking neurons, called CCDS, which is able to learn and reproduce arbitrary spike patterns in a supervised fashion, allowing the processing of spatiotemporal information encoded in the precise timing of spikes. Unlike the Remote Supervised Method (ReSuMe), synaptic delays and axonal delays in CCDS are variable and are modulated together with the weights during learning. The CCDS rule is both biologically plausible and computationally efficient. The properties of this learning rule are investigated extensively through experimental evaluations in terms of reliability, adaptive learning performance, generality to different neuron models, learning in the presence of noise, effects of its learning parameters, and classification performance. The results show that the CCDS learning method achieves learning accuracy and learning speed comparable with ReSuMe, but improves classification accuracy when compared to both the Spike Pattern Association Neuron (SPAN) learning rule and the Tempotron learning rule. The merit of the CCDS rule is further validated on a practical example involving the automated detection of interictal spikes in EEG records of patients with epilepsy. The results again show that, with proper encoding, the CCDS rule achieves good recognition performance.
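The abstract does not give the CCDS update equations. Purely for orientation, the following is a minimal Python sketch of the general idea it describes, co-adapting synaptic weights and delays from the timing relation between input spikes and the desired versus actual output spikes; the kernel, learning rates, and the names `epsp_kernel` and `ccds_like_update` are illustrative assumptions, not the published rule.

```python
import numpy as np

def epsp_kernel(s, tau=5.0):
    """Hypothetical exponential kernel weighting a spike-time lag s (ms)."""
    return np.exp(-s / tau) if s >= 0 else 0.0

def ccds_like_update(weights, delays, input_spikes, desired, actual,
                     lr_w=0.01, lr_d=0.1, window=20.0):
    """One epoch of a ReSuMe-flavoured update in which synaptic delays are
    adapted together with weights (illustrative only, not the published CCDS rule)."""
    for i, spikes in enumerate(input_spikes):
        for t_in in spikes:
            t_arr = t_in + delays[i]                  # effective arrival time after delay
            for t_d in desired:                       # potentiate toward desired spikes
                lag = t_d - t_arr
                if 0.0 <= lag <= window:
                    weights[i] += lr_w * epsp_kernel(lag)
                    delays[i] += lr_d * np.sign(lag)  # shift arrival toward the desired time
            for t_a in actual:                        # depress contributions to erroneous spikes
                lag = t_a - t_arr
                if 0.0 <= lag <= window:
                    weights[i] -= lr_w * epsp_kernel(lag)
        delays[i] = np.clip(delays[i], 0.0, window)
    return weights, delays

# Toy usage: two synapses, one desired spike at 12 ms, one erroneous actual spike at 18 ms.
w, d = ccds_like_update(np.array([0.5, 0.5]), np.array([1.0, 2.0]),
                        [[5.0], [9.0]], desired=[12.0], actual=[18.0])
```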

2013 ◽  
Vol 2013 ◽  
pp. 1-13 ◽  
Author(s):  
Falah Y. H. Ahmed ◽  
Siti Mariyam Shamsuddin ◽  
Siti Zaiton Mohd Hashim

A spiking neural network encodes information in the timing of individual spikes. A novel supervised learning rule for SpikeProp is derived to overcome the discontinuities introduced by spike thresholding. The algorithm is based on an error-backpropagation learning rule suited for supervised learning of spiking neurons that use exact spike-time coding. SpikeProp demonstrates that spiking neurons can perform complex nonlinear classification with fast temporal coding. This study proposes enhancements of the SpikeProp learning algorithm for supervised training of spiking networks that can deal with complex patterns. The proposed methods are SpikeProp with particle swarm optimization (PSO) and an angle-driven dependency learning rate, applied to the SpikeProp network for multilayer learning enhancement and weight optimization. Input and output patterns are encoded as spike trains of precisely timed spikes, and the network learns to transform the input trains into target output trains. With these enhancements, the proposed methods outperform other conventional neural network architectures.
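The abstract leaves the PSO integration at a high level. The sketch below shows only the generic pattern of searching a SpikeProp weight vector with a particle swarm; the cost function `spike_time_error` is a stand-in for the spike-time error that a real SpikeProp forward pass would return, and all parameter values are assumptions.

```python
import numpy as np

def spike_time_error(weights):
    """Stand-in cost: in practice this would simulate the spiking network and return
    the squared error between actual and target output spike times."""
    target = np.linspace(-1.0, 1.0, weights.size)
    return float(np.sum((weights - target) ** 2))

def pso_optimize(dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1, 1, (n_particles, dim))     # candidate weight vectors
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_cost = np.array([spike_time_error(p) for p in pos])
    gbest = pbest[np.argmin(pbest_cost)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        cost = np.array([spike_time_error(p) for p in pos])
        improved = cost < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
        gbest = pbest[np.argmin(pbest_cost)].copy()
    return gbest, float(pbest_cost.min())

best_w, best_err = pso_optimize(dim=8)
```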


2013 ◽  
Vol 25 (6) ◽  
pp. 1472-1511 ◽  
Author(s):  
Yan Xu ◽  
Xiaoqin Zeng ◽  
Shuiming Zhong

The purpose of supervised learning with temporal encoding for spiking neurons is to make the neurons emit a specific spike train encoded by the precise firing times of spikes. If only the running time is considered, supervised learning for a spiking neuron is equivalent to distinguishing the desired output spike times from all other times during the run of the neuron by adjusting synaptic weights, which can be regarded as a classification problem. Based on this idea, this letter proposes a new supervised learning method for spiking neurons with temporal encoding; it first transforms the supervised learning task into a classification problem and then solves it using the perceptron learning rule. The experimental results show that the proposed method achieves higher learning accuracy and efficiency than existing learning methods, making it better suited to complex and real-time problems.
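As a rough illustration of the recasting described above, the sketch below samples the neuron's run time, labels the desired firing times as positive and all other sampled times as negative, and applies the perceptron rule to PSP-style features; the kernel shape, the sampling step, and the helper names are assumptions rather than the letter's exact construction.

```python
import numpy as np

def psp(s, tau=5.0):
    """Hypothetical postsynaptic-potential kernel evaluated at lag s (ms)."""
    return np.where(s > 0, np.exp(-s / tau), 0.0)

def features(t, input_spikes):
    """PSP contribution of each afferent at time t; this is the classification input."""
    return np.array([psp(t - np.asarray(sp)).sum() for sp in input_spikes])

def perceptron_train(input_spikes, desired_times, T=100.0, dt=1.0,
                     theta=1.0, lr=0.05, epochs=50):
    n = len(input_spikes)
    w = np.zeros(n)
    grid = np.arange(0.0, T, dt)
    # Positive class: sampled times close to a desired firing time; negative otherwise.
    labels = np.array([1 if any(abs(t - td) < dt / 2 for td in desired_times) else -1
                       for t in grid])
    for _ in range(epochs):
        for t, y in zip(grid, labels):
            x = features(t, input_spikes)
            fired = 1 if w @ x >= theta else -1
            if fired != y:                 # perceptron rule: update only on misclassification
                w += lr * y * x
    return w

w = perceptron_train(input_spikes=[[3.0, 40.0], [10.0, 55.0]], desired_times=[15.0, 60.0])
```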


2019 ◽  
Author(s):  
David Rotermund ◽  
Klaus R. Pawelzik

Neural networks are important building blocks in technical applications. These artificial neural networks (ANNs) rely on noiseless continuous signals, in stark contrast to the discrete action potentials stochastically exchanged among the neurons in real brains. A promising approach towards bridging this gap are Spike-by-Spike (SbS) networks, which represent a compromise between non-spiking and spiking versions of generative models that perform inference on their inputs. What is still missing are algorithms for finding weight sets that would optimize the output performance of deep SbS networks with many layers. Here, a learning rule for hierarchically organized SbS networks is derived. The properties of this approach are investigated and its functionality demonstrated by simulations. In particular, a deep convolutional SbS network for classifying handwritten digits (MNIST) is presented. When applied together with an optimizer, this learning method achieves a classification performance of roughly 99.3% on the MNIST test data, approaching the benchmark results of ANNs without extensive parameter optimization. We envision that, with this learning rule, SbS networks will provide a new basis for research in neuroscience and for technical applications, especially once they are implemented on specialized computational hardware.
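The inference and learning equations are not spelled out in the abstract. For orientation only, here is a minimal sketch of the kind of multiplicative, spike-by-spike update of latent variables that SbS generative models are commonly described with; the update form, the value of epsilon, and the toy weights are assumptions, and the deep-network learning rule derived in the paper is not reproduced.

```python
import numpy as np

def sbs_inference(spike_indices, W, n_latent, eps=0.1):
    """Update latent causes h after each observed input spike s_t (one common SbS-style
    formulation, assumed here; the paper's deep learning rule is not shown)."""
    h = np.full(n_latent, 1.0 / n_latent)          # start from a uniform latent estimate
    for s in spike_indices:                        # one multiplicative update per spike
        likelihood = W[s, :]                       # p(input channel s | latent cause i)
        h = (h + eps * h * likelihood / (h @ likelihood)) / (1.0 + eps)
    return h                                       # stays normalized to sum 1

# Toy usage: 4 input channels, 3 latent causes, random column-normalized weights.
rng = np.random.default_rng(0)
W = rng.random((4, 3))
W /= W.sum(axis=0, keepdims=True)
h = sbs_inference(spike_indices=[0, 2, 2, 1], W=W, n_latent=3)
```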


2014 ◽  
Vol 2014 ◽  
pp. 1-8 ◽  
Author(s):  
Shuiming Zhong ◽  
Yu Xue ◽  
Yunhao Jiang ◽  
Yuanfeng Jin ◽  
Jing Yang ◽  
...  

This paper proposes a new adaptive learning algorithm for Madalines based on a sensitivity measure that is established to investigate the effect of a Madaline weight adaptation on its output. Following MRII's basic idea of minimal disturbance, the algorithm introduces an adaptation selection rule based on the sensitivity measure to more accurately locate the weights in real need of adaptation. Experimental results on benchmark data demonstrate that the proposed algorithm has much better learning performance than the MRII and BP algorithms.
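The sensitivity measure itself is not given in the abstract. The sketch below only illustrates the shared minimal-disturbance skeleton of MRII-style training, using the magnitude of an Adaline's net input as a stand-in sensitivity and accepting a trial weight change only when it corrects the output; it is not the proposed algorithm.

```python
import numpy as np

def madaline_out(W, x):
    """Two-layer Madaline with a fixed majority-vote output unit."""
    hidden = np.sign(W @ x)
    return np.sign(hidden.sum() + 1e-9)

def mrii_like_step(W, x, target, lr=0.1):
    """One minimal-disturbance step: try reversing the Adaline whose net input is
    smallest in magnitude (stand-in 'sensitivity'); keep the change only if it
    corrects the output."""
    if madaline_out(W, x) == target:
        return W
    nets = W @ x
    for j in np.argsort(np.abs(nets)):            # least confident Adaline first
        trial = W.copy()
        desired_hidden = -np.sign(nets[j])        # reverse this Adaline's decision
        trial[j] += lr * (desired_hidden - nets[j]) * x
        if madaline_out(trial, x) == target:
            return trial                          # accept the least disturbing fix
    return W

rng = np.random.default_rng(1)
W = mrii_like_step(rng.normal(size=(3, 4)), x=rng.normal(size=4), target=1.0)
```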


Author(s):  
YIHAO ZHANG ◽  
JUNHAO WEN ◽  
FANGFANG TANG ◽  
ZHUO JIANG

Existing representative works on semi-supervised incremental learning prefer to select unlabeled instances predicted with high confidence for model retraining. However, this strategy may degrade rather than improve classification performance, because relying on high confidence for data selection can lead to an erroneous estimate of the true distribution, especially when the confidence annotator is highly correlated with the classifier being retrained. In this paper, a new semi-supervised incremental learning algorithm is proposed that selects high-confidence unlabeled instances with a symmetrical distribution from the unlabeled data, which reduces the bias of the estimate to some degree. In detail, the expectation-maximization algorithm is used to estimate the confidence of each instance, a Gaussian function is used to model the data distribution, and the selected unlabeled data are then used to retrain the model with the classifier algorithm. Experimental results on a large number of UCI data sets show that the algorithm can effectively exploit unlabeled data to enhance learning performance.
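The exact confidence annotator and selection rule are the paper's contribution and are not detailed here. As a rough sketch of the described pipeline (probabilistic confidence estimation, a Gaussian model of the confidence scores, symmetric selection around its center, retraining), the following uses a naive-Bayes stand-in for the EM-based annotator and a hypothetical selection band.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def incremental_round(X_lab, y_lab, X_unlab, band=1.0):
    """One round: estimate per-instance confidence with a probabilistic classifier,
    fit a Gaussian to the confidence scores, keep unlabeled instances falling
    symmetrically around its mean, then retrain on the augmented set."""
    clf = GaussianNB().fit(X_lab, y_lab)
    proba = clf.predict_proba(X_unlab)
    conf = proba.max(axis=1)                       # confidence of the predicted label
    pseudo = clf.classes_[proba.argmax(axis=1)]
    mu, sigma = conf.mean(), conf.std() + 1e-12    # Gaussian model of the confidences
    keep = np.abs(conf - mu) <= band * sigma       # symmetric band instead of a top-k cut
    X_new = np.vstack([X_lab, X_unlab[keep]])
    y_new = np.concatenate([y_lab, pseudo[keep]])
    return GaussianNB().fit(X_new, y_new)

rng = np.random.default_rng(0)
X_lab = rng.normal(size=(20, 3))
y_lab = (X_lab[:, 0] > 0).astype(int)
model = incremental_round(X_lab, y_lab, X_unlab=rng.normal(size=(100, 3)))
```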


2016 ◽  
Vol 40 (2) ◽  
pp. 363-374 ◽  
Author(s):  
Ye Tao ◽  
Duzhou Zhang ◽  
Shengjun Cheng ◽  
Xianglong Tang

Semi-supervised learning aims to utilize both labelled and unlabelled data to improve learning performance. This paper presents a distinct way to exploit unlabelled data for traditional semi-supervised learning methods such as self-training. Self-training is a well-known semi-supervised learning algorithm that iteratively trains a classifier by bootstrapping from unlabelled data. Standard self-training merely selects unlabelled examples for training set augmentation according to the current classifier model, which is trained only on the labelled data. This can be problematic when the underlying classifier is not strong enough, especially when the initial labelled data is sparse; self-training then suffers from classification noise accumulating in the training set. In this paper, we propose a novel self-training style algorithm that exploits a manifold assumption to optimize the self-labelling process. Unlike standard self-training, our algorithm utilizes labelled and unlabelled data as a whole to label and select unlabelled examples for training set augmentation. In detail, two measures are employed to minimize the effect of noise introduced into the labelled training set: first, a transductive method based on a controlled graph random walk is incorporated to generate reliable predictions on unlabelled data; second, a selection mechanism is adopted to sequentially augment the training set. Empirical results suggest that the proposed method effectively improves classification performance.
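The controlled-random-walk construction is the paper's own. The sketch below shows only the generic shape of the proposal: label propagation over a similarity graph built from labelled and unlabelled data together, followed by selecting the most confidently labelled unlabelled points for augmentation; the kernel width, propagation operator, and selection size are assumptions.

```python
import numpy as np

def graph_self_training_round(X_lab, y_lab, X_unlab, n_classes,
                              sigma=1.0, alpha=0.9, iters=50, k=10):
    """One self-labelling round: propagate labels over an RBF similarity graph
    (a stand-in for the paper's controlled random walk), then return the k unlabelled
    points with the most confident propagated labels and their pseudo-labels."""
    X = np.vstack([X_lab, X_unlab])
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    P = W / W.sum(axis=1, keepdims=True)           # random-walk transition matrix
    Y = np.zeros((len(X), n_classes))
    Y[np.arange(len(y_lab)), y_lab] = 1.0          # clamp the labelled seeds
    F = Y.copy()
    for _ in range(iters):                         # iterative label spreading
        F = alpha * P @ F + (1 - alpha) * Y
    F_unlab = F[len(X_lab):]
    conf = F_unlab.max(axis=1) / (F_unlab.sum(axis=1) + 1e-12)
    chosen = np.argsort(-conf)[:k]                 # most reliable unlabelled examples
    return chosen, F_unlab[chosen].argmax(axis=1)

rng = np.random.default_rng(3)
idx, pseudo = graph_self_training_round(rng.normal(size=(6, 2)),
                                        np.array([0, 0, 0, 1, 1, 1]),
                                        rng.normal(size=(30, 2)), n_classes=2, k=5)
```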


2021 ◽  
Author(s):  
Thomas Wilschut ◽  
Florian Sense ◽  
Maarten van der Velde ◽  
Zafeirios Fountas ◽  
Sarah Maass ◽  
...  

Memorising vocabulary is an important aspect of formal foreign-language learning. Advances in cognitive psychology have led to the development of adaptive learning systems that make vocabulary learning more efficient. One way these computer-based systems optimize learning is by measuring learning performance in real time to create optimal repetition schedules for individual learners. While such adaptive learning systems have been successfully applied to word learning using keyboard-based input, they have thus far seen little application in spoken word learning. Here we present a system for adaptive, speech-based word learning using an adaptive model that was developed for and tested with typing-based word learning. We show that typing-based and speech-based learning result in similar behavioral patterns that can be used to reliably estimate individual memory processes, and we extend earlier findings demonstrating that a response-time-based adaptive learning system outperforms an accuracy-based Leitner flashcard algorithm. In short, we show that the benefits of adaptive learning transfer from typing-based to speech-based learning. Our work provides a basis for the development of language-learning applications that use real-time pronunciation-assessment software to score the accuracy of the learner's pronunciations. The development of adaptive, speech-based learning applications is important for two reasons. First, by focusing on speech, the model can be applied to individuals whose typing skills are insufficient, as demonstrated by the successful application of the model in an elderly participant population. Second, speech-based learning models are educationally relevant because they focus on what may be the most important aspect of language learning: practising speech.
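The response-time-based adaptive model itself is not reproduced here. For comparison purposes only, the sketch below implements the kind of accuracy-based Leitner baseline the abstract refers to, with box intervals and item names chosen arbitrarily.

```python
from collections import defaultdict

# Hypothetical review intervals (in sessions) for each Leitner box.
BOX_INTERVALS = {0: 1, 1: 2, 2: 4, 3: 8, 4: 16}

class LeitnerScheduler:
    """Accuracy-based baseline: a correct answer promotes an item to a slower box,
    an error demotes it back to box 0 (reviewed every session)."""
    def __init__(self, items):
        self.box = defaultdict(int, {item: 0 for item in items})
        self.session = 0

    def due_items(self):
        return [it for it, b in self.box.items()
                if self.session % BOX_INTERVALS[b] == 0]

    def record(self, item, correct):
        self.box[item] = min(self.box[item] + 1, 4) if correct else 0

    def next_session(self):
        self.session += 1

# Toy usage with three (hypothetical) vocabulary items.
sched = LeitnerScheduler(["hond", "kat", "vogel"])
for word in sched.due_items():
    sched.record(word, correct=(word != "vogel"))
sched.next_session()
```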


Author(s):  
Rong Xiao ◽  
Qiang Yu ◽  
Rui Yan ◽  
Huajin Tang

The formulation of efficient supervised learning algorithms for spiking neurons is complicated and remains challenging. Most existing learning methods based on the precise firing times of spikes suffer from relatively low efficiency and poor robustness to noise. To address these limitations, we propose a simple and effective multi-spike learning rule that trains neurons to match their output spike number with a desired one. The proposed method quickly finds a local maximum of the neuron's membrane potential trace (directly related to the embedded feature) as the relevant signal for synaptic updates, and constructs an error function defined as the difference between this local maximum membrane potential and the firing threshold. With the presented rule, a single neuron can be trained to learn multi-category tasks, mitigate the impact of input noise, and discover embedded features. Experimental results show that the proposed algorithm achieves higher precision, lower computation cost, and better noise robustness than current state-of-the-art learning methods across a wide range of learning tasks.
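The rule's exact gradient is not given in the abstract. The sketch below illustrates the stated idea: simulate the membrane potential, locate a maximum of the trace, and nudge the weights in proportion to each afferent's PSP at that time so the peak moves toward or away from the firing threshold depending on whether more or fewer output spikes are desired; the kernel, neuron model, and step sizes are assumptions.

```python
import numpy as np

def psp_trace(spikes, T, dt=0.1, tau=5.0):
    """Hypothetical exponential PSP trace of one afferent over a time grid."""
    t = np.arange(0.0, T, dt)
    v = np.zeros_like(t)
    for s in spikes:
        v += np.where(t > s, np.exp(-(t - s) / tau), 0.0)
    return v

def multi_spike_step(weights, input_spikes, T, theta=1.0,
                     desired_count=1, actual_count=0, lr=0.05):
    """Single illustrative update on a non-resetting membrane trace: push its peak
    toward the threshold if more spikes are desired, away from it if fewer."""
    traces = np.array([psp_trace(sp, T) for sp in input_spikes])
    v = weights @ traces                          # membrane potential over time
    t_peak = int(np.argmax(v))                    # peak used as the relevant signal
    err = np.sign(desired_count - actual_count)   # +1: need more spikes, -1: fewer
    weights += lr * err * traces[:, t_peak]       # gradient of v(t_peak) w.r.t. weights
    return weights, theta - v[t_peak]             # residual distance to threshold

w = np.array([0.2, 0.3])
w, gap = multi_spike_step(w, input_spikes=[[3.0, 7.0], [5.0]], T=20.0)
```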


Energies ◽  
2021 ◽  
Vol 14 (11) ◽  
pp. 3267 ◽  
Author(s):  
Ramon C. F. Araújo ◽  
Rodrigo M. S. de Oliveira ◽  
Fernando S. Brasil ◽  
Fabrício J. B. Barros

In this paper, a novel image denoising algorithm and novel input features are proposed. The algorithm is applied to phase-resolved partial discharge (PRPD) diagrams with a single dominant partial discharge (PD) source, preparing them for automatic artificial-intelligence-based classification. It was designed to mitigate several sources of distortion often observed in PRPDs obtained from fully operational hydroelectric generators. The capabilities of the denoising algorithm are the automatic removal of sparse noise and the suppression of non-dominant discharges, including those due to crosstalk. The input features are functions of PD distributions along amplitude and phase, calculated in a novel way to mitigate random effects inherent to PD measurements. The impact of the proposed contributions was statistically evaluated and compared to the classification performance obtained using previously published approaches. Higher recognition rates and reduced variances were obtained using the proposed methods, statistically outperforming autonomous classification techniques seen in earlier works. The values of the algorithm's internal parameters are also validated by comparing the recognition performance obtained with different parameter combinations. All typical PD sources described in hydro-generator PD standards are considered and can be automatically detected.
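The denoising algorithm and the exact feature definitions are the paper's contribution and are not reproduced here. The sketch below only shows the generic starting point: marginal distributions of a PRPD count matrix along phase and amplitude, lightly smoothed to reduce the run-to-run randomness the authors mention; the bin layout and smoothing window are assumptions.

```python
import numpy as np

def prpd_marginals(prpd, smooth=5):
    """Phase and amplitude marginal distributions of a PRPD count matrix
    (rows: amplitude bins, columns: phase bins), box-smoothed and normalized."""
    phase_dist = prpd.sum(axis=0).astype(float)     # counts per phase bin
    amp_dist = prpd.sum(axis=1).astype(float)       # counts per amplitude bin
    kernel = np.ones(smooth) / smooth               # simple moving-average smoothing
    phase_dist = np.convolve(phase_dist, kernel, mode="same")
    amp_dist = np.convolve(amp_dist, kernel, mode="same")
    phase_dist /= phase_dist.sum() + 1e-12
    amp_dist /= amp_dist.sum() + 1e-12
    return np.concatenate([phase_dist, amp_dist])   # feature vector for a classifier

# Toy usage: a synthetic 64 x 360 PRPD count matrix.
rng = np.random.default_rng(2)
features = prpd_marginals(rng.poisson(1.0, size=(64, 360)))
```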

