An Improving Data Stream Classification Algorithm Based on BP Neural Network

Author(s): Baoju Zhang, Guilin Wang, Lei Xue
2013, Vol. 441, pp. 717-720
Author(s): Zhi Bo Ren, Chun Miao Yan, Yu Zhou Wei, Lei Sun

Given the high arrival speed of data, the large data volume, and concept drift in the stream model, we combine rough set theory, neural networks, and a voting rule to propose a new data stream classification model: a multi-classifier ensemble based on rough set theory and neural networks. First, all attributes are reduced using rough set theory; second, base classifiers are constructed on the attribute-reduced data chunks using an improved BP neural network; finally, the base classifiers are fused into an ensemble by a voting rule. Experimental results from applying the model to data stream classification show that the ensemble method is feasible and effective.
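As a rough illustration (not the authors' implementation), the sketch below approximates the chunk-based ensemble in Python: a mutual-information feature selector stands in for rough-set attribute reduction, a scikit-learn MLPClassifier (a BP network) is trained per data chunk, and the base classifiers are fused by majority voting. The synthetic data, chunk size, and network parameters are all assumptions.

```python
# Minimal sketch (not the paper's code) of the chunk-based voting ensemble.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=3000, n_features=20, n_informative=8,
                           random_state=0)

# Stand-in for rough-set attribute reduction: keep the k most informative attributes.
selector = SelectKBest(mutual_info_classif, k=8).fit(X, y)
X_red = selector.transform(X)

# Split the stream into fixed-size chunks and train one BP base classifier per chunk.
chunk_size = 500
chunks = [(X_red[i:i + chunk_size], y[i:i + chunk_size])
          for i in range(0, len(y) - chunk_size, chunk_size)]
ensemble = [MLPClassifier(hidden_layer_sizes=(16,), max_iter=300,
                          random_state=0).fit(Xc, yc)
            for Xc, yc in chunks]

# Fuse the base classifiers by simple majority voting on the newest chunk.
X_test, y_test = X_red[-chunk_size:], y[-chunk_size:]
votes = np.stack([clf.predict(X_test) for clf in ensemble])
y_pred = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
print("voting-ensemble accuracy:", (y_pred == y_test).mean())
```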


2021, pp. 272-280
Author(s): Fernando Puentes-Marchal, María Dolores Pérez-Godoy, Pedro González, María José Del Jesus

2020, Vol. 34 (04), pp. 3717-3724
Author(s): Monidipa Das, Mahardhika Pratama, Jie Zhang, Yew Soon Ong

Stream classification models for non-stationary environments often assume that data labels are immediately available. In practice, however, labels often arrive only after some temporal lag. This paper explores how a stream classifier can be made adaptive to such a label-latency scenario. We propose SkipE-RNN, a self-evolutionary recurrent neural network with a dynamically evolving skipped recurrent connection that makes the best use of previously observed label information while classifying the current data. When a data label is unavailable, SkipE-RNN uses an auto-learned mapping function to find the best match among the already known data labels and updates the network parameters accordingly. Later, when the true label becomes available, if the previously mapped label turns out to be incorrect, SkipE-RNN employs a regularization technique in the parameter update to penalize the model. In addition, SkipE-RNN can self-adjust the network capacity by growing or pruning hidden nodes to cope with the evolving nature of the data stream. Rigorous empirical evaluations on synthetic as well as real-world datasets demonstrate the effectiveness of SkipE-RNN in both finitely delayed and infinitely delayed label scenarios.
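The following minimal sketch (not the authors' SkipE-RNN) illustrates only the delayed-label update scheme described above: a plain logistic model in NumPy stands in for the recurrent network, and a nearest-neighbour lookup over already-labelled samples stands in for the auto-learned mapping function. The delay length, penalty factor, and synthetic concept are assumptions made for the example.

```python
# Sketch of learning under label latency: pseudo-label now, correct later.
import numpy as np

rng = np.random.default_rng(1)
d, lr, penalty, delay = 5, 0.1, 0.5, 10
w = np.zeros(d)
memory_X, memory_y = [], []   # samples whose true labels have already arrived
pending = {}                  # t -> (x, pseudo_label) awaiting the true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def update(x, label, scale=1.0):
    """One gradient step on the logistic loss, optionally scaled as a penalty."""
    global w
    w -= scale * lr * (sigmoid(w @ x) - label) * x

for t in range(1000):
    x = rng.normal(size=d)

    # Until the true label arrives, map the sample to the closest previously
    # labelled example and learn from that pseudo-label (skip if none known yet).
    pseudo_y = None
    if memory_X:
        nearest = int(np.argmin([np.linalg.norm(x - m) for m in memory_X]))
        pseudo_y = memory_y[nearest]
        update(x, pseudo_y)
    pending[t] = (x, pseudo_y)

    # Simulate the true label becoming available after a fixed temporal lag.
    if t - delay in pending:
        x_old, pseudo_old = pending.pop(t - delay)
        y_old = int(x_old[0] + 0.3 * x_old[1] > 0)   # delayed ground truth (synthetic concept)
        if pseudo_old is not None and pseudo_old != y_old:
            # Penalised corrective update when the earlier pseudo-label was wrong.
            update(x_old, y_old, scale=1.0 + penalty)
        memory_X.append(x_old)
        memory_y.append(y_old)

print("learned weights:", np.round(w, 2))
```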

