A Hybrid Short-term Traffic Flow Forecasting Method Based on Neural Networks Combined with K-Nearest Neighbor

2018 ◽  
Vol 30 (4) ◽  
pp. 445-456 ◽  
Author(s):  
Zhao Liu ◽  
Jianhua Guo ◽  
Jinde Cao ◽  
Yun Wei ◽  
Wei Huang

It is critical to implement accurate short-term traffic forecasting in traffic management and control applications. This paper proposes a hybrid forecasting method based on neural networks combined with the K-nearest neighbor (K-NN) method for short-term traffic flow forecasting. The procedure of training a neural network model using existing traffic input-output data, i.e., training data, is indispensable for fine-tuning the prediction model. Based on this point, the K-NN method was employed to reconstruct the training data for the neural network models while considering the similarity of traffic flow patterns. This was done by collecting, from the historical database, the state vectors closest to the current state vectors, in order to strengthen the relationship between the inputs and outputs of the neural network models. In this study, we selected four different neural network models, i.e., the back-propagation (BP) neural network, radial basis function (RBF) neural network, generalized regression (GR) neural network, and Elman neural network, all of which have been widely applied to short-term traffic forecasting. Using real-world traffic data, the experimental results first show that the BP and GR neural networks combined with the K-NN method have better prediction performance, and that both are sensitive to the size of the training data. Second, the forecast accuracies of the RBF and Elman neural networks combined with the K-NN method both remain fairly stable as the size of the training data increases. In summary, the proposed hybrid forecasting approach outperforms the conventional forecasting models, facilitating the implementation of short-term traffic forecasting in traffic management and control applications.
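The training-data reconstruction step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the state-vector layout (three consecutive flow readings), the Euclidean distance, and all numbers are assumptions for demonstration.

```python
import numpy as np

def knn_select_training_data(history_X, history_y, current_state, k):
    """Pick the k historical state vectors closest (Euclidean distance) to
    the current state, returning them as a reconstructed training set for
    the downstream neural network model."""
    dists = np.linalg.norm(history_X - current_state, axis=1)
    idx = np.argsort(dists)[:k]
    return history_X[idx], history_y[idx]

# Hypothetical state vectors of three consecutive flow readings (veh/h),
# with the next-interval flow as the target.
rng = np.random.default_rng(0)
history_X = rng.uniform(100, 900, size=(500, 3))
history_y = history_X[:, -1] + rng.normal(0, 10, size=500)

current = np.array([420.0, 435.0, 450.0])
X_train, y_train = knn_select_training_data(history_X, history_y, current, k=50)
print(X_train.shape)  # (50, 3)
```

The reconstructed `(X_train, y_train)` set would then be fed to whichever neural network model (BP, RBF, GR, or Elman) is being trained.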

10.14311/1121 ◽  
2009 ◽  
Vol 49 (2) ◽  
Author(s):  
M. Chvalina

This article analyses the existing possibilities for using standard statistical methods and artificial intelligence methods for short-term forecasting and simulation of demand in the field of telecommunications. The most widespread methods are based on time series analysis. Nowadays, approaches based on artificial intelligence methods, including neural networks, are booming. Separate approaches will be applied in the study of demand modelling in telecommunications, and the results of these models will be compared with actual guaranteed values. We will then examine the quality of the neural network models.


Author(s):  
Makhamisa Senekane ◽  
Mhlambululi Mafu ◽  
Molibeli Benedict Taele

Weather variations play a significant role in people's short-term, medium-term, and long-term planning. An understanding of weather patterns has therefore become very important in decision making. Short-term weather forecasting (nowcasting) is the prediction of weather over a short period of time, typically a few hours. Different techniques have been proposed for short-term weather forecasting. Traditional nowcasting techniques are highly parametric and hence complex. Recently, there has been a shift towards artificial intelligence techniques for weather nowcasting, including machine learning techniques such as artificial neural networks. In this chapter, we report the use of deep learning techniques for weather nowcasting, tested on meteorological data. Three deep learning techniques were used in this work: the multilayer perceptron, Elman recurrent neural networks, and Jordan recurrent neural networks. The multilayer perceptron models achieved accuracies of 91% and 75% for sunshine and precipitation forecasting, respectively; the Elman recurrent neural network models achieved accuracies of 96% and 97%, respectively; and the Jordan recurrent neural network models achieved accuracies of 97% and 97%, respectively. The results obtained underline the utility of deep learning for weather nowcasting.
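A nowcasting classifier of the kind the chapter evaluates can be sketched with scikit-learn. This is a toy stand-in, not the chapter's experiment: the feature set, the synthetic "precipitation" label, and all sizes are assumptions, and a feed-forward `MLPClassifier` stands in for the multilayer perceptron model.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for meteorological data: four hypothetical features
# (e.g. temperature, humidity, pressure, wind) and a binary rain label.
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 4))
y = (X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)  # toy "precipitation" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
mlp.fit(X_tr, y_tr)
acc = mlp.score(X_te, y_te)
print(f"test accuracy: {acc:.2f}")
```

The Elman and Jordan variants differ in that they feed hidden-state or output-state context back into the network at each time step, which a plain feed-forward model like this one omits.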


2020 ◽  
Vol 12 (1) ◽  
pp. 813-820
Author(s):  
Guangyuan Kan ◽  
Ke Liang ◽  
Haijun Yu ◽  
Bowen Sun ◽  
Liuqian Ding ◽  
...  

Machine learning-based data-driven models have achieved great success since their invention. Nowadays, artificial neural network (ANN)-based machine learning methods have made greater progress than ever before, with advances such as deep learning and reinforcement learning. In this study, we coupled an ANN with the K-nearest neighbor method to propose a novel hybrid machine learning (HML) hydrological model for flood forecasting. The advantage of the proposed model over traditional neural network models is that it can predict discharge continuously without loss of accuracy, owing to its specially designed model structure. To overcome the local-minimum issue of traditional neural network training, a genetic algorithm and Levenberg-Marquardt-based multi-objective training method was also proposed. Real-world applications of the HML hydrological model indicated its satisfactory performance and reliable stability, pointing to the possibility of further applications of the HML hydrological model to flood forecasting problems.
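The coupling of K-NN retrieval with an ANN discharge predictor can be sketched as below. This is a loose illustration under assumed inputs, not the paper's model: the rainfall-runoff records are synthetic, a plain `MLPRegressor` stands in for the specially designed network, and the genetic-algorithm/Levenberg-Marquardt training scheme is not reproduced.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.neural_network import MLPRegressor

# Toy rainfall-runoff records: predict next-step discharge from
# [rain(t-1), rain(t), discharge(t-1)].
rng = np.random.default_rng(1)
rain = rng.gamma(2.0, 2.0, size=600)
discharge = np.convolve(rain, [0.5, 0.3, 0.2])[:600]  # crude routing
X = np.column_stack([rain[:-1], rain[1:], discharge[:-1]])
y = discharge[1:]

# Retrieve hydrologically similar historical states for the current query...
knn = NearestNeighbors(n_neighbors=100).fit(X[:500])
query = X[500]
_, idx = knn.kneighbors(query.reshape(1, -1))

# ...and fit the ANN on that similarity-selected subset only.
ann = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
ann.fit(X[:500][idx[0]], y[:500][idx[0]])
pred = ann.predict(query.reshape(1, -1))[0]
```

Restricting the training set to similar states is what lets a hybrid of this shape keep predicting as conditions drift, at the cost of refitting per query.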


F1000Research ◽  
2020 ◽  
Vol 9 ◽  
pp. 618
Author(s):  
Paola A. Sanchez-Sanchez ◽  
José Rafael García-González ◽  
Juan Manuel Rúa Ascar

Background: Previous studies of migraine classification have focused on the analysis of brain waves, leading to the development of complex tests that are not accessible to the majority of the population. In the early stages of this pathology, patients tend to go to the emergency services or outpatient department, where timely identification largely depends on the expertise of the physician and continuous monitoring of the patient. However, owing to the lack of time to make a proper diagnosis or the inexperience of the physician, migraines are often misdiagnosed, either because they are wrongly classified or because the disease severity is underestimated or disparaged. Both cases can lead to inappropriate, unnecessary, or imprecise therapies, which can result in damage to patients' health. Methods: This study focuses on designing and testing an early classification system capable of distinguishing between seven types of migraines based on the patient's symptoms. The methodology proposed comprises four steps: data collection based on symptoms and diagnosis by the treating physician, selection of the most relevant variables, use of artificial neural network models for automatic classification, and selection of the best model based on the accuracy and precision of the diagnosis. Results: The neural network models used provide excellent classification performance, with accuracy and precision levels >97%, exceeding the classifications made using other models, such as logistic regression, support vector machines, nearest neighbor, and decision trees. Conclusions: The implementation of migraine classification through neural networks is a powerful tool that reduces the time to obtain accurate, reliable, and timely clinical diagnoses.
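The model-comparison step can be sketched with scikit-learn. This is a schematic stand-in, not the study's pipeline: the symptom data are replaced by a synthetic seven-class dataset, and the hyperparameters are arbitrary defaults.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the symptom data: 7 migraine classes.
X, y = make_classification(n_samples=700, n_features=12, n_informative=8,
                           n_classes=7, n_clusters_per_class=1, random_state=0)

models = {
    "neural network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                                    random_state=0),
    "logistic regression": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "k-nearest neighbor": KNeighborsClassifier(),
    "decision tree": DecisionTreeClassifier(random_state=0),
}
scores = {name: cross_val_score(m, X, y, cv=5).mean()
          for name, m in models.items()}
for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name:20s} {s:.3f}")
```

Cross-validated comparison of this shape is the standard way to support a "model A exceeds model B" claim on a single dataset.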


Author(s):  
Hyun-il Lim

A neural network is a machine learning approach in which the connected nodes of a model are trained to predict the results of specific problems. The prediction model is trained using previously collected training data. In training neural network models, overfitting can arise from excessive dependence on the training data and from structural problems of the models. In this paper, we analyze the effect of DropConnect for controlling overfitting in neural networks. The effect is analyzed according to the DropConnect rate and the number of nodes in the design of the neural network. The results of this analysis help in understanding the effect of DropConnect in neural networks. To design an effective neural network model, DropConnect can be applied with appropriate parameters based on an understanding of its effect in neural network models.
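DropConnect itself is simple to state: where Dropout zeroes whole unit activations, DropConnect zeroes individual weights at random during training. A minimal numpy sketch of one dense layer (the layer sizes and ReLU activation are illustrative choices, not from the paper):

```python
import numpy as np

def dropconnect_forward(x, W, b, drop_rate, rng, training=True):
    """Dense layer forward pass with DropConnect: individual weights
    (not whole units, as in Dropout) are zeroed with probability
    drop_rate during training."""
    if training:
        mask = rng.random(W.shape) >= drop_rate
        W = W * mask / (1.0 - drop_rate)  # inverted scaling keeps E[xW] unchanged
    return np.maximum(x @ W + b, 0.0)     # ReLU activation

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
W = rng.normal(size=(8, 16))
b = np.zeros(16)
out_train = dropconnect_forward(x, W, b, drop_rate=0.5, rng=rng)
out_eval = dropconnect_forward(x, W, b, drop_rate=0.5, rng=rng, training=False)
```

The `drop_rate` here is the hyperparameter whose interaction with the number of nodes the paper analyzes.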


Author(s):  
Yu He ◽  
Jianxin Li ◽  
Yangqiu Song ◽  
Mutian He ◽  
Hao Peng

Traditional text classification algorithms are based on the assumption that data are independent and identically distributed. However, in most non-stationary scenarios, data may change smoothly due to long-term evolution and short-term fluctuation, which raises new challenges to traditional methods. In this paper, we present the first attempt to explore evolutionary neural network models for time-evolving text classification. We first introduce a simple way to extend arbitrary neural networks to evolutionary learning by using a temporal smoothness framework, and then propose a diachronic propagation framework to incorporate the historical impact into currently learned features through diachronic connections. Experiments on real-world news data demonstrate that our approaches greatly and consistently outperform traditional neural network models in both accuracy and stability.
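The temporal smoothness idea can be made concrete as a regularized loss: the model at time step t is penalized for drifting far from the parameters learned at t-1. This is a hedged one-line sketch of the framework's core term, with λ and all parameter values invented for illustration.

```python
import numpy as np

def temporal_smoothness_loss(task_loss, W_t, W_prev, lam=0.1):
    """Total loss at time step t: the task loss plus a penalty tying the
    current parameters to those learned at the previous time step, so the
    model evolves smoothly with the (non-stationary) data distribution."""
    return task_loss + lam * np.sum((W_t - W_prev) ** 2)

W_prev = np.zeros((3, 3))
W_t = np.full((3, 3), 0.2)
total = temporal_smoothness_loss(task_loss=1.0, W_t=W_t, W_prev=W_prev, lam=0.1)
# penalty = 0.1 * 9 * 0.2**2 = 0.036, so total = 1.036
```

The diachronic propagation framework goes further by feeding historical features forward through dedicated connections rather than only constraining the parameters.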


Author(s):  
Tianle Ma ◽  
Aidong Zhang

While deep learning has achieved great success in computer vision and many other fields, currently it does not work very well on patient genomic data with the "big p, small N" problem (i.e., a relatively small number of samples with high-dimensional features). In order to make deep learning work with a small amount of training data, we have to design new models that facilitate few-shot learning. Here we present the Affinity Network Model (AffinityNet), a data-efficient deep learning model that can learn from a limited number of training examples and generalize well. The backbone of the AffinityNet model consists of stacked k-nearest-neighbor (kNN) attention pooling layers. The kNN attention pooling layer is a generalization of the Graph Attention Model (GAM), and can be applied not only to graphs but also to any set of objects, regardless of whether a graph is given. As a new deep learning module, kNN attention pooling layers can be plugged into any neural network model, just like convolutional layers. As a simple special case of the kNN attention pooling layer, the feature attention layer can directly select important features that are useful for classification tasks. Experiments on both synthetic data and cancer genomic data from TCGA projects show that our AffinityNet model has better generalization power than conventional neural network models with little training data.
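The kNN attention pooling operation can be sketched in numpy: each object attends over its k nearest neighbors and pools their features with softmax attention weights. This is a simplified reading under assumptions (Euclidean neighbors, raw dot-product affinities, no learned projections), not the AffinityNet implementation.

```python
import numpy as np

def knn_attention_pool(features, k):
    """For each object, find its k nearest neighbors (plus itself), compute
    dot-product affinities, and pool neighbor features with softmax
    attention weights into a new representation."""
    n = features.shape[0]
    dists = np.linalg.norm(features[:, None] - features[None, :], axis=-1)
    pooled = np.empty_like(features)
    for i in range(n):
        nbr = np.argsort(dists[i])[:k + 1]    # includes the object itself
        scores = features[nbr] @ features[i]  # dot-product affinity
        w = np.exp(scores - scores.max())
        w /= w.sum()                          # softmax attention weights
        pooled[i] = w @ features[nbr]
    return pooled

rng = np.random.default_rng(0)
feats = rng.normal(size=(10, 5))
out = knn_attention_pool(feats, k=3)
```

Because the neighborhood is recomputed from the feature set itself, the layer needs no pre-specified graph, which is what lets it apply to arbitrary sets of objects.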


Mathematics ◽  
2021 ◽  
Vol 9 (8) ◽  
pp. 830
Author(s):  
Seokho Kang

The k-nearest neighbor (kNN) algorithm is widely used for supervised learning tasks. In practice, the main challenge when using kNN is its high sensitivity to its hyperparameter setting, including the number of nearest neighbors k, the distance function, and the weighting function. To improve robustness to these hyperparameters, this study presents a novel kNN learning method based on a graph neural network, named kNNGNN. Given training data, the method learns a task-specific kNN rule in an end-to-end fashion by means of a graph neural network that takes the kNN graph of an instance and predicts the label of the instance. The distance and weighting functions are implicitly embedded within the graph neural network. For a query instance, the prediction is obtained by performing a kNN search over the training data to create a kNN graph and passing it through the graph neural network. The effectiveness of the proposed method is demonstrated using various benchmark datasets for classification and regression tasks.
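The inference pipeline (kNN search, graph construction, label prediction) can be sketched as follows. This is a minimal stand-in, not kNNGNN itself: a fixed exponential distance weighting replaces the learned graph neural network, which is exactly the component kNNGNN learns instead of hand-specifying.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_graph_predict(X_train, y_train, query, k):
    """Build the query's kNN neighborhood over the training data, then
    aggregate neighbor labels with distance-based weights. In kNNGNN this
    fixed aggregation is replaced by a learned graph neural network, so
    the distance and weighting functions need not be hand-specified."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_train)
    dist, idx = nn.kneighbors(query.reshape(1, -1))
    w = np.exp(-dist[0])          # fixed weighting; kNNGNN learns this part
    w /= w.sum()
    return w @ y_train[idx[0]]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = X @ np.array([1.0, -2.0, 0.5, 0.0])   # toy regression target
pred = knn_graph_predict(X, y, X[0], k=10)
```

Learning the aggregation end-to-end is what removes the usual sensitivity to the choice of k, distance metric, and weighting scheme.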


2018 ◽  
Vol 6 (11) ◽  
pp. 216-216 ◽  
Author(s):  
Zhongheng Zhang ◽  
Marcus W. Beck ◽  
David A. Winkler ◽  
Bin Huang ◽  
...  
