Predicting Freeway Incident Duration Using Machine Learning

Author(s): Khaled Hamad, Mohamad Ali Khalil, Abdul Razak Alozi

Traffic incidents not only cause varying levels of congestion but also contribute to secondary accidents, resulting in substantial losses: injuries and fatalities, increased travel times and delays, excessive energy consumption, and air pollution. Accurately estimating incident duration is therefore essential to mitigating these effects. Traffic management center incident logs and traffic sensor data from Eastbound Interstate 70 (I-70) in Missouri, United States, collected from January 2015 to January 2017 (a total of 352 incident records), were used to develop incident duration estimation models. This paper investigated different machine learning (ML) methods for traffic incident duration prediction. The ML techniques considered include Support Vector Machine (SVM), Random Forest (RF), and Neural Network Multi-Layer Perceptron (MLP). Root mean squared error (RMSE) and mean absolute error (MAE) were used to evaluate the performance of these models. The results showed that the models performed comparably, with SVM slightly outperforming RF and MLP in terms of MAE (14.23 min for the best-performing SVM model), whereas RF slightly outperformed the other two models in terms of RMSE (18.91 min for the best-performing RF model).

Index Terms: Incident Duration, Neural Network Multi-Layer Perceptron, Random Forest, Support Vector Machine.
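The model comparison described in the abstract can be illustrated with a short scikit-learn sketch. This is a minimal sketch, not the authors' implementation: the synthetic features, target relation, and hyperparameters (SVR kernel and C, forest size, MLP layer sizes) are assumptions for illustration only, standing in for the 352 I-70 incident records and their incident-log and sensor features.

```python
# Minimal sketch (assumed setup, not the study's code): fit SVM, RF, and MLP
# regressors for incident duration and report MAE and RMSE on a held-out set.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(0)

# Placeholder features (e.g., lanes blocked, time of day, traffic volume);
# replace with the real incident-log and sensor variables.
n = 352
X = rng.normal(size=(n, 5))
y = 30 + 10 * X[:, 0] - 5 * X[:, 1] + rng.normal(scale=8, size=n)  # duration (min)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

models = {
    "SVM": make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)),
    "RF": RandomForestRegressor(n_estimators=200, random_state=42),
    "MLP": make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=42),
    ),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    mae = mean_absolute_error(y_test, pred)
    rmse = np.sqrt(mean_squared_error(y_test, pred))
    print(f"{name}: MAE = {mae:.2f} min, RMSE = {rmse:.2f} min")
```

With real data, the same loop would reproduce the paper's style of comparison, where one model may win on MAE and another on RMSE because RMSE penalizes large errors more heavily.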


2020, Vol. 43
Author(s): Myrthe Faber

Abstract: Gilead et al. state that abstraction supports mental travel, and that mental travel critically relies on abstraction. I propose an important addition to this theoretical framework, namely that mental travel might also support abstraction. Specifically, I argue that spontaneous mental travel (mind wandering), much like data augmentation in machine learning, provides variability in mental content and context necessary for abstraction.
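The data augmentation analogy can be made concrete with a brief sketch. The array and transforms below are illustrative assumptions only; they show how augmentation injects variability into training content while preserving the underlying example.

```python
# Minimal data-augmentation sketch: random transforms produce varied
# "contexts" of the same underlying example (hypothetical toy data).
import numpy as np

rng = np.random.default_rng(0)

def augment(image, n_variants=4):
    """Return randomly flipped and noised variants of a 2-D array."""
    variants = []
    for _ in range(n_variants):
        out = image.copy()
        if rng.random() < 0.5:
            out = np.fliplr(out)                              # horizontal flip
        out = out + rng.normal(scale=0.05, size=out.shape)    # small noise
        variants.append(out)
    return variants

sample = rng.random((8, 8))        # stand-in for one training example
augmented = augment(sample)
print(len(augmented), "augmented variants of one example")
```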

