Application of Deep Learning for Characterization of Drivers’ Engagement in Secondary Tasks in In-Vehicle Systems

Author(s):  
Osama A. Osman ◽  
Hesham Rakha

Distracted driving (i.e., engaging in secondary tasks) is an epidemic that threatens the lives of thousands every year. Data collected from vehicular sensor technologies and through connectivity provide comprehensive information that, if used to detect driver engagement in secondary tasks, could save thousands of lives and millions of dollars. This study investigates the possibility of achieving this goal using promising deep learning tools. Specifically, two deep neural network models (a multilayer perceptron neural network model and a long short-term memory network [LSTMN] model) were developed to identify three secondary tasks: cellphone calling, cellphone texting, and conversation with adjacent passengers. The Second Strategic Highway Research Program Naturalistic Driving Study (SHRP 2 NDS) time series data, collected using vehicle sensor technology, were used to train and test the models. The results show excellent performance for the developed models, with a slight improvement for the LSTMN model, and overall classification accuracies ranging between 95% and 96%. Specifically, the models identify the different types of secondary tasks with high accuracies of 100% for calling, 96%–97% for texting, 90%–91% for conversation, and 95%–96% for normal driving. Based on this performance, the developed models improve on the results of a previous model developed by the author to classify the same three secondary tasks, which had an accuracy of 82%. The models are promising for use in in-vehicle driving assistance technology to report engagement in unlawful tasks or to alert drivers to take over control in level 1 and level 2 automated vehicles.
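For illustration, a minimal Keras sketch of the two model families described above, assuming fixed-length windows of vehicle kinematic signals and four classes (normal driving, calling, texting, passenger conversation); the window length and feature count are placeholders, not values from the study.

```python
# Minimal sketch (not the authors' code): two classifiers over windows of
# vehicle sensor data, with hypothetical input dimensions and four classes.
import tensorflow as tf
from tensorflow.keras import layers, models

TIMESTEPS, N_FEATURES, N_CLASSES = 100, 8, 4  # hypothetical dimensions

def build_mlp():
    # Multilayer perceptron: flatten the window and use dense layers.
    return models.Sequential([
        layers.Input(shape=(TIMESTEPS, N_FEATURES)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(64, activation="relu"),
        layers.Dense(N_CLASSES, activation="softmax"),
    ])

def build_lstm():
    # LSTM: consume the window as a sequence.
    return models.Sequential([
        layers.Input(shape=(TIMESTEPS, N_FEATURES)),
        layers.LSTM(64),
        layers.Dense(N_CLASSES, activation="softmax"),
    ])

for model in (build_mlp(), build_lstm()):
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=20)
```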

Computers ◽  
2020 ◽  
Vol 9 (4) ◽  
pp. 99
Author(s):  
Sultan Daud Khan ◽  
Louai Alarabi ◽  
Saleh Basalamah

COVID-19 caused the largest economic recession in history by placing more than one third of the world's population in lockdown. The prolonged restrictions on economic and business activities caused huge economic turmoil that significantly affected the financial markets. To ease the growing pressure on the economy, scientists proposed intermittent lockdowns, commonly known as "smart lockdowns". Under a smart lockdown, areas that contain infected clusters of population, namely hotspots, are placed under lockdown, while economic activities are allowed to operate in uninfected areas. In this study, we propose a novel deep learning prediction framework for the accurate prediction of hotspots. We exploit the benefits of two deep learning models, i.e., the Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM), and propose a hybrid framework that extracts multi time-scale features from the convolutional layers of the CNN. The multi time-scale features are then concatenated and provided as input to a two-layer LSTM model. The LSTM model identifies short-, medium- and long-term dependencies by learning the representation of the time-series data. We perform a series of experiments and compare the proposed framework with other state-of-the-art statistical and machine learning based prediction models. The experimental results demonstrate that the proposed framework beats the existing methods by a clear margin.
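A minimal sketch of the hybrid idea, assuming daily case counts per region as input; the window length, kernel sizes, and layer widths are illustrative assumptions rather than the paper's configuration.

```python
# Parallel Conv1D branches with different kernel sizes act as multi time-scale
# feature extractors; their outputs are concatenated and fed to a two-layer
# LSTM that predicts the next value of the case-count series.
import tensorflow as tf
from tensorflow.keras import layers, Model

WINDOW, N_FEATURES = 28, 1  # e.g., 28 days of daily case counts (assumption)

inputs = layers.Input(shape=(WINDOW, N_FEATURES))
branches = []
for k in (3, 5, 7):  # short-, medium-, long-range kernels
    x = layers.Conv1D(32, kernel_size=k, padding="same", activation="relu")(inputs)
    branches.append(x)
features = layers.Concatenate()(branches)      # multi time-scale features
x = layers.LSTM(64, return_sequences=True)(features)
x = layers.LSTM(32)(x)
outputs = layers.Dense(1)(x)                   # next-step case count

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")
```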


Author(s):  
Zahra A. Shirazi ◽  
Camila P. E. de Souza ◽  
Rasha Kashef ◽  
Felipe F. Rodrigues

Artificial Neural Networks (ANNs) are composed of nodes that are joined to each other through weighted connections. Deep learning, as an extension of ANNs, is a neural network model composed of different categories of layers: an input layer, hidden layers, and an output layer. Input data are fed into the first (input) layer, but the main processing of a neural network model is done within the hidden layers, which range from a single hidden layer to multiple ones. The structure of the hidden layers differs depending on the type of model, and the choice of model depends on the type of input data. For example, for image data, convolutional neural networks are the most appropriate, whereas for text or sequential and time-series data, recurrent neural networks or long short-term memory models are the better choices. This chapter summarizes the state-of-the-art deep learning methods applied to the healthcare industry.
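As a small illustration of the chapter's point about matching the hidden-layer structure to the data type, the sketch below builds a tiny CNN for image inputs and an LSTM for sequential inputs; all shapes are hypothetical.

```python
# Illustrative sketch only: convolutional hidden layers for image data,
# recurrent hidden layers for sequential / time-series data.
from tensorflow.keras import layers, models

image_model = models.Sequential([        # image data -> convolutional layers
    layers.Input(shape=(64, 64, 3)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])

sequence_model = models.Sequential([     # time series / text -> recurrent layers
    layers.Input(shape=(50, 4)),
    layers.LSTM(32),
    layers.Dense(1),
])
```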


2021 ◽  
Vol 21 (3) ◽  
pp. 175-188
Author(s):  
Sumaiya Thaseen Ikram ◽  
Aswani Kumar Cherukuri ◽  
Babu Poorva ◽  
Pamidi Sai Ushasree ◽  
Yishuo Zhang ◽  
...  

Intrusion Detection Systems (IDSs) utilise deep learning techniques to identify intrusions with maximum accuracy and reduced false alarm rates. Feature extraction is also automated in these techniques. In this paper, an ensemble of different Deep Neural Network (DNN) models, namely a MultiLayer Perceptron (MLP), a BackPropagation Network (BPN) and a Long Short Term Memory (LSTM) network, is stacked to build a robust anomaly detection model. The performance of the ensemble model is analysed on different datasets, namely UNSW-NB15 and a campus-generated dataset named VIT_SPARC20. The VIT_SPARC20 dataset captures several types of traffic: unencrypted normal traffic, encrypted normal traffic, and encrypted and unencrypted malicious traffic. Encrypted normal and malicious traffic in VIT_SPARC20 is categorised by the deep learning models without decrypting its contents, thus preserving the confidentiality and integrity of the data transmitted. XGBoost integrates the results of each deep learning model to achieve higher accuracy. Experimental analysis shows that UNSW-NB15 yields a maximum accuracy of 99.5%, while the accuracy, precision and recall on VIT_SPARC20 are 99.4%, 98% and 97%, respectively.
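One way such a stacked ensemble can be wired together is sketched below, with placeholder feature counts and a generic dense network standing in for both the MLP and BPN base learners; this is an assumed arrangement, not the paper's exact pipeline.

```python
# Hedged sketch: three Keras base models produce class probabilities for each
# flow record, and an XGBoost classifier is trained on the stacked probabilities.
import numpy as np
from tensorflow.keras import layers, models
from xgboost import XGBClassifier

N_FEATURES, N_CLASSES = 42, 2  # hypothetical: flow features, normal vs. attack

def dense_net():                  # stands in for the MLP / BPN base learners
    m = models.Sequential([
        layers.Input(shape=(N_FEATURES,)),
        layers.Dense(64, activation="relu"),
        layers.Dense(N_CLASSES, activation="softmax"),
    ])
    m.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    return m

def lstm_net():                   # LSTM base learner over a one-step "sequence"
    m = models.Sequential([
        layers.Input(shape=(1, N_FEATURES)),
        layers.LSTM(32),
        layers.Dense(N_CLASSES, activation="softmax"),
    ])
    m.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    return m

def stack_predictions(base_models, X, X_seq):
    # Concatenate each base model's class probabilities into meta-features.
    probs = [base_models[0].predict(X),
             base_models[1].predict(X),
             base_models[2].predict(X_seq)]
    return np.hstack(probs)

# base = [dense_net(), dense_net(), lstm_net()]; fit each on the training split,
# then: meta = XGBClassifier().fit(stack_predictions(base, X_tr, X_tr[:, None, :]), y_tr)
```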


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Yao Li

Faults occurring in a production line can cause many losses. Predicting fault events before they occur or identifying their causes can effectively reduce such losses. A modern production line can provide enough data to solve this problem. However, for complex industrial processes, the problem becomes very difficult with traditional methods. In this paper, we propose a new approach based on a deep learning (DL) algorithm to solve the problem. First, we treat the process data as a spatial sequence ordered according to the production process, which differs from traditional time-series data. Second, we improve the long short-term memory (LSTM) neural network in an encoder-decoder model to adapt to the branch structure corresponding to the spatial sequence. Meanwhile, an attention mechanism (AM) algorithm is used in fault detection and cause identification. Third, instead of traditional binary classification, the output is defined as a sequence of fault types. The proposed approach has two advantages. On the one hand, treating the data as a spatial sequence rather than a time sequence can overcome multidimensional problems and improve prediction accuracy. On the other hand, in the trained neural network, the weight vectors generated by the AM algorithm represent the correlation between faults and the input data. This correlation can help engineers identify the causes of faults. The proposed approach is compared with several well-developed fault diagnosis methods on the Tennessee Eastman process. Experimental results show that the approach achieves higher prediction accuracy and that the weight vector can accurately label the factors that cause faults.
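A minimal sketch of an encoder-decoder LSTM with an attention layer, under the assumption that the input is a spatial sequence of process measurements and the output is a fixed-length sequence of fault-type labels; dimensions and layer sizes are placeholders, not the author's implementation.

```python
# Encoder LSTM reads the measurements ordered as a spatial sequence; a decoder
# LSTM emits a sequence of fault-type labels; an attention layer lets each
# output step weight the input positions (these weights are the quantity the
# paper inspects to relate faults back to input variables).
from tensorflow.keras import layers, Model

N_STEPS, N_SENSORS = 52, 1          # stations along the line x measurements (assumed)
OUT_STEPS, N_FAULT_TYPES = 10, 21   # fault-sequence length, fault classes (assumed)

inputs = layers.Input(shape=(N_STEPS, N_SENSORS))
enc_seq, state_h, state_c = layers.LSTM(64, return_sequences=True,
                                        return_state=True)(inputs)

dec_in = layers.RepeatVector(OUT_STEPS)(state_h)   # seed the decoder
dec_seq = layers.LSTM(64, return_sequences=True)(dec_in,
                                                 initial_state=[state_h, state_c])

context = layers.Attention()([dec_seq, enc_seq])   # attention over encoder positions
merged = layers.Concatenate()([dec_seq, context])
outputs = layers.TimeDistributed(
    layers.Dense(N_FAULT_TYPES, activation="softmax"))(merged)

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```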


2021 ◽  
Vol 35 (1) ◽  
pp. 1-10
Author(s):  
Senthil Kumar Paramasivan

In the modern era, deep learning is a powerful technique in the field of wind energy forecasting. A deep neural network effectively handles the seasonal variation and uncertainty characteristics of wind speed through proper structural design, objective function optimization, and feature learning. The present paper focuses on a critical analysis of wind energy forecasting using deep learning based recurrent neural network (RNN) models. It explores RNNs and their variants, such as the simple RNN, Long Short Term Memory (LSTM), Gated Recurrent Unit (GRU), and bidirectional RNN models. A recurrent neural network processes the input time-series data sequentially and captures the temporal dependencies that exist in successive input data. This review investigates the RNN models used for wind energy forecasting, the data sources utilized, and the performance achieved in terms of error measures. The overall review shows that deep learning based RNNs improve the performance of wind energy forecasting compared to conventional techniques.
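The four RNN families surveyed can be instantiated in a few lines; the sketch below assumes hourly wind-speed windows and next-step regression, with all sizes chosen for illustration.

```python
# Illustrative sketch (assumed shapes): each variant maps a window of past
# wind-speed observations to the next-step forecast.
from tensorflow.keras import layers, models

WINDOW, N_FEATURES = 24, 1  # e.g., 24 hourly wind-speed readings (assumption)

def forecaster(recurrent_layer):
    m = models.Sequential([
        layers.Input(shape=(WINDOW, N_FEATURES)),
        recurrent_layer,
        layers.Dense(1),            # next-step wind speed / power
    ])
    m.compile(optimizer="adam", loss="mse")
    return m

variants = {
    "simple_rnn":    forecaster(layers.SimpleRNN(32)),
    "lstm":          forecaster(layers.LSTM(32)),
    "gru":           forecaster(layers.GRU(32)),
    "bidirectional": forecaster(layers.Bidirectional(layers.LSTM(32))),
}
```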


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Zhao Yang ◽  
Yifan Wang ◽  
Jie Li ◽  
Liming Liu ◽  
Jiyang Ma ◽  
...  

This study presents a combined Long Short-Term Memory and Extreme Gradient Boosting (LSTM-XGBoost) method for flight arrival flow prediction at an airport. Correlation analysis is conducted between the historical arrival flow and the input features. The XGBoost method is applied to identify the relative importance of the various variables. The historical time-series data of airport arrival flow and the selected features are taken as input variables, and the subsequent flight arrival flow is the output variable. The model parameters are sequentially updated based on the recently collected data and the new prediction results. It is found that the prediction accuracy is greatly improved by incorporating the meteorological features. The data analysis results indicate that the developed method characterizes the dynamics of the airport arrival flow well, thereby providing satisfactory prediction results. The prediction performance is compared with benchmark methods including the backpropagation neural network, LSTM neural network, support vector machine, gradient boosting regression tree, and XGBoost. The results show that the proposed LSTM-XGBoost model outperforms baseline and state-of-the-art neural network models.
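One plausible way to combine the two components is sketched below: XGBoost ranks candidate features by importance, the top-ranked ones are kept, and an LSTM forecasts arrival flow from windows of the selected series. The helper make_windows and all sizes are hypothetical; the authors' exact coupling may differ.

```python
# Hedged sketch of an XGBoost-for-feature-selection plus LSTM-for-forecasting
# pipeline; not the paper's exact implementation.
import numpy as np
from xgboost import XGBRegressor
from tensorflow.keras import layers, models

def select_features(X, y, keep=5):
    # X: (samples, n_candidate_features) tabular view, y: arrival flow
    ranker = XGBRegressor(n_estimators=200).fit(X, y)
    return np.argsort(ranker.feature_importances_)[::-1][:keep]

def build_lstm(window, n_features):
    m = models.Sequential([
        layers.Input(shape=(window, n_features)),
        layers.LSTM(64),
        layers.Dense(1),            # next-interval arrival flow
    ])
    m.compile(optimizer="adam", loss="mse")
    return m

# cols = select_features(X_tab, y)
# model = build_lstm(window=12, n_features=len(cols))
# model.fit(*make_windows(X_tab[:, cols], y, window=12))  # make_windows: user-defined
```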


The prediction of time series data is a forecast based on analyzing the pattern of the relationship between the quantity to be predicted and the time variable. A recurrent neural network (RNN) model can recognize and learn the patterns in time series data, but strong fluctuations in the data make those patterns difficult to learn. The data used for forecasting are tourist visits to the Tanah Lot Bali tourist attraction over 10 years (2008-2017). Training the RNN on highly fluctuating data requires a relatively long time to recognize and learn the data patterns. Modifying the RNN's learning rate and momentum to use dynamic values can shorten the learning time. The results show that the learning time of the RNN with dynamic values is shorter than that of RNN variants such as the Elman RNN, Jordan RNN, fully RNN, and LSTM, and of the feedforward (backpropagation) method. The resulting error value is 0.05105 MSE, which is smaller than that of the fully RNN, Jordan RNN, LSTM, and feedforward methods. The Elman method has the shortest training time among the other models. The purpose of this research is to create a prediction design consisting of a sliding windows technique, training with neural network models, and validation of the results with k-fold cross-validation.
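A rough sketch of the ingredients described above: sliding windows over the visitor series, a simple RNN regressor, and a decaying learning-rate schedule standing in for the paper's dynamic learning rate and momentum; the schedule and all hyperparameters are assumptions.

```python
# Sliding windows + SimpleRNN with SGD (momentum) and a decaying learning-rate
# schedule; values are illustrative, not the paper's.
import numpy as np
from tensorflow.keras import layers, models, optimizers, callbacks

def sliding_windows(series, window=12):
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X[..., None], y          # (samples, window, 1), (samples,)

model = models.Sequential([
    layers.Input(shape=(12, 1)),
    layers.SimpleRNN(16),
    layers.Dense(1),
])
model.compile(optimizer=optimizers.SGD(learning_rate=0.1, momentum=0.9),
              loss="mse")

# Decay the learning rate as training progresses (a stand-in for the paper's
# dynamic-value modification).
schedule = callbacks.LearningRateScheduler(lambda epoch, lr: lr * 0.95)
# X, y = sliding_windows(visits_per_month)
# model.fit(X, y, epochs=100, callbacks=[schedule])
```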


Water ◽  
2020 ◽  
Vol 12 (8) ◽  
pp. 2088
Author(s):  
Minxue He ◽  
Liheng Zhong ◽  
Prabhjot Sandhu ◽  
Yu Zhou

Salinity management is a subject of particular interest in estuarine environments because of the underlying biological significance of salinity and its variations in time and space. The foremost step in such management practices is understanding the spatial and temporal variations of salinity and the principal drivers of these variations. This has traditionally been achieved with the assistance of empirical or process-based models, but these can be computationally expensive for complex environmental systems. Model emulation based on data-driven methods offers a viable alternative to traditional modeling in terms of computational efficiency and improving accuracy by recognizing patterns and processes that are overlooked or underrepresented (or overrepresented) by traditional models. This paper presents a case study of emulating a process-based boundary salinity generator via deep learning for the Sacramento–San Joaquin Delta (Delta), an estuarine environment with significant economic, ecological, and social value on the Pacific coast of northern California, United States. Specifically, the study proposes a range of neural network models: (a) multilayer perceptron, (b) long short-term memory network, and (c) convolutional neural network-based models for estimating the downstream boundary salinity of the Delta on a daily time-step. These neural network models are trained and validated using half of the dataset, from water year 1991 to 2002. They are then evaluated for performance in the remaining record period, from water year 2003 to 2014, against the process-based boundary salinity generation model across different ranges of salinity in different types of water years. The results indicate that deep learning neural networks provide competitive or superior results compared with the process-based model, particularly when the output of the latter is incorporated as an input to the former. The improvements are generally more noticeable during extreme (i.e., wet, dry, and critical) years rather than in near-normal (i.e., above-normal and below-normal) years, and during low and medium ranges of salinity rather than the high range. Overall, this study indicates that deep learning approaches have the potential to supplement the current practices in estimating salinity at the downstream boundary and other locations across the Delta, and thus guide real-time operations and long-term planning activities in the Delta.
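The three emulator families compared in the study can be sketched as follows, assuming windows of daily boundary-condition inputs (e.g., flows and tidal terms) mapped to a daily salinity value; the window length and input count are placeholders.

```python
# Illustrative sketch only: MLP, LSTM, and CNN emulators that map a window of
# daily inputs to the downstream boundary salinity for that day.
from tensorflow.keras import layers, models

WINDOW, N_INPUTS = 118, 7   # hypothetical antecedent window and input count

def mlp():
    return models.Sequential([
        layers.Input(shape=(WINDOW, N_INPUTS)),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1),                     # daily boundary salinity
    ])

def lstm():
    return models.Sequential([
        layers.Input(shape=(WINDOW, N_INPUTS)),
        layers.LSTM(32),
        layers.Dense(1),
    ])

def cnn():
    return models.Sequential([
        layers.Input(shape=(WINDOW, N_INPUTS)),
        layers.Conv1D(32, kernel_size=5, activation="relu"),
        layers.GlobalAveragePooling1D(),
        layers.Dense(1),
    ])
```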

