Toward Detection of Driver Drowsiness with Commercial Smartwatch and Smartphone

Author(s): Liangliang Lin, Hongyu Yang, Yang Liu, Haoyuan Zheng, Jizhong Zhao
2019, Vol 70 (3), pp. 184-192
Author(s): Toan Dao Thanh, Vo Thien Linh

In this article, a system to detect driver drowsiness and distraction based on an image-sensing technique is developed. With a camera observing the driver's face, the image processing system embedded in a Raspberry Pi 3 kit generates a warning sound when the driver shows drowsiness, indicated by an eye-closed state or a yawn. To detect the closed-eye state, we use the ratio of the distances between the eyelids; to detect yawning, we use the ratio of the distances between the upper and lower lips. A pre-trained 68-point facial landmark model and the frontal face detector in Dlib are used to locate the eye and mouth positions needed for identification. Experimental data from tests of the system on Vietnamese volunteers in our university laboratory show that the system can detect, in real time, the common driver states of "Normal", "Close eyes", "Yawn", and "Distraction".
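As a concrete illustration of the eyelid- and lip-distance ratios described in this abstract, below is a minimal Python sketch built on Dlib's frontal face detector and its standard 68-point landmark predictor. The threshold values, the exact ratio formulations, and the file names are illustrative assumptions, not parameters taken from the paper.

```python
# Sketch of the eyelid-distance and lip-distance ratios, using Dlib's frontal
# face detector and 68-landmark shape predictor. Thresholds are assumptions.
import cv2
import dlib
from scipy.spatial import distance as dist

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def eye_aspect_ratio(pts):
    # pts: six landmark (x, y) tuples of one eye (Dlib indices 36-41 or 42-47)
    a = dist.euclidean(pts[1], pts[5])   # vertical eyelid distance
    b = dist.euclidean(pts[2], pts[4])   # vertical eyelid distance
    c = dist.euclidean(pts[0], pts[3])   # horizontal eye width
    return (a + b) / (2.0 * c)

def mouth_open_ratio(pts):
    # pts: the 20 mouth landmarks (Dlib indices 48-67)
    vertical = dist.euclidean(pts[62 - 48], pts[66 - 48])    # inner upper vs. lower lip
    horizontal = dist.euclidean(pts[60 - 48], pts[64 - 48])  # inner mouth corners
    return vertical / horizontal

EAR_CLOSED = 0.20   # assumed threshold for "Close eyes"
MAR_YAWN = 0.60     # assumed threshold for "Yawn"

frame = cv2.imread("driver_frame.jpg")   # placeholder for a camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
for rect in detector(gray, 0):
    shape = predictor(gray, rect)
    pts = [(p.x, p.y) for p in shape.parts()]
    ear = (eye_aspect_ratio(pts[36:42]) + eye_aspect_ratio(pts[42:48])) / 2.0
    mar = mouth_open_ratio(pts[48:68])
    if ear < EAR_CLOSED:
        print("Close eyes")
    elif mar > MAR_YAWN:
        print("Yawn")
    else:
        print("Normal")
```

In a deployed system the ratios would be checked over several consecutive frames before sounding the warning, so that a normal blink or brief mouth movement does not trigger an alert.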


Information, 2020, Vol 12 (1), pp. 3
Author(s): Shuang Chen, Zengcai Wang, Wenxin Chen

The effective detection of driver drowsiness is an important measure to prevent traffic accidents. Most existing drowsiness detection methods use only a single facial feature to identify fatigue status, ignoring the complex correlations between fatigue features and their temporal information, which reduces recognition accuracy. To solve these problems, we propose a driver sleepiness estimation model based on factorized bilinear feature fusion and a long short-term recurrent convolutional network to detect driver drowsiness efficiently and accurately. The proposed framework includes three modules: fatigue feature extraction, fatigue feature fusion, and driver drowsiness detection. First, we used a convolutional neural network (CNN) to extract deep representations of eye- and mouth-related fatigue features from the face area detected in each video frame. Then, based on the factorized bilinear feature fusion model, we performed a nonlinear fusion of the deep feature representations of the eyes and mouth. Finally, we input the sequence of fused frame-level features into a long short-term memory (LSTM) unit to capture the temporal information of the features and used a softmax classifier to detect sleepiness. The proposed framework was evaluated on the National Tsing Hua University drowsy driver detection (NTHU-DDD) video dataset. The experimental results showed that this method had better stability and robustness compared with other methods.
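To make the described pipeline more concrete, the following is a minimal PyTorch sketch of the fusion and temporal stages: an MFB-style factorized bilinear fusion of per-frame eye and mouth feature vectors, followed by an LSTM and a softmax classifier. The layer sizes, fusion rank, class count, and all names are illustrative assumptions, and the per-frame CNN feature extractor is assumed to run upstream; this is not the authors' exact architecture.

```python
# Hedged sketch: factorized bilinear fusion of eye/mouth CNN features + LSTM.
# Dimensions, rank, and class count are assumptions, not the paper's settings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FactorizedBilinearFusion(nn.Module):
    """MFB-style fusion: project both inputs, multiply element-wise, sum-pool over the rank."""
    def __init__(self, dim_eye, dim_mouth, out_dim, rank=5):
        super().__init__()
        self.rank, self.out_dim = rank, out_dim
        self.proj_eye = nn.Linear(dim_eye, out_dim * rank)
        self.proj_mouth = nn.Linear(dim_mouth, out_dim * rank)

    def forward(self, x_eye, x_mouth):
        joint = self.proj_eye(x_eye) * self.proj_mouth(x_mouth)          # (B, out_dim * rank)
        joint = joint.reshape(-1, self.out_dim, self.rank).sum(dim=2)    # sum-pool over the rank
        joint = torch.sign(joint) * torch.sqrt(torch.abs(joint) + 1e-8)  # power normalization
        return F.normalize(joint, dim=1)                                 # L2 normalization

class DrowsinessLSTM(nn.Module):
    def __init__(self, dim_eye=256, dim_mouth=256, fused_dim=512, hidden=128, num_classes=2):
        super().__init__()
        self.fusion = FactorizedBilinearFusion(dim_eye, dim_mouth, fused_dim)
        self.lstm = nn.LSTM(fused_dim, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, num_classes)  # softmax applied at loss/inference time

    def forward(self, eye_feats, mouth_feats):
        # eye_feats, mouth_feats: (batch, time, feature_dim) per-frame CNN features
        b, t, _ = eye_feats.shape
        fused = self.fusion(eye_feats.reshape(b * t, -1),
                            mouth_feats.reshape(b * t, -1)).reshape(b, t, -1)
        out, _ = self.lstm(fused)
        return self.classifier(out[:, -1])  # class logits from the last time step

# Example: 8 clips of 16 frames, 256-d eye and mouth features per frame.
model = DrowsinessLSTM()
logits = model(torch.randn(8, 16, 256), torch.randn(8, 16, 256))
probs = torch.softmax(logits, dim=1)
```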

