Implementation of Face Detection and Eye Tracking with Convolutional Neural Network and Edge Computing on Electromobility

Author(s):  
Ching-Lung Su ◽  
Wen-Cheng Lai ◽  
Han-Wei Huang ◽  
Cheng-Han Lin ◽  
Yu-Bin Chen
2012 ◽  
Vol 113 (1) ◽  
pp. 66-77 ◽  
Author(s):  
Elisa Di Giorgio ◽  
Chiara Turati ◽  
Gianmarco Altoè ◽  
Francesca Simion

Author(s):  
Heesun Park ◽  
Jangpyo Hong ◽  
Sangyeol Kim ◽  
Young-Min Jang ◽  
Cheol-Su Kim ◽  
...  

Author(s):  
Xiaofeng Li ◽  
Jiahao Xia ◽  
Libo Cao ◽  
Guanjun Zhang ◽  
Xiexing Feng

Most current vision-based fatigue detection methods lack a high-performance, robust face detector; they infer driver fatigue from a single detection feature and cannot achieve real-time efficiency on edge computing devices. To address these problems, this paper proposes a driver fatigue detection system based on a convolutional neural network that runs in real time on edge computing devices. The system first uses the proposed face detection network, LittleFace, to locate the face and classify it into two states: the small-yaw-angle state “normal” and the large-yaw-angle state “distract.” Second, a speed-optimized SDM (Supervised Descent Method) alignment is run only on face regions in the “normal” state, mitigating the drop in face alignment accuracy at large profile angles, while the “distract” state is used to detect driver distraction. Finally, the feature parameters EAR (eye aspect ratio), MAR (mouth aspect ratio), and head pitch angle are computed from the obtained landmarks, each serving as a separate fatigue indicator. Comprehensive experiments demonstrate the system's practicality and superiority: LittleFace achieves 88.53% mAP on the AFLW test set at 58 FPS on the Nvidia Jetson Nano edge computing device, and evaluation on YawDD, 300-W, and DriverEyes shows an average detection accuracy of 89.55%.
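The per-frame features named in this abstract (EAR, MAR, head pitch) follow standard definitions; below is a minimal Python sketch of how they could be computed from the 68-point landmark layout commonly produced by SDM-style alignment. The landmark indices follow the usual 68-point convention, and the thresholds are illustrative placeholders, not values from the paper.

import numpy as np

def eye_aspect_ratio(eye):
    """eye: (6, 2) array of landmarks p1..p6 around one eye."""
    v1 = np.linalg.norm(eye[1] - eye[5])  # vertical distance p2-p6
    v2 = np.linalg.norm(eye[2] - eye[4])  # vertical distance p3-p5
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal distance p1-p4
    return (v1 + v2) / (2.0 * h)

def mouth_aspect_ratio(mouth):
    """mouth: (8, 2) array of inner-lip landmarks (indices 60-67)."""
    v = np.linalg.norm(mouth[2] - mouth[6])  # top to bottom inner lip
    h = np.linalg.norm(mouth[0] - mouth[4])  # left to right mouth corner
    return v / h

EAR_CLOSED, MAR_YAWN, PITCH_NOD = 0.20, 0.60, 15.0  # illustrative thresholds

def frame_flags(landmarks, pitch_deg):
    """landmarks: (68, 2) array from the face alignment step."""
    ear = (eye_aspect_ratio(landmarks[36:42]) +
           eye_aspect_ratio(landmarks[42:48])) / 2.0
    mar = mouth_aspect_ratio(landmarks[60:68])
    return {"eyes_closed": ear < EAR_CLOSED,
            "yawning": mar > MAR_YAWN,
            "head_nod": pitch_deg > PITCH_NOD}

In practice each flag would be accumulated over a sliding window of frames (PERCLOS-style counting) before declaring fatigue, consistent with the abstract's use of each feature as a separate detector.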


2019 ◽  
Author(s):  
Hanojhan Rajahrajasingh

When a driver does not get proper rest, they may fall asleep while driving, which leads to fatal accidents. This issue demands a system capable of detecting drowsiness and taking the necessary actions to avoid accidents. Detection proceeds in three main steps: it begins with face detection and facial feature detection using the well-known Viola–Jones algorithm, followed by eye tracking. The eyes are tracked by correlation-coefficient template matching: the extracted eye image is compared against externally supplied templates (open eyes and closed eyes) to identify whether the driver is awake or asleep, and blinking is recognized from the alternation of eye opening and closing. If the falling-asleep state persists beyond a specific threshold time, the vehicle stops and an alarm is activated by a microcontroller; in this prototype an Arduino is used.
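A hedged sketch of the two detection stages described above, using OpenCV's Haar-cascade implementation of Viola–Jones followed by normalized correlation-coefficient template matching. The template file names and parameter values are assumptions for illustration; the abstract does not specify them.

import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")
tmpl_open = cv2.imread("eye_open.png", cv2.IMREAD_GRAYSCALE)      # assumed file
tmpl_closed = cv2.imread("eye_closed.png", cv2.IMREAD_GRAYSCALE)  # assumed file

def eye_state(frame_bgr):
    """Return 'open', 'closed', or None if no eye is found in the frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
        face = gray[y:y + h, x:x + w]
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(face):
            # Resize the detected eye patch to the template size, then score
            # it against both templates with normalized cross-correlation.
            eye = cv2.resize(face[ey:ey + eh, ex:ex + ew],
                             tmpl_open.shape[::-1])
            s_open = cv2.matchTemplate(eye, tmpl_open,
                                       cv2.TM_CCOEFF_NORMED).max()
            s_closed = cv2.matchTemplate(eye, tmpl_closed,
                                         cv2.TM_CCOEFF_NORMED).max()
            return "open" if s_open >= s_closed else "closed"
    return None

Counting consecutive "closed" frames against the threshold time, then signalling the Arduino (for example over a serial link) to stop the vehicle and sound the alarm, would complete the prototype's control loop.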


Author(s):  
HM Tamim ◽  
Fahema Sultana ◽  
Nazifa Tasneem ◽  
Yakut Marzan ◽  
Mohammad Monirujjaman Khan

2020 ◽  
Vol 63 (7) ◽  
pp. 2245-2254 ◽  
Author(s):  
Jianrong Wang ◽  
Yumeng Zhu ◽  
Yu Chen ◽  
Abdilbar Mamat ◽  
Mei Yu ◽  
...  

Purpose The primary purpose of this study was to explore the audiovisual speech perception strategies adopted by normal-hearing and deaf people in processing familiar and unfamiliar languages. Our primary hypothesis was that they would adopt different perception strategies owing to different sensory experiences at an early age, limitations of the physical device, the developmental gap in language, and other factors. Method Thirty normal-hearing adults and 33 prelingually deaf adults participated in the study. They were asked to perform judgment and listening tasks while watching videos of a Uygur–Mandarin bilingual speaker in a familiar language (Standard Chinese) or an unfamiliar language (Modern Uygur) while their eye movements were recorded by eye-tracking technology. Results Task had a slight influence on the distribution of selective attention, whereas subject group and language had significant influences. Specifically, the normal-hearing and the deaf participants mainly gazed at the speaker's eyes and mouth, respectively; moreover, while the normal-hearing participants had to stare longer at the speaker's mouth when confronted with the unfamiliar language Modern Uygur, the deaf participants did not change their attention allocation pattern when perceiving the two languages. Conclusions Normal-hearing and deaf adults adopt different audiovisual speech perception strategies: Normal-hearing adults mainly look at the eyes, and deaf adults mainly look at the mouth. Additionally, language and task can also modulate the speech perception strategy.
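As a concrete illustration of the attention-allocation measure reported here, a minimal Python sketch: the proportion of gaze samples falling inside rectangular eye and mouth areas of interest (AOIs). The sample format and AOI coordinates are assumptions, since the abstract does not describe the eye tracker's data export.

AOIS = {
    "eyes": (300, 180, 680, 300),   # (x_min, y_min, x_max, y_max), assumed
    "mouth": (380, 420, 600, 520),  # assumed coordinates in screen pixels
}

def dwell_proportions(samples):
    """samples: iterable of (x, y) gaze points at a fixed sampling rate."""
    counts = {name: 0 for name in AOIS}
    total = 0
    for x, y in samples:
        total += 1
        for name, (x0, y0, x1, y1) in AOIS.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                counts[name] += 1
    return {name: c / total for name, c in counts.items()} if total else counts

Comparing the "eyes" and "mouth" proportions across participant groups and language conditions would reproduce the kind of contrast the study reports.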


Author(s):  
Pirita Pyykkönen ◽  
Juhani Järvikivi

A visual-world eye-tracking study investigated the activation and persistence of implicit causality information in spoken-language comprehension. We showed that people infer the implicit causality of verbs as soon as they encounter such verbs in discourse, as predicted by proponents of the immediate focusing account (Greene & McKoon, 1995; Koornneef & Van Berkum, 2006; Van Berkum, Koornneef, Otten, & Nieuwland, 2007). Interestingly, we observed activation of implicit causality information even before people encountered the causal conjunction. However, while implicit causality information persisted as the discourse unfolded, it did not have a privileged role as a focusing cue immediately at the ambiguous pronoun when people were resolving its antecedent. Instead, our study indicated that implicit causality does not affect all referents to the same extent; rather, it interacts with other cues in the discourse, especially when one of the referents is already prominently in focus.

