Driver Sleepiness Detection Algorithm Based on Relevance Vector Machine

2021, Vol 16 (1), pp. 118-139
Author(s): Lingxiang Wei, Tianliu Feng, Pengfei Zhao, Mingjun Liao

Driver sleepiness is one of the most important causes of traffic accidents. Efficient and stable algorithms are crucial for distinguishing the non-fatigue state from the fatigue state. The relevance vector machine (RVM), a leading-edge detection approach, meets this requirement and represents a potential solution for fatigue state detection. To identify the driver’s fatigue state accurately and effectively and to reduce the number of traffic accidents caused by driver sleepiness, this paper treats the degree of the driver’s mouth opening and the eye state as multi-source related variables and defines the classification of fatigue and non-fatigue states based on the related literature and an investigation. On this basis, an RVM model for automatic detection of the fatigue state is proposed. Twenty male respondents participated in the data collection process, and a total of 1000 driving-state samples (half non-fatigue and half fatigue) were obtained. The fatigue state recognition results were analysed for different RVM classifiers. The results show that the recognition accuracy of the RVM-based driving-state classifiers with different kernel functions was higher than 90%, which indicates that the mouth-opening degree and the eye state index used in this work are closely related to the fatigue state. Based on the obtained results, the proposed fatigue state identification method has the potential to improve fatigue state detection accuracy. More importantly, it provides a scientific theoretical basis for the development of fatigue state warning methods.
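A minimal sketch of the two-feature classification setup described above. RVM is not part of scikit-learn's core, so an RBF-kernel SVC stands in here purely to illustrate the pipeline; the synthetic feature values and class means are illustrative assumptions, not the study's data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC  # stand-in for an RVM classifier

rng = np.random.default_rng(0)
n = 1000  # the study used 1000 samples, half fatigue and half non-fatigue

# Synthetic stand-in features: mouth-opening degree and an eye-state index.
mouth_open = np.concatenate([rng.normal(0.2, 0.05, n // 2),   # non-fatigue
                             rng.normal(0.5, 0.10, n // 2)])  # fatigue
eye_state = np.concatenate([rng.normal(0.15, 0.05, n // 2),
                            rng.normal(0.45, 0.10, n // 2)])
X = np.column_stack([mouth_open, eye_state])
y = np.array([0] * (n // 2) + [1] * (n // 2))  # 0 = non-fatigue, 1 = fatigue

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale"))
clf.fit(X_tr, y_tr)
print(f"hold-out accuracy: {clf.score(X_te, y_te):.3f}")
```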

2011, Vol 130-134, pp. 2429-2432
Author(s): Liang Xiu Zhang, Xu Yun Qiu, Zhu Lin Zhang, Yu Lin Wang

Real-time on-road vehicle detection is a key technology in many transportation applications, such as driver assistance, autonomous driving and active safety. A vehicle detection algorithm based on a cascaded structure is introduced. Haar-like features are used to build the model, and the GAB (Gentle AdaBoost) algorithm is chosen to train the strong classifiers. The real-time on-road vehicle classifier with a cascaded structure is then constructed by combining these strong classifiers. Experimental results show that the cascaded classifier is excellent in both detection accuracy and computational efficiency, which makes it suitable for collision warning systems.
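An illustrative sketch of running a trained Haar cascade of this kind with OpenCV. The cascade file and image path are hypothetical; training such a cascade with Gentle AdaBoost is done offline (e.g., with OpenCV's cascade training tools) and is not shown.

```python
import cv2

cascade = cv2.CascadeClassifier("vehicle_cascade.xml")  # hypothetical trained model
frame = cv2.imread("road_frame.jpg")                    # hypothetical input frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Sliding-window detection over an image pyramid; the early cascade stages reject
# most non-vehicle windows cheaply, which is what keeps the detector real-time.
vehicles = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4,
                                    minSize=(40, 40))
for (x, y, w, h) in vehicles:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detections.jpg", frame)
```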


Sensors, 2020, Vol 20 (13), pp. 3646
Author(s): Jingwei Cao, Chuanxue Song, Silun Peng, Shixin Song, Xu Zhang, ...

Pedestrian detection is an important aspect of the development of intelligent vehicles. To address the problems that traditional pedestrian detection is susceptible to environmental factors and cannot meet real-time accuracy requirements, this study proposes a pedestrian detection algorithm for intelligent vehicles in complex scenarios. YOLOv3 is one of the deep learning-based object detection algorithms with good performance at present. In this article, the basic principle of YOLOv3 is first elaborated and analyzed to determine its limitations in pedestrian detection. Then, on the basis of the original YOLOv3 network model, several improvements are made, including modifying the grid cell size, adopting an improved k-means clustering algorithm, improving the multi-scale bounding box prediction based on the receptive field, and using the Soft-NMS algorithm. Finally, pedestrian detection experiments are conducted on the INRIA person and PASCAL VOC 2012 datasets to test the performance of the algorithm in various complex scenarios. The experimental results show that the mean Average Precision (mAP) reaches 90.42% and the average processing time per frame is 9.6 ms. Compared with other detection algorithms, the proposed algorithm combines accuracy with real-time performance, shows good robustness and anti-interference ability in complex scenarios, and has strong generalization ability and high network stability; both detection accuracy and detection speed are markedly improved. Such improvements are significant for protecting the road safety of pedestrians and reducing traffic accidents, and are conducive to the steady development of intelligent vehicle driving assistance technology.
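A minimal NumPy sketch of the Gaussian Soft-NMS step mentioned among the improvements: instead of discarding boxes that overlap the top-scoring box, their scores are decayed, which helps in crowded pedestrian scenes. This is the generic published algorithm, not the authors' implementation.

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, all given as [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter + 1e-9)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Return a list of (box, decayed_score) pairs kept by Gaussian Soft-NMS."""
    boxes, scores = boxes.astype(float), scores.astype(float)
    keep = []
    while len(boxes) > 0:
        top = np.argmax(scores)
        keep.append((boxes[top], scores[top]))
        top_box = boxes[top]
        boxes = np.delete(boxes, top, axis=0)
        scores = np.delete(scores, top)
        if len(boxes) == 0:
            break
        scores *= np.exp(-iou(top_box, boxes) ** 2 / sigma)  # Gaussian score decay
        mask = scores > score_thresh
        boxes, scores = boxes[mask], scores[mask]
    return keep
```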


Author(s): Dongxian Yu, Jiatao Kang, Zaihui Cao, Neha Jain

To overcome the difficulty of correctly detecting traffic signs under the interference of various complex factors, and to address the weak robustness of current traffic sign detection technology, a traffic sign detection algorithm based on region-of-interest extraction and a double filter is designed. First, to reduce environmental interference, the input image is preprocessed to enhance the main color of each sign. Second, to improve the extraction of regions of interest, a region-of-interest (ROI) detector based on Maximally Stable Extremal Regions (MSER) and the Wave Equation (WE) is defined, and candidate regions are selected through the ROI detector. Then, an effective HOG (Histogram of Oriented Gradients) descriptor is introduced as the detection feature of traffic signs, and an SVM (Support Vector Machine) is used to classify them as traffic signs or background. Finally, a context-aware filter and a traffic light filter are used to further identify false traffic signs and improve detection accuracy. Three kinds of traffic signs (indicative, prohibited and dangerous) are tested on the GTSDB database, and the results show that the proposed algorithm has higher detection accuracy and robustness than current traffic sign recognition technology.
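A sketch of the candidate-region plus HOG plus SVM stage described above, using OpenCV's MSER detector and scikit-image's HOG. The trained classifier `sign_svm`, the patch size and the image handling are illustrative assumptions; the wave-equation ROI scoring and the two post-filters are not shown.

```python
import cv2
from skimage.feature import hog
from sklearn.svm import LinearSVC  # assume sign_svm = LinearSVC() fitted offline

def candidate_regions(bgr):
    """Extract candidate bounding boxes with MSER; each bbox is (x, y, w, h)."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    mser = cv2.MSER_create()
    _, bboxes = mser.detectRegions(gray)  # maximally stable extremal regions
    return bboxes

def classify_candidates(bgr, bboxes, sign_svm):
    """Classify each candidate patch as a sign class or background via HOG + SVM."""
    labels = []
    for (x, y, w, h) in bboxes:
        patch = cv2.resize(bgr[y:y + h, x:x + w], (40, 40))
        gray = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)
        feat = hog(gray, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2))
        labels.append(sign_svm.predict([feat])[0])
    return labels
```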


Photonics, 2020, Vol 8 (1), pp. 3
Author(s): Shun Qin, Wai Kin Chan

Accurate segmented-mirror wavefront sensing and control is essential for next-generation large-aperture telescope design. In this paper, a direct tip–tilt and piston error detection technique based on model-based phase retrieval with multiple defocused images is proposed for segmented-mirror wavefront sensing. In our technique, the tip–tilt and piston errors are represented by a basis consisting of three basic plane functions with respect to the x, y, and z axes, so that they can be parameterized by the coefficients of these basis functions; the coefficients are then solved for by a non-linear optimization method using the multiple defocused images. Simulation results show that the proposed technique is capable of measuring wavefront errors with a high dynamic range reaching 7λ while achieving high detection accuracy. The algorithm is shown to be robust to noise thanks to the phase parameterization. In comparison, the proposed tip–tilt and piston error detection approach is much easier to implement than many existing methods, which usually require extra sensors and devices, since it relies only on multiple images. These characteristics make it promising for wavefront sensing and control in next-generation large-aperture telescopes.
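A compact way to write the parameterization the abstract describes: per segment, the wavefront error is spanned by three plane basis functions (tip, tilt, piston), and their coefficients are fitted to K defocused intensity images. The symbols below are generic placeholders, not the paper's notation.

```latex
\begin{align}
  \phi_{\mathrm{seg}}(x, y) &= a\,x + b\,y + c
  \qquad \text{(tip, tilt and piston coefficients } a, b, c\text{)}, \\
  I_k^{\mathrm{model}}(a, b, c) &=
     \bigl|\,\mathcal{F}\{ A(x,y)\, e^{\,i[\phi_{\mathrm{seg}}(x,y) + \phi_{d,k}(x,y)]}\}\,\bigr|^2, \\
  (\hat a, \hat b, \hat c) &=
     \arg\min_{a,b,c} \sum_{k=1}^{K}
     \bigl\| I_k^{\mathrm{meas}} - I_k^{\mathrm{model}}(a, b, c) \bigr\|_2^2,
\end{align}
```

where A(x, y) is the pupil amplitude and φ_{d,k} is the known defocus phase of the k-th image; the last line is the non-linear least-squares problem solved by the optimizer.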


2021, Vol 13 (10), pp. 1909
Author(s): Jiahuan Jiang, Xiongjun Fu, Rui Qin, Xiaoyan Wang, Zhifeng Ma

Synthetic Aperture Radar (SAR) has become one of the important technical means of marine monitoring in the field of remote sensing due to its all-day, all-weather capability. Ship monitoring in national territorial waters supports maritime law enforcement, maritime traffic control, and the maintenance of national maritime security, so ship detection has long been a research hotspot. As the field has moved from traditional detection methods to methods combined with deep learning, most research has relied on ever-growing Graphics Processing Unit (GPU) computing power to propose increasingly complex and computationally intensive strategies, while, in transplanting optical image detection, ignoring the low signal-to-noise ratio, low resolution, single channel and other characteristics that follow from the SAR imaging principle. By constantly pursuing detection accuracy while neglecting detection speed and the eventual deployment of the algorithm, almost all of these methods rely on powerful desktop GPU clusters and cannot be used on the front line of marine monitoring to cope with changing realities. To address these issues, this paper proposes a multi-channel fusion SAR image processing method that makes full use of the image information and the network’s ability to extract features; the model architecture and training are based on the latest You Only Look Once version 4 (YOLO-V4) deep learning framework. The YOLO-V4-light network was tailored for real-time use and deployment, significantly reducing the model size, detection time, number of parameters, and memory consumption, and the network was refined for three-channel images to compensate for the loss of accuracy due to light-weighting. The test experiments were completed entirely on a portable computer and achieved an Average Precision (AP) of 90.37% on the SAR Ship Detection Dataset (SSDD), simplifying the model while maintaining a lead over most existing methods. The YOLO-V4-light ship detection algorithm proposed in this paper has great practical value for maritime safety monitoring and emergency rescue.
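The paper feeds single-channel SAR data into a three-channel network input; the exact channel definitions are not stated in the abstract, so the choices below (raw amplitude, a crudely despeckled amplitude, and a gradient-magnitude map) are purely illustrative assumptions of how such a multi-channel fusion could look.

```python
import numpy as np
import cv2

def sar_to_three_channels(sar_amplitude: np.ndarray) -> np.ndarray:
    """Stack a single-channel SAR amplitude image into an (H, W, 3) network input."""
    img = cv2.normalize(sar_amplitude.astype(np.float32), None, 0, 255,
                        cv2.NORM_MINMAX).astype(np.uint8)
    despeckled = cv2.medianBlur(img, 5)                 # crude speckle suppression
    gx = cv2.Sobel(img, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(img, cv2.CV_32F, 0, 1)
    grad = cv2.convertScaleAbs(cv2.magnitude(gx, gy))   # edge/gradient channel
    return np.dstack([img, despeckled, grad])           # fed to the 3-channel detector
```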


2021, Vol 11 (8), pp. 3531
Author(s): Hesham M. Eraqi, Karim Soliman, Dalia Said, Omar R. Elezaby, Mohamed N. Moustafa, ...

Extensive research efforts have been devoted to identifying and improving roadway features that impact safety. Maintaining roadway safety features relies on costly manual operations of regular road surveying and data analysis. This paper introduces an automatic roadway safety feature detection approach, which harnesses artificial intelligence (AI) computer vision to make the process more efficient and less costly. Given a front-facing camera and a global positioning system (GPS) sensor, the proposed system automatically evaluates ten roadway safety features. The system is composed of an oriented (rotated) object detection model, which solves an orientation-encoding discontinuity problem to improve detection accuracy, and a rule-based roadway safety evaluation module. To train and validate the proposed model, a fully annotated dataset for roadway safety feature extraction was collected covering 473 km of roads. The baseline results of the proposed method are encouraging when compared with state-of-the-art models. Different oriented object detection strategies are presented and discussed, and the developed model improves the mean average precision (mAP) by 16.9% compared with the literature. The average prediction accuracy over the roadway safety features is 84.39% and ranges between 63.12% and 91.11%. The introduced model can pervasively enable or disable autonomous driving (AD) based on the safety features of the road, and can empower connected vehicles (CVs) to send and receive estimated safety features, alerting drivers to black spots or relatively less safe segments or roads.
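One common remedy for the orientation-encoding discontinuity mentioned above is to regress (sin 2θ, cos 2θ) instead of the raw angle, so that physically identical orientations map to the same regression target. This is a generic illustration of the problem and one fix, not necessarily the authors' exact encoding.

```python
import numpy as np

def encode_angle(theta_rad: float) -> np.ndarray:
    """Map a box orientation (periodic with pi) to a continuous 2-vector target."""
    return np.array([np.sin(2.0 * theta_rad), np.cos(2.0 * theta_rad)])

def decode_angle(vec: np.ndarray) -> float:
    """Recover the orientation in [0, pi) from the regressed 2-vector."""
    return 0.5 * np.arctan2(vec[0], vec[1]) % np.pi

# theta = 0.001 and theta = pi - 0.001 describe nearly the same rotated box, and their
# encodings are also close, removing the jump a raw-angle regression target would have.
print(encode_angle(0.001), encode_angle(np.pi - 0.001))
```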


2021, Vol 13 (4), pp. 721
Author(s): Zhongheng Li, Fang He, Haojie Hu, Fei Wang, Weizhong Yu

The collaborative representation-based detector (CRD), as the most representative anomaly detection method, has been widely applied in the field of hyperspectral anomaly detection (HAD). However, the sliding dual window of the original CRD introduces high computational complexity. Moreover, most HAD models consider only a single spectral or spatial feature of the hyperspectral image (HSI), which does not help improve detection accuracy. To solve these problems, in terms of both speed and accuracy, we propose a novel anomaly detection approach named Random Collective Representation-based Detector with Multiple Feature (RCRDMF). The method first extracts different features from the HSI, including the spectral feature, the Gabor feature, the extended multiattribute profile (EMAP) feature, and the extended morphological profile (EMP) feature, which enables the accuracy of HAD to be improved by combining multiple spectral and spatial features. The ensemble and random collaborative representation detector (ERCRD) method is then applied, which improves the anomaly detection speed. Finally, an adaptive weighting approach is proposed to calculate the weight for each feature. Experimental results on six hyperspectral datasets demonstrate that the proposed approach is superior in both accuracy and speed.
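A minimal sketch of the collaborative-representation anomaly score that underlies CRD-style detectors: a test pixel is represented by its background (outer-window) spectra and the reconstruction residual serves as the anomaly score. A plain ridge regularizer is used here instead of CRD's distance-weighted Tikhonov term, so this is a simplified illustration rather than the proposed RCRDMF method.

```python
import numpy as np

def cr_anomaly_score(y: np.ndarray, X_bg: np.ndarray, lam: float = 1e-2) -> float:
    """y: (B,) test spectrum; X_bg: (B, N) background spectra from the outer window."""
    G = X_bg.T @ X_bg + lam * np.eye(X_bg.shape[1])  # regularized Gram matrix
    alpha = np.linalg.solve(G, X_bg.T @ y)           # collaborative representation weights
    residual = y - X_bg @ alpha
    return float(np.linalg.norm(residual))           # large residual => likely anomaly
```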


2021, Vol 11 (13), pp. 6016
Author(s): Jinsoo Kim, Jeongho Cho

For autonomous vehicles, it is critical to be aware of the driving environment to avoid collisions and drive safely. The recent evolution of convolutional neural networks has contributed significantly to accelerating the development of object detection techniques that enable autonomous vehicles to handle rapid changes in various driving environments. However, collisions in an autonomous driving environment can still occur due to undetected obstacles and various perception problems, particularly occlusion. Thus, we propose a robust object detection algorithm for environments in which objects are truncated or occluded, employing an RGB image and light detection and ranging (LiDAR) bird’s eye view (BEV) representations. This structure combines independent detection results obtained in parallel through “you only look once” networks using an RGB image and a height map converted from the BEV representation of LiDAR point cloud data (PCD). The region proposal of an object is determined via non-maximum suppression, which suppresses the bounding boxes of adjacent regions. A performance evaluation of the proposed scheme was performed using the KITTI vision benchmark suite. The results demonstrate that the detection accuracy obtained when PCD BEV representations are integrated is superior to that obtained with an RGB camera alone. In addition, robustness is improved, with detection accuracy significantly enhanced even when the target objects are partially occluded when viewed from the front, demonstrating that the proposed algorithm outperforms the conventional RGB-based model.
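A sketch of the LiDAR-to-BEV step described above: points are binned onto a ground-plane grid and the maximum height per cell forms a height map that a YOLO-style network can consume. The grid extents and resolution below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def pcd_to_height_map(points: np.ndarray,
                      x_range=(0.0, 70.0), y_range=(-40.0, 40.0),
                      cell=0.1) -> np.ndarray:
    """points: (N, 3) array of x (forward), y (left), z (up) LiDAR coordinates."""
    w = int((x_range[1] - x_range[0]) / cell)
    h = int((y_range[1] - y_range[0]) / cell)
    height_map = np.zeros((h, w), dtype=np.float32)  # empty cells stay at 0

    # keep only points inside the grid
    m = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
         (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[m]
    col = ((pts[:, 0] - x_range[0]) / cell).astype(int)
    row = ((pts[:, 1] - y_range[0]) / cell).astype(int)

    # record the maximum point height per BEV cell
    np.maximum.at(height_map, (row, col), pts[:, 2])
    return height_map
```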


Sensors, 2021, Vol 21 (4), pp. 1081
Author(s): Tamon Miyake, Shintaro Yamamoto, Satoshi Hosono, Satoshi Funabashi, Zhengxue Cheng, ...

Gait phase detection, which detects foot-contact and foot-off states during walking, is important for various applications, such as synchronous robotic assistance and health monitoring. Gait phase detection systems have been proposed with various wearable devices, sensing inertial, electromyography, or force myography information. In this paper, we present a novel gait phase detection system with static standing-based calibration using muscle deformation information. The gait phase detection algorithm can be calibrated within a short time using muscle deformation data by standing in several postures; it is not necessary to collect data while walking for calibration. A logistic regression algorithm is used as the machine learning algorithm, and the probability output is adjusted based on the angular velocity of the sensor. An experiment is performed with 10 subjects, and the detection accuracy of foot-contact and foot-off states is evaluated using video data for each subject. The median accuracy is approximately 90% during walking based on calibration for 60 s, which shows the feasibility of the static standing-based calibration method using muscle deformation information for foot-contact and foot-off state detection.
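A sketch of the classification idea described above: a logistic-regression model maps muscle-deformation features to a foot-contact probability, which is then adjusted by the sensor's angular velocity. The feature dimensions, calibration labels and the adjustment rule below are illustrative assumptions; the abstract does not give the paper's exact formulation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical calibration data: muscle-deformation features recorded in several
# static standing postures, labelled 1 for foot-contact-like loading and 0 otherwise.
X_calib = np.random.rand(200, 8)
y_calib = (X_calib[:, 0] > 0.5).astype(int)
model = LogisticRegression().fit(X_calib, y_calib)

def detect_phase(x_frame: np.ndarray, gyro_rad_s: float, w: float = 0.1) -> str:
    """Classify one frame of features, gating the probability with the gyro reading."""
    p_contact = model.predict_proba(x_frame.reshape(1, -1))[0, 1]
    # illustrative adjustment: fast shank rotation makes foot contact less likely
    p_adj = np.clip(p_contact - w * abs(gyro_rad_s), 0.0, 1.0)
    return "foot-contact" if p_adj >= 0.5 else "foot-off"
```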


2016, Vol 23 (4), pp. 579-592
Author(s): Jaromir Przybyło, Eliasz Kańtoch, Mirosław Jabłoński, Piotr Augustyniak

Videoplethysmography is currently recognized as a promising noninvasive heart rate measurement method, advantageous for ubiquitous monitoring of humans in natural living conditions. Although the method is being considered for application in several areas, including telemedicine, sports and assisted living, its dependence on lighting conditions and camera performance has not yet been investigated sufficiently. In this paper we report on research into various image acquisition aspects, including the lighting spectrum, frame rate and compression. In the experimental part, we recorded five video sequences in various lighting conditions (fluorescent artificial light, dim daylight, infrared light, incandescent light bulb) using a programmable-frame-rate camera, with a pulse oximeter as the reference. For video-based heart rate measurement we implemented a pulse detection algorithm based on the power spectral density, estimated using Welch’s method. The results showed that the lighting conditions and selected video camera settings, including compression and the sampling frequency, influence the heart rate detection accuracy. The average heart rate error varies from 0.35 beats per minute (bpm) for fluorescent light to 6.6 bpm for dim daylight.
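A minimal sketch of a Welch-PSD pulse estimator of the kind described above: the mean skin-region intensity trace is analysed with Welch's method and the spectral peak inside a plausible heart-rate band is converted to beats per minute. The channel choice, band limits and window length are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.signal import welch

def heart_rate_bpm(green_trace: np.ndarray, fps: float) -> float:
    """green_trace: per-frame mean green intensity of the skin ROI; fps: camera frame rate."""
    x = green_trace - np.mean(green_trace)            # remove the DC component
    f, pxx = welch(x, fs=fps, nperseg=min(len(x), 256))
    band = (f >= 0.7) & (f <= 3.0)                    # roughly 42-180 bpm
    f_peak = f[band][np.argmax(pxx[band])]
    return 60.0 * f_peak
```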

