Hand off Steering Wheel Detection Using Sensor Data of Smartwatch and Smartphone

2021 ◽  
Author(s):  
Changchang He ◽  
Weining Liu ◽  
Dihua Sun ◽  
Zhiping Gao


Sensors ◽  
2018 ◽  
Vol 18 (9) ◽  
pp. 3052 ◽  
Author(s):  
Björn Reuper ◽  
Matthias Becker ◽  
Stefan Leinen

Localization algorithms based on global navigation satellite systems (GNSS) play an important role in automotive positioning. Due to the advent of autonomously driving cars, their importance is expected to grow even further in the coming years. Simultaneously, the performance requirements for these localization algorithms will increase, because they are no longer used exclusively for navigation but also for controlling the vehicle's movement. These requirements cannot be met with GNSS alone; instead, algorithms for sensor data fusion are needed. While the combination of GNSS receivers with inertial measurement units (IMUs) is a common approach, it is traditionally executed in a single-frequency/single-constellation architecture, usually with the Global Positioning System's (GPS) L1 C/A signal. With the advent of new GNSS constellations and civil signals on multiple frequencies, GNSS/IMU integration algorithm performance can be improved by utilizing these new data sources. To achieve this, we upgraded a tightly coupled GNSS/IMU integration algorithm to process measurements from GPS (L1 C/A, L2C, L5) and Galileo (E1, E5a, E5b). After investigating various combination strategies, we chose to preferably work with ionosphere-free combinations of L5-L1 C/A and E5a-E1 pseudo-ranges. L2C-L1 C/A and E5b-E1 combinations, as well as single-frequency pseudo-ranges on L1 and E1, serve as backup when no L5/E5a measurements are available. To process these six types of pseudo-range observations simultaneously, the differential code biases (DCBs) of the employed receiver need to be calibrated. Time-differenced carrier-phase measurements on L1 and E1 provide the algorithm with pseudo-range-rate observations. For additional aiding, the algorithm incorporates information about the vehicle's velocity obtained from an odometry model fed with the angular velocities of all four wheels and the steering wheel angle. 
To evaluate the performance improvement provided by these new data sources, two sets of measurement data are collected and the resulting navigation solutions are compared to a higher-grade reference system, consisting of a geodetic GNSS receiver for real-time kinematic positioning (RTK) and a navigation grade IMU. The multi-frequency/multi-constellation algorithm with odometry aiding achieves a 3-D root mean square (RMS) position error of 3.6 m / 2.1 m in these data sets, compared to 5.2 m / 2.9 m for the single-frequency GPS algorithm without odometry aiding. Odometry is most beneficial to positioning accuracy when GNSS measurement quality is poor. This is demonstrated in data set 1, resulting in a reduction of the horizontal position error’s 95% quantile from 6.2 m without odometry aiding to 4.2 m with odometry aiding.
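The ionosphere-free combinations mentioned above exploit the fact that the first-order ionospheric delay scales with the inverse square of the carrier frequency. A minimal sketch of that combination for GPS L1 C/A and L5 pseudo-ranges is shown below; the carrier frequencies are the published GPS values, but the range and delay numbers are illustrative, not from the paper's data sets.

```python
F_L1 = 1575.42e6  # GPS L1 carrier frequency [Hz]
F_L5 = 1176.45e6  # GPS L5 carrier frequency [Hz]

def iono_free(p1: float, p5: float) -> float:
    """First-order ionosphere-free combination of L1/L5 pseudo-ranges [m]."""
    g = (F_L1 / F_L5) ** 2
    return (g * p1 - p5) / (g - 1.0)

# Example: a 5 m ionospheric delay on L1 appears on L5 scaled by (F_L1/F_L5)^2.
# The combination cancels the delay and recovers the geometric range.
rho = 22_000_000.0                 # true geometric range [m] (illustrative)
d1 = 5.0                           # ionospheric delay on L1 [m]
d5 = d1 * (F_L1 / F_L5) ** 2       # corresponding delay on L5 [m]
print(iono_free(rho + d1, rho + d5))  # ≈ 22,000,000 m (delay removed)
```

The combination amplifies pseudo-range noise, which is one reason the paper falls back to single-frequency measurements only when no second frequency is available.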


Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5197
Author(s):  
Beomjun Kim ◽  
Yunju Baek

Advances in vehicle technology have resulted in the development of vehicles equipped with sensors to acquire standardized information such as engine speed and vehicle speed from the in-vehicle controller area network (CAN) system. However, there are challenges in acquiring proprietary information from CAN frames, such as brake pedal and steering wheel operation, which is essential for driver behavior analysis. Extracting such information requires electronic control unit identifier analysis and accompanying data interpretation. In this paper, we present a system for the automatic extraction of proprietary in-vehicle information using sensor data correlated with the desired information. First, the proposed system estimates the vehicle's driving status through threshold-, random forest-, and long short-term memory-based techniques using inertial measurement unit and global positioning system values. Then, the system segments in-vehicle CAN frames using this estimate and evaluates each segment with our scoring method, selecting suitable candidates by examining the similarity between each candidate and the estimate through the suggested distance-matching technique. We conduct comprehensive experiments with the proposed system using real vehicles in an urban environment. Performance evaluation shows that the driving-status estimation accuracy is 84.20% and the in-vehicle information extraction accuracy is 82.31%, which implies that the presented approaches are quite feasible for the automatic extraction of proprietary in-vehicle information.
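The candidate-selection step above can be sketched as follows: given a driving-status signal estimated from IMU/GPS data and several candidate byte streams decoded from CAN frames, score each candidate by its distance to the estimate after normalization and keep the closest one. The Euclidean distance used here is an illustrative stand-in for the paper's distance-matching technique, and all signal values are made up.

```python
def _normalize(xs):
    """Min-max scale a sequence to [0, 1]; constant signals map to zeros."""
    lo, hi = min(xs), max(xs)
    span = (hi - lo) or 1.0
    return [(x - lo) / span for x in xs]

def best_candidate(estimate, candidates):
    """Return (index, distance) of the candidate closest to the estimate."""
    est = _normalize(estimate)
    scores = []
    for i, cand in enumerate(candidates):
        c = _normalize(cand)
        dist = sum((a - b) ** 2 for a, b in zip(est, c)) ** 0.5
        scores.append((dist, i))
    dist, i = min(scores)
    return i, dist

estimate = [0, 0, 3, 7, 9, 4, 0]          # inferred brake intensity over time
candidates = [
    [10, 10, 10, 10, 10, 10, 10],         # constant stream (unrelated)
    [1, 0, 30, 72, 95, 41, 2],            # brake-pressure-like stream
    [50, 52, 49, 51, 50, 48, 50],         # temperature-like stream
]
idx, _ = best_candidate(estimate, candidates)
print(idx)  # → 1
```

Normalizing before comparison matters because proprietary CAN signals use unknown scale factors and offsets; only the signal's shape is comparable to the estimate.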


Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2372
Author(s):  
Yongsu Jeon ◽  
Beomjun Kim ◽  
Yunju Baek

Drowsy driving is a major threat to the safety of drivers and road traffic. Accurate and reliable drowsy driving detection technology can reduce accidents caused by drowsy driving. In this study, we present a new method to detect drowsy driving using vehicle sensor data obtained from the steering wheel and pedal pressure. From our empirical study, we categorized drowsy driving into long-duration and short-duration drowsy driving. Furthermore, we propose an ensemble network model composed of convolutional neural networks that can detect each type of drowsy driving. Each subnetwork is specialized to detect long- or short-duration drowsy driving using a fusion of features obtained through time series analysis. To train the proposed network efficiently, we propose an imbalanced-data-handling method that adjusts the ratio of normal driving data to drowsy driving data in the dataset by partially removing normal driving data. A dataset comprising 198.3 h of in-vehicle sensor data was acquired through a driving simulation that includes a variety of road environments, such as urban environments and highways. The performance of the proposed model was evaluated on this dataset, achieving detection of drowsy driving with an accuracy of up to 94.2%.
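The imbalance-handling idea described above can be sketched as random downsampling of the majority class: drop normal-driving windows until the normal-to-drowsy ratio reaches a target. The target ratio below is an assumption for illustration, not the value used in the paper.

```python
import random

def rebalance(windows, labels, target_ratio=2.0, seed=0):
    """Randomly drop normal-driving windows (label 0) so that
    #normal <= target_ratio * #drowsy (label 1)."""
    rng = random.Random(seed)
    normal = [i for i, y in enumerate(labels) if y == 0]
    drowsy = [i for i, y in enumerate(labels) if y == 1]
    keep_n = min(len(normal), int(target_ratio * len(drowsy)))
    kept = set(rng.sample(normal, keep_n)) | set(drowsy)
    idx = sorted(kept)
    return [windows[i] for i in idx], [labels[i] for i in idx]

labels = [0] * 90 + [1] * 10     # 9:1 imbalance between normal and drowsy
windows = list(range(100))       # stand-ins for sensor-data windows
w2, y2 = rebalance(windows, labels)
print(sum(y2), len(y2))          # → 10 30  (all drowsy kept, ratio now 2:1)
```

Keeping every drowsy window while thinning normal ones preserves the scarce positive class, at the cost of discarding some majority-class data.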


2019 ◽  
Vol 15 (9) ◽  
pp. 155014771987245 ◽  
Author(s):  
Zuojin Li ◽  
Qing Yang ◽  
Shengfu Chen ◽  
Wei Zhou ◽  
Liukui Chen ◽  
...  

The study of robust fatigue-feature learning methods for the driver's operational behavior is of great significance for improving the performance of real-time driver fatigue detection systems. To extract more abstract and deeper features from the driver's steering operation data during robust feature learning, this article constructs a fuzzy recurrent neural network model comprising an input layer, a fuzzy layer, a hidden layer, and an output layer. The steering-wheel angle time series is fed to the input layer through a fixed time window; after fuzzification, it is passed to the hidden layer, whose shared weights memorize fatigue features and deepen the features extracted from the steering-wheel angle sequence. The experimental results show that the proposed model achieves an average recognition rate of 87.30% on a fatigue sample database collected under real vehicle conditions, which indicates that the model is robust across different subjects in real driving conditions. The model proposed in this article has important theoretical and engineering significance for studying the prediction of fatigue driving under real driving conditions.
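The fixed-time-window feed described above is a standard windowing step; a minimal sketch is shown below. The window width and stride are illustrative assumptions, not the paper's parameters.

```python
def sliding_windows(series, width, stride):
    """Yield fixed-length windows over a 1-D time series."""
    for start in range(0, len(series) - width + 1, stride):
        yield series[start:start + width]

# Steering-wheel angle samples [rad] (illustrative values)
angles = [0.0, 0.1, 0.3, 0.2, -0.1, -0.4, -0.2, 0.0]
windows = list(sliding_windows(angles, width=4, stride=2))
print(len(windows))   # → 3
print(windows[0])     # → [0.0, 0.1, 0.3, 0.2]
```

Each window would then be fuzzified and passed through the recurrent hidden layer, so consecutive overlapping windows let the shared weights accumulate fatigue-related temporal patterns.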


2020 ◽  
Vol 20 (1) ◽  
Author(s):  
David E. Anderson ◽  
John P. Bader ◽  
Emily A. Boes ◽  
Meghal Gagrani ◽  
Lynette M. Smith ◽  
...  

Abstract
Background: Driving simulators are a safe alternative to on-road vehicles for studying driving behavior in glaucoma drivers. Visual field (VF) loss severity is associated with higher driving simulator crash risk, though the mechanisms explaining this relationship remain unknown. Furthermore, associations between driving behavior and neurocognitive performance in glaucoma are unexplored. Here, we evaluated the hypothesis that VF loss severity and neurocognitive performance interact to influence simulated vehicle control in glaucoma drivers.
Methods: Glaucoma patients (n = 25) and glaucoma suspects (n = 18) were recruited into the study. All had > 20/40 corrected visual acuity in each eye and were experienced field takers with at least three stable (reliability > 20%) fields over the last 2 years. A diagnosis of a neurological disorder or cognitive impairment was an exclusion criterion. Binocular VFs were derived from monocular Humphrey VFs to estimate a binocular VF index (OU-VFI). The Montreal Cognitive Assessment (MoCA) was administered to assess global and sub-domain neurocognitive performance. The National Eye Institute Visual Function Questionnaire (NEI-VFQ) was administered to assess the peripheral vision and driving difficulties sub-scores. Driving performance was evaluated using a driving simulator with a 290° panoramic field of view constructed around a full-sized automotive cab. Vehicle control metrics, such as lateral acceleration variability and steering wheel variability, were calculated from vehicle sensor data while patients drove on a straight two-lane rural road. Linear mixed models were constructed to evaluate associations between driving performance and clinical characteristics.
Results: Patients were 9.5 years older than suspects (p = 0.015). OU-VFI in the glaucoma group ranged from 24% to 98% (85.6 ± 18.3; M ± SD). OU-VFI was associated with the MoCA total score (p = 0.0066) and with the visuo-spatial and executive function sub-domain scores (p = 0.012). During driving simulation, patients showed greater steering wheel variability (p = 0.0001) and lateral acceleration variability (p < 0.0001) relative to suspects. Greater steering wheel variability was independently associated with OU-VFI (p = 0.0069), MoCA total scores (p = 0.028), and VFQ driving sub-scores (p = 0.0087), but not age (p = 0.61).
Conclusions: Poor vehicle control was independently associated with greater VF loss and worse neurocognitive performance, suggesting both factors contribute to information-processing models of driving performance in glaucoma. Future research must demonstrate the external validity of the current findings to on-road performance in glaucoma.
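The "steering wheel variability" metric used above is commonly operationalized as the standard deviation of the steering-wheel angle over the drive; a minimal sketch under that assumption follows (the study's exact definition may differ, and the trace values are invented).

```python
import math

def steering_variability(angles):
    """Sample standard deviation of steering-wheel angle [deg]."""
    n = len(angles)
    mean = sum(angles) / n
    return math.sqrt(sum((a - mean) ** 2 for a in angles) / (n - 1))

# Hypothetical steering-wheel angle trace [deg] sampled along a straight road
trace = [0.5, -0.3, 1.2, -0.8, 0.1, 0.4, -0.6, 0.9]
print(round(steering_variability(trace), 3))  # → 0.709
```

On a straight road the nominal steering angle is near zero, so a larger standard deviation directly reflects more corrective steering and poorer lane keeping.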


Author(s):  
Cephas Alves da Silveira Barreto ◽  
João C. Xavier-Júnior ◽  
Anne M. P. Canuto ◽  
Ivanovitch M. D. Da Silva

The potential for processing car sensing data has increased in recent years due to the development of new technologies. This type of data is important, for instance, for analyzing the way drivers behave behind the steering wheel. Many studies have addressed driver behavior by developing smartphone-based telematics systems. However, very little has been done to analyze car usage patterns based on car engine sensor data, and the full potential of considering all sensors within a car engine has therefore not been explored. Aiming to bridge this gap, this paper proposes the use of Machine Learning techniques (supervised and unsupervised) on automotive engine sensor data to discover drivers' usage patterns and to perform classification through a distributed online sensing platform. We believe that such a platform can be useful in different domains, such as fleet management, the insurance market, fuel consumption optimization, and CO2 emission reduction, among others.


Author(s):  
Seyedmeysam Khaleghian ◽  
Saied Taheri

The intelligent tire is a relatively new technology that provides useful tire-road contact information by directly monitoring the interaction between the tire and the road. Different types of sensors are attached to the tire inner-liner for this purpose; the sensor data are then used to estimate tire-road contact parameters and to monitor tire condition. In this study, a tri-axial accelerometer was used and a two-step intelligent-tire-based pressure monitoring algorithm was developed. First, the angular velocity of the wheel was estimated from parameters extracted from the acceleration components through a trained neural network. The estimated wheel angular velocity was then used, along with the acceleration components, to estimate the power of the radial acceleration. The estimated power was compared to the measured one, and the tire pressure condition was judged to be “normal” or “low”. To train the neural networks, experimental data collected with an instrumented vehicle were used. A 2003 VW Jetta was instrumented with the appropriate sensors: intelligent tires, a steering wheel sensor to measure steering angle, steering velocity, and steering torque, encoders to measure the angular speed of the wheels, and an inertial measurement unit (IMU) to measure the vehicle's linear and angular accelerations. A separate set of experimental data with different tire pressures and vehicle velocities was then used to validate the algorithm; good agreement was observed between the estimated and actual tire pressures.
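The second step above, comparing the measured power of the radial acceleration against an expected value to flag the pressure condition, can be sketched as below. The signal-power definition, the threshold, and the synthetic signals are illustrative assumptions; the paper's expected power comes from a trained neural network.

```python
def signal_power(samples):
    """Mean squared deviation from the mean of a signal."""
    mean = sum(samples) / len(samples)
    return sum((s - mean) ** 2 for s in samples) / len(samples)

def pressure_status(radial_accel, expected_power, rel_tol=0.25):
    """Classify tire pressure as "normal" or "low" from the power of the
    radial-acceleration signal relative to an expected value."""
    measured = signal_power(radial_accel)
    return "low" if measured < (1.0 - rel_tol) * expected_power else "normal"

# An under-inflated tire deforms differently, changing the radial-acceleration
# signature; here a weaker oscillation stands in for that effect.
normal_sig = [10 * (-1) ** i for i in range(100)]   # power = 100
soft_sig   = [6 * (-1) ** i for i in range(100)]    # power = 36
print(pressure_status(normal_sig, expected_power=100.0))  # → normal
print(pressure_status(soft_sig, expected_power=100.0))    # → low
```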


2021 ◽  
Vol 13 (3) ◽  
pp. 1342
Author(s):  
Wencai Sun ◽  
Yihao Si ◽  
Mengzhu Guo ◽  
Shiwu Li

Distracted driving has become a major cause of road traffic accidents. There are generally four types of distraction: manual, visual, auditory, and cognitive. Manual distractions are the most common. Previous studies have used physiological indicators, vehicle behavior parameters, or machine-vision features to support research. However, these technologies are not suitable for an in-vehicle environment. To address this need, this study examined a non-intrusive method for detecting in-transit manual distractions. Wrist kinematics data from 20 drivers were collected using wearable inertial measurement units (IMUs) to detect four common gestures made while driving: dialing a hand-held cellular phone, adjusting the audio or climate controls, reaching for an object in the back seat, and maneuvering the steering wheel to stay in the lane. The study proposes a progressive classification model for gesture recognition that includes two major time-based sequencing components and a Hidden Markov Model (HMM). Results show that the accuracy of detecting disturbances was 95.52%, and the accuracy of recognizing manual distractions reached 96.63% using the proposed model. The overall model has the advantage of being sensitive to motion perception, effectively solving the problem of a fall-off in recognition performance due to excessive disturbances in motion samples.
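The HMM stage of such a pipeline labels observation sequences with the most likely hidden gesture states; a minimal Viterbi decoder is sketched below. The two hidden states ("steer", "reach") and all probabilities are made-up illustrative numbers, not the paper's trained parameters.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state path for a discrete observation sequence."""
    # V[t][s] = (best probability of reaching state s at time t, best path)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
    for o in obs[1:]:
        row = {}
        for s in states:
            p, path = max(
                (V[-1][prev][0] * trans_p[prev][s] * emit_p[s][o], V[-1][prev][1])
                for prev in states
            )
            row[s] = (p, path + [s])
        V.append(row)
    return max(V[-1].values())[1]

states = ("steer", "reach")
start_p = {"steer": 0.8, "reach": 0.2}
trans_p = {"steer": {"steer": 0.9, "reach": 0.1},
           "reach": {"steer": 0.3, "reach": 0.7}}
# Observations: wrist-motion magnitude bucketed as "small" or "large"
emit_p = {"steer": {"small": 0.8, "large": 0.2},
          "reach": {"small": 0.1, "large": 0.9}}

obs = ("small", "small", "large", "large")
print(viterbi(obs, states, start_p, trans_p, emit_p))
# → ['steer', 'steer', 'reach', 'reach']
```

The transition probabilities encode gesture persistence (a reach, once started, tends to continue), which is what lets the HMM smooth over momentary disturbances in the wrist-motion samples.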


2012 ◽  
Vol 5 (5) ◽  
pp. 17
Author(s):  
MARY ELLEN SCHNEIDER

2012 ◽  
Author(s):  
Anthony D. McDonald ◽  
Chris Schwarz ◽  
John D. Lee ◽  
Timothy L. Brown
