Physiological Sensor Fusion for Real-Time Pilot Workload Prediction and Overload Prevention

2022
Author(s): Matthew Masters, Axel Schulte

2021
Vol 21 (2), pp. 2241-2255
Author(s): Tzuu-Hseng S. Li, Ping-Huan Kuo, Chuan-Han Cheng, Chia-Ching Hung, Po-Chien Luan, ...

2021
Vol 9
Author(s): Sharnil Pandya, Anirban Sur, Nitin Solke

The presented deep-learning and sensor-fusion-based assistive technology (a smart face-mask and thermal-scanning kiosk) protects individuals through automatic face-mask detection and automatic thermal scanning of current body temperature. The system also issues a variety of notifications, such as raising an alarm when an individual is not wearing a mask or when the detected body temperature exceeds the standard threshold of 98.6°F (37°C). Design/methodology/approach: The presented deep-learning and sensor-fusion-based approach detects whether an individual is wearing a mask and notifies security personnel by raising an alarm. Moreover, the smart tunnel is equipped with a thermal sensing unit embedded with a camera, which measures an individual's real-time body temperature against the limits prescribed in WHO reports. Findings: The investigation results validate the performance of the presented smart face-mask and thermal-scanning mechanism. The system detects whether an outsider entering the building is wearing a mask and alerts the security control room by raising the appropriate alarm. Furthermore, the smart epidemic tunnel is embedded with an intelligent algorithm that performs real-time thermal scanning of an individual and stores essential information on a cloud platform such as Google Firebase. The proposed system thus benefits society by saving time and helping to curb the spread of coronavirus.
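As a rough illustration of the decision logic the abstract describes, here is a minimal Python sketch. The mask classifier, thermal reading, alarm callback, and cloud logger are hypothetical stubs standing in for the kiosk's components; none of these names come from the authors' implementation.

    # Minimal sketch of the kiosk's screening rules; all callbacks are
    # hypothetical stand-ins, not the authors' actual system.

    TEMP_THRESHOLD_F = 98.6  # standard body-temperature threshold (37°C)

    def screen_individual(wearing_mask: bool, temp_f: float,
                          raise_alarm, log_to_cloud):
        """Apply the mask and temperature rules to one screening event."""
        if not wearing_mask:
            raise_alarm("No face mask detected")                  # alert security
        if temp_f > TEMP_THRESHOLD_F:
            raise_alarm(f"Elevated temperature: {temp_f:.1f} F")  # alert security
        # Store essential screening information on the cloud platform
        # (e.g., Google Firebase, as in the paper).
        log_to_cloud({"mask": wearing_mask, "temperature_f": temp_f})

    if __name__ == "__main__":
        # Stub callbacks standing in for the alarm unit and the cloud logger.
        screen_individual(
            wearing_mask=False,
            temp_f=99.4,
            raise_alarm=lambda msg: print("ALARM:", msg),
            log_to_cloud=lambda record: print("LOGGED:", record),
        )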


Author(s): V. Cherkassky, H. Lari-Najaffi, N.L. Lawrie, D. Masson, D.W. Pritty

Sensors
2019
Vol 19 (7), pp. 1584
Author(s): Yushan Li, Wenbo Zhang, Xuewu Ji, Chuanxiang Ren, Jian Wu

The lane curvature output by a vision sensor can jump over short periods because of shadows, lighting changes, and broken lane lines, which causes serious problems for autonomous driving control. Predicting or compensating for the real lane in real time during such sensor jumps is therefore particularly important. This paper presents a lane compensation method based on multi-sensor fusion of a global positioning system (GPS), an inertial measurement unit (IMU), and vision sensors. A cubic polynomial function of the longitudinal distance is selected as the lane model. In this method, a Kalman filter estimates vehicle velocity and yaw angle from GPS and IMU measurements, and a vehicle kinematics model describes the vehicle motion. The geometric relationship between the vehicle and the relative lane motion at the current moment is then used to solve for the coefficients of the lane polynomial at the next moment. Simulation and vehicle test results show that the predicted information can compensate for vision-sensor failures with good real-time performance, robustness, and accuracy.
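To make the coefficient-propagation step concrete, here is a hedged Python sketch of one way to carry the cubic lane model y = c0 + c1·x + c2·x² + c3·x³ (vehicle frame) forward by one time step, given the velocity and yaw rate estimated by the GPS/IMU Kalman filter. The sample-transform-refit approach and all names below are illustrative assumptions, not the paper's closed-form solution.

    # Sketch: propagate the cubic lane polynomial to the next vehicle frame.
    # Sample the current lane, apply the vehicle's rigid-body motion over dt,
    # and refit the cubic by least squares. Illustrative, not the paper's method.

    import numpy as np

    def propagate_lane(coeffs, v, yaw_rate, dt, x_range=(0.0, 50.0), n=50):
        """Return lane coefficients (c0, c1, c2, c3) in the frame at t + dt.

        coeffs   : current lane polynomial, lowest power first
        v        : vehicle speed [m/s] from the Kalman filter
        yaw_rate : yaw rate [rad/s] from the Kalman filter
        dt       : prediction horizon [s]
        """
        # Sample lane points in the current vehicle frame.
        x = np.linspace(*x_range, n)
        y = np.polyval(coeffs[::-1], x)       # np.polyval wants highest power first

        # Vehicle motion over dt (kinematic model, midpoint-heading approximation).
        dpsi = yaw_rate * dt                  # heading change
        dx = v * dt * np.cos(dpsi / 2)
        dy = v * dt * np.sin(dpsi / 2)

        # Express the sampled points in the next vehicle frame:
        # translate by (-dx, -dy), then rotate by -dpsi.
        c, s = np.cos(-dpsi), np.sin(-dpsi)
        xs, ys = x - dx, y - dy
        x_new = c * xs - s * ys
        y_new = s * xs + c * ys

        # Refit the cubic lane model in the new frame.
        return np.polyfit(x_new, y_new, 3)[::-1]  # back to lowest power first

For example, propagate_lane((0.0, 0.01, 1e-4, 1e-6), v=20.0, yaw_rate=0.02, dt=0.1) yields the compensated coefficients one step ahead; in the paper's setting these stand in for the vision output while the sensor is jumping.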

