Centralised and Decentralised Sensor Fusion-Based Emergency Brake Assist

Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5422
Author(s):  
Ankur Deo ◽  
Vasile Palade ◽  
Md. Nazmul Huda

Many advanced driver assistance systems (ADAS) are currently trying to utilise multi-sensor architectures, where the driver assistance algorithm receives data from a multitude of sensors. As mono-sensor systems cannot provide reliable and consistent readings under all circumstances because of errors and other limitations, fusing data from multiple sensors ensures that the environmental parameters are perceived correctly and reliably for most scenarios, thereby substantially improving the reliability of multi-sensor-based automotive systems. This paper first highlights the significance of efficiently fusing data from multiple sensors in ADAS features. An emergency brake assist (EBA) system is showcased using multiple sensors, namely, a light detection and ranging (LiDAR) sensor and a camera. The architectures of the proposed ‘centralised’ and ‘decentralised’ sensor fusion approaches for EBA are discussed along with their constituents, i.e., the detection algorithms, the fusion algorithm, and the tracking algorithm. The centralised and decentralised architectures are built and analytically compared, and the performance of these two fusion architectures for EBA is evaluated in terms of speed of execution, accuracy, and computational cost. While both fusion methods are seen to drive the EBA application at an acceptable frame rate (~20 fps or higher) on an Intel i5-based Ubuntu system, it was concluded through the experiments and analytical comparisons that the decentralised fusion-driven EBA leads to higher accuracy; however, it has the downside of a higher computational cost. The centralised fusion-driven EBA yields comparatively less accurate results, but with the benefits of a higher frame rate and a lower computational cost.
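The distinction between the two architectures can be illustrated with a minimal sketch: in centralised fusion, raw measurements are fused first and a single tracker runs on the fused stream; in decentralised fusion, each sensor runs its own tracker and the tracked outputs are fused. The inverse-variance weighting, the exponential smoother standing in for a tracker, and all sensor readings and variances below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def inverse_variance_fusion(estimates, variances):
    """Combine independent estimates of one quantity, weighting each
    by the inverse of its variance (minimum-variance linear fusion)."""
    w = 1.0 / np.asarray(variances, dtype=float)
    fused = float(np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w))
    return fused, float(1.0 / np.sum(w))

def low_pass(prev, meas, alpha=0.5):
    """Toy per-stream tracker: exponential smoothing."""
    return alpha * meas + (1 - alpha) * prev

# Hypothetical range-to-obstacle readings (metres) over three frames.
lidar = [20.4, 20.1, 19.8]   # low-noise sensor,  assumed var ~ 0.04
camera = [21.0, 19.2, 20.3]  # noisier sensor,    assumed var ~ 0.25

# Centralised: fuse the raw measurements each frame, track the fused stream.
central = None
for l, c in zip(lidar, camera):
    fused, _ = inverse_variance_fusion([l, c], [0.04, 0.25])
    central = fused if central is None else low_pass(central, fused)

# Decentralised: each sensor tracks independently; fuse the tracked outputs.
tl = tc = None
for l, c in zip(lidar, camera):
    tl = l if tl is None else low_pass(tl, l)
    tc = c if tc is None else low_pass(tc, c)
decentral, _ = inverse_variance_fusion([tl, tc], [0.04, 0.25])
```

The decentralised path does twice the tracking work per frame, which mirrors the paper's finding of higher computational cost for that architecture.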

2021 ◽  
Vol 11 (13) ◽  
pp. 5900
Author(s):  
Yohei Fujinami ◽  
Pongsathorn Raksincharoensak ◽  
Shunsaku Arita ◽  
Rei Kato

Advanced driver assistance systems (ADAS) for crash avoidance, when making a right-turn in left-hand traffic or a left-turn in right-hand traffic, are expected to further reduce the number of traffic accidents caused by automobiles. Accurate future trajectory prediction of the ego vehicle is important for risk prediction, so that the assistance system is activated correctly. Our objectives are to propose a trajectory prediction method for ADAS for safe intersection turnings and to evaluate the effectiveness of the proposed prediction method. Our proposed curve generation method is capable of generating a smooth curve without discontinuities in the curvature. By incorporating the curve generation method into the vehicle trajectory prediction, the proposed method could simulate the actual driving path of human drivers at a low computational cost. The curve is defined by the positions, angles, and curvatures at its initial and terminal points. Driving experiments conducted at real city traffic intersections proved that the proposed method could predict the trajectory with a high degree of accuracy for various shapes and sizes of intersections. This paper also describes a method to determine the terminal conditions of the curve generation method from intersection features. We hypothesised that these conditions can be defined individually from the intersection geometry. From this hypothesis, a formula to determine the parameters was derived empirically from the driving experiments. Public road driving experiments indicated that the parameters for the trajectory prediction could be appropriately estimated by the obtained empirical formula.
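A curve constrained by position, angle, and curvature at both endpoints can be sketched, for small heading angles, as a quintic polynomial: six boundary conditions determine six coefficients, and matching the second derivative at both ends avoids curvature jumps. This is a generic illustration of such boundary conditions, not the paper's actual curve generation method, and all numeric endpoint values are hypothetical.

```python
import numpy as np

def quintic_through(L, y0, dy0, ddy0, y1, dy1, ddy1):
    """Coefficients c of y(x) = sum c[k] x**k on [0, L] matching value,
    slope (heading proxy), and second derivative (curvature proxy for
    small slopes) at both endpoints -- six constraints, six unknowns."""
    def rows(x):
        return [
            [x**k for k in range(6)],                            # y(x)
            [0.0] + [k * x**(k - 1) for k in range(1, 6)],       # y'(x)
            [0.0, 0.0] + [k * (k - 1) * x**(k - 2)
                          for k in range(2, 6)],                 # y''(x)
        ]
    A = np.array(rows(0.0) + rows(float(L)), dtype=float)
    b = np.array([y0, dy0, ddy0, y1, dy1, ddy1], dtype=float)
    return np.linalg.solve(A, b)

# Hypothetical left turn over 10 m of longitudinal travel:
# start straight and flat, end offset 5 m with slope 1 and zero curvature.
c = quintic_through(L=10.0, y0=0.0, dy0=0.0, ddy0=0.0,
                    y1=5.0, dy1=1.0, ddy1=0.0)
y = lambda x: sum(c[k] * x**k for k in range(6))
```

Because the second derivative is continuous by construction, chaining such segments with matched endpoint conditions yields a path without curvature discontinuities, which is the property the abstract emphasises.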


2017 ◽  
Vol 865 ◽  
pp. 429-433
Author(s):  
Sung Bum Park ◽  
Hyeok Chan Kwon ◽  
Dong Hoon Lee

Autonomous cars recognize their surroundings through multiple sensors and make decisions to control the car in order to arrive at the destination without the driver's intervention. In such an environment, if sensor data forgery occurs, it could lead to a critical accident that threatens the life of the driver. This paper discusses a way to obtain accurate driving information through a sensor fusion algorithm that is resilient against data forgery and modulation.
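One standard way to make fusion resilient to a forged reading is to exploit sensor redundancy: a median-based fuser bounds the influence of any single falsified value, unlike a plain average. This is a generic illustration of the principle, not the algorithm from the paper; the readings and tolerance below are hypothetical.

```python
import statistics

def resilient_fuse(readings, tolerance):
    """Median-anchored fusion: average only the readings within
    `tolerance` of the median, and flag the rest as suspect.
    A single forged reading cannot drag the fused value outside
    the band of the honest majority."""
    med = statistics.median(readings)
    honest = [r for r in readings if abs(r - med) <= tolerance]
    suspects = [r for r in readings if abs(r - med) > tolerance]
    return sum(honest) / len(honest), suspects

# Three redundant speed readings (km/h); one forged low, e.g. to
# suppress a braking decision.
fused, flagged = resilient_fuse([52.1, 51.8, 10.0], tolerance=5.0)
```

A naive mean of the same readings would be about 38 km/h, badly corrupted by the forged value, whereas the median-anchored estimate stays near the honest pair.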


Sensors ◽  
2019 ◽  
Vol 19 (17) ◽  
pp. 3808 ◽  
Author(s):  
Antonio A. Aguileta ◽  
Ramon F. Brena ◽  
Oscar Mayora ◽  
Erik Molino-Minero-Re ◽  
Luis A. Trejo

In Ambient Intelligence (AmI), the activity a user is engaged in is an essential part of the context, so its recognition is of paramount importance for applications in areas like sports, medicine, personal safety, and so forth. The concurrent use of multiple sensors for recognition of human activities in AmI is good practice, because the information missed by one sensor can sometimes be provided by the others, and many works have shown an accuracy improvement compared to single sensors. However, there are many different ways of integrating the information from each sensor, and almost every author reporting sensor fusion for activity recognition uses a different variant or combination of fusion methods, so the need for clear guidelines and generalizations in sensor data integration seems evident. In this survey we review, following a classification, the many fusion methods for information acquired from sensors that have been proposed in the literature for activity recognition; we examine their relative merits, as reported and in some cases replicated; we compare these methods; and we assess the trends in the area.
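Two of the most common integration points such surveys classify are feature-level (early) fusion, where per-sensor feature vectors are concatenated before a single classifier, and decision-level (late) fusion, where per-sensor classifier outputs are combined, e.g. by majority vote. The minimal sketch below illustrates only those two generic patterns; the sensor names, features, and labels are hypothetical.

```python
from collections import Counter

def feature_level_fusion(feature_sets):
    """Early fusion: concatenate per-sensor feature vectors into one
    vector to be fed to a single classifier."""
    return [v for feats in feature_sets for v in feats]

def decision_level_fusion(labels):
    """Late fusion: majority vote over per-sensor classifier outputs."""
    return Counter(labels).most_common(1)[0][0]

# Hypothetical features from an accelerometer and a gyroscope.
accel_feats, gyro_feats = [0.8, 0.1], [0.3, 0.5]
fused_vec = feature_level_fusion([accel_feats, gyro_feats])

# Hypothetical per-sensor activity decisions.
vote = decision_level_fusion(["walking", "walking", "running"])
```

Early fusion lets one model learn cross-sensor correlations but requires synchronised data; late fusion tolerates missing sensors at the cost of discarding those correlations, which is one axis along which the surveyed methods differ.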


2020 ◽  
Author(s):  
Pradip Kumar Sarkar

Topic: Driver assistance technology is emerging as a new driving technology, popularly known as ADAS. It is supported by adaptive cruise control, automatic emergency braking, blind spot monitoring, lane change assistance, forward collision warnings, etc. It is an important platform that integrates these multiple applications by using data from multifunction sensors (cameras, radars, lidars, etc.) and sending commands to multiple actuators (engine, brake, steering, etc.). ADAS technology can detect some objects, perform basic classification, alert the driver to hazardous road conditions, and, in some cases, slow or stop the vehicle. The architecture of the electronic control units (ECUs) responsible for executing advanced driver assistance systems (ADAS) in the vehicle is evolving in response to the demands of the driving process. Automotive system architecture integrates multiple applications into ADAS ECUs that serve multiple sensors for their functions. The hardware architecture of ADAS and autonomous driving includes automotive Ethernet, TSN, Ethernet switches and gateways, and domain controllers, while the software architecture includes AUTOSAR Classic and Adaptive, ROS 2.0, and QNX. This chapter explains the functioning of assistance driving technology with the help of its architecture and various types of sensors.


2021 ◽  
Vol 15 ◽  
Author(s):  
Francesco Rundo ◽  
Sabrina Conoci ◽  
Concetto Spampinato ◽  
Roberto Leotta ◽  
Francesca Trenta ◽  
...  

In recent years, the automotive field has been changed by the accelerated rise of new technologies. Specifically, autonomous driving has revolutionized the car manufacturers' approach to designing advanced systems compliant with vehicle environments. As a result, there is a growing demand for the development of intelligent technology in order to make modern vehicles safer and smarter. The impact of such technologies has led to the development of the so-called Advanced Driver Assistance Systems (ADAS), designed to maintain control of the vehicle in order to avoid potentially dangerous situations while driving. Several studies confirmed that an inadequate physiological condition of the driver can compromise the ability to drive safely. For this reason, assessing the car driver's physiological status has become one of the primary targets of automotive research and development. Although a large number of efforts have been made by researchers to design safety-assessment applications based on the detection of physiological signals, embedding them into a car environment represents a challenging task. These implications triggered the development of this study, in which we propose an innovative pipeline that, through a combined, less invasive neuro-visual approach, is able to reconstruct the car driver's physiological status. Specifically, the proposed contribution refers to the sampling and processing of the driver's PhotoPlethysmoGraphic (PPG) signal. A parallel enhanced low frame-rate motion magnification algorithm is used to reconstruct the features of the driver's PPG data when that signal is no longer available from the native embedded sensor platform. Parallel monitoring of the driver's blood pressure levels from the PPG signal, as well as the driver's eye dynamics, completes the reconstruction of the driver's physiological status.
The proposed pipeline has been tested in one of the most investigated automotive scenarios, i.e., the detection and monitoring of pedestrians while driving (pedestrian tracking). The collected performance results confirmed the effectiveness of the proposed approach.
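The core temporal step of motion-magnification approaches of this family is to amplify a narrow frequency band of a per-pixel (or per-region) signal, here the cardiac band in which a PPG pulse lives. The sketch below shows only that generic band amplification step on a synthetic 1-D signal; it is not the paper's enhanced algorithm, and the band limits, gain, and signal are assumed values.

```python
import numpy as np

def bandpass_magnify(signal, fs, f_lo=0.75, f_hi=3.0, gain=10.0):
    """Amplify the cardiac band (~45-180 bpm by default) of a sampled
    temporal signal via an FFT-domain gain, the temporal filtering step
    of Eulerian-style magnification applied to one pixel/region trace."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    spec[band] *= gain           # boost only the cardiac band
    return np.fft.irfft(spec, n=len(signal))

# Synthetic trace: a weak 1.2 Hz (72 bpm) pulse buried under a slow
# 0.1 Hz drift, 10 s sampled at 30 fps (a typical camera frame rate).
fs = 30.0
t = np.arange(0, 10, 1.0 / fs)
x = 0.02 * np.sin(2 * np.pi * 1.2 * t) + 0.5 * np.sin(2 * np.pi * 0.1 * t)
y = bandpass_magnify(x, fs)
```

After magnification, the pulse component at 1.2 Hz is ten times stronger while the out-of-band drift is untouched, making the cardiac rhythm recoverable from an otherwise too-weak trace.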

