Study of the influence of the exhaust line ultrasounds over the performance of the Blind Spot Warning System

Author(s):  
Cătălin Meiroşu

Abstract
In recent years, vehicle manufacturers have tried to equip their vehicles with as much technology as possible, making the driving experience easier than ever. Most modern vehicles today come with ADAS (Advanced Driver Assistance Systems), either for driving (e.g. Cruise Control, Blind Spot Warning) or for parking (e.g. rear ultrasonic sensors, rear-view camera). As vehicles come equipped with more technology, a major task in vehicle development remains the integration of these ADAS systems into the vehicle context alongside the other components. Because most components interact with each other at the vehicle level, some technologies are affected by other components, as in the case of an ultrasound vehicle scanning system (Blind Spot Warning) and an exhaust line that emits ultrasounds from the exhaust muffler. The aim of this paper is to study the influence of the exhaust line ultrasounds (ultrasounds emitted by the engine cycle and filtered in the exhaust line of the vehicle) on the detection performance of the Blind Spot Warning ultrasound system. Since vehicles are sold with a wide variety of powertrains, the solution presented also takes these powertrain differences into account. To test the solution, mock-ups of the vehicle were built to prove the robustness of the method.
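The masking mechanism the abstract describes can be sketched as a simple signal-to-noise check: exhaust ultrasound raises the in-band noise floor, so a returned echo may no longer clear the sensor's detection threshold. All figures below (signal levels, the 6 dB threshold) are illustrative assumptions, not values from the paper.

```python
import math

MIN_DETECTION_SNR_DB = 6.0   # assumed minimum echo-to-noise ratio for detection

def snr_db(echo_rms: float, noise_rms: float) -> float:
    """Signal-to-noise ratio of the echo against in-band noise, in dB."""
    return 20.0 * math.log10(echo_rms / noise_rms)

def echo_detectable(echo_rms: float, exhaust_noise_rms: float,
                    floor_noise_rms: float = 0.01) -> bool:
    """True if the echo still clears the SNR threshold once exhaust
    ultrasound is added (power sum) to the ambient noise floor."""
    total_noise = math.sqrt(exhaust_noise_rms**2 + floor_noise_rms**2)
    return snr_db(echo_rms, total_noise) >= MIN_DETECTION_SNR_DB

# Quiet exhaust: echo easily detected. Loud exhaust: echo masked.
print(echo_detectable(echo_rms=0.2, exhaust_noise_rms=0.02))   # True
print(echo_detectable(echo_rms=0.2, exhaust_noise_rms=0.15))   # False
```

Under this model, powertrain variants matter because each exhaust line produces a different in-band noise level, shifting the point at which echoes become undetectable.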

PLoS ONE ◽  
2021 ◽  
Vol 16 (6) ◽  
pp. e0252688
Author(s):  
Oscar Oviedo-Trespalacios ◽  
Jennifer Tichon ◽  
Oliver Briant

Advanced Driver Assistance Systems (ADAS) are being developed and installed in increasing numbers. Some of the most popular ADAS include blind spot monitoring and cruise control, which are fitted in the majority of new vehicles sold in high-income countries. With more drivers having access to these technologies, it is imperative to develop policy and strategies to guarantee the safe uptake of ADAS. One key issue is that ADAS education has been primarily centred on user manuals, which are not widely utilised. Moreover, it is unclear whether user manuals are an adequate source of education in terms of content and readability. To address this research gap, a content analysis was used to assess the differences in ADAS-related content and readability among the manuals of the highest-selling vehicles in Australia. The qualitative findings showed that there are seven themes in the user manuals: differences between driving with and without ADAS, familiarisation requirements, operational limits of the ADAS, potential ADAS errors, behaviour adaptation warnings, confusion warnings, and malfunction warnings. The quantitative analysis found that some of the manuals require several years of education above that recommended for a universal audience (>8 years) to be understood. Additionally, there is a notable number of text diversions and infographics which could make comprehension of the user manual difficult. This investigation shows that there is a lack of standardisation of ADAS user manuals (in both content and delivery of information) which requires regulatory oversight. Driver ADAS education needs to be prioritised by policymakers and practitioners as smart technology continues to increase across the transport system. It seems that current strategies based on user manuals are insufficient to achieve successful adoption and safe use of these technologies.
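The readability screening described above is typically done with a grade-level formula. The sketch below uses the standard Flesch-Kincaid grade formula with a crude vowel-group syllable counter; it illustrates the method generically and is not the authors' actual tool.

```python
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: one syllable per run of vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade level: higher means more years of education needed."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

sample = ("Adaptive cruise control maintains a set speed. "
          "It reduces speed automatically when approaching slower traffic.")
print(round(fk_grade(sample), 1))   # prints the estimated grade level
```

A passage scoring well above grade 8 on such a formula would, by the study's criterion, be too hard for a universal audience.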


2020 ◽  
Author(s):  
Pradip Kumar Sarkar

Topic: Driver assistance technology is emerging as a new driving technology, popularly known as ADAS. It is supported by Adaptive Cruise Control, Automatic Emergency Braking, blind spot monitoring, lane change assistance, forward collision warnings, etc. It is an important platform that integrates these multiple applications by using data from multifunction sensors, cameras, radars, lidars, etc., and sends commands to multiple actuators: engine, brake, steering, etc. ADAS technology can detect some objects, perform basic classification, alert the driver to hazardous road conditions, and in some cases slow or stop the vehicle. The architecture of the electronic control units (ECUs) responsible for executing advanced driver assistance systems (ADAS) in the vehicle is changing in response to the demands of the driving process. Automotive system architecture integrates multiple applications into ADAS ECUs that serve multiple sensors. The hardware architecture of ADAS and autonomous driving includes automotive Ethernet, TSN, Ethernet switches and gateways, and domain controllers, while the software architecture includes AUTOSAR Classic and Adaptive, ROS 2.0, and QNX. This chapter explains the functioning of driver assistance technology with the help of its architecture and various types of sensors.
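The sense-fuse-actuate flow described above can be sketched minimally. The component names, fusion rule, and time-to-collision thresholds below are illustrative assumptions, not part of any AUTOSAR or ROS specification.

```python
from dataclasses import dataclass

@dataclass
class SensorFrame:
    source: str               # e.g. "camera", "radar"
    obstacle_range_m: float   # nearest obstacle reported by this sensor

def fuse(frames: list[SensorFrame]) -> float:
    """Conservative fusion: trust the closest reported obstacle."""
    return min(f.obstacle_range_m for f in frames)

def decide(range_m: float, speed_mps: float) -> str:
    """Map fused range to an actuator command (AEB-style logic)."""
    ttc = range_m / speed_mps if speed_mps > 0 else float("inf")
    if ttc < 1.0:
        return "brake"
    if ttc < 2.5:
        return "warn"
    return "none"

frames = [SensorFrame("camera", 30.0), SensorFrame("radar", 28.5)]
print(decide(fuse(frames), speed_mps=20.0))   # 28.5 m / 20 m/s ≈ 1.4 s TTC -> "warn"
```

In a real ADAS ECU this pipeline runs per cycle on the domain controller, with the fusion and decision stages distributed across the software architecture the chapter describes.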


2021 ◽  
Author(s):  
Arash Pourhasan Nezhad ◽  
Mehdi Ghatee ◽  
Hedieh Sajedi

<p>Despite the advent of intelligent systems, we still face a high number of fatal traffic accidents. Driver assistance systems can significantly reduce this rate. For example, when a driver uses a turn signal, driver assistance systems warn of objects present in the blind spot areas. Camera-based driver assistance systems for blind spots usually issue alerts by detecting objects, including vehicles, in image frames. Based on a more dynamic classification of dangerous situations for lane changes and turns to the sides, we propose an efficient blind-spot warning system that works with a single camera sensor on each side. Our contribution consists of two parts. First, we take a deeper look at classifying dangerous and safe situations in a dynamic environment with moving objects. Second, to distinguish dangerous situations from safe ones, we deploy a pre-trained state-of-the-art (SOTA) object detector to track vehicles in consecutive frames and then estimate the distances of the tracked cars with a 6% mean percentage error. In addition, to detect objects in blind spots, the proposed system uses the cars' relative velocity to warn of dangerous situations. This classification process is not real-time, so in the second part we propose a tiny model as a real-time driver assistance system for the blind spot. This tiny model feeds optical flow into CNN layers. This vision-based system uses self-supervised learning without needing labeled data. It achieves 97% accuracy and can detect dangerous situations in real time.</p>
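The distance-plus-relative-velocity step the abstract outlines can be sketched with a pinhole-camera model: a detector's bounding-box width maps to a distance, and the change between frames gives a closing speed. The focal length, car width, frame interval, and 3-second danger threshold below are illustrative assumptions, not the paper's calibration.

```python
FOCAL_PX = 1000.0     # assumed camera focal length in pixels
CAR_WIDTH_M = 1.8     # assumed real-world car width

def estimate_distance_m(bbox_width_px: float) -> float:
    """Pinhole model: distance = focal * real_width / pixel_width."""
    return FOCAL_PX * CAR_WIDTH_M / bbox_width_px

def closing_speed_mps(d_prev_m: float, d_curr_m: float, dt_s: float) -> float:
    """Relative velocity between two frames; positive means approaching."""
    return (d_prev_m - d_curr_m) / dt_s

d_prev = estimate_distance_m(120.0)              # 15.0 m
d_curr = estimate_distance_m(150.0)              # 12.0 m
v_rel = closing_speed_mps(d_prev, d_curr, 0.5)   # 6.0 m/s closing
dangerous = v_rel > 0 and d_curr / v_rel < 3.0   # under 3 s to contact
print(dangerous)   # True
```

The real-time variant in the paper sidesteps this per-vehicle geometry entirely by classifying optical-flow fields with a small CNN.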


2012 ◽  
Vol 2012 (CICMT) ◽  
pp. 000077-000081
Author(s):  
Sebastian Brunner ◽  
Manfred Stadler ◽  
Xin Wang ◽  
Frank Bauer ◽  
Klaus Aichholzer

In this paper we present an application of advanced Low Temperature Cofired Ceramic (LTCC) technology beyond 60 GHz: an RF front-end for the 76-81 GHz radar band. LTCC is a well-established technology for consumer handheld applications below 5 GHz, but it can also provide solutions for high-frequency applications in the range of 60 GHz and beyond. Radar sensors operating in the 76-81 GHz range are considered key for Advanced Driver Assistance Systems (ADAS) such as Adaptive Cruise Control (ACC), Collision Mitigation and Avoidance Systems (CMS), or Lane Change Assist (LCA). These applications are the next wave in automotive safety systems and have thus generated increased interest in lower-cost solutions, especially for the mm-wave front-end section.
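A quick way to see why the 76-81 GHz band matters for these sensors: the available sweep bandwidth sets an FMCW radar's range resolution via the standard formula ΔR = c / (2B). The band assignments in the comments are common automotive usage, stated here as context rather than drawn from the paper.

```python
C = 299_792_458.0   # speed of light, m/s

def range_resolution_m(bandwidth_hz: float) -> float:
    """FMCW range resolution: delta_R = c / (2 * B)."""
    return C / (2.0 * bandwidth_hz)

# Short-range band (77-81 GHz): up to 4 GHz of sweep bandwidth.
print(round(range_resolution_m(4e9) * 100, 2), "cm")   # 3.75 cm
# Long-range band (76-77 GHz): 1 GHz of sweep bandwidth.
print(round(range_resolution_m(1e9) * 100, 2), "cm")   # 14.99 cm
```

Centimetre-scale resolution is what makes the wider band attractive for LCA and CMS, and it is why front-end packaging technologies like LTCC must perform well at these frequencies.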


Author(s):  
Dongho Ka ◽  
Donghoun Lee ◽  
Sunghoon Kim ◽  
Hwasoo Yeo

One of the most widely used advanced driver assistance systems (ADAS) for preventing pedestrian–vehicle collisions is the intersection collision warning system (ICWS). Most previous ICWSs have been implemented with in-vehicle distance sensors, such as radar and lidar. However, the existing ICWSs show some weaknesses in alerting drivers at intersections because of limited detection range and field-of-view. Furthermore, these ICWSs have difficulty identifying the pedestrian's crossing intention because the distance sensors cannot capture pedestrian characteristics such as age, gender, and head orientation. To alleviate these defects, this study proposes a novel framework for a vision sensor-based ICWS under a cloud-based communication environment, called the intersection pedestrian collision warning system (IPCWS). The IPCWS gives a collision warning to drivers approaching an intersection by predicting the pedestrian's crossing intention based on various machine learning models. With real traffic data extracted by image processing in the IPCWS, a comparison study is conducted to evaluate the performance of the IPCWS in relation to warning timing. The comparison study demonstrates that the IPCWS shows better performance than conventional ICWSs. This result suggests that the proposed system has great potential for preventing pedestrian–vehicle collisions by capturing the pedestrian's crossing intention.
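Crossing-intention prediction of the kind the IPCWS performs can be sketched as scoring vision-derived pedestrian features. The features, weights, and threshold below are invented for illustration; the paper trains real machine learning models on real traffic data.

```python
import math

# Hypothetical feature weights for a logistic intention score.
WEIGHTS = {"facing_road": 2.0, "near_curb": 1.5, "moving_toward_curb": 1.8}
BIAS = -2.5

def crossing_probability(features: dict[str, float]) -> float:
    """Logistic score over binary/continuous pedestrian features."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def should_warn(features: dict[str, float], threshold: float = 0.5) -> bool:
    return crossing_probability(features) >= threshold

# Pedestrian facing the road, at the curb, moving toward it: likely crossing.
print(should_warn({"facing_road": 1, "near_curb": 1, "moving_toward_curb": 1}))  # True
# Pedestrian at the curb but facing away and stationary: likely not crossing.
print(should_warn({"facing_road": 0, "near_curb": 1, "moving_toward_curb": 0}))  # False
```

The point of the vision-based approach is precisely that such features (head orientation, posture, motion toward the curb) are visible to a camera but invisible to radar or lidar.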


2015 ◽  
Vol 764-765 ◽  
pp. 1361-1365
Author(s):  
Cheng Yu Chiu ◽  
Chih Han Chang ◽  
Hsin Jung Lin ◽  
Tsong Liang Huang

This paper presents a new lane departure warning system (LDWS). We used side-view cameras to enhance Advanced Driver Assistance Systems (ADAS). A left side-view camera detected the lane to the left of the vehicle, and a right side-view camera detected the lane to the right. The two cameras ran their detection algorithms and issued warning messages independently and separately. Our algorithm then combined those warning messages to analyze the driving environment. Finally, we used the LUXGEN MPV as a test vehicle and present the results of our verifications and tests.
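The combination step, where two independently produced warnings are merged into one decision, can be sketched as a simple arbitration rule. The specific rule below (including treating simultaneous warnings as a sensor fault) is an assumption for illustration, not the paper's algorithm.

```python
def combine_warnings(left_departure: bool, right_departure: bool) -> str:
    """Merge the two side-camera warning messages into one LDWS decision."""
    if left_departure and right_departure:
        # Departing both ways at once is physically implausible:
        # flag it for diagnostics rather than warning the driver twice.
        return "sensor_conflict"
    if left_departure:
        return "warn_left"
    if right_departure:
        return "warn_right"
    return "ok"

print(combine_warnings(True, False))    # warn_left
print(combine_warnings(False, False))   # ok
```

Keeping the per-camera pipelines independent, as the paper does, means a failure on one side degrades rather than disables the whole system.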


Energies ◽  
2021 ◽  
Vol 14 (16) ◽  
pp. 4872
Author(s):  
Nicola Albarella ◽  
Francesco Masuccio ◽  
Luigi Novella ◽  
Manuela Tufo ◽  
Giovanni Fiengo

Driver behaviour and distraction have been identified as the main causes of rear-end collisions. However, a promptly issued warning can reduce the severity of crashes, if not prevent them completely. This paper proposes a Forward Collision Warning (FCW) system based on information from a low-cost forward-facing monocular camera, targeted at low-end electric vehicles. The system relies on a Convolutional Neural Network (CNN) and does not require the reconstruction of a complete 3D model of the surrounding environment. Moreover, a closed-loop simulation platform is proposed, which enables the fast development and testing of the FCW and other Advanced Driver Assistance Systems (ADAS). The system is then deployed on embedded hardware and experimentally validated on a test track.
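A monocular FCW that avoids 3D reconstruction typically exploits bounding-box scale change: if a lead vehicle's image width grows by ratio s between frames dt apart, time-to-collision is approximately dt / (s - 1). The sketch below illustrates that standard geometric idea; the frame interval and warning threshold are assumptions, and the paper's actual CNN-based pipeline is not reproduced here.

```python
def ttc_from_scale(w_prev_px: float, w_curr_px: float, dt_s: float) -> float:
    """Time-to-collision from bounding-box growth: TTC = dt / (s - 1)."""
    s = w_curr_px / w_prev_px
    if s <= 1.0:
        return float("inf")   # box shrinking or constant: not closing in
    return dt_s / (s - 1.0)

def forward_collision_warning(w_prev_px: float, w_curr_px: float,
                              dt_s: float = 0.05,
                              ttc_threshold_s: float = 2.0) -> bool:
    return ttc_from_scale(w_prev_px, w_curr_px, dt_s) < ttc_threshold_s

print(forward_collision_warning(100.0, 102.0))   # TTC = 0.05/0.02 = 2.5 s -> False
print(forward_collision_warning(100.0, 105.0))   # TTC = 0.05/0.05 = 1.0 s -> True
```

Because only image-plane quantities appear, no camera calibration or depth map is needed, which is what keeps such systems cheap enough for low-end vehicles.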


2021 ◽  
Vol 11 (24) ◽  
pp. 11587
Author(s):  
Luca Ulrich ◽  
Francesca Nonis ◽  
Enrico Vezzetti ◽  
Sandro Moos ◽  
Giandomenico Caruso ◽  
...  

Driver inattention is the primary cause of vehicle accidents; hence, manufacturers have introduced systems to support the driver and improve safety. Nonetheless, advanced driver assistance systems (ADAS) must be properly designed so that the feedback they provide does not itself become a source of distraction for the driver. In the present study, an experiment involving auditory and haptic ADAS was conducted with 11 participants, whose attention was monitored during their driving experience. An RGB-D camera was used to acquire the drivers' face data. Subsequently, these images were analyzed using a deep learning-based approach, i.e., a convolutional neural network (CNN) specifically trained to perform facial expression recognition (FER). Analyses were carried out to assess possible relationships between these results and both ADAS activations and event occurrences, i.e., accidents. A correlation between attention and accidents emerged, whilst facial expressions and ADAS activations were found to be uncorrelated; thus, no evidence was found that the designed ADAS are a source of distraction. In addition to the experimental results, the proposed approach proved to be an effective tool for monitoring the driver through non-invasive techniques.
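The attention-accidents relationship reported above is the kind of result a per-participant correlation check produces. The sketch below computes a Pearson coefficient over invented illustrative data; the numbers are not the study's measurements.

```python
def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

attention = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4]   # hypothetical attention scores
accidents = [0, 0, 1, 1, 2, 3]               # hypothetical accident counts

r = pearson(attention, accidents)
print(round(r, 2))   # strongly negative: lower attention, more accidents
```

A coefficient near zero on the facial-expression vs. ADAS-activation pairing would correspond to the study's "no evidence of distraction" finding.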


Author(s):  
Pavlo Bazilinskyy ◽  
Joost C. F. De Winter

This study investigated people's opinions on auditory interfaces in contemporary cars and their willingness to be exposed to auditory feedback in automated driving. We used an Internet-based survey to collect 1,205 responses from 91 countries. The participants stated their attitudes towards two existing auditory driver assistance systems, a parking assistant (PA) and a forward collision warning system (FCWS), as well as towards a futuristic augmented sound system (FS) proposed for fully automated driving. The respondents were positive towards the PA and FCWS, and rated their willingness to have these systems as 3.87 and 3.77, respectively (1 = disagree strongly, 5 = agree strongly). The respondents tolerated the FS. The results showed that a female voice is the most preferred feedback mode for the support of takeover requests in highly automated driving, regardless of whether the respondents' country is English-speaking or not. The present results could be useful for designers of automated vehicles and other stakeholders.

