Sensor Fusion Algorithm that Has Resilience under Autonomous Driving Conditions: A Survey

2017 ◽  
Vol 865 ◽  
pp. 429-433
Author(s):  
Sung Bum Park ◽  
Hyeok Chan Kwon ◽  
Dong Hoon Lee

Autonomous cars recognize their surroundings through multiple sensors and make control decisions in order to arrive at the destination without the driver's intervention. In such an environment, forged sensor data could lead to a critical accident that threatens the life of the driver. This paper discusses research on obtaining accurate driving information through a sensor fusion algorithm that is resilient against data forgery and tampering.
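
The abstract does not specify the fusion mechanism, but one common pattern for forgery resilience is to exploit sensor redundancy: reject readings that deviate strongly from the consensus of the redundant sensors, then fuse the survivors. The sketch below illustrates this idea with a median-absolute-deviation gate; the threshold and data are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch: forgery-resilient fusion over redundant sensors.
# Readings far from the redundant consensus (by MAD distance) are treated
# as potentially forged and excluded before averaging.
import statistics

def resilient_fuse(readings, k=3.0):
    """Fuse redundant sensor readings, discarding likely-forged outliers."""
    med = statistics.median(readings)
    mad = statistics.median(abs(r - med) for r in readings) or 1e-9
    trusted = [r for r in readings if abs(r - med) / mad <= k]
    return sum(trusted) / len(trusted)

# One forged speed reading (120.0) among four honest sensors:
print(resilient_fuse([30.1, 29.8, 30.3, 120.0, 30.0]))  # ~30.05
```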

2021 ◽  
Vol 4 (1) ◽  
pp. 3
Author(s):  
Parag Narkhede ◽  
Rahee Walambe ◽  
Shruti Mandaokar ◽  
Pulkit Chandel ◽  
Ketan Kotecha ◽  
...  

With rapid industrialization and technological advancement, innovative engineering technologies that are cost-effective, faster, and easier to implement are essential. One area of concern is the rising number of accidents caused by gas leaks at coal mines, chemical industries, home appliances, etc. In this paper, we propose a novel approach to detect and identify gaseous emissions using multimodal AI fusion techniques. Most gases and their fumes are colorless, odorless, and tasteless, thereby challenging our normal human senses. Sensing based on a single sensor may not be accurate, and sensor fusion is essential for robust and reliable detection in several real-world applications. We manually collected 6400 gas samples (1600 samples per class for four classes) using two specific sensors: a seven-semiconductor gas sensor array and a thermal camera. The early fusion method of multimodal AI is applied: the network architecture consists of a feature extraction module for each modality, whose outputs are fused by a merge layer followed by a dense layer that provides a single output identifying the gas. We obtained a testing accuracy of 96% for the fused model, as opposed to individual model accuracies of 82% (gas sensor data with an LSTM) and 93% (thermal image data with a CNN model). The results demonstrate that fusing multiple sensors and modalities outperforms a single sensor.
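
A minimal Keras sketch of the early-fusion architecture described above: an LSTM branch for the gas-sensor time series, a CNN branch for the thermal frame, and a merge layer feeding a dense classifier. Input shapes and layer sizes are assumptions for illustration; the paper's exact hyperparameters are not reproduced here.

```python
# Sketch of an early-fusion (LSTM + CNN) network; shapes are assumed.
from tensorflow.keras import layers, models

# Branch 1: time series from the 7-sensor gas array -> LSTM features.
gas_in = layers.Input(shape=(100, 7), name="gas_series")   # 100 timesteps assumed
gas_feat = layers.LSTM(64)(gas_in)

# Branch 2: thermal camera frame -> CNN features.
img_in = layers.Input(shape=(64, 64, 1), name="thermal_frame")  # resolution assumed
x = layers.Conv2D(16, 3, activation="relu")(img_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(32, 3, activation="relu")(x)
img_feat = layers.GlobalAveragePooling2D()(x)

# Fusion: merge both feature vectors, then classify into the four gas classes.
merged = layers.concatenate([gas_feat, img_feat])
hidden = layers.Dense(32, activation="relu")(merged)
out = layers.Dense(4, activation="softmax")(hidden)

model = models.Model(inputs=[gas_in, img_in], outputs=out)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```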


Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 4029 ◽  
Author(s):  
Jiaxuan Wu ◽  
Yunfei Feng ◽  
Peng Sun

Activity of daily living (ADL) is a significant predictor of an individual's independence and functional capabilities. Measurements of ADLs help to indicate one's health status and capacity for quality living. At present, the most common ways to capture ADL data are far from automated: costly 24/7 observation by a designated caregiver, laborious self-reporting by the user, or filling out a written ADL survey. Fortunately, in the Internet of Things (IoT) era, ubiquitous sensors exist in our surroundings and on electronic devices. We propose the ADL Recognition System, which utilizes sensor data from a single point of contact, such as a smartphone, and conducts time-series sensor fusion processing. Raw data are collected by the ADL Recorder App running constantly on the user's smartphone with multiple embedded sensors, including the microphone, Wi-Fi scan module, heading orientation of the device, light and proximity sensors, step detector, accelerometer, gyroscope, and magnetometer. Key technologies in this research cover audio processing, Wi-Fi indoor positioning, proximity-sensing localization, and time-series sensor data fusion. By merging the information of multiple sensors with a time-series error-correction technique, the ADL Recognition System is able to accurately profile a person's ADLs and discover their life patterns. This paper is particularly concerned with care for older adults who live independently.
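
A hedged sketch of the kind of time-series merging such a system performs: bucketing timestamped readings from several phone sensors into fixed windows so that co-occurring evidence (an audio label, a Wi-Fi access point, a step count) can be combined into one ADL observation. Field names and window size are illustrative only, not the paper's implementation.

```python
# Align multi-sensor event streams on a common windowed timeline.
from collections import defaultdict

def window_fuse(events, window_s=10):
    """events: iterable of (timestamp_s, sensor_name, value)."""
    windows = defaultdict(dict)
    for ts, sensor, value in events:
        windows[int(ts // window_s)][sensor] = value  # last value wins per window
    return dict(windows)

events = [
    (3.2, "audio", "water_running"),
    (5.7, "wifi", "kitchen_AP"),
    (8.1, "steps", 4),
    (14.0, "audio", "silence"),
]
# Window 0 combines audio + Wi-Fi + steps into one joint observation,
# which downstream logic could label e.g. "washing dishes in the kitchen".
print(window_fuse(events))
```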


2020 ◽  
Author(s):  
Huihui Pan ◽  
Weichao Sun ◽  
Qiming Sun ◽  
Huijun Gao

Environmental perception is one of the key technologies needed to realize autonomous vehicles, which are often equipped with multiple sensors forming a multi-source environmental perception system. These sensors are very sensitive to light and background conditions, which introduce a variety of global and local fault signals and pose great safety risks to the autonomous driving system during long-term operation. In this paper, a real-time data fusion network with fault-diagnosis and fault-tolerance mechanisms is designed. By introducing prior features to make the backbone network lightweight, the features of the input data can be extracted accurately in real time. Through the temporal and spatial correlation between sensor data, sensor redundancy is utilized to diagnose the local and global confidence of sensor data in real time, eliminate faulty data, and ensure the accuracy and reliability of data fusion. Experiments show that the network achieves state-of-the-art results in speed and accuracy, and can accurately detect the location of the target even when some sensors are out of focus or malfunctioning.
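
A minimal sketch, under assumptions, of the fault-tolerance logic described above: score each sensor's reading by its agreement with its peers (spatial correlation) and with its own recent history (temporal correlation), gate out low-confidence readings, and fuse the rest with confidence weights. The paper's actual network-based diagnosis is far more sophisticated; this only shows the principle.

```python
# Confidence-gated fusion using spatial and temporal consistency.
import numpy as np

def fuse_with_fault_rejection(readings, history, tau=2.0):
    """readings: current per-sensor estimates; history: each sensor's previous value."""
    readings, history = np.asarray(readings), np.asarray(history)
    spatial = np.abs(readings - np.median(readings))  # disagreement with peers
    temporal = np.abs(readings - history)             # jump from own past
    conf = 1.0 / (1.0 + spatial + temporal)           # simple confidence score
    ok = spatial < tau                                # fault gate
    if not ok.any():
        return float(np.median(readings))             # fall back to consensus
    return float(np.average(readings[ok], weights=conf[ok]))

# Third sensor has drifted badly; it is rejected before fusion (~10.0 result).
print(fuse_with_fault_rejection([10.1, 9.9, 25.0], history=[10.0, 10.0, 10.2]))
```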


2020 ◽  
pp. 1391-1414
Author(s):  
Fritz Ulbrich ◽  
Simon Sebastian Rotter ◽  
Raul Rojas

Swarm behavior can be applied to many aspects of autonomous driving, e.g., localization, perception, path planning, or mapping, because further information can be derived from what swarm members observe, such as the relative position and speed of other cars. In this chapter, the processing pipeline of a "swarm behavior module" is described step by step, from selecting and abstracting sensor data to generating a plan, a drivable trajectory, for an autonomous car. Such swarm-based path planning can play an important role in scenarios where human drivers and autonomous cars mix. Experienced human drivers flow with the traffic and adapt their driving to the environment; they do not follow the traffic rules as strictly as computers do, but they often use common sense. Autonomous cars should not provoke dangerous situations by sticking absolutely to the traffic rules; they must adapt their behavior to the other drivers around them and thus merge with the traffic swarm.


Sensors ◽  
2019 ◽  
Vol 19 (21) ◽  
pp. 4810 ◽  
Author(s):  
Md Nazmuzzaman Khan ◽  
Sohel Anwar

Multi-sensor data fusion is an important tool in building decision-making applications. Modified Dempster–Shafer (DS) evidence theory can handle conflicting sensor inputs and can be applied without any prior information. As a result, DS-based information fusion is very popular in decision-making applications, but the original DS theory produces counterintuitive results when combining highly conflicting evidence from multiple sensors. An effective algorithm for fusing highly conflicting information in the spatial domain is not widely reported in the literature. In this paper, a fusion algorithm is proposed that addresses these limitations of the original Dempster–Shafer framework. A novel entropy function based on Shannon entropy is proposed, which captures uncertainty better than Shannon and Deng entropy. An eight-step algorithm is developed that can eliminate the inherent paradoxes of classical DS theory. Multiple examples are presented to show that the proposed method is effective in handling conflicting information in the spatial domain. Simulation results show that the proposed algorithm has a competitive convergence rate and accuracy compared to other methods in the literature.
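
To make the "counterintuitive results" concrete, the sketch below implements classical Dempster combination over singleton hypotheses and reproduces Zadeh's well-known paradox: two sensors that each assign only 1% belief to hypothesis C, while conflicting on everything else, yield full belief in C after fusion. This illustrates the problem the paper targets, not the paper's eight-step remedy.

```python
# Classical Dempster's rule over singleton hypotheses (Zadeh's example).
def dempster_combine(m1, m2):
    """Combine two mass functions whose focal elements are singletons."""
    hypotheses = set(m1) | set(m2)
    joint = {h: m1.get(h, 0.0) * m2.get(h, 0.0) for h in hypotheses}
    conflict = 1.0 - sum(joint.values())  # mass on contradictory pairs
    return {h: v / (1.0 - conflict) for h, v in joint.items()}, conflict

m1 = {"A": 0.99, "B": 0.00, "C": 0.01}
m2 = {"A": 0.00, "B": 0.99, "C": 0.01}
fused, k = dempster_combine(m1, m2)
print(fused, "conflict =", round(k, 4))
# {'A': 0.0, 'B': 0.0, 'C': 1.0} conflict = 0.9999  <- counterintuitive
```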


Sensors ◽  
2019 ◽  
Vol 19 (17) ◽  
pp. 3808 ◽  
Author(s):  
Antonio A. Aguileta ◽  
Ramon F. Brena ◽  
Oscar Mayora ◽  
Erik Molino-Minero-Re ◽  
Luis A. Trejo

In Ambient Intelligence (AmI), the activity a user is engaged in is an essential part of the context, so its recognition is of paramount importance for applications in areas like sports, medicine, personal safety, and so forth. The concurrent use of multiple sensors for recognizing human activities in AmI is good practice, because information missed by one sensor can sometimes be provided by another, and many works have shown an accuracy improvement over single sensors. However, there are many different ways of integrating the information from each sensor, and almost every author reporting sensor fusion for activity recognition uses a different variant or combination of fusion methods, so the need for clear guidelines and generalizations in sensor data integration is evident. In this survey, we review and classify the many fusion methods proposed in the literature for activity recognition from sensor data; we examine their relative merits as reported (and in some cases replicated), compare the methods, and assess the trends in the area.
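
Among the families of fusion methods such surveys cover, decision-level fusion is the simplest to state; the sketch below shows generic majority voting across per-sensor classifiers. It is a textbook illustration, not a method proposed by the authors.

```python
# Decision-level fusion: majority vote over per-sensor classifier outputs.
from collections import Counter

def majority_vote(predictions):
    """predictions: one activity label per sensor-specific classifier."""
    return Counter(predictions).most_common(1)[0][0]

# Accelerometer and gyroscope classifiers agree, microphone dissents:
print(majority_vote(["walking", "walking", "standing"]))  # -> 'walking'
```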


Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5422
Author(s):  
Ankur Deo ◽  
Vasile Palade ◽  
Md. Nazmul Huda

Many advanced driver assistance systems (ADAS) are currently trying to utilise multi-sensor architectures, where the driver assistance algorithm receives data from a multitude of sensors. As mono-sensor systems cannot provide reliable and consistent readings under all circumstances because of errors and other limitations, fusing data from multiple sensors ensures that the environmental parameters are perceived correctly and reliably for most scenarios, thereby substantially improving the reliability of multi-sensor-based automotive systems. This paper first highlights the significance of efficiently fusing data from multiple sensors in ADAS features. An emergency brake assist (EBA) system is showcased using multiple sensors, namely, a light detection and ranging (LiDAR) sensor and a camera. The architectures of the proposed 'centralised' and 'decentralised' sensor fusion approaches for EBA are discussed along with their constituents, i.e., the detection algorithms, the fusion algorithm, and the tracking algorithm. The centralised and decentralised architectures are built and analytically compared, and the performance of the two fusion architectures for EBA is evaluated in terms of speed of execution, accuracy, and computational cost. While both fusion methods drive the EBA application at an acceptable frame rate (~20 fps or higher) on an Intel i5-based Ubuntu system, the experiments and analytical comparisons show that the decentralised fusion-driven EBA leads to higher accuracy, with the downside of a higher computational cost. The centralised fusion-driven EBA yields comparatively less accurate results, but with the benefits of a higher frame rate and lower computational cost.
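
A schematic sketch, under assumptions, of the structural difference between the two architectures: centralised fusion combines sensor data before a single detection and tracking pass, while decentralised fusion runs per-sensor detectors and fuses at the object level. The function parameters are placeholders for the paper's detection, fusion, and tracking components; real systems use calibrated projection, gating, and Kalman-style tracking rather than these stubs.

```python
# Structural contrast of centralised vs decentralised EBA fusion (stubs).

def centralised_eba(lidar_pts, camera_img, detect, fuse_raw, track):
    """Fuse raw sensor data first, then run one detector and one tracker."""
    fused_frame = fuse_raw(lidar_pts, camera_img)  # single combined representation
    return track(detect(fused_frame))              # one detection + tracking pass

def decentralised_eba(lidar_pts, camera_img,
                      detect_lidar, detect_cam, fuse_objs, track):
    """Detect per sensor, then fuse the object lists, then track."""
    lidar_objs = detect_lidar(lidar_pts)           # per-sensor detection
    cam_objs = detect_cam(camera_img)
    return track(fuse_objs(lidar_objs, cam_objs))  # object-level (late) fusion
```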


Sensors ◽  
2020 ◽  
Vol 20 (19) ◽  
pp. 5530
Author(s):  
Raghu Changalvala ◽  
Brandon Fedoruk ◽  
Hafiz Malik

The modern-day vehicle has evolved into a cyber-physical system with internal networks (controller area network (CAN), Ethernet, etc.) connecting hundreds of microcontrollers. From the traditional core vehicle functions, such as vehicle controls, infotainment, and powertrain management, to the latest developments, such as advanced driver assistance systems (ADAS) and automated driving features, each of them uses CAN as its communication network backbone. Automated driving and ADAS features rely on data transferred over the CAN network from multiple sensors mounted on the vehicle. Verifying the integrity of the sensor data is essential for the safety and security of occupants and the proper functionality of these applications. Though the CAN interface ensures reliable data transfer, it lacks basic security features, including message authentication, which makes it vulnerable to a wide array of attacks, including spoofing, replay, DoS, etc. Using traditional cryptography-based methods to verify the integrity of data transmitted over CAN interfaces would increase the computational complexity, latency, and overall cost of the system. In this paper, we propose a lightweight alternative to verify the integrity of sensor data for vehicle applications that use CAN networks for data transfers. To this end, a framework for 2-dimensional quantization index modulation (2D QIM)-based data hiding is proposed. Using a typical radar sensor data transmission scenario in an autonomous vehicle application, we analyzed the performance of the proposed framework in detecting and localizing sensor data tampering. The effects of embedding-induced distortion on the applications using the radar data were studied through a sensor fusion algorithm. It was observed that the proposed framework offers the much-needed data integrity verification without compromising the quality of the sensor fusion data, and it is implemented with low overall design complexity. The proposed framework can also be used on any physical network interface other than CAN, and it offers traceability of in-vehicle data beyond the scope of in-vehicle applications.
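
A minimal QIM sketch illustrating the data-hiding idea behind such a framework: each sensor sample is re-quantised onto one of two interleaved lattices, chosen by the watermark bit, so a receiver that knows the expected bit sequence can both read the bits and detect tampering when they misread. The step size, data values, and two-component "2D" sample are illustrative assumptions, not the paper's parameters.

```python
# Scalar QIM embed/extract applied to a 2-component radar sample (sketch).
import numpy as np

DELTA = 0.5  # quantisation step: trades embedding distortion vs robustness

def qim_embed(x, bit):
    """Quantise x onto the lattice selected by the watermark bit (0 or 1)."""
    offset = bit * DELTA / 2.0
    return DELTA * np.round((x - offset) / DELTA) + offset

def qim_extract(y):
    """Recover each bit by checking which lattice y lies closer to."""
    d0 = np.abs(y - qim_embed(y, 0))
    d1 = np.abs(y - qim_embed(y, 1))
    return (d1 < d0).astype(int)

radar_sample = np.array([57.31, 57.86])          # assumed (range, range-rate)
marked = qim_embed(radar_sample, np.array([1, 0]))
print(marked, qim_extract(marked))               # bits recovered: [1 0]
marked[0] += 0.2                                 # tampering perturbs lattice fit
print(qim_extract(marked))                       # misread bit flags tampering
```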


Author(s):  
Wulf Loh ◽  
Janina Loh

In this chapter, we give a brief overview of the traditional notion of responsibility and introduce a concept of distributed responsibility within a responsibility network of engineers, driver, and autonomous driving system. In order to evaluate this concept, we explore the notion of man-machine hybrid systems with regard to self-driving cars and conclude that the unit comprising the car and the operator/driver constitutes such a hybrid system, one that can assume a shared responsibility distinct from the responsibility of other actors in the responsibility network. Discussing certain moral dilemma situations that are structured much like trolley cases, we deduce that as long as there is something like a driver in autonomous cars as part of the hybrid system, she will have to bear the responsibility for making the morally relevant decisions that are not covered by traffic rules.
