Radar Data Integrity Verification Using 2D QIM-Based Data Hiding

Sensors ◽  
2020 ◽  
Vol 20 (19) ◽  
pp. 5530
Author(s):  
Raghu Changalvala ◽  
Brandon Fedoruk ◽  
Hafiz Malik

The modern-day vehicle has evolved into a cyber-physical system with internal networks (controller area network (CAN), Ethernet, etc.) connecting hundreds of micro-controllers. From traditional core vehicle functions, such as vehicle controls, infotainment, and power-train management, to the latest developments, such as advanced driver assistance systems (ADAS) and automated driving features, each uses CAN as its communication network backbone. Automated driving and ADAS features rely on data transferred over the CAN network from multiple sensors mounted on the vehicle. Verifying the integrity of the sensor data is essential for the safety and security of occupants and the proper functionality of these applications. Though the CAN interface ensures reliable data transfer, it lacks basic security features, including message authentication, which makes it vulnerable to a wide array of attacks, including spoofing, replay, and DoS. Using traditional cryptography-based methods to verify the integrity of data transmitted over CAN interfaces is expected to increase the computational complexity, latency, and overall cost of the system. In this paper, we propose a lightweight alternative for verifying the integrity of sensor data in vehicle applications that use CAN networks for data transfer. To this end, a framework for 2-dimensional quantization index modulation (2D QIM)-based data hiding is proposed. Using a typical radar sensor data transmission scenario in an autonomous vehicle application, we analyzed the performance of the proposed framework in detecting and localizing sensor data tampering. The effects of embedding-induced distortion on applications using the radar data were studied through a sensor fusion algorithm. The proposed framework was observed to offer the much-needed data integrity verification without compromising the quality of the sensor fusion data, and it can be implemented with low overall design complexity. The framework can also be used on physical network interfaces other than CAN, and it offers traceability of in-vehicle data beyond the scope of in-vehicle applications.
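As an illustration of the embedding principle named in the abstract, the sketch below implements a textbook 2D QIM embed/extract pair in Python. The step size DELTA, the 2-bits-per-sample-pair payload, and the function names are assumptions for illustration, not the authors' exact design.

```python
import numpy as np

# Illustrative 2D QIM embed/extract sketch (not the authors' exact design).
# Two data samples (a 2D point) carry a 2-bit message via dithered lattice
# quantization on a square lattice of step DELTA.

DELTA = 0.5  # assumed quantization step; controls embedding distortion

def _dither(message_bits):
    """Map a 2-bit message (b0, b1) to a dither vector in the unit cell."""
    return (DELTA / 2.0) * np.asarray(message_bits, dtype=float)

def qim_embed(x, message_bits):
    """Quantize the 2D sample x onto the lattice coset selected by message_bits."""
    d = _dither(message_bits)
    return DELTA * np.round((np.asarray(x, dtype=float) - d) / DELTA) + d

def qim_extract(y):
    """Recover the message by finding the coset whose lattice point is nearest to y."""
    best_bits, best_dist = None, np.inf
    for b0 in (0, 1):
        for b1 in (0, 1):
            reconstructed = qim_embed(y, (b0, b1))
            dist = np.linalg.norm(np.asarray(y, dtype=float) - reconstructed)
            if dist < best_dist:
                best_bits, best_dist = (b0, b1), dist
    return best_bits

# Example: hide two bits in a pair of radar range samples.
sample = [12.37, 4.91]
watermarked = qim_embed(sample, (1, 0))
assert qim_extract(watermarked) == (1, 0)
```

Tampering with the watermarked samples moves them off the expected coset, which is what allows the receiver to detect and localize modifications at the cost of a small, DELTA-controlled embedding distortion.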

2017 ◽  
Vol 865 ◽  
pp. 429-433
Author(s):  
Sung Bum Park ◽  
Hyeok Chan Kwon ◽  
Dong Hoon Lee

Autonomous cars recognize their surroundings through multiple sensors and make decisions to control the vehicle so that it arrives at the destination without the driver's intervention. In such an environment, if sensor data forgery occurs, it could lead to a critical accident that threatens the life of the driver. This paper discusses a way to obtain accurate driving information through a sensor fusion algorithm that is resilient against data forgery and modulation.
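As a hedged illustration of forgery-resilient fusion (not the paper's algorithm), the sketch below fuses redundant readings of the same quantity and rejects values that disagree with the consensus; the median-based consensus and the deviation threshold are assumptions made for the example.

```python
import numpy as np

# Hedged illustration only: one common way to make fusion resilient to a
# forged input is to fuse redundant measurements of the same quantity and
# reject readings that disagree with the consensus.

def robust_fuse(readings, max_deviation):
    """Fuse redundant sensor readings, discarding likely-forged outliers."""
    readings = np.asarray(readings, dtype=float)
    consensus = np.median(readings)
    trusted = readings[np.abs(readings - consensus) <= max_deviation]
    return float(np.mean(trusted))

# Three sensors report vehicle speed (m/s); the last value is tampered.
print(robust_fuse([22.1, 21.9, 55.0], max_deviation=2.0))  # ~22.0
```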


2021 ◽  
Vol 11 (12) ◽  
pp. 5598
Author(s):  
Felix Nobis ◽  
Ehsan Shafiei ◽  
Phillip Karle ◽  
Johannes Betz ◽  
Markus Lienkamp

Automotive traffic scenes are complex due to the variety of possible scenarios, objects, and weather conditions that need to be handled. In contrast to more constrained environments, such as automated underground trains, automotive perception systems cannot be tailored to a narrow field of specific tasks but must handle an ever-changing environment with unforeseen events. As currently no single sensor is able to reliably perceive all relevant activity in the surroundings, sensor data fusion is applied to perceive as much information as possible. Fusing data from different sensors and sensor modalities at a low abstraction level enables the compensation of sensor weaknesses and misdetections before the information-rich sensor data are compressed, and information thereby lost, in a sensor-individual object detection stage. This paper develops a low-level sensor fusion network for 3D object detection, which fuses lidar, camera, and radar data. The fusion network is trained and evaluated on the nuScenes data set. On the test set, fusing radar data increases the resulting AP (Average Precision) detection score by about 5.1% in comparison to the baseline lidar network. The radar sensor fusion proves especially beneficial in inclement conditions such as rain and night scenes. Fusing additional camera data contributes positively only in conjunction with the radar fusion, which shows that interdependencies between the sensors are important for the detection result. Additionally, the paper proposes a novel loss to handle the discontinuity of a simple yaw representation for object detection. Our updated loss increases the detection and orientation estimation performance for all sensor input configurations. The code for this research has been made available on GitHub.
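The exact loss is defined in the paper; as a hedged illustration of how a periodic formulation removes the ±π discontinuity of a raw yaw angle, a minimal sketch is given below. The 1 − cos form is an assumption, not necessarily the paper's formulation.

```python
import torch

# Illustrative periodic yaw loss (an assumption, not necessarily the exact
# formulation from the paper). A naive L1/L2 loss on yaw jumps at the
# +/- pi wrap-around; penalizing 1 - cos(error) is smooth and periodic,
# so -pi and +pi are treated as the same orientation.

def periodic_yaw_loss(pred_yaw: torch.Tensor, gt_yaw: torch.Tensor) -> torch.Tensor:
    """Smooth, wrap-around-free orientation loss with values in [0, 2]."""
    return (1.0 - torch.cos(pred_yaw - gt_yaw)).mean()

# A prediction of +3.1 rad is nearly identical to a ground truth of -3.1 rad.
pred = torch.tensor([3.1])
gt = torch.tensor([-3.1])
print(periodic_yaw_loss(pred, gt))  # close to 0, unlike |pred - gt| = 6.2
```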


Electronics ◽  
2021 ◽  
Vol 10 (5) ◽  
pp. 618
Author(s):  
Jan Grottke ◽  
Jörg Blankenbach

Due to their ubiquitous presence in everyday life and the variety of available built-in sensors, smartphones have become the focus of recent indoor localization research. Hence, this paper describes a novel smartphone-based sensor fusion algorithm. It combines, in real time, the relative movements from inertial measurement unit (IMU) based pedestrian dead reckoning with absolute fingerprinting-based position estimates from Wireless Local Area Network (WLAN), Bluetooth Low Energy (BLE), and magnetic field anomalies, as well as a building model. Thus, step-based position estimation without knowledge of any start position was achieved. For this, a grid-based particle filter and a Bayesian filter approach were combined. Furthermore, various optimization methods were compared for weighting the different information sources within the sensor fusion algorithm to achieve high position accuracy. Although a particle filter is used, no particles move, owing to a novel grid-based particle interpretation: the particles' probability values change with every new information source and every stepwise iteration via a probability-map-based approach. By adjusting the weights of the individual measurement methods relative to a knowledge-based reference, the mean and maximum position errors were reduced by 31%, the RMSE by 34%, and the 95th-percentile positioning error by 52%.
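A minimal sketch of the grid-based probability-map update idea follows, assuming a multiplicative Bayesian fusion of per-source likelihood maps with heuristic per-source weights; the grid size, likelihood values, and weighting-by-exponent scheme are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Minimal sketch of a grid-based Bayesian/particle update in the spirit of the
# abstract: grid cells ("particles") never move; their probabilities are updated
# multiplicatively with the likelihood map of each information source.

def update_probability_map(prior, likelihood_maps, source_weights):
    """Fuse one step of measurements into the position probability grid."""
    posterior = prior.astype(float).copy()
    for likelihood, weight in zip(likelihood_maps, source_weights):
        # Raising a source's likelihood to a power is one common heuristic
        # for tuning how strongly that source influences the posterior.
        posterior *= np.power(likelihood, weight)
    total = posterior.sum()
    return posterior / total if total > 0 else posterior

# Toy 3x3 building grid: uniform prior, WLAN and BLE fingerprint likelihoods.
prior = np.full((3, 3), 1.0 / 9.0)
wlan = np.array([[0.1, 0.2, 0.1], [0.2, 0.9, 0.2], [0.1, 0.2, 0.1]])
ble = np.array([[0.2, 0.3, 0.2], [0.3, 0.8, 0.3], [0.2, 0.3, 0.2]])
posterior = update_probability_map(prior, [wlan, ble], source_weights=[1.0, 0.5])
print(np.unravel_index(posterior.argmax(), posterior.shape))  # (1, 1), the center cell
```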


Author(s):  
Dang Quang Hieu ◽  
Nguyen Xuan Truong

The article presents the basic principles of the design and development of an integrated medium-range Coastal Surveillance System (CSS) used for water-surface observation. It provides solutions for missions such as command and control of maritime forces, border monitoring and control, prevention of illegal activities such as piracy, smuggling, illegal immigration, and illegal fishing, and support of search and rescue (SAR) operations, and it creates a common situational awareness picture of the naval theatre. The system structure diagram is designed to solve the computational overload problem that arises when processing the large volumes of data received from radar stations. The measurement-level fusion algorithm is developed within the JPDA framework, in which radar data received from a single radar or a group of radars and AIS data are aggregated in a processing center. The servers and workstations use a local area network (LAN) based on standard Gigabit Ethernet technologies for local network communications. Acquisition, analysis, storage, and distribution of target data are performed on servers; the data are then sent to automated operator stations (consoles), where functional operations for managing, identifying, and displaying targets on a digital situational map are performed.
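For orientation, the sketch below shows a simplified measurement-to-track gating and weighting step in the spirit of (J)PDA-style trackers; a full JPDA implementation enumerates joint association events across all tracks, and the gate threshold, covariance values, and function names here are assumptions for illustration.

```python
import numpy as np

# Simplified illustration of the measurement-to-track association step used in
# (J)PDA-style trackers: incoming radar/AIS plots are gated with the Mahalanobis
# distance and then weighted by their Gaussian likelihoods for a single track.

def pda_weights(predicted_pos, innovation_cov, measurements, gate=9.21):
    """Return (indices_in_gate, association_weights) for one track."""
    inv_S = np.linalg.inv(innovation_cov)
    norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(innovation_cov)))
    indices, likelihoods = [], []
    for i, z in enumerate(measurements):
        residual = np.asarray(z, dtype=float) - predicted_pos
        d2 = residual @ inv_S @ residual            # squared Mahalanobis distance
        if d2 <= gate:                              # chi-square gate, 2 dof (~99%)
            indices.append(i)
            likelihoods.append(norm * np.exp(-0.5 * d2))
    likelihoods = np.asarray(likelihoods)
    weights = likelihoods / likelihoods.sum() if likelihoods.size else likelihoods
    return indices, weights

# One track, three incoming plots (the last one falls far outside the gate).
track_pos = np.array([1000.0, 2000.0])
S = np.diag([100.0, 100.0])                         # innovation covariance (m^2)
plots = [np.array([1010.0, 1995.0]), np.array([995.0, 2008.0]),
         np.array([1500.0, 2500.0])]
print(pda_weights(track_pos, S, plots))
```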


2014 ◽  
Vol 1 (2) ◽  
pp. 25-31
Author(s):  
T. Subha ◽  
S. Jayashri

2021 ◽  
Vol 4 (1) ◽  
pp. 3
Author(s):  
Parag Narkhede ◽  
Rahee Walambe ◽  
Shruti Mandaokar ◽  
Pulkit Chandel ◽  
Ketan Kotecha ◽  
...  

With rapid industrialization and technological advancement, innovative engineering technologies that are cost-effective, faster, and easier to implement are essential. One such area of concern is the rising number of accidents caused by gas leaks in coal mines, chemical industries, home appliances, etc. In this paper, we propose a novel approach to detect and identify gaseous emissions using multimodal AI fusion techniques. Most gases and their fumes are colorless, odorless, and tasteless, thereby challenging our normal human senses. Sensing based on a single sensor may not be accurate, and sensor fusion is essential for robust and reliable detection in several real-world applications. We manually collected 6400 gas samples (1600 samples per class for four classes) using two specific sensors: a 7-semiconductor gas sensor array and a thermal camera. The early fusion method of multimodal AI is applied. The network architecture consists of a feature extraction module for each modality; the extracted features are fused using a merged layer followed by a dense layer, which provides a single output identifying the gas. We obtained a testing accuracy of 96% for the fused model, as opposed to individual model accuracies of 82% (gas sensor data using an LSTM) and 93% (thermal image data using a CNN model). The results demonstrate that fusion of multiple sensors and modalities outperforms a single sensor.
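A hedged sketch of the described two-branch early-fusion architecture is given below in PyTorch: an LSTM branch for the 7-channel gas-sensor sequences, a small CNN branch for thermal images, feature concatenation (the merged layer), and a dense head for the four gas classes. The layer sizes, input resolution, and class names are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

# Hedged sketch of the described fusion architecture: an LSTM branch for the
# 7-channel gas-sensor time series, a CNN branch for thermal images,
# concatenated ("merged") features, and a dense head for four gas classes.

class GasFusionNet(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        # Sensor branch: sequences of 7 gas-sensor readings per time step.
        self.lstm = nn.LSTM(input_size=7, hidden_size=32, batch_first=True)
        # Thermal branch: single-channel images, assumed 64x64.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Merged layer followed by a dense layer giving the class output.
        self.head = nn.Sequential(nn.Linear(32 + 16, 64), nn.ReLU(),
                                  nn.Linear(64, num_classes))

    def forward(self, sensor_seq, thermal_img):
        _, (h_n, _) = self.lstm(sensor_seq)            # final hidden state
        fused = torch.cat([h_n[-1], self.cnn(thermal_img)], dim=1)
        return self.head(fused)

# One batch of 2 samples: 50 time steps of 7 sensors plus a 64x64 thermal frame.
model = GasFusionNet()
logits = model(torch.randn(2, 50, 7), torch.randn(2, 1, 64, 64))
print(logits.shape)  # torch.Size([2, 4])
```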

