Survey of Datafusion Techniques for Laser and Vision Based Sensor Integration for Autonomous Navigation

Sensors ◽  
2020 ◽  
Vol 20 (8) ◽  
pp. 2180 ◽  
Author(s):  
Prasanna Kolar ◽  
Patrick Benavidez ◽  
Mo Jamshidi

This paper focuses on data fusion, which is fundamental to perception, one of the most important modules in any autonomous system. Over the past decade, there has been a surge in the use of smart/autonomous mobility systems. Such systems serve many areas of life, such as safe mobility for the disabled and senior citizens, and depend on accurate sensor information in order to function optimally. This information may come from a single sensor or from a suite of sensors of the same or different modalities. We review various types of sensors, their data, and the need to fuse these data with each other to produce the best data for the task at hand, which in this case is autonomous navigation. To obtain such accurate data, we need optimal technology to read the sensor data, process the data, eliminate or at least reduce noise, and then use the data for the required tasks. We present a survey of current data processing techniques that implement data fusion using different sensors, such as LiDAR, which uses light-scan technology, and stereo/depth, monocular Red Green Blue (RGB) and Time-of-Flight (ToF) cameras, which use optical technology, and we review the efficiency of using fused data from multiple sensors rather than a single sensor in autonomous navigation tasks such as mapping, obstacle detection and avoidance, and localization. This survey provides sensor information to researchers who intend to accomplish the task of motion control of a robot and details the use of LiDAR and cameras to accomplish robot navigation.

2014 ◽  
Vol 494-495 ◽  
pp. 869-872
Author(s):  
Xian Bao Wang ◽  
Shi Hai Zhao ◽  
Guo Wei

Following the theory of multi-sensor information fusion, this system applies D-S evidence theory to fuse feedback information from multiple sensors that observe the solution concentration from different angles, so that they reach a consistent judgment. Using the D-S evidence theory method of multi-sensor data fusion not only compensates for the disadvantages of a single sensor but also greatly reduces the uncertainty of the judgment. Additionally, the system improves the speed and accuracy of solution concentration detection and broadens the application field of multi-sensor information fusion technology.
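The fusion step the abstract describes is Dempster's rule of combination. A minimal sketch of that rule follows; the frame of discernment (low/normal/high concentration) and the mass values assigned to the two sensors are hypothetical, chosen only to illustrate how combining evidence shrinks uncertainty:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dict: frozenset -> mass) with Dempster's rule."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:  # compatible evidence: assign product mass to the intersection
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:      # contradictory evidence accumulates as the conflict mass K
            conflict += ma * mb
    # Normalise by the non-conflicting mass 1 - K
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Two hypothetical sensors judging whether the concentration is low/normal/high.
LOW, NORMAL, HIGH = frozenset({"low"}), frozenset({"normal"}), frozenset({"high"})
THETA = LOW | NORMAL | HIGH  # full frame = "don't know"
s1 = {NORMAL: 0.6, HIGH: 0.3, THETA: 0.1}
s2 = {NORMAL: 0.7, LOW: 0.2, THETA: 0.1}
fused = dempster_combine(s1, s2)
```

After combination, the mass on "normal" rises above what either sensor assigned alone, while the mass left on the full frame (the uncertainty) drops, which is the effect the abstract claims.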


2021 ◽  
Vol 4 (1) ◽  
pp. 3
Author(s):  
Parag Narkhede ◽  
Rahee Walambe ◽  
Shruti Mandaokar ◽  
Pulkit Chandel ◽  
Ketan Kotecha ◽  
...  

With rapid industrialization and technological advancement, innovative engineering technologies that are cost-effective, faster, and easier to implement are essential. One such area of concern is the rising number of accidents caused by gas leaks at coal mines, chemical industries, home appliances, etc. In this paper, we propose a novel approach to detecting and identifying gaseous emissions using multimodal AI fusion techniques. Most gases and their fumes are colorless, odorless, and tasteless, thereby challenging our normal human senses. Sensing based on a single sensor may not be accurate, and sensor fusion is essential for robust and reliable detection in several real-world applications. We manually collected 6400 gas samples (1600 samples per class for four classes) using two specific sensors: a 7-semiconductor gas sensor array and a thermal camera. The early fusion method of multimodal AI is applied. The network architecture consists of a feature extraction module for each individual modality; the extracted features are fused in a merge layer followed by a dense layer, which provides a single output identifying the gas. We obtained a testing accuracy of 96% for the fused model, as opposed to individual model accuracies of 82% (gas sensor data using an LSTM) and 93% (thermal image data using a CNN model). The results demonstrate that the fusion of multiple sensors and modalities outperforms the outcome of a single sensor.
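The early (feature-level) fusion described above can be sketched without the full LSTM/CNN machinery. The NumPy snippet below is a minimal stand-in: the feature extractors, all weights, and the input dimensions (a 7-sensor gas reading, a flattened thermal patch, four gas classes) are hypothetical placeholders, and only the fusion pattern itself (concatenate per-modality features, then one dense softmax layer) mirrors the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(x, w):
    """Stand-in for a per-modality feature extractor (LSTM/CNN in the paper)."""
    return np.tanh(x @ w)

# Hypothetical inputs: one 7-sensor gas reading and one flattened thermal patch.
gas = rng.normal(size=7)
thermal = rng.normal(size=64)
w_gas, w_thermal = rng.normal(size=(7, 16)), rng.normal(size=(64, 16))

# Early fusion: concatenate both modalities' features into one merged vector,
# then apply a dense softmax layer over the four gas classes.
merged = np.concatenate([extract_features(gas, w_gas),
                         extract_features(thermal, w_thermal)])
w_dense = rng.normal(size=(32, 4))
logits = merged @ w_dense
probs = np.exp(logits - logits.max())
probs /= probs.sum()
pred = int(np.argmax(probs))
```

The key design choice of early fusion is that a single classifier sees both modalities jointly, so cross-modal correlations can inform the decision, unlike late fusion, which would only average two independent predictions.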


Author(s):  
Changxi Wang ◽  
E. A. Elsayed ◽  
Kang Li ◽  
Javier Cabrera

Multiple sensors are commonly used for degradation monitoring. Since different sensors may be sensitive at different stages of the degradation process, and each sensor's data contain only partial information about the degraded unit, data fusion approaches that integrate degradation data from multiple sensors can effectively improve degradation modeling and life prediction accuracy. We present a non-parametric approach that assigns weights to each sensor based on dynamic clustering of the sensors' observations. A case study involving a fatigue-crack-growth dataset is implemented in order to evaluate the prognostic performance of the unit. Results show that the fused path obtained with the proposed approach outperforms any individual sensor's data, as well as paths obtained with an adaptive threshold clustering algorithm, in terms of life prediction accuracy.
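The weighting idea can be illustrated with a much simpler stand-in for the paper's dynamic clustering: at each time step, weight each sensor by its closeness to the consensus of all sensors (here, the median), then fuse by weighted average. The three degradation signals below are synthetic, invented purely for illustration:

```python
import numpy as np

def fuse_sensors(readings, eps=1e-3):
    """Weight each sensor by closeness to the consensus (median) at each time
    step and return the weighted-average fused degradation path. A simplified
    stand-in for the paper's dynamic-clustering weights."""
    readings = np.asarray(readings, dtype=float)       # (n_sensors, n_times)
    center = np.median(readings, axis=0)               # consensus path
    inv_dist = 1.0 / (np.abs(readings - center) + eps)
    weights = inv_dist / inv_dist.sum(axis=0)          # normalise per time step
    return (weights * readings).sum(axis=0)

# Three hypothetical degradation signals of a unit whose true path is t;
# sensor 3 carries a large oscillatory disturbance.
t = np.linspace(0, 1, 50)
s1, s2 = 0.9 * t, 1.1 * t
s3 = t + 0.5 * np.sin(20 * t)
fused = fuse_sensors([s1, s2, s3])
```

Because the weights adapt at every time step, a sensor is down-weighted only while it disagrees with the others, which is what lets the fused path beat any fixed single sensor.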


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Guangbing Zhou ◽  
Jing Luo ◽  
Shugong Xu ◽  
Shunqing Zhang ◽  
Shige Meng ◽  
...  

Purpose Indoor localization is a key tool for robot navigation in indoor environments. Traditionally, robot navigation has depended on a single sensor for autonomous localization. To enhance the navigation performance of mobile robots, this paper proposes a multiple data fusion (MDF) method for indoor environments. Design/methodology/approach Multiple sensor data, i.e. information collected from an inertial measurement unit, an odometer and a laser radar, are used. An extended Kalman filter (EKF) then incorporates these multiple data, and the mobile robot can perform autonomous localization according to the proposed EKF-based MDF method in complex indoor environments. Findings The proposed method has been experimentally verified in different indoor environments, i.e. an office, a passageway and an exhibition hall. Experimental results show that the EKF-based MDF method achieves the best localization performance and robustness during navigation. Originality/value Indoor localization precision depends largely on the data collected from multiple sensors. The proposed method incorporates these collected data reasonably and can guide the mobile robot to perform autonomous navigation (AN) in indoor environments. The output of this paper can therefore be used for AN in complex and unknown indoor environments.
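The predict/update cycle at the heart of an EKF-based fusion scheme can be shown in one dimension, where the EKF reduces to a linear Kalman filter. The sketch below is not the paper's filter: the 1-D state, the noise variances `q` and `r`, and the idea of treating the odometer as the motion input and the laser as a position fix are all simplifying assumptions for illustration:

```python
import numpy as np

def kf_step(x, p, u, z, q=0.05, r=0.1):
    """One predict/update cycle of a 1-D Kalman filter (the linear core of an
    EKF): predict with the odometer increment u, correct with the laser fix z."""
    # Predict: motion model x' = x + u, with process-noise variance q
    x_pred, p_pred = x + u, p + q
    # Update: measurement model z = x + noise with variance r
    k = p_pred / (p_pred + r)          # Kalman gain
    x_new = x_pred + k * (z - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new

rng = np.random.default_rng(1)
true_x, x, p = 0.0, 0.0, 1.0
for _ in range(50):
    true_x += 0.1
    u = 0.1 + rng.normal(0, 0.05)      # noisy odometry increment
    z = true_x + rng.normal(0, 0.1)    # noisy laser position fix
    x, p = kf_step(x, p, u, z)
```

The fused estimate corrects the odometer's accumulating drift with each laser fix, while the shrinking covariance `p` quantifies the filter's growing confidence.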


Author(s):  
O. Sekkas ◽  
S. Hadjiefthymiades ◽  
E. Zervas

During the past few years, several location systems have been proposed that use multiple technologies simultaneously in order to locate a user. One such system is described in this article. It relies on multiple sensor readings from Wi-Fi access points, IR beacons, RFID tags, and so forth to estimate the location of a user. This technique is better known as sensor information fusion, which aims to improve accuracy and precision by integrating heterogeneous sensor observations. The proposed location system uses a fusion engine based on dynamic Bayesian networks (DBNs), thus substantially improving accuracy and precision.
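In its simplest discrete form, a DBN location fuser is a recursive Bayes filter: a transition matrix plays the role of the DBN's temporal links, and each sensor contributes an observation likelihood assumed conditionally independent given the location. The three rooms, the transition matrix, and the likelihood values below are all illustrative assumptions, not taken from the article:

```python
import numpy as np

# Motion model over three hypothetical rooms (the DBN's temporal links):
# rows = previous room, cols = next room.
T = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])

# Per-sensor likelihoods P(observation | room), assumed conditionally
# independent given the room (the DBN's sensor nodes). Illustrative values.
wifi = np.array([0.7, 0.2, 0.1])   # likelihood of the observed Wi-Fi RSSI
rfid = np.array([0.6, 0.3, 0.1])   # likelihood of the observed RFID read

belief = np.array([1/3, 1/3, 1/3])  # uniform prior over rooms
for _ in range(3):
    belief = belief @ T              # predict with the motion model
    belief *= wifi * rfid            # fuse both sensors' evidence
    belief /= belief.sum()           # renormalise
room = int(np.argmax(belief))
```

Multiplying the two likelihoods is exactly where the heterogeneous observations are fused: each sensor alone is ambiguous, but their product concentrates the belief on one room within a few time steps.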


Author(s):  
Zude Zhou ◽  
Huaiqing Wang ◽  
Ping Lou

In previous chapters, the engineering and scientific foundations of manufacturing intelligence (such as knowledge-based systems, multi-agent systems, data mining and knowledge discovery, and computational intelligence) were discussed in detail. Sensor integration and data fusion is another important theory of manufacturing intelligence. As integrated systems develop, their complexity and scale increase, and there is an urgent requirement to improve system automation and intelligence. Such systems need to be more sensitive to their work environment and internal state, and a single sensor can hardly meet these requirements. Multi-sensor and data fusion technology is therefore employed in automatic and intelligent manufacturing, as it is more comprehensive and accurate than traditional single-sensor technology when information redundancy and complementarity are used reasonably. In theory, the outputs of multiple sensors are mutually validated. Multi-sensor integration is a brand-new concept for intelligent manufacturing, and sensor-integration-based intelligent manufacturing is without doubt the future direction of manufacturing. With reference to the information fusion problem of the multi-sensor integration system, this chapter first reviews the development state, technical background, application scope and basic meaning of multi-sensor integration and data fusion. Secondly, the classification, levels, system structure and function model of the data fusion system are discussed. The theoretical methods of data fusion are then introduced. Finally, attention is paid to cutting-tool condition detection, machine thermal error compensation, and online detection and error compensation, as these are the main applications of multi-sensor data fusion technology in intelligent manufacturing.


2014 ◽  
Vol 651-653 ◽  
pp. 831-834
Author(s):  
Xi Pei Ma ◽  
Bing Feng Qian ◽  
Song Jie Zhang ◽  
Ye Wang

The autonomous navigation of a mobile service robot usually takes place in an uncertain environment. Information from a single sensor can no longer meet the demands of modern mobile robots, so multi-sensor data fusion has been widely used in the field of robotics. The platform of this project is a prototype nursing robot, an achievement of an important national research project under the 863 Program. The aim is to study a mobile service robot's multi-sensor information fusion, path planning and movement control methods. It can provide a basis and a practical reference for the study of indoor robot localization.


2012 ◽  
Vol 466-467 ◽  
pp. 1222-1226
Author(s):  
Bin Ma ◽  
Lin Chong Hao ◽  
Wan Jiang Zhang ◽  
Jing Dai ◽  
Zhong Hua Han

In this paper, we present an equipment fault diagnosis method based on multi-sensor data fusion, in order to solve problems such as uncertainty, imprecision and low reliability caused by using a single sensor to diagnose equipment faults. We use a variety of sensors to collect data on the diagnosed objects and fuse the data using D-S evidence theory; according to the changes in confidence and uncertainty, we diagnose whether faults have occurred. Experimental results show that the D-S evidence theory algorithm can reduce the uncertainty of fault diagnosis results and improve diagnostic accuracy and reliability; compared with fault diagnosis using a single sensor, this method performs better.


2020 ◽  
Author(s):  
Huihui Pan ◽  
Weichao Sun ◽  
Qiming Sun ◽  
Huijun Gao

Abstract Environmental perception is one of the key technologies for realizing autonomous vehicles, which are often equipped with multiple sensors forming a multi-source environmental perception system. These sensors are very sensitive to light or background conditions, which introduce a variety of global and local fault signals that pose great safety risks to the autonomous driving system during long-term operation. In this paper, a real-time data fusion network with fault diagnosis and fault tolerance mechanisms is designed. By introducing prior features to make the backbone network lightweight, the features of the input data can be extracted accurately in real time. Through the temporal and spatial correlation between sensor data, sensor redundancy is exploited to diagnose the local and global confidence of sensor data in real time, eliminate faulty data, and ensure the accuracy and reliability of data fusion. Experiments show that the network achieves state-of-the-art results in speed and accuracy, and can accurately detect the location of the target when some sensors are out of focus or out of order.
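The redundancy-based fault rejection described above can be sketched in a few lines. This is not the paper's network, only a minimal stand-in for the confidence check: readings that deviate from the robust consensus of the redundant sensors are flagged as faulty and excluded before fusion. The four range readings and the MAD-based threshold are illustrative assumptions:

```python
import numpy as np

def fuse_with_fault_rejection(readings, tol=3.0):
    """Use sensor redundancy to reject faulty channels: flag readings more than
    `tol` scaled MADs away from the median, then average the survivors.
    A simplified stand-in for the paper's confidence-based fault diagnosis."""
    r = np.asarray(readings, dtype=float)
    med = np.median(r)
    mad = np.median(np.abs(r - med)) + 1e-9      # robust spread estimate
    ok = np.abs(r - med) <= tol * 1.4826 * mad   # 1.4826: MAD -> std (Gaussian)
    return r[ok].mean(), ~ok                     # fused value, fault mask

# Four hypothetical redundant range sensors; the last one is out of order.
fused, faulty = fuse_with_fault_rejection([10.1, 9.9, 10.0, 42.0])
```

Using the median and MAD rather than the mean and standard deviation keeps the consensus itself from being dragged toward the faulty channel, so a single broken sensor cannot mask its own fault.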


Author(s):  
M. Schmitt ◽  
L. H. Hughes ◽  
X. X. Zhu

<p><strong>Abstract.</strong> While deep learning techniques have an increasing impact on many technical fields, gathering sufficient amounts of training data is a challenging problem in remote sensing. In particular, this holds for applications involving data from multiple sensors with heterogeneous characteristics. One example for that is the fusion of synthetic aperture radar (SAR) data and optical imagery. With this paper, we publish the <i>SEN1-2</i> dataset to foster deep learning research in SAR-optical data fusion. <i>SEN1-2</i> comprises 282,384 pairs of corresponding image patches, collected from across the globe and throughout all meteorological seasons. Besides a detailed description of the dataset, we show exemplary results for several possible applications, such as SAR image colorization, SAR-optical image matching, and creation of artificial optical images from SAR input data. Since <i>SEN1-2</i> is the first large open dataset of this kind, we believe it will support further developments in the field of deep learning for remote sensing as well as multi-sensor data fusion.</p>

