Information Fusion of Multi-Sensor Images

Author(s):  
Yu-Jin Zhang

Human perception of the outside world results from the interaction between the brain and many sensory organs. For example, the intelligent robots currently under investigation can carry many sensors for vision, hearing, taste, smell, touch, pain, heat, force, slip, and proximity (Luo, 2002). All these sensors provide different profiles of the same scene in the same environment. To coordinate the various sensors and combine the information they obtain with suitable techniques, the theories and methods of multi-sensor fusion are required. Multi-sensor information fusion is a basic ability of human beings. A single sensor can provide only incomplete, inaccurate, vague, or uncertain information; sometimes, the information obtained by different sensors can even be contradictory. Human beings have the ability to combine the information obtained by different organs and then make estimates and decisions about the environment and events. Using a computer to perform multi-sensor information fusion can therefore be considered a simulation of the way the human brain treats complex problems. Multi-sensor information fusion operates on the data coming from various sensors to obtain more comprehensive, accurate, and robust results than those obtained from any single sensor. Fusion can be defined as the process of jointly treating data acquired from multiple sensors, as well as sorting, optimizing, and consolidating these data to increase the ability to extract information and to improve decision capability. Fusion can extend the coverage of spatial and temporal information, reduce fuzziness, increase the reliability of decision making, and improve the robustness of systems. Image fusion is a particular type of multi-sensor fusion that takes images as its operating objects.
In a more general sense of image engineering (Zhang, 2006), the combination of multi-resolution images can also be counted as a fusion process. In this article, however, the emphasis is put on the information fusion of multi-sensor images.
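The simplest instance of the image fusion described above is pixel-level fusion of two co-registered images by weighted averaging. The sketch below is a minimal illustration only; the toy images, the weight, and the function name are assumptions for demonstration, not a method from any of the papers listed here.

```python
import numpy as np

def fuse_images(img_a, img_b, w_a=0.5):
    """Pixel-level fusion of two co-registered images by weighted
    averaging. Both inputs are assumed to be aligned arrays of the
    same shape; the result blends the information of both sensors."""
    img_a = img_a.astype(float)
    img_b = img_b.astype(float)
    return w_a * img_a + (1.0 - w_a) * img_b

# Two toy 2x2 "sensor" images of the same scene
visible = np.array([[100, 200], [50, 150]])
infrared = np.array([[80, 120], [90, 110]])
fused = fuse_images(visible, infrared, w_a=0.6)
```

Real multi-sensor image fusion typically replaces the fixed weight with per-pixel or per-region weights derived from local saliency or from a multi-resolution decomposition.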

2011 ◽  
Vol 225-226 ◽  
pp. 115-119
Author(s):  
Lian Jun Hu ◽  
Hong Song ◽  
Yi Luo ◽  
Xiao Hui Zeng ◽  
Bing Qiang Wang

A controller based on a fuzzy neural network is designed in this paper. Fuzzy neural networks are introduced into the fusion of signals from the sensors of an AS-R intelligent robot. Characteristic information about unknown environments acquired by ultrasonic, infrared, and vision sensors is fused in order to eliminate the uncertainty caused by any single sensor. As a result, precise environment information can be obtained and the fault tolerance of the robot is improved. Simulation results prove that intelligent robots adopting multi-sensor information fusion techniques have better real-time performance and robustness.


2014 ◽  
Vol 494-495 ◽  
pp. 869-872
Author(s):  
Xian Bao Wang ◽  
Shi Hai Zhao ◽  
Guo Wei

Following the theory of multi-sensor information fusion, this system uses D-S evidence theory to fuse feedback information from multiple sensors observing a solution concentration from different angles, so that they reach a consistent judgment. Using the D-S evidence-theory method of multi-sensor data fusion not only makes up for the disadvantages of a single sensor but also largely reduces the uncertainty of the judgment. Additionally, the system improves the speed and accuracy of solution-concentration detection and broadens the application field of multi-sensor information fusion technology.
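The core of the D-S approach is Dempster's rule of combination, which merges the basic probability assignments of two sensors while renormalizing away their conflict. The sketch below implements the standard rule; the "high"/"low" concentration hypotheses and the mass values are illustrative assumptions, not data from the paper.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic probability assignments (dicts mapping
    frozenset hypotheses to masses) with Dempster's rule: multiply
    masses of intersecting hypotheses and renormalize by 1 - K,
    where K is the total mass assigned to empty intersections."""
    combined = {}
    conflict = 0.0
    for (h1, v1), (h2, v2) in product(m1.items(), m2.items()):
        inter = h1 & h2
        if inter:
            combined[inter] = combined.get(inter, 0.0) + v1 * v2
        else:
            conflict += v1 * v2
    if conflict >= 1.0:
        raise ValueError("sensors are totally conflicting")
    k = 1.0 - conflict
    return {h: v / k for h, v in combined.items()}

# Two sensors judging whether the concentration is "high" or "low";
# mass on the full set {"high", "low"} represents ignorance.
s1 = {frozenset({"high"}): 0.7, frozenset({"high", "low"}): 0.3}
s2 = {frozenset({"high"}): 0.6, frozenset({"low"}): 0.1,
      frozenset({"high", "low"}): 0.3}
fused = dempster_combine(s1, s2)
```

Note how the fused belief in "high" exceeds either sensor's individual mass, which is exactly the uncertainty reduction the abstract refers to.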


2021 ◽  
Vol 4 (1) ◽  
pp. 3
Author(s):  
Parag Narkhede ◽  
Rahee Walambe ◽  
Shruti Mandaokar ◽  
Pulkit Chandel ◽  
Ketan Kotecha ◽  
...  

With rapid industrialization and technological advancement, innovative engineering technologies that are cost-effective, faster, and easier to implement are essential. One such area of concern is the rising number of accidents caused by gas leaks at coal mines, chemical industries, home appliances, etc. In this paper, we propose a novel approach to detect and identify gaseous emissions using multimodal AI fusion techniques. Most gases and their fumes are colorless, odorless, and tasteless, thereby challenging our normal human senses. Sensing based on a single sensor may not be accurate, and sensor fusion is essential for robust and reliable detection in several real-world applications. We manually collected 6400 gas samples (1600 samples per class for four classes) using two specific sensors: a 7-semiconductor gas sensor array and a thermal camera. The early fusion method of multimodal AI is applied: the network architecture consists of a feature-extraction module for each modality, whose outputs are fused in a merge layer followed by a dense layer that provides a single output identifying the gas. We obtained a testing accuracy of 96% for the fused model, as opposed to individual model accuracies of 82% (gas sensor data using an LSTM) and 93% (thermal image data using a CNN model). The results demonstrate that the fusion of multiple sensors and modalities outperforms a single sensor.
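The early-fusion architecture described above can be sketched in a few lines: extract a feature vector per modality, concatenate, and classify. The sketch below is a deliberately simplified stand-in; the paper's actual branches are a trained LSTM and CNN, whereas here the feature extractors, dimensions, and the untrained random weights are all assumptions made purely to show the data flow.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_gas_features(gas_seq):
    """Stand-in for the LSTM branch: summarize a (time, 7) sensor-array
    sequence by per-channel mean and range. Illustrative only."""
    return np.concatenate([gas_seq.mean(axis=0),
                           gas_seq.max(axis=0) - gas_seq.min(axis=0)])

def extract_thermal_features(img):
    """Stand-in for the CNN branch: average the thermal image over a
    2x2 grid of blocks and flatten to a 4-element vector."""
    h, w = img.shape
    pooled = img.reshape(2, h // 2, 2, w // 2).mean(axis=(1, 3))
    return pooled.ravel()

def early_fusion_predict(gas_seq, img, W, b):
    """Early fusion: concatenate modality features into one vector,
    then apply a single dense layer with softmax over the 4 classes."""
    feats = np.concatenate([extract_gas_features(gas_seq),
                            extract_thermal_features(img)])
    logits = W @ feats + b
    e = np.exp(logits - logits.max())
    return e / e.sum()

gas_seq = rng.normal(size=(20, 7))   # 20 time steps, 7 sensors
thermal = rng.normal(size=(8, 8))    # toy 8x8 thermal image
W = rng.normal(size=(4, 14 + 4))     # 4 classes; untrained weights
b = np.zeros(4)
probs = early_fusion_predict(gas_seq, thermal, W, b)
```

In the paper's setting the merge happens on learned feature maps and the dense head is trained end to end; the point here is only that both modalities enter a single joint representation before classification.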


2012 ◽  
Vol 490-495 ◽  
pp. 91-94 ◽  
Author(s):  
Li Fu ◽  
Jun Xiang Wang

A design and implementation of a detection system for dangerous driving based on multi-sensor fusion is proposed. It is an embedded system consisting of a vision sensor, an acceleration sensor, an alcohol sensor, and an ARM Cortex-M3 microcontroller. Experimental results show that the system has high linearity, high sensitivity, and excellent real-time performance. It can further be used to validate multi-sensor information fusion algorithms in the field, improving on the low reliability of current single-sensor detection methods.
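One simple way such a system can fuse its three inputs is decision-level voting: each sensor raises a flag against its own threshold, and the alarm fires when enough flags agree. The thresholds, units, and the two-of-three rule below are assumptions for illustration, not the paper's calibration.

```python
def dangerous_driving(accel_g, alcohol_mg_l, lane_offset_m):
    """Illustrative decision-level fusion: each sensor votes against
    an assumed threshold, and any two of three votes raise the alarm,
    so no single noisy sensor can trigger it alone."""
    flags = [
        abs(accel_g) > 0.5,       # harsh acceleration/braking
        alcohol_mg_l > 0.25,      # breath alcohol above limit
        lane_offset_m > 0.8,      # vision-based lane deviation
    ]
    return sum(flags) >= 2
```

A deployed system would replace the fixed thresholds with calibrated ones and possibly weight the votes, but the redundancy principle is the same.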


Sensors ◽  
2020 ◽  
Vol 20 (8) ◽  
pp. 2180 ◽  
Author(s):  
Prasanna Kolar ◽  
Patrick Benavidez ◽  
Mo Jamshidi

This paper focuses on data fusion, which is fundamental to one of the most important modules in any autonomous system: perception. Over the past decade, there has been a surge in the usage of smart/autonomous mobility systems. Such systems can be used in various areas of life, like safe mobility for the disabled, senior citizens, and so on, and depend on accurate sensor information in order to function optimally. This information may come from a single sensor or a suite of sensors with the same or different modalities. We review various types of sensors, their data, and the need to fuse the data with each other to output the best data for the task at hand, which in this case is autonomous navigation. To obtain such accurate data, we need optimal technology to read the sensor data, process the data, eliminate or at least reduce noise, and then use the data for the required tasks. We present a survey of current data-processing techniques that implement data fusion using different sensors: LiDAR, which uses light-scan technology, and stereo/depth, Red Green Blue monocular (RGB), and Time-of-Flight (TOF) cameras, which use optical technology. We also review the efficiency of using fused data from multiple sensors, rather than a single sensor, in autonomous navigation tasks like mapping, obstacle detection and avoidance, and localization. This survey will provide sensor information to researchers who intend to accomplish the task of motion control of a robot and detail the use of LiDAR and cameras to accomplish robot navigation.


2012 ◽  
Vol 532-533 ◽  
pp. 1006-1010 ◽  
Author(s):  
Ye Li ◽  
Yan Qing Jiang

The application of distributed multi-sensor information fusion technology to the accurate positioning of an underwater vehicle is introduced in this paper. Based on the distributed multi-sensor structure of the AUV "T1", this article establishes a Kalman filtering mathematical model and implements the Kalman-filter-based fusion algorithm together with a numerical simulation. The experimental results show that Kalman-filter-based fusion can avoid the limitations of a single sensor, reduce the impact of its uncertainty, and increase the confidence level of the data.
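The essence of Kalman-filter fusion is that each sensor's measurement update shrinks the estimate's variance below what any single sensor achieves. The scalar sketch below shows two sequential updates on one state; the depth scenario, sensor names, and noise variances are illustrative assumptions, far simpler than the AUV's full multi-dimensional model.

```python
def kalman_update(x, p, z, r):
    """One scalar Kalman measurement update: prior estimate x with
    variance p is corrected by measurement z with noise variance r.
    The gain k weights the innovation by relative uncertainty."""
    k = p / (p + r)              # Kalman gain in [0, 1]
    x_new = x + k * (z - x)      # pull estimate toward measurement
    p_new = (1.0 - k) * p        # variance always shrinks
    return x_new, p_new

# Fuse a noisy depth sonar (variance 4.0) and a pressure-derived
# depth reading (variance 1.0) into one estimate.
x, p = 0.0, 100.0                         # vague prior
x, p = kalman_update(x, p, 10.4, 4.0)     # sonar reads 10.4 m
x, p = kalman_update(x, p, 9.8, 1.0)      # pressure sensor: 9.8 m
```

After both updates the fused variance is below 1.0, i.e. below the better of the two sensors alone, which is the confidence-level gain the abstract describes.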


2013 ◽  
Vol 448-453 ◽  
pp. 3549-3552
Author(s):  
Guo Qing Qiu ◽  
Yong Can Yu ◽  
Ming Li ◽  
Yi Long

Multi-sensor information fusion combines the redundant, complementary, or more timely information gained from multiple sensors in a system to provide more reliable and accurate information. For the problem of sensing a mobile robot's environment, a control method based on a Takagi-Sugeno (T-S) type fuzzy neural network is given; it effectively fuses the information collected from multiple ultrasonic sensors and a CCD camera and realizes real-time control of the mobile robot. Obstacle-avoidance results on a mobile robot verify the effectiveness of the method.
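A T-S controller computes each rule's firing strength from fuzzy memberships of the sensor inputs and outputs a weighted average of the rules' linear consequents. The sketch below is a minimal first-order T-S inference step; the membership ranges, the two rules, and the steering interpretation are invented for illustration and are not the paper's trained network.

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside (a, c), 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def ts_steering(front, side):
    """First-order Takagi-Sugeno inference: fuzzy firing strengths
    computed from a front ultrasonic range weight linear consequents
    in the side offset, yielding one crisp steering command.
    Memberships and consequents are illustrative assumptions."""
    near_f = tri(front, -0.5, 0.0, 1.0)   # front obstacle near
    far_f = tri(front, 0.5, 2.0, 3.5)     # front clear
    rules = [
        (near_f, 0.8 - 0.1 * side),   # near: turn hard away
        (far_f, 0.1 * side),          # clear: mild lane correction
    ]
    total = sum(w for w, _ in rules)
    if total == 0.0:
        return 0.0
    return sum(w * y for w, y in rules) / total
```

In the paper's setting the memberships and consequent coefficients are the learnable parameters of the fuzzy neural network; here they are fixed by hand to keep the inference step visible.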


2013 ◽  
Vol 443 ◽  
pp. 299-302
Author(s):  
Ran Zhang ◽  
Jing Zi Wei

Multi-sensor information fusion (MIF) can obtain the more accurate information required by a system by fusing the redundant, complementary, or more real-time information provided by multiple sensors. This paper, with emphasis on MIF technology and its application in mobile robots, discusses two aspects: theory and simulation experiments. First, it expounds the basic principles of MIF technology, the system structure and levels of information fusion, and information fusion methods; second, based on the theory of neural-network ensembles, it discusses the application of MIF technology in the field of robotics, providing an effective method for navigation and obstacle avoidance of mobile robots.
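The simplest neural-network-ensemble fusion averages the class-probability outputs of several networks and picks the top class, so that no individual network's error dominates. The toy networks and the action labels below are assumptions for illustration only.

```python
import numpy as np

def ensemble_fuse(predictions):
    """Neural-network-ensemble fusion: average the per-class
    probability outputs of several networks, then select the
    class with the highest averaged probability."""
    avg = np.mean(predictions, axis=0)
    return avg, int(np.argmax(avg))

# Three toy networks scoring robot actions {forward, left, right}
nets = np.array([
    [0.6, 0.3, 0.1],
    [0.5, 0.2, 0.3],
    [0.4, 0.5, 0.1],
])
avg, action = ensemble_fuse(nets)
```

Weighted averaging (trusting better-validated networks more) is a common refinement of the same idea.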


2021 ◽  
Vol 2136 (1) ◽  
pp. 012036
Author(s):  
Chaoyu Wang ◽  
Zhi Liu ◽  
Yakun Wang

Intelligent fault diagnosis technology has become a focus of research in various fields. Its realization depends on acquiring the equipment state through sensors. Because the fault information provided by a single sensor is limited and cannot fully reflect the fault state of the tested object, multiple sensors are needed to collect and fuse the fault information of rolling bearings to ensure the accuracy of intelligent fault diagnosis. On this basis, this paper analyzes the application of the fuzzy rules of multi-sensor information fusion technology to the fault diagnosis of bearings in an optoelectronic pod, so as to provide a reference for realizing intelligent fault diagnosis of each structure in the optoelectronic pod.
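Fuzzy-rule fusion for fault diagnosis typically combines sensor memberships with min (for AND) and max (for OR). The sketch below evaluates two made-up rules over a vibration and a temperature sensor; the membership ranges, the rules, and the 0.5 cap are illustrative assumptions, not the paper's rule base.

```python
def ramp(x, lo, hi):
    """Membership rising linearly from 0 at lo to 1 at hi."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

def bearing_fault_degree(vibration_rms, temperature_c):
    """Max-min evaluation of two illustrative fuzzy rules fusing a
    vibration sensor and a temperature sensor:
      R1: IF vibration high AND temperature high THEN fault severe
      R2: IF vibration high alone THEN fault possible (capped at 0.5)
    Returns a fault degree in [0, 1]; thresholds are assumed."""
    vib_high = ramp(vibration_rms, 2.0, 8.0)
    temp_high = ramp(temperature_c, 60.0, 100.0)
    severe = min(vib_high, temp_high)     # AND via min
    possible = min(vib_high, 0.5)         # single-sensor rule, capped
    return max(severe, possible)          # OR via max
```

The cap on the single-sensor rule encodes exactly the abstract's point: one sensor alone can never fully confirm a fault, but agreement between sensors can.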

