The human-based multi-sensor fusion method for artificial nose and tongue sensor data

2013 ◽  
Vol 303-306 ◽  
pp. 912-917
Author(s):  
Xiao Jing Yang ◽  
Zheng Hu Yan

To explore the application of multi-sensor fusion technology in the nondestructive testing of agricultural products and to develop new detection methods for agriculture, multi-sensor information fusion was applied to identify the freshness of freshwater fish meat. Grass carp specimens of different freshness were identified by a multi-sensor fusion method in which three characteristic quantities, pH value, conductance, and smell, were measured, collected, and then fused using fuzzy theory. During the experiments, the total volatile basic nitrogen (TVB-N) values of both standard samples and test samples were measured. A freshness standard was established from the TVB-N values of the standard samples, and the correctness of the multi-sensor data fusion results was verified by comparing the TVB-N values of the test samples against this standard. The results show that freshwater fish meat of different freshness can be identified correctly by the fuzzy-theory-based multi-sensor data fusion method, with an accuracy rate of 94%.
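The abstract names fuzzy theory as the fusion mechanism but gives no membership functions or rules. Below is a minimal sketch of how three readings (pH, conductance, smell) could be fused by a min-max fuzzy rule into a freshness grade; all thresholds, value ranges, and the three-grade scale are hypothetical illustrations, not taken from the paper.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical membership parameters for each sensor reading, one triple
# per freshness grade (fresh, sub-fresh, spoiled). The paper does not
# publish its actual parameters; these ranges are illustrative only.
MEMBERSHIP = {
    "pH":          [(5.8, 6.2, 6.6), (6.4, 6.8, 7.2), (7.0, 7.5, 8.5)],
    "conductance": [(0.0, 0.2, 0.4), (0.3, 0.5, 0.7), (0.6, 0.8, 1.0)],
    "smell":       [(0.0, 0.1, 0.3), (0.2, 0.5, 0.8), (0.7, 0.9, 1.0)],
}
GRADES = ["fresh", "sub-fresh", "spoiled"]

def fuse(readings):
    """Min-max (Mamdani-style) fusion: a grade's support is the weakest
    evidence across sensors; the decision is the best-supported grade."""
    support = []
    for g in range(len(GRADES)):
        degrees = [tri(readings[k], *MEMBERSHIP[k][g]) for k in MEMBERSHIP]
        support.append(min(degrees))
    return GRADES[int(np.argmax(support))], support

grade, support = fuse({"pH": 6.7, "conductance": 0.45, "smell": 0.4})
print(grade, support)
```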


1998 ◽  
Vol 47 (5) ◽  
pp. 1072-1077 ◽  
Author(s):  
P. Wide ◽  
F. Winquist ◽  
P. Bergsten ◽  
E.M. Petriu

2021 ◽  
Vol 4 (1) ◽  
pp. 3
Author(s):  
Parag Narkhede ◽  
Rahee Walambe ◽  
Shruti Mandaokar ◽  
Pulkit Chandel ◽  
Ketan Kotecha ◽  
...  

With rapid industrialization and technological advancement, innovative engineering technologies that are cost effective, faster, and easier to implement are essential. One such area of concern is the rising number of accidents caused by gas leaks at coal mines, chemical industries, home appliances, etc. In this paper, we propose a novel approach to detect and identify gaseous emissions using multimodal AI fusion techniques. Most gases and their fumes are colorless, odorless, and tasteless, thereby challenging normal human senses. Sensing based on a single sensor may not be accurate, and sensor fusion is essential for robust and reliable detection in many real-world applications. We manually collected 6400 gas samples (1600 samples per class for four classes) using two specific sensors: a 7-semiconductor gas sensor array and a thermal camera. The early fusion method of multimodal AI is applied: the network architecture consists of a feature extraction module for each modality, whose outputs are fused in a merged layer followed by a dense layer that provides a single output identifying the gas. We obtained a testing accuracy of 96% for the fused model, as opposed to individual model accuracies of 82% (gas sensor data using an LSTM) and 93% (thermal image data using a CNN). The results demonstrate that fusing multiple sensors and modalities outperforms a single sensor.
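The abstract outlines the fused architecture: per-modality feature extractors (LSTM for the sensor array, CNN for thermal images), a merged layer, then dense layers. A minimal Keras sketch of that early-fusion layout follows; the input shapes, layer widths, and thermal image size are assumptions, since the paper's exact configuration is not given here.

```python
from tensorflow import keras
from tensorflow.keras import layers

gas_in = keras.Input(shape=(100, 7))        # assumed: 100 timesteps x 7 sensors
g = layers.LSTM(64)(gas_in)                 # gas-branch feature extractor

img_in = keras.Input(shape=(64, 64, 1))     # assumed thermal image size
c = layers.Conv2D(16, 3, activation="relu")(img_in)
c = layers.MaxPooling2D()(c)
c = layers.Conv2D(32, 3, activation="relu")(c)
c = layers.GlobalAveragePooling2D()(c)      # image-branch feature extractor

merged = layers.Concatenate()([g, c])       # fusion (merged) layer
out = layers.Dense(64, activation="relu")(merged)
out = layers.Dense(4, activation="softmax")(out)  # four gas classes

model = keras.Model([gas_in, img_in], out)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```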


2014 ◽  
Vol 607 ◽  
pp. 791-794 ◽  
Author(s):  
Wei Kang Tey ◽  
Che Fai Yeong ◽  
Yip Loon Seow ◽  
Eileen Lee Ming Su ◽  
Swee Ho Tang

Omnidirectional mobile robots have gained popularity among researchers. However, they are rarely applied in industry, especially in factories, which are considerably more dynamic than typical research settings. A stable yet reliable feedback system is therefore essential for a more efficient, better-performing robot controller. To ensure reliability, many researchers adopt high-cost feedback solutions, such as a global camera; this raises the setup cost of the robot considerably, and the resulting system is hard to modify and lacks flexibility. In this paper, a novel sensor fusion technique is proposed and its results are discussed.
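The abstract does not describe the proposed technique itself, so no faithful reconstruction is possible. As a generic illustration of the kind of low-cost feedback fusion it argues for, here is a complementary-filter sketch that blends an integrated gyroscope rate with an encoder-derived heading; this is an assumption-laden stand-in, not the paper's method.

```python
class ComplementaryHeading:
    """Illustrative low-cost fusion of a gyroscope rate with an
    encoder-derived heading via a complementary filter. Hypothetical
    example only; the paper's actual technique is not specified
    in the abstract."""

    def __init__(self, alpha=0.98):
        self.alpha = alpha   # trust in the integrated gyro (high-pass)
        self.theta = 0.0     # fused heading estimate, radians

    def update(self, gyro_rate, encoder_heading, dt):
        # Integrate the gyro for short-term accuracy, then pull the
        # estimate toward the drift-free but noisy encoder heading.
        predicted = self.theta + gyro_rate * dt
        self.theta = self.alpha * predicted + (1 - self.alpha) * encoder_heading
        return self.theta

f = ComplementaryHeading()
print(f.update(gyro_rate=0.1, encoder_heading=0.005, dt=0.02))
```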


Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 4029 ◽  
Author(s):  
Jiaxuan Wu ◽  
Yunfei Feng ◽  
Peng Sun

Activity of daily living (ADL) is a significant predictor of the independence and functional capabilities of an individual. Measurements of ADLs help to indicate one's health status and capacity for quality living. Currently, the most common ways to capture ADL data are far from automated: costly 24/7 observation by a designated caregiver, laborious self-reporting by the user, or filling out a written ADL survey. Fortunately, in the Internet of Things (IoT) era, ubiquitous sensors exist in our surroundings and on electronic devices. We propose the ADL Recognition System, which utilizes sensor data from a single point of contact, such as a smartphone, and conducts time-series sensor fusion processing. Raw data are collected by the ADL Recorder App running constantly on the user's smartphone with multiple embedded sensors, including the microphone, Wi-Fi scan module, heading orientation of the device, light proximity, step detector, accelerometer, gyroscope, magnetometer, etc. Key technologies in this research cover audio processing, Wi-Fi indoor positioning, proximity-sensing localization, and time-series sensor data fusion. By merging the information of multiple sensors with a time-series error-correction technique, the ADL Recognition System is able to accurately profile a person's ADLs and discover their life patterns. This paper is particularly concerned with care for older adults who live independently.
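One concrete subproblem in such a system is aligning sensor streams that report at different rates onto a common timeline before fusion. A minimal pandas sketch of that alignment step follows; the column names, sampling rates, and one-second window are assumptions, not the ADL Recorder App's actual schema.

```python
import pandas as pd

def align_streams(streams, period="1s"):
    """streams: dict of sensor name -> DataFrame indexed by timestamp.
    Returns one frame with every sensor resampled to a shared timeline."""
    resampled = {
        name: df.resample(period).mean().add_prefix(name + "_")
        for name, df in streams.items()
    }
    fused = pd.concat(list(resampled.values()), axis=1)
    # Forward-fill short gaps so slow sensors (e.g. Wi-Fi scans) still
    # contribute a value to every fused window.
    return fused.ffill()

# Two hypothetical streams at different rates: 5 Hz accelerometer,
# 2.5 Hz ambient light sensor.
idx = pd.date_range("2018-01-01", periods=10, freq="200ms")
accel = pd.DataFrame({"x": range(10)}, index=idx)
light = pd.DataFrame({"lux": [100.0] * 5}, index=idx[::2])
print(align_streams({"accel": accel, "light": light}).head())
```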


1997 ◽  
Vol 30 (9) ◽  
pp. 347-351 ◽  
Author(s):  
Z. Boger ◽  
L. Ratton ◽  
T.A. Kunt ◽  
T.J. Mc Avoy ◽  
R.E. Cavicchi ◽  
...  

Author(s):  
Sherong Zhang ◽  
Ting Liu ◽  
Chao Wang

Building safety assessment based on single-sensor data suffers from low reliability and high uncertainty. This paper therefore proposes a novel multi-source sensor data fusion method based on improved Dempster–Shafer (D-S) evidence theory and a Back Propagation Neural Network (BPNN). Before data fusion, an improved self-support function is adopted to preprocess the original data. The fusion process is divided into three steps. First, features of the same kind of sensor data are extracted by the adaptive weighted average method as the input source of the BPNN. Then, the BPNN is trained and its output is used as the basic probability assignment (BPA) of D-S evidence theory. Finally, the Bhattacharyya Distance (BD) is introduced to improve D-S evidence theory in two respects, evidence distance and conflict factors, and multi-source data fusion is realized by D-S synthesis rules. In practical application, a three-level information fusion framework of the data level, the feature level, and the decision level is proposed, and the safety status of buildings is evaluated using multi-source sensor data. The results show that, compared with the fusion result of traditional D-S evidence theory, the algorithm improves the accuracy of the overall safety-state assessment of the building and reduces the MSE from 0.18% to 0.01%.
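The BD-weighted improvement cannot be reconstructed from the abstract alone, but the classical Dempster synthesis rule it builds on is standard. A minimal sketch over singleton hypotheses follows; the two example BPAs (framed as BPNN outputs for two sensor groups) are hypothetical.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Classical Dempster's rule of combination over singleton hypotheses.
    m1, m2: dicts mapping hypothesis -> basic probability assignment (BPA).
    The paper's BD-weighted improvement is not reproduced here; this
    shows only the baseline synthesis rule it modifies."""
    combined, conflict = {}, 0.0
    for (a, p), (b, q) in product(m1.items(), m2.items()):
        if a == b:
            combined[a] = combined.get(a, 0.0) + p * q
        else:
            conflict += p * q        # mass assigned to contradictory pairs
    k = 1.0 - conflict               # normalization factor
    return {h: v / k for h, v in combined.items()}

# Hypothetical BPAs, e.g. BPNN outputs for two sensor groups over
# three building-safety states.
m_strain = {"safe": 0.7, "warning": 0.2, "danger": 0.1}
m_tilt   = {"safe": 0.6, "warning": 0.3, "danger": 0.1}
print(dempster_combine(m_strain, m_tilt))
```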

