Motion sensor data correction using multiple sensors and multiple measurements

Author(s):  
Tibor Tajti ◽  
Nagy Benedek
2021 ◽  
Vol 4 (1) ◽  
pp. 3


Author(s):  
Parag Narkhede ◽  
Rahee Walambe ◽  
Shruti Mandaokar ◽  
Pulkit Chandel ◽  
Ketan Kotecha ◽  
...  

With rapid industrialization and technological advancement, innovative engineering technologies that are cost-effective, faster, and easier to implement are essential. One such area of concern is the rising number of accidents caused by gas leaks at coal mines, chemical industries, home appliances, etc. In this paper, we propose a novel approach to detect and identify gaseous emissions using multimodal AI fusion techniques. Most gases and their fumes are colorless, odorless, and tasteless, thereby challenging our normal human senses. Sensing based on a single sensor may not be accurate, and sensor fusion is essential for robust and reliable detection in several real-world applications. We manually collected 6400 gas samples (1600 samples per class for four classes) using two specific sensors: a seven-semiconductor gas sensor array and a thermal camera. The early fusion method of multimodal AI is applied. The network architecture consists of a feature extraction module for each modality; the extracted features are fused using a merge layer followed by a dense layer, which provides a single output identifying the gas. We obtained a testing accuracy of 96% for the fused model, as opposed to individual model accuracies of 82% (gas sensor data using an LSTM) and 93% (thermal image data using a CNN). The results demonstrate that the fusion of multiple sensors and modalities outperforms a single sensor.
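The early-fusion idea described in this abstract can be sketched as follows: features are extracted from each modality separately and concatenated into one vector before classification. This is a minimal illustrative sketch, not the paper's network; the feature functions, window shape, and statistics used here are hypothetical stand-ins for the learned LSTM/CNN feature extractors.

```python
import numpy as np

def extract_gas_features(window):
    # Hypothetical stand-in for the LSTM branch: per-channel mean and
    # range over a time window of shape (timesteps, 7 sensors).
    return np.concatenate([window.mean(axis=0), np.ptp(window, axis=0)])

def extract_thermal_features(image):
    # Hypothetical stand-in for the CNN branch: coarse image statistics.
    return np.array([image.mean(), image.std(), image.max()])

def early_fusion(window, image):
    # Early fusion: concatenate modality features into a single vector,
    # which would then feed the merge + dense classification layers.
    return np.concatenate([extract_gas_features(window),
                           extract_thermal_features(image)])

window = np.arange(70, dtype=float).reshape(10, 7)  # 10 timesteps, 7 sensors
image = np.ones((8, 8))                             # toy 8x8 thermal frame
fused = early_fusion(window, image)                 # 17-dimensional vector
```

In the actual model the concatenated vector would pass through trained dense layers; the sketch only shows where the modalities are merged.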


Author(s):  
Changxi Wang ◽  
E. A. Elsayed ◽  
Kang Li ◽  
Javier Cabrera

Multiple sensors are commonly used for degradation monitoring. Since different sensors may be sensitive at different stages of the degradation process and each sensor's data contain only partial information about the degraded unit, data fusion approaches that integrate degradation data from multiple sensors can effectively improve degradation modeling and life prediction accuracy. We present a non-parametric approach that assigns weights to each sensor based on dynamic clustering of the sensors' observations. A case study involving a fatigue-crack-growth dataset is implemented in order to evaluate the prognostic performance of the unit. Results show that the fused path obtained with the proposed approach outperforms any individual sensor path, as well as paths obtained with an adaptive threshold clustering algorithm, in terms of life prediction accuracy.
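The weighted-fusion idea can be illustrated with a small sketch. The paper's dynamic clustering of sensor observations is replaced here by a simpler assumption: each sensor is weighted by its closeness to the cross-sensor median path, so an outlying sensor contributes little to the fused degradation path. The weighting rule is hypothetical, not the authors' algorithm.

```python
import numpy as np

def consensus_weights(paths):
    # paths: (n_sensors, timesteps) degradation readings.
    # Weight each sensor by inverse mean distance to the median path,
    # a simplified stand-in for the paper's dynamic-clustering step.
    median = np.median(paths, axis=0)
    dist = np.abs(paths - median).mean(axis=1)
    inv = 1.0 / (dist + 1e-9)
    return inv / inv.sum()

def fuse_paths(paths):
    # Fused path = weight-averaged sensor paths.
    return consensus_weights(paths) @ paths

paths = np.array([
    [1.0, 2.0, 3.0],   # tracks the common trend
    [1.1, 2.1, 3.1],   # tracks the common trend
    [5.0, 5.0, 5.0],   # outlying sensor
])
fused = fuse_paths(paths)
```

The fused path stays close to the agreeing sensors while the outlier's weight is driven toward zero, which is the intended behavior of consensus-based fusion.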


2018 ◽  
Vol 41 (8) ◽  
pp. 2338-2351 ◽  
Author(s):  
Anna Swider ◽  
Eilif Pedersen

In the era of industrial digitalization, data are collected from many sensors, and signal processing techniques play a crucial role. Data preprocessing is a fundamental step in the analysis of measurements, and a first step before applying machine learning. To reduce the influence of distortions in signals, selective digital filtering is applied to minimize or remove unwanted components. Standard software and hardware digital filtering algorithms introduce a delay, which has to be compensated for to avoid destroying signal associations. The delay from filtering becomes more critical when the analysis involves measurements from multiple sensors; therefore, in this paper we provide an overview and comparison of existing digital filtering methods with an application based on real-life marine examples. In addition, the design of special-purpose filters is a complex process, and for preprocessing data from many sources, digital filtering in the time domain can have a high numerical cost. For this reason we describe discrete Fourier transform digital filtering as a tool for efficient sensor data preprocessing, which does not introduce a time delay and has low numerical cost. Discrete Fourier transform digital filtering has a simpler implementation and does not require expert-level filter design knowledge, which is beneficial for practitioners from various disciplines. Finally, we exemplify and show the application of the methods on real signals from marine systems.
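The zero-delay property of frequency-domain filtering described above can be shown in a few lines: zeroing FFT bins above a cutoff and inverting the transform removes a high-frequency component without shifting the phase of the retained signal. This is a minimal sketch of the general technique, not the authors' specific implementation; the sampling rate and test frequencies are illustrative.

```python
import numpy as np

def fft_lowpass(x, cutoff_hz, fs):
    # Zero-phase low-pass: zero out spectrum bins above the cutoff and
    # invert. Unlike a causal FIR/IIR filter, this introduces no group
    # delay, so cross-sensor signal associations are preserved.
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    X[freqs > cutoff_hz] = 0.0
    return np.fft.irfft(X, n=len(x))

fs = 100.0                                   # 100 Hz sampling rate
t = np.arange(0, 1, 1 / fs)
clean = np.sin(2 * np.pi * 2 * t)            # 2 Hz component to keep
noisy = clean + 0.5 * np.sin(2 * np.pi * 30 * t)  # 30 Hz disturbance
filtered = fft_lowpass(noisy, cutoff_hz=5.0, fs=fs)
```

Because both tones complete whole cycles in the window, the 30 Hz component falls in a single bin and is removed exactly; for real signals, windowing and spectral leakage would make the separation approximate.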


Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 4029 ◽  
Author(s):  
Jiaxuan Wu ◽  
Yunfei Feng ◽  
Peng Sun

Activity of daily living (ADL) is a significant predictor of the independence and functional capabilities of an individual. Measurements of ADLs help to indicate one's health status and quality of living. The most common ways to capture ADL data remain far from automated: costly 24/7 observation by a designated caregiver, laborious self-reporting by the user, or filling out a written ADL survey. Fortunately, in the Internet of Things (IoT) era, ubiquitous sensors exist in our surroundings and on electronic devices. We propose the ADL Recognition System, which utilizes sensor data from a single point of contact, such as a smartphone, and conducts time-series sensor fusion processing. Raw data are collected by the ADL Recorder App running constantly on a user's smartphone with multiple embedded sensors, including the microphone, Wi-Fi scan module, heading orientation of the device, light proximity, step detector, accelerometer, gyroscope, magnetometer, etc. Key technologies in this research cover audio processing, Wi-Fi indoor positioning, proximity sensing localization, and time-series sensor data fusion. By merging the information of multiple sensors, with a time-series error correction technique, the ADL Recognition System is able to accurately profile a person's ADLs and discover their life patterns. This paper is particularly concerned with care for older adults who live independently.
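A first step in the kind of time-series sensor fusion described here is aligning readings from heterogeneous sensors that report at different rates. The sketch below pairs each query time with the nearest reading from each stream; the sensor names, rates, and nearest-neighbor alignment rule are illustrative assumptions, not details from the paper.

```python
from bisect import bisect_left

def nearest_reading(timestamps, values, t):
    # Return the value whose timestamp is closest to t
    # (timestamps must be sorted ascending).
    i = bisect_left(timestamps, t)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(timestamps)]
    j = min(candidates, key=lambda k: abs(timestamps[k] - t))
    return values[j]

def fuse_at(t, streams):
    # streams: sensor name -> (timestamps, values), each sorted by time.
    # Produces one time-aligned record across all sensors, the first
    # step before fusing the modalities for activity recognition.
    return {name: nearest_reading(ts, vs, t) for name, (ts, vs) in streams.items()}

streams = {
    "accelerometer": ([0.00, 0.02, 0.04], [0.1, 0.3, 0.2]),  # 50 Hz
    "light":         ([0.00, 0.10],       [120, 80]),        # 10 Hz
}
record = fuse_at(0.03, streams)
```

Real systems would also handle clock skew and missing samples, which is presumably where the paper's time-series error correction comes in.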


Sensors ◽  
2019 ◽  
Vol 19 (3) ◽  
pp. 546 ◽  
Author(s):  
Haibin Yu ◽  
Guoxiong Pan ◽  
Mian Pan ◽  
Chong Li ◽  
Wenyan Jia ◽  
...  

Recently, egocentric activity recognition has attracted considerable attention in the pattern recognition and artificial intelligence communities because of its wide applicability in medical care, smart homes, and security monitoring. In this study, we developed and implemented a deep-learning-based hierarchical fusion framework for the recognition of egocentric activities of daily living (ADLs) in a wearable hybrid sensor system comprising motion sensors and cameras. Long short-term memory (LSTM) and a convolutional neural network are used to perform egocentric ADL recognition based on motion sensor data and the photo stream in different layers, respectively. The motion sensor data are used solely for activity classification according to motion state, while the photo stream is used for further specific activity recognition within the motion state groups. Thus, both the motion sensor data and the photo stream work in their most suitable classification mode, significantly reducing the negative influence of sensor differences on the fusion results. Experimental results show that the proposed method not only is more accurate than the existing direct fusion method (by up to 6%) but also avoids the time-consuming computation of optical flow required by the existing method, which makes the proposed algorithm less complex and more suitable for practical application.
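The hierarchical structure described above (motion sensors pick the coarse motion state; the photo stream disambiguates activities within that state) can be sketched with placeholder classifiers. The thresholds, feature names, and activity labels below are hypothetical; in the paper these roles are filled by a trained LSTM and CNN.

```python
def classify_motion_state(accel_energy):
    # Stage 1: coarse motion state from motion-sensor features alone
    # (stand-in for the LSTM; threshold is illustrative).
    return "ambulatory" if accel_energy > 1.0 else "stationary"

# Stage 2: one image-based classifier per motion-state group
# (stand-ins for the CNN; rules and labels are hypothetical).
IMAGE_CLASSIFIERS = {
    "stationary": lambda img: "reading" if img["brightness"] > 0.5 else "watching_tv",
    "ambulatory": lambda img: "walking_outdoors" if img["brightness"] > 0.5 else "walking_indoors",
}

def hierarchical_recognize(accel_energy, img):
    # The photo stream only disambiguates activities *within* the group
    # chosen by the motion sensors, so each modality is used where it
    # classifies best, as in the paper's hierarchical fusion.
    state = classify_motion_state(accel_energy)
    return IMAGE_CLASSIFIERS[state](img)
```

Restricting the image classifier to one group at a time is what lets each modality operate in its most suitable mode instead of competing in a single flat classifier.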


2020 ◽  
Vol 4 (3) ◽  
pp. 626
Author(s):  
Sarmayanta Sembiring ◽  
Hadir Kaban ◽  
Rido Zulfahmi

An electrical-energy-efficiency system has been designed using a PIR motion sensor, an SCT-013-030 current sensor, an infrared LED, and a relay, with an Arduino Uno as the controller. The system is designed to turn off electronic equipment such as air conditioners, projectors, and lights automatically, as a solution for users who forget to turn off equipment when it is no longer in use. The experimental results show that the system runs well: it can detect the absence of movement for a predetermined time using the PIR motion sensor. Detection with the SCT-013-030 sensor is able to distinguish whether equipment is ON or OFF based on differences in the sensor output data read by the Arduino analog port. The average sensor reading when detecting the lamp OFF is 1.333, while for the mini projector and TV when OFF the average is 1.667. The average current sensor readings when detecting devices ON are 5.333 for the lights, 8.333 for the mini projector, and 11.333 for the TV. Overall, the system designed is able to turn off equipment that is still active when the sensor detects no human movement for the predetermined time.
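The decision logic the abstract describes reduces to two checks: a threshold on the averaged current-sensor readings to tell ON from OFF, and a no-motion timeout from the PIR sensor. The sketch below (in Python for illustration, though the device runs Arduino code) uses the reported averages to place a hypothetical threshold between the OFF readings (~1.3-1.7) and the ON readings (~5.3-11.3).

```python
OFF_BASELINE = 1.7  # hypothetical threshold above the highest OFF average

def device_is_on(adc_samples):
    # The reported SCT-013-030 averages are ~1.3-1.7 with the device off
    # and ~5.3-11.3 with it on, so a single threshold on the mean of the
    # ADC readings separates the two states.
    return sum(adc_samples) / len(adc_samples) > OFF_BASELINE

def should_switch_off(seconds_since_motion, timeout_s, adc_samples):
    # Drive the relay/IR LED only when the PIR has seen no motion for the
    # full timeout AND the current sensor confirms the device is still on.
    return seconds_since_motion >= timeout_s and device_is_on(adc_samples)
```

Keeping the current-sensor check avoids sending redundant off commands to equipment that is already off.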

