Sensor data fusion using machine learning techniques in indoor occupancy detection

Author(s):  
Pushpanjali Kumari ◽  
S. R. N. Reddy ◽  
Richa Yadav

AI Magazine ◽  
2012 ◽  
Vol 33 (2) ◽  
pp. 55 ◽  
Author(s):  
Nisarg Vyas ◽  
Jonathan Farringdon ◽  
David Andre ◽  
John Ivo Stivoric

In this article we provide insight into the BodyMedia FIT armband system, a wearable multi-sensor technology that continuously monitors physiological events related to energy expenditure for weight management, using machine learning and data modeling methods. Since becoming commercially available in 2001, more than half a million people have used the system to track their physiological parameters and to achieve their individual health goals, including weight loss. We describe several challenges that arise in applying machine learning techniques to the health care domain and present various solutions utilized in the armband system. We demonstrate how machine learning and multi-sensor data fusion techniques are critical to the system's success.


2020 ◽  
Author(s):  
Yosoon Choi ◽  
Jieun Baek ◽  
Jangwon Suh ◽  
Sung-Min Kim

<p>In this study, we proposed a method of utilizing a multi-sensor Unmanned Aerial System (UAS) for exploration of hydrothermal alteration zones. We selected a study area (10 m × 20 m) composed mainly of andesite and located on the coast, with wide outcrops and well-developed structural and mineralization elements. Multi-sensor (visible, multispectral, thermal, magnetic) data were acquired over the study area using the UAS and analyzed using machine learning techniques. For supervised learning, we applied stratified random sampling to select 1000 training samples in the hydrothermal zone and 1000 in the non-hydrothermal zone identified through the field survey. The resulting 2000 labelled samples were first split into 1500 for training and 500 for testing; the 1500 training samples were then split into 1200 for training and 300 for validation. Five such training/validation sets were generated to enable cross-validation. Five types of machine learning techniques were applied to the training sets: k-Nearest Neighbors (k-NN), Decision Tree (DT), Random Forest (RF), Support Vector Machine (SVM), and Deep Neural Network (DNN). In the integrated analysis of the multi-sensor data, the RF and SVM techniques showed high classification accuracy of about 90%. Moreover, integrated analysis of the multi-sensor data yielded higher classification accuracy with all five techniques than analysis of the magnetic data or any single optical sensor alone.</p>
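The sampling-and-splitting scheme described in this abstract (1000 labelled samples per zone, a stratified 1500/500 train/test split, and five 1200/300 training/validation folds) can be sketched in plain Python. The sample values below are hypothetical stand-ins; the real features would be the fused UAS sensor readings at each surveyed point.

```python
import random

def stratified_cv_sets(hydro, non_hydro, seed=0):
    """Sketch of the abstract's scheme: 1000 labelled samples per class,
    split (stratified) into 1500 training / 500 testing, with the 1500
    re-split five times into 1200 training / 300 validation folds."""
    rng = random.Random(seed)
    pool, held_out = [], []
    for label, samples in ((1, hydro), (0, non_hydro)):
        s = list(samples)
        rng.shuffle(s)
        pool += [(x, label) for x in s[:750]]      # 750 per class -> 1500
        held_out += [(x, label) for x in s[750:]]  # 250 per class -> 500
    folds = []
    for _ in range(5):                             # five cross-validation sets
        rng.shuffle(pool)
        folds.append((pool[:1200], pool[1200:]))
    return folds, held_out

# Hypothetical stand-ins for the field-surveyed samples of each zone.
cv_folds, held_out = stratified_cv_sets(range(1000), range(1000, 2000))
print(len(held_out), len(cv_folds), len(cv_folds[0][0]), len(cv_folds[0][1]))
# 500 5 1200 300
```

Shuffling within each class before taking the 750/250 split is what keeps the test set stratified: it contains exactly 250 samples from each zone.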


2020 ◽  
Author(s):  
Priscilla Addison ◽  
Stephen Alwon ◽  
Alex Janevski ◽  
Kristopher Purens ◽  
Clyde Wheeler

Author(s):  
U. Isikdag ◽  
K. Sahin ◽  
S. Cansiz

<p><strong>Abstract.</strong> Knowledge about the occupancy of an indoor space can serve various domains, ranging from emergency response to energy efficiency in buildings. The literature in the field presents various methods for occupancy detection. Data gathered for occupancy detection can also be used to predict the number of occupants at a certain indoor space and time. The aim of this research was to determine the number of occupants in an indoor space through the utilisation of information acquired from a set of sensors and machine learning techniques. The sensors used in this research were a sound-level sensor, a temperature/humidity sensor, and an air-quality sensor. Based on the data acquired from these sensors, six automatic classification techniques were employed and tested with the aim of automatically detecting the number of occupants in an indoor space from multi-sensor information. The results of the tests demonstrated that machine learning techniques can serve as a tool for predicting the number of occupants in an indoor space.</p>
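The abstract does not name the six classifiers it evaluated; as one illustration of the general approach, a minimal k-nearest-neighbours sketch over hypothetical fused sensor readings (sound level, temperature, humidity, air quality) might look like this. The synthetic data below simply assume each reading drifts upward with occupant count, which is an assumption for the demo, not a claim from the paper.

```python
import math
import random

def knn_predict(train, x, k=3):
    """Majority vote over the k nearest training points (Euclidean distance).
    train is a list of (feature_vector, occupant_count) pairs."""
    neighbours = sorted(train, key=lambda t: math.dist(t[0], x))[:k]
    votes = {}
    for _, label in neighbours:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# Hypothetical readings: (sound dB, temperature °C, humidity %, CO2 ppm),
# each rising with the number of occupants.
rng = random.Random(42)
train = []
for count in range(4):                         # 0..3 occupants
    for _ in range(20):
        train.append(([40 + 5 * count + rng.uniform(-1, 1),
                       21 + 0.3 * count + rng.uniform(-0.1, 0.1),
                       40 + 2 * count + rng.uniform(-0.5, 0.5),
                       400 + 120 * count + rng.uniform(-10, 10)],
                      count))

print(knn_predict(train, [40, 21, 40, 400]))   # an "empty room" reading -> 0
```

Fusing the sensors here is simply concatenating their readings into one feature vector; any of the six classification techniques could be swapped in for the nearest-neighbour vote.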


Sensors ◽  
2019 ◽  
Vol 19 (2) ◽  
pp. 299 ◽  
Author(s):  
Georgios Tsaramirsis ◽  
Seyed Buhari ◽  
Mohammed Basheri ◽  
Milos Stojmenovic

Realization of navigation in virtual environments remains a challenge as it involves complex operating conditions. Decomposition of such complexity is attainable by fusing sensors and machine learning techniques. Identifying the right combination of sensory information and the appropriate machine learning technique is a vital ingredient for translating physical actions into virtual movements. The contributions of our work include: (i) synchronization of actions and movements using suitable multiple sensor units, and (ii) selection of the significant features and an appropriate algorithm to process them. This work proposes an innovative approach that allows users to move in virtual environments by simply moving their legs towards the desired direction. The necessary hardware includes only a smartphone that is strapped to the subject's lower leg. Data from the gyroscope, accelerometer, and compass sensors of the mobile device are transmitted to a PC, where the movement is accurately identified using a combination of machine learning techniques. Once the desired movement is identified, the corresponding movement of the virtual avatar in the virtual environment is realized. After pre-processing the sensor data using the box-plot outliers approach, Artificial Neural Networks provided the highest movement-identification accuracy: 84.2% on the training dataset and 84.1% on the testing dataset.
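The box-plot outlier pre-processing mentioned in this abstract is the standard rule that discards values outside [Q1 − 1.5·IQR, Q3 + 1.5·IQR]. A minimal sketch, with illustrative readings rather than the paper's actual sensor data:

```python
import statistics

def iqr_filter(values, k=1.5):
    """Box-plot outlier rule: keep only values inside
    [Q1 - k*IQR, Q3 + k*IQR], where IQR = Q3 - Q1."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    lo, hi = q1 - k * (q3 - q1), q3 + k * (q3 - q1)
    return [v for v in values if lo <= v <= hi]

# Illustrative gyroscope-style readings with one spike.
readings = [0.1, 0.2, 0.15, 0.18, 0.22, 0.19, 5.0]
print(iqr_filter(readings))   # the 5.0 spike is dropped
```

In the pipeline the abstract describes, a filter like this would be applied per sensor channel before the features are fed to the Artificial Neural Network.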

