A Context Recognition System for Various Food Intake using Mobile and Wearable Sensor Data

2016 ◽  
Vol 43 (5) ◽  
pp. 531-540
Author(s):  
Kee-Hoon Kim ◽  
Sung-Bae Cho
Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 516
Author(s):  
Brinnae Bent ◽  
Baiying Lu ◽  
Juseong Kim ◽  
Jessilyn P. Dunn

A critical challenge to using longitudinal wearable sensor biosignal data for healthcare applications and digital biomarker development is the exacerbation of the healthcare “data deluge,” leading to new data storage and organization challenges and costs. Data aggregation, sampling rate minimization, and effective data compression are all methods for consolidating wearable sensor data to reduce data volumes. There has been limited research on appropriate, effective, and efficient data compression methods for biosignal data. Here, we examine the application of different data compression pipelines built using combinations of algorithmic- and encoding-based methods to biosignal data from wearable sensors and explore how these implementations affect data recoverability and storage footprint. Algorithmic methods tested include singular value decomposition, the discrete cosine transform, and the biorthogonal discrete wavelet transform. Encoding methods tested include run-length encoding and Huffman encoding. We apply these methods to common wearable sensor data, including electrocardiogram (ECG), photoplethysmography (PPG), accelerometry, electrodermal activity (EDA), and skin temperature measurements. Of the methods examined in this study, and in line with the characteristics of the different data types, we recommend direct data compression with Huffman encoding for ECG and PPG, singular value decomposition with Huffman encoding for EDA and accelerometry, and the biorthogonal discrete wavelet transform with Huffman encoding for skin temperature to maximize data recoverability after compression. We also report the best methods for maximizing the compression ratio. Finally, we develop and document open-source code and data for each compression method tested here, which can be accessed through the Digital Biomarker Discovery Pipeline as the “Biosignal Data Compression Toolbox,” an open-source, accessible software platform for compressing biosignal data.
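Of the encoding-based methods named in this abstract, run-length encoding is the simplest to illustrate. The sketch below is a minimal, generic RLE round trip (function names are illustrative, not part of the authors' Biosignal Data Compression Toolbox); it pays off mainly on slowly varying, coarsely quantized signals such as skin temperature.

```python
def rle_encode(samples):
    """Run-length encode a sequence of quantized sensor samples.

    Returns a list of (value, count) pairs. RLE only shrinks the data
    when the signal contains long runs of identical values, which is
    why the choice of compression method depends on the signal type.
    """
    encoded = []
    for s in samples:
        if encoded and encoded[-1][0] == s:
            encoded[-1] = (s, encoded[-1][1] + 1)
        else:
            encoded.append((s, 1))
    return encoded


def rle_decode(encoded):
    """Invert rle_encode, recovering the original sample sequence."""
    return [value for value, count in encoded for _ in range(count)]


# Toy example: a slowly varying, coarsely quantized temperature trace.
trace = [36, 36, 36, 36, 37, 37, 36, 36, 36]
packed = rle_encode(trace)          # [(36, 4), (37, 2), (36, 3)]
assert rle_decode(packed) == trace  # lossless round trip
```

A real pipeline, as described above, would first apply an algorithmic transform (e.g., a wavelet transform plus coefficient thresholding) and then an entropy coder such as Huffman encoding on the quantized result.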


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 405
Author(s):  
Marcos Lupión ◽  
Javier Medina-Quero ◽  
Juan F. Sanjuan ◽  
Pilar M. Ortigosa

Activity Recognition (AR) is an active research topic focused on detecting human actions and behaviours in smart environments. In this work, we present the on-line activity recognition platform DOLARS (Distributed On-line Activity Recognition System), where data from heterogeneous sensors, including binary, wearable and location sensors, are evaluated in real time. Different descriptors and metrics from the heterogeneous sensor data are integrated into a common feature vector, extracted by a sliding-window approach under real-time conditions. DOLARS provides a distributed architecture where: (i) the stages for processing data in AR are deployed in distributed nodes; (ii) temporal cache modules compute metrics which aggregate sensor data for computing feature vectors efficiently; (iii) publish-subscribe models are integrated both to spread data from sensors and to orchestrate the nodes (communication and replication) for computing AR; and (iv) machine learning algorithms are used to classify and recognize the activities. A successful case study of daily activity recognition developed in the Smart Lab of the University of Almería (UAL) is presented in this paper. The results show encouraging performance in recognizing sequences of activities and demonstrate the need for distributed architectures to achieve real-time recognition.
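The sliding-window feature extraction described for DOLARS can be sketched roughly as follows. The window length, step size, and chosen statistics here are illustrative assumptions, not the platform's actual configuration; in a multi-sensor setting, the per-stream vectors would be concatenated into the common feature vector mentioned above.

```python
from statistics import mean, stdev


def sliding_window_features(stream, window=5, step=2):
    """Slide a fixed-length window over a 1-D sensor stream and emit a
    small statistical feature vector (mean, std, min, max) for each
    window position. `window` and `step` are in samples."""
    features = []
    for start in range(0, len(stream) - window + 1, step):
        w = stream[start:start + window]
        features.append((mean(w), stdev(w), min(w), max(w)))
    return features


# Toy accelerometer-axis stream: 8 samples -> 2 overlapping windows.
accel_x = [0.1, 0.2, 0.1, 0.9, 1.1, 1.0, 0.2, 0.1]
vectors = sliding_window_features(accel_x)
```

Under real-time constraints, the temporal cache modules described in the abstract would maintain these running statistics incrementally rather than recomputing each window from scratch.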


2021 ◽  
Vol 5 (2) ◽  
pp. 1-4
Author(s):  
Lucie Klus ◽  
Roman Klus ◽  
Elena Simona Lohan ◽  
Carlos Granell ◽  
Jukka Talvitie ◽  
...  

2017 ◽  
Vol 98 (10) ◽  
pp. e65
Author(s):  
Claire Meagher ◽  
Stefano Sapienza ◽  
Catherine Adans-Dester ◽  
Anne O’Brien ◽  
Shyamal Patel ◽  
...  

Informatics ◽  
2018 ◽  
Vol 5 (3) ◽  
pp. 38 ◽  
Author(s):  
Martin Jänicke ◽  
Bernhard Sick ◽  
Sven Tomforde

Personal wearables such as smartphones or smartwatches are increasingly utilized in everyday life. Frequently, activity recognition is performed on these devices to estimate the current user status and trigger automated actions according to the user’s needs. In this article, we focus on the creation of a self-adaptive activity recognition system based on inertial measurement units (IMUs) that incorporates new sensors during runtime. Starting with a classifier based on Gaussian mixture models (GMMs), the density model is adapted to new sensor data fully autonomously by exploiting the marginalization property of normal distributions. To create a classifier from that, label inference is performed, based either on the initial classifier or on the training data. For evaluation, we used more than 10 h of annotated activity data from the publicly available PAMAP2 benchmark dataset. Using these data, we showed the feasibility of our approach and performed 9720 experiments to obtain robust numbers. One approach performed reasonably well, improving the system on average with an increase in F-score of 0.0053, while the other shows clear drawbacks due to a high loss of information during label inference. Furthermore, a comparison with state-of-the-art techniques shows the need for further experiments in this area.
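The marginalization property invoked above is simple to state: for a multivariate normal, the marginal over a subset of dimensions is obtained by keeping only the corresponding entries of the mean and covariance. A minimal sketch of that operation (not the authors' implementation) is:

```python
def marginalize_gaussian(mean, cov, keep):
    """Marginalize a multivariate normal N(mean, cov) onto the
    dimensions listed in `keep`.

    For Gaussians this is exact: select the kept entries of the mean
    vector and the kept rows/columns of the covariance matrix. This is
    the property that lets a GMM-based recognizer handle a changing
    set of sensor channels without refitting every component from
    scratch."""
    m = [mean[i] for i in keep]
    c = [[cov[i][j] for j in keep] for i in keep]
    return m, c


# 3-D sensor model (e.g. x/y/z accelerometer); keep only x and z.
mean = [0.0, 9.8, 0.1]
cov = [[1.0, 0.1, 0.0],
       [0.1, 2.0, 0.2],
       [0.0, 0.2, 0.5]]
m, c = marginalize_gaussian(mean, cov, keep=[0, 2])
# m == [0.0, 0.1]; c == [[1.0, 0.0], [0.0, 0.5]]
```

Each mixture component of a GMM can be marginalized this way independently, with the mixture weights left unchanged.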


Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 4029 ◽  
Author(s):  
Jiaxuan Wu ◽  
Yunfei Feng ◽  
Peng Sun

Activity of daily living (ADL) is a significant predictor of the independence and functional capabilities of an individual. Measurements of ADLs help to indicate one’s health status and capacity for quality living. Currently, the most common ways to capture ADL data are far from automated: costly 24/7 observation by a designated caregiver, laborious self-reporting by the user, or filling out a written ADL survey. Fortunately, in the Internet of Things (IoT) era, ubiquitous sensors exist in our surroundings and on electronic devices. We propose the ADL Recognition System, which utilizes sensor data from a single point of contact, such as a smartphone, and conducts time-series sensor fusion. Raw data are collected by the ADL Recorder App running constantly on a user’s smartphone with multiple embedded sensors, including the microphone, Wi-Fi scan module, heading orientation of the device, light proximity, step detector, accelerometer, gyroscope, and magnetometer. Key technologies in this research cover audio processing, Wi-Fi indoor positioning, proximity-sensing localization, and time-series sensor data fusion. By merging the information of multiple sensors with a time-series error-correction technique, the ADL Recognition System is able to accurately profile a person’s ADLs and discover their life patterns. This paper is particularly concerned with care for older adults who live independently.
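The core of time-series sensor fusion as described here is aligning streams that report at different, irregular rates. A minimal nearest-timestamp alignment might look like the following; the stream format and tolerance are assumptions for illustration, not the ADL Recorder App's actual data model.

```python
import bisect


def align_nearest(reference, other, tolerance=0.5):
    """For each (timestamp, value) sample in `reference`, attach the
    sample from `other` whose timestamp is nearest, provided it lies
    within `tolerance` seconds; otherwise attach None.

    Both streams must be sorted by timestamp. Returns a list of
    (timestamp, ref_value, other_value_or_None) tuples."""
    times = [t for t, _ in other]
    fused = []
    for t, v in reference:
        i = bisect.bisect_left(times, t)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(times)]
        best = min(candidates, key=lambda j: abs(times[j] - t), default=None)
        if best is not None and abs(times[best] - t) <= tolerance:
            fused.append((t, v, other[best][1]))
        else:
            fused.append((t, v, None))
    return fused


# Accelerometer at 1 Hz fused with sparse Wi-Fi scan results.
accel = [(0.0, 0.1), (1.0, 0.9), (2.0, 0.2)]
wifi = [(0.9, "AP_kitchen"), (2.6, "AP_bedroom")]
fused = align_nearest(accel, wifi)
```

A fuller fusion pipeline would then apply the error-correction step mentioned in the abstract, e.g., discarding fused tuples whose sensor readings contradict each other in time.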


2018 ◽  
Vol 2018 ◽  
pp. 1-10 ◽  
Author(s):  
Saad Albawi ◽  
Oguz Bayat ◽  
Saad Al-Azawi ◽  
Osman N. Ucan

Recently, social touch gesture recognition has been considered an important topic for the touch modality, which can lead to highly efficient and realistic human-robot interaction. In this paper, a deep convolutional neural network is selected to implement a social touch recognition system operating on raw input samples (sensor data) only. Touch gesture recognition is performed using a previously collected dataset in which numerous subjects performed varying social gestures on a mannequin arm; this dataset is dubbed the Corpus of Social Touch. A leave-one-subject-out cross-validation method is used to evaluate system performance. The proposed method can recognize gestures in nearly real time after acquiring a minimum number of frames (on average, from 0.2% to 4.19% of the original frame lengths) with a classification accuracy of 63.7%. The achieved classification accuracy is competitive with the performance of existing algorithms. Furthermore, the proposed system outperforms other classification algorithms in terms of classification rate and touch recognition time without data preprocessing on the same dataset.
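Leave-one-subject-out cross-validation, as used here, holds out every sample from one subject as the test fold and trains on the rest, so the classifier is always evaluated on a person it has never seen. A minimal split generator (independent of the paper's code; the tuple layout is an assumption for illustration):

```python
def leave_one_subject_out(samples):
    """Yield (held_out_subject, train, test) splits, where `samples`
    is a list of (subject_id, features, label) tuples.

    Each split holds out all samples of exactly one subject, which
    prevents identity leakage between the training and test sets."""
    subjects = sorted({s for s, _, _ in samples})
    for held_out in subjects:
        train = [x for x in samples if x[0] != held_out]
        test = [x for x in samples if x[0] == held_out]
        yield held_out, train, test


# Toy touch-gesture dataset: 3 subjects -> 3 splits.
data = [("s1", [0.1], "poke"), ("s1", [0.2], "pat"),
        ("s2", [0.9], "poke"), ("s3", [0.5], "hug")]
splits = list(leave_one_subject_out(data))
```

Per-split accuracies are then averaged to obtain the overall figure, such as the 63.7% reported above.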

