Human Activity Recognition: A Dynamic Inductive Bias Selection Perspective

Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7278
Author(s):  
Massinissa Hamidi ◽  
Aomar Osmani

In this article, we study activity recognition in the context of sensor-rich environments. In these environments, many different constraints arise at various levels during the data generation process, such as the intrinsic characteristics of the sensing devices, their energy and computational constraints, and their collective (collaborative) dimension. These constraints have a fundamental impact on the final activity recognition models, as the quality, availability, and reliability of the data, among other things, are not ensured during model deployment in real-world configurations. Current approaches to activity recognition rely on the activity recognition chain, which defines the steps that the sensed data undergo; this is an inductive process that explores a hypothesis space to find a theory able to explain the observations. For activity recognition to be effective and robust, this inductive process must consider the constraints at all levels and model them explicitly. Whether a bias stems from sensor measurement, the transmission protocol, the sensor deployment topology, heterogeneity, dynamicity, or stochastic effects, it is essential to understand its substantial impact on the quality of the data and, ultimately, on the activity recognition models. This study highlights the need to make explicit the different types of biases arising in real situations so that machine learning models can, for example, adapt to the dynamicity of these environments, resist sensor failures, and follow the evolution of the sensors’ topology. We propose a metamodeling approach in which these biases are specified as hyperparameters that control the structure of the activity recognition models. Via these hyperparameters, it becomes easier to optimize the inductive processes, reason about them, and incorporate additional knowledge. The approach also provides a principled strategy for adapting the models to evolutions of the environment. We illustrate our approach on the SHL dataset, which features motion sensor data for a set of human activities collected in real conditions. The obtained results make a case for the proposed metamodeling approach, notably the robustness gains achieved when the deployed models are confronted with changes to the initial sensing configuration. The exhibited trade-offs and the broader implications of the proposed approach are discussed alongside alternative techniques for encoding and incorporating knowledge into activity recognition models.
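As a rough illustration of the metamodeling idea, the sketch below (hypothetical names, not the authors' implementation) treats sensing-level constraints as hyperparameters that control the structure of the recognizer, so that a change in the deployed sensing configuration is handled by changing hyperparameters rather than redesigning the model.

```python
# Minimal, illustrative sketch (not the authors' implementation): sensing-level
# constraints are exposed as hyperparameters that control model structure, so a
# meta-level search can adapt the recognizer when the sensing configuration evolves.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SensingBiasHyperparams:
    # Hypothetical bias-related hyperparameters, named for illustration only.
    available_sensors: List[str] = field(default_factory=lambda: ["acc", "gyro", "mag"])
    sampling_rate_hz: int = 100          # measurement/transmission constraint
    max_dropout_ratio: float = 0.2       # expected sensor-failure rate
    fusion: str = "late"                 # deployment-topology choice: "early" or "late"

def build_recognizer(hp: SensingBiasHyperparams) -> dict:
    """Return a structural description of the model implied by the hyperparameters."""
    window = 2 * hp.sampling_rate_hz     # ~2 s analysis windows
    branches = (
        [{"sensors": hp.available_sensors, "window": window}]          # one fused branch
        if hp.fusion == "early"
        else [{"sensors": [s], "window": window} for s in hp.available_sensors]
    )
    return {"branches": branches,
            "train_with_channel_dropout": hp.max_dropout_ratio > 0.0}

if __name__ == "__main__":
    # If a sensor disappears from the deployment, only the hyperparameters change.
    print(build_recognizer(SensingBiasHyperparams(available_sensors=["acc", "gyro"])))
```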

Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2144
Author(s):  
Stefan Reitmann ◽  
Lorenzo Neumann ◽  
Bernhard Jung

Common machine-learning (ML) approaches to scene classification require a large amount of training data. However, for the classification of depth sensor data, in contrast to image data, relatively few databases are publicly available, and the manual generation of semantically labeled 3D point clouds is an even more time-consuming task. To simplify the training data generation process for a wide range of domains, we have developed the BLAINDER add-on package for the open-source 3D modeling software Blender, which enables largely automated generation of semantically annotated point-cloud data in virtual 3D environments. In this paper, we focus on the classical depth-sensing techniques Light Detection and Ranging (LiDAR) and Sound Navigation and Ranging (Sonar). Within the BLAINDER add-on, different depth sensors can be loaded from presets, customized sensors can be implemented, and different environmental conditions (e.g., the influence of rain or dust) can be simulated. The semantically labeled data can be exported to various 2D and 3D formats and are thus suited to different ML applications and visualizations. In addition, semantically labeled images can be exported using the rendering functionalities of Blender.
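For intuition, the following library-agnostic Python sketch mimics the kind of output such a pipeline automates: rays cast from a virtual sensor yield hit distances that are tagged with the semantic label of the hit object. The scene contents, noise model, and all names are illustrative assumptions and not part of the BLAINDER API.

```python
# Library-agnostic sketch of simulated, semantically labeled depth sensing: cast rays
# from a virtual sensor, record hit distances, and attach the label of the hit object.
import math
import random

SCENE = [  # (label, center_x, center_y, radius) -- trivial stand-ins for scene meshes
    ("tree", 4.0, 1.0, 0.5),
    ("wall", 2.0, -3.0, 1.0),
]

def ray_hit(angle, cx, cy, r):
    """Distance from the origin along the ray to a circle, or None if it misses."""
    dx, dy = math.cos(angle), math.sin(angle)
    # Solve |t*d - c|^2 = r^2 for the smallest positive t (d is a unit direction).
    b = -2 * (dx * cx + dy * cy)
    c = cx * cx + cy * cy - r * r
    disc = b * b - 4 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

def lidar_sweep(n_rays=360, noise_std=0.01):
    """Return a semantically labeled 2D point cloud as (x, y, label) tuples."""
    points = []
    for i in range(n_rays):
        angle = 2 * math.pi * i / n_rays
        hits = [(d, lbl) for (lbl, cx, cy, r) in SCENE
                if (d := ray_hit(angle, cx, cy, r)) is not None]
        if hits:
            d, lbl = min(hits)                   # nearest surface wins
            d += random.gauss(0.0, noise_std)    # simple range-noise model
            points.append((d * math.cos(angle), d * math.sin(angle), lbl))
    return points

print(len(lidar_sweep()), "labeled points")
```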


Sensors ◽  
2020 ◽  
Vol 20 (9) ◽  
pp. 2498 ◽  
Author(s):  
Robert D. Chambers ◽  
Nathanael C. Yoder

In this paper, we present and benchmark FilterNet, a flexible deep learning architecture for time series classification tasks, such as activity recognition via multichannel sensor data. It adapts popular convolutional neural network (CNN) and long short-term memory (LSTM) motifs which have excelled in activity recognition benchmarks, implementing them in a many-to-many architecture to markedly improve frame-by-frame accuracy, event segmentation accuracy, model size, and computational efficiency. We propose several model variants, evaluate them alongside other published models using the Opportunity benchmark dataset, demonstrate the effect of model ensembling and of altering key parameters, and quantify the quality of the models’ segmentation of discrete events. We also offer recommendations for use and suggest potential model extensions. FilterNet advances the state of the art in all measured accuracy and speed metrics when applied to the benchmarked dataset, and it can be extensively customized for other applications.
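To make the many-to-many idea concrete, here is a generic CNN-LSTM sketch in PyTorch (illustrative only, not the published FilterNet code): length-preserving convolutions extract local features, a bidirectional LSTM adds temporal context, and the head emits one class-score vector per frame rather than per window.

```python
# Generic many-to-many CNN-LSTM sketch in PyTorch (not the published FilterNet code):
# per-frame class scores instead of one prediction per window.
import torch
import torch.nn as nn

class ManyToManyCnnLstm(nn.Module):
    def __init__(self, n_channels: int, n_classes: int, hidden: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.lstm = nn.LSTM(64, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                    # x: (batch, time, channels)
        h = self.conv(x.transpose(1, 2))     # -> (batch, 64, time); padding keeps length
        h, _ = self.lstm(h.transpose(1, 2))  # -> (batch, time, 2*hidden)
        return self.head(h)                  # -> (batch, time, n_classes): one score per frame

# Example with Opportunity-like dimensions (113 sensor channels, 18 classes, 128-sample windows).
model = ManyToManyCnnLstm(n_channels=113, n_classes=18)
scores = model(torch.randn(8, 128, 113))
print(scores.shape)  # torch.Size([8, 128, 18])
```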


Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2397
Author(s):  
Bernd Zimmering ◽  
Oliver Niggemann ◽  
Constanze Hasterok ◽  
Erik Pfannstiel ◽  
Dario Ramming ◽  
...  

In the field of Cyber-Physical Systems (CPS), there is a large number of machine learning methods whose intrinsic hyper-parameters vary widely. Since no agreed-upon benchmark datasets for CPS exist, developers of new algorithms are forced to define their own benchmarks, which leads to a large number of algorithms each claiming benefits over other approaches but lacking a fair comparison. To tackle this problem, this paper defines a novel model of the data generation process found in CPS. The model is based on well-understood system theory and allows many datasets with different levels of complexity to be generated. The generated data pave the way for a comparison of selected machine learning methods in the exemplary field of unsupervised learning. Based on the synthetic CPS data, the data generation process is evaluated by analyzing the anomaly detection performance of the Self-Organizing Map, the One-Class Support Vector Machine, and a Long Short-Term Memory neural network.
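The evaluation idea can be sketched in a few lines (all parameters below are assumptions, not the paper's generator): sample trajectories from a simple, well-understood linear system, perturb the dynamics to create anomalies, and check whether an unsupervised detector such as scikit-learn's One-Class SVM separates them.

```python
# Sketch of the evaluation idea (assumed parameters): sample trajectories from a simple
# second-order linear system, inject anomalies, and score them with a One-Class SVM.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

def simulate(n_steps=200, anomaly=False):
    """Damped oscillator x_{t+1} = A x_t + noise; an 'anomaly' perturbs the dynamics."""
    A = np.array([[0.99, 0.10], [-0.10, 0.99]])
    if anomaly:
        A = A * 1.02                      # slightly unstable dynamics
    x = np.array([1.0, 0.0])
    out = []
    for _ in range(n_steps):
        x = A @ x + rng.normal(0, 0.01, size=2)
        out.append(x.copy())
    return np.asarray(out).ravel()        # flatten the trajectory into one feature vector

X_train = np.stack([simulate() for _ in range(100)])
X_test = np.stack([simulate(anomaly=(i % 2 == 1)) for i in range(20)])

detector = OneClassSVM(nu=0.05, gamma="scale").fit(X_train)
print(detector.predict(X_test))           # +1 = normal, -1 = flagged as anomalous
```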


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
K. Vidyasankar

A Fog Computing architecture consists of edge nodes that generate and possibly pre-process (sensor) data, fog nodes that perform some processing quickly and carry out any actuations that may be needed, and cloud nodes that may perform further detailed analysis for long-term and archival purposes. The processing of a batch of input data is distributed into sub-computations that are executed at the different nodes of the architecture. In many applications, the computations are expected to preserve the order in which the batches arrive at the sources. In this paper, we discuss mechanisms for performing the computations at a node in the correct order by storing some batches temporarily and/or dropping some batches. The former option causes a delay in processing, and the latter affects Quality of Service (QoS). We bring out the trade-offs between processing delay and the storage capabilities of the nodes, and between QoS and those storage capabilities.
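A minimal sketch of the store-or-drop trade-off at a single node follows (names and policy are assumptions, not the paper's formal model): out-of-order batches are buffered up to a storage limit; when the buffer overflows, the node stops waiting for the missing batch, and any batch that later arrives for a skipped slot is dropped.

```python
# Minimal sketch (assumed names) of in-order processing at one node: out-of-order batches
# are buffered up to a storage limit, trading processing delay and QoS against storage.
import heapq

class InOrderProcessor:
    def __init__(self, capacity: int):
        self.capacity = capacity    # storage available at this fog node
        self.buffer = []            # min-heap of (sequence_number, batch)
        self.next_seq = 0           # next batch expected in source order
        self.dropped = 0

    def _flush(self, process):
        while self.buffer and self.buffer[0][0] == self.next_seq:
            _, batch = heapq.heappop(self.buffer)
            process(batch)
            self.next_seq += 1

    def receive(self, seq: int, batch, process):
        if seq < self.next_seq:
            self.dropped += 1       # arrived after its slot was skipped: drop it
            return
        heapq.heappush(self.buffer, (seq, batch))
        self._flush(process)
        if len(self.buffer) > self.capacity:
            # Out of storage: give up waiting for the missing batch and move on.
            self.next_seq = self.buffer[0][0]
            self._flush(process)

node = InOrderProcessor(capacity=2)
for seq in [0, 2, 3, 4, 1]:        # batch 1 is delayed past the storage limit
    node.receive(seq, f"batch-{seq}", process=print)
print("dropped:", node.dropped)
```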


2009 ◽  
pp. 132-143
Author(s):  
K. Sonin ◽  
I. Khovanskaya

Hiring decisions are typically made by committees whose members have different capacities to estimate the quality of candidates. Organizational structure and voting rules in the committees determine the incentives and strategies of applicants; thus, the construction of a modern university requires a political structure that provides committee members and applicants with optimal incentives. Existing political-economic models of informative voting typically lack any variation in organizational structure, while political-economic models of organization typically assume a parsimonious information structure. In this paper, we propose a simple framework to analyze trade-offs in the optimal subdivision of universities into departments and subdepartments and in the allocation of political power.


2020 ◽  
Author(s):  
Juqing Zhao ◽  
Pei Chen ◽  
Guangming Wan

BACKGROUND There has been an increasing number of eHealth and mHealth interventions aimed at supporting symptom management among cancer survivors. However, patient engagement has not been guaranteed or standardized in these interventions. OBJECTIVE The objective of this review was to address how patient engagement has been defined and measured in eHealth and mHealth interventions designed to improve symptoms and quality of life for cancer patients. METHODS Searches were performed in MEDLINE, PsycINFO, Web of Science, and Google Scholar to identify eHealth and mHealth interventions designed specifically to improve symptom management for cancer patients. The definition and measurement of engagement and engagement-related outcomes of each intervention were synthesized. This integrated review was conducted using Critical Interpretive Synthesis to ensure the quality of the data synthesis. RESULTS A total of 792 intervention studies were identified through the searches; 10 research papers met the inclusion criteria. Most of them (6/10) were randomized trials, 2 were single-group trials, 1 used a qualitative design, and 1 used mixed methods. The majority of the identified papers defined patient engagement as usage of the eHealth or mHealth intervention, operationalized through different variables (e.g., usage time, number of log-ins, participation rate). Engagement has also been described as the subjective experience of interacting with the intervention. The measurement of engagement follows the definition of engagement and can be categorized into objective and subjective measures. Among the identified papers, 5 used system usage data, 2 used self-reported questionnaires, 1 used sensor data, and 3 used qualitative methods. Almost all studies reported engagement at a moment-to-moment level, but measurement of long-term engagement is lacking. CONCLUSIONS There have been calls to develop a standard definition and measurement of patient engagement in eHealth and mHealth interventions. In addition, it is important to provide cancer patients with more tailored and engaging eHealth and mHealth interventions to sustain long-term engagement.
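For illustration, the objective usage-based measures that dominate the reviewed studies (usage time, number of log-ins, participation rate) can be computed from session logs as in the following sketch; the log format and data are hypothetical.

```python
# Illustrative sketch (hypothetical log format) of objective engagement measures:
# total usage time, number of log-ins, and participation rate across enrolled users.
from datetime import datetime, timedelta

sessions = [  # (user_id, session_start, session_end) -- made-up example data
    ("p01", datetime(2020, 5, 1, 9, 0), datetime(2020, 5, 1, 9, 20)),
    ("p01", datetime(2020, 5, 3, 18, 0), datetime(2020, 5, 3, 18, 5)),
    ("p02", datetime(2020, 5, 2, 8, 30), datetime(2020, 5, 2, 8, 45)),
]
enrolled_users = {"p01", "p02", "p03"}

usage_time, logins = {}, {}
for user, start, end in sessions:
    usage_time[user] = usage_time.get(user, timedelta()) + (end - start)
    logins[user] = logins.get(user, 0) + 1

participation_rate = len({u for u, _, _ in sessions}) / len(enrolled_users)
print(usage_time, logins, f"participation={participation_rate:.0%}")
```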


Electronics ◽  
2021 ◽  
Vol 10 (14) ◽  
pp. 1685
Author(s):  
Sakorn Mekruksavanich ◽  
Anuchit Jitpattanakul

Sensor-based human activity recognition (S-HAR) has become an important and high-impact research topic within human-centered computing. In the last decade, successful applications of S-HAR have emerged from fruitful academic research and industrial applications, including healthcare monitoring, smart home control, and daily sport tracking. However, the growing requirement of many current applications to recognize complex human activities (CHA), rather than only simple human activities (SHA), has begun to attract the attention of the HAR research field. Work on S-HAR has shown that deep learning (DL), a type of machine learning based on multi-layered artificial neural networks, achieves a significant degree of recognition accuracy. Convolutional neural networks (CNNs) and recurrent neural networks (RNNs) are two types of DL methods that have been successfully applied to the S-HAR challenge in recent years. In this paper, we focus on four RNN-based DL models (LSTM, BiLSTM, GRU, and BiGRU) for complex activity recognition tasks. The efficiency of four hybrid DL models that combine convolutional layers with these RNN-based models is also studied. Experimental studies on the UTwente dataset demonstrate that the proposed hybrid RNN-based models achieve a high level of recognition performance across a variety of performance indicators, including accuracy, F1-score, and the confusion matrix. The experimental results show that the hybrid DL model called CNN-BiGRU outperformed the other DL models, with a high accuracy of 98.89% when using only complex activity data. Moreover, the CNN-BiGRU model also achieved the highest recognition performance in the other scenarios (99.44% using only simple activity data and 98.78% with a combination of simple and complex activities).
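As a concrete reference point, the following PyTorch sketch shows a generic CNN-BiGRU of the kind described; the layer sizes, channel count, and class count are placeholders, not the paper's exact configuration.

```python
# Generic CNN-BiGRU sketch in PyTorch (illustrative sizes): convolutions extract local
# motion features, a bidirectional GRU models temporal context, and a linear head
# predicts one activity label per window.
import torch
import torch.nn as nn

class CnnBiGru(nn.Module):
    def __init__(self, n_channels: int, n_classes: int, hidden: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.gru = nn.GRU(128, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                      # x: (batch, time, channels)
        h = self.conv(x.transpose(1, 2))       # -> (batch, 128, time/4)
        _, h_n = self.gru(h.transpose(1, 2))   # h_n: (2, batch, hidden), one per direction
        h_last = torch.cat([h_n[0], h_n[1]], dim=1)
        return self.head(h_last)               # -> (batch, n_classes): one label per window

# Placeholder example: 9 inertial channels, 2-second windows at 50 Hz, 7 activity classes.
model = CnnBiGru(n_channels=9, n_classes=7)
print(model(torch.randn(16, 100, 9)).shape)    # torch.Size([16, 7])
```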

