Fusing Object Information and Inertial Data for Activity Recognition

Sensors ◽  
2019 ◽  
Vol 19 (19) ◽  
pp. 4119 ◽  
Author(s):  
Alexander Diete ◽  
Heiner Stuckenschmidt

In the field of pervasive computing, wearable devices have been widely used for recognizing human activities. One important area in this research is the recognition of activities of daily living, where inertial sensors and interaction sensors (like RFID tags with scanners) are especially popular choices as data sources. Using interaction sensors, however, has one drawback: they may not differentiate between proper interaction and simple touching of an object. A positive signal from an interaction sensor is not necessarily caused by a performed activity, e.g., when an object is merely touched without any subsequent interaction. There are, however, many scenarios, like medicine intake, that rely heavily on correctly recognized activities. In our work, we aim to address this limitation and present a multimodal egocentric-based activity recognition approach. Our solution relies on object detection that recognizes activity-critical objects in a frame. As it is infeasible to always expect a high-quality camera view, we enrich the vision features with inertial sensor data that monitors the users' arm movement. In this way, we try to overcome the drawbacks of each respective sensor. We present our results of combining inertial and video features to recognize human activities in different types of scenarios, where we achieve an F1-measure of up to 79.6%.
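One common way to combine the two modalities is early fusion: normalize each modality's per-window features, then concatenate them before classification. The sketch below uses NumPy with entirely hypothetical feature sets (object-detection confidences and accelerometer statistics); it is an illustration of the fusion step, not the paper's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-window features: detection scores for 5
# activity-critical object classes, and 6 statistics (mean/std per
# axis) from a 3-axis wrist accelerometer.
n_windows = 100
vision_feats = rng.random((n_windows, 5))        # e.g. max class confidences
inertial_feats = rng.standard_normal((n_windows, 6))

def zscore(x):
    """Per-feature standardization so neither modality dominates."""
    return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)

# Early fusion: per-modality normalization, then feature concatenation.
fused = np.hstack([zscore(vision_feats), zscore(inertial_feats)])
print(fused.shape)  # (100, 11)
```

The fused matrix would then feed any standard classifier, which is where the reported F1-measure is computed.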

Sensors ◽  
2021 ◽  
Vol 21 (18) ◽  
pp. 6316
Author(s):  
Dinis Moreira ◽  
Marília Barandas ◽  
Tiago Rocha ◽  
Pedro Alves ◽  
Ricardo Santos ◽  
...  

With the fast increase in the demand for location-based services and the proliferation of smartphones, the topic of indoor localization is attracting great interest. In indoor environments, users’ performed activities carry useful semantic information. These activities can then be used by indoor localization systems to confirm users’ current relative locations in a building. In this paper, we propose a deep-learning model based on a Convolutional Long Short-Term Memory (ConvLSTM) network to classify human activities within the indoor localization scenario using smartphone inertial sensor data. Results show that the proposed human activity recognition (HAR) model accurately identifies nine types of activities: not moving, walking, running, going up in an elevator, going down in an elevator, walking upstairs, walking downstairs, and going up and down a ramp. Moreover, predicted human activities were integrated within an existing indoor positioning system and evaluated in a multi-story building across several testing routes, with an average positioning error of 2.4 m. The results show that the inclusion of human activity information can reduce the overall localization error of the system and actively contribute to the better identification of floor transitions within a building. The conducted experiments demonstrated promising results and verified the effectiveness of using human activity-related information for indoor localization.
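A ConvLSTM-style model typically consumes fixed-length windows of the inertial stream. The segmentation step can be sketched as follows; the window length, step, channel count, and sampling rate are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def sliding_windows(signal, win_len, step):
    """Segment a (T, channels) inertial signal into fixed-length,
    overlapping windows, the usual input shape for windowed HAR models."""
    starts = range(0, len(signal) - win_len + 1, step)
    return np.stack([signal[s:s + win_len] for s in starts])

# 10 s of tri-axial accelerometer + gyroscope data at an assumed 50 Hz
# (values are zeros here purely to demonstrate shapes).
data = np.zeros((500, 6))
windows = sliding_windows(data, win_len=128, step=64)  # 50% overlap
print(windows.shape)  # (6, 128, 6)
```

Each window would then be labeled with one of the nine activities and fed to the network as one training example.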


2021 ◽  
Vol 5 (6) ◽  
pp. 1193-1206
Author(s):  
Humaira Nur Pradani ◽  
Faizal Mahananto

Human activity recognition (HAR) is a topic of wide research interest because of its diverse applications in fields such as health, construction, and UI/UX. As MEMS (Micro-Electro-Mechanical Systems) technology evolves, HAR data acquisition can be done more easily and efficiently using inertial sensors. Inertial sensor data processing for HAR requires a series of processes and a variety of techniques. This literature study aims to summarize the various approaches that have been used in existing research to build HAR models. Published articles were collected from ScienceDirect, IEEE Xplore, and MDPI over the past five years (2017-2021). From the 38 studies identified, the information extracted includes an overview of the areas of HAR implementation, data acquisition, public datasets, pre-processing methods, feature extraction approaches, feature selection methods, classification models, training scenarios, model performance, and research challenges in this topic. The analysis showed that there is still room to improve the performance of HAR models. Therefore, future research on HAR using inertial sensors can focus on extracting and selecting more optimal features, considering the robustness of the model, increasing the complexity of classified activities, and balancing accuracy with computation time.


Electronics ◽  
2021 ◽  
Vol 10 (14) ◽  
pp. 1685
Author(s):  
Sakorn Mekruksavanich ◽  
Anuchit Jitpattanakul

Sensor-based human activity recognition (S-HAR) has become an important and high-impact topic of research within human-centered computing. In the last decade, successful applications of S-HAR have been presented through fruitful academic research and industrial applications, including healthcare monitoring, smart home control, and daily sport tracking. However, the growing requirements of many current applications for recognizing complex human activities (CHA), as opposed to simple human activities (SHA), have begun to attract the attention of the HAR research field. Work on S-HAR has shown that deep learning (DL), a type of machine learning based on complicated artificial neural networks, achieves a significant degree of recognition efficiency. Convolutional neural networks (CNNs) and recurrent neural networks (RNNs) are two different types of DL methods that have been successfully applied to the S-HAR challenge in recent years. In this paper, we focus on four RNN-based DL models (LSTMs, BiLSTMs, GRUs, and BiGRUs) that perform complex activity recognition tasks. The efficiency of four hybrid DL models that combine convolutional layers with the efficient RNN-based models was also studied. Experimental studies on the UTwente dataset demonstrated that the suggested hybrid RNN-based models achieved a high level of recognition performance across a variety of performance indicators, including accuracy, F1-score, and confusion matrix. The experimental results show that the hybrid DL model called CNN-BiGRU outperformed the other DL models with a high accuracy of 98.89% when using only complex activity data. Moreover, the CNN-BiGRU model also achieved the highest recognition performance in other scenarios (99.44% by using only simple activity data and 98.78% with a combination of simple and complex activities).
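The CNN-BiGRU idea can be sketched framework-free in NumPy: a 1-D convolutional front-end extracts local motion features, and a bidirectional GRU summarizes them over time. All shapes, layer sizes, and random weights below are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def conv1d(x, kernels):
    """Valid 1-D convolution: (T, C_in) -> (T - k + 1, C_out)."""
    k, c_in, c_out = kernels.shape
    return np.stack([np.tensordot(x[t:t + k], kernels, axes=([0, 1], [0, 1]))
                     for t in range(x.shape[0] - k + 1)])

def gru(x, W, U, b, reverse=False):
    """Minimal GRU over a (T, C) sequence; returns the last hidden state."""
    h = np.zeros(U.shape[-1])
    for xt in (x[::-1] if reverse else x):
        z = sigmoid(xt @ W[0] + h @ U[0] + b[0])   # update gate
        r = sigmoid(xt @ W[1] + h @ U[1] + b[1])   # reset gate
        n = np.tanh(xt @ W[2] + (r * h) @ U[2] + b[2])
        h = (1 - z) * h + z * n
    return h

T, c_in, c_conv, hid = 128, 6, 8, 16
x = rng.standard_normal((T, c_in))                 # one sensor window
kernels = rng.standard_normal((5, c_in, c_conv)) * 0.1
feats = conv1d(x, kernels)                         # local motion features
W = rng.standard_normal((3, c_conv, hid)) * 0.1
U = rng.standard_normal((3, hid, hid)) * 0.1
b = np.zeros((3, hid))
# BiGRU: forward and backward passes over the conv features, concatenated.
h = np.concatenate([gru(feats, W, U, b), gru(feats, W, U, b, reverse=True)])
print(h.shape)  # (32,)
```

The concatenated state `h` would feed a final softmax layer over the activity classes; in practice such models are trained with a DL framework rather than written by hand.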


10.2196/13961 ◽  
2020 ◽  
Vol 22 (4) ◽  
pp. e13961
Author(s):  
Kim Sarah Sczuka ◽  
Lars Schwickert ◽  
Clemens Becker ◽  
Jochen Klenk

Background: Falls are a common health problem, which in the worst cases can lead to death. To develop reliable fall detection algorithms as well as suitable prevention interventions, it is important to understand the circumstances and characteristics of real-world fall events. Although falls are common, they are seldom observed, and reports are often biased. Wearable inertial sensors provide an objective approach to capturing real-world fall signals. However, it is difficult to directly derive visualization and interpretation of body movements from the fall signals, and corresponding video data are rarely available. Objective: The re-enactment method uses available information from inertial sensors to simulate fall events, replicate the data, validate the simulation, and thereby enable a more precise description of the fall event. The aim of this paper is to describe this method and demonstrate the validity of the re-enactment approach. Methods: Real-world fall data, measured by inertial sensors attached to the lower back, were selected from the Fall Repository for the Design of Smart and Self-Adaptive Environments Prolonging Independent Living (FARSEEING) database. We focused on well-described fall events, such as stumbling, to be re-enacted under safe conditions in a laboratory setting. For the purposes of exemplification, we selected the acceleration signal of one fall event to establish a detailed simulation protocol based on identified postures and trunk movement sequences. The subsequent re-enactment experiments were recorded with comparable inertial sensor configurations as well as synchronized video cameras to analyze the movement behavior in detail. The re-enacted sensor signals were then compared with the real-world signals to adapt the protocol and repeat the re-enactment method if necessary.
The similarity between the simulated and the real-world fall signals was analyzed with a dynamic time warping algorithm, which enables the comparison of two temporal sequences varying in speed and timing. Results: A fall example from the FARSEEING database was used to show the feasibility of producing a similar sensor signal with the re-enactment method. Although fall events were heterogeneous concerning chronological sequence and curve progression, it was possible to reproduce a good approximation of the motion of a person’s center of mass during fall events based on the available sensor information. Conclusions: Re-enactment is a promising method to understand and visualize the biomechanics of inertial-sensor-recorded real-world falls when performed in a suitable setup, especially if video data are not available.
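Dynamic time warping, the comparison metric named above, can be implemented in a few lines. The signals below are synthetic stand-ins for a real-world and a re-enacted fall signal, chosen only to show that DTW tolerates timing differences:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences,
    allowing them to differ in speed and timing."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of: match, insertion, deletion.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A signal replayed at a slower pace still aligns well under DTW,
# while an unrelated flat signal does not.
real = np.sin(np.linspace(0, np.pi, 50))
reenacted = np.sin(np.linspace(0, np.pi, 80))   # same shape, different timing
print(dtw_distance(real, reenacted) < dtw_distance(real, np.zeros(50)))  # True
```

In practice a normalized DTW cost (divided by the warping path length) is often reported so that sequences of different lengths are comparable.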


Author(s):  
Edgar Charry ◽  
Daniel T.H. Lai

The use of inertial sensors to measure human movement has recently gained momentum with the advent of low-cost micro-electro-mechanical systems (MEMS) technology. These sensors comprise accelerometers and gyroscopes, which measure accelerations and angular velocities respectively. Secondary quantities such as displacement can be obtained by integration of these quantities, a method which presents challenging issues due to the problem of accumulating sensor errors. This chapter investigates the spectral evaluation of individual sensor errors and looks at the effectiveness of minimizing these errors using static digital filters. The primary focus is on the derivation of foot displacement data from inertial sensor measurements. The importance of foot, and in particular toe, displacement measurements is evident in the context of tripping and falling, which are serious health concerns for the elderly. The Minimum Toe Clearance (MTC) is an important gait variable for falls-risk prediction and assessment, and is therefore the measurement variable of interest. A brief sketch of the current devices employing accelerometers and gyroscopes is presented, highlighting the problems and difficulties reported in the literature in achieving good precision. These have been mainly due to the presence of sensor errors and the error-accumulating process employed in obtaining displacement measurements. The investigation first identifies the location of these sensor errors in the frequency domain using the Fast Fourier Transform (FFT) on raw inertial sensor data. The frequency content of velocity and displacement measurements, obtained by integrating the inertial data using a well-known strap-down method, is then explored. These investigations revealed that large sensor errors occur mainly in the low-frequency spectrum, while white noise exists across all frequencies.
The efficacy of employing a band-pass filter to remove a large portion of these errors, and the effect on the derived displacements, is elaborated on. The cross-correlation of the FFT power spectra from a highly accurate optical measurement system and from the processed sensor data is used as a metric to evaluate the performance of the band-pass filter at several stages of the processing pipeline. The motivation is that a more fundamental method would require less computational demand and could lead to more efficient implementations in low-power systems with limited resources, so that a portable sensor-based motion measurement system would provide a good degree of measurement accuracy.
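A minimal frequency-domain band-pass filter of the kind discussed can be sketched with NumPy by zeroing FFT bins outside the pass band. The sampling rate, cut-offs, and signals below are illustrative assumptions, not the chapter's actual parameters:

```python
import numpy as np

def fft_bandpass(signal, fs, low, high):
    """Static band-pass filter in the frequency domain: zero all FFT
    bins outside [low, high] Hz, then invert.  This suppresses
    low-frequency drift (accumulated integration error) and
    out-of-band high-frequency noise."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spec[(freqs < low) | (freqs > high)] = 0.0
    return np.fft.irfft(spec, n=len(signal))

fs = 100.0                                     # assumed sampling rate
t = np.arange(0, 10, 1 / fs)
gait = np.sin(2 * np.pi * 2.0 * t)             # ~2 Hz gait component
drift = 2.0 * np.sin(2 * np.pi * 0.1 * t)      # slow low-frequency drift
filtered = fft_bandpass(gait + drift, fs, low=0.5, high=10.0)
# The drift lies below the 0.5 Hz cut-off and is removed; the gait
# component falls inside the pass band and is preserved.
print(np.allclose(filtered, gait, atol=1e-6))  # True
```

Real sensor drift is not a clean sinusoid, so in practice the residual after filtering is small rather than zero, which is why the chapter evaluates the filter against an optical reference system.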


2013 ◽  
Vol 823 ◽  
pp. 107-110
Author(s):  
Zi Ming Xiao ◽  
Yu Long Shi ◽  
Yong Xue ◽  
Feng Hu ◽  
Yu Chuan Wu

This paper introduces techniques for classifying human activities with inertial sensors and points out a number of characteristics of classification algorithms. The goal of human activity recognition is to automatically analyze the ongoing activities of people who wear inertial sensors. Initially, we provide introductory information about activity recognition, such as the means of data acquisition, the sensors used, and the steps of activity recognition using machine learning algorithms. Next, we focus on the classification techniques together with a detailed taxonomy; the classification techniques implemented and compared in this study are: Decision Tree Algorithm (DTA), Bayesian Decision Making (BDM), Support Vector Machines (SVM), Artificial Neural Networks (ANN), and Hidden Markov Models (HMM). Finally, we summarize the study and investigate directions for future research.


2020 ◽  
Author(s):  
Timo von Marcard

This thesis explores approaches to capture human motion with a small number of sensors. In the first part of this thesis, an approach is presented that reconstructs the body pose from only six inertial sensors. Instead of relying on pre-recorded motion databases, a global optimization problem is solved to maximize the consistency of measurements and model over an entire recording sequence. The second part of this thesis deals with a hybrid approach to fuse visual information from a single hand-held camera with inertial sensor data. First, a discrete optimization problem is solved to automatically associate people detections in the video with inertial sensor data. Then, a global optimization problem is formulated to combine visual and inertial information. The proposed approach enables the capture of multiple interacting people and works even if many more people are visible in the camera image. In addition, systematic inertial sensor errors can be compensated, leading to a substantial in...


2019 ◽  
Vol 28 (04) ◽  
pp. 1940006 ◽  
Author(s):  
Olga C. Santos

Recent trends in educational technology focus on designing systems that can support students while learning complex psychomotor skills, such as those required when practicing sports and martial arts, dancing or playing a musical instrument. In this context, artificial intelligence can be key to personalizing the development of these psychomotor skills by enabling the provision of effective feedback when the instructor is not present, or by scaling up to a larger pool of students the feedback that an instructor would typically provide one-on-one. This paper presents the modeling of human motion gathered with inertial sensors, aimed at offering personalized support to students when learning complex psychomotor skills. In particular, when comparing learner data with those of an expert during the psychomotor learning process, artificial intelligence algorithms make it possible to: (i) recognize specific motion learning units and (ii) assess learning performance in a motion unit. However, this field still seems to be emerging: when reviewed systematically, search results hardly included the modeling of complex human activities measured with inertial sensors using artificial intelligence techniques.


Sensors ◽  
2020 ◽  
Vol 20 (14) ◽  
pp. 3811
Author(s):  
Tahera Hossain ◽  
Md. Atiqur Rahman Ahad ◽  
Sozo Inoue

Sensor-based human activity recognition has various applications in healthcare, elderly smart homes, sports, etc. There are numerous works in this field on recognizing various human activities from sensor data. However, those works assume clean data with almost no missing values, an assumption that rarely holds in real-life healthcare centers. Therefore, to address this problem, we explored sensor-based activity recognition when some partial data are lost in a random pattern. In this paper, we propose a novel method to improve activity recognition in the presence of missing data without any data recovery. For the missing data pattern, we considered data missing in a random pattern, which is a realistic missing pattern for sensor data collection. Initially, we created different percentages of random missing data only in the test data, while training was performed on good-quality data. In our proposed approach, we explicitly induce different percentages of missing data randomly in the raw sensor data to train the model with missing data. Learning with missing data prepares the model to handle missing data during the classification of various activities that have missing data in the test module. This approach demonstrates the plausibility of the machine learning model, as it can learn and predict within an identical domain. We exploited several time-series statistical features to extract better features in order to characterize various human activities. We explored both support vector machine and random forest as machine learning models for activity classification. We developed a synthetic dataset to empirically evaluate the performance and show that the method can effectively improve the recognition accuracy from 80.8% to 97.5%. Afterward, we tested our approach with activities from two challenging benchmark datasets: the human activity sensing consortium (HASC) dataset and a single chest-mounted accelerometer dataset.
We examined the method for different missing percentages, varied window sizes, and diverse window sliding widths. Our explorations demonstrated improved recognition performances even in the presence of missing data. The achieved results provide persuasive findings on sensor-based activity recognition in the presence of missing data.
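The train-time augmentation step, randomly dropping a chosen fraction of samples, can be sketched as follows; the missing ratio, data shape, and NaN encoding are illustrative assumptions:

```python
import numpy as np

def induce_missing(x, missing_ratio, rng):
    """Randomly drop a given fraction of samples (set to NaN),
    mimicking a realistic random missing pattern in a sensor stream."""
    x = x.copy()
    mask = rng.random(x.shape) < missing_ratio
    x[mask] = np.nan
    return x

rng = np.random.default_rng(42)
train = rng.standard_normal((1000, 3))   # clean tri-axial sensor data
# Train-time augmentation: the model sees e.g. 20% missing samples,
# so it learns to classify under the same corruption as the test data.
train_aug = induce_missing(train, 0.2, rng)
print(round(float(np.isnan(train_aug).mean()), 2))  # ≈ 0.2
```

Downstream, the window-level statistical features would be computed with NaN-aware reductions (e.g. `np.nanmean`), so classification proceeds without any explicit data recovery.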


Information ◽  
2020 ◽  
Vol 11 (9) ◽  
pp. 416 ◽  
Author(s):  
Lei Chen ◽  
Shurui Fan ◽  
Vikram Kumar ◽  
Yating Jia

Human activity recognition (HAR) has been increasingly used in medical care, behavior analysis, and the entertainment industry to improve the user experience. Most existing works use fixed models to identify various activities; however, they do not adapt well to the dynamic nature of human activities. We investigated activity recognition with postural transition awareness. The inertial sensor data were processed by filters, and both the time domain and frequency domain of the signals were used to extract a set of 585 features. For posture classification, three feature selection algorithms were considered to obtain the optimal feature subset from these features. We adopted three classifiers (support vector machine, decision tree, and random forest) for comparative analysis. In the experiments, the support vector machine gave better classification results than the other two methods, achieving up to 98% accuracy in multi-class classification. Finally, the results were verified by probability estimation.
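A per-window feature extractor combining time-domain and frequency-domain descriptors can be sketched as below. The specific features (mean, standard deviation, range, dominant frequency) and the window parameters are hypothetical illustrations, not the paper's 585-feature set:

```python
import numpy as np

def window_features(window, fs):
    """Hypothetical feature vector for one (samples, axes) sensor
    window: three time-domain statistics plus the dominant frequency
    of each axis."""
    feats = []
    for axis in window.T:
        spec = np.abs(np.fft.rfft(axis - axis.mean()))
        freqs = np.fft.rfftfreq(len(axis), d=1.0 / fs)
        feats += [axis.mean(), axis.std(), np.ptp(axis),  # time domain
                  freqs[spec.argmax()]]                   # frequency domain
    return np.array(feats)

fs = 50.0
t = np.arange(0, 2.56, 1 / fs)                   # one 128-sample window
window = np.stack([np.sin(2 * np.pi * 1.5 * t),  # simulated periodic axes
                   np.cos(2 * np.pi * 1.5 * t),
                   0.1 * np.ones_like(t)], axis=1)
f = window_features(window, fs)
print(f.shape)  # (12,)
```

Feature vectors of this kind, computed over many windows, are what a feature selection algorithm would then prune before training the SVM, decision tree, or random forest.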

