A Method for Sensor-Based Activity Recognition in Missing Data Scenario

Sensors ◽  
2020 ◽  
Vol 20 (14) ◽  
pp. 3811
Author(s):  
Tahera Hossain ◽  
Md. Atiqur Rahman Ahad ◽  
Sozo Inoue

Sensor-based human activity recognition has various applications in healthcare, elderly smart homes, sports, and related areas. There are numerous works in this field that recognize various human activities from sensor data. However, those works assume clean data with almost no missing values, which is a genuine concern for real-life healthcare centers. Therefore, to address this problem, we explored sensor-based activity recognition when partial data are lost in a random pattern. In this paper, we propose a novel method to improve activity recognition under missing data without any data recovery. For the missing data pattern, we considered data to be missing in a random pattern, which is a realistic scenario for sensor data collection. Initially, we created different percentages of random missing data only in the test data, while training was performed on good-quality data. In our proposed approach, we explicitly induce different percentages of missing data randomly in the raw sensor data to train the model with missing data. Learning with missing data reinforces the model to handle missing data during the classification of various activities that have missing data in the test module. This approach demonstrates the plausibility of the machine learning model, as it can learn and predict from an identical domain. We exploited several time-series statistical features to better characterize various human activities. We explored both support vector machine and random forest as machine learning models for activity classification. We developed a synthetic dataset to empirically evaluate the performance and show that the method can effectively improve the recognition accuracy from 80.8% to 97.5%. Afterward, we tested our approach with activities from two challenging benchmark datasets: the Human Activity Sensing Consortium (HASC) dataset and a single chest-mounted accelerometer dataset. We examined the method for different missing percentages, varied window sizes, and diverse window sliding widths. Our explorations demonstrated improved recognition performance even in the presence of missing data. The achieved results provide persuasive findings on sensor-based activity recognition in the presence of missing data.
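A minimal sketch of the core idea, induce random missingness in the training windows and compute NaN-tolerant statistical features before fitting a classifier, is shown below on synthetic data. The window length, missing percentage, feature set, and classifier settings are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch: train with randomly induced missing data so the model sees the same
# corruption at training time that it will face at test time.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def induce_missing(windows, missing_pct):
    """Randomly drop samples (set to NaN) inside each window."""
    out = windows.copy()
    out[rng.random(out.shape) < missing_pct] = np.nan
    return out

def extract_features(windows):
    """Per-window statistics computed while ignoring NaNs."""
    return np.column_stack([np.nanmean(windows, axis=1),
                            np.nanstd(windows, axis=1),
                            np.nanmin(windows, axis=1),
                            np.nanmax(windows, axis=1)])

# Synthetic data: two "activities" with different signal statistics.
n, win = 400, 128
X_raw = np.concatenate([rng.normal(0.0, 1.0, (n, win)),
                        rng.normal(2.0, 0.5, (n, win))])
y = np.array([0] * n + [1] * n)
X_tr_raw, X_te_raw, y_tr, y_te = train_test_split(X_raw, y, test_size=0.3,
                                                  random_state=0)

# Induce missingness in BOTH training and test windows (30% here, illustrative).
X_tr = extract_features(induce_missing(X_tr_raw, 0.3))
X_te = extract_features(induce_missing(X_te_raw, 0.3))

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("accuracy with missing data:", accuracy_score(y_te, clf.predict(X_te)))
```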

Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 7853
Author(s):  
Aleksej Logacjov ◽  
Kerstin Bach ◽  
Atle Kongsvold ◽  
Hilde Bremseth Bårdstu ◽  
Paul Jarle Mork

Existing accelerometer-based human activity recognition (HAR) benchmark datasets that were recorded during free living suffer from non-fixed sensor placement, the use of only one sensor, and unreliable annotations. We make two contributions in this work. First, we present the publicly available Human Activity Recognition Trondheim dataset (HARTH). Twenty-two participants were recorded for 90 to 120 min during their regular working hours using two three-axial accelerometers, attached to the thigh and lower back, and a chest-mounted camera. Experts annotated the data independently using the camera's video signal and achieved high inter-rater agreement (Fleiss' Kappa = 0.96). They labeled twelve activities. The second contribution of this paper is the training of seven different baseline machine learning models for HAR on our dataset. We used a support vector machine, k-nearest neighbor, random forest, extreme gradient boost, convolutional neural network, bidirectional long short-term memory, and convolutional neural network with multi-resolution blocks. The support vector machine achieved the best results with an F1-score of 0.81 (standard deviation: ±0.18), recall of 0.85 ± 0.13, and precision of 0.79 ± 0.22 in leave-one-subject-out cross-validation. Our high-quality recordings and annotations provide a promising benchmark dataset for researchers to develop innovative machine learning approaches for precise HAR in free living.
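The leave-one-subject-out protocol used for these baselines can be sketched as below with scikit-learn. Feature extraction from the two accelerometers is assumed to have already produced a feature matrix, labels, and per-window subject IDs; the arrays here are placeholders only, and the SVM settings are illustrative rather than the paper's exact configuration.

```python
# Sketch: leave-one-subject-out cross-validation for a HAR baseline SVM.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 20))            # placeholder feature windows
y = rng.integers(0, 4, size=600)          # placeholder activity labels
subjects = rng.integers(0, 10, size=600)  # placeholder subject IDs

scores = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(X[train_idx], y[train_idx])
    pred = clf.predict(X[test_idx])
    scores.append(f1_score(y[test_idx], pred, average="macro"))

# Mean and spread across held-out subjects, mirroring the reported F1 ± std.
print(f"LOSO macro F1: {np.mean(scores):.2f} ± {np.std(scores):.2f}")
```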


Sensors ◽  
2021 ◽  
Vol 21 (18) ◽  
pp. 6316
Author(s):  
Dinis Moreira ◽  
Marília Barandas ◽  
Tiago Rocha ◽  
Pedro Alves ◽  
Ricardo Santos ◽  
...  

With the fast increase in demand for location-based services and the proliferation of smartphones, the topic of indoor localization is attracting great interest. In indoor environments, the activities users perform carry useful semantic information. These activities can be used by indoor localization systems to confirm users' current relative locations in a building. In this paper, we propose a deep-learning model based on a Convolutional Long Short-Term Memory (ConvLSTM) network to classify human activities within the indoor localization scenario using smartphone inertial sensor data. Results show that the proposed human activity recognition (HAR) model accurately identifies nine types of activities: not moving, walking, running, going up in an elevator, going down in an elevator, walking upstairs, walking downstairs, going up a ramp, and going down a ramp. Moreover, predicted human activities were integrated into an existing indoor positioning system and evaluated in a multi-story building across several testing routes, with an average positioning error of 2.4 m. The results show that the inclusion of human activity information can reduce the overall localization error of the system and actively contribute to better identification of floor transitions within a building. The conducted experiments demonstrated promising results and verified the effectiveness of using human activity-related information for indoor localization.
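A minimal sketch of a ConvLSTM classifier over inertial windows is given below, assuming TensorFlow/Keras (the ConvLSTM1D layer requires a recent TensorFlow release). The window length, six sensor channels, subsequence split, layer sizes, and training setup are illustrative assumptions, not the architecture reported in the paper.

```python
# Sketch: ConvLSTM over smartphone IMU windows for nine-class HAR.
import numpy as np
import tensorflow as tf

n_classes = 9            # the nine activities listed above
steps, rows, channels = 4, 32, 6   # a 128-sample window split into 4 subsequences

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(steps, rows, channels)),
    # Convolutional recurrence over the subsequences of each window.
    tf.keras.layers.ConvLSTM1D(64, kernel_size=3, activation="tanh"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Placeholder data standing in for segmented smartphone inertial windows.
X = np.random.randn(256, steps, rows, channels).astype("float32")
y = np.random.randint(0, n_classes, size=256)
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```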


Author(s):  
Wahyu Andhyka Kusuma ◽  
Zamah Sari ◽  
Agus Eko Minarno ◽  
Hardianto Wibowo ◽  
Denar Regata Akbi ◽  
...  

Human activity recognition (HAR) of daily activities has become a leading problem in human physical analysis. Applications of HAR across several areas of physical analysis have grown along with advances in machine learning methods. Applications such as fall detection, medical rehabilitation, and other smart appliances for physical analysis improve quality of life. Smart wearable devices with inertial accelerometer and gyroscope sensors are popular for physical analysis, and previous research has placed these sensors at various positions on the human body. Activities can be classified into three classes: static activity (SA), transition activity (TA), and dynamic activity (DA); they can further be separated into low- and high-complexity daily activities. Daily activity patterns share similar shapes in the collected sensor signals. The dataset used in this paper was acquired from 30 volunteers. Seven basic machine learning algorithms were evaluated, including Logistic Regression, Support Vector Machine, Decision Tree, Random Forest, Gradient Boosting, and K-Nearest Neighbor (see the sketch after this paragraph). Confused activities were resolved with a simple linear method. The proposed Logistic Regression method achieves 98% accuracy, the same as an SVM with a linear kernel; with hyperparameter tuning, both methods retain the same accuracy. LR and SVC are better suited to recognizing SA and DA without TA.
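A minimal sketch of this classifier comparison with scikit-learn is shown below. The feature matrix, label set, and hyperparameters are placeholders; in practice the features would come from the 30-volunteer dataset described above.

```python
# Sketch: compare basic classifiers on HAR feature vectors.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(900, 60))        # placeholder feature vectors
y = rng.integers(0, 6, size=900)      # placeholder labels for six activities
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "SVM (linear kernel)": SVC(kernel="linear"),
    "DecisionTree": DecisionTreeClassifier(),
    "RandomForest": RandomForestClassifier(n_estimators=100),
    "GradientBoosting": GradientBoostingClassifier(),
    "kNN": KNeighborsClassifier(n_neighbors=5),
}
for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    print(f"{name}: {accuracy_score(y_te, clf.predict(X_te)):.3f}")
```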


Sensors ◽  
2020 ◽  
Vol 20 (1) ◽  
pp. 317 ◽  
Author(s):  
Nadeem Ahmed ◽  
Jahir Ibna Rafiq ◽  
Md Rashedul Islam

Human activity recognition (HAR) techniques play a significant role in monitoring daily activities of human life, such as elderly care, investigation activities, healthcare, sports, and smart homes. Smartphones incorporate a variety of motion sensors, such as accelerometers and gyroscopes, which are widely used inertial sensors capable of identifying different physical states of a person. In recent years, much research has been done on human activity recognition. Smartphone sensor data produce high-dimensional feature vectors for identifying human activities. However, not all features contribute equally to the identification process, and including all of them creates the phenomenon known as the 'curse of dimensionality'. This research proposes a hybrid feature selection process that combines a filter and a wrapper method. The process uses a sequential floating forward search (SFFS) to extract the desired features for better activity recognition. The selected features are then fed to a multiclass support vector machine (SVM), which creates nonlinear classifiers by adopting the kernel trick for training and testing. We validated our model with a benchmark dataset. Our proposed system works efficiently with limited hardware resources and provides satisfactory activity identification.
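One way to prototype this pipeline is sketched below using mlxtend's SequentialFeatureSelector in floating-forward mode wrapped around an RBF-kernel SVM; mlxtend is one available SFFS implementation, not necessarily what the authors used, and the feature counts and data are placeholders.

```python
# Sketch: SFFS-style wrapper feature selection feeding a multiclass SVM.
import numpy as np
from mlxtend.feature_selection import SequentialFeatureSelector as SFS
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 40))       # placeholder high-dimensional features
y = rng.integers(0, 6, size=300)     # placeholder activity labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

svm = SVC(kernel="rbf", C=1.0, gamma="scale")
sffs = SFS(svm, k_features=10, forward=True, floating=True,
           scoring="accuracy", cv=3, n_jobs=-1).fit(X_tr, y_tr)

# Train the final SVM on the selected feature subset only.
X_tr_sel, X_te_sel = sffs.transform(X_tr), sffs.transform(X_te)
svm.fit(X_tr_sel, y_tr)
print("selected features:", sffs.k_feature_idx_)
print("test accuracy:", accuracy_score(y_te, svm.predict(X_te_sel)))
```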


Information ◽  
2020 ◽  
Vol 11 (9) ◽  
pp. 416 ◽  
Author(s):  
Lei Chen ◽  
Shurui Fan ◽  
Vikram Kumar ◽  
Yating Jia

Human activity recognition (HAR) has been increasingly used in medical care, behavior analysis, and the entertainment industry to improve the user experience. Most existing works use fixed models to identify various activities; however, they do not adapt well to the dynamic nature of human activities. We investigated activity recognition with postural transition awareness. The inertial sensor data were processed by filters, and both time-domain and frequency-domain representations of the signals were used to extract the feature set. For the posture classification, three feature selection algorithms were applied to the 585 extracted features to obtain the optimal feature subset. We adopted three classifiers (support vector machine, decision tree, and random forest) for comparative analysis. In our experiments, the support vector machine gave better classification results than the other two methods, achieving up to 98% accuracy in the multi-class classification. Finally, the results were verified by probability estimation.
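The filtering and time/frequency feature extraction step can be sketched as below. The cut-off frequency, sampling rate, window size, and the particular feature list are illustrative assumptions, not the paper's exact pipeline.

```python
# Sketch: low-pass filtering plus time- and frequency-domain window features.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 50.0                                        # assumed sampling rate in Hz
b, a = butter(4, 20.0 / (fs / 2), btype="low")   # 4th-order low-pass at 20 Hz

def window_features(window):
    """Extract features from one filtered 1-D sensor window."""
    filtered = filtfilt(b, a, window)
    spectrum = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fs)
    return np.array([
        filtered.mean(),                    # time domain: mean
        filtered.std(),                     # time domain: standard deviation
        np.abs(np.diff(filtered)).mean(),   # time domain: mean absolute jerk
        spectrum.sum(),                     # frequency domain: spectral energy
        freqs[spectrum.argmax()],           # frequency domain: dominant frequency
    ])

# Example: one 2.56 s window (128 samples) of placeholder accelerometer data.
window = np.random.randn(128)
print(window_features(window))
```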


Symmetry ◽  
2020 ◽  
Vol 12 (9) ◽  
pp. 1570
Author(s):  
Sakorn Mekruksavanich ◽  
Anuchit Jitpattanakul ◽  
Phichai Youplao ◽  
Preecha Yupapin

The creation of the Internet of Things (IoT), along with the latest developments in wearable technology, has provided new opportunities in human activity recognition (HAR). The modern smartwatch offers the potential for sensor data to be relayed to novel IoT platforms, which allow the constant tracking and monitoring of human movement and behavior. Traditional activity recognition techniques have relied on machine learning methods such as artificial neural networks, decision trees, support vector machines, and naive Bayes. Nonetheless, these conventional machine learning techniques depend on heuristically handcrafted feature extraction, in which human domain knowledge is normally limited. This work proposes a hybrid deep learning model, CNN-LSTM, that combines a Convolutional Neural Network (CNN) with Long Short-Term Memory (LSTM) networks for activity recognition. The study makes use of smartwatch-based HAR to categorize hand movements. Using the Wireless Sensor Data Mining (WISDM) public benchmark dataset, the recognition abilities of the deep learning model are assessed. Accuracy, precision, recall, and F-measure are used as evaluation metrics to assess the recognition abilities of the proposed models. The findings indicate that this hybrid deep learning model offers better performance than its rivals, achieving 96.2% accuracy and an F-measure of 96.3%. The results show that the proposed CNN-LSTM can improve the performance of activity recognition.
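A minimal sketch of a CNN-LSTM hybrid over accelerometer windows is shown below, assuming TensorFlow/Keras. The window length, layer widths, class count, and training setup are illustrative assumptions rather than the architecture reported in the paper.

```python
# Sketch: CNN front end for local motion patterns, LSTM back end for temporal order.
import numpy as np
import tensorflow as tf

window, channels, n_classes = 200, 3, 6

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, channels)),
    # Convolutional layers learn local patterns within each window.
    tf.keras.layers.Conv1D(64, kernel_size=5, activation="relu"),
    tf.keras.layers.MaxPooling1D(pool_size=2),
    tf.keras.layers.Conv1D(64, kernel_size=5, activation="relu"),
    tf.keras.layers.MaxPooling1D(pool_size=2),
    # LSTM models the temporal ordering of the learned feature maps.
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Placeholder windows standing in for segmented smartwatch accelerometer data.
X = np.random.randn(256, window, channels).astype("float32")
y = np.random.randint(0, n_classes, size=256)
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```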


2019 ◽  
Vol 10 (2) ◽  
pp. 34-47 ◽  
Author(s):  
Bagavathi Lakshmi ◽  
S.Parthasarathy

Discovering human activities on mobile devices is a challenging task for human action recognition. The ability of a device to recognize its user's activity is important because it enables context-aware applications and behavior. Recently, machine learning algorithms have been increasingly used for human action recognition. During the past few years, principal component analysis and support vector machines have been widely used for robust human activity recognition. However, with global dynamic tendencies and complex tasks involved, such robust human activity recognition (HAR) suffers from errors and complexity. To deal with this problem, a machine learning algorithm is proposed and its application to HAR is explored. In this article, a Max Pool Convolution Neural Network based on Nearest Neighbor (MPCNN-NN) is proposed to perform efficient and effective HAR by exploiting the inherent characteristics of the sensor data. The MPCNN-NN framework for HAR consists of three steps. In the first step, for each activity, the features of interest or foreground frame are detected using Median Background Subtraction. The second step organizes the features (i.e., postures) that represent the strongest generic discriminating characteristics, based on Max Pool. The third and final step is HAR based on the Nearest Neighbor rule, selecting the posture that maximizes the probability. Experiments have been conducted to demonstrate the superiority of the proposed MPCNN-NN framework on a human action dataset, KARD (Kinect Activity Recognition Dataset).
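A very rough sketch of the general "max-pooled CNN features plus nearest-neighbor decision" pattern is given below; it is a generic illustration, not the MPCNN-NN pipeline itself (no median background subtraction is shown), and the sequence shape, joint count, and class count are placeholder assumptions loosely modeled on Kinect-style skeleton data.

```python
# Sketch: CNN with max pooling as feature extractor, 1-NN for the final decision.
import numpy as np
import tensorflow as tf
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

frames, joints, n_classes = 32, 45, 18   # e.g. 15 skeleton joints x 3 coordinates

inputs = tf.keras.layers.Input(shape=(frames, joints))
x = tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu")(inputs)
x = tf.keras.layers.MaxPooling1D(pool_size=2)(x)
x = tf.keras.layers.Conv1D(64, kernel_size=3, activation="relu")(x)
x = tf.keras.layers.GlobalMaxPooling1D()(x)
features = tf.keras.layers.Dense(64, activation="relu")(x)
outputs = tf.keras.layers.Dense(n_classes, activation="softmax")(features)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Placeholder sequences standing in for skeleton data.
X = np.random.randn(300, frames, joints).astype("float32")
y = np.random.randint(0, n_classes, size=300)
model.fit(X[:200], y[:200], epochs=2, verbose=0)

# Use the penultimate layer as the feature space for a 1-NN classifier.
extractor = tf.keras.Model(inputs, features)
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(extractor.predict(X[:200], verbose=0), y[:200])
pred = knn.predict(extractor.predict(X[200:], verbose=0))
print("1-NN accuracy on held-out sequences:", accuracy_score(y[200:], pred))
```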

