Classification of Human Activity Recognition Utilizing Smartphone Data of CNN-LSTM

Author(s):  
Widya Rizka Ulul Fadilah ◽  
Wahyu Andhyka Kusuma ◽  
Agus Eko Minarno ◽  
Yuda Munarko

Human activity recognition has been applied in many areas of daily life by utilizing the gyroscope and accelerometer sensors embedded in smartphones. One benefit of recognizing human activities is that understanding activity patterns can help minimize the likelihood of unexpected incidents. This study classifies human activities with a CNN-LSTM model on the UCI HAR dataset using a divide-and-conquer strategy, and additionally applies hyperparameter tuning to obtain the best accuracy from the chosen parameters and the proposed architecture. With the CNN-LSTM method, the accuracy is 99.35% for dynamic activities, 96.08% for static activities, and 97.62% for the combination of the two models.
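As a rough illustration of the kind of model involved (not the exact architecture or hyperparameters reported in the study), a CNN-LSTM for UCI HAR-style inertial windows might look like the sketch below, with one specialist model per activity group in the divide-and-conquer spirit; the layer sizes and the TensorFlow/Keras framework are assumptions.

```python
# Minimal sketch of a CNN-LSTM classifier for UCI HAR-style windows
# (128 timesteps x 9 inertial channels). Layer sizes are illustrative
# assumptions, not the architecture reported in the paper.
from tensorflow.keras import layers, models

def build_cnn_lstm(n_timesteps=128, n_channels=9, n_classes=3):
    model = models.Sequential([
        layers.Input(shape=(n_timesteps, n_channels)),
        layers.Conv1D(64, kernel_size=3, activation="relu"),
        layers.Conv1D(64, kernel_size=3, activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.LSTM(100),
        layers.Dropout(0.5),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Divide-and-conquer idea: train one model per activity group
# (dynamic = walking variants, static = sitting/standing/lying),
# then route each sample to the matching specialist model.
dynamic_model = build_cnn_lstm(n_classes=3)  # walking, upstairs, downstairs
static_model = build_cnn_lstm(n_classes=3)   # sitting, standing, lying
```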

Author(s):  
Pranjal Kumar

Human Activity Recognition (HAR) has become a vibrant research field over the last decade, largely because of the spread of electronic devices such as mobile phones, smartphones, and video cameras in our daily lives. In addition, progress in deep learning and other algorithms has enabled researchers to apply HAR in many fields, including sports, health, and well-being. HAR is, for example, one of the most promising resources for helping older people maintain their cognitive and physical function through day-to-day activities. This study focuses on the key role machine learning plays in the development of HAR applications. While numerous HAR surveys and review articles have been published, they typically do not address the overall HAR problem and instead concentrate on specific HAR topics, so a detailed review covering the major HAR topics remains essential. This study analyses the most recent work on HAR, provides a classification of HAR methodologies, and discusses the advantages and disadvantages of each group of methods. Finally, it addresses open problems in HAR and offers recommendations for future research.


2018 ◽  
Vol 7 (3.8) ◽  
pp. 63
Author(s):  
Nilam Dhatrak ◽  
Anil Kumar Dudyala

Advances in technology have greatly improved individual health monitoring. To monitor an elderly person or a person with a disability, modern wearable and smartphone devices are now available that come with a rich collection of built-in sensors suitable for Human Activity Recognition (HAR). These devices generate large amounts of data with many features. When such data are used for classification directly, the classifier may be overtrained or produce a high error rate. Hence, in this paper, we propose two hybrid frameworks that yield an optimal number of features which can be used with different classifiers to recognize human activity accurately. Our experiments show that SVM classified the human activities most accurately.
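As a loose illustration of the general pipeline described above (reducing a large HAR feature set before classification, then using an SVM), the following scikit-learn sketch shows one plausible setup; the specific selector, the value of k, and the SVM parameters are assumptions, not the paper's hybrid frameworks.

```python
# Minimal sketch: select a subset of a large HAR feature set, then
# classify with an SVM. Selector and parameter values are illustrative
# assumptions, not the paper's exact frameworks.
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=100)),  # keep 100 of the raw features
    ("svm", SVC(kernel="rbf", C=10)),
])

# X: (n_windows, n_features) engineered sensor features, y: activity labels
# scores = cross_val_score(pipeline, X, y, cv=5)
```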


2018 ◽  
Author(s):  
Kenan Li ◽  
Rima Habre ◽  
Huiyu Deng ◽  
Robert Urman ◽  
John Morrison ◽  
...  

BACKGROUND: Time-resolved quantification of physical activity can contribute to both personalized medicine and epidemiological research studies, for example, managing and identifying triggers of asthma exacerbations. A growing number of reportedly accurate machine learning algorithms for human activity recognition (HAR) have been developed using data from wearable devices (eg, smartwatch and smartphone). However, many HAR algorithms depend on fixed-size sampling windows that may poorly adapt to real-world conditions in which activity bouts are of unequal duration. A small sliding window can produce noisy predictions under stable conditions, whereas a large sliding window may miss brief bursts of intense activity.

OBJECTIVE: We aimed to create an HAR framework adapted to variable duration activity bouts by (1) detecting the change points of activity bouts in a multivariate time series and (2) predicting activity for each homogeneous window defined by these change points.

METHODS: We applied standard fixed-width sliding windows (4-6 different sizes) or greedy Gaussian segmentation (GGS) to identify break points in filtered triaxial accelerometer and gyroscope data. After standard feature engineering, we applied an Xgboost model to predict physical activity within each window and then converted windowed predictions to instantaneous predictions to facilitate comparison across segmentation methods. We applied these methods in 2 datasets: the human activity recognition using smartphones (HARuS) dataset where a total of 30 adults performed activities of approximately equal duration (approximately 20 seconds each) while wearing a waist-worn smartphone, and the Biomedical REAl-Time Health Evaluation for Pediatric Asthma (BREATHE) dataset where a total of 14 children performed 6 activities for approximately 10 min each while wearing a smartwatch. To mimic a real-world scenario, we generated artificial unequal activity bout durations in the BREATHE data by randomly subdividing each activity bout into 10 segments and randomly concatenating the 60 activity bouts. Each dataset was divided into ~90% training and ~10% holdout testing.

RESULTS: In the HARuS data, GGS produced the least noisy predictions of 6 physical activities and had the second highest accuracy rate of 91.06% (the highest accuracy rate was 91.79% for the sliding window of size 0.8 second). In the BREATHE data, GGS again produced the least noisy predictions and had the highest accuracy rate of 79.4% of predictions for 6 physical activities.

CONCLUSIONS: In a scenario with variable duration activity bouts, GGS multivariate segmentation produced smart-sized windows with more stable predictions and a higher accuracy rate than traditional fixed-size sliding window approaches. Overall, accuracy was good in both datasets but, as expected, it was slightly lower in the more real-world study using wrist-worn smartwatches in children (BREATHE) than in the more tightly controlled study using waist-worn smartphones in adults (HARuS). We implemented GGS in an offline setting, but it could be adapted for real-time prediction with streaming data.
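A minimal sketch of the windowed-prediction step is shown below, assuming change points have already been produced (by GGS or fixed-width slicing); the summary features and XGBoost settings are illustrative assumptions rather than the authors' feature engineering.

```python
# Minimal sketch: summarize each window defined by change points with
# simple statistics, classify the window with XGBoost, then broadcast
# the window label back to every sample for instantaneous predictions.
import numpy as np
from xgboost import XGBClassifier

def window_features(signal, breakpoints):
    """signal: (n_samples, n_channels); breakpoints: sorted indices incl. 0 and n_samples."""
    feats = []
    for start, end in zip(breakpoints[:-1], breakpoints[1:]):
        seg = signal[start:end]
        feats.append(np.concatenate([seg.mean(axis=0), seg.std(axis=0)]))
    return np.vstack(feats)

def instantaneous_labels(window_labels, breakpoints, n_samples):
    """Expand one label per window back to one label per sample."""
    out = np.empty(n_samples, dtype=window_labels.dtype)
    for label, start, end in zip(window_labels, breakpoints[:-1], breakpoints[1:]):
        out[start:end] = label
    return out

# model = XGBClassifier(n_estimators=200, max_depth=6)
# model.fit(window_features(train_signal, train_breaks), train_window_labels)
```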


2021 ◽  
Vol 11 (9) ◽  
pp. 4153
Author(s):  
Jisu Kim ◽  
Deokwoo Lee

While human activity recognition and pose estimation are closely related, the two problems are usually treated as separate tasks. In this work, two-dimensional and three-dimensional pose estimates are obtained for human activity recognition in a video sequence, and the final activity is determined by combining them with an activity-recognition algorithm that uses visual attention. The two problems can be solved efficiently with a single architecture, and end-to-end optimization is shown to lead to much higher accuracy than separate learning. The proposed architecture can be trained seamlessly with different categories of data. For visual attention, soft visual attention is used together with a multilayer recurrent neural network based on long short-term memory that operates both temporally and spatially. The image, the estimated pose skeleton, and RGB-based activity recognition data are fused to determine the final activity and increase reliability. The visual attention model is evaluated on the UCF-11 (YouTube Action), HMDB-51, and Hollywood2 datasets, with an analysis of how the model focuses depending on the scene and the task it is performing. Pose estimation and activity recognition are tested and analyzed on the MPII, Human3.6M, Penn Action, and NTU datasets. The results are 98.9% on Penn Action, 87.9% on NTU, and 88.6% on NW-UCLA.
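To make the soft-attention component more concrete, the following PyTorch sketch shows one plausible way to apply soft spatial attention over per-frame CNN feature maps before an LSTM; the dimensions and the exact attention formulation are assumptions, not the authors' implementation.

```python
# Minimal sketch of soft spatial attention over per-frame feature maps
# feeding an LSTM for activity classification. All sizes are assumptions.
import torch
import torch.nn as nn

class SoftAttentionLSTM(nn.Module):
    def __init__(self, feat_dim=512, hidden_dim=256, n_classes=11):
        super().__init__()
        self.attn = nn.Linear(feat_dim + hidden_dim, 1)  # score each spatial location
        self.lstm_cell = nn.LSTMCell(feat_dim, hidden_dim)
        self.classifier = nn.Linear(hidden_dim, n_classes)

    def forward(self, feats):
        # feats: (batch, time, locations, feat_dim), e.g. 7x7 = 49 locations
        b, t, l, d = feats.shape
        h = feats.new_zeros(b, self.lstm_cell.hidden_size)
        c = feats.new_zeros(b, self.lstm_cell.hidden_size)
        for step in range(t):
            frame = feats[:, step]                                   # (b, l, d)
            hidden = h.unsqueeze(1).expand(b, l, -1)                 # (b, l, hidden)
            scores = self.attn(torch.cat([frame, hidden], dim=-1))   # (b, l, 1)
            weights = torch.softmax(scores, dim=1)                   # soft attention
            context = (weights * frame).sum(dim=1)                   # (b, d)
            h, c = self.lstm_cell(context, (h, c))
        return self.classifier(h)
```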


Author(s):  
Jay Prakash Gupta ◽  
Nishant Singh ◽  
Pushkar Dixit ◽  
Vijay Bhaskar Semwal ◽  
Shiv Ram Dubey

Vision-based human activity recognition is the process of labelling image sequences with action labels. Accurate systems for this problem are applied in areas such as visual surveillance, human-computer interaction, and video retrieval. The challenges arise from variations in motion, recording settings, and gait differences. Here the authors propose an approach to recognize human activities through gait, that is, identifying an activity by the manner in which a person walks. Identifying activities in a video, such as whether a person is walking, running, jumping, or jogging, is important for video surveillance. The authors contribute a model-based approach to activity recognition that relies on the movement of the legs only. Experimental results suggest that their method is able to recognize human activities with a good accuracy rate and is robust to shadows present in the videos.
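As a hedged illustration of a leg-only, model-based feature, the sketch below computes knee angles from tracked hip/knee/ankle points over a sequence and summarizes them for a standard classifier; the keypoint layout and classifier choice are assumptions, not the authors' method.

```python
# Minimal sketch: knee angles from tracked leg keypoints, summarized
# into a small feature vector for a conventional classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def knee_angle(hip, knee, ankle):
    """Angle at the knee (radians) from 2D points, computed per frame."""
    thigh = hip - knee
    shank = ankle - knee
    cos = (thigh * shank).sum(-1) / (
        np.linalg.norm(thigh, axis=-1) * np.linalg.norm(shank, axis=-1) + 1e-8)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def gait_features(track):
    """track: (n_frames, 6, 2) = [l_hip, l_knee, l_ankle, r_hip, r_knee, r_ankle]."""
    left = knee_angle(track[:, 0], track[:, 1], track[:, 2])
    right = knee_angle(track[:, 3], track[:, 4], track[:, 5])
    return np.array([left.mean(), left.std(), right.mean(), right.std(),
                     np.abs(np.diff(left)).mean(), np.abs(np.diff(right)).mean()])

# clf = RandomForestClassifier().fit(np.vstack([gait_features(t) for t in tracks]), labels)
```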

