A study on smartphone sensor-based Human Activity Recognition using deep learning approaches

Author(s):  
Riktim Mondal ◽  
Dibyendu Mukhopadhyay ◽  
Sayanwita Barua ◽  
Pawan Kumar Singh ◽  
Ram Sarkar ◽  
...  

The rise in life expectancy and the dwindling birth rate in modern society have led to population ageing, a phenomenon witnessed across the world over the past few decades. India is also part of this demographic transition, which will have a direct impact on the societal and economic conditions of the country. To deal effectively with this phenomenon, stakeholders are building Information and Communication Technology (ICT) based ecosystems to address the needs of elderly people, such as independent living, activity recognition, vital health sign monitoring, and prevention of social isolation. Ambient Assisted Living (AAL) is one such ecosystem, capable of providing a safe and secure living environment for elderly and disabled people. In this paper we focus on reviewing sensor-based Human Activity Recognition (HAR) and Vital Health Sign Monitoring (VHSM) applicable to AAL environments. We first describe the AAL environment in general, then present brief insights into sensor modalities and different deep learning architectures. Finally, we survey the existing literature on HAR and VHSM according to the sensor modality and deep learning approach used.


2019 ◽  
Vol 11 (9) ◽  
pp. 1068 ◽  
Author(s):  
Xinyu Li ◽  
Yuan He ◽  
Xiaojun Jing

Radar, as one of the sensors for human activity recognition (HAR), has unique characteristics such as privacy protection and contactless sensing. Radar-based HAR has been applied in many fields such as human–computer interaction, smart surveillance and health assessment. Conventional machine learning approaches rely on heuristic hand-crafted feature extraction, and their generalization capability is limited. Additionally, extracting features manually is time-consuming and inefficient. Deep learning acts as a hierarchical approach to learn high-level features automatically and has achieved superior performance for HAR. This paper surveys deep learning based HAR in radar from three aspects: deep learning techniques, radar systems, and deep learning for radar-based HAR. In particular, we elaborate on the deep learning approaches designed for activity recognition in radar according to the dimension of the radar returns (i.e., 1D, 2D and 3D echoes). Because the echo forms differ, the corresponding deep learning approaches differ in how they fully exploit the motion information. Experimental results have demonstrated the feasibility of applying deep learning for radar-based HAR in 1D, 2D and 3D echoes. Finally, we address some current research considerations and future opportunities.
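The abstract above contrasts hand-crafted features with the hierarchical feature learning that deep models perform automatically. As a minimal sketch of the kind of operation applied to a 1D radar echo, the snippet below runs one convolution-plus-ReLU layer in plain NumPy; the synthetic echo and the hand-initialised kernels are assumptions for illustration, standing in for real radar returns and learned filters.

```python
import numpy as np

def conv1d_relu(x, kernels):
    """Valid-mode 1-D convolution of a single echo with several kernels,
    followed by ReLU: one layer of hierarchical feature extraction."""
    n, k = len(x), kernels.shape[1]
    out = np.empty((kernels.shape[0], n - k + 1))
    for i in range(n - k + 1):
        out[:, i] = kernels @ x[i:i + k]
    return np.maximum(out, 0.0)  # ReLU non-linearity

# Synthetic 1-D radar return: slow clutter plus a faster micro-motion component.
t = np.linspace(0, 1, 256)
echo = np.sin(2 * np.pi * 2 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)

# Two hand-initialised kernels standing in for learned filters.
kernels = np.array([[1.0, -1.0, 1.0, -1.0],     # responds to fast oscillation
                    [0.25, 0.25, 0.25, 0.25]])  # responds to the slow trend
features = conv1d_relu(echo, kernels)  # shape: (2 filters, 253 positions)
```

In a trained network the kernel weights are learned from data and several such layers are stacked, which is what removes the need for manual feature engineering.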


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1669
Author(s):  
Philip Boyer ◽  
David Burns ◽  
Cari Whyne

Out-of-distribution (OOD) data, in the context of Human Activity Recognition (HAR), are data from activity classes that are not represented in the training data of a Machine Learning (ML) algorithm. OOD data are a challenge to classify accurately for most ML algorithms, especially deep learning models, which are prone to overconfident predictions based on in-distribution (IND) classes. To simulate the OOD problem in physiotherapy, our team collected a new dataset (SPARS9x) consisting of inertial data captured by smartwatches worn by 20 healthy subjects as they performed supervised physiotherapy exercises (IND), followed by a minimum of 3 h of data captured for each subject as they engaged in unrelated and unstructured activities (OOD). In this paper, we experiment with three traditional algorithms for OOD detection using engineered statistical features, deep learning-generated features, and several popular deep learning approaches on SPARS9x and two other publicly available human activity datasets (MHEALTH and SPARS). We demonstrate that, while deep learning algorithms perform better than simple traditional algorithms such as KNN with engineered features for in-distribution classification, traditional algorithms outperform deep learning approaches for OOD detection on these HAR time series datasets.
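One simple traditional approach of the kind the abstract mentions is nearest-neighbour distance scoring: a test sample far from every training sample is flagged as OOD. The sketch below is an assumed illustration (not the paper's exact pipeline) using synthetic feature vectors in place of engineered statistical features from smartwatch inertial data.

```python
import numpy as np

def knn_ood_scores(train_feats, test_feats, k=3):
    """Score each test sample by the distance to its k-th nearest
    training sample; larger scores suggest out-of-distribution data."""
    # Pairwise Euclidean distances, shape (n_test, n_train).
    d = np.linalg.norm(test_feats[:, None, :] - train_feats[None, :, :], axis=-1)
    # Distance to the k-th nearest in-distribution neighbour.
    return np.sort(d, axis=1)[:, k - 1]

# Tiny synthetic example: in-distribution features cluster near the origin.
rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(200, 4))
ind_test = rng.normal(0.0, 1.0, size=(5, 4))   # resembles training activities
ood_test = rng.normal(8.0, 1.0, size=(5, 4))   # far-away "unstructured" activity

scores_ind = knn_ood_scores(train, ind_test)   # small distances
scores_ood = knn_ood_scores(train, ood_test)   # much larger distances
```

A threshold on the score (chosen on held-out data) then separates IND from OOD samples; no retraining of the classifier is required.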


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4787 ◽  
Author(s):  
Nati Daniel ◽  
Itzik Klein

Human activity recognition aims to classify user activity in various applications such as healthcare, gesture recognition and indoor navigation. In the latter, smartphone location recognition is gaining attention, as it enhances indoor positioning accuracy. Commonly, the smartphone's inertial sensor readings are used as input to a machine learning algorithm that performs the classification. There are several approaches to such a task: feature-based approaches, one-dimensional deep learning algorithms, and two-dimensional deep learning architectures. When deep learning approaches are used, feature engineering is unnecessary; in addition, two-dimensional deep learning approaches make it possible to apply methods from the well-established computer vision domain. In this paper, a framework for smartphone location and human activity recognition, based on the smartphone's inertial sensors, is proposed. The contributions of this work are a novel time series encoding approach, from inertial signals to inertial images, and transfer learning from the computer vision domain to the inertial sensor classification problem. Four different datasets are employed to show the benefits of the proposed approach. In addition, as the proposed framework performs classification on inertial sensor readings, it can be applied to other classification tasks using inertial data, and it can also be adapted to handle other types of sensory data collected for a classification task.
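The abstract's key idea is encoding 1-D inertial time series as 2-D images so that computer vision models can be reused. The paper's own encoding is not reproduced here; as an assumed stand-in, the sketch below uses the well-known Gramian Angular Summation Field (GASF), which turns one window of a signal into a square image.

```python
import numpy as np

def gramian_angular_field(signal):
    """Encode a 1-D signal window as a 2-D Gramian Angular Summation Field."""
    x = np.asarray(signal, dtype=float)
    # Rescale to [-1, 1] so arccos is well defined.
    x = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0
    phi = np.arccos(np.clip(x, -1.0, 1.0))  # polar-angle representation
    # GASF[i, j] = cos(phi_i + phi_j): pairwise temporal correlations.
    return np.cos(phi[:, None] + phi[None, :])

# Stand-in for one 64-sample accelerometer window.
window = np.sin(np.linspace(0, 2 * np.pi, 64))
image = gramian_angular_field(window)  # 64 x 64 "inertial image"
```

Once each sensor window is an image, a convolutional network pretrained on natural images can be fine-tuned on these encodings, which is the transfer-learning step the abstract describes.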


2018 ◽  
Vol 69 (3) ◽  
pp. 14-24 ◽  
Author(s):  
Milica Janković ◽  
Andrej Savić ◽  
Marija Novičić ◽  
Mirjana Popović
