Unsupervised Human Activity Representation Learning with Multi-task Deep Clustering

Author(s):  
Haojie Ma ◽  
Zhijie Zhang ◽  
Wenzhong Li ◽  
Sanglu Lu

Human activity recognition (HAR) based on sensing data from wearable and mobile devices has become an active research area in ubiquitous computing, with a wide range of application scenarios in mobile social networking, environmental context sensing, health and well-being monitoring, etc. However, activity recognition based on manually annotated sensing data is labor-intensive, time-consuming, and privacy-sensitive, which prevents HAR systems from being deployed at scale. In this paper, we address the problem of unsupervised human activity recognition, which infers activities from unlabeled datasets without the need for domain knowledge. We propose an end-to-end multi-task deep clustering framework to solve the problem. Taking the unlabeled multi-dimensional sensing signals as input, we first apply a CNN-BiLSTM autoencoder to form a compressed latent feature representation. We then apply K-means clustering to the extracted features to partition the dataset into groups, which produces pseudo labels for the instances. We further train a deep neural network (DNN) with the latent features and pseudo labels for activity recognition. The tasks of feature representation, clustering, and classification are integrated into a unified multi-task learning framework and optimized jointly to achieve unsupervised activity classification. We conduct extensive experiments on three public datasets. The results show that the proposed approach outperforms shallow unsupervised learning approaches, and that it performs close to state-of-the-art supervised approaches after fine-tuning with a small amount of labeled data. The proposed approach significantly reduces the cost of human data annotation and narrows the gap between unsupervised and supervised human activity recognition.
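The pseudo-labeling step at the core of this pipeline can be sketched in a few lines. This is a minimal, illustrative stand-in, not the authors' code: the toy `latent` array simulates the bottleneck features that the CNN-BiLSTM autoencoder would produce, a plain NumPy K-means assigns pseudo labels, and nearest-centroid assignment stands in for the DNN classifier head that those pseudo labels would supervise.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain K-means: returns final centroids and per-instance pseudo labels."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]  # copy via fancy indexing
    for _ in range(iters):
        # Assign each point to its nearest centroid (squared Euclidean distance).
        labels = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        # Move each centroid to the mean of its assigned points.
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(0)
    return centroids, labels

# Toy "latent features": in the paper these come from the autoencoder's
# bottleneck; here two well-separated Gaussian blobs simulate two activities.
rng = np.random.default_rng(1)
latent = np.vstack([rng.normal(0.0, 0.3, (50, 8)),
                    rng.normal(3.0, 0.3, (50, 8))])

centroids, pseudo_labels = kmeans(latent, k=2)

# The pseudo labels would supervise the DNN classifier; as a stand-in,
# nearest-centroid assignment plays that role here.
pred = np.argmin(((latent[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
agreement = (pred == pseudo_labels).mean()
```

In the full framework, the reconstruction loss of the autoencoder, the clustering objective, and the classification loss on the pseudo labels are optimized jointly rather than in this one-shot sequence.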

Author(s):  
Pranjal Kumar

Human Activity Recognition (HAR) has become a vibrant research field over the last decade, especially because of the spread of electronic devices such as smartphones and video cameras in our daily lives. In addition, progress in deep learning and other algorithms has made it possible for researchers to apply HAR in many fields, including sports, health, and well-being. For example, HAR is one of the most promising technologies for supporting the cognitive and physical function of older people in their day-to-day activities. This study focuses on the key role machine learning plays in the development of HAR applications. While numerous HAR surveys and review articles have been published, they concentrate on specific HAR topics rather than the field as a whole, so a detailed review covering the major HAR topics is essential. This study analyses the most recent research on HAR, provides a classification of HAR methodologies, and discusses the advantages and disadvantages of each group of methods. Finally, it addresses open problems in HAR and provides recommendations for future research.


The rise in life expectancy and the decline in birth rate in modern society have led to population ageing, a phenomenon witnessed across the world over the past few decades. India is also part of this demographic transition, which will directly impact the societal and economic conditions of the country. To deal effectively with this phenomenon, stakeholders are developing Information and Communication Technology (ICT) based ecosystems to address the needs of elderly people, such as independent living, activity recognition, vital health sign monitoring, and prevention of social isolation. Ambient Assisted Living (AAL) is one such ecosystem, capable of providing a safe and secure living environment for elderly and disabled people. In this paper we review sensor-based Human Activity Recognition (HAR) and Vital Health Sign Monitoring (VHSM) as applicable to AAL environments. We first describe the AAL environment in general, then present brief insights into sensor modalities and different deep learning architectures, and finally survey the existing literature on HAR and VHSM according to the sensor modality and deep learning approach used.


2021 ◽  
Author(s):  
Jiacheng Mai ◽  
Zhiyuan Chen ◽
Chunzhi Yi ◽  
Zhen Ding

Lower limb exoskeleton robots improve human motor ability and can facilitate superior rehabilitative training. By training on large datasets, currently available body-worn mobile and sensing devices can employ machine learning approaches to predict and classify people's movement characteristics, which could help exoskeleton robots better anticipate human activities. Two popular datasets are PAMAP2, which captured people's movement through body-worn inertial sensors, and WISDM, which collected people's activity information through mobile phones. Focusing on human activity recognition, this paper applied traditional machine learning and deep learning methods to train and test on these datasets, and found that a decision tree model achieved the highest prediction performance on the two datasets, 99% and 72% respectively, while also requiring the least computation time. In addition, a comparison of signals collected from different parts of the human body showed that signals from the hands performed best for recognizing human movement types.
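The decision-tree baseline described above follows a standard HAR recipe: segment the raw inertial stream into fixed-length windows, compute simple statistics per window, and fit a shallow tree. The sketch below illustrates that recipe only; the simulated signals and feature choices are placeholders, not the PAMAP2 or WISDM preprocessing used in the paper.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Simulated accelerometer windows for two activities (stand-ins for
# real PAMAP2/WISDM recordings): a low-variance "sitting" signal and a
# high-variance "walking" signal, 100 windows of 128 samples each.
rng = np.random.default_rng(0)
still = rng.normal(0.0, 0.1, (100, 128))
walk = rng.normal(0.0, 1.0, (100, 128))

def window_features(windows):
    """Per-window mean/std/min/max — typical shallow HAR features."""
    return np.column_stack([windows.mean(1), windows.std(1),
                            windows.min(1), windows.max(1)])

X = np.vstack([window_features(still), window_features(walk)])
y = np.array([0] * 100 + [1] * 100)

# Train on even-indexed windows, evaluate on the held-out odd-indexed ones.
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X[::2], y[::2])
acc = clf.score(X[1::2], y[1::2])
```

On real multi-activity data the features would be richer (frequency-domain statistics, per-axis correlations) and the split would be done per subject rather than by interleaving windows.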


IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 99152-99160 ◽  
Author(s):  
Abdu Gumaei ◽  
Mohammad Mehedi Hassan ◽  
Abdulhameed Alelaiwi ◽  
Hussain Alsalman
