Trends in human activity recognition using smartphones

Author(s):  
Anna Ferrari ◽  
Daniela Micucci ◽  
Marco Mobilio ◽  
Paolo Napoletano

Recognizing human activities and monitoring population behavior are fundamental needs of our society. Population security, crowd surveillance, healthcare support and living assistance, and lifestyle and behavior tracking are some of the main applications that require the recognition of human activities. Over the past few decades, researchers have investigated techniques that can automatically recognize human activities. This line of research is commonly known as Human Activity Recognition (HAR). HAR involves many tasks, from signal acquisition to activity classification. These tasks are not simple and often require dedicated hardware, sophisticated engineering, and computational and statistical techniques for data preprocessing and analysis. Over the years, different techniques have been tested and different solutions have been proposed to achieve a classification process that provides reliable results. This survey presents the most recent solutions proposed for each task in the human activity classification process, that is, acquisition, preprocessing, data segmentation, feature extraction, and classification. Solutions are analyzed by emphasizing their strengths and weaknesses. For completeness, the survey also presents the metrics commonly used to evaluate the performance of a classifier and the datasets of inertial signals from smartphones that are most commonly used in the evaluation phase.
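
To make the pipeline the survey describes concrete, the following is a minimal sketch of the classic chain of segmentation, feature extraction, and classification for smartphone inertial data. The window length, overlap, feature set, and random-forest classifier are illustrative assumptions, not recommendations drawn from the survey.

```python
# Minimal sketch of a smartphone-HAR pipeline: sliding-window segmentation,
# simple time-domain features, and a conventional classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def sliding_windows(signal, length=128, step=64):
    """signal: (samples, channels) inertial stream -> list of fixed-size windows."""
    return [signal[i:i + length]
            for i in range(0, len(signal) - length + 1, step)]

def extract_features(window):
    """Per-channel mean, standard deviation, min, and max of one window."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0),
                           window.min(axis=0), window.max(axis=0)])

def train_classifier(signal, window_labels):
    """window_labels: one activity label per window produced by sliding_windows."""
    X = np.vstack([extract_features(w) for w in sliding_windows(signal)])
    return RandomForestClassifier(n_estimators=100).fit(X, window_labels)
```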

2019 ◽  
Vol 5 (1) ◽  
pp. 1-9
Author(s):  
Mohammad Iqbal ◽  
Chandrawati Putri Wulandari ◽  
Wawan Yunanto ◽  
Ghaluh Indah Permata Sari

Discovering rare human activity patterns from triggered motion sensors delivers valuable information for notifying people about hazardous situations. This study aims to recognize rare human activities by mining non-zero-rare sequential patterns. In particular, this study mines the triggered motion sensor sequences to obtain non-zero-rare human activity patterns, i.e., patterns that do occur in the motion sensor sequences but whose occurrence counts are below a pre-defined threshold. This study proposes an algorithm for mining non-zero-rare patterns in human activity recognition, called Mining Multi-class Non-Zero-Rare Sequential Patterns (MMRSP). The experimental results showed that non-zero-rare human activity patterns succeed in capturing unusual activities. Furthermore, MMRSP performed well according to the precision of rare activities.
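
As an illustration of the non-zero-rare idea only (the abstract does not specify MMRSP itself, so this is not the authors' algorithm), the toy sketch below counts contiguous sub-sequences of sensor events and keeps those that occur at least once but no more often than a rarity threshold. The sensor IDs, sub-sequence length, and threshold are made-up examples.

```python
# Toy non-zero-rare pattern extraction over motion-sensor event streams.
from collections import Counter

def count_subsequences(sequences, length=2):
    """Count every contiguous sub-sequence of the given length."""
    counts = Counter()
    for seq in sequences:
        for i in range(len(seq) - length + 1):
            counts[tuple(seq[i:i + length])] += 1
    return counts

def non_zero_rare_patterns(sequences, max_count=3, length=2):
    """Keep patterns that occur at least once but no more than max_count times."""
    counts = count_subsequences(sequences, length)
    return {p: c for p, c in counts.items() if 0 < c <= max_count}

# Example: two triggered-sensor streams; rare sensor pairs are flagged.
streams = [["M1", "M2", "M3", "M2", "M3"], ["M1", "M2", "M3", "M5", "M4"]]
print(non_zero_rare_patterns(streams, max_count=1))
```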


Sensors ◽  
2021 ◽  
Vol 21 (18) ◽  
pp. 6316
Author(s):  
Dinis Moreira ◽  
Marília Barandas ◽  
Tiago Rocha ◽  
Pedro Alves ◽  
Ricardo Santos ◽  
...  

With the fast increase in the demand for location-based services and the proliferation of smartphones, the topic of indoor localization is attracting great interest. In indoor environments, the activities users perform carry useful semantic information. These activities can then be used by indoor localization systems to confirm users’ current relative locations in a building. In this paper, we propose a deep-learning model based on a Convolutional Long Short-Term Memory (ConvLSTM) network to classify human activities within the indoor localization scenario using smartphone inertial sensor data. Results show that the proposed human activity recognition (HAR) model accurately identifies nine types of activities: not moving, walking, running, going up in an elevator, going down in an elevator, walking upstairs, walking downstairs, going up a ramp, and going down a ramp. Moreover, predicted human activities were integrated within an existing indoor positioning system and evaluated in a multi-story building across several testing routes, with an average positioning error of 2.4 m. The results show that the inclusion of human activity information can reduce the overall localization error of the system and actively contribute to the better identification of floor transitions within a building. The conducted experiments demonstrated promising results and verified the effectiveness of using human activity-related information for indoor localization.
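
For orientation, here is a minimal sketch of a convolutional-recurrent classifier of the kind the abstract describes, written in Keras. It is not the authors' exact ConvLSTM architecture: the window length, channel count, layer sizes, and training settings are all assumptions.

```python
# Sketch of a convolutional + LSTM classifier for inertial windows:
# 1D convolutions extract local motion features, an LSTM models their
# temporal order, and a softmax head predicts one of nine activities.
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW = 128       # samples per window (assumed)
CHANNELS = 6       # e.g., 3-axis accelerometer + 3-axis gyroscope (assumed)
NUM_CLASSES = 9    # number of activity classes reported in the abstract

model = models.Sequential([
    layers.Input(shape=(WINDOW, CHANNELS)),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.LSTM(64),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```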


IoT ◽  
2020 ◽  
Vol 1 (2) ◽  
pp. 451-473
Author(s):  
Liliana I. Carvalho ◽  
Rute C. Sofia

Mobile sensing has been gaining ground due to the increasing capabilities of mobile and personal devices that are carried around by citizens, giving access to a large variety of data and services based on the way humans interact. Mobile sensing brings several advantages in terms of the richness of available data, particularly for human activity recognition. Nevertheless, the infrastructure required to support large-scale mobile sensing requires an interoperable design, which is still hard to achieve today. This review paper contributes to raising awareness of challenges faced today by mobile sensing platforms that perform learning and behavior inference with respect to human routines: how current solutions perform activity recognition, which classification models they consider, and which types of behavior inferences can be seamlessly provided. The paper provides a set of guidelines that contribute to a better functional design of mobile sensing infrastructures, keeping scalability as well as interoperability in mind.


Sensors ◽  
2020 ◽  
Vol 20 (7) ◽  
pp. 1871
Author(s):  
Tianqi Lv ◽  
Xiaojuan Wang ◽  
Lei Jin ◽  
Yabo Xiao ◽  
Mei Song

Human activity recognition (HAR) is a popular and challenging research topic, driven by a variety of applications. More recently, with significant progress in the development of deep learning networks for classification tasks, many researchers have made use of such models to recognise human activities in a sensor-based manner, achieving good performance. However, sensor-based HAR still faces challenges, in particular in recognising similar activities that differ only in their sequential order and in classifying activities with large inter-personal variability. This means that some human activities have large intra-class scatter and small inter-class separation. To deal with this problem, we introduce a margin mechanism to enhance the discriminative power of deep learning networks. We modified four kinds of common neural networks with our margin mechanism to test the effectiveness of our proposed method. The experimental results demonstrate that the margin-based models outperform the unmodified models on the OPPORTUNITY, UniMiB-SHAR, and PAMAP2 datasets. We also extend our research to the problem of open-set human activity recognition and evaluate the proposed method’s performance in recognising new human activities.
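
To illustrate what a margin mechanism does in general (the paper's exact formulation is not given in the abstract, so this is not the authors' loss), the sketch below shows an additive-margin softmax in NumPy: the target-class logit is reduced by a margin before the scaled softmax, which enlarges inter-class separation. The margin and scale values, and the assumption that the logits are similarity scores, are illustrative.

```python
# Additive-margin softmax loss (illustrative): penalize the true class so the
# network must score it higher than the others by at least the margin.
import numpy as np

def margin_softmax_loss(logits, labels, margin=0.35, scale=30.0):
    """logits: (batch, classes) similarity scores; labels: int class indices."""
    z = logits.copy()
    z[np.arange(len(labels)), labels] -= margin   # subtract margin from true class
    z *= scale                                    # sharpen the distribution
    z -= z.max(axis=1, keepdims=True)             # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()
```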


Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2368
Author(s):  
Fatima Amjad ◽  
Muhammad Hassan Khan ◽  
Muhammad Adeel Nisar ◽  
Muhammad Shahid Farid ◽  
Marcin Grzegorzek

Human activity recognition (HAR) aims to recognize the actions of the human body through a series of observations and environmental conditions. The analysis of human activities has drawn the attention of the research community in the last two decades due to its widespread applications, the diverse nature of activities, and the available recording infrastructure. Lately, one of the most challenging applications in this framework is recognizing human body actions using unobtrusive wearable motion sensors. Since the human activities of daily life (e.g., cooking, eating) comprise several repetitive and circumstantial short sequences of actions (e.g., moving an arm), it is quite difficult to use the sensory data directly for recognition, because multiple sequences of the same activity may differ considerably. However, a similarity can be observed in the temporal occurrence of the atomic actions. Therefore, this paper presents a two-level hierarchical method to recognize human activities using a set of wearable sensors. In the first step, the atomic activities are detected from the original sensory data and their recognition scores are obtained. In the second step, the composite activities are recognized using the scores of the atomic actions. We propose two different methods of feature extraction from the atomic scores to recognize the composite activities: handcrafted features and features obtained using a subspace pooling technique. The proposed method is evaluated on the large publicly available CogAge dataset, which contains instances of both atomic and composite activities. The data is recorded using three unobtrusive wearable devices: a smartphone, a smartwatch, and smart glasses. We also evaluated the performance of different classification algorithms in recognizing the composite activities. The proposed method achieved average recognition accuracies of 79% and 62.8% using the handcrafted features and the subspace pooling features, respectively. The recognition results of the proposed technique and their comparison with existing state-of-the-art techniques confirm its effectiveness.
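
The second level of such a hierarchy can be sketched as follows: the per-window atomic-action scores produced by the first level are pooled into a fixed-length vector and fed to a composite-activity classifier. The simple statistics used here merely stand in for the paper's handcrafted and subspace-pooling features, and the logistic-regression classifier is an assumption.

```python
# Sketch of the composite (second) level: pool atomic-score sequences into a
# fixed-length feature vector, then classify the composite activity.
import numpy as np
from sklearn.linear_model import LogisticRegression

def composite_features(atomic_scores):
    """atomic_scores: (windows, n_atomic) score matrix for one recording."""
    return np.concatenate([atomic_scores.mean(axis=0),
                           atomic_scores.std(axis=0),
                           atomic_scores.max(axis=0)])

def train_composite_classifier(score_sequences, composite_labels):
    """score_sequences: list of (windows, n_atomic) matrices, one per recording."""
    X = np.vstack([composite_features(s) for s in score_sequences])
    return LogisticRegression(max_iter=1000).fit(X, composite_labels)
```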


Author(s):  
Pranjal Kumar

Human Activity Recognition (HAR) is the process of automatically detecting human activities based on streaming data generated by various sensors, including inertial sensors, physiological sensors, location sensors, cameras, time, and many others. Unsupervised contrastive learning has performed remarkably well, yet its contrastive loss mechanism remains less studied. In this paper, we study how the temperature (τ) affects the loss of the SimCLR model and, ultimately, the full HAR evaluation results. We focus on understanding the implications of the unsupervised contrastive loss in the context of HAR data. In this work, regulation of the temperature (τ) coefficient is also incorporated to improve the quality of HAR features and the overall performance on downstream tasks in a healthcare setting. A performance boost of 1.3% is observed in the experiments.
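
For reference, SimCLR's NT-Xent loss, where the temperature enters, can be sketched in NumPy as below. The pairing convention (rows 2i and 2i+1 are two augmented views of the same window) and the default temperature are assumptions for illustration; the authors' encoder, augmentations, and training loop are not reproduced.

```python
# NT-Xent (normalized temperature-scaled cross entropy) sketch for SimCLR.
import numpy as np

def nt_xent_loss(z, temperature=0.5):
    """z: (2N, d) embeddings; rows 2i and 2i+1 are two views of the same sample.
    Lower temperatures sharpen the similarity distribution."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarities
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)                      # exclude self-similarity
    n = len(z)
    positives = np.arange(n) ^ 1                        # index of the paired view
    log_probs = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(n), positives].mean()
```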


Author(s):  
Pranjal Kumar

Human Activity Recognition (HAR) is the process of automatically detecting human activities based on streaming data generated by various sensors, including inertial sensors, physiological sensors, location sensors, cameras, time, and many others. In this paper, we propose a robust SimCLR model for human activity recognition together with a temperature variance study. In this work, SimCLR, a contrastive learning technique originally developed for visual representations, is optimized by regulating the temperature and incorporated to improve HAR performance in healthcare.


2020 ◽  
Vol 34 (01) ◽  
pp. 1104-1111
Author(s):  
Xiaodong Yang ◽  
Yiqiang Chen ◽  
Hanchao Yu ◽  
Yingwei Zhang ◽  
Wang Lu ◽  
...  

Human Activity Recognition (HAR) is an important application of smart wearable/mobile systems for many human-centric problems such as healthcare. Multi-sensor synchronous measurement has shown better performance for HAR than a single sensor. However, the multi-sensor setting increases the costs of data transmission, computation, and energy. Efficient sensor selection that balances recognition accuracy and sensor cost is therefore a critical challenge. In this paper, we propose an Instance-wise Dynamic Sensor Selection (IDSS) method for HAR. Firstly, we formalize this problem as minimizing both the activity classification loss and the number of sensors by dynamically selecting a sparse subset for each instance. Then, IDSS solves this minimization problem via a Markov Decision Process whose sensor selection policy is learned from the instance-wise states using Imitation Learning. In order to optimize the parameters of the activity classification model and the sensor selection policy, an algorithm named Mutual DAgger is proposed to alternately enhance their learning processes. To evaluate the performance of IDSS, we conduct experiments on three real-world HAR datasets. The experimental results show that IDSS can effectively reduce the overall number of sensors without losing accuracy and outperforms state-of-the-art methods with respect to the combined measure of accuracy and sensor number.
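
The trade-off being optimized can be sketched very simply; the code below is a toy stand-in only and does not implement the paper's MDP formulation, Imitation Learning, or Mutual DAgger. The cost function, threshold, and budget parameters are assumptions.

```python
# Toy illustration of per-instance sensor selection traded off against accuracy.
import numpy as np

def selection_cost(class_loss, sensor_mask, lam=0.1):
    """Per-instance objective: classification loss plus a penalty on the
    number of selected sensors; lam trades accuracy against sensor cost."""
    return class_loss + lam * sensor_mask.sum()

def select_sensors(per_sensor_utility, budget=None, threshold=0.2):
    """Toy stand-in for a learned policy: keep the top-k sensors under a
    budget, or every sensor whose estimated utility passes a threshold."""
    if budget is not None:
        mask = np.zeros_like(per_sensor_utility, dtype=bool)
        mask[np.argsort(per_sensor_utility)[::-1][:budget]] = True
        return mask
    return per_sensor_utility > threshold
```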


2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Mashhour M Bani Amer

Human activity recognition (HAR) systems are developed to allow continual assessment of human behaviors in IoT environments, in areas such as ambient assisted living, sports injury detection, elderly care, rehabilitation, entertainment, and close monitoring. Smartphones are already used for activity recognition. Most research in this field requires the smartphone, together with the machine learning system, to be fixed securely in a specific location on the human body in order to support the classification of raw smartphone sensor data into human activities. Smartwatches overcome this limitation because they are worn in a consistent position that is steady and precisely sensitive to body movements. In this experiment, we evaluate both the accelerometer and the gyroscope on the smartphone and the smartwatch, and determine which sensor combination performs best. Five daily physical human activities are evaluated using five classifiers from WEKA, in addition to Artificial Neural Network (ANN), K-Nearest Neighbor (KNN), and Support Vector Machine (SVM) algorithms built into MATLAB 2018a. We used confusion matrices and random simulation to compare the accuracy and efficiency of these models. The results showed that the accelerometer combination had the highest accuracy among all combinations, achieving an overall accuracy of 97.7% with SVM, which gave the best performance of all classifiers.
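
The general recipe the abstract describes, window-level statistical features followed by an SVM, can be sketched as below. This is not the authors' WEKA/MATLAB pipeline; the window shape, feature set, and SVM hyperparameters are illustrative assumptions.

```python
# Sketch: simple time-domain features from accelerometer windows + SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def window_features(window):
    """window: (samples, 3) accelerometer window -> simple statistics."""
    return np.concatenate([window.mean(axis=0),
                           window.std(axis=0),
                           np.abs(window).max(axis=0)])

def train_svm(windows, labels):
    """windows: list of (samples, 3) arrays; labels: one activity per window."""
    X = np.vstack([window_features(w) for w in windows])
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")
    print("5-fold CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
    return clf.fit(X, labels)
```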


2021 ◽  
Vol 10 (6) ◽  
pp. 3191-3201
Author(s):  
Vijaya Kumar Kambala ◽  
Harikiran Jonnadula

There is an ever-increasing need to use computer vision devices to capture videos in many real-world applications. However, invading people's privacy is a cause for concern. People's privacy must be protected while videos are used purposefully, based on objective functions. One such use case is human activity recognition without disclosing human identity. In this paper, we propose a multi-task-learning-based hybrid prediction algorithm (MTL-HPA) towards realising a privacy-preserving human activity recognition framework (PPHARF). It serves this purpose by recognizing human activities from videos while preserving the identity of the humans present in the multimedia object. The face of any person in the video is anonymized to preserve privacy, while the person's actions remain visible so that they can be extracted. Anonymization is achieved without losing the utility of human activity recognition. Both humans and face detection methods fail to reveal the identity of the persons in the video. We experimentally confirm, using the Joint-annotated Human Motion Data Base (JHMDB) and Daily Action Localization in YouTube (DALY) datasets, that the framework recognises human activities and ensures non-disclosure of private information. Our approach outperforms many traditional anonymization techniques such as noise addition, blurring, and masking.
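
As a point of reference for the anonymization idea, the sketch below implements a naive baseline of the kind the abstract says the proposed framework outperforms: detect faces in each frame and blur them, so identity is hidden while body motion stays visible to the activity recognizer. The detector choice and kernel size are assumptions; the MTL-HPA model itself is not reproduced here.

```python
# Naive face-blurring baseline for privacy-preserving activity recognition.
import cv2

_face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def anonymize_frame(frame):
    """Blur every detected face region in a BGR frame; body motion is kept."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in _face_detector.detectMultiScale(gray, 1.1, 5):
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(
            frame[y:y + h, x:x + w], (51, 51), 0)
    return frame
```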

