Motion and Location-Based Online Human Daily Activity Recognition

2018 ◽  
pp. 277-296
Author(s):  
Chun Zhu ◽  
Weihua Sheng

In this chapter, the authors propose an approach to indoor human daily activity recognition that combines motion data with location information. A single inertial sensor worn on the subject's thigh provides motion data, while a motion capture system records the subject's location. This combination significantly reduces obtrusiveness to the subject at a moderate cost in vision processing, while maintaining high recognition accuracy. The approach has two phases. First, a two-step algorithm recognizes activities from motion data alone: in the coarse-grained classification, two neural networks classify the basic activities; in the fine-grained classification, the activity sequence is modeled by a Hidden Markov Model (HMM) to capture sequential constraints, and a modified short-time Viterbi algorithm performs real-time recognition. Second, Bayes' theorem is used to fuse the motion data with the location information, refining the activities recognized from motion data. The authors conduct experiments in a mock apartment, and the results demonstrate the effectiveness and accuracy of the algorithms.
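As a rough illustration of the fusion phase described above, the sketch below applies Bayes' theorem to refine a motion-based activity posterior with an observed location. The activity labels, room labels, and probability values are hypothetical placeholders, not taken from the chapter; in practice the location-conditional likelihoods would be learned from training data.

```python
import numpy as np

# Illustrative activity and location sets (hypothetical labels, not from the chapter).
ACTIVITIES = ["sitting", "standing", "walking", "lying"]
LOCATIONS = ["sofa", "kitchen", "hallway", "bed"]

# P(location | activity): rows are activities, columns are locations.
# Values are made-up placeholders standing in for learned statistics.
p_loc_given_act = np.array([
    [0.70, 0.10, 0.05, 0.15],   # sitting
    [0.20, 0.50, 0.25, 0.05],   # standing
    [0.05, 0.20, 0.70, 0.05],   # walking
    [0.10, 0.02, 0.03, 0.85],   # lying
])

def fuse(motion_posterior, location_idx):
    """Refine the motion-based activity posterior with the observed location
    via Bayes' theorem: P(a | motion, loc) ∝ P(loc | a) * P(a | motion)."""
    unnormalized = p_loc_given_act[:, location_idx] * motion_posterior
    return unnormalized / unnormalized.sum()

# Example: the motion classifier finds "sitting" and "lying" equally plausible,
# but the subject is located near the bed, so the fused estimate favors "lying".
motion_posterior = np.array([0.45, 0.05, 0.05, 0.45])
fused = fuse(motion_posterior, LOCATIONS.index("bed"))
print(dict(zip(ACTIVITIES, fused.round(3))))
```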


2021 ◽  
pp. 1-12
Author(s):  
Gokay Saldamli ◽  
Richard Chow ◽  
Hongxia Jin

Social networking services are increasingly accessed through mobile devices. This trend has prompted services such as Facebook and Google+ to incorporate location as a de facto feature of user interaction. At the same time, location-based services such as Foursquare and Shopkick are growing as smartphone market penetration increases. This growth is happening despite concerns, growing at a similar pace, about security and third-party use of private location information (e.g., for advertising). Nevertheless, service providers have been unwilling to build truly private systems in which they do not have access to location information. In this paper, we describe an architecture and a trial implementation of a privacy-preserving location sharing system called ILSSPP. The system protects location information from the service provider while still enabling fine-grained location sharing. A key feature of the system is that it protects an individual's social network structure: the pattern of location-sharing preferences toward contacts can reveal this structure even without any knowledge of the locations themselves. ILSSPP protects location-sharing preferences through protocol unification and masking. ILSSPP has been implemented as a standalone solution, but the technology can also be integrated into location-based services to enhance privacy.
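As a toy illustration of the masking idea, and not the ILSSPP protocol itself, the sketch below sends one fixed-size ciphertext per contact regardless of the actual sharing preference, so the message pattern seen by a relaying server reveals nothing about which contacts are authorized. The contact names, the pre-shared Fernet keys (from the `cryptography` package), and the location payload are all hypothetical stand-ins.

```python
from cryptography.fernet import Fernet
import secrets

contacts = ["alice", "bob", "carol"]                   # hypothetical contact list
allowed = {"alice"}                                    # hypothetical sharing preference
keys = {c: Fernet.generate_key() for c in contacts}    # stand-in for pre-shared per-contact keys

location = b"lat=37.3861,lon=-122.0839"                # example payload

def outgoing_messages():
    """Produce one ciphertext per contact; only authorized contacts can decrypt the real location."""
    msgs = {}
    for c in contacts:
        if c in allowed:
            msgs[c] = Fernet(keys[c]).encrypt(location)
        else:
            # Same-length dummy encrypted under a throwaway key: to the relaying
            # server this message is indistinguishable from a real share.
            dummy = secrets.token_bytes(len(location))
            msgs[c] = Fernet(Fernet.generate_key()).encrypt(dummy)
    return msgs

for contact, token in outgoing_messages().items():
    print(contact, len(token))                         # identical ciphertext sizes, so no preference leaks
```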


2017 ◽  
Vol 14 (4) ◽  
pp. 172988141770907 ◽  
Author(s):  
Hanbo Wu ◽  
Xin Ma ◽  
Zhimeng Zhang ◽  
Haibo Wang ◽  
Yibin Li

Human daily activity recognition has been an active topic in computer vision for decades. Despite best efforts, activity recognition in naturally uncontrolled settings remains a challenging problem. Recently, by perceiving depth and visual cues simultaneously, RGB-D cameras have greatly boosted the performance of activity recognition. However, due to practical difficulties, the publicly available RGB-D data sets are not sufficiently large for benchmarking when the diversity of activities, subjects, and backgrounds is considered. This severely limits the applicability of complicated learning-based recognition approaches. To address the issue, this article provides a large-scale RGB-D activity data set built by merging five public RGB-D data sets that differ from each other in many aspects, such as action length, subject nationality, and camera angle. The merged data set comprises 4528 samples depicting 7 action categories (up to 46 subcategories) performed by 74 subjects. To verify how challenging the data set is, three feature representation methods are evaluated: depth motion maps, the spatiotemporal depth cuboid similarity feature, and curvature space scale. Results show that the merged large-scale data set is more realistic and challenging and therefore more suitable for benchmarking.
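As a rough sketch of one of the evaluated baselines, depth motion maps, the snippet below accumulates frame-to-frame depth differences for a single front-view projection; the full method also projects each frame onto side and top views before accumulation. The noise threshold, frame size, and synthetic clip are illustrative assumptions, not parameters from the article.

```python
import numpy as np

def depth_motion_map(depth_frames, threshold=10):
    """Accumulate motion energy over a depth clip of shape (T, H, W)."""
    frames = np.asarray(depth_frames, dtype=np.float32)
    diffs = np.abs(np.diff(frames, axis=0))   # frame-to-frame motion energy
    diffs[diffs < threshold] = 0              # suppress small sensor noise (assumed threshold)
    return diffs.sum(axis=0)                  # accumulate over the whole sequence

# Example with a synthetic clip: 30 frames of 240x320 depth data.
rng = np.random.default_rng(0)
clip = rng.integers(500, 4000, size=(30, 240, 320))
dmm_front = depth_motion_map(clip)
feature = (dmm_front / (dmm_front.max() + 1e-6)).ravel()   # simple normalized descriptor
print(feature.shape)
```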


2015 ◽  
Vol 19 (5) ◽  
pp. 26-35 ◽  
Author(s):  
Debraj De ◽  
Pratool Bharti ◽  
Sajal K. Das ◽  
Sriram Chellappan
