codebook generation
Recently Published Documents


TOTAL DOCUMENTS: 103 (FIVE YEARS: 11)

H-INDEX: 14 (FIVE YEARS: 1)

2021 ◽  
Vol 13 (4) ◽  
pp. 1699
Author(s):  
Madiha Javeed ◽  
Munkhjargal Gochoo ◽  
Ahmad Jalal ◽  
Kibum Kim

The daily life-log routines of elderly individuals are susceptible to numerous complications in their physical healthcare patterns. Some of these complications can cause injuries, followed by extensive and expensive recovery stages. It is important to identify physical healthcare patterns that can describe and convey the exact state of an individual’s physical health while they perform their daily life activities. In this paper, we propose a novel Sustainable Physical Healthcare Pattern Recognition (SPHR) approach using a hybrid features model that is capable of distinguishing multiple physical activities based on a multiple wearable sensors system. Initially, we acquired raw data from well-known datasets, i.e., the mobile health and human gait databases, which comprise multiple human activities. The proposed strategy includes data pre-processing, hybrid feature detection, and feature-to-feature fusion and reduction, followed by codebook generation and classification, which can recognize sustainable physical healthcare patterns. Feature-to-feature fusion unites the cues from all of the sensors, and Gaussian mixture models are used for codebook generation. For classification, we recommend deep belief networks with restricted Boltzmann machines and five hidden layers. Finally, the results are compared with state-of-the-art techniques in order to demonstrate significant improvements in accuracy for physical healthcare pattern recognition. The experiments show that the proposed architecture attained improved accuracy rates for both datasets and that it represents a significant step toward sustainable physical healthcare pattern recognition (SPHR). The proposed system has potential for use in human–machine interaction domains such as continuous movement recognition, pattern-based surveillance, mobility assistance, and robot control systems.
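A minimal sketch of the Gaussian-mixture codebook step described above, assuming the fused per-window sensor features are already available as a 2-D NumPy array; the sensor fusion, the DBN/RBM classifier, and all sizes shown are illustrative placeholders rather than the authors' implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def build_gmm_codebook(fused_features: np.ndarray, n_codewords: int = 64, seed: int = 0):
    """Fit a Gaussian mixture whose components serve as codewords."""
    gmm = GaussianMixture(n_components=n_codewords, covariance_type="diag", random_state=seed)
    gmm.fit(fused_features)
    return gmm

def encode(gmm: GaussianMixture, fused_features: np.ndarray) -> np.ndarray:
    """Soft-assign feature vectors to codewords and average the posteriors,
    giving one fixed-length descriptor per activity sample."""
    posteriors = gmm.predict_proba(fused_features)   # (n_windows, n_codewords)
    return posteriors.mean(axis=0)                   # (n_codewords,)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    features = rng.normal(size=(500, 32))            # placeholder: 500 windows, 32-D fused features
    codebook = build_gmm_codebook(features, n_codewords=16)
    print(encode(codebook, features).shape)          # (16,)
```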


IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Muhammad Bilal ◽  
Zahid Ullah ◽  
Ihtesham Ul Islam

2020 ◽  
Vol 10 (12) ◽  
pp. 4412
Author(s):  
Ammar Mohsin Butt ◽  
Muhammad Haroon Yousaf ◽  
Fiza Murtaza ◽  
Saima Nazir ◽  
Serestina Viriri ◽  
...  

Human action recognition has gathered significant attention in recent years due to its high demand in various application domains. In this work, we propose a novel codebook generation and hybrid encoding scheme for the classification of action videos. The proposed scheme develops a discriminative codebook and a hybrid feature vector by encoding features extracted from convolutional neural networks (CNNs). We explore different CNN architectures for extracting spatio-temporal features. We employ an agglomerative clustering approach for codebook generation, which combines the advantages of global and class-specific codebooks. We propose a Residual Vector of Locally Aggregated Descriptors (R-VLAD) and fuse it with locality-based coding to form a hybrid feature vector, providing a compact representation along with high-order statistics. We evaluated our work on two publicly available standard benchmark datasets, HMDB-51 and UCF-101, on which the proposed method achieves accuracies of 72.6% and 96.2%, respectively. We conclude that the proposed scheme is able to boost recognition accuracy for human action recognition.
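For illustration, a rough sketch of a standard VLAD encoding over an agglomerative-clustering codebook is given below; the paper's R-VLAD variant and the locality-based coding it is fused with are not reproduced, and the descriptor dimensions are assumed placeholders.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def build_codebook(descriptors: np.ndarray, n_words: int = 32) -> np.ndarray:
    """Cluster descriptors agglomeratively and use cluster means as codewords."""
    labels = AgglomerativeClustering(n_clusters=n_words).fit_predict(descriptors)
    return np.stack([descriptors[labels == k].mean(axis=0) for k in range(n_words)])

def vlad_encode(descriptors: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Aggregate residuals of each descriptor to its nearest codeword."""
    dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    nearest = dists.argmin(axis=1)
    vlad = np.zeros_like(codebook)
    for k in range(codebook.shape[0]):
        assigned = descriptors[nearest == k]
        if len(assigned):
            vlad[k] = (assigned - codebook[k]).sum(axis=0)
    # Power- and L2-normalisation, as is standard for VLAD vectors.
    vlad = np.sign(vlad) * np.sqrt(np.abs(vlad))
    return vlad.ravel() / (np.linalg.norm(vlad) + 1e-12)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(200, 64))        # placeholder CNN descriptors, 64-D
    cb = build_codebook(feats, n_words=8)
    print(vlad_encode(feats, cb).shape)       # (8 * 64,) = (512,)
```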


Surveillance camera systems are installed in supermarkets mainly for security purposes. The main idea of this paper is to use such a surveillance camera system to improve sales performance by targeting a particular stimulus (a child) through marketing promotions. The supermarket owner monitors the entire store through the security cameras and notices an abnormal action by the stimulus (child) when looking at a particular product. By observing the child’s head and arm movements, the owner infers the child’s interest in a product that the parents refuse to buy. This scenario is implemented in this paper using live video analytics that identify the abnormality. Action recognition is the technique used to classify the actions present in a given video. A Bag of Visual Words model is implemented to recognize the action made by the stimulus. This model includes feature extraction, codebook generation, and classification. Features of the stimulus, such as arm and head movements, are extracted using the Speeded-Up Robust Features (SURF) algorithm. Codebook generation is performed by k-means clustering, and the resulting histogram of discriminative features is fed to an SVM classifier, which recognizes the action made by the stimulus (child) in order to identify the child’s interest in a particular product.
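The pipeline above follows the standard Bag of Visual Words recipe. A minimal sketch follows, assuming SURF descriptors have already been extracted for each labelled clip (SURF itself requires an opencv-contrib build and is omitted here); the codebook size, descriptor dimensionality, and labels are illustrative placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def build_codebook(all_descriptors: np.ndarray, k: int = 100, seed: int = 0) -> KMeans:
    """K-means over pooled SURF descriptors; the centroids become visual words."""
    return KMeans(n_clusters=k, random_state=seed, n_init=10).fit(all_descriptors)

def bovw_histogram(codebook: KMeans, descriptors: np.ndarray) -> np.ndarray:
    """Quantise one clip's descriptors and build a normalised word histogram."""
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / (hist.sum() + 1e-12)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Placeholder 64-D SURF descriptors for a handful of labelled clips.
    clips = [rng.normal(size=(rng.integers(50, 80), 64)) for _ in range(6)]
    labels = [0, 0, 0, 1, 1, 1]                       # e.g. interested / not interested
    codebook = build_codebook(np.vstack(clips), k=20)
    X = np.stack([bovw_histogram(codebook, c) for c in clips])
    clf = SVC(kernel="linear").fit(X, labels)         # SVM on BoVW histograms
    print(clf.predict(X))
```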

