Virtual interaction algorithm of cultural heritage based on multi-feature fusion

Author(s):  
Hao Li

In traditional cultural heritage virtual interaction algorithms, the database used for interactive action recognition is too limited, resulting in low recognition accuracy, long recognition times, and other issues. This paper therefore introduces a multi-feature fusion method to optimize the cultural heritage virtual interaction algorithm. Kinect skeleton tracking technology is applied to identify the movement of the tracked object: 20 joints of the human body are tracked, and interactive action recognition is realized based on fingertip candidate points. A multi-feature fusion database is established to support the judgment of subsequently recognized actions during virtual interactive operation. Mean shift is used to derive the moving mean of the target's action position and to track the interactive object. The Euclidean distance formula is used to match samples against the multi-feature fusion database, realizing the judgment of recognized actions and the virtual interaction. To verify the feasibility of the proposed algorithm, a virtual interactive ink-painting script from a cultural heritage museum is used for simulation, and a comparative experiment is designed. The experimental results show that the proposed algorithm outperforms the traditional virtual interaction algorithm in recognition accuracy and efficiency, which proves the feasibility of this method.
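The core of the pipeline described above (mean-shift tracking of the target's position, then Euclidean-distance matching against the fused-feature database) can be sketched as follows. This is a minimal Python/NumPy illustration; the Gaussian kernel, the nearest-neighbor decision rule, and all names and dimensions are assumptions, since the abstract does not specify them:

```python
import numpy as np

def mean_shift_step(candidates, center, bandwidth=0.2):
    """One mean-shift iteration: the new center is the kernel-weighted
    mean of candidate positions around the current tracking center."""
    dists = np.linalg.norm(candidates - center, axis=1)
    weights = np.exp(-(dists / bandwidth) ** 2)   # Gaussian kernel (assumed)
    return (weights[:, None] * candidates).sum(axis=0) / weights.sum()

def recognize_action(fused_feature, db_features, db_labels):
    """Match a fused feature vector against the multi-feature fusion
    database by Euclidean distance (nearest neighbor, assumed)."""
    dists = np.linalg.norm(db_features - fused_feature, axis=1)
    return db_labels[int(np.argmin(dists))]

# Hypothetical usage: 20 tracked Kinect joints flattened into one vector.
joints = np.random.rand(20, 3)                    # placeholder joint positions
center = mean_shift_step(joints, joints.mean(axis=0))
db = np.random.rand(50, 60)                       # 50 stored action samples
labels = np.array(["wave", "point"] * 25)
action = recognize_action(joints.reshape(-1), db, labels)
```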

2009, Vol 29 (8), pp. 2074-2076
Author(s):  
Hua LI, Ming-xin ZHANG, Jing-long ZHENG

2021, Vol 11 (11), pp. 4940
Author(s):  
Jinsoo Kim, Jeongho Cho

Research on video data faces the difficulty of extracting not only spatial but also temporal features, and human action recognition (HAR) is a representative field that applies convolutional neural networks (CNNs) to video data. Action recognition performance has improved, but owing to model complexity, limitations to real-time operation persist. Therefore, a lightweight CNN-based single-stream HAR model that can operate in real time is proposed. The proposed model extracts spatial feature maps by applying a CNN to the images that compose the video and uses the frame change rate of sequential images as temporal information. Spatial feature maps are weighted-averaged by frame change rate, transformed into spatiotemporal features, and input to a multilayer perceptron, which has relatively lower complexity than other HAR models; thus, the method has high utility in a single embedded system connected to CCTV. Evaluation of action recognition accuracy and data processing speed on the challenging UCF-101 action recognition benchmark showed higher accuracy than an HAR model using long short-term memory with a small number of video frames, and the fast data processing speed confirmed the possibility of real-time operation. In addition, the performance of the proposed weighted-mean-based HAR model was verified by testing it on a Jetson Nano, confirming its applicability to low-cost GPU-based embedded systems.
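The weighted-mean idea, per-frame CNN features averaged with weights given by the frame change rate, can be sketched as below. This is a rough illustration under stated assumptions: the mean-absolute-difference definition of the change rate, its normalization, and the pairing of weights with frames are not specified in the abstract:

```python
import numpy as np

def change_rate_weights(frames):
    """Frame change rate: mean absolute difference between consecutive
    frames, normalized so the weights sum to 1 (assumed normalization)."""
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0)).mean(axis=(1, 2, 3))
    return diffs / (diffs.sum() + 1e-8)

def spatiotemporal_feature(feature_maps, weights):
    """Weighted mean of per-frame CNN spatial features; the weights carry
    the temporal information, replacing a recurrent layer."""
    return (weights[:, None] * feature_maps).sum(axis=0)

# Hypothetical usage: T frames, each yielding a D-dim CNN feature vector.
T, H, W, C, D = 16, 112, 112, 3, 512
frames = np.random.randint(0, 256, (T, H, W, C), dtype=np.uint8)
feats = np.random.rand(T - 1, D)   # features for frames 1..T-1 (pairing assumed)
st_feat = spatiotemporal_feature(feats, change_rate_weights(frames))
# st_feat is then fed to a small multilayer perceptron for classification.
```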


Sensors, 2019, Vol 19 (5), pp. 1245
Author(s):  
Tao Wang, Wen Wang, Hui Liu, Tianping Li

With the revolutionary development of cloud computing and the Internet of Things, the integration and utilization of "big data" resources is a hot topic in artificial intelligence research. Face recognition has the advantages of being non-replicable, difficult to steal, simple, and intuitive. Video face tracking in the context of big data has become an important research hotspot in the field of information security. In this paper, a particle filter tracking framework with multi-feature fusion, an adaptively adjusted target tracking window, and an adaptively updated template is proposed. First, the skin color and edge features of the face are extracted from the video sequence, and a weighted color histogram describing the face features is computed. Then the integral histogram method is used to simplify the histogram calculation for the particles. Finally, according to the change of the average distance, the tracking window is adjusted to track the target accurately. At the same time, the algorithm adaptively updates the tracking template, which improves tracking accuracy. The experimental results show that the proposed method improves the tracking effect and has strong robustness under complex conditions such as varying skin color, illumination changes, and face occlusion.
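A minimal sketch of the particle-weighting and adaptive template-update steps is shown below. The Bhattacharyya coefficient as the histogram similarity and the blending factor alpha are assumptions; the abstract does not give the exact likelihood or update rule:

```python
import numpy as np

def particle_weights(particle_hists, template_hist):
    """Weight each particle by the Bhattacharyya coefficient between its
    color histogram and the face template histogram (similarity assumed)."""
    bc = np.sqrt(particle_hists * template_hist).sum(axis=1)
    return bc / bc.sum()

def update_template(template_hist, best_hist, alpha=0.1):
    """Adaptive template update: blend the best current observation into
    the template so it follows gradual appearance change (alpha assumed)."""
    return (1.0 - alpha) * template_hist + alpha * best_hist

# Hypothetical usage with 100 particles and 32-bin normalized histograms.
rng = np.random.default_rng(0)
hists = rng.random((100, 32))
hists /= hists.sum(axis=1, keepdims=True)
template = hists[0].copy()
w = particle_weights(hists, template)
template = update_template(template, hists[np.argmax(w)])
```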


2020, Vol 2020, pp. 1-18
Author(s):  
Chao Tang, Huosheng Hu, Wenjian Wang, Wei Li, Hua Peng, ...

The representation and selection of action features directly affect the performance of human action recognition methods. A single feature is often affected by human appearance, the environment, camera settings, and other factors. Aiming at the problem that existing multimodal feature fusion methods cannot effectively measure the contribution of different features, this paper proposes a human action recognition method based on RGB-D image features, which makes full use of the multimodal information provided by RGB-D sensors to extract effective human action features. Three kinds of human action features carrying different modal information are proposed: the RGB-HOG feature, based on RGB image information, which has good geometric scale invariance; the D-STIP feature, based on the depth image, which maintains the dynamic characteristics of human motion and has local invariance; and the S-JRPF feature, based on skeleton information, which describes the spatial structure of motion well. At the same time, multiple K-nearest-neighbor classifiers with good generalization ability are combined for decision-level classification. The experimental results show that the algorithm achieves good recognition results on the public G3D and CAD-60 datasets.
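The decision-level fusion of per-modality K-nearest-neighbor classifiers can be sketched as follows. A plain majority vote over the three modalities is assumed here; the paper may instead weight the classifiers by their measured contributions:

```python
import numpy as np
from collections import Counter

def knn_predict(train_X, train_y, x, k=5):
    """Plain Euclidean k-nearest-neighbor vote for one feature modality."""
    idx = np.argsort(np.linalg.norm(train_X - x, axis=1))[:k]
    return Counter(train_y[i] for i in idx).most_common(1)[0][0]

def fused_predict(modalities, queries, k=5):
    """Decision-level fusion: majority vote over per-modality KNN
    predictions (e.g., RGB-HOG, D-STIP, S-JRPF); voting rule assumed."""
    votes = [knn_predict(X, y, q, k) for (X, y), q in zip(modalities, queries)]
    return Counter(votes).most_common(1)[0][0]

# Hypothetical usage with three modalities of different dimensionality.
rng = np.random.default_rng(1)
y = np.array(["wave", "kick"] * 10)
mods = [(rng.random((20, d)), y) for d in (128, 64, 96)]
qs = [rng.random(d) for d in (128, 64, 96)]
print(fused_predict(mods, qs))
```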


Author(s):  
Jie Miao, Xiangmin Xu, Xiaoyi Jia, Haoyu Huang, Bolun Cai, ...
