Research on Action Recognition Method of Dance Video Image Based on Human-Computer Interaction

2021, Vol. 2021, pp. 1-9
Author(s): FenTian Peng, Hongkai Zhang

Human-computer interaction technology simplifies otherwise complicated procedures. To address the inadequate description and low recognition rate of dance actions, this paper studies an action recognition method for dance video images based on human-computer interaction. The method builds the recognition process on human-computer interaction technology, constructs a human skeleton model from the spatial positions, motion characteristics, and change angles of the skeleton, describes dance posture features by generating a skeleton node graph, and extracts the key frames of the dance video with a clustering algorithm to recognize the dance action. The experimental results show that the recognition rate of this method under different entropy values is not less than 88%. Under complex, dark, bright, and multiuser-interference test conditions, the model describes the dance posture accurately, with average recognition rates of 93.43%, 91.27%, 97.15%, and 89.99%, respectively. The method is suitable for action recognition in most dance video images.
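The clustering-based key-frame extraction step can be sketched as follows: cluster per-frame feature vectors, then keep the frame nearest each cluster centroid as a key frame. This is a minimal sketch under assumptions the abstract does not state; the plain k-means routine, the toy 2-D features, and the nearest-to-centroid selection rule are illustrative, not the paper's exact algorithm.

```python
import random

random.seed(0)  # fixed seed so centroid initialization is repeatable

def kmeans(points, k, iters=50):
    """Plain k-means on equal-length feature vectors (lists of floats)."""
    centroids = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        centroids = [
            [sum(dim) / len(c) for dim in zip(*c)] if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids

def key_frames(features, k):
    """Return the index of the frame closest to each cluster centroid."""
    centroids = kmeans(features, k)
    picks = []
    for c in centroids:
        picks.append(min(range(len(features)),
                         key=lambda i: sum((a - b) ** 2
                                           for a, b in zip(features[i], c))))
    return sorted(set(picks))
```

With two well-separated groups of frame features, one representative frame per group is returned.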

2021, Vol. 2021, pp. 1-10
Author(s): Shuai Jiang, Lei Wang, Yuanyuan Dong

In order to improve the effect of online English teaching, this paper applies sensors and human-computer interaction to English teaching. It refines the sensor information with a Kalman filter, traces students in online English teaching with a sensor positioning algorithm, and converts the skeleton kernels into coordinates of a rectangular spatial coordinate system with the waist as the origin, yielding a human-computer interaction skeleton model in virtual reality. According to the actual needs of English-teaching human-computer interaction, the paper builds a new English teaching system based on the sensor and human-computer interaction components and tests its performance. The experiments suggest that the proposed smart system can effectively improve English teaching.
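The Kalman-filter refinement of sensor readings mentioned above can be illustrated with a minimal scalar filter. The constant-position model and the noise constants `q` and `r` are assumptions; the abstract gives no filter parameters.

```python
class Kalman1D:
    """Scalar Kalman filter: constant-position model with assumed
    process noise q and measurement noise r."""
    def __init__(self, q=1e-3, r=0.1, x0=0.0, p0=1.0):
        self.q, self.r, self.x, self.p = q, r, x0, p0

    def update(self, z):
        self.p += self.q                 # predict: uncertainty grows
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (z - self.x)       # correct with measurement z
        self.p *= (1.0 - k)              # uncertainty shrinks after correction
        return self.x

kf = Kalman1D(x0=0.0)
readings = [1.0, 1.1, 0.9, 1.05, 0.95]
smoothed = [kf.update(z) for z in readings]
```

After a few noisy readings near 1.0, the estimate settles close to 1.0 and the error covariance `p` drops well below its initial value.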


2021, Vol. 2021, pp. 1-13
Author(s): Wei Wang

Shuttlecock is an excellent traditional national sport in China. Because it is simple, convenient, and fun, it is loved by a broad range of people, especially teenagers and children. Shuttlecock has only recently developed into a competitive event, and mastering its tactics and strategies still requires sustained research. On this basis, this article proposes using machine learning algorithms to recognize shuttlecock movements, aiming to provide theoretical and technical support for shuttlecock competitions by identifying features from actions with the assistance of these algorithms. The paper uses literature research, modeling, comparative analysis, and other methods to study the motion characteristics of shuttlecock play and the key machine learning algorithms involved, and constructs a shuttlecock motion recognition model based on a multiview clustering algorithm. The robustness and accuracy of the model are analyzed against other algorithms through several performance comparisons and through the shuttlecock motion recognition images. For the key shuttlecock movements of disk, stretch, hook, wipe, knock, and abduction, the proposed algorithm achieves a good recognition rate of up to 91.2%. Even for several similar actions, the average recognition accuracy exceeds 75%, and through continuous image capture the number of occurrences of an action can be counted automatically, which helps athletes and coaches analyze tactics and research strategies.
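One standard ingredient of multiview methods, fusing per-view features so that no single camera view dominates the distance metric, can be sketched as follows. The per-view z-score normalization and simple concatenation are assumptions for illustration, not the paper's actual multiview clustering scheme.

```python
def zscore(col):
    """Standardize one feature column (population std; guard against zero)."""
    m = sum(col) / len(col)
    s = (sum((x - m) ** 2 for x in col) / len(col)) ** 0.5 or 1.0
    return [(x - m) / s for x in col]

def fuse_views(view_a, view_b):
    """Normalize each view's feature columns independently, then
    concatenate row-wise, so both views contribute on the same scale."""
    na = [zscore(c) for c in zip(*view_a)]
    nb = [zscore(c) for c in zip(*view_b)]
    return [list(row_a) + list(row_b)
            for row_a, row_b in zip(zip(*na), zip(*nb))]
```

The fused rows can then be fed to any clustering routine; samples that agree across views end up close together even when the raw feature scales differ.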


2020, pp. 1-6
Author(s): Fei Liu, Peng Xu, Hongliu Yu

BACKGROUND: Traditional meal-assistance robots rely on human-computer interaction channels such as buttons, voice, and EEG. However, most of them require strong programming skills to develop, and in most cases the interaction is inconvenient or the recognition rate unsatisfactory. OBJECTIVE: To develop a convenient human-computer interaction mode with a high recognition rate that lets users without programming ability make the robot adapt well to new environments. METHODS: A visual interaction method based on deep learning was used to develop the feeding robot: when the camera detects that the user's mouth has been open for 2 seconds, the feeding command is issued, and feeding pauses when the eyes are closed for 2 seconds. A learning-from-demonstration programming method, which is simple and adapts well to different environments, was employed to generate the feeding trajectory. RESULTS: The user is able to eat independently through convenient visual interaction, and a new eating environment only requires the caregiver to drag-and-teach the robotic arm once.
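The 2-second mouth-open/eyes-closed commands amount to a dwell-time trigger on top of the face detector. A minimal sketch, assuming an injectable clock for testability; the deep-learning detector that supplies the `active` flag is not shown.

```python
import time

class DwellTrigger:
    """Fires once when a condition has been continuously true for
    `hold` seconds (2 s mirrors the paper's commands)."""
    def __init__(self, hold=2.0, clock=time.monotonic):
        self.hold, self.clock = hold, clock
        self.since = None    # time the condition became true
        self.fired = False   # edge-trigger latch

    def step(self, active):
        now = self.clock()
        if not active:
            self.since, self.fired = None, False
            return False
        if self.since is None:
            self.since = now
        if not self.fired and now - self.since >= self.hold:
            self.fired = True
            return True      # fire exactly once per continuous hold
        return False
```

Each camera frame calls `step(mouth_open)`; the trigger fires once after two continuous seconds and re-arms only after the condition drops.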


2015, Vol. 2015, pp. 1-10
Author(s): Seongjo Lee, Sohyun Sim, Kyhyun Um, Young-Sik Jeong, Seung-won Jung, ...

Concomitant with the advent of the ubiquitous-computing era, research into better human-computer interaction (HCI) through human-focused interfaces has intensified. The natural user interface (NUI), in particular, is being actively investigated with the objective of more intuitive and simpler interaction between humans and computers. However, developing NUI-based applications without specialized NUI knowledge is difficult. This paper proposes an NUI-specific SDK, called "Gesture SDK," for developing NUI-based applications. Gesture SDK provides a gesture generator with which developers can define gestures directly, and a "Gesture Recognition Component" that enables applications to recognize the defined gestures. We generated gestures using the proposed SDK and developed "Smart Interior," an NUI-based application built on the Gesture Recognition Component. Experiments indicate that the recognition rate of the generated gestures was 96% on average.
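An SDK of this kind typically stores developer-defined gestures as templates and matches live input against them. A minimal sketch under that assumption, using path resampling and nearest-template matching; the actual Gesture SDK internals are not described in the abstract, so all names here are hypothetical.

```python
import math

def resample(path, n=16):
    """Resample a 2-D point path to n evenly spaced points (linear interp),
    so paths of different lengths become comparable."""
    dists = [0.0]
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        dists.append(dists[-1] + math.hypot(x1 - x0, y1 - y0))
    total = dists[-1] or 1.0
    out, j = [], 0
    for i in range(n):
        target = total * i / (n - 1)
        while j < len(path) - 2 and dists[j + 1] < target:
            j += 1
        seg = dists[j + 1] - dists[j] or 1.0
        t = (target - dists[j]) / seg
        out.append((path[j][0] + t * (path[j + 1][0] - path[j][0]),
                    path[j][1] + t * (path[j + 1][1] - path[j][1])))
    return out

def recognize(path, templates):
    """Return the name of the stored template closest to the input path."""
    p = resample(path)
    def dist(template):
        return sum((a - c) ** 2 + (b - d) ** 2
                   for (a, b), (c, d) in zip(p, resample(template)))
    return min(templates, key=lambda name: dist(templates[name]))
```

A developer "defines" a gesture by storing its path under a name; recognition is then a nearest-template lookup.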


2013, Vol. 13 (02), pp. 1340001
Author(s): Siddharth Swarup Rautaray, Anupam Agrawal

Traditional human-computer interaction devices such as the keyboard and mouse are ineffective for interacting with virtual environment applications, because 3D applications call for new interaction devices. Efficient human interaction with modern virtual environments requires more natural devices, and among these the "hand gesture" human-computer interaction modality has recently attracted major interest. The main objective of gesture recognition research is to build a system that can recognize human gestures and use them to control an application. One drawback of present gesture recognition systems is that they are application-dependent, which makes it difficult to transfer one gesture control interface to multiple applications. This paper focuses on designing a hand gesture recognition system that is both vocabulary independent and adaptable to multiple applications. The designed system comprises processing steps such as detection, segmentation, tracking, and recognition. Vocabulary independence is achieved through a robust gesture mapping module that allows the user to cognitively map different gestures to the same command and vice versa. For performance analysis of the proposed system, accuracy, recognition rate, and command response time have been compared, because these parameters have a vital impact on the performance of a vocabulary- and application-independent hand gesture recognition system.
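The gesture mapping module described above, letting users bind several gestures to the same command and rebind a gesture to a new command, can be sketched as a thin mapping layer. The class and method names are hypothetical; the paper does not expose its module's interface.

```python
class GestureMapper:
    """Vocabulary-independent mapping layer between recognized gesture
    names and application commands."""
    def __init__(self):
        self._map = {}

    def bind(self, gesture, command):
        self._map[gesture] = command   # rebinding a gesture overwrites it

    def command_for(self, gesture):
        return self._map.get(gesture)  # None if the gesture is unbound

    def gestures_for(self, command):
        return sorted(g for g, c in self._map.items() if c == command)

m = GestureMapper()
m.bind("swipe_left", "prev_slide")
m.bind("two_finger_tap", "prev_slide")  # many gestures -> one command
m.bind("circle", "zoom")
m.bind("circle", "rotate")              # same gesture remapped later
```

Because applications only see command names, the same recognizer front end can drive any application that registers its own bindings.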


2021, Vol. 2021, pp. 1-8
Author(s): Lei Qiao, QiuHao Shen

In order to effectively improve the recognition rate of human actions in dance video images, shorten the recognition time, and ensure the recognition quality, this study proposes a human motion recognition method for dance video images. The method uses neural network theory to transform and process human action postures in the dance video image, constructs a hybrid model of motion feature pixels from the feature points of human actions in the image coordinate system, and extracts the human motion features from the video. Using the background probability model of the human action image, it sums the variance of the human action feature function to update that function, and applies a Kalman filter to detect human actions in the video. Multi-posture action image features are obtained from linear combinations of the action features; combined with the feature distribution matrix, the features are processed through pose transformation to obtain a human action feature model that accurately identifies the human actions in the dance video image. The experimental results show that the proposed method recognizes dance motion well, effectively improving the recognition rate of human actions in dance video images and shortening the recognition time.
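The background probability model used for detection can be illustrated with a per-pixel running Gaussian, a standard formulation in which a pixel counts as foreground when it deviates from its running mean by more than a few standard deviations. The flat-list frame representation and the `alpha`/`k` constants are assumptions; the abstract does not specify the model's parameters.

```python
class RunningGaussianBG:
    """Per-pixel running Gaussian background model (assumed constants:
    learning rate alpha, deviation threshold k, initial variance 25)."""
    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = [float(v) for v in first_frame]
        self.var = [25.0] * len(first_frame)
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        """Return a foreground mask and update the model on background pixels."""
        mask = []
        for i, v in enumerate(frame):
            d = v - self.mean[i]
            fg = d * d > self.k ** 2 * self.var[i]
            mask.append(fg)
            if not fg:  # only background pixels refine the model
                self.mean[i] += self.alpha * d
                self.var[i] += self.alpha * (d * d - self.var[i])
        return mask
```

After a few static frames the variance tightens, and a pixel that jumps in intensity is flagged as (moving) foreground.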


Author(s): Tomaž Vodlan, Andrej Košir

This chapter presents a methodology for transforming behavioural cues into social signals (SSs) in human-computer interaction, consisting of three steps: acquisition of behavioural cues, manual and algorithmic pre-selection of behavioural cues, and classifier selection. The methodology was applied to the SS class {hesitation, no hesitation} in the interaction between a user and a video-on-demand system. The first step involved observing the user during interaction and collecting information about behavioural cues; it was tested on several users. The second step was the manual and algorithmic pre-selection of all observed cues into a subset of the most significant ones; different combinations of the selected cues were then used in a verification process to find the combination with the best recognition rate. The last step was the selection of an appropriate classifier; for example, a logistic regression model was obtained in combination with four features.
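The final step, fitting a logistic regression classifier over the selected cue features, can be sketched with plain gradient descent. The two-feature toy layout, learning rate, and epoch count are assumptions; the chapter reports a four-feature model but does not give its training details.

```python
import math

def train_logreg(X, y, lr=0.5, epochs=200):
    """Fit logistic regression by stochastic gradient descent.
    X rows are cue feature vectors, y is 1 for 'hesitation'."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))      # predicted probability
            g = p - yi                          # gradient of log-loss w.r.t. z
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    """Classify a cue vector: True means 'hesitation'."""
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z)) > 0.5
```

On a toy set where only the first cue carries the signal, the fitted model learns to ignore the uninformative second cue.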

