The effect of real-time pose recognition on badminton learning performance

Author(s): Kuo-Chin Lin, Cheng-Wen Ko, Hui-Chun Hung, Nian-Shing Chen

Author(s): Yugo Hayashi

Abstract: Research on collaborative learning has revealed that peer-collaboration explanation activities facilitate reflection and metacognition, and that establishing common ground and successful coordination are keys to realizing effective knowledge-sharing in collaborative learning tasks. Studies on computer-supported collaborative learning have investigated how awareness tools can facilitate coordination within a group and how external facilitation scripts can elicit elaborated knowledge during collaboration. However, the separate and joint effects of these tools on the nature of the collaborative process and on performance have rarely been investigated. This study investigates how two facilitation methods (coordination support via learner gaze-awareness feedback and metacognitive suggestion provision via a pedagogical conversational agent, PCA) enhance the learning process and learning gains. Eighty participants, organized into dyads, were enrolled in a 2 × 2 between-subject study. The first factor was the presence of real-time gaze feedback (no vs. visible gaze); the second was the presence of a suggestion-providing PCA (no vs. visible agent). Two evaluation methods were used: dialog analysis of the collaborative process and evaluation of learning gains. Both the real-time gaze feedback and the PCA suggestions facilitated the coordination process, while gaze was relatively more effective in improving learning gains. Learners in the gaze-feedback condition achieved superior learning gains upon receiving PCA suggestions. A correlation between successful coordination and high learning performance was noted solely for learners receiving visible gaze feedback and PCA suggestions simultaneously (visible gaze/visible agent). These findings suggest that integrating the two methods can improve both the collaborative process and learning gains, and they contribute towards design principles for collaborative-learning support systems more generally.


2009, Vol. 83 (1), pp. 72-84
Author(s): Michael Van den Bergh, Esther Koller-Meier, Luc Van Gool

Author(s): Jamie Shotton, Andrew Fitzgibbon, Mat Cook, Toby Sharp, Mark Finocchio, ...

2021, Vol. 12
Author(s): Chengming Ma, Qian Liu, Yaqi Dang

This paper provides an in-depth study and analysis of human artistic poses through intelligently enhanced multimodal pose recognition. A complementary network model architecture for multimodal information, based on motion energy, is proposed. The network exploits both the rich appearance features provided by RGB data and the depth information provided by depth data, which is robust to luminance and observation angle; multimodal fusion is accomplished through the complementary characteristics of the two modalities. To better model long-range temporal structure while accounting for action classes that share sub-actions, an energy-guided video segmentation method is employed. In the feature fusion stage, a cross-modal cross-fusion approach is proposed that lets the convolutional network share local features of the two modalities in the shallow layers and also fuse global features in the deep convolutional layers by connecting the feature maps of multiple convolutional layers.

In the recognition pipeline, a Kinect camera first acquires the color image data, the depth image data, and the 3D coordinates of the skeletal points via the OpenPose open-source framework. Keyframes are then extracted automatically based on the distance between the hand and the head; relative-distance features are extracted from the keyframes to describe the action, while local occupancy pattern features and HSV color-space features are extracted to describe the object. Finally, the features are fused and the complex action-recognition task is completed.

To solve the consistency problem of virtual-reality fusion, the mapping between hand joint-point coordinates and the virtual scene is determined in the augmented-reality scene, and a coordinate-consistency model between the natural hand and the virtual model is established. Real-time interaction between hand gestures and the virtual model is thus realized, with an average gesture-recognition accuracy of 99.04%, improving the robustness and real-time performance of hand-gesture recognition.
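The pipeline above suggests two small illustrations. First, a minimal Python sketch of the keyframe-extraction step: selecting frames by the hand-head distance computed from OpenPose-style 3D skeletal points. The joint indices, function names, and threshold value are assumptions for illustration, not the authors' implementation.

import numpy as np

# Joint indices follow the OpenPose BODY_25 layout (an assumption):
# 0 = nose (head), 4 = right wrist (hand).
HEAD, HAND = 0, 4

def hand_head_distance(frame):
    """Euclidean head-hand distance for one frame.
    frame: (num_joints, 3) array of 3D skeletal coordinates."""
    return float(np.linalg.norm(frame[HAND] - frame[HEAD]))

def extract_keyframes(sequence, threshold=0.25):
    """Select keyframes where the hand approaches the head: local minima of
    the hand-head distance that fall below a threshold (metres, illustrative)."""
    d = [hand_head_distance(f) for f in sequence]
    return [t for t in range(1, len(d) - 1)
            if d[t] < threshold and d[t] <= d[t - 1] and d[t] <= d[t + 1]]

Second, the coordinate-consistency step can be sketched as a calibrated similarity transform that maps a tracked hand joint from camera space into the virtual scene, keeping the natural hand and the virtual model aligned. The calibration values below are placeholders, not the paper's parameters.

# Assumed calibration aligning the Kinect camera frame with the virtual scene:
# rotation R, translation t (metres), and uniform scale s.
R = np.eye(3)                   # placeholder rotation
t = np.array([0.0, 1.2, -0.5])  # placeholder translation
s = 1.0                         # placeholder scale

def camera_to_scene(p_cam):
    """Map one 3D hand-joint position from camera space into scene space."""
    return s * (R @ p_cam) + t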


2014
Author(s): ByungIn Yoo, Changkyu Choi, Jae-Joon Han, Changkyo Lee, Wonjun Kim, ...

Author(s): Eder de Oliveira, Esteban Walter Gonzalez Clua, Cristina Nader Vasconcelos, Bruno Augusto Dorta Marques, Daniela Gorski Trevisan, ...

