A real-time visual feedback system of strength self-training with motion capture

Author(s):  
Hikaru Kaneko ◽  
Mitsunori Makino
2021 ◽  
Author(s):  
Satoshi Miura ◽  
Kento Nakagawa ◽  
Kazumasa Hirooka ◽  
Yuya Matsumoto ◽  
Yumi Umesawa ◽  
...  

Abstract: Sports-assisting technologies have been developed; however, most aim to improve performance in individual sports such as skiing, batting, and swimming. Few studies have focused on team sports, which require not only the motor ability of individual players but also the perceptual ability to grasp one's own position and those of others. In the present study, we aim to validate the feasibility of a visual feedback system for improving the perception of one's position relative to other people. Herein, the visual feedback system is composed of a flying drone that transmits its image to the participant's smart glasses. With the system, the participant was able to see his/her own relative position in real time through the glasses. Nine participants tried to position themselves on the line between two experimenters 30 m away from each other, which simulated the situation of a baseball cutoff man. As a result, the error distance between the participants' positions and the line decreased significantly when using the system compared with not using it. Furthermore, after participants practiced the task with the system, the error decreased compared to that before the practice. In conclusion, a real-time feedback system providing a bird's-eye view can improve the accuracy of space perception.
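The positioning error reported above is just the perpendicular distance from the participant to the line between the two experimenters, which an overhead-view system would compute from drone-image coordinates. A minimal sketch (coordinates, names, and values are illustrative, not from the study):

```python
import math

def distance_to_line(p, a, b):
    """Perpendicular distance from point p to the infinite line through a and b.

    Points are (x, y) tuples in metres; uses the cross-product formula
    |AB x AP| / |AB|.
    """
    ax, ay = a
    bx, by = b
    px, py = p
    cross = abs((bx - ax) * (py - ay) - (by - ay) * (px - ax))
    return cross / math.hypot(bx - ax, by - ay)

# Two experimenters 30 m apart; a participant standing 1.5 m off the line.
error = distance_to_line((15.0, 1.5), (0.0, 0.0), (30.0, 0.0))  # 1.5 m
```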


2014 ◽  
Author(s):  
William Katz ◽  
Thomas F. Campbell ◽  
Jun Wang ◽  
Eric Farrar ◽  
J. Coleman Eubanks ◽  
...  

Author(s):  
Jeffrey R. Gould ◽  
Lisa Campana ◽  
Danielle Rabickow ◽  
Richard Raymond ◽  
Robert Partridge

2021 ◽  
Author(s):  
Randy Tan

This thesis presents a real-time human activity analysis system in which a user's activity can be quantitatively evaluated against a ground-truth recording. Multiple Kinects are used to solve the problem of self-occlusion while performing an activity. The Kinects are placed at locations with different perspectives, and the optimal joint positions of a user are extracted using Singular Value Decomposition (SVD) and Sequential Quadratic Programming (SQP). The extracted joint positions are then fed through our Incremental Dynamic Time Warping (IDTW) algorithm so that an incomplete sequence from a user can be optimally compared against the complete sequence from an expert (the ground truth). Furthermore, the user's performance is communicated through a novel visual feedback system in which colors on the skeleton indicate the user's level of performance. Experimental results demonstrate the impact of our system: through elaborate user testing, we show that our IDTW algorithm combined with visual feedback quantitatively improves the user's performance.
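The abstract names IDTW without reproducing it; one common way to compare an incomplete sequence against a complete reference is open-end DTW, where the alignment may terminate at any prefix of the expert sequence. A minimal NumPy sketch under that assumption (function names and details are illustrative, not the thesis's implementation):

```python
import numpy as np

def incremental_dtw(user, expert):
    """Open-end DTW: align a possibly incomplete user sequence against a
    complete expert sequence.

    user, expert: arrays of shape (n_frames, n_features), e.g. joint positions.
    Returns the best cumulative cost of matching all user frames to some
    prefix of the expert sequence, and the length of that prefix.
    """
    n, m = len(user), len(expert)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(user[i - 1] - expert[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Open end: the user may have completed only part of the activity,
    # so take the best cost over all expert prefixes.
    j_best = int(np.argmin(D[n, 1:])) + 1
    return D[n, j_best], j_best
```

As the user's recording grows frame by frame, only one new row of `D` needs to be filled in, which is what makes the comparison feasible in real time.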


Proceedings ◽  
2020 ◽  
Vol 49 (1) ◽  
pp. 40
Author(s):  
Hiroki Yokota ◽  
Munekazu Naito ◽  
Naoki Mizuno ◽  
Shigemichi Ohshima

In this research, we propose and evaluate a visual-feedback system based on motion-sensing and computational technologies. The system is intended to help amateur athletes imitate the motor skills of professionals. Using a self-organizing map (SOM) to visualize high-dimensional time-series motion data, we recorded the cyclic motion information, including the muscle activities, of a male subject as he pedaled a bicycle ergometer. To clarify the difference between the subject's motor skill and the target motor skill in a cyclic movement, we used a modified SOM algorithm and developed a visual-feedback system that displayed the target motion as a circular trajectory on a two-dimensional motor-skill map. The subject trained by observing the displayed static target trajectory while his real-time trajectory, constructed from his ongoing motion, was shown on the same map. We validated the proposed framework by evaluating the subject's motion performance after feedback training.
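The abstract describes projecting high-dimensional cyclic motion data onto a 2-D SOM so that a pedaling cycle appears as a closed trajectory on the map. A minimal NumPy sketch of that idea, with the grid size, learning schedule, and synthetic "muscle activity" data all assumed for illustration (this is a plain SOM, not the modified algorithm used in the study):

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0):
    """Train a small SOM; returns weight grid of shape (gx, gy, n_features)."""
    gx, gy = grid
    w = rng.random((gx, gy, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(gx), np.arange(gy),
                                  indexing="ij"), axis=-1)
    n_steps, step = epochs * len(data), 0
    for _ in range(epochs):
        for x in data:
            t = step / n_steps
            lr = lr0 * (1 - t)                # decaying learning rate
            sigma = sigma0 * (1 - t) + 0.5    # shrinking neighbourhood
            # Best-matching unit for this sample
            bmu = np.unravel_index(
                np.argmin(np.linalg.norm(w - x, axis=-1)), (gx, gy))
            # Gaussian neighbourhood update around the BMU
            g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1)
                       / (2 * sigma ** 2))
            w += lr * g[..., None] * (x - w)
            step += 1
    return w

def project(w, seq):
    """Map each frame of a motion sequence to its BMU grid coordinates,
    giving the 2-D trajectory drawn on the motor-skill map."""
    return [np.unravel_index(np.argmin(np.linalg.norm(w - x, axis=-1)),
                             w.shape[:2]) for x in seq]

# Hypothetical cyclic data: phase-shifted sinusoids standing in for 8 muscles.
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
data = np.stack([np.sin(theta + k) for k in range(8)], axis=1)
w = train_som(data)
trajectory = project(w, data)  # cyclic motion traces a closed loop on the map
```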


2013 ◽  
Vol 22 (3) ◽  
pp. 202-215 ◽  
Author(s):  
Manuel Varlet ◽  
Alessandro Filippeschi ◽  
Grégory Ben-sadoun ◽  
Mickael Ratto ◽  
Ludovic Marin ◽  
...  

The success of interpersonal activities strongly depends on the coordination between our movements and those of others. Learning to coordinate with other people requires a long training time and is often limited by the difficulty of having people available at the same time and of giving them accurate, real-time feedback about their coordination. The goal of the present study was to determine, in an indoor team-rowing situation, whether virtual-reality and motion-capture technologies can help the acquisition of interpersonal coordination. More specifically, we investigated the possibility for participants to (1) learn the skill of interpersonal coordination when training with a virtual teammate, (2) accelerate learning with real-time visual feedback, and (3) transfer this skill to synchronization with a real teammate. Our results show that participants improved their coordination with both virtual and real teammates, and that this improvement was greater for participants who received the feedback. Overall, our results demonstrate the value of virtual reality for learning coordination with other people and open promising training perspectives for team rowing as well as for several other interpersonal activities.
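The abstract does not specify its coordination measure; a common choice in interpersonal-coordination research is the continuous relative phase between the two partners' movement signals. A hedged NumPy sketch, with the phase estimate (atan2 of normalized velocity against position) and all signal parameters assumed purely for illustration:

```python
import numpy as np

def relative_phase(x1, x2, dt):
    """Continuous relative phase (degrees) between two cyclic movement
    signals; 0 deg indicates perfect in-phase synchrony."""
    def phase(x):
        x = x - x.mean()
        v = np.gradient(x, dt)
        # Normalise so the (position, velocity) portrait is roughly circular
        v = v / (np.abs(v).max() + 1e-12)
        x = x / (np.abs(x).max() + 1e-12)
        return np.arctan2(v, x)
    dphi = np.unwrap(phase(x1)) - np.unwrap(phase(x2))
    # Wrap back into (-180, 180] degrees
    return np.degrees((dphi + np.pi) % (2 * np.pi) - np.pi)

# Two simulated "rowers" stroking at 0.5 Hz, the second lagging by 30 degrees.
t = np.arange(0, 10, 0.01)
r1 = np.sin(2 * np.pi * 0.5 * t)
r2 = np.sin(2 * np.pi * 0.5 * t - np.radians(30))
rp = relative_phase(r1, r2, 0.01)  # magnitude stays near 30 degrees
```

A real-time feedback display could simply color-code this quantity, rewarding values near zero.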


In this paper, an adaptive visual feedback system and controller have been designed and implemented in real time to make the movements of a line-follower robot smoother and faster. The robot consists of a pair of motorized wheels, the real-time controller, and a CMOS camera as the only sensor for line detection and feedback. The measurements, based on real-time image processing and motor-drive feedback, make the robot robust to obstacles and surface disturbances that might deviate it from the line. The image-processing algorithm is also adaptive to the line's color and width. Image-processing techniques have been implemented in real time to detect the line in the image frame and extract the necessary information (such as the line's edges, coordinates, and angle). An NI myRIO module is used as a stand-alone hardware unit and RT (Real-Time) target for implementing the controllers and the image processing in the LabVIEW environment. The results of real-time and non-real-time implementations of the controllers have been compared. To show the performance of real-time image processing in the control of this robot, three types of controllers (P, PI, and fuzzy) have been implemented for line-following tests and the results have been compared. In the end, the fuzzy controller was found to control the robot's movements more smoothly and faster, with fewer errors and a quicker response time than the other controllers.
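As a rough illustration of the P/PI side of the comparison (not the paper's LabVIEW implementation), a discrete PI steering controller for a differential-drive line follower can be sketched as follows; the gains, saturation limit, and error signal are all illustrative:

```python
class PIController:
    """Discrete PI controller: turns the line-offset error reported by the
    image-processing stage (e.g. pixels from image centre) into a steering
    command."""

    def __init__(self, kp, ki, dt, limit):
        self.kp, self.ki, self.dt, self.limit = kp, ki, dt, limit
        self.integral = 0.0

    def update(self, error):
        self.integral += error * self.dt
        u = self.kp * error + self.ki * self.integral
        # Saturate the steering output so one wheel never reverses hard
        return max(-self.limit, min(self.limit, u))

def wheel_speeds(base_speed, steering):
    """Differential drive: split one steering command across two wheels."""
    return base_speed - steering, base_speed + steering

ctrl = PIController(kp=0.8, ki=0.1, dt=0.02, limit=50.0)
left, right = wheel_speeds(100.0, ctrl.update(error=12.0))
```

A pure P controller is the special case `ki=0`; the fuzzy controller in the paper replaces the linear `kp*error + ki*integral` law with rule-based mappings from error and error rate to steering.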

