Computer Vision-Driven Evaluation System for Assisted Decision-Making in Sports Training

2021 ◽  
Vol 2021 ◽  
pp. 1-7
Author(s):  
Lijin Zhu

Computer vision has become a fast-developing technology in the field of artificial intelligence, and thanks to the rapid development of deep learning, its application fields are also expanding. Combining it with sports will be of great practical value. When a traditional exercise assistance system is introduced into sports training, the athlete's training information can be obtained by monitoring the exercise process through sensors and other equipment, which helps the athlete retrospectively analyze technical actions. However, such a system must be equipped with multiple sensor devices, and the accuracy of the exercise information it provides cannot always be guaranteed. This paper proposes a motion assistance evaluation system based on deep learning algorithms for human posture recognition. The system is divided into three sections: a standard motion database, auxiliary instruction, and overall evaluation. The standard motion database can be customized by the system user, and the auxiliary teaching component can be integrated with it. The user's actions are compared with the standard actions and intuitively displayed to trainers as data. The overall evaluation component can recognize and display video files, giving trainers an intelligent training platform. Simulation tests were also carried out, demonstrating the efficacy of the algorithm used in this paper.
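The comparison of user actions against standard actions described above could, for instance, be based on joint-angle differences between detected pose keypoints. A minimal sketch under that assumption (the keypoint representation and the angle-based metric are illustrative choices, not details from the abstract):

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by keypoints a-b-c, each an (x, y) pair."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos = max(-1.0, min(1.0, dot / (math.hypot(*v1) * math.hypot(*v2))))
    return math.degrees(math.acos(cos))

def compare_pose(user_kps, standard_kps, triples):
    """Mean absolute joint-angle difference over the listed (a, b, c) index triples."""
    diffs = [abs(joint_angle(*(user_kps[i] for i in t)) -
                 joint_angle(*(standard_kps[i] for i in t))) for t in triples]
    return sum(diffs) / len(diffs)
```

A triple such as (shoulder, elbow, wrist) would score the elbow angle; a small mean difference indicates the user's action is close to the standard action.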

2021 ◽  
Vol 2021 ◽  
pp. 1-8 ◽  
Author(s):  
Zhongxiao Wang

With the rapid development of deep learning, computer vision has become a rapidly advancing field within artificial intelligence. Combining deep learning with physical training will bring good practical value. Physical training has different effects on people's body shape, physical function, and physical quality, reflected mainly in the changes of relevant physical indicators after training. Therefore, the purpose of this article is to study a deep learning-based method for evaluating the impact of sports training on physical indicators. This paper mainly uses the convolutional neural network in deep learning to design sports training, then constructs an evaluation system for the impact on physical indicators, and finally uses the deep learning algorithm to evaluate that impact. The experimental results show that the accuracy of the algorithm proposed in this paper is significantly higher than that of the other three algorithms. In angular motion, the accuracy of the mean algorithm is 0.4, that of the variance algorithm 0.2, that of the RFE algorithm 0.4, and that of the DLA algorithm 0.6. Similarly, in foot racing and skill sports, the accuracy of the proposed algorithm is significantly higher than that of the other algorithms. Therefore, the method proposed in this paper is more effective for evaluating the impact of physical training on physical indicators.


2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Peng Wang

With the rapid development of science and technology in today's society, various industries are pursuing digitization and intelligence, and pattern recognition and computer vision are also undergoing constant technological innovation. Computer vision aims to let computers, cameras, and other machines receive information as human beings do, analyze and process its semantic content, and formulate coping strategies. As an important research direction in computer vision, human motion recognition has gained new solutions with the gradual rise of deep learning. Human motion recognition technology has high market value and broad application prospects in intelligent monitoring, motion analysis, human-computer interaction, and medical monitoring. This paper mainly studies the recognition of sports training actions based on a deep learning algorithm. Experimental work has been carried out to demonstrate the validity of the proposed research.


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Naichun Gao

Embedded networking has broad prospects. With the Internet and the rapid development of computer technology, computer vision has a wide range of applications in many fields, and identifying wrong movements in sports training is a particularly important one. To study how computer vision technology can identify athletes' wrong movements in sports training, this paper constructs a hidden Markov model based on computer vision technology to collect video and recognize the landing, take-off, and badminton serving movements of a team of athletes under training conditions. A Bayesian classification algorithm is used to analyze the acquired training-action data, obtaining the error frequency and the number of errors of the landing-jump action, as well as three characteristic quantities (the displacement, velocity, and acceleration of the athlete's body center of gravity) in the cases of successful and incorrect badminton serves. The accuracy of the action recognition method used in this article is then compared, over 30 experiments, with an action recognition method based on deep learning and one based on EMG signals. The training process of deep learning is split into two stages: first, a monolayer of neurons is built layer by layer so that the network is trained one layer at a time; once all layers are trained, tuning is performed using the wake-sleep algorithm. The final results show that the athletes' wrong landing-jump actions are concentrated in knee valgus, with a total error frequency of 58% and a personal error frequency of 45%; problems with the landing distance between the two feet also appeared frequently, with a total frequency of 50% and a personal frequency of 30%.
Therefore, athletes should pay more attention to knee valgus and the distance between their feet when performing landing jumps; differences in the displacement, speed, and acceleration of the body's center of gravity during the badminton serve affect whether the action is erroneous. The action recognition method used in this study has certain advantages over the other two methods, with higher recognition accuracy.
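The displacement, velocity, and acceleration features of the body's center of gravity described above can be derived from a sampled position series by finite differences. A minimal sketch, assuming a fixed sampling timestep (the abstract does not specify how these quantities were extracted):

```python
def finite_diff(series, dt):
    """First-order finite difference of a sampled signal with timestep dt."""
    return [(b - a) / dt for a, b in zip(series, series[1:])]

def cog_features(cog, dt):
    """Displacement, velocity, and acceleration series from center-of-gravity positions."""
    disp = [x - cog[0] for x in cog]   # displacement relative to the starting position
    vel = finite_diff(cog, dt)         # velocity between consecutive samples
    acc = finite_diff(vel, dt)         # acceleration between consecutive velocities
    return disp, vel, acc
```

In practice the same extraction would be applied per axis of the tracked center-of-gravity trajectory, and the resulting series compared between successful and incorrect serves.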


2014 ◽  
Vol 926-930 ◽  
pp. 2743-2746 ◽  
Author(s):  
Rui Min Hu ◽  
Zhen Dong He ◽  
Feng Bai

With the rapid development of computer technology, video-based human motion tracking, which uses ordinary cameras to track unmarked human movement, has emerged. It has important application value in automatic monitoring, human-computer interaction, sports analysis, and many other fields, and has been a hot research direction in computer vision in recent years. Because of the complexity of the problem and the limited understanding of the nature of human vision, video-based human tracking remains a difficult problem in computer vision. The research content of this article is set in sports training: to meet the requirements of non-contact, non-interfering measurement and simulation for motion analysis, it uses computer graphics and computer vision technology to discuss 3D human motion simulation based on video analysis.


2021 ◽  
Vol 233 ◽  
pp. 04039
Author(s):  
Zhu Denghui ◽  
Song Lizhong ◽  
Feng yuan ◽  
Yang Quanshun

One of the core tasks of computer vision is target detection. With the rapid development of deep learning, target detection based on deep learning has become the mainstream approach in this field. As one of its main application fields, damage identification has achieved important progress in the past decade. This paper systematically summarizes the research progress of damage identification algorithms based on deep learning, analyzing the advantages and disadvantages of each algorithm and its detection results on the VOC2007, VOC2012, and COCO datasets. Finally, the main contents of this paper are summarized, and the research prospects of deep learning-based damage identification algorithms are discussed.
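The detection results reported on VOC and COCO are typically scored by mean average precision (mAP), which rests on intersection over union (IoU) between predicted and ground-truth boxes. A minimal IoU sketch, assuming boxes in (x1, y1, x2, y2) corner format:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes in (x1, y1, x2, y2) format."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # overlap area, 0 if disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

A prediction usually counts as a true positive when its IoU with a ground-truth box exceeds a threshold (0.5 for VOC; COCO averages over thresholds from 0.5 to 0.95).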


Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5598
Author(s):  
Jiaqi Li ◽  
Xuefeng Zhao ◽  
Guangyi Zhou ◽  
Mingyuan Zhang ◽  
Dongfang Li ◽  
...  

With the rapid development of deep learning, computer vision has assisted in solving a variety of problems in engineering construction. However, very few computer vision-based approaches have been proposed for evaluating work productivity. Therefore, taking a super high-rise project as a research case and using the object information detected by a deep learning algorithm, a computer vision-based method for evaluating the productivity of assembling reinforcement is proposed. Firstly, a detector based on CenterNet that can accurately distinguish the various entities related to assembling reinforcement is established, with DLA34 selected as the backbone. The mAP reaches 0.9682, and detecting a single image takes as little as 0.076 s. Secondly, the trained detector is applied to the video frames, yielding images with detection boxes and documents with coordinates. The positional relationship between the detected work objects and the detected workers determines how many workers (N) have participated in the task, and the time (T) taken to perform the process is obtained from the change of coordinates of the work object. Finally, productivity is evaluated according to N and T. The authors use four actual construction videos for validation, and the results show that the productivity evaluation is generally consistent with actual conditions. The contribution of this research to construction management is twofold: on the one hand, without affecting the normal behavior of workers, a connection between construction individuals and the work object is established and work productivity evaluation is realized; on the other hand, the proposed method has a positive effect on improving the efficiency of construction management.
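The evaluation step above combines a worker count N, obtained from the positional relationship between detected boxes, with a task time T. A minimal sketch, assuming boxes in (x1, y1, x2, y2) format and a simple rule that a worker "participates" when their box overlaps the work-object box (the paper's actual proximity rule is not specified in the abstract):

```python
def overlaps(a, b):
    """True if two axis-aligned boxes (x1, y1, x2, y2) intersect."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def productivity(worker_boxes, object_box, t_start, t_end, units_done=1.0):
    """Units of work per worker-second: N from box overlap, T from timestamps."""
    n = sum(overlaps(w, object_box) for w in worker_boxes)  # participating workers
    t = t_end - t_start                                     # task duration T
    return units_done / (n * t) if n and t else 0.0
```

Here `units_done` is a hypothetical measure of completed work; in the paper T is derived from the change of the work object's coordinates across frames rather than from external timestamps.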


2021 ◽  
Vol 109 (5) ◽  
pp. 863-890
Author(s):  
Yannis Panagakis ◽  
Jean Kossaifi ◽  
Grigorios G. Chrysos ◽  
James Oldfield ◽  
Mihalis A. Nicolaou ◽  
...  

Plant Methods ◽  
2021 ◽  
Vol 17 (1) ◽  
Author(s):  
Shuo Zhou ◽  
Xiujuan Chai ◽  
Zixuan Yang ◽  
Hongwu Wang ◽  
Chenxue Yang ◽  
...  

Abstract Background Maize (Zea mays L.) is one of the most important food sources in the world and has been one of the main targets of plant genetics and phenotypic research for centuries. Observation and analysis of various morphological phenotypic traits during maize growth are essential for genetic and breeding study. The generally huge number of samples produces an enormous amount of high-resolution image data. While high-throughput plant phenotyping platforms are increasingly used in maize breeding trials, there is a real need for software tools that can automatically identify visual phenotypic features of maize plants and implement batch processing of image datasets. Results On the boundary between computer vision and plant science, we utilize advanced deep learning methods based on convolutional neural networks to empower the workflow of maize phenotyping analysis. This paper presents Maize-IAS (Maize Image Analysis Software), an integrated application supporting one-click analysis of maize phenotype and embedding multiple functions: (I) Projection, (II) Color Analysis, (III) Internode Length, (IV) Height, (V) Stem Diameter, and (VI) Leaves Counting. Taking an RGB image of maize as input, the software provides a user-friendly graphical interface and rapid calculation of multiple important phenotypic characteristics, including leaf sheath point detection and leaf segmentation. In the Leaves Counting function, the mean and standard deviation of the difference between prediction and ground truth are 1.60 and 1.625. Conclusion Maize-IAS is easy to use and demands neither professional knowledge of computer vision nor deep learning. All functions support batch processing, enabling automated and labor-reduced recording, measurement, and quantitative analysis of maize growth traits on a large dataset. We prove the efficiency and potential capability of our techniques and software for image-based plant research, which also demonstrates the feasibility and capability of AI technology implemented in agriculture and plant science.
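The Leaves Counting evaluation above reports the mean and standard deviation of the difference between predicted and ground-truth counts. A minimal sketch of those two statistics (whether the paper uses the population or sample standard deviation is not stated; population is assumed here):

```python
def count_error_stats(predicted, ground_truth):
    """Mean and population standard deviation of per-plant count differences."""
    diffs = [p - g for p, g in zip(predicted, ground_truth)]
    mean = sum(diffs) / len(diffs)
    var = sum((d - mean) ** 2 for d in diffs) / len(diffs)
    return mean, var ** 0.5
```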


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 343
Author(s):  
Kim Bjerge ◽  
Jakob Bonde Nielsen ◽  
Martin Videbæk Sepstrup ◽  
Flemming Helsing-Nielsen ◽  
Toke Thomas Høye

Insect monitoring methods are typically very time-consuming and involve substantial investment in species identification following manual trapping in the field. Insect traps are often only serviced weekly, resulting in low temporal resolution of the monitoring data, which hampers the ecological interpretation. This paper presents a portable computer vision system capable of attracting and detecting live insects. More specifically, the paper proposes detection and classification of species by recording images of live individuals attracted to a light trap. An Automated Moth Trap (AMT) with multiple light sources and a camera was designed to attract and monitor live insects during twilight and night hours. A computer vision algorithm referred to as Moth Classification and Counting (MCC), based on deep learning analysis of the captured images, tracked and counted the number of insects and identified moth species. Observations over 48 nights resulted in the capture of more than 250,000 images with an average of 5675 images per night. A customized convolutional neural network was trained on 2000 labeled images of live moths represented by eight different classes, achieving a high validation F1-score of 0.93. The algorithm measured an average classification and tracking F1-score of 0.71 and a tracking detection rate of 0.79. Overall, the proposed computer vision system and algorithm showed promising results as a low-cost solution for non-destructive and automatic monitoring of moths.
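The classification and tracking F1-scores reported above are the harmonic mean of precision and recall. A minimal sketch computing F1 from raw true-positive, false-positive, and false-negative counts:

```python
def f1_score(tp, fp, fn):
    """F1 as the harmonic mean of precision and recall from raw counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

For example, 8 correct detections with 2 false alarms and 2 misses give precision and recall of 0.8 each, hence F1 = 0.8; the tracking detection rate of 0.79 reported above corresponds to recall alone.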

