Key Frame Extraction for Sports Training Based on Improved Deep Learning

2021, Vol 2021, pp. 1-8
Author(s): Changhai Lv, Junfeng Li, Jian Tian

With rapid technological advances in sports, the number of athletes has grown steadily. For sports professionals, it is essential to monitor and analyze athletes' poses during training. Key frame extraction from training videos plays a significant role in easing the analysis of sports training videos. This paper develops a sports action classification system for accurately classifying athletes' actions. Key video frames are extracted from the sports training video to highlight the distinct actions in sports training. Subsequently, a fully convolutional network (FCN) is used to detect the region of interest (ROI) for pose detection in each frame, followed by a convolutional neural network (CNN) that estimates the pose probability of each frame. Moreover, a distinct key frame extraction approach is established to extract key frames based on the probability differences between neighboring frames. The experimental results show that the proposed method performs well and recognizes athletes' postures with an average classification rate of 98%. The experimental results and analysis validate that the proposed key frame extraction method outperforms its counterparts in key pose probability estimation and key pose extraction.
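As an illustration of the selection criterion described above, the sketch below chooses key frames from per-frame pose probabilities by thresholding the difference between neighboring frames. The pose_probability callable is a hypothetical stand-in for the paper's FCN/CNN stages, and the threshold value is an assumption, not a figure from the paper.

from typing import Callable, List, Sequence

import numpy as np


def select_key_frames(frames: Sequence[np.ndarray],
                      pose_probability: Callable[[np.ndarray], float],
                      diff_threshold: float = 0.15) -> List[int]:
    """Return indices of frames whose pose probability jumps sharply
    relative to the previous frame (assumed key-frame criterion)."""
    probs = np.array([pose_probability(f) for f in frames])
    diffs = np.abs(np.diff(probs))  # |p[i+1] - p[i]| for neighboring frames
    # Keep the first frame plus every frame whose jump exceeds the threshold.
    return [0] + [i + 1 for i, d in enumerate(diffs) if d > diff_threshold]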

2014, Vol 2014, pp. 1-9
Author(s): Shaoshuai Lei, Gang Xie, Gaowei Yan

Existing key-frame extraction methods are mostly oriented toward video summarization, while the task of indexing key frames is ignored. This paper presents a novel key-frame extraction approach that serves both video summarization and video indexing. First, a dynamic distance separability algorithm is proposed to divide a shot into subshots based on semantic structure; appropriate key frames are then extracted within each subshot via singular value decomposition (SVD). Finally, three evaluation indicators are proposed to assess the performance of the new approach. Experimental results show that the proposed approach yields a good semantic structure for semantics-based video indexing while producing video summaries consistent with human perception.
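A minimal sketch of the per-subshot SVD selection step is given below, assuming the subshots are already segmented (the dynamic distance separability algorithm is not reproduced) and that frames are described by simple color-histogram features rather than the paper's actual descriptor.

import numpy as np


def frame_histogram(frame: np.ndarray, bins: int = 16) -> np.ndarray:
    """Flattened per-channel intensity histogram used as a frame feature."""
    counts = [np.histogram(frame[..., c], bins=bins, range=(0, 255))[0]
              for c in range(frame.shape[-1])]
    h = np.concatenate(counts).astype(float)
    return h / (h.sum() + 1e-9)


def key_frames_by_svd(subshot, rank: int = 1):
    """For each of the top `rank` singular directions of the centered
    frame-feature matrix, keep the frame that projects onto it most strongly."""
    features = np.stack([frame_histogram(f) for f in subshot])  # (n_frames, d)
    u, s, vt = np.linalg.svd(features - features.mean(axis=0), full_matrices=False)
    picks = {int(np.argmax(np.abs(u[:, k]))) for k in range(min(rank, u.shape[1]))}
    return sorted(picks)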


Author(s): Meng Jian, Shijie Zhang, Xiangdong Wang, Yudi He, Lifang Wu

2019, Vol 328, pp. 147-156
Author(s): Meng Jian, Shuai Zhang, Lifang Wu, Shijie Zhang, Xiangdong Wang, ...

2020, Vol 2020, pp. 1-8
Author(s): Chen Zhang, Bin Hu, Yucong Suo, Zhiqiang Zou, Yimu Ji

In this paper, we study the challenge of image-to-video retrieval, which uses a query image to search for relevant frames in a large collection of videos. A novel framework based on convolutional neural networks (CNNs) is proposed to perform large-scale video retrieval with low storage cost and high search efficiency. Our framework consists of a key-frame extraction algorithm and a feature aggregation strategy. Specifically, the key-frame extraction algorithm exploits clustering so that redundant information is removed from the video data and storage cost is greatly reduced. The feature aggregation strategy uses average pooling to encode deep local convolutional features, followed by coarse-to-fine retrieval, which allows rapid search in a large-scale video database. Results from extensive experiments on two publicly available datasets demonstrate that the proposed method achieves superior efficiency as well as accuracy over other state-of-the-art visual search methods.
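The sketch below illustrates the two components in simplified form, under assumptions not taken from the paper: per-frame CNN descriptors are given as a matrix, scikit-learn's k-means stands in for the clustering step, and average pooling collapses a convolutional feature map into a single normalized descriptor per key frame.

import numpy as np
from sklearn.cluster import KMeans


def cluster_key_frames(frame_features: np.ndarray, n_key_frames: int = 10):
    """Cluster per-frame descriptors and keep the frame nearest each centroid."""
    km = KMeans(n_clusters=n_key_frames, n_init=10, random_state=0).fit(frame_features)
    keys = {int(np.argmin(np.linalg.norm(frame_features - c, axis=1)))
            for c in km.cluster_centers_}
    return sorted(keys)


def aggregate_local_features(conv_map: np.ndarray) -> np.ndarray:
    """Average-pool an (H, W, C) convolutional feature map into a C-dim
    descriptor and L2-normalize it for cosine-similarity retrieval."""
    desc = conv_map.mean(axis=(0, 1))
    return desc / (np.linalg.norm(desc) + 1e-9)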

