Shot Type Classification in Sports Video Using Fuzzy Information Granular

Author(s):
Congyan Lang, De Xu, Wengang Cheng, Yiwei Jiang

2020, Vol. 10 (10), pp. 3390
Author(s):
Hui-Yong Bak, Seung-Bo Park

The shot-type decision is an important pre-task in movie analysis, because the chosen shot type conveys a great deal of information, such as the characters' emotions and psychology and the spatial context. To analyze a variety of movies, a technique that automatically classifies shot types is required. Previous studies have classified shot types either by the proportion of the face on-screen or with a convolutional neural network (CNN). Methods based on the on-screen face proportion cannot classify a shot when no person appears in it. A CNN can classify shot types even in the absence of a person on the screen, but certain shots still cannot be classified, because the network relies only on the low-level characteristics and patterns of the image rather than analyzing it semantically. Additional information is therefore needed to approach the image semantically, and this can be obtained through semantic segmentation. Consequently, in the present study, the performance of shot-type classification was improved by applying semantic segmentation as a preprocessing step to the frames extracted from the movie. Semantic segmentation interprets images semantically and delineates the boundary relationships among objects; representative technologies include Mask R-CNN and YOLACT. A study was conducted to compare and evaluate performance using these as preprocessing steps for shot-type classification. As a result, the average accuracy of shot-type classification using frames preprocessed with semantic segmentation increased by 1.9%, from 93% to 94.9%, compared with classification using unprocessed frames. In particular, when using ResNet-50 and YOLACT, shot-type classification showed a 3% performance improvement (from 93% to 96% accuracy).
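The preprocessing idea described above can be sketched as blending per-instance segmentation masks (as produced by a model such as Mask R-CNN or YOLACT) into the RGB frame before it is passed to the shot-type classifier. The following is a minimal, hypothetical illustration in NumPy; the function name, the alpha-blending scheme, and the toy mask are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def overlay_masks(frame, masks, colors, alpha=0.5):
    """Blend binary instance masks into an RGB frame.

    frame  : (H, W, 3) uint8 array (one movie frame)
    masks  : list of (H, W) boolean arrays, one per detected instance
             (e.g. output of Mask R-CNN or YOLACT; hypothetical inputs here)
    colors : list of per-instance RGB colors
    Returns the mask-annotated frame that would be fed to the classifier.
    """
    out = frame.astype(np.float32)
    for mask, color in zip(masks, colors):
        color = np.asarray(color, dtype=np.float32)
        # Alpha-blend the instance color over the masked pixels only.
        out[mask] = (1.0 - alpha) * out[mask] + alpha * color
    return out.astype(np.uint8)

# Toy example: one "person" mask on a 4x4 uniform gray frame.
frame = np.full((4, 4, 3), 100, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
result = overlay_masks(frame, [mask], [(255, 0, 0)])
```

In the study's pipeline, a frame preprocessed this way would then be resized and passed to a CNN backbone such as ResNet-50 for shot-type classification.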


Author(s):
Anyi Rao, Jiaze Wang, Linning Xu, Xuekun Jiang, Qingqiu Huang, ...
