Video Semantic Analysis
Recently Published Documents

Total documents: 24 (last five years: 7)
H-index: 4 (last five years: 1)

2021
Author(s):
Feifei Chen

Video event recognition is an important problem in video semantic analysis: video scenes are summarized into video events by understanding their contents. Previous research has proposed many solutions, but so far all of them target only high-quality videos. In this thesis we argue that, given the constraints of modern applications, low-quality videos also deserve attention. This poses a greater challenge, since low-quality videos lack information that previous methods depend on; once quality degrades, the technical assumptions behind those methods no longer hold. This thesis therefore addresses the problem directly. Building on the generic framework proposed in previous work, we propose a novel feature extraction technique that works well with low-quality videos, and we improve the sequence summary model. As a result, our method reaches accuracy similar to previous works while being tested on much lower-quality video.



Author(s):
Daniel Danso Essel
Ben-Bright Benuwa
Benjamin Ghansah

Sparse representation (SR) and dictionary learning (DL) based classifiers have shown promising results in classification tasks, with impressive recognition rates on image data. In video semantic analysis (VSA), however, the local structure of video data contains significant discriminative information required for classification, and to the best of our knowledge this has not been fully explored by recent DL-based approaches. Furthermore, features from videos of the same category do not yet yield similar codings. Motivated by these issues, this paper proposes a novel learning algorithm, sparsity-based locality-sensitive discriminative dictionary learning (SLSDDL), for VSA. The algorithm introduces a category-level discriminant loss function on the sparse coefficients into the structure of the locality-sensitive dictionary learning (LSDL) algorithm. The sparse coefficients of a test video feature sample are then solved by the optimized SLSDDL method, and the semantic class is obtained by minimizing the error between the original and reconstructed samples. Experimental results show that SLSDDL significantly improves video semantic detection compared with state-of-the-art approaches, and its robustness across diverse video environments demonstrates the universality of the approach.
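The final classification step described above, assigning a test sample to the class whose dictionary reconstructs it with least error, can be sketched as follows. This is a minimal illustration, not the authors' SLSDDL optimization: it assumes per-class sub-dictionaries have already been learned, uses plain least-squares coding as a stand-in for the sparse coefficient solver, and `classify_by_residual` is a hypothetical name.

```python
import numpy as np

def classify_by_residual(x, dictionaries):
    """Assign x to the class whose dictionary reconstructs it best.

    dictionaries: list of (dim, n_atoms) arrays, one per class — a
    hypothetical stand-in for learned class-wise sub-dictionaries.
    """
    residuals = []
    for D in dictionaries:
        # Least-squares coding as a simple proxy for the sparse coefficients.
        a, *_ = np.linalg.lstsq(D, x, rcond=None)
        residuals.append(np.linalg.norm(x - D @ a))
    # Minimum reconstruction error decides the semantic class.
    return int(np.argmin(residuals))
```

A proper sparse solver (e.g. orthogonal matching pursuit) would replace the least-squares step; the decision rule by minimum residual is the same.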


In this paper, a subspace-based multimedia data-mining framework is proposed for video semantic analysis. Current content-management systems support retrieval only through low-level features such as motion, color, and texture. The proposed framework achieves full automation via knowledge-based video indexing: it retrieves an appropriate result and replaces a presented object with that result in real time. Alongside this indexing mechanism, a histogram-based color descriptor is introduced to reliably capture and represent the color properties of multiple images. A classification stage then assigns class labels to the discovered associations and uses their appearances in the video to construct video indices. Experimental results demonstrate the performance of the proposed approach.
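The histogram-based color descriptor mentioned above can be illustrated with a short sketch. The bin count and the joint-RGB binning scheme are assumptions for illustration, not details from the paper, and `color_histogram` is a hypothetical name.

```python
import numpy as np

def color_histogram(frame, bins=8):
    """Quantize each RGB channel into `bins` levels and return the
    normalized joint histogram (length bins**3) as a color descriptor."""
    frame = np.asarray(frame, dtype=np.uint8)
    # Map 0..255 intensities to bin indices per channel.
    idx = (frame.astype(np.int64) * bins) // 256
    # Combine the three per-channel indices into one joint bin index.
    flat = (idx[..., 0] * bins + idx[..., 1]) * bins + idx[..., 2]
    hist = np.bincount(flat.ravel(), minlength=bins ** 3).astype(float)
    return hist / hist.sum()
```

Normalizing by the pixel count makes descriptors comparable across frames of different sizes, which matters when indexing multiple images.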


2019
Vol 119
pp. 429-440
Author(s):
Ben-Bright Benuwa
Yongzhao Zhan
Augustine Monney
Benjamin Ghansah
Ernest K. Ansah

2018
Vol 2018
pp. 1-11
Author(s):
Ben-Bright Benuwa
Yongzhao Zhan
Benjamin Ghansah
Ernest K. Ansah
Andriana Sarkodie

Dictionary learning (DL) and sparse representation (SR) based classifiers have greatly improved classification performance and achieve good recognition rates on image data. In video semantic analysis (VSA), the local structure of video data contains vital discriminative information needed for classification, yet current DL-based approaches have not fully exploited it. Moreover, video features of the same category do not yet produce similar codings. To address these issues, this paper proposes a novel learning algorithm, called sparsity-based locality-sensitive discriminative dictionary learning (SLSDDL), for VSA. A category-level discriminant loss function on the sparse coefficients is introduced into the structure of the locality-sensitive dictionary learning (LSDL) algorithm. The sparse coefficients of a test video feature sample are then solved by the optimized SLSDDL method, and the semantic class is obtained by minimizing the error between the original and reconstructed samples. Experimental results show that SLSDDL significantly improves video semantic detection compared with state-of-the-art approaches, and its robustness across diverse video environments demonstrates the universality of the approach.
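A rough sketch of the locality-sensitive idea behind LSDL: atoms far from a sample are penalized during coding, so nearby atoms dominate the representation. The exponential weighting, the ridge-style closed form, and all names below are assumptions for illustration only, not the paper's LSDL/SLSDDL formulation.

```python
import numpy as np

def locality_weights(x, D, sigma=1.0):
    """One weight per atom, growing with the atom's distance from x,
    so that far atoms are penalized (sigma is an assumed bandwidth).
    D has shape (dim, n_atoms)."""
    dists = np.linalg.norm(D - x[:, None], axis=0)
    return np.exp(dists ** 2 / sigma)

def weighted_ridge_code(x, D, sigma=1.0, lam=0.1):
    """Solve min_a ||x - D a||^2 + lam * ||diag(w) a||^2 in closed form —
    a ridge-style stand-in for the locality-constrained coding step."""
    w = locality_weights(x, D, sigma)
    G = D.T @ D + lam * np.diag(w ** 2)
    return np.linalg.solve(G, D.T @ x)
```

Because far atoms carry large weights, their coefficients are shrunk toward zero, which is what makes codings of samples from the same category tend to agree.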


2018
Vol 78 (6)
pp. 6721-6744
Author(s):
Ben-Bright Benuwa
Yongzhao Zhan
JunQi Liu
Jianping Gou
Benjamin Ghansah
...

2018
Vol 77 (21)
pp. 29143-29162
Author(s):
Junqi Liu
Jianping Gou
Yongzhao Zhan
Qirong Mao
