GraDED: A graph-based parametric dictionary learning algorithm for event detection

Author(s):  
Tamal Batabyal ◽  
Rituparna Sarkar ◽  
Scott T. Acton
2021 ◽  
Vol 429 ◽  
pp. 89-100
Author(s):  
Zhenni Li ◽  
Chao Wan ◽  
Benying Tan ◽  
Zuyuan Yang ◽  
Shengli Xie

2021 ◽  
pp. 1-11
Author(s):  
Yanan Huang ◽  
Yuji Miao ◽  
Zhenjing Da

Methods for multi-modal English event detection from a single data source, and for transfer-learning-based isomorphic event detection across different English data sources, still need improvement. To improve the efficiency of English event detection across data sources, this paper proposes, based on a transfer learning algorithm, multi-modal event detection under a single data source and isomorphic event detection across different data sources. Moreover, by stacking multiple classification models, the paper fuses the individual features with one another, and applies adversarial training driven by the discrepancy between two classifiers to further align the distributions of data from different sources. In addition, to validate the proposed algorithm, a multi-source English event detection dataset is collected. Finally, the proposed method is evaluated on this dataset and compared with the current mainstream transfer learning methods. Experimental analysis, convergence analysis, visual analysis, and parameter evaluation demonstrate the effectiveness of the proposed algorithm.
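The abstract's adversarial step trains on the disagreement between two classifiers to pull the source and target feature distributions together. A minimal sketch of that discrepancy measure (the paper's exact loss is not given; the L1 distance between soft predictions used here is an assumption, following common classifier-discrepancy formulations):

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax, numerically stabilized."""
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def classifier_discrepancy(logits1, logits2):
    """Mean L1 distance between the two classifiers' soft predictions.

    In discrepancy-based adversarial training, the two classifiers are
    updated to MAXIMIZE this quantity on target-domain data, while the
    shared feature extractor is updated to MINIMIZE it, driving the
    feature distributions of the different sources to look similar.
    """
    p1, p2 = softmax(logits1), softmax(logits2)
    return float(np.abs(p1 - p2).mean())
```

When the two classifiers agree exactly, the discrepancy is zero; any disagreement on target samples produces a positive signal for the feature extractor to reduce.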


Author(s):  
Daniel Danso Essel ◽  
Ben-Bright Benuwa ◽  
Benjamin Ghansah

Sparse Representation (SR) and Dictionary Learning (DL) based classifiers have shown promising results in classification tasks, with impressive recognition rates on image data. In Video Semantic Analysis (VSA), however, the local structure of video data contains significant discriminative information required for classification. To the best of our knowledge, this has not been fully explored by recent DL-based approaches. Further, similar coding results are not obtained from video features of the same video category. Based on the foregoing, a novel learning algorithm, Sparsity-based Locality-Sensitive Discriminative Dictionary Learning (SLSDDL), is proposed for VSA in this paper. In the proposed algorithm, a discriminative loss function for the category, based on sparse coding of the sparse coefficients, is introduced into the structure of the Locality-Sensitive Dictionary Learning (LSDL) algorithm. Finally, the sparse coefficients of a test video feature sample are solved by the optimization method of SLSDDL, and the video-semantic classification result is obtained by minimizing the error between the original and reconstructed samples. The experimental results show that the proposed SLSDDL significantly improves the performance of video semantic detection compared with state-of-the-art approaches. The proposed approach is also robust to diverse video environments, demonstrating the universality of the novel approach.
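The final classification step described above assigns the label whose reconstruction best matches the test sample. A minimal sketch of that residual-based decision rule (assuming, as in standard SR classification, a dictionary whose atoms carry class labels; the coefficient solver itself is SLSDDL's and is not reproduced here):

```python
import numpy as np

def classify_by_residual(sample, dictionary, coeffs, atom_labels):
    """Pick the class whose atoms best reconstruct the sample.

    For each class c, keep only the sparse coefficients of atoms
    labeled c and compute the residual ||y - D * delta_c(x)||_2;
    the predicted class is the one with the smallest residual.

    sample      : (d,)   test feature vector y
    dictionary  : (d, k) dictionary D, one atom per column
    coeffs      : (k,)   sparse code x solved for the sample
    atom_labels : (k,)   class label of each atom
    """
    labels = np.unique(atom_labels)
    residuals = []
    for c in labels:
        masked = np.where(atom_labels == c, coeffs, 0.0)  # delta_c(x)
        residuals.append(np.linalg.norm(sample - dictionary @ masked))
    return labels[int(np.argmin(residuals))]
```

The dictionary names and shapes here are illustrative; the point is that classification reduces to comparing per-class reconstruction errors, so no separate classifier needs to be trained on top of the codes.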


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 212456-212466
Author(s):  
Zhuoyun Miao ◽  
Hongjuan Zhang ◽  
Shuang Ma

2020 ◽  
Vol 29 ◽  
pp. 9220-9233
Author(s):  
Na Han ◽  
Jigang Wu ◽  
Xiaozhao Fang ◽  
Shaohua Teng ◽  
Guoxu Zhou ◽  
...  

Author(s):  
Tao Xiong ◽  
Jie Zhang ◽  
Yuanming Suo ◽  
Dung N. Tran ◽  
Ralph Etienne-Cummings ◽  
...  
