Robust Video Event Recognition

2021 ◽  
Author(s):  
Feifei Chen

Video Event Recognition is an important problem in video semantic analysis: video scenes are summarized into video events by understanding their contents. Previous research has proposed many solutions to this problem; however, so far, all of them target only high-quality videos. In this thesis, we find that, given the constraints of modern applications, low-quality videos also deserve our attention. Compared to previous work, this poses a greater challenge, as low-quality videos lack the essential information that previous methods rely on. With degraded quality, the technical assumptions made by previous works no longer hold. This thesis therefore provides a solution to address this problem. Based on the generic framework proposed by previous work, we propose a novel feature extraction technique that works well with low-quality videos. We also improve the sequence summary model of previous work. As a result, compared with previous works, our method reaches similar accuracy while being tested on much lower-quality video.
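As a rough illustration of the kind of pipeline such an approach implies, the sketch below pairs a lightweight per-frame feature extractor (tolerant of low-resolution input) with a recurrent sequence summary and an event classifier. The class names, layer sizes, and the use of PyTorch are assumptions for illustration only, not the thesis's actual design.

```python
# Hypothetical sketch: per-frame feature extraction on low-resolution frames,
# followed by a recurrent sequence summary and an event classifier.
# Names and sizes are illustrative, not the thesis's actual implementation.
import torch
import torch.nn as nn

class LowQualityEventRecognizer(nn.Module):
    def __init__(self, num_events, feat_dim=128):
        super().__init__()
        # Lightweight frame encoder: small kernels and a single downsampling
        # stage, so 64x64 (or lower) frames still yield usable features.
        self.frame_encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),           # global pooling -> (B*T, 64, 1, 1)
        )
        self.project = nn.Linear(64, feat_dim)
        # Sequence summary: a GRU aggregates per-frame features over time.
        self.summarizer = nn.GRU(feat_dim, feat_dim, batch_first=True)
        self.classifier = nn.Linear(feat_dim, num_events)

    def forward(self, clip):                   # clip: (B, T, 3, H, W)
        b, t = clip.shape[:2]
        frames = clip.reshape(b * t, *clip.shape[2:])
        feats = self.frame_encoder(frames).flatten(1)     # (B*T, 64)
        feats = self.project(feats).reshape(b, t, -1)     # (B, T, feat_dim)
        _, summary = self.summarizer(feats)               # (1, B, feat_dim)
        return self.classifier(summary.squeeze(0))        # (B, num_events)

# Example: a batch of two 16-frame clips at 64x64 resolution.
logits = LowQualityEventRecognizer(num_events=10)(torch.randn(2, 16, 3, 64, 64))
print(logits.shape)  # torch.Size([2, 10])
```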


2016 ◽  
Vol 25 (12) ◽  
pp. 5689-5701 ◽  
Author(s):  
Litao Yu ◽  
Yang Yang ◽  
Zi Huang ◽  
Peng Wang ◽  
Jingkuan Song ◽  
...  

2019 ◽  
Vol 119 ◽  
pp. 429-440 ◽  
Author(s):  
Ben-Bright Benuwa ◽  
Yongzhao Zhan ◽  
Augustine Monney ◽  
Benjamin Ghansah ◽  
Ernest K. Ansah

Author(s):  
Jinhui Tang ◽  
Xian-Sheng Hua ◽  
Tao Mei ◽  
Guo-Jun Qi ◽  
Shipeng Li ◽  
...  

Author(s):  
Daniel Danso Essel ◽  
Ben-Bright Benuwa ◽  
Benjamin Ghansah

Sparse Representation (SR) and Dictionary Learning (DL) based classifiers have shown promising results in classification tasks, with impressive recognition rates on image data. In Video Semantic Analysis (VSA), however, the local structure of video data contains significant discriminative information required for classification. To the best of our knowledge, this has not been fully explored by recent DL-based approaches. Furthermore, video features from the same video category do not yield similar coding results. Based on the foregoing, a novel learning algorithm, Sparsity-based Locality-Sensitive Discriminative Dictionary Learning (SLSDDL) for VSA, is proposed in this paper. In the proposed algorithm, a category-specific discriminant loss function based on the sparse coding coefficients is introduced into the structure of the Locality-Sensitive Dictionary Learning (LSDL) algorithm. Finally, the sparse coefficients of a test video feature sample are solved by the optimized SLSDDL method, and the video semantic classification result is obtained by minimizing the error between the original and reconstructed samples. The experimental results show that the proposed SLSDDL significantly improves the performance of video semantic detection compared with state-of-the-art approaches. The proposed approach is also robust across diverse video environments, demonstrating its generality.
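To make the classification step concrete, the sketch below illustrates the general idea in heavily simplified form: each class keeps its own dictionary, a test feature is coded with a locality-sensitive penalty, and the label with the smallest reconstruction error wins. The function names, the closed-form locality-weighted coding, and the toy data are assumptions for illustration; this is not the SLSDDL algorithm itself.

```python
# Simplified, hypothetical sketch of reconstruction-error classification with
# per-class dictionaries and a locality-sensitive coding penalty.
# This illustrates the general idea only, not the SLSDDL algorithm.
import numpy as np

def locality_sensitive_code(x, D, lam=0.1, sigma=1.0):
    """Code sample x (d,) over dictionary D (d, k) with a locality penalty."""
    dist = np.linalg.norm(D - x[:, None], axis=0)   # distance of x to each atom
    p = np.exp(dist / sigma)                        # far atoms get larger penalties
    A = D.T @ D + lam * np.diag(p ** 2)             # regularized Gram matrix
    return np.linalg.solve(A, D.T @ x)              # coding coefficients

def classify(x, class_dicts, lam=0.1, sigma=1.0):
    """Return the class whose dictionary reconstructs x with the least error."""
    errors = {}
    for label, D in class_dicts.items():
        c = locality_sensitive_code(x, D, lam, sigma)
        errors[label] = np.linalg.norm(x - D @ c)
    return min(errors, key=errors.get)

# Toy usage: two classes with random 64-dimensional dictionaries of 32 atoms.
rng = np.random.default_rng(0)
dicts = {"run": rng.normal(size=(64, 32)), "walk": rng.normal(size=(64, 32))}
sample = dicts["run"] @ rng.normal(size=32) * 0.1   # lies in the "run" span
print(classify(sample, dicts))
```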


High-definition television is becoming ever more popular, opening up the market to new high-definition technologies. Image quality and color fidelity have improved faster than ever. The video surveillance market has been affected by the demand for high-definition television. Since video surveillance produces large amounts of image data, frame rates for high-quality video are generally compromised. However, a network camera that conforms to high-definition television standards performs well in frame rate, resolution, and color fidelity. High-quality network cameras are therefore a good choice for surveillance video.


2020 ◽  
Vol 14 (03) ◽  
pp. 395-422
Author(s):  
Maria Krommyda ◽  
Verena Kantere

As more and more datasets become available, their use in different applications grows in popularity. Their volume and production rate, however, mean that their quality and content control is in most cases non-existent, resulting in many datasets that contain inaccurate information of low quality. The problem is aggravated in the field of conversational assistants, where the datasets come from many heterogeneous sources with no quality assurance. We present here an integrated platform that creates task- and topic-specific conversational datasets to be used for training conversational agents. The platform explores available conversational datasets, extracts information based on semantic similarity and relatedness, and applies a weight-based score function to rank the information by its value for the specific task and topic. The finalized dataset can then be used to train an automated conversational assistant on accurate, high-quality data.
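As an illustration of the ranking idea, the sketch below scores candidate utterances by a weighted combination of their similarity to a topic description and to a task description, then keeps the top entries. The bag-of-words cosine similarity, the weights, and the function names are stand-ins assumed for illustration; the platform's actual semantic similarity and relatedness measures are not reproduced here.

```python
# Hypothetical illustration of weight-based ranking of candidate utterances
# by topic and task relevance. Similarity measure and weights are stand-ins.
from collections import Counter
import math

def cosine_similarity(a, b):
    """Cosine similarity between two texts treated as bags of words."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    norm = math.sqrt(sum(v * v for v in va.values())) * \
           math.sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

def rank_candidates(candidates, topic, task, w_topic=0.6, w_task=0.4, top_k=2):
    """Rank candidate utterances by a weighted topic/task relevance score."""
    scored = [
        (w_topic * cosine_similarity(c, topic) + w_task * cosine_similarity(c, task), c)
        for c in candidates
    ]
    scored.sort(key=lambda s: s[0], reverse=True)
    return [c for _, c in scored[:top_k]]

candidates = [
    "how do I book a table at an italian restaurant",
    "the weather is nice today",
    "can you reserve a restaurant for two tonight",
]
print(rank_candidates(candidates, topic="restaurant booking", task="make a reservation"))
```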


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Kanak Mahadik ◽  
Christopher Wright ◽  
Milind Kulkarni ◽  
Saurabh Bagchi ◽  
Somali Chaterji

Remarkable advancements in high-throughput gene sequencing technologies have led to exponential growth in the number of sequenced genomes. However, the unavailability of highly parallel and scalable de novo assembly algorithms has hindered biologists attempting to swiftly assemble high-quality complex genomes. Popular de Bruijn graph assemblers, such as IDBA-UD, generate high-quality assemblies by iterating over a set of k-values used in the construction of de Bruijn graphs (DBGs). However, this process of sequentially iterating from small to large k-values slows down assembly. In this paper, we propose ScalaDBG, which metamorphoses this sequential process, building DBGs for each distinct k-value in parallel. We develop an innovative mechanism to “patch” a higher k-valued graph with contigs generated from a lower k-valued graph. Moreover, ScalaDBG leverages multi-level parallelism, both scaling up on all cores of a node and scaling out to multiple nodes simultaneously. We demonstrate that ScalaDBG assembles genomes faster than IDBA-UD, with similar accuracy on a variety of datasets (6.8X faster for one of the most complex genomes in our dataset).
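The parallel-over-k idea can be illustrated with a toy sketch: rather than iterating k-values sequentially, a de Bruijn graph is built for each k independently in a process pool. The graph representation and the reads below are simplified assumptions for illustration; ScalaDBG's actual patching mechanism and multi-node parallelism are not reproduced here.

```python
# Toy, hypothetical sketch of building de Bruijn graphs for several k-values
# in parallel, instead of iterating over k sequentially.
from multiprocessing import Pool

def build_dbg(args):
    """Build a de Bruijn graph (prefix node -> set of suffix nodes) for one k."""
    reads, k = args
    graph = {}
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph.setdefault(kmer[:-1], set()).add(kmer[1:])
    return k, graph

if __name__ == "__main__":
    reads = ["ACGTACGTGA", "CGTGACCTTA", "GACCTTACGT"]
    ks = [3, 4, 5]                    # distinct k-values handled in parallel
    with Pool(processes=len(ks)) as pool:
        graphs = dict(pool.map(build_dbg, [(reads, k) for k in ks]))
    for k in ks:
        print(f"k={k}: {len(graphs[k])} nodes")
```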

