video search
Recently Published Documents


TOTAL DOCUMENTS: 313 (FIVE YEARS: 41)

H-INDEX: 20 (FIVE YEARS: 2)

2021 ◽  
Author(s):  
Tan Yu ◽  
Yi Yang ◽  
Yi Li ◽  
Lin Liu ◽  
Mingming Sun ◽  
...  
Keyword(s):  

2021 ◽  
Author(s):  
Aozhu Chen ◽  
Fan Hu ◽  
Zihan Wang ◽  
Fangming Zhou ◽  
Xirong Li

Author(s):  
Phuong-Anh Nguyen ◽  
Chong-Wah Ngo

This article conducts a user evaluation to study the performance difference between interactive and automatic search. In particular, the study aims to provide empirical insights into how the performance landscape of video search changes when tens of thousands of concept detectors are freely available to exploit for query formulation. We compare three types of search modes: free-to-play (i.e., search from scratch), non-free-to-play (i.e., search by inspecting results provided by automatic search), and automatic search, including both concept-free and concept-based retrieval paradigms. The study involves a total of 40 participants; each performs interactive search over 15 queries of various difficulty levels using two search modes on the IACC.3 dataset provided by the TRECVid organizers. The study suggests that the performance of automatic search still lags far behind that of interactive search. Furthermore, providing users with the results of automatic search for exploration shows no obvious advantage over asking users to search from scratch. The study also analyzes user behavior to reveal how users compose queries, browse results, and discover new query terms, which can serve as a guideline for future research on both interactive and automatic search.
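As a rough illustration of the concept-based retrieval paradigm mentioned in this abstract, the minimal sketch below maps a free-text query onto a bank of concept detectors and ranks video shots by the scores of the matched concepts. The detector names, per-shot scores, and word-overlap matching rule are illustrative assumptions, not the configuration used in the study.

# Minimal sketch of concept-based automatic retrieval: match query words to
# concept detectors, then rank shots by the summed scores of matched concepts.
# All names and numbers below are hypothetical placeholders.

from typing import Dict, List, Tuple


def select_concepts(query: str, concept_bank: List[str]) -> List[str]:
    """Pick detectors whose names share a word with the query (toy matcher)."""
    query_words = set(query.lower().split())
    return [c for c in concept_bank if query_words & set(c.lower().split("_"))]


def rank_shots(query: str,
               concept_bank: List[str],
               shot_scores: Dict[str, Dict[str, float]]) -> List[Tuple[str, float]]:
    """Rank shots by the sum of their detector scores for the selected concepts."""
    concepts = select_concepts(query, concept_bank)
    ranking = []
    for shot_id, scores in shot_scores.items():
        ranking.append((shot_id, sum(scores.get(c, 0.0) for c in concepts)))
    return sorted(ranking, key=lambda pair: pair[1], reverse=True)


if __name__ == "__main__":
    # Hypothetical detector bank and per-shot detector scores.
    bank = ["person_riding_bicycle", "dog_running", "street_scene"]
    scores = {
        "shot_001": {"person_riding_bicycle": 0.92, "street_scene": 0.40},
        "shot_002": {"dog_running": 0.85},
    }
    print(rank_shots("a person riding a bicycle", bank, scores))

In an interactive mode, a user would refine the selected concepts and browse the ranked shots; in the automatic mode sketched here, the mapping and ranking happen without user intervention.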


2021 ◽  
Vol 7 (5) ◽  
pp. 76
Author(s):  
Giuseppe Amato ◽  
Paolo Bolettieri ◽  
Fabio Carrara ◽  
Franca Debole ◽  
Fabrizio Falchi ◽  
...  

This paper describes in detail VISIONE, a video search system that allows users to search for videos using textual keywords, the occurrence of objects and their spatial relationships, the occurrence of colors and their spatial relationships, and image similarity. These modalities can be combined to express complex queries and meet users’ needs. The peculiarity of our approach is that we encode all the information extracted from the keyframes, such as visual deep features, tags, and color and object locations, using a convenient textual encoding that is indexed in a single text retrieval engine. This offers great flexibility when results corresponding to various parts of the query (visual, text, and locations) need to be merged. In addition, we report an extensive analysis of the retrieval performance of the system, using the query logs generated during the Video Browser Showdown (VBS) 2019 competition. This analysis allowed us to fine-tune the system by choosing the optimal parameters and strategies among those we tested.
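As a rough illustration of the textual-encoding idea described in this abstract, the sketch below converts keyframe annotations (tags, detected objects and colors with coarse grid locations, and a crudely quantized deep feature) into surrogate text tokens and indexes them in a toy in-memory inverted index standing in for the text retrieval engine. The token formats and the quantization scheme are assumptions made for illustration and do not reproduce the actual VISIONE encoding.

# Minimal sketch: encode keyframe information as surrogate text tokens so one
# text index can answer mixed queries. Token formats are hypothetical.

from collections import defaultdict
from typing import Dict, List


def cell_of(x: float, y: float, grid: int = 3) -> str:
    """Map a normalized (x, y) location to a coarse grid cell like 'r2c0'."""
    return f"r{min(int(y * grid), grid - 1)}c{min(int(x * grid), grid - 1)}"


def encode_keyframe(tags: List[str],
                    objects: List[Dict],
                    colors: List[Dict],
                    feature: List[float]) -> List[str]:
    """Turn keyframe annotations into surrogate text tokens."""
    tokens = [f"tag_{t}" for t in tags]
    tokens += [f"obj_{o['label']}_{cell_of(o['x'], o['y'])}" for o in objects]
    tokens += [f"col_{c['name']}_{cell_of(c['x'], c['y'])}" for c in colors]
    # Crude deep-feature quantization: one token per dimension sign.
    tokens += [f"f{i}_{'p' if v >= 0 else 'n'}" for i, v in enumerate(feature)]
    return tokens


# Toy inverted index standing in for the text retrieval engine.
index: Dict[str, set] = defaultdict(set)


def add_keyframe(frame_id: str, tokens: List[str]) -> None:
    for tok in tokens:
        index[tok].add(frame_id)


def search(query_tokens: List[str]) -> List[str]:
    """Rank keyframes by how many query tokens they match."""
    hits = defaultdict(int)
    for tok in query_tokens:
        for frame_id in index[tok]:
            hits[frame_id] += 1
    return sorted(hits, key=hits.get, reverse=True)


if __name__ == "__main__":
    add_keyframe("kf_42", encode_keyframe(
        tags=["beach", "sunset"],
        objects=[{"label": "person", "x": 0.2, "y": 0.8}],
        colors=[{"name": "orange", "x": 0.5, "y": 0.1}],
        feature=[0.3, -0.7]))
    print(search(["tag_beach", "obj_person_r2c0"]))

Because every modality is reduced to tokens in one index, merging the visual, textual, and spatial parts of a query becomes ordinary text retrieval, which is the flexibility the abstract refers to.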


Author(s):  
Ahmad Adly ◽  
Islam Hegazy ◽  
Taha Elarif ◽  
M. S. Abdelwahab
Keyword(s):  

2021 ◽  
Vol 23 ◽  
pp. 353-364
Author(s):  
Yan Wu ◽  
Xianglong Liu ◽  
Haotong Qin ◽  
Ke Xia ◽  
Sheng Hu ◽  
...  

2021 ◽  
pp. 1-1
Author(s):  
Luca Rossetto ◽  
Ralph Gasser ◽  
Silvan Heller ◽  
Mahnaz Parian-Scherb ◽  
Loris Sauter ◽  
...  

Author(s):  
Yoonho Lee ◽  
Heeju Choi ◽  
Sungjune Park ◽  
Yong Man Ro
