Exploring Bias and Information Bubbles in YouTube’s Video Recommendation Networks

Author(s): Baris Kirdemir, Nitin Agarwal
Author(s): Chanjal C

Predicting the relevance between two videos from their visual content is a key component of content-based video recommendation and retrieval, with applications in video recommendation, video annotation, category and near-duplicate video retrieval, video copy detection, and so on. Previous works estimate video relevance from the textual content of videos, which leads to poor performance. The proposed method is feature re-learning for video relevance prediction, focusing on visual content: a given feature is projected into a new space by an affine transformation. Unlike previous works that optimize the projection with a standard triplet ranking loss, this work optimizes it with a novel negative-enhanced triplet ranking loss. To generate more training data, a multi-level data augmentation strategy is proposed that works directly on video features and benefits the feature re-learning; it can be applied flexibly to frame-level or video-level features. In addition, the loss function considers the absolute similarity of positive pairs to supervise the feature re-learning process, and a new formula is introduced for video relevance computation.
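The abstract does not spell out the exact forms of the affine re-learning, the feature-level augmentation, or the negative-enhanced loss, so the following is only a minimal NumPy sketch of the overall idea. The names `project`, `augment_frames`, and `negative_enhanced_triplet_loss` are hypothetical, and the specific "enhancement" terms (pushing anchor-negative similarity down, rewarding high absolute anchor-positive similarity) and the drop/noise augmentation are illustrative assumptions, not the authors' actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def project(x, W, b):
    # Feature re-learning via an affine transformation: y = W x + b.
    return x @ W.T + b

def cosine(u, v):
    # Cosine similarity, guarded against zero-norm vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def augment_frames(frames, drop_prob=0.2, noise_scale=0.01):
    # Illustrative augmentation acting directly on frame-level features:
    # randomly drop frames, then add small Gaussian noise to the rest.
    keep = rng.random(len(frames)) > drop_prob
    if not keep.any():
        keep[rng.integers(len(frames))] = True  # keep at least one frame
    kept = frames[keep]
    return kept + rng.normal(0.0, noise_scale, size=kept.shape)

def negative_enhanced_triplet_loss(a, p, n, margin=0.2, alpha=0.5):
    s_ap, s_an = cosine(a, p), cosine(a, n)
    # Relative term: the standard triplet ranking loss on similarities.
    relative = max(0.0, margin - s_ap + s_an)
    # Assumed enhancement terms: penalize anchor-negative similarity
    # directly and reward high absolute anchor-positive similarity.
    enhanced = max(0.0, s_an) + max(0.0, 1.0 - s_ap)
    return relative + alpha * enhanced

# Toy video-level features of dimension 8, projected to dimension 4.
d, k = 8, 4
W, b = rng.normal(size=(k, d)), np.zeros(k)
anchor, pos, neg = (rng.normal(size=d) for _ in range(3))
loss = negative_enhanced_triplet_loss(
    project(anchor, W, b), project(pos, W, b), project(neg, W, b))
```

In a real system the projection `W, b` would be trained by minimizing this loss over sampled triplets, and the augmented frame features would supply additional triplets during training.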


