Disguise Adversarial Networks for Click-through Rate Prediction

Author(s): Yue Deng, Yilin Shen, Hongxia Jin

We introduced an adversarial learning framework for improving click-through rate (CTR) prediction in Ads recommendation. Our approach was motivated by the extremely low click-through rate and imbalanced label distribution observed in historical Ads impressions. We therefore proposed Disguise Adversarial Networks (DAN) to improve the accuracy of supervised learning with limited positive-class information. In the context of CTR prediction, the rationale behind DAN can be intuitively understood as "non-clicked Ads makeup": DAN disguises disliked Ads impressions (non-clicks) as interesting ones and encourages a discriminator to classify these disguised Ads as positive recommendations. On the adversarial side, the discriminator is expected to remain sober-minded: it is optimized to assign these disguised Ads to their inherent classes according to an unsupervised, information-theoretic assignment strategy. We applied DAN to two Ads datasets, covering both mobile and display Ads, for CTR prediction. The results showed that DAN significantly outperformed other supervised learning methods and generative adversarial networks (GANs) in CTR prediction.
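As a rough, self-contained illustration (not the authors' implementation), the disguiser/discriminator interplay described above can be sketched with a linear "makeup" shift and a logistic-regression discriminator on toy data. All names are hypothetical, and labeling disguised non-clicks with their inherent class directly is a simplification standing in for DAN's information-theoretic assignment strategy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy imbalanced CTR data (hypothetical): 2-D features, ~5% clicks.
X_neg = rng.normal(-1.0, 0.5, size=(950, 2))   # non-clicked impressions
X_pos = rng.normal(+1.0, 0.5, size=(50, 2))    # clicked impressions

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

w, b = np.zeros(2), 0.0   # discriminator: logistic regression
s = np.zeros(2)           # disguiser: additive "makeup" shift for non-clicks
lr = 0.05

for _ in range(500):
    # Disguiser step: push disguised non-clicks toward the positive class.
    p_dis = sigmoid((X_neg + s) @ w + b)
    s -= lr * ((p_dis - 1.0)[:, None] * w).mean(axis=0)
    s = np.clip(s, -1.0, 1.0)  # keep the makeup bounded (a simplification)

    # Discriminator step: stay sober-minded -- label real clicks 1, and both
    # real and disguised non-clicks by their inherent class 0.
    X = np.vstack([X_pos, X_neg, X_neg + s])
    y = np.concatenate([np.ones(len(X_pos)), np.zeros(2 * len(X_neg))])
    err = sigmoid(X @ w + b) - y
    w -= lr * (err[:, None] * X).mean(axis=0)
    b -= lr * err.mean()

# Despite the disguised examples, the discriminator should still
# separate the original clicked / non-clicked classes.
acc = np.mean(np.concatenate([
    sigmoid(X_pos @ w + b) > 0.5,
    sigmoid(X_neg @ w + b) < 0.5,
]))
```

The disguised non-clicks act as hard positive-looking negatives, so the discriminator sees far more informative examples near the boundary than the raw 5%-positive data alone would provide.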

Author(s): Tao He, Yuan-Fang Li, Lianli Gao, Dongxiang Zhang, Jingkuan Song

With the recent explosive growth of digital data, image recognition and retrieval have become critical practical applications. Hashing is an effective solution to this problem due to its low storage requirement and high query speed. However, most past work focuses on hashing in a single (source) domain, so the learned hash function may not adapt well to a new (target) domain whose distribution differs substantially from the source. In this paper, we explore an end-to-end domain-adaptive learning framework that simultaneously generates discriminative hash codes and classifies target-domain images. Our method encodes images from both domains into a common semantic space, followed by two independent generative adversarial networks aiming to cross-reconstruct the two domains' images, reducing domain disparity and improving alignment in the shared space. We evaluate our framework on four public benchmark datasets; the results show that our method is superior to other state-of-the-art methods on object recognition and image retrieval.
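As a loose illustration (not the paper's trained architecture), the retrieval side of hashing can be sketched as sign-thresholding a projection into a shared space and ranking by Hamming distance. Here the projection `W` is random rather than adversarially learned, and all names and dimensions are our own assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: a fixed linear map standing in for the learned encoder
# into the semantic common space (trained adversarially in the paper).
d_feat, d_code = 16, 32
W = rng.normal(size=(d_feat, d_code))

def hash_codes(X):
    """Binary codes: sign of the projection into the common space."""
    return (X @ W > 0).astype(np.uint8)

def hamming(codes, q):
    """Hamming distance from each row of `codes` to a single code `q`."""
    return np.count_nonzero(codes != q, axis=1)

database = rng.normal(size=(100, d_feat))             # gallery features
query = database[0] + 0.01 * rng.normal(size=d_feat)  # near-duplicate query

codes = hash_codes(database)
q = hash_codes(query[None])[0]
nearest = int(np.argmin(hamming(codes, q)))
print(nearest)  # expect the near-duplicate, item 0
```

Because lookups reduce to XOR-and-popcount over short binary codes, this is what gives hashing its low storage cost and high query speed; the paper's contribution is making the codes remain discriminative when queries come from a shifted target domain.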

