A Matching Approach to Factor Scores Based on Online Sponsored Search Auction

2018 ◽  
Vol 6 (1) ◽  
pp. 11-30 ◽  
Author(s):  
Xiaohui Li ◽  
Hongbin Dong ◽  
Yang Zhou ◽  
Jun He

How does a search engine company decide which advertisements to display for each query so as to maximize its revenue? This turns out to be a generalization of the online bipartite matching problem. In this paper, search engines decide how to allocate resources under budget constraints while accounting for an advertiser credibility factor. Building on the optimal algorithm derived from the notion of a trade-off revealing LP, this paper preserves the 1-1/e competitive ratio while incorporating the credibility factor. During ranking in the keyword auctions, the authors calculate factor scores, and each newly arriving keyword is matched to the advertisement with the maximum factor score. The amount of money already spent, the budget constraints, the CTR (click-through rate) and the credibility factor all enter the trade-off function used in the authors' algorithms. In the long term, users tend to favor a search engine with high credibility, which brings greater revenue, so a search engine that actively improves its credibility will attract more advertisers to bid on its keywords.
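
As a rough illustration of this allocation style, the sketch below follows the classic MSVV trade-off-revealing greedy rule for budgeted ad allocation, scaling each bid by CTR, a credibility factor and the trade-off function ψ(x) = 1 − e^(x−1) of the spent-budget fraction. How exactly the paper combines these terms into its factor score is an assumption here, not the authors' exact formula.

```python
import math

def allocate(query_bids, spent, budgets, ctr, cred):
    """Assign one incoming query to an advertiser, MSVV-style.

    query_bids: advertiser -> bid on this query's keyword
    spent, budgets: advertiser -> money spent so far / total budget
    ctr, cred: advertiser -> click-through rate / credibility factor
    """
    def psi(x):
        # Trade-off function from the trade-off revealing LP; it is what
        # yields the 1 - 1/e competitive ratio in the classic analysis.
        return 1.0 - math.exp(x - 1.0)

    best, best_score = None, 0.0
    for adv, bid in query_bids.items():
        if spent[adv] >= budgets[adv]:
            continue  # budget exhausted, advertiser drops out
        # Assumed factor score: bid discounted by CTR, credibility and
        # the remaining-budget trade-off.
        score = bid * ctr[adv] * cred[adv] * psi(spent[adv] / budgets[adv])
        if score > best_score:
            best, best_score = adv, score
    if best is not None:
        spent[best] += min(query_bids[best], budgets[best] - spent[best])
    return best
```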

Author(s):  
Stepan Balcar ◽  
Vit Skrhak ◽  
Ladislav Peska

Abstract In this paper, we focus on the problem of rank-sensitive proportionality preservation when aggregating the outputs of multiple recommender systems in dynamic recommendation scenarios. We believe that individual recommenders may provide complementary views on the user's preferences or needs, and that their proportional (i.e. unbiased) aggregation may therefore benefit long-term user satisfaction. We propose an aggregation framework (FuzzDA) based on a modified D'Hondt's algorithm (DA) for proportional mandate allocation. Specifically, we adjusted DA to register fuzzy membership of items and modified the selection procedure to balance both relevance and proportionality criteria. Furthermore, we propose several iterative vote-assignment strategies and negative implicit feedback incorporation strategies to make the FuzzDA framework applicable in dynamic recommendation scenarios. Overall, the framework should provide benefits w.r.t. long-term novelty of recommendations, diversity of recommended items as well as overall relevance. We evaluated the FuzzDA framework thoroughly both in offline simulations and in online A/B testing. Framework variants outperformed baselines w.r.t. click-through rate (CTR) in most of the evaluated scenarios. Some variants of FuzzDA also provided the best or close-to-best iterative novelty (while maintaining very high CTR). While the impact of the framework variants on user-wise diversity was not as extensive, the trade-off between CTR and diversity seems reasonable.
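
For reference, a minimal sketch of plain D'Hondt mandate allocation applied to recommender aggregation is shown below; each granted "mandate" lets the winning recommender nominate its best not-yet-chosen item. FuzzDA's fuzzy item memberships and vote-reassignment strategies are not reproduced here, so this is only the baseline the framework modifies.

```python
def dhondt_aggregate(votes, ranked_lists, k):
    """Build a top-k list by D'Hondt mandate allocation over recommenders.

    votes: recommender -> voting weight (e.g. estimated relevance)
    ranked_lists: recommender -> list of item ids, best first
    """
    votes = dict(votes)            # local copy; exhausted recommenders drop out
    seats = {r: 0 for r in votes}  # mandates already granted
    result, chosen = [], set()
    while len(result) < k and votes:
        # D'Hondt quotient: votes / (seats + 1); highest quotient wins.
        winner = max(votes, key=lambda r: votes[r] / (seats[r] + 1))
        item = next((i for i in ranked_lists[winner] if i not in chosen), None)
        if item is None:
            del votes[winner]      # nothing left to nominate
            continue
        seats[winner] += 1
        result.append(item)
        chosen.add(item)
    return result
```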


2021 ◽  
Vol 2 (2) ◽  
Author(s):  
Daniel Vert ◽  
Renaud Sirdey ◽  
Stéphane Louise

Abstract This paper experimentally investigates the behavior of analog quantum computers, as commercialized by D-Wave, when confronted with instances of the maximum cardinality matching problem that are specifically designed to be hard to solve by means of simulated annealing. We benchmark a D-Wave “Washington” (2X) with 1098 operational qubits on various sizes of such instances and observe that for all but the most trivially small of these it fails to obtain an optimal solution. Thus, our results suggest that quantum annealing, at least as implemented in a D-Wave device, falls into the same pitfalls as simulated annealing, and hence provide additional evidence that there exist polynomial-time problems that such a machine cannot solve efficiently to optimality. Additionally, we investigate the extent to which the qubit interconnection topologies explain these experimental results. In particular, we provide evidence that the sparsity of these topologies, which as such leads to QUBO problems of artificially inflated sizes, can partly explain the aforementioned disappointing observations. This paper therefore hints that denser interconnection topologies are necessary to unleash the potential of the quantum annealing approach.
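
To make the QUBO connection concrete, the sketch below encodes maximum cardinality matching as a QUBO in the standard way: one binary variable per edge, a reward for each selected edge, and a penalty for any two selected edges sharing a vertex. This is the generic textbook encoding, not necessarily the exact instance construction used in the paper.

```python
from itertools import combinations
import numpy as np

def matching_qubo(edges, penalty=2.0):
    """QUBO matrix for maximum cardinality matching, one variable per edge.

    Minimizing x^T Q x rewards every selected edge (-1 on the diagonal)
    and penalizes pairs of selected edges that share a vertex, so with
    penalty > 1 the minima are exactly the maximum matchings.
    """
    n = len(edges)
    Q = np.zeros((n, n))
    for i in range(n):
        Q[i, i] = -1.0
    for i, j in combinations(range(n), 2):
        if set(edges[i]) & set(edges[j]):  # edges conflict at a vertex
            Q[i, j] = penalty
    return Q

# Example: the path a-b-c-d has maximum matching {(a,b), (c,d)},
# i.e. the minimum-energy assignment selects edges 0 and 2.
Q = matching_qubo([("a", "b"), ("b", "c"), ("c", "d")])
```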


2016 ◽  
Vol 38 (2) ◽  
pp. 71-79 ◽  
Author(s):  
Fernanda Barcellos Serralta ◽  
John Stuart Ablon

Abstract Introduction: The Psychotherapy Process Q-Set (PQS) prototype method is used to measure the extent to which ideal processes of different psychotherapies are present in real cases, allowing researchers to examine how adherence to these models relates to or predicts change. Results from studies of short-term psychotherapies suggest that the original psychodynamic prototype is more suitable for studying psychoanalysis and long-term psychodynamic psychotherapy than its time-limited counterparts. Furthermore, culture probably influences how therapies are typically conducted in a given country. Therefore, it seems appropriate to develop Brazilian prototypes on which to base studies of short-term psychodynamic and cognitive-behavioral processes in this country. Objective: To develop prototypes for studying processes of short-term psychotherapies and to examine the degree of adherence of two real psychotherapy cases to these models. Methods: Expert clinicians used the PQS to rate a hypothetical ideal session of either short-term psychodynamic psychotherapy (STPP) or cognitive-behavioral therapy (CBT). Ratings were submitted to Q-type factor analysis to confirm the two groups. Regressive factor scores were rank ordered to describe the prototypes. These ideal models were correlated with ratings of actual therapy processes in two complete psychotherapy cases, one STPP and the other CBT. Results: Agreement levels between expert ratings were high and the two ideal models were confirmed. As expected, the PQS ratings for actual STPP and CBT cases had significant correlations with their respective ideal models, but the STPP case also adhered to the CBT prototype. Conclusion: Overall, the findings reveal the adequacy of the prototypes for time-limited therapies, providing initial support for their validity.
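
The adherence measure itself is straightforward to compute. A minimal sketch, assuming each session and each prototype is represented as a vector of ratings over the 100 PQS items, is given below.

```python
import numpy as np
from scipy.stats import pearsonr

def adherence(session_ratings, prototype_scores):
    """Correlation of one session's PQS item ratings (100 items) with an
    ideal-model prototype; higher r means closer adherence to that model."""
    r, p_value = pearsonr(np.asarray(session_ratings),
                          np.asarray(prototype_scores))
    return r, p_value
```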


Author(s):  
Aiping Xiong ◽  
Robert W. Proctor ◽  
Weining Yang ◽  
Ninghui Li

Objective: Evaluate the effectiveness of training embedded within security warnings to identify phishing webpages. Background: More than 20 million malware and phishing warnings are shown to users of Google Safe Browsing every week. A substantial click-through rate is still evident, and a commonly reported issue is that users lack understanding of the warnings. Nevertheless, each warning provides an opportunity to train users about phishing and how to avoid phishing attacks. Method: To test the use of phishing-warning instances as opportunities to train users' phishing webpage detection skills, we conducted an online experiment contrasting the effectiveness of the current Chrome phishing warning with two training-embedded warning interfaces. The experiment consisted of three phases. In Phase 1, participants made login decisions on 10 webpages with the aid of warnings. After a distracting task, participants made legitimacy judgments for 10 different login webpages without warnings in Phase 2. To test the long-term effect of the training, participants were invited back a week later for Phase 3, which was conducted in the same way as Phase 2. Results: Participants differentiated legitimate and fraudulent webpages better than chance. Performance was similar for all interfaces in Phase 1, for which the warning aid was present. However, the training-embedded interfaces provided better protection than the Chrome phishing warning in both subsequent phases. Conclusion: Embedded training is a complementary strategy that compensates for lack of phishing webpage detection skill when a phishing warning is absent. Application: Potential applications include the development of training-embedded warnings to enable security training at scale.


2002 ◽  
Vol 12 (6) ◽  
pp. 805-811 ◽  
Author(s):  
Iris Shai ◽  
Yaakov Henkin ◽  
Shimon Weitzman ◽  
Itzhak Levi

Author(s):  
Shengkui Cao ◽  
Qi Feng ◽  
Jianhua Si ◽  
Yonghong Su ◽  
Zongqiang Chang ◽  
...  

Foliar δ13C values are often used to denote the long-term water use efficiency (WUE) of plants, whereas long-term nitrogen use efficiency (NUE) is usually estimated by the ratio of C to N in the leaves. Seasonal variations in δ13C values, foliar nitrogen concentration and C/N ratios of Populus euphratica and Tamarix ramosissima grown in five different microhabitats of the Ejina desert riparian oasis in the arid regions of northwestern China were studied. The results indicated that T. ramosissima had a higher δ13C value than P. euphratica, while the N concentrations and C/N ratios of the two species were not significantly different. The seasonal patterns of the three indices differed between the two species: δ13C values and N concentration decreased over the plants' growth period, whereas C/N ratios increased. Among microhabitats, the Dune and Gobi habitats showed higher δ13C values and N concentrations as well as lower C/N ratios. Foliar δ13C values correlated significantly and positively with N concentration in both P. euphratica and T. ramosissima, whereas a significantly negative correlation between δ13C values and C/N ratios was found for P. euphratica. This relation was weak in T. ramosissima, but there was a significant quadratic relationship between its δ13C values and C/N ratios. These results reveal a trade-off between WUE and NUE for P. euphratica: under natural conditions, P. euphratica could not improve WUE and NUE simultaneously, whereas T. ramosissima could enhance both. These contrasting WUE and NUE characteristics reflect the different adaptations of the two desert species to environmental conditions.
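
As an illustrative sketch of the kind of relationship testing described (a negative linear δ13C-C/N correlation for one species versus a quadratic curve for the other), one could compare fits as follows; the variable names and the R² comparison are assumptions, not the authors' exact analysis.

```python
import numpy as np

def compare_fits(d13c, cn):
    """Compare linear vs. quadratic fits of foliar δ13C against C/N and
    return the R² of each: one crude way to check for the curvilinear
    relationship reported for T. ramosissima."""
    d13c, cn = np.asarray(d13c, float), np.asarray(cn, float)
    r2 = {}
    for name, degree in (("linear", 1), ("quadratic", 2)):
        fitted = np.polyval(np.polyfit(cn, d13c, degree), cn)
        r2[name] = np.corrcoef(fitted, d13c)[0, 1] ** 2
    return r2
```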


Author(s):  
Yuan Chen ◽  
Liya Ding ◽  
Sio-Long Lo ◽  
Dickson K.W. Chiu

This article proposes a novel approach that combines a user's instant requirement, described in keywords, with her or his long-term knowledge background to better support article selection based on personal preference. The knowledge background is represented as a weighted undirected graph, called a background net, that captures the contextual association of words appearing in the articles recommended by the user, built through incremental learning. With a user's background net constructed, a keyword from the user is personalized to a fuzzy set representing the contextual association of the given keyword with other words in the user's background net. Article evaluation with personal preference can then be achieved by evaluating the similarity between the personalized keyword set, based on the user's background net, and a candidate article. The proposed approach makes it possible to construct a search engine optimizer running on top of existing search engines to adjust search results, and offers the potential to be integrated with existing search engine techniques to achieve better performance. The target system for personalized article selection can be automatically constructed using Knowware System, a development tool for knowledge-based systems (KBS) that supports convenient modeling and component reuse.
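
A minimal sketch of how such a background net might be queried is given below; the representation (word → neighbor → co-occurrence weight) and the normalization are assumptions based on the abstract, not the paper's actual formulation.

```python
def personalize(keyword, background_net):
    """Expand a keyword into a fuzzy set of contextually associated words.

    background_net: word -> {neighbor: co-occurrence weight}, the weighted
    undirected graph learned incrementally from recommended articles.
    Returns word -> membership degree in [0, 1], the keyword itself at 1.0.
    """
    neighbors = background_net.get(keyword, {})
    strongest = max(neighbors.values(), default=1.0)
    fuzzy = {w: weight / strongest for w, weight in neighbors.items()}
    fuzzy[keyword] = 1.0
    return fuzzy

def score_article(fuzzy_set, article_words):
    """Similarity of a candidate article to the personalized keyword set:
    total membership of fuzzy-set words that the article contains."""
    present = set(article_words)
    return sum(m for w, m in fuzzy_set.items() if w in present)
```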


2019 ◽  
Vol 32 (4) ◽  
pp. 354-377 ◽  
Author(s):  
Katrin Hussinger ◽  
Abdul-Basit Issah

This study elucidates the mixed gamble confronting family firms when considering a related firm acquisition. The socioemotional and financial wealth trade-off associated with related firm acquisitions, as well as their long-term horizon, makes family firms more likely than nonfamily firms to undertake a related acquisition, especially when they are performing above their aspiration level. Postmerger performance patterns confirm that family firms are able to create long-term value through these acquisitions, and in doing so they surpass nonfamily firms. These findings stand in contrast to commonly used behavioral agency predictions but can be reconciled with theory through a mixed gamble lens.


2020 ◽  
pp. 1-19
Author(s):  
SAM DESIERE ◽  
LUDO STRUYVEN

Abstract Artificial intelligence (AI) is increasingly popular in the public sector as a means to improve the cost-efficiency of service delivery. One example is AI-based profiling models in public employment services (PES), which predict a jobseeker's probability of finding work and are used to segment jobseekers into groups. Profiling models hold the potential to improve the identification of jobseekers at risk of becoming long-term unemployed, but they can also induce discrimination. Using a recently developed AI-based profiling model of the Flemish PES, we assess to what extent AI-based profiling 'discriminates' against jobseekers of foreign origin compared to traditional rule-based profiling approaches. At the maximum level of accuracy, jobseekers of foreign origin who ultimately find a job are 2.6 times more likely to be misclassified as 'high-risk' jobseekers. We argue that it is critical that policymakers and caseworkers understand the inherent trade-offs of profiling models and consider these limitations when integrating the models into daily operations. We develop a graphical tool to visualize the accuracy-equity trade-off in order to facilitate policy discussions.
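
A simplified sketch of the accuracy-equity computation behind such a tool might look as follows; the field names and group labels are hypothetical, and "misclassified" here means a jobseeker who ultimately found work but was flagged high-risk, mirroring the 2.6× figure above.

```python
import numpy as np

def accuracy_equity(scores, found_job, origin, thresholds):
    """Per risk threshold: overall accuracy of the 'high-risk' flag, and the
    ratio of misclassification rates (employed jobseekers wrongly flagged
    high-risk) between foreign- and native-origin groups."""
    rows = []
    for t in thresholds:
        flagged = scores >= t
        accuracy = np.mean(flagged == ~found_job)  # high-risk should mean no job
        def miss_rate(g):
            employed = found_job & (origin == g)
            return (flagged & employed).sum() / max(employed.sum(), 1)
        ratio = miss_rate("foreign") / max(miss_rate("native"), 1e-9)
        rows.append((t, accuracy, ratio))
    return rows
```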

