Improving On-Demand Learning to Rank through Parallelism

Author(s):
Daniel Xavier De Sousa,
Thierson Couto Rosa,
Wellington Santos Martins,
Rodrigo Silva,
Marcos André Gonçalves
2016

Learning to rank (L2R) works by constructing a ranking model from training data so that, given unseen data (a query), a similar ranking is produced. Almost all work in L2R focuses on ranking accuracy, leaving performance and scalability overlooked. In this work we present a fast and scalable manycore (GPU) implementation of an on-demand L2R technique that builds ranking models on the fly. Our experiments show that we are able to process a query (build a model and rank) in only a few milliseconds, achieving a speedup of 508x over a serial baseline and 4x over a parallel baseline in the best case. We extend the implementation to work with multiple GPUs, further increasing the speedup over the parallel baseline to approximately 16x when using 4 GPUs.
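The abstract does not detail the paper's algorithm, so the following is only a hypothetical sketch of the general on-demand L2R idea it describes: instead of training one global ranker offline, a model is fit at query time and immediately used to rank that query's candidate documents. The pointwise ridge-regression ranker and all variable names below are illustrative assumptions, not the authors' method.

```python
import numpy as np

def fit_pointwise_model(X, y, l2=1e-3):
    """Illustrative pointwise ranker: ridge regression from
    document feature vectors X to relevance labels y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + l2 * np.eye(d), X.T @ y)

def rank_on_demand(train_X, train_y, query_docs):
    """On-demand L2R sketch: build a model when the query arrives,
    then score and sort that query's candidate documents."""
    w = fit_pointwise_model(train_X, train_y)  # model built on the fly
    scores = query_docs @ w                    # score each candidate
    return np.argsort(-scores)                 # document indices, best first

# Synthetic training data (hypothetical): 100 documents, 5 features each,
# labels generated from a known weight vector plus a little noise.
rng = np.random.default_rng(0)
train_X = rng.normal(size=(100, 5))
true_w = np.array([1.0, 0.5, 0.0, -0.5, 2.0])
train_y = train_X @ true_w + rng.normal(scale=0.1, size=100)

# An unseen query with 10 candidate documents to rank.
query_docs = rng.normal(size=(10, 5))
order = rank_on_demand(train_X, train_y, query_docs)
print(order)  # indices of the query's documents in ranked order
```

In this setting, the per-query work (here a small linear solve and a matrix-vector product) is exactly what the paper parallelizes on GPUs so that model construction and ranking complete within a few milliseconds per query.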

