Parallel rule‐based selective sampling and on‐demand learning to rank

Author(s):  
Mateus F. Freitas ◽  
Daniel X. Sousa ◽  
Wellington S. Martins ◽  
Thierson C. Rosa ◽  
Rodrigo M. Silva ◽  
...  
Author(s):  
Daniel Xavier De Sousa ◽  
Thierson Couto Rosa ◽  
Wellington Santos Martins ◽  
Rodrigo Silva ◽  
Marcos André Gonçalves

2016 ◽  
Author(s):  
Mateus F. e Freitas ◽  
Daniel De Sousa ◽  
Wellington Martins ◽  
Thierson Rosa ◽  
Rodrigo Silva ◽  
...  

Learning to rank (L2R) works by constructing a ranking model from training data so that, given unseen data (a query), a similar ranking is produced. Almost all work in L2R focuses on ranking accuracy, leaving performance and scalability overlooked. In this work we present a fast and scalable manycore (GPU) implementation of an on-demand L2R technique that builds ranking models on the fly. Our experiments show that we are able to process a query (build a model and rank) in only a few milliseconds, achieving a speedup of 508x over a serial baseline and 4x over a parallel baseline in the best case. We extend the implementation to work with multiple GPUs, further increasing the speedup over the parallel baseline to approximately 16x when using 4 GPUs.
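
As a rough illustration of the on-demand idea described above (not the paper's GPU algorithm), the following Python sketch builds a small ranking model per query from a selectively sampled subset of the training data and uses it to rank that query's documents. The nearest-neighbour sampling rule, the RandomForestRegressor model, and all names in the code are assumptions for illustration only.

```python
# Hypothetical sketch of on-demand learning to rank: for each incoming
# query, select a subset of the training data that resembles the query's
# documents, train a small ranking model on the fly, and score the
# query's documents with it.  Not the paper's implementation.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import NearestNeighbors


def rank_on_demand(train_X, train_y, query_X, neighbors_per_doc=50, seed=0):
    """Rank the query's documents, returning their indices best-first."""
    # Selective sampling (assumed rule): keep only the training documents
    # closest in feature space to the query's documents.
    nn = NearestNeighbors(n_neighbors=min(neighbors_per_doc, len(train_X)))
    nn.fit(train_X)
    _, idx = nn.kneighbors(query_X)
    selected = np.unique(idx.ravel())

    # Build a ranking model on the fly, just for this query.
    model = RandomForestRegressor(n_estimators=50, random_state=seed)
    model.fit(train_X[selected], train_y[selected])

    # Score the query's documents and sort them by descending score.
    scores = model.predict(query_X)
    return np.argsort(-scores)


# Usage with synthetic data, only to show the call shape.
rng = np.random.default_rng(0)
train_X = rng.random((2000, 40))                   # training document features
train_y = rng.integers(0, 3, 2000).astype(float)   # graded relevance labels
query_X = rng.random((30, 40))                     # documents for one query
print(rank_on_demand(train_X, train_y, query_X)[:10])
```

In the paper's setting, it is this per-query model construction and scoring that is parallelized on one or more GPUs so that a query can be processed within a few milliseconds.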


2020 ◽  
Author(s):  
Jakob Bruhl ◽  
James Ledlie Klosky ◽  
Elizabeth Bristow

2019 ◽  
Vol 10 (1) ◽  
pp. 3-16
Author(s):  
Claudia Schubert ◽  
Marc-Thorsten Hütt

Algorithms are the key instrument of the on-demand economy, whose platforms mediate between clients, workers and the self-employed. Effective legal enforcement must not be limited to controlling the outcome of an algorithm but should also focus on the algorithm itself. This article assesses the present capacity of computer science to control and certify rule-based and data-centric (machine-learning) algorithms. It discusses the legal instruments for the control of algorithms, their enforcement, and the institutional preconditions. It favours a digital agency that concentrates expertise and administrative capacity for the certification and official calibration of algorithms, and it promotes an international approach to the regulation of legal standards.


Author(s):  
Jori Bomanson ◽  
Tomi Janhunen ◽  
Antonius Weinzierl

Answer-Set Programming (ASP) is an expressive rule-based knowledge-representation formalism. Lazy grounding is a solving technique that avoids the well-known grounding bottleneck of traditional ASP evaluation, but it is restricted to normal rules, which severely limits its expressive power. In this work, we introduce a framework that handles aggregates by normalizing them on demand during lazy grounding, thereby significantly relaxing the restrictions of lazy grounding. We term our approach lazy normalization and demonstrate its feasibility for different types of aggregates. Asymptotic behavior is analyzed and the correctness of the presented lazy normalizations is shown. Benchmark results indicate that lazy normalization can bring up to exponential gains in space and time and enable ASP to be used in new application areas.
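
To make aggregate normalization concrete, the Python sketch below shows one standard way of rewriting a lower-bounded #count aggregate into normal rules via a sequential counting chain. The predicate names (cnt and so on) and the specific counting scheme are assumptions made for this sketch; the paper's construction, and the on-demand triggering of the rewriting inside a lazy-grounding solver, are not reproduced here.

```python
# Hypothetical illustration of normalizing a lower-bounded #count aggregate
# into normal rules with a sequential counting chain.  All predicate and
# atom names are invented for this sketch.
def normalize_count_lower_bound(head, literals, bound):
    """Rewrite  head :- bound <= #count { literals }.  into normal rules.

    Introduces auxiliary atoms cnt(i,j), read as "at least j of the first
    i literals hold", and returns the rules as a list of strings.
    """
    rules = []
    n = len(literals)
    for i, lit in enumerate(literals, start=1):
        # Base level: a single true literal yields a count of at least 1.
        rules.append(f"cnt({i},1) :- {lit}.")
        for j in range(1, min(i, bound) + 1):
            if j < i:
                # Carry: at least j among the first i-1 literals suffices.
                rules.append(f"cnt({i},{j}) :- cnt({i-1},{j}).")
            if j >= 2:
                # Increment: literal i raises the count from j-1 to j.
                rules.append(f"cnt({i},{j}) :- cnt({i-1},{j-1}), {lit}.")
    # The head is derived once the chain reaches the required bound.
    rules.append(f"{head} :- cnt({n},{bound}).")
    return rules


# Example: big_group :- 2 <= #count { in(a); in(b); in(c) }.
for rule in normalize_count_lower_bound("big_group", ["in(a)", "in(b)", "in(c)"], 2):
    print(rule)
```

Lazy normalization, as described in the abstract, would generate such rules only when the aggregate actually becomes relevant during solving, instead of producing them all up front at grounding time.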

