Online Algorithms for 1-Space Bounded Cube Packing and 2-Space Bounded Hypercube Packing

Author(s):  
Łukasz Zielonka
2021
Vol 10 (5)
pp. 971-975
Author(s):  
Xin Xie
Heng Wang
Lei Yu
Mingjiang Weng

2021
Vol 52 (2)
pp. 71-71
Author(s):  
Rob van Stee

For this issue, Pavel Vesely has contributed a wonderful overview of the ideas that were used in his SODA paper on packet scheduling with Marek Chrobak, Lukasz Jez and Jiri Sgall. This is a problem for which a 2-competitive algorithm as well as a lower bound of ϕ ≈ 1.618 was already known twenty years ago, but which resisted resolution for a long time. It is great that this problem has finally been resolved and that Pavel was willing to explain more of the ideas behind it for this column. He also provides an overview of open problems in this area.
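To make the setting concrete, here is a minimal sketch of the classic greedy rule for bounded-delay packet scheduling, the long-known 2-competitive strategy mentioned above: at every step, transmit the heaviest pending packet that has not yet expired. The function name, the dictionary-based input format and the horizon parameter are illustrative choices only, not taken from the column or the SODA paper.

    import heapq

    def greedy_schedule(arrivals, horizon):
        """Greedy rule for bounded-delay packet scheduling (illustrative sketch).

        arrivals[t] is a list of (weight, deadline) packets released at step t.
        One packet can be sent per step; a packet with deadline d may still be
        sent at any step <= d. Sending the heaviest pending packet each step
        is the classic 2-competitive strategy for this problem.
        """
        pending = []                 # max-heap via negated weights
        total = 0.0
        for t in range(horizon + 1):
            for weight, deadline in arrivals.get(t, []):
                heapq.heappush(pending, (-weight, deadline))
            # Lazily discard expired packets sitting at the top of the heap.
            while pending and pending[0][1] < t:
                heapq.heappop(pending)
            if pending:
                neg_weight, _ = heapq.heappop(pending)
                total += -neg_weight
        return total

For instance, greedy_schedule({0: [(5.0, 0), (3.0, 1)]}, horizon=1) sends the weight-5 packet at step 0 and the weight-3 packet at step 1, collecting total weight 8.0.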


2016
Vol 47 (2)
pp. 40-51
Author(s):  
Rob van Stee

2021
Vol 68 (4)
pp. 1-25
Author(s):  
Thodoris Lykouris
Sergei Vassilvitskii

Traditional online algorithms encapsulate decision making under uncertainty, and give ways to hedge against all possible future events, while guaranteeing a nearly optimal solution, as compared to an offline optimum. On the other hand, machine learning algorithms are in the business of extrapolating patterns found in the data to predict the future, and usually come with strong guarantees on the expected generalization error. In this work, we develop a framework for augmenting online algorithms with a machine learned predictor to achieve competitive ratios that provably improve upon unconditional worst-case lower bounds when the predictor has low error. Our approach treats the predictor as a complete black box and is not dependent on its inner workings or the exact distribution of its errors. We apply this framework to the traditional caching problem: creating an eviction strategy for a cache of size k. We demonstrate that naively following the oracle’s recommendations may lead to very poor performance, even when the average error is quite low. Instead, we show how to modify the Marker algorithm to take into account the predictions and prove that this combined approach achieves a competitive ratio that both (i) decreases as the predictor’s error decreases and (ii) is always capped by O(log k), which can be achieved without any assistance from the predictor. We complement our results with an empirical evaluation of our algorithm on real-world datasets and show that it performs well empirically even when using simple off-the-shelf predictions.
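As a rough illustration of how such predictions can be folded into the Marker framework, the sketch below keeps the usual phase/marking structure but, on a miss, evicts the unmarked page that the predictor expects to be requested furthest in the future. The predict_next oracle and all other names are hypothetical placeholders; the algorithm in the paper includes additional safeguards beyond this sketch so that the O(log k) cap holds even when the predictions are poor.

    def predictive_marker(requests, k, predict_next):
        """Marker-style eviction guided by a black-box predictor (sketch only).

        requests     : sequence of page ids, revealed online
        k            : cache size (assumed >= 1)
        predict_next : hypothetical oracle; predict_next(i) returns the
                       predicted next time the page requests[i] is requested
        """
        cache, marked = set(), set()
        predicted = {}               # page -> latest predicted next request
        misses = 0
        for i, page in enumerate(requests):
            predicted[page] = predict_next(i)
            if page in cache:
                marked.add(page)     # a hit marks the page for this phase
                continue
            misses += 1
            if len(cache) >= k:
                if not (cache - marked):
                    marked.clear()   # every cached page is marked: new phase
                unmarked = cache - marked
                # Evict the unmarked page predicted to be needed furthest away.
                victim = max(unmarked, key=lambda p: predicted.get(p, float("inf")))
                cache.remove(victim)
            cache.add(page)
            marked.add(page)
        return misses

Keeping the marking structure, rather than evicting whatever the oracle suggests, reflects the robustness point made in the abstract: blindly trusting the predictor can perform very poorly even when its average error is small.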


2012
Vol 43 (4)
pp. 123-129
Author(s):  
Rob van Stee

2015
Vol 46 (2)
pp. 105-112
Author(s):  
Rob van Stee

2010
Vol 41 (4)
pp. 114-121
Author(s):  
Marek Chrobak

2017
Vol 48 (4)
pp. 100-109
Author(s):  
Rob van Stee

2016
Vol 47 (3)
pp. 92-92
Author(s):  
Rob van Stee

2018
Vol 49 (4)
pp. 36-45
Author(s):  
Rob van Stee
