A Comparison of Performance Measures for Online Algorithms

Author(s):  
Joan Boyar ◽  
Sandy Irani ◽  
Kim S. Larsen
Algorithmica ◽  
2014 ◽  
Vol 72 (4) ◽  
pp. 969-994 ◽  

Author(s):  
Lijun Zhang

The usual goal of online learning is to minimize the regret, which measures the performance of the online learner against a fixed comparator. However, this is not suitable for changing environments, in which the best decision may change over time. To address this limitation, new performance measures, including dynamic regret and adaptive regret, have been proposed to guide the design of online algorithms. In dynamic regret, the learner is compared with a sequence of comparators, and in adaptive regret, the learner is required to minimize the regret over every interval. In this paper, we review recent developments in this area and highlight our contributions. Specifically, we have proposed novel algorithms to minimize dynamic regret and adaptive regret, and have investigated the relationship between the two measures.
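The three regret notions in the abstract can be made concrete with a small sketch. This is an illustrative computation over a toy loss matrix, not an algorithm from the paper: the "learner" that always plays action 0 and the brute-force interval scan are assumptions made purely to show how the three quantities differ.

```python
# Toy illustration of static, dynamic, and adaptive regret.
# Assumptions: K discrete actions, full-information losses known per round.

def static_regret(learner_losses, loss_matrix):
    # Regret against the single best fixed action in hindsight.
    K = len(loss_matrix[0])
    best_fixed = min(sum(row[a] for row in loss_matrix) for a in range(K))
    return sum(learner_losses) - best_fixed

def dynamic_regret(learner_losses, loss_matrix):
    # Compared with a sequence of comparators: the best action every round.
    return sum(learner_losses) - sum(min(row) for row in loss_matrix)

def adaptive_regret(learner_losses, loss_matrix):
    # Worst static regret over every contiguous interval [s, t].
    T, K = len(loss_matrix), len(loss_matrix[0])
    worst = 0.0
    for s in range(T):
        for t in range(s, T):
            interval = loss_matrix[s:t + 1]
            best_fixed = min(sum(row[a] for row in interval) for a in range(K))
            worst = max(worst, sum(learner_losses[s:t + 1]) - best_fixed)
    return worst

if __name__ == "__main__":
    # Two actions; the best action flips halfway through, so a learner
    # stuck on action 0 has zero static regret but nonzero dynamic and
    # adaptive regret -- the gap the abstract is describing.
    loss_matrix = [[0.0, 1.0]] * 3 + [[1.0, 0.0]] * 3
    learner_losses = [row[0] for row in loss_matrix]  # always plays action 0
    print(static_regret(learner_losses, loss_matrix))   # 0.0
    print(dynamic_regret(learner_losses, loss_matrix))  # 3.0
    print(adaptive_regret(learner_losses, loss_matrix)) # 3.0
```

The example makes the limitation of static regret explicit: action 0 ties the best fixed action overall, yet it is far from the best changing comparator on the second half of the sequence.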


2015 ◽  
Vol 26 (04) ◽  
pp. 413-439 ◽  
Author(s):  
Joan Boyar ◽  
Kim S. Larsen ◽  
Abyayananda Maiti

This is a contribution to the ongoing study of properties of performance measures for online algorithms. It has long been known that competitive analysis suffers from drawbacks in certain situations, and many alternative measures have been proposed. More systematic comparative studies of performance measures have been initiated recently, and we continue this work, considering competitive analysis, relative interval analysis, and relative worst order analysis on the frequent items problem, a fundamental online streaming problem.
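For context, one classic formulation of the frequent items problem asks a streaming algorithm with limited memory to report items whose frequency exceeds a threshold. The sketch below shows the well-known Misra-Gries summary for that formulation; it is background illustration only, and not necessarily the online model or the algorithms analyzed in the paper.

```python
def misra_gries(stream, k):
    """Misra-Gries summary using at most k-1 counters.

    Guarantee: any item occurring more than len(stream)/k times in the
    stream is still present in the returned summary.
    """
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k - 1:
            counters[x] = 1
        else:
            # Memory full: decrement every counter and drop zeros.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

if __name__ == "__main__":
    # 'a' occurs 5 times in a stream of length 9, i.e. more than n/2,
    # so with k = 2 (a single counter) it must survive in the summary.
    print(misra_gries(list("aabacdaae"), 2))
```

The online flavor is visible in the eviction step: the algorithm must decide which candidates to keep with no knowledge of the remainder of the stream, which is exactly the kind of decision the performance measures above are designed to evaluate.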

