How to extend Elo: a Bayesian perspective

2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Martin Ingram

Abstract: The Elo rating system, originally designed for rating chess players, has since become a popular way to estimate competitors’ time-varying skills in many sports. Though the self-correcting Elo algorithm is simple and intuitive, it lacks a probabilistic justification, which can make it hard to extend. In this paper, we present a simple connection between approximate Bayesian posterior mode estimation and Elo. We provide a novel justification of the approximations made by linking Elo to steady-state Kalman filtering. Our second key contribution is to observe that the derivation suggests a straightforward procedure for extending Elo. We use the procedure to derive versions of Elo incorporating margins of victory, correlated skills across different playing surfaces, and differing skills by tournament level in tennis. Combining all these extensions results in the most complete version of Elo yet presented for the sport. We evaluate the derived models on two seasons of men’s professional tennis matches (2018 and 2019). The best-performing model predicted matches with higher accuracy than both Elo and Glicko (65.8% compared to 63.7% and 63.5%, respectively) and with a higher mean log-likelihood (−0.615 compared to −0.632 and −0.633, respectively), demonstrating the proposed model’s ability to improve predictions.
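For reference, the standard self-correcting Elo update that the paper takes as its starting point can be sketched in a few lines. The K-factor and logistic scale below are conventional chess defaults, not values taken from the paper:

```python
def elo_expected(r_a, r_b, scale=400.0):
    """Expected score of player A under the logistic Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / scale))

def elo_update(r_a, r_b, score_a, k=32.0):
    """One self-correcting Elo update after a match.

    score_a is 1.0 for a win by A, 0.5 for a draw, 0.0 for a loss.
    The correction is proportional to (observed - expected) score,
    so rating points are conserved between the two players.
    """
    e_a = elo_expected(r_a, r_b)
    delta = k * (score_a - e_a)
    return r_a + delta, r_b - delta
```

The paper's observation is that this gradient-like correction can be recovered as an approximate Bayesian posterior mode update, which is what makes the extensions (margins of victory, surface-specific skills) mechanical to derive.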

Entropy ◽  
2020 ◽  
Vol 22 (8) ◽  
pp. 880
Author(s):  
Mohammad R. Rezaei ◽  
Milos R. Popovic ◽  
Milad Lankarany

Differentially correlated spikes in a neural ensemble do not all carry the same amount of information; different types of spikes convey information about different features of the stimulus. By calculating a neural ensemble’s information in response to a mixed stimulus comprising slow and fast signals, we show that the entropies of synchronous and asynchronous spikes differ and that their probability distributions are distinctly separable. We further show that these spikes carry different amounts of information. We propose a time-varying entropy (TVE) measure to track the dynamics of a neural code in an ensemble of neurons at each time bin. By applying the TVE to a multiplexed code, we show that synchronous and asynchronous spikes carry information on different time scales. Finally, a decoder based on the Kalman filtering approach is developed to reconstruct the stimulus from the spikes. We demonstrate that the slow and fast features of the stimulus can be entirely reconstructed when this decoder is applied to asynchronous and synchronous spikes, respectively. The significance of this work is that the TVE can identify different types of information (for example, corresponding to synchronous and asynchronous spikes) that may simultaneously exist in a neural code.
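As a toy illustration only (the paper's TVE definition is not reproduced here), one simple way to attach an entropy value to each time bin of an ensemble of binary spike trains is to compute the Shannon entropy of the spike/no-spike distribution across neurons in that bin:

```python
import numpy as np

def entropy_per_bin(spikes):
    """Shannon entropy (bits) of the spike/no-spike distribution
    across neurons, computed independently in each time bin.

    spikes: (n_neurons, n_bins) binary array.
    Returns an array of length n_bins, one entropy value per bin.
    """
    p = spikes.mean(axis=0)             # fraction of neurons spiking per bin
    p = np.clip(p, 1e-12, 1 - 1e-12)    # avoid log(0) at p = 0 or 1
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
```

A bin in which half the neurons fire has maximal entropy (1 bit), while a bin in which all or none fire has entropy near zero; tracking such a quantity over bins is the general shape of a time-varying entropy measure.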


1999 ◽  
Vol 105 (2) ◽  
pp. 1309-1310
Author(s):  
Sang‐Wook Lee ◽  
Jun‐Seok Lim ◽  
Byung‐Doo Jun ◽  
Koeng‐Mo Sung

2020 ◽  
Author(s):  
Babak Tavassoli ◽  
Parisa Joshaghani

Kalman filtering of measurement data from multiple sensors with time-varying delays and missing measurements is considered in this work. Two existing approaches to Kalman filtering with delays are extended by removing some of their assumptions, so that the resulting filtering methods are equivalent and can be compared directly. The computational loads of the two methods are compared in terms of the average number of floating-point operations each requires for different system dimensionalities and delay upper bounds. The results show that which method is superior depends on the conditions of the comparison.
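As a minimal illustration of the filtering setting (not the authors' multi-sensor, delay-handling method), a scalar Kalman filter step that simply skips the update when a measurement is missing might look like this; the random-walk dynamics and the noise variances are assumptions made for the sketch:

```python
def kalman_step(x, P, z, q=1e-3, r=1e-2):
    """One predict/update cycle of a scalar Kalman filter with
    random-walk dynamics (x_k = x_{k-1} + w, w ~ N(0, q)).

    z is the measurement, or None when the measurement is missing,
    in which case only the prediction step is applied.
    """
    # Predict: state estimate unchanged, uncertainty grows by process noise.
    P = P + q
    if z is None:            # missing measurement: skip the update
        return x, P
    # Update: blend prediction and measurement via the Kalman gain.
    K = P / (P + r)
    x = x + K * (z - x)
    P = (1 - K) * P
    return x, P
```

Handling delayed (rather than merely missing) measurements is what the two compared approaches address: a delayed observation refers to a past state, so the filter must either re-run from the delayed sample or augment the state, and the two strategies trade off differently in floating-point cost.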

