Mining Player Ranking Dynamics in Team Sports

Author(s):  
Paul Fomenky ◽  
Alfred Noel ◽  
Dan A. Simovici
2016 ◽ Vol 12 (1)
Author(s):  
Stephanie Ann Kovalchik

Abstract
Bill James' discovery of a Pythagorean formula for win expectation in baseball has been a useful resource to analysts and coaches for over 30 years. Extensions of the Pythagorean model have been developed for all of the major professional team sports but none of the individual sports. The present paper attempts to address this gap by deriving a Pythagorean model for win production in tennis. Using performance data for the top 100 male singles players between 2004 and 2014, this study shows that, among the most commonly reported performance statistics, a model of break points won provides the closest approximation to the Pythagorean formula, explaining 85% of the variation in season wins and having the lowest cross-validation prediction error among the models considered. The mid-season projections of the break point model performed comparably to an expanded model that included eight other serve and return statistics as well as player ranking. A simple match prediction algorithm based on a break point model with the previous 9 months of match history had a prediction accuracy of 67% when applied to 2015 match outcomes, whether using the least-squares or Pythagorean power coefficient. By demonstrating the striking similarity between the Pythagorean formula for baseball wins and the break point model for match wins in tennis, this paper has identified a potentially simple yet powerful analytic tool with a wide range of potential uses for player performance evaluation and match forecasting.
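The Pythagorean relation the abstract builds on can be sketched in a few lines. This is a minimal illustration, not the paper's fitted model: the function name is hypothetical, and the exponent k is a parameter the analyst would estimate (James' classic baseball value is k = 2; the paper fits k for break points).

```python
def pythagorean_win_expectation(won, conceded, k=2.0):
    """Expected win fraction from the Pythagorean formula.

    `won` / `conceded` are season totals of a scoring statistic
    (runs in baseball; break points won/lost in the tennis model).
    """
    return won**k / (won**k + conceded**k)

# Example with James' baseball exponent k = 2:
# a team scoring 800 runs and allowing 600 is expected
# to win 800^2 / (800^2 + 600^2) = 0.64 of its games.
print(pythagorean_win_expectation(800, 600))  # 0.64
```

The same functional form applies to the break point model: substitute a player's break points won and break points conceded, with k set either by least squares or at the Pythagorean value, as the abstract describes.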


Author(s):  
Yudong Luo ◽  
Oliver Schulte ◽  
Pascal Poupart

A major task of sports analytics is to rank players based on the impact of their actions. Recent methods have applied reinforcement learning (RL) to assess the value of actions from a learned action value or Q-function. A fundamental challenge for estimating action values is that explicit reward signals (goals) are very sparse in many team sports, such as ice hockey and soccer. This paper combines Q-function learning with inverse reinforcement learning (IRL) to provide a novel player ranking method. We treat professional play as expert demonstrations for learning an implicit reward function. Our method alternates single-agent IRL to learn a reward function for multiple agents; we provide a theoretical justification for this procedure. Knowledge transfer is used to combine learned rewards and observed rewards from goals. Empirical evaluation, based on 4.5M play-by-play events in the National Hockey League (NHL), indicates that player ranking using the learned rewards achieves high correlations with standard success measures and temporal consistency throughout a season.
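The ranking step the abstract describes, scoring each player by the cumulative value of their actions under a learned reward, can be sketched as follows. This is an assumed, simplified interface, not the paper's implementation: the event format and the `reward` callable (standing in for the reward function learned via IRL) are hypothetical.

```python
from collections import defaultdict

def rank_players(events, reward):
    """Rank players by total learned-reward impact of their actions.

    events: iterable of (player_id, state, action) tuples from play-by-play data.
    reward: callable reward(state, action) -> float, e.g. a learned reward
            function combined with observed goal rewards.
    Returns a list of (player_id, total_impact) sorted best-first.
    """
    impact = defaultdict(float)
    for player, state, action in events:
        impact[player] += reward(state, action)
    return sorted(impact.items(), key=lambda kv: kv[1], reverse=True)

# Toy usage: a reward that values shots over passes.
toy_reward = lambda state, action: {"shot": 1.0, "pass": 0.2}.get(action, 0.0)
toy_events = [
    ("P1", "off_zone", "shot"),
    ("P2", "neutral", "pass"),
    ("P1", "off_zone", "pass"),
]
print(rank_players(toy_events, toy_reward))  # P1 first with 1.2, then P2 with 0.2
```

In the paper's setting the reward function comes from alternating single-agent IRL over NHL play-by-play data; here any callable with that signature can be plugged in.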


2008 ◽  
Author(s):  
Pedro J. M. Passos ◽  
Duarte Araujo ◽  
Keith Davids ◽  
Ana Diniz ◽  
Luis Gouveia ◽  
...  
