Inverse reinforcement learning for team sports: Valuing actions and players

28 citations · 32 Mendeley readers

Abstract

A major task of sports analytics is to rank players based on the impact of their actions. Recent methods have applied reinforcement learning (RL) to assess the value of actions from a learned action value or Q-function. A fundamental challenge for estimating action values is that explicit reward signals (goals) are very sparse in many team sports, such as ice hockey and soccer. This paper combines Q-function learning with inverse reinforcement learning (IRL) to provide a novel player ranking method. We treat professional play as expert demonstrations for learning an implicit reward function. Our method alternates single-agent IRL to learn a reward function for multiple agents; we provide a theoretical justification for this procedure. Knowledge transfer is used to combine learned rewards and observed rewards from goals. Empirical evaluation, based on 4.5M play-by-play events in the National Hockey League (NHL), indicates that player ranking using the learned rewards achieves high correlations with standard success measures and temporal consistency throughout a season.
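To make the general idea concrete, the sketch below shows the simplest form of player ranking from a learned reward function: score each player by the total learned reward of the actions they took, then sort. This is a minimal illustration only, not the authors' method; the states, actions, reward values, and the `learned_reward` lookup are all hypothetical stand-ins for a reward function that, in the paper, would be estimated via inverse RL from professional (expert) play.

```python
from collections import defaultdict

# Toy stand-in for a learned reward function r(s, a). In the paper this
# would be estimated with inverse RL from expert demonstrations; the
# states, actions, and values here are illustrative assumptions only.
def learned_reward(state, action):
    table = {
        ("off_zone", "shot"): 0.05,
        ("off_zone", "pass"): 0.02,
        ("def_zone", "turnover"): -0.04,
        ("neutral", "dump_in"): 0.01,
    }
    return table.get((state, action), 0.0)

def rank_players(events):
    """Score each player by the total learned reward of their actions,
    then rank in descending order (higher total = larger estimated impact)."""
    totals = defaultdict(float)
    for player, state, action in events:
        totals[player] += learned_reward(state, action)
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical play-by-play events: (player, state, action).
events = [
    ("Player A", "off_zone", "shot"),
    ("Player A", "off_zone", "pass"),
    ("Player B", "def_zone", "turnover"),
    ("Player B", "neutral", "dump_in"),
]
print(rank_players(events))
```

In this toy run, Player A accumulates 0.07 and Player B −0.03, so A ranks first. The paper's contribution lies in learning the reward function itself (and combining it with observed goal rewards), not in the summation step shown here.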

Citation (APA)

Luo, Y., Schulte, O., & Poupart, P. (2020). Inverse reinforcement learning for team sports: Valuing actions and players. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2021-January, pp. 3356–3363). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2020/464
