Is Reinforcement Learning the Choice of Human Learners?: A Case Study of Taxi Drivers


Abstract

Learning to make optimal decisions is a common yet complicated task. While computer agents can learn to make decisions through reinforcement learning (RL), it remains unclear how human beings learn. In this paper, we perform the first data-driven case study on taxi drivers to test whether humans mimic RL when learning. We categorize drivers into three groups based on their performance trends and analyze the correlations between human drivers and agents trained with RL. We find that drivers who become more efficient at earning over time exhibit learning patterns similar to those of RL agents, whereas drivers who become less efficient tend to show the opposite. Our study (1) provides evidence that some human drivers do adopt RL-like strategies when learning, (2) deepens the understanding of taxi drivers' learning strategies, (3) offers guidelines for taxi drivers to improve their earnings, and (4) develops a generic analytical framework for studying and validating human learning strategies.
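The correlation analysis described above can be illustrated with a minimal sketch. The function and all series below are hypothetical and purely illustrative, not taken from the paper: it compares a driver's per-day earning efficiency with an RL agent's per-stage training reward using the Pearson correlation coefficient.

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Illustrative (made-up) series: the agent's reward rises as training proceeds.
agent_reward     = [10, 14, 18, 21, 23, 24, 25]  # RL agent, per training stage
improving_driver = [30, 33, 37, 41, 44, 46, 47]  # efficiency trends upward
declining_driver = [47, 45, 42, 40, 36, 33, 31]  # efficiency trends downward

# An "improving" driver tracks the agent's learning curve (strongly positive
# correlation); a "declining" driver moves against it (strongly negative).
print(pearson_r(agent_reward, improving_driver))
print(pearson_r(agent_reward, declining_driver))
```

In this framing, a strongly positive correlation is the signal that a driver's improvement over time resembles an RL agent's learning curve, while a strongly negative one marks the opposite trend.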

Citation (APA)

Pan, M., Huang, W., Li, Y., Zhou, X., Liu, Z., Bao, J., … Luo, J. (2020). Is Reinforcement Learning the Choice of Human Learners?: A Case Study of Taxi Drivers. In GIS: Proceedings of the ACM International Symposium on Advances in Geographic Information Systems (pp. 357–366). Association for Computing Machinery. https://doi.org/10.1145/3397536.3422246
