Analyzing risky choices: Q-learning for Deal-No-Deal


Abstract

In this paper, we derive an optimal strategy for the popular Deal or No Deal game show. To do this, we use Q-learning methods, which quantify the continuation value inherent in the game's sequential decision making. We then analyze the risky choices of two contestants, Frank and Susanne, from the European version of the game. Given their choices and our optimal strategy, we derive implied bounds on their levels of risk aversion. Previous empirical evidence on risky decision making has suggested that past outcomes affect future choices and that contestants have time-varying risk aversion. We demonstrate that the strategies of Frank and Susanne are consistent with constant levels of risk aversion, except for their final risk-seeking choice. We conclude with directions for future research. © 2013 John Wiley & Sons, Ltd.
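To make the abstract's "continuation value" concrete, the sketch below is a minimal, hypothetical illustration rather than the authors' implementation: it computes the two Q-values (deal vs. no deal) by exact backward induction over a small invented six-prize board, assuming a banker who offers a fixed fraction of the expected remaining prize and a CRRA utility with an assumed coefficient GAMMA. The paper instead estimates these values with Q-learning on the real game; on a tiny board, exact enumeration shows the same quantities being estimated.

# A minimal sketch of the Q-values behind the "continuation value" idea.
# Everything here is an assumption: an invented prize board, an assumed
# banker rule (offer = OFFER_FRACTION * mean of remaining prizes), an
# assumed CRRA utility, and one case opened per round. Computed exactly
# by backward induction, which is feasible on a board this small.
from functools import lru_cache
import statistics

PRIZES = (0.01, 1, 100, 1_000, 10_000, 100_000)  # invented six-prize board
OFFER_FRACTION = 0.7  # assumed banker rule
GAMMA = 0.3           # assumed CRRA risk-aversion coefficient

def utility(x, gamma=GAMMA):
    # CRRA utility; gamma = 0 recovers the risk-neutral case.
    return x if gamma == 0 else x ** (1 - gamma) / (1 - gamma)

@lru_cache(maxsize=None)
def q_values(remaining):
    # remaining: sorted tuple of prizes still in play (contestant's case included).
    offer = OFFER_FRACTION * statistics.mean(remaining)
    q_deal = utility(offer)
    if len(remaining) == 1:
        q_no_deal = utility(remaining[0])  # last case is the contestant's own
    else:
        # "No deal": a case is opened (modeled as a uniformly random removal),
        # after which the contestant acts optimally -- the continuation value.
        cont = sum(
            max(q_values(remaining[:i] + remaining[i + 1:]))
            for i in range(len(remaining))
        )
        q_no_deal = cont / len(remaining)
    return q_deal, q_no_deal

q_deal, q_no_deal = q_values(PRIZES)
print(f"Q(deal) = {q_deal:.3f}, Q(no deal) = {q_no_deal:.3f}")
print("Optimal action:", "Deal" if q_deal >= q_no_deal else "No Deal")

Sweeping GAMMA and recording where the optimal action flips from No Deal to Deal yields an implied bound on the risk-aversion coefficient consistent with an observed choice, which is the spirit of the bounds the paper derives for Frank and Susanne.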


Citation (APA)

Korsos, L., & Polson, N. G. (2014). Analyzing risky choices: Q-learning for Deal-No-Deal. Applied Stochastic Models in Business and Industry, 30(3), 258–270. https://doi.org/10.1002/asmb.1971
