Tree-based reinforcement learning for estimating optimal dynamic treatment regimes


Abstract

Dynamic treatment regimes (DTRs) are sequences of treatment decision rules, in which treatment may be adapted over time in response to an individual's changing course. Motivated by a substance use disorder (SUD) study, we propose a tree-based reinforcement learning (T-RL) method to directly estimate optimal DTRs in a multi-stage, multi-treatment setting. At each stage, T-RL builds an unsupervised decision tree that directly handles the optimization problem with multiple treatment comparisons, through a purity measure constructed with augmented inverse probability weighted (AIPW) estimators. Across the multiple stages, the algorithm is implemented recursively using backward induction. By combining semiparametric regression with flexible tree-based learning, T-RL is robust, efficient, and easy to interpret for the identification of optimal DTRs, as shown in simulation studies. With the proposed method, we identify dynamic SUD treatment regimes for adolescents.
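The abstract's core computational ideas, scoring candidate treatment rules with AIPW pseudo-outcomes and choosing tree splits to maximize a purity measure, can be sketched briefly. The Python sketch below is a minimal illustration under simplifying assumptions, not the authors' implementation: the function names (aipw_pseudo_outcomes, fit_stump) are hypothetical, a depth-one tree stands in for the full recursive tree-growing procedure, and the propensity and outcome-regression estimates are assumed to be supplied by the user.

import numpy as np

def aipw_pseudo_outcomes(A, Y, prop, mu_hat):
    # AIPW estimate of each subject's counterfactual outcome under each of
    # the K treatments:
    #   psi[i, a] = mu_hat[i, a] + 1{A_i = a} * (Y_i - mu_hat[i, a]) / prop[i, a]
    # prop[i, a]   : estimated propensity P(A = a | X_i)
    # mu_hat[i, a] : outcome-regression estimate E[Y | X_i, A = a]
    psi = mu_hat.astype(float).copy()
    for a in range(mu_hat.shape[1]):
        obs = (A == a)
        psi[obs, a] += (Y[obs] - mu_hat[obs, a]) / prop[obs, a]
    return psi

def fit_stump(X, psi):
    # Depth-one "tree": choose the split (feature, threshold) and the
    # treatment assigned on each side that maximize the purity, i.e. the
    # estimated total counterfactual outcome when every subject in a leaf
    # receives that leaf's best treatment.
    best = None
    for j in range(X.shape[1]):
        # Drop the largest unique value so neither leaf is ever empty.
        for t in np.unique(X[:, j])[:-1]:
            left = X[:, j] <= t
            a_left = psi[left].mean(axis=0).argmax()
            a_right = psi[~left].mean(axis=0).argmax()
            value = psi[left, a_left].sum() + psi[~left, a_right].sum()
            if best is None or value > best[0]:
                best = (value, j, t, a_left, a_right)
    return best  # (purity, feature, threshold, treatment if left, treatment if right)

For multiple stages, the same step would be applied recursively by backward induction: fit the rule at the final stage, replace each subject's outcome with its AIPW estimate under that fitted rule, and repeat at the preceding stage.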

Citation (APA)

Tao, Y., Wang, L., & Almirall, D. (2018). Tree-based reinforcement learning for estimating optimal dynamic treatment regimes. Annals of Applied Statistics, 12(3), 1914–1938. https://doi.org/10.1214/18-AOAS1137
