Offline Deep Reinforcement Learning for Dynamic Pricing of Consumer Credit


Abstract

We introduce a method for pricing consumer credit using recent advances in offline deep reinforcement learning. The approach relies only on a static dataset and, unlike commonly used pricing approaches, requires no assumptions about the functional form of demand. Using both real and synthetic data on consumer credit applications, we demonstrate that our approach, based on the conservative Q-learning algorithm, is capable of learning an effective personalized pricing policy without any online interaction or price experimentation. In particular, using historical data on online auto loan applications, we estimate an increase in expected profit of 21% with a less than 15% average change in prices relative to the original pricing policy.
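To illustrate the idea of a conservative update on logged pricing data, the sketch below implements a minimal conservative Q-learning-style regression for a one-step (single-period) pricing problem on synthetic data. All names, the segment/price discretization, and the toy profit model are illustrative assumptions, not the paper's implementation; the paper uses deep function approximation on real loan-application data.

```python
import numpy as np

rng = np.random.default_rng(0)

n_segments, n_prices = 4, 5            # hypothetical customer segments and price points
alpha_cql, lr, epochs = 1.0, 0.1, 50   # illustrative hyperparameters

# Synthetic logged dataset: (segment, chosen price index, observed profit).
# In the paper's setting this would be historical auto loan applications.
segments = rng.integers(0, n_segments, size=1000)
prices = rng.integers(0, n_prices, size=1000)
profits = rng.normal(loc=prices * 0.5, scale=0.1)  # toy profit signal

Q = np.zeros((n_segments, n_prices))

for _ in range(epochs):
    for s, a, r in zip(segments, prices, profits):
        # One-step regression target; no bootstrapping in this single-period toy.
        td_grad = Q[s, a] - r
        # Conservative penalty: push down a softmax over all candidate prices
        # while pushing up the price actually observed in the data, which
        # discourages overestimating out-of-distribution prices.
        soft = np.exp(Q[s] - Q[s].max())
        soft /= soft.sum()
        cql_grad = soft.copy()
        cql_grad[a] -= 1.0
        Q[s] -= lr * alpha_cql * cql_grad
        Q[s, a] -= lr * td_grad

policy = Q.argmax(axis=1)  # greedy personalized price per segment
```

The conservative term is what allows the policy to be learned purely offline: without it, a greedy policy would tend to exploit Q-values that are overestimated at prices rarely tried in the historical data.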

Citation (APA)

Khraishi, R., & Okhrati, R. (2022). Offline Deep Reinforcement Learning for Dynamic Pricing of Consumer Credit. In Proceedings of the 3rd ACM International Conference on AI in Finance, ICAIF 2022 (pp. 325–333). Association for Computing Machinery, Inc. https://doi.org/10.1145/3533271.3561682
