Learning Preferences in a Cognitive Decision Model


Abstract

Understanding human decision processes has been a topic of intense study in different disciplines, including psychology, economics, and artificial intelligence. Indeed, modeling human decision making plays a fundamental role in the design of intelligent systems capable of rich interactions. Decision Field Theory (DFT) [3] provides a cognitive model of the deliberation process that precedes the selection of an option. DFT is grounded in psychological principles and has been shown to be effective in modeling several behavioral effects involving uncertainty and interactions among alternatives. In this paper, we address the problem of learning the internal DFT model of a decision maker by observing only their final choices. In our setting, choices are made among several options, each evaluated according to different attributes. Our approach, based on Recurrent Neural Networks, extracts underlying preferences compatible with the observed choice behavior and thus provides a method for learning a rich preference model of an individual, one that encompasses psychological aspects and can be used as a more realistic predictor of future behavior.
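The paper's own formulation is not reproduced on this page. As a rough orientation, multi-alternative DFT deliberation is commonly written as the linear accumulation P(t+1) = S P(t) + C M W(t+1), where M holds the subjective attribute values of the options, W(t) is a stochastic attention vector over attributes, C contrasts each option against the others, and S encodes self-feedback and lateral inhibition. The sketch below is a minimal simulation of one such deliberation under those standard assumptions; the function name and all parameters (M, S, w_probs, steps) are illustrative and not taken from the paper.

```python
import numpy as np

def dft_deliberation(M, S, steps=100, w_probs=None, rng=None):
    """Simulate one DFT deliberation and return the index of the chosen option.

    M       : (n_options, n_attributes) subjective attribute values
    S       : (n_options, n_options) feedback matrix (self-excitation and
              lateral inhibition between options)
    w_probs : attention probabilities over attributes (uniform if None)
    """
    rng = np.random.default_rng() if rng is None else rng
    n_opt, n_att = M.shape
    if w_probs is None:
        w_probs = np.full(n_att, 1.0 / n_att)

    # Contrast matrix: each option is compared against the average of the others.
    C = np.eye(n_opt) - (np.ones((n_opt, n_opt)) - np.eye(n_opt)) / (n_opt - 1)

    P = np.zeros(n_opt)                        # preference state
    for _ in range(steps):
        W = np.zeros(n_att)                    # attention fixates on one attribute
        W[rng.choice(n_att, p=w_probs)] = 1.0
        V = C @ M @ W                          # momentary valence of each option
        P = S @ P + V                          # linear accumulation with feedback
    return int(np.argmax(P))                   # the observed final choice

# Hypothetical example: 3 options described by 2 attributes.
M = np.array([[0.8, 0.2],
              [0.5, 0.5],
              [0.2, 0.9]])
S = 0.95 * np.eye(3) - 0.02 * (np.ones((3, 3)) - np.eye(3))
print(dft_deliberation(M, S, steps=200))
```

In the setting the paper addresses, only the returned choice index would be observed; the subjective values M, the feedback matrix S, and the attention weights are the latent quantities that a Recurrent Neural Network estimator, as described in the abstract, would aim to recover from a collection of such observed choices.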

Citation (APA)

Rahgooy, T., & Venable, K. B. (2019). Learning Preferences in a Cognitive Decision Model. In Communications in Computer and Information Science (Vol. 1072, pp. 181–194). Springer. https://doi.org/10.1007/978-981-15-1398-5_13
