DDRCN: Deep Deterministic Policy Gradient Recommendation Framework Fused with Deep Cross Networks

Citations: 3 · Mendeley readers: 13

Abstract

As an essential branch of artificial intelligence, recommendation systems have gradually penetrated people’s daily lives: they actively recommend goods or services of potential interest to users based on their preferences. Many recommendation methods have been proposed in both industry and academia, but previous methods have some limitations: (1) most do not consider the cross-correlation between data, and (2) many treat recommendation as a one-time act and ignore the continuous nature of the recommendation process. To overcome these limitations, we propose a recommendation framework based on deep reinforcement learning, DDRCN: a deep deterministic policy gradient recommendation framework incorporating deep cross networks. We use a Deep network and a Cross network to fit the cross relationships in the data and obtain a representation of the user interaction data. An Actor-Critic network simulates the continuous interaction behavior of users through a greedy strategy, and a deep deterministic policy gradient network trains the recommendation model. Finally, experiments on two publicly available datasets show that our proposed framework outperforms the baseline approaches in both the recall and ranking phases of recommendation.
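The "cross relationships" the abstract refers to come from the Deep & Cross Network (DCN) family, in which each cross layer adds one explicit degree of feature interaction via x_{l+1} = x_0 (w^T x_l) + b + x_l. A minimal NumPy sketch of that layer (all names and dimensions are illustrative, not taken from the paper):

```python
import numpy as np

def cross_layer(x0, xl, w, b):
    """One DCN-style cross layer: x_{l+1} = x0 * (w^T x_l) + b + x_l.

    x0 : original input embedding (kept fixed across layers)
    xl : output of the previous cross layer
    w, b : learnable weight vector and bias of this layer
    The residual term xl lets the network fall back to lower-order
    interactions; stacking L layers yields interactions up to degree L+1.
    """
    return x0 * (xl @ w) + b + xl

# Illustrative forward pass through two stacked cross layers.
rng = np.random.default_rng(0)
d = 4                          # embedding dimension (assumed)
x0 = rng.normal(size=d)        # concatenated user/item embedding
w1, b1 = rng.normal(size=d), np.zeros(d)
w2, b2 = rng.normal(size=d), np.zeros(d)

x1 = cross_layer(x0, x0, w1, b1)   # degree-2 interactions
x2 = cross_layer(x0, x1, w2, b2)   # degree-3 interactions
```

In DDRCN this crossed representation would be concatenated with the output of the parallel deep (MLP) tower before feeding the Actor-Critic networks; the exact wiring is described in the paper itself, not here.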

Citation (APA)

Gao, T., Gao, S., Xu, J., & Zhao, Q. (2023). DDRCN: Deep Deterministic Policy Gradient Recommendation Framework Fused with Deep Cross Networks. Applied Sciences (Switzerland), 13(4). https://doi.org/10.3390/app13042555
