GPU_MF_SGD: A novel GPU-based stochastic gradient descent method for matrix factorization

Abstract

Recommender systems are used in most modern applications, and providing accurate recommendations in real time is one of the most important challenges they face. Matrix factorization (MF) is an effective technique for recommender systems because it improves accuracy, and Stochastic Gradient Descent (SGD) is the most popular approach for accelerating MF training. However, SGD is inherently sequential and is not trivial to parallelize, especially for large-scale problems. Recently, many studies have proposed parallel SGD methods. In this research, we propose GPU_MF_SGD, a novel GPU-based SGD method for large-scale recommender systems. GPU_MF_SGD utilizes Graphics Processing Unit (GPU) resources by ensuring load balancing and linear scalability, and by achieving coalesced access to global memory without a preprocessing phase. Our method achieves a 3.1X–5.4X speedup over the state-of-the-art GPU method, CuMF_SGD.
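To make the underlying operation concrete, the following is a minimal CUDA sketch of SGD updates for matrix factorization: each thread takes one observed rating, computes the prediction error e = r − p·q, and applies a regularized gradient step to the corresponding user and item factor vectors. This is an illustration only, not the GPU_MF_SGD or CuMF_SGD implementation; the kernel name, data layout, and the simple one-thread-per-rating scheduling are assumptions, and it ignores the update-conflict and load-balancing issues that the paper's method is designed to handle.

```cuda
// Illustrative SGD matrix-factorization update on the GPU (not the paper's method).
#include <cstdio>
#include <cuda_runtime.h>

struct Rating { int u, v; float r; };   // (user, item, rating) triple

// One thread per rating: error e = r - p_u . q_v, then a regularized step
// on both factor vectors.
__global__ void sgd_update(const Rating* ratings, int n,
                           float* P, float* Q, int k,
                           float lr, float lambda)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    Rating s = ratings[i];
    float* p = P + (size_t)s.u * k;   // user factor vector
    float* q = Q + (size_t)s.v * k;   // item factor vector

    float pred = 0.0f;
    for (int f = 0; f < k; ++f) pred += p[f] * q[f];
    float e = s.r - pred;

    for (int f = 0; f < k; ++f) {
        float pf = p[f], qf = q[f];
        p[f] = pf + lr * (e * qf - lambda * pf);   // update user factor
        q[f] = qf + lr * (e * pf - lambda * qf);   // update item factor
    }
}

int main() {
    const int k = 8, n_users = 4, n_items = 4;
    Rating h_r[] = { {0, 1, 5.0f}, {1, 2, 3.0f}, {2, 0, 4.0f}, {3, 3, 1.0f} };
    const int n = sizeof(h_r) / sizeof(h_r[0]);

    float h_P[n_users * k], h_Q[n_items * k];
    for (int i = 0; i < n_users * k; ++i) h_P[i] = 0.1f;
    for (int i = 0; i < n_items * k; ++i) h_Q[i] = 0.1f;

    Rating* d_r; float *d_P, *d_Q;
    cudaMalloc(&d_r, sizeof(h_r));
    cudaMalloc(&d_P, sizeof(h_P));
    cudaMalloc(&d_Q, sizeof(h_Q));
    cudaMemcpy(d_r, h_r, sizeof(h_r), cudaMemcpyHostToDevice);
    cudaMemcpy(d_P, h_P, sizeof(h_P), cudaMemcpyHostToDevice);
    cudaMemcpy(d_Q, h_Q, sizeof(h_Q), cudaMemcpyHostToDevice);

    // A few SGD epochs; each launch applies one pass over all ratings.
    for (int epoch = 0; epoch < 10; ++epoch)
        sgd_update<<<(n + 255) / 256, 256>>>(d_r, n, d_P, d_Q, k, 0.05f, 0.02f);

    cudaMemcpy(h_P, d_P, sizeof(h_P), cudaMemcpyDeviceToHost);
    cudaDeviceSynchronize();
    printf("P[0][0] after training: %f\n", h_P[0]);

    cudaFree(d_r); cudaFree(d_P); cudaFree(d_Q);
    return 0;
}
```

In practice, concurrent threads can touch the same user or item vector; GPU MF methods differ mainly in how they schedule updates, balance load across threads, and coalesce global-memory accesses, which is the focus of the paper.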

Citation (APA)
Nassar, M. A., El-Sayed, L. A. A., & Taha, Y. (2019). GPU_MF_SGD: A novel GPU-based stochastic gradient descent method for matrix factorization. In Advances in Intelligent Systems and Computing (Vol. 887, pp. 271–287). Springer Verlag. https://doi.org/10.1007/978-3-030-03405-4_18
