Stochastic gradient descent (SGD) is an effective algorithm for solving the matrix factorization problem. However, the performance of SGD depends critically on how learning rates are tuned over time. In this paper, we propose a novel per-dimension learning rate schedule called AALRSMF. This schedule relies on local gradients, requires no manual tuning of a global learning rate, and is shown to be robust to the selection of hyper-parameters. Extensive experiments demonstrate that the proposed schedule shows promising results compared to existing ones on matrix factorization.
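To illustrate the general idea of a per-dimension learning rate schedule driven by local gradients, the sketch below runs SGD matrix factorization with AdaGrad-style per-coordinate step sizes. This is an illustrative stand-in, not the paper's AALRSMF update rule; the toy ratings matrix, factor dimension `k`, and all hyper-parameter values are assumptions chosen for the demonstration.

```python
import numpy as np

# Illustrative stand-in: per-dimension adaptive learning rates (AdaGrad-style)
# for SGD matrix factorization. AALRSMF's actual schedule differs in detail;
# this only shows scaling each coordinate's step by its own gradient history.

rng = np.random.default_rng(0)
n_users, n_items, k = 6, 5, 3
R = rng.integers(1, 6, size=(n_users, n_items)).astype(float)  # toy ratings

P = 0.1 * rng.standard_normal((n_users, k))   # user latent factors
Q = 0.1 * rng.standard_normal((n_items, k))   # item latent factors
Gp = np.zeros_like(P)                         # squared-gradient accumulators
Gq = np.zeros_like(Q)
eta, lam, eps = 0.5, 0.01, 1e-8               # assumed hyper-parameters

for epoch in range(200):
    for u in range(n_users):
        for i in range(n_items):
            err = R[u, i] - P[u] @ Q[i]
            gp = -err * Q[i] + lam * P[u]     # local gradient w.r.t. P[u]
            gq = -err * P[u] + lam * Q[i]     # local gradient w.r.t. Q[i]
            Gp[u] += gp ** 2                  # per-dimension history
            Gq[i] += gq ** 2
            # Each coordinate gets its own effective learning rate.
            P[u] -= eta / np.sqrt(Gp[u] + eps) * gp
            Q[i] -= eta / np.sqrt(Gq[i] + eps) * gq

rmse = np.sqrt(np.mean((R - P @ Q.T) ** 2))
```

Because the step size adapts per coordinate, no global learning-rate decay needs to be hand-tuned, which is the property the proposed schedule also targets.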
Wei, F., Guo, H., Cheng, S., & Jiang, F. (2016). AALRSMF: An adaptive learning rate schedule for matrix factorization. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9932 LNCS, pp. 410–413). Springer Verlag. https://doi.org/10.1007/978-3-319-45817-5_36