Online Learning for Recommendations at Grubhub


Abstract

We propose a method to easily modify existing offline Recommender Systems to run online using Transfer Learning. Online Learning for Recommender Systems has two main advantages: quality and scale. Like many Machine Learning algorithms in production, a recommender will suffer from Concept Drift if it is not regularly retrained. A policy that is updated frequently online can adapt to drift faster than a batch system. This is especially true for user-interaction systems like recommenders, where the underlying distribution can shift drastically to follow user behaviour. As a platform grows rapidly, as Grubhub has, the cost of running batch training jobs becomes material. A shift from stateless batch learning offline to stateful incremental learning online can recover, in Grubhub's case, up to a 45x cost savings and a +20% increase in metrics. There are a few challenges to overcome in the transition to online stateful learning, namely convergence, non-stationary embeddings, and off-policy evaluation, which we explore from our experience running this system in production.
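The shift the abstract describes can be illustrated with a toy sketch: a model is warm-started from offline-trained weights (the transfer-learning step) and then updated statefully, one interaction at a time, so it tracks a drifting user-behaviour distribution without a batch retrain. This is a minimal illustration under assumed names (`OnlineRecommender`, a synthetic click stream), not Grubhub's actual production system.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class OnlineRecommender:
    """Stateful incremental learner: parameters persist between updates
    instead of being rebuilt from scratch by a periodic batch job."""

    def __init__(self, n_features, warm_start=None, lr=0.1):
        # Transfer learning: warm-start from offline-trained weights,
        # then keep learning online from the live event stream.
        self.w = list(warm_start) if warm_start is not None else [0.0] * n_features
        self.lr = lr

    def predict(self, x):
        return sigmoid(sum(wi * xi for wi, xi in zip(self.w, x)))

    def update(self, x, y):
        # One SGD step on a single interaction event (logistic loss).
        grad = self.predict(x) - y
        self.w = [wi - self.lr * grad * xi for wi, xi in zip(self.w, x)]

random.seed(0)
model = OnlineRecommender(n_features=2, warm_start=[0.5, -0.5])

# Simulate a click stream whose underlying preference flips mid-stream,
# i.e. the Concept Drift the abstract warns about.
true_w = [2.0, -1.0]
for t in range(2000):
    if t == 1000:
        true_w = [-1.0, 2.0]  # user behaviour shifts abruptly
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    y = 1 if sum(w * xi for w, xi in zip(true_w, x)) > 0 else 0
    model.update(x, y)

# Because the model updates per event, it tracks the post-drift regime
# without waiting for the next scheduled batch retrain.
print(model.w)
```

A stateless batch pipeline would keep serving the pre-drift weights until its next scheduled training run; the stateful learner above has already re-fit to the new regime by the end of the stream, which is the quality argument the abstract makes.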

Citation (APA)

Egg, A. (2021). Online learning for recommendations at Grubhub. In RecSys 2021 - 15th ACM Conference on Recommender Systems (pp. 569–571). Association for Computing Machinery, Inc. https://doi.org/10.1145/3460231.3474599
