Recipe Representation Learning with Networks


Abstract

Learning effective representations for recipes is essential in food studies for recommendation, classification, and other applications. While textual and cross-modal embeddings for recipes have been developed, the structural relationships among recipes and food items remain less explored. In this paper, we formalize the problem of recipe representation learning with networks, incorporating both textual features and structural relational features into recipe representations. Specifically, we first present RecipeNet, a new large-scale corpus of recipe data to facilitate network-based food studies and recipe representation learning research. We then propose a novel heterogeneous recipe network embedding model, rn2vec, to learn recipe representations. The proposed model captures textual, structural, and nutritional information through several neural network modules, including a textual CNN, an inner-ingredients transformer, and a graph neural network with hierarchical attention. We further design a combined objective function of node classification and link prediction to jointly optimize the model. Extensive experiments show that our model outperforms state-of-the-art baselines on two classic food study tasks. Dataset and codes are available at https://github.com/meettyj/rn2vec.
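The combined objective mentioned above can be illustrated with a minimal sketch: a cross-entropy loss for node classification plus a binary cross-entropy loss for link prediction, mixed with a balancing weight. The weighting scheme (`alpha`) and the per-example formulation are hypothetical assumptions for illustration, not the paper's exact formulation.

```python
import math

def node_classification_loss(logits, label):
    # Cross-entropy over class logits for one recipe node
    # (numerically stable log-softmax).
    m = max(logits)
    log_z = m + math.log(sum(math.exp(v - m) for v in logits))
    return -(logits[label] - log_z)

def link_prediction_loss(score, linked):
    # Binary cross-entropy on a sigmoid-scored candidate edge
    # (e.g. a recipe-ingredient link).
    p = 1.0 / (1.0 + math.exp(-score))
    target = 1.0 if linked else 0.0
    return -(target * math.log(p) + (1.0 - target) * math.log(1.0 - p))

def combined_loss(logits, label, score, linked, alpha=0.5):
    # Joint objective: weighted sum of the two task losses.
    # alpha is a hypothetical balancing weight, not taken from the paper.
    return (alpha * node_classification_loss(logits, label)
            + (1.0 - alpha) * link_prediction_loss(score, linked))
```

Training on both tasks at once lets the shared recipe embeddings receive gradient signal from classification labels and from the network structure simultaneously.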

Citation (APA)
Tian, Y., Zhang, C., Metoyer, R., & Chawla, N. V. (2021). Recipe Representation Learning with Networks. In International Conference on Information and Knowledge Management, Proceedings (pp. 1824–1833). Association for Computing Machinery. https://doi.org/10.1145/3459637.3482468
