The primary challenge in applying machine learning to graphs is representing, or encoding, graph structure so that it can be easily exploited by machine learning models. Traditionally, machine learning approaches relied on user-defined heuristics to extract features encoding structural information about a graph. In recent years, however, there has been a surge of approaches that automatically learn to encode graph structure into low-dimensional embeddings, using techniques based on deep learning and nonlinear dimensionality reduction. This chapter reviews key advances in representation learning on graphs, including matrix factorization-based methods, random walk-based algorithms, and graph convolutional networks. It covers methods for embedding individual nodes as well as approaches for embedding entire (sub)graphs.
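To make the random walk-based family concrete, here is a minimal sketch of its first step: sampling short uniform random walks from each node. In methods of this kind (e.g., DeepWalk-style approaches), the walks would then be fed to a skip-gram model as if they were sentences, so that nodes co-occurring on walks receive similar embeddings. The toy graph, function names, and parameters below are illustrative assumptions, not code from the chapter.

```python
import random

def random_walk(graph, start, length, rng):
    """Sample one uniform random walk of up to `length` nodes from `start`."""
    walk = [start]
    for _ in range(length - 1):
        neighbors = graph[walk[-1]]
        if not neighbors:              # dead end: stop the walk early
            break
        walk.append(rng.choice(neighbors))
    return walk

def generate_walks(graph, walks_per_node, length, seed=0):
    """Sample `walks_per_node` walks starting from every node in the graph."""
    rng = random.Random(seed)
    walks = []
    for node in graph:
        for _ in range(walks_per_node):
            walks.append(random_walk(graph, node, length, rng))
    return walks

# Toy undirected graph as an adjacency list (illustrative only).
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
walks = generate_walks(graph, walks_per_node=2, length=5)
print(len(walks))   # 2 walks per node over 4 nodes -> 8 walks
```

The walk corpus produced here is only the structural-sampling half of such a method; the embedding itself comes from training a skip-gram (word2vec-style) model on these node sequences, which is omitted for brevity.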
Raj P. M., K., Mohan, A., & Srinivasa, K. G. (2018). Representation Learning on Graphs (pp. 301–317). https://doi.org/10.1007/978-3-319-96746-2_15