Graph Representation Learning


Abstract

Graph structure, which can represent objects and their relationships, is ubiquitous in big data, including natural language. Besides the original text as a sequence of word tokens, massive amounts of additional information in NLP take the form of graphs, such as syntactic relations between words in a sentence, hyperlink relations between documents, and semantic relations between entities. Hence, it is critical for NLP to encode these graph data with graph representation learning. Graph representation learning, also known as network embedding, has been extensively studied in AI and data mining. In this chapter, we introduce a variety of graph representation learning methods that embed graph data into vectors with shallow or deep neural models. After that, we introduce how graph representation learning helps NLP tasks.
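Shallow network-embedding methods of the kind surveyed in the chapter (e.g., DeepWalk-style approaches) typically start by sampling truncated random walks over the graph and treating them as "sentences" for a skip-gram model. As a minimal sketch of that first step, assuming a toy example graph (the node names and parameter values below are hypothetical, not from the chapter):

```python
import random

# Toy undirected graph as an adjacency list (hypothetical example).
graph = {
    "a": ["b", "c"],
    "b": ["a", "c"],
    "c": ["a", "b", "d"],
    "d": ["c"],
}

def random_walk(graph, start, length, rng):
    """Generate one truncated random walk starting at `start`,
    as in DeepWalk-style embedding methods."""
    walk = [start]
    for _ in range(length - 1):
        neighbors = graph[walk[-1]]
        if not neighbors:
            break
        walk.append(rng.choice(neighbors))
    return walk

def generate_walks(graph, walks_per_node, length, seed=0):
    """Collect walks from every node. These node sequences play the
    role of sentences for a skip-gram model (e.g., word2vec), whose
    learned word vectors then serve as node embeddings."""
    rng = random.Random(seed)
    walks = []
    for _ in range(walks_per_node):
        for node in graph:
            walks.append(random_walk(graph, node, length, rng))
    return walks

walks = generate_walks(graph, walks_per_node=2, length=5)
print(len(walks))  # one walk per node per pass: 2 * 4 = 8 walks
```

Feeding these walks to an off-the-shelf skip-gram implementation would complete the shallow pipeline; deep neural models such as graph neural networks instead encode the graph structure directly, as the chapter goes on to discuss.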

Citation (APA)

Yang, C., Lin, Y., Liu, Z., & Sun, M. (2023). Graph Representation Learning. In Representation Learning for Natural Language Processing, Second Edition (pp. 169–210). Springer. https://doi.org/10.1007/978-981-99-1600-9_6
