Representation Learning for Natural Language Processing

56 Citations · 337 Readers (Mendeley)

Abstract

This open access book provides an overview of recent advances in representation learning theory, algorithms, and applications for natural language processing (NLP). It is divided into three parts. Part I presents representation learning techniques for multiple language entries, including words, phrases, sentences, and documents. Part II then introduces representation techniques for objects closely related to NLP, including entity-based world knowledge, sememe-based linguistic knowledge, networks, and cross-modal entries. Lastly, Part III provides open-resource tools for representation learning techniques and discusses the remaining challenges and future research directions. The theories and algorithms of representation learning presented can also benefit other related domains such as machine learning, social network analysis, the Semantic Web, information retrieval, data mining, and computational biology. This book is intended for advanced undergraduate and graduate students, post-doctoral fellows, researchers, lecturers, and industrial engineers, as well as anyone interested in representation learning and natural language processing.

Citation (APA)

Liu, Z., Lin, Y., & Sun, M. (2020). Representation Learning for Natural Language Processing (pp. 1–334). Springer Singapore. https://doi.org/10.1007/978-981-15-5573-2
