Semantic Hilbert space for text representation learning

Abstract

Capturing the meaning of sentences has long been a challenging task. Current models tend to apply linear combinations of word features to conduct semantic composition for larger-granularity units, e.g., phrases, sentences, and documents. However, semantic linearity does not always hold in human language. For instance, the meaning of the phrase “ivory tower” cannot be deduced by linearly combining the meanings of “ivory” and “tower”. To address this issue, we propose a new framework that models different levels of semantic units (e.g., sememe, word, sentence, and semantic abstraction) on a single Semantic Hilbert Space, which naturally admits a non-linear semantic composition by means of a complex-valued vector word representation. An end-to-end neural network is proposed to implement the framework for the text classification task, and evaluation results on six benchmark text classification datasets demonstrate the effectiveness, robustness, and self-explanation power of the proposed model. Furthermore, intuitive case studies are conducted to help end users understand how the framework works.
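To give a feel for the kind of representation the abstract describes, here is a minimal sketch of composing complex-valued word vectors on a Hilbert space via superposition and a density matrix. All names, dimensions, and the mixture weights are assumptions for illustration, not the paper's actual architecture:

```python
import numpy as np

# Hypothetical illustration (dimensions and weights are assumptions, not the
# paper's learned parameters): each word is a unit-norm complex vector,
# i.e., a pure state on a Semantic Hilbert Space.
dim = 4
rng = np.random.default_rng(0)

def complex_embedding(dim, rng):
    """Random unit-norm complex vector standing in for a learned embedding."""
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

ivory = complex_embedding(dim, rng)
tower = complex_embedding(dim, rng)

# Non-linear composition: a weighted superposition of word states whose
# density matrix (outer product with its conjugate) mixes amplitudes AND
# phases, so the phrase representation is not a plain linear feature average.
weights = np.array([0.6, 0.4])  # assumed mixture weights
phrase = np.sqrt(weights[0]) * ivory + np.sqrt(weights[1]) * tower
phrase /= np.linalg.norm(phrase)
rho = np.outer(phrase, phrase.conj())  # density matrix of the phrase state

assert np.isclose(np.trace(rho).real, 1.0)  # unit trace: a valid state
assert np.allclose(rho, rho.conj().T)       # Hermitian, as required
```

The density matrix is quadratic in the word amplitudes and sensitive to their relative phases, which is one way a complex-valued representation can capture compositions that a linear combination of real-valued features cannot.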

Citation (APA)
Wang, B., Li, Q., Melucci, M., & Song, D. (2019). Semantic Hilbert space for text representation learning. In The Web Conference 2019 - Proceedings of the World Wide Web Conference, WWW 2019 (pp. 3293–3299). Association for Computing Machinery, Inc. https://doi.org/10.1145/3308558.3313516
