A neural network component for knowledge-based semantic representations of text


Abstract

This paper presents Semantic Neural Networks (SNNs), a knowledge-aware component based on deep learning. An SNN can be trained to encode explicit semantic knowledge from an arbitrary knowledge base and can subsequently be combined with other deep learning architectures. At prediction time, the SNN provides a semantic encoding extracted from the input data, which other neural network components can exploit to build extended representation models that address related problems. The SNN architecture is defined in terms of the concepts and relations present in a knowledge base, and a training procedure is developed on this basis. Finally, an experimental setup illustrates the behaviour and performance of an SNN on a specific NLP problem: opinion mining for the classification of movie reviews.
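The paper itself defines the architecture and training procedure; as a loose illustration of the general idea only (not the authors' implementation), the sketch below derives a "semantic encoding" from input tokens using a toy knowledge base and concatenates it with a dense feature vector from another component. The knowledge base, concept names, and functions here are all hypothetical.

```python
# Hypothetical sketch in the spirit of an SNN semantic encoding.
# The toy knowledge base and every name below are illustrative,
# not taken from the paper.

TOY_KB = {
    # concept -> surface forms that provide evidence for it
    "FILM":     {"movie", "film", "cinema"},
    "POSITIVE": {"great", "excellent", "enjoyable"},
    "NEGATIVE": {"boring", "awful", "dull"},
}
CONCEPTS = sorted(TOY_KB)  # fixed concept order for the output vector

def semantic_encoding(tokens):
    """One activation per KB concept: fraction of tokens evidencing it."""
    tokens = [t.lower() for t in tokens]
    return [
        sum(t in TOY_KB[c] for t in tokens) / max(len(tokens), 1)
        for c in CONCEPTS
    ]

def extended_representation(tokens, dense_features):
    """Concatenate the semantic encoding with features from another
    (here simulated) neural network component."""
    return semantic_encoding(tokens) + list(dense_features)

review = "a great movie but a dull ending".split()
print(extended_representation(review, [0.2, 0.7]))
```

In the paper's setting the concept activations would be produced by a trained neural component rather than by lexical lookup, but the composition pattern — semantic encoding concatenated with other learned representations — is the same.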

Citation (APA)

Piad-Morffis, A., Muñoz, R., Almeida-Cruz, Y., Gutiérrez, Y., Estevez-Velarde, S., & Montoyo, A. (2019). A neural network component for knowledge-based semantic representations of text. In International Conference Recent Advances in Natural Language Processing, RANLP (Vol. 2019-September, pp. 904–911). Incoma Ltd. https://doi.org/10.26615/978-954-452-056-4_105
