A Word Embeddings Model for Sentence Similarity

  • Mijangos V
  • Sierra G
  • Herrera A

Abstract

Word embeddings (Bengio et al., 2003; Mikolov et al., 2013) have recently had a major boom due to their performance on different Natural Language Processing tasks, surpassing many conventional methods in the literature. From the obtained embedding vectors, we can achieve a good grouping of words and surface elements. It is common to represent higher-level elements such as sentences using the idea of composition (Baroni et al., 2014), through vector sum, vector product, or by defining a linear operator that represents the composition. Here, we propose representing a sentence as a matrix containing the word embedding vectors of that sentence. However, this requires a distance between matrices; to obtain one, we use the Frobenius inner product. We show that this sentence representation outperforms traditional composition methods.
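The core idea in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it stacks a sentence's word vectors into a matrix and scores two sentences with a cosine-normalized Frobenius inner product. Zero-padding shorter sentences to a common row count is an assumption made here so the matrices are comparable; the paper may align sentences differently.

```python
import numpy as np

def frobenius_similarity(A, B):
    """Cosine-style similarity under the Frobenius inner product.

    A, B: (n_words, dim) matrices whose rows are word embedding
    vectors. Both matrices must have the same shape.
    """
    inner = np.sum(A * B)  # <A, B>_F = sum_ij A_ij * B_ij
    norm = np.linalg.norm(A) * np.linalg.norm(B)  # product of Frobenius norms
    return inner / norm if norm else 0.0

def pad(M, n_rows):
    """Zero-pad a sentence matrix to n_rows rows (an assumption,
    used here only to make matrices of different sentence lengths
    comparable)."""
    return np.vstack([M, np.zeros((n_rows - M.shape[0], M.shape[1]))])

# Toy example with random stand-ins for real embeddings.
rng = np.random.default_rng(0)
s1 = rng.normal(size=(3, 5))  # 3-word sentence, 5-dim embeddings
s2 = rng.normal(size=(4, 5))  # 4-word sentence
n = max(s1.shape[0], s2.shape[0])
sim = frobenius_similarity(pad(s1, n), pad(s2, n))
```

By contrast, the compositional baselines mentioned in the abstract would first collapse each sentence matrix into a single vector (e.g. `A.sum(axis=0)`) before comparing, discarding per-word structure that the matrix representation keeps.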

Cite

APA

Mijangos, V., Sierra, G., & Herrera, A. (2016). A Word Embeddings Model for Sentence Similarity. Research in Computing Science, 117(1), 63–74. https://doi.org/10.13053/rcs-117-1-5
