Representation learning is an essential step in text similarity tasks. Methods based on neural variational inference first learn semantic representations of the texts and then measure how similar the texts are by computing the cosine of their representations. However, simply using a neural network to learn semantic representations is generally insufficient, as it cannot fully capture the rich semantic information. Since the similarity of contextual information reflects the similarity of text pairs in most cases, we integrate topic information into a stacked variational autoencoder during text representation learning. The improved text representations are then used for text similarity calculation. Experiments show that our approach achieves state-of-the-art performance.
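The abstract states that similarity is scored as the cosine of the learned representations. A minimal sketch of that final scoring step, with the two vectors standing in for hypothetical VAE encoder outputs (the encoder itself and the vector values are illustrative assumptions, not from the paper):

```python
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine of the angle between two text representation vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical representations of two texts (e.g., outputs of the
# topic-augmented VAE encoder described in the paper).
a = np.array([0.2, 0.5, 0.1])
b = np.array([0.4, 1.0, 0.2])

score = cosine_similarity(a, b)  # b is a scalar multiple of a, so score is 1.0
```

A score near 1.0 indicates highly similar representations; values near 0 indicate unrelated texts.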
CITATION STYLE
Su, X., Yan, R., Gong, Z., Fu, Y., & Xu, H. (2018). Integrating topic information into VAE for text semantic similarity. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11305 LNCS, pp. 546–557). Springer Verlag. https://doi.org/10.1007/978-3-030-04221-9_49