NaCTeM at SemEval-2016 task 1: Inferring sentence-level semantic similarity from an ensemble of complementary lexical and sentence-level features

5 citations · 79 Mendeley readers

Abstract

We present a description of the system submitted to the Semantic Textual Similarity (STS) shared task at SemEval 2016. The task is to assess the degree to which two sentences carry the same meaning. We have designed two different methods to automatically compute a similarity score between sentences. The first method combines a variety of semantic similarity measures as features in a machine learning model. In our second approach, we employ training data from the Interpretable Similarity subtask to create a combined word-similarity measure and assess the importance of both aligned and unaligned words. Finally, we combine the two methods into a single hybrid model. Our best-performing run attains a score of 0.7732 on the 2015 STS evaluation data and 0.7488 on the 2016 STS evaluation data.
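As a rough illustration of the first method described above (an ensemble of sentence-pair similarity measures used as features in a supervised model), the sketch below trains a regressor on gold similarity scores. This is not the authors' implementation: the feature set (token Jaccard overlap, length ratio), the model choice (random forest regression), and the toy training pairs are all illustrative assumptions.

```python
# Minimal sketch, assuming scikit-learn: lexical similarity features
# for a sentence pair are fed to a supervised regressor trained on
# gold similarity scores. Features and model are illustrative only.
from sklearn.ensemble import RandomForestRegressor


def pair_features(s1: str, s2: str) -> list[float]:
    """Compute simple lexical similarity features for a sentence pair."""
    t1, t2 = set(s1.lower().split()), set(s2.lower().split())
    jaccard = len(t1 & t2) / len(t1 | t2) if (t1 | t2) else 0.0
    len_ratio = min(len(t1), len(t2)) / max(len(t1), len(t2)) if t1 and t2 else 0.0
    return [jaccard, len_ratio]


# Hypothetical training data: sentence pairs with gold scores in [0, 5].
train_pairs = [
    ("a man is playing a guitar", "a man plays the guitar", 4.8),
    ("a dog runs in the park", "the stock market fell today", 0.2),
]
X = [pair_features(a, b) for a, b, _ in train_pairs]
y = [score for _, _, score in train_pairs]

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print(model.predict([pair_features("a man is playing music", "a man plays a song")]))
```

In the paper itself the feature ensemble is richer (complementary lexical and sentence-level measures), and the second method additionally uses word alignments learned from the Interpretable Similarity subtask; the snippet only conveys the overall feature-plus-regressor structure.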

Citation (APA)

Przybyła, P., Nguyen, N. T. H., Shardlow, M., Kontonatsios, G., & Ananiadou, S. (2016). NaCTeM at SemEval-2016 task 1: Inferring sentence-level semantic similarity from an ensemble of complementary lexical and sentence-level features. In SemEval 2016 - 10th International Workshop on Semantic Evaluation, Proceedings (pp. 614–620). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/s16-1093
