SemEval-2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation


Abstract

Semantic Textual Similarity (STS) seeks to measure the degree of semantic equivalence between two snippets of text. Similarity is expressed on an ordinal scale that spans from semantic equivalence to complete unrelatedness, with intermediate values capturing specifically defined levels of partial similarity. While prior evaluations were limited to monolingual snippets of text, the 2016 shared task includes a pilot subtask on computing semantic similarity between cross-lingual text snippets. This year's traditional monolingual subtask evaluates English text snippets from four domains: Plagiarism Detection, Post-Edited Machine Translations, Question-Answering, and News Article Headlines. From the question-answering domain, we include both question-question and answer-answer pairs. The cross-lingual subtask provides paired Spanish-English text snippets drawn from the same sources as the English data, as well as independently sampled news data. The English subtask attracted 43 participating teams producing 119 system submissions, while the cross-lingual Spanish-English pilot subtask attracted 10 teams resulting in 26 systems.
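To make the task setup concrete, the sketch below scores sentence pairs on the 0–5 STS band with a deliberately simple token-overlap (Jaccard) baseline and evaluates predictions against gold labels with Pearson correlation, the standard STS evaluation metric. The baseline, the example sentence pairs, and the gold scores are illustrative assumptions, not the systems or data from the shared task.

```python
import math

def jaccard_sts(s1: str, s2: str) -> float:
    """Toy STS baseline (an assumption for illustration):
    token-overlap (Jaccard) scaled to the 0-5 similarity band."""
    a, b = set(s1.lower().split()), set(s2.lower().split())
    return 5.0 * len(a & b) / len(a | b) if a | b else 0.0

def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation between predicted and gold scores,
    the metric STS uses to rank systems."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical sentence pairs and gold labels, for illustration only.
pairs = [("a man is playing a guitar", "a man plays the guitar"),
         ("a dog runs in the park", "the stock market fell today")]
gold = [4.8, 0.0]
pred = [jaccard_sts(a, b) for a, b in pairs]
```

Even this crude baseline ranks the near-paraphrase above the unrelated pair; real task systems replace the overlap score with learned similarity models while keeping the same 0–5 output and Pearson evaluation.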

Citation (APA)

Agirre, E., Banea, C., Cer, D., Diab, M., Gonzalez-Agirre, A., Mihalcea, R., … Wiebe, J. (2016). SemEval-2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation. In SemEval 2016 - 10th International Workshop on Semantic Evaluation, Proceedings (pp. 497–511). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/s16-1081
