Inspire at SemEval-2016 Task 2: Interpretable Semantic Textual Similarity Alignment Based on Answer Set Programming

Abstract

In this paper we present our system developed for SemEval-2016 Task 2, Interpretable Semantic Textual Similarity, along with the results obtained for our submitted runs. Our system participated in the subtasks predicting chunk similarity alignments both for gold chunks and for predicted chunks. The Inspire system extends the basic ideas of last year's participant NeRoSim; however, we realize the rules in logic programming and obtain the result with an Answer Set Solver. To prepare the input for the logic program, we use the PunktTokenizer, Word2Vec, and WordNet APIs of NLTK, and the POS- and NER-taggers from Stanford CoreNLP. For chunking, we use a joint POS-tagger and dependency parser and, based on its output, determine chunks with an Answer Set Program. Our system ranked third overall and first in the Headlines gold chunk subtask.
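The abstract compresses the pipeline into a few sentences; the minimal sketch below illustrates the general pattern of turning lexical similarities into ASP facts and letting an answer set solver choose an alignment. It uses NLTK's WordNet interface and clingo's Python API; the example chunks, the chunk_similarity helper, the 50-point threshold, and the alignment rules are hypothetical illustrations, not the authors' actual encoding or rule set.

# A rough sketch: score chunk pairs with WordNet (NLTK), emit the scores
# as ASP facts, and let clingo choose an alignment.
# Requires: pip install nltk clingo; nltk.download("wordnet")
import clingo
from nltk.corpus import wordnet as wn

def chunk_similarity(chunk_a, chunk_b):
    # Hypothetical helper: best Wu-Palmer score over all word/synset
    # pairs of the two chunks, scaled to an integer in 0..100.
    best = 0.0
    for wa in chunk_a.split():
        for wb in chunk_b.split():
            for sa in wn.synsets(wa):
                for sb in wn.synsets(wb):
                    best = max(best, sa.wup_similarity(sb) or 0.0)
    return int(best * 100)

chunks1 = ["a man", "is playing", "a guitar"]      # illustrative input only
chunks2 = ["someone", "plays", "an instrument"]

facts = [f"chunk1({i})." for i in range(len(chunks1))]
facts += [f"chunk2({j})." for j in range(len(chunks2))]
facts += [f"sim({i},{j},{chunk_similarity(a, b)})."
          for i, a in enumerate(chunks1) for j, b in enumerate(chunks2)]

rules = """
% each chunk of sentence 1 aligns to at most one sufficiently similar chunk
{ align(C1,C2) : sim(C1,C2,S), S >= 50 } 1 :- chunk1(C1).
% no chunk of sentence 2 may be aligned twice
:- chunk2(C2), 2 { align(C1,C2) : chunk1(C1) }.
% prefer alignments with high total similarity
#maximize { S,C1,C2 : align(C1,C2), sim(C1,C2,S) }.
#show align/2.
"""

ctl = clingo.Control()
ctl.add("base", [], "\n".join(facts) + rules)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print(m.symbols(shown=True)))

The actual system additionally assigns alignment types and similarity scores, as the interpretable STS task requires, and draws on richer features (Word2Vec, POS and NER tags); the sketch only shows the fact-generation-plus-solving pattern.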

Citation (APA)
Kazmi, M., & Schüller, P. (2016). Inspire at SemEval-2016 task 2: Interpretable semantic textual similarity alignment based on answer set programming. In SemEval 2016 - 10th International Workshop on Semantic Evaluation, Proceedings (pp. 1109–1115). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/s16-1171
