This paper describes our proposed solution for SemEval-2017 Task 1: Semantic Textual Similarity (Cer et al., 2017). The task aims at measuring the degree of semantic equivalence between sentences given in English. Performance is evaluated by computing the Pearson correlation between the predicted scores and human judgements. Our proposed system consists of two subsystems and one regression model for predicting STS scores. The two subsystems are designed to learn Paraphrase and Event Embeddings, which allow the system to take paraphrasing characteristics and sentence structure into account. The regression model combines these embeddings to make the final predictions. The experimental results show that our system achieves a Pearson correlation score of 0.8 on this task.
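A minimal sketch of the pipeline and the evaluation described above, in Python. The abstract does not specify the embedding dimensions or the regression model, so the random feature matrices standing in for the Paraphrase and Event Embeddings, the ridge regressor, and all variable names below are assumptions for illustration; only the Pearson-correlation evaluation mirrors the task's official metric.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge

# Hypothetical stand-ins for the two subsystems' outputs: each maps a
# sentence pair to a fixed-size vector (Paraphrase / Event Embedding).
rng = np.random.default_rng(0)
n_pairs, d_para, d_event = 200, 50, 30
paraphrase_emb = rng.normal(size=(n_pairs, d_para))
event_emb = rng.normal(size=(n_pairs, d_event))
gold_scores = rng.uniform(0, 5, size=n_pairs)  # STS gold labels lie in [0, 5]

# The regression model associates the two embeddings; here this is done by
# simple concatenation followed by ridge regression, which is an assumption,
# not necessarily the authors' exact model.
features = np.concatenate([paraphrase_emb, event_emb], axis=1)
split = n_pairs // 2
model = Ridge(alpha=1.0).fit(features[:split], gold_scores[:split])
predicted = model.predict(features[split:])

# Official evaluation: Pearson correlation between predicted scores
# and human judgements.
r, _ = pearsonr(predicted, gold_scores[split:])
print(f"Pearson correlation: {r:.3f}")
```

With real embeddings in place of the random matrices, the same correlation computation is what produces the 0.8 score reported for the task.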
CITATION STYLE
Lee, I. T., Goindani, M., Li, C., Jin, D., Johnson, K. M., Zhang, X., … Goldwasser, D. (2017). PurdueNLP at SemEval-2017 Task 1: Predicting Semantic Textual Similarity with Paraphrase and Event Embeddings. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017) (pp. 198–202). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/S17-2029