Abstract
In this paper we present a deep-learning system that competed in SemEval-2017 Task 6, “#HashtagWars: Learning a Sense of Humor”. We participated in Subtask A, in which the goal was, given two Twitter messages, to identify which one is funnier. We propose a Siamese architecture with bidirectional Long Short-Term Memory (LSTM) networks, augmented with an attention mechanism. Our system works at the token level, leveraging word embeddings trained on a large collection of unlabeled Twitter messages. We ranked 2nd among 7 teams. A post-competition improvement of our model achieves state-of-the-art results on the #HashtagWars dataset.
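To make the pairwise setup concrete, here is a minimal numpy sketch of the Siamese idea: one encoder with shared weights maps each tweet to a vector via attention pooling over its token embeddings, and a comparison layer scores which tweet is funnier. This is an illustrative toy, not the paper's exact architecture — the real system encodes tokens with bidirectional LSTMs before attention, and all names, dimensions, and the dot-product comparison layer here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
EMB_DIM = 8

def attention_pool(H, w):
    # H: (seq_len, dim) token representations; w: (dim,) attention vector.
    scores = H @ w                       # one relevance score per token
    a = np.exp(scores - scores.max())
    a /= a.sum()                         # softmax -> attention weights
    return a @ H                         # weighted sum -> tweet vector

def encode(token_ids, emb, w):
    # Shared ("Siamese") encoder: both tweets go through the SAME weights.
    # (The paper uses a bidirectional LSTM here; we pool raw embeddings.)
    H = emb[np.array(token_ids)]
    return attention_pool(H, w)

# Toy vocabulary and randomly initialised (untrained) parameters.
vocab = {"funny": 0, "cat": 1, "tax": 2, "form": 3}
emb = rng.normal(size=(len(vocab), EMB_DIM))   # word embeddings
w = rng.normal(size=EMB_DIM)                   # attention vector
u = rng.normal(size=EMB_DIM)                   # comparison layer

def funnier_score(tweet_a, tweet_b):
    # Positive score -> tweet_a judged funnier, negative -> tweet_b.
    va = encode([vocab[t] for t in tweet_a], emb, w)
    vb = encode([vocab[t] for t in tweet_b], emb, w)
    return float(u @ (va - vb))
```

Because the two tweets share one encoder and the comparison acts on the difference of their vectors, the score is antisymmetric: swapping the inputs flips its sign, which is exactly the property a pairwise "which is funnier" classifier needs.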
Citation
Baziotis, C., Pelekis, N., & Doulkeridis, C. (2017). DataStories at SemEval-2017 Task 6: Siamese LSTM with Attention for Humorous Text Comparison. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017) (pp. 390–395). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/S17-2065