UIC-NLP at SemEval-2020 Task 10: Exploring an Alternate Perspective on Evaluation

Citations: 2 · Mendeley readers: 57

Abstract

In this work we describe and analyze a supervised learning system for word emphasis selection in phrases drawn from visual media, developed as part of the SemEval-2020 Shared Task 10. More specifically, we begin by briefly introducing the shared task and providing an analysis of interesting and relevant features present in the training dataset. We then introduce our LSTM-based model and describe its structure, input features, and limitations. Our model ultimately failed to beat the benchmark score, achieving an average Match_m score of 0.704 on the validation data (0.659 on the test data), but it correctly predicted 84.8% of word-level emphasis labels at a 0.5 threshold. We conclude with a thorough analysis and discussion of erroneous predictions, supported by many examples and visualizations.
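The abstract's two figures of merit can be read as follows: the shared task's match score rewards overlap between the top-m most emphasized words under the gold annotations and under the model's predictions, while the 84.8% figure is simple per-word accuracy after binarizing scores at 0.5. The sketch below illustrates both quantities; the top-m definition averaged over m = 1..4 reflects a common reading of the SemEval-2020 Task 10 setup rather than the authors' released evaluation code, and all function and variable names are illustrative.

```python
# Illustrative sketch (not the authors' code) contrasting a top-m match-style
# metric with per-word accuracy at a 0.5 threshold. Gold and predicted emphasis
# scores are assumed to be one list of floats per sentence, one value per word.

from typing import List, Sequence, Tuple


def match_m(gold: List[float], pred: List[float], m: int) -> float:
    """Overlap between the top-m gold words and the top-m predicted words."""
    k = min(m, len(gold))
    top_gold = set(sorted(range(len(gold)), key=lambda i: gold[i], reverse=True)[:k])
    top_pred = set(sorted(range(len(pred)), key=lambda i: pred[i], reverse=True)[:k])
    return len(top_gold & top_pred) / k


def average_match(gold_batch: Sequence[List[float]],
                  pred_batch: Sequence[List[float]],
                  ms: Tuple[int, ...] = (1, 2, 3, 4)) -> float:
    """Average the per-sentence match score over sentences, then over m values."""
    per_m = []
    for m in ms:
        scores = [match_m(g, p, m) for g, p in zip(gold_batch, pred_batch)]
        per_m.append(sum(scores) / len(scores))
    return sum(per_m) / len(per_m)


def threshold_accuracy(gold_batch: Sequence[List[float]],
                       pred_batch: Sequence[List[float]],
                       threshold: float = 0.5) -> float:
    """Fraction of words whose binarized prediction agrees with the binarized gold score."""
    correct = total = 0
    for gold, pred in zip(gold_batch, pred_batch):
        for g, p in zip(gold, pred):
            correct += int((p >= threshold) == (g >= threshold))
            total += 1
    return correct / total


if __name__ == "__main__":
    gold = [[0.9, 0.1, 0.8, 0.2], [0.3, 0.7, 0.1]]
    pred = [[0.6, 0.2, 0.7, 0.4], [0.4, 0.9, 0.1]]
    print(f"average match: {average_match(gold, pred):.3f}")
    print(f"word accuracy @0.5: {threshold_accuracy(gold, pred):.3f}")
```

Because most words in a phrase are not emphasized, thresholded per-word accuracy can be high even when the top-m ranking is imperfect, which is consistent with the gap between the two numbers reported in the abstract.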

Citation (APA)

Hossu, P., & Parde, N. (2020). UIC-NLP at SemEval-2020 Task 10: Exploring an Alternate Perspective on Evaluation. In 14th International Workshops on Semantic Evaluation, SemEval 2020 - co-located 28th International Conference on Computational Linguistics, COLING 2020, Proceedings (pp. 1704–1709). International Committee for Computational Linguistics. https://doi.org/10.18653/v1/2020.semeval-1.223
