Sabrina Spellman at SemEval-2023 Task 5: Discover the Shocking Truth Behind this Composite Approach to Clickbait Spoiling!

Abstract

This paper describes an approach to automatically close the knowledge gap created by clickbait posts via a transformer model trained for question answering, augmented by a task-specific post-processing step. The work was carried out as part of the SemEval-2023 Clickbait shared task (Fröbe et al., 2023a), specifically Task 5. We devised strategies to better fit the existing model to the task, e.g. using different specialized models and a post-processor tailored to the inherent challenges of the task. Furthermore, we explored expanding the original training data using heuristic labeling and semi-supervised learning. With these adjustments, we improved on the baseline by 9.8 percentage points, reaching a BLEU-4 score of 48.0%.
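The core idea in the abstract (treating the clickbait post as a question and the linked article as the context for an extractive question-answering transformer, followed by a post-processing step) can be sketched as follows. This is a minimal illustration, not the authors' exact configuration: the model checkpoint and the trimming heuristic are assumptions.

```python
# Minimal sketch, assuming an off-the-shelf extractive QA checkpoint:
# the clickbait post plays the role of the question, the linked article
# is the context, and the extracted span is the candidate spoiler.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="deepset/roberta-base-squad2",  # assumed checkpoint, not the paper's
)

def spoil(clickbait_post: str, article_text: str) -> str:
    """Extract a candidate spoiler span for a clickbait post."""
    answer = qa(question=clickbait_post, context=article_text)
    spoiler = answer["answer"]
    # Simplified stand-in for the task-specific post-processing step:
    # strip stray punctuation and whitespace from the extracted span.
    return spoiler.strip(" \n\t.,;:")

print(spoil(
    "You won't believe what this study found about sleep!",
    "A new study finds that seven hours of sleep is optimal for most adults...",
))
```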

Cite (APA)

Birkenheuer, S., Drechsel, J., Justen, P., Pöhlmann, J., Gonsior, J., & Reusch, A. (2023). Sabrina Spellman at SemEval-2023 Task 5: Discover the Shocking Truth Behind this Composite Approach to Clickbait Spoiling! In 17th International Workshop on Semantic Evaluation, SemEval 2023 - Proceedings of the Workshop (pp. 969–977). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.semeval-1.134
