Automated Event Detection and Classification in Soccer: The Potential of Using Multiple Modalities

Abstract

Detecting events in videos is a complex task, and many different approaches, aimed at a large variety of use cases, have been proposed in the literature. Most approaches, however, are unimodal and consider only the visual information in the videos. This paper presents and evaluates different neural-network-based approaches in which we combine visual features with audio features to detect (spot) and classify events in soccer videos. We employ model fusion to combine modalities such as video and audio, and test these combinations against different state-of-the-art models on the SoccerNet dataset. The results show that a multimodal approach is beneficial. We also analyze how tolerance for delays in classification and spotting time, and tolerance in prediction accuracy, influence the results. Our experiments show that using multiple modalities improves event detection performance for certain types of events.
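The abstract does not spell out the fusion architecture, so the following is only a minimal sketch of the general idea of model fusion across modalities: per-modality encoders whose embeddings are concatenated before a shared classifier head. All names, feature dimensions, layer sizes, and the event classes here are illustrative assumptions, not the paper's actual configuration.

```python
# Hypothetical late-fusion sketch: assumes precomputed per-clip feature
# vectors (e.g., 512-d visual and 128-d audio embeddings). Dimensions,
# layer sizes, and class labels are illustrative only.
import torch
import torch.nn as nn


class LateFusionEventClassifier(nn.Module):
    """Classify soccer events from fused video and audio features."""

    def __init__(self, video_dim=512, audio_dim=128, num_classes=4):
        super().__init__()
        # Separate per-modality encoders before fusion.
        self.video_branch = nn.Sequential(nn.Linear(video_dim, 256), nn.ReLU())
        self.audio_branch = nn.Sequential(nn.Linear(audio_dim, 256), nn.ReLU())
        # Fusion head: concatenate modality embeddings, then classify.
        self.classifier = nn.Sequential(
            nn.Linear(512, 128),
            nn.ReLU(),
            # e.g., goal, card, substitution, background (assumed classes)
            nn.Linear(128, num_classes),
        )

    def forward(self, video_feats, audio_feats):
        v = self.video_branch(video_feats)
        a = self.audio_branch(audio_feats)
        fused = torch.cat([v, a], dim=-1)  # late fusion by concatenation
        return self.classifier(fused)


# Dummy forward pass with a batch of 8 clips.
model = LateFusionEventClassifier()
logits = model(torch.randn(8, 512), torch.randn(8, 128))
print(logits.shape)  # torch.Size([8, 4])
```

A concatenation-based fusion head like this is only one of several possible designs; the paper evaluates multiple fusion strategies and modality combinations against state-of-the-art baselines on SoccerNet.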

Citation (APA)
Nergård Rongved, O. A., Stige, M., Hicks, S. A., Thambawita, V. L., Midoglu, C., Zouganeli, E., … Halvorsen, P. (2021). Automated Event Detection and Classification in Soccer: The Potential of Using Multiple Modalities. Machine Learning and Knowledge Extraction, 3(4), 1030–1054. https://doi.org/10.3390/make3040051
