Automatic baseball commentary generation using deep learning

Abstract

Video captioning is a method for summarizing or explaining a video. However, it is very difficult to create sports commentary, i.e., running scene-by-scene descriptions, for sports videos using conventional video captioning. This is because sports commentary requires not only specific and varied information about every scene, such as descriptions of player actions in baseball, but also background knowledge and dynamic at-bat statistics that are not found in the video. We propose a new system to automatically generate commentary for baseball games. Given real-time baseball video, our system relays suitable descriptions using four deep-learning models (a scene classifier, a player detector, a motion recognizer, and a pitching result recognizer) integrated with a domain ontology. These four models provide information about "who is doing what in which area of the field" and "what results are expected". This information is used to select an appropriate template, which is combined with knowledge from the baseball ontology to generate commentary. We evaluate our system using baseball games from the KBO (Korea Baseball Organization) League's 2018 season. The evaluation shows that our system can act as a commentator for baseball games.
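
For illustration only, below is a minimal sketch of the template-based generation step the abstract describes: the outputs of the four recognition models select a commentary template, which is then filled with background knowledge from the ontology. All names here (FrameAnalysis, TEMPLATES, generate_commentary) and the template strings are hypothetical placeholders, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class FrameAnalysis:
    """Combined output of the four deep-learning models (hypothetical schema)."""
    scene: str         # scene classifier, e.g. "batting", "pitching"
    player: str        # player detector, e.g. detected batter's name
    motion: str        # motion recognizer, e.g. "swing", "throw"
    pitch_result: str  # pitching result recognizer, e.g. "hit", "strike"

# Hypothetical template bank keyed by (scene, motion, pitch_result)
TEMPLATES = {
    ("batting", "swing", "hit"): "{player} connects on the pitch for a base hit!",
    ("pitching", "throw", "strike"): "{player} fires a strike. The count is now {count}.",
}

def generate_commentary(analysis: FrameAnalysis, ontology: dict) -> str:
    """Select a template from the models' outputs and fill it with
    background knowledge (e.g. the current at-bat count) from the ontology."""
    key = (analysis.scene, analysis.motion, analysis.pitch_result)
    template = TEMPLATES.get(key, "{player} is in action on the field.")
    return template.format(player=analysis.player,
                           count=ontology.get("count", "unknown"))

# Example usage with made-up values
frame = FrameAnalysis(scene="batting", player="Kim", motion="swing", pitch_result="hit")
print(generate_commentary(frame, {"count": "1-1"}))
```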

Citation (APA)

Kim, B. J., & Choi, Y. S. (2020). Automatic baseball commentary generation using deep learning. In Proceedings of the ACM Symposium on Applied Computing (pp. 1056–1065). Association for Computing Machinery. https://doi.org/10.1145/3341105.3374063
