In this paper, we consider the task of detecting players and sports balls in real-world handball images, as a building block for action recognition. Detecting the ball remains a challenge because it is a very small object that occupies only a few pixels in the image, yet it carries much of the information needed to interpret the scene. The ball's color and appearance can vary greatly due to differing distances from the camera and motion blur. Occlusion is also common, especially since handball players carry the ball in their hands during the game, and the player with the ball is understood to be the key player in the current action. Handball players appear at different distances from the camera, are often occluded, and assume postures that differ from the everyday activities on which most object detectors are commonly trained. We compare the performance of six models based on the YOLOv2 object detector, trained on an image dataset composed of publicly available sports images and images from custom handball recordings. Person and ball detection performance is measured on the whole dataset and on the custom part, in terms of the mean average precision (mAP) metric.
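To make the detection pipeline described above concrete, the following is a minimal sketch of running a Darknet-format YOLOv2 model with OpenCV's DNN module and keeping only person and ball detections. The file names, class list, input image, and thresholds are illustrative assumptions, not the authors' released configuration or trained weights.

```python
# Hypothetical sketch: person and ball detection with a Darknet-format YOLOv2
# model loaded through OpenCV's DNN module. Paths, class order, and thresholds
# are assumptions for illustration only.
import cv2
import numpy as np

CFG_PATH = "yolov2-handball.cfg"          # assumed model config file
WEIGHTS_PATH = "yolov2-handball.weights"  # assumed trained weights
CLASS_NAMES = ["person", "ball"]          # assumed class order

CONF_THRESHOLD = 0.5
NMS_THRESHOLD = 0.4

net = cv2.dnn.readNetFromDarknet(CFG_PATH, WEIGHTS_PATH)
layer_names = net.getLayerNames()
out_layers = [layer_names[i - 1] for i in net.getUnconnectedOutLayers().flatten()]

image = cv2.imread("handball_frame.jpg")  # assumed input frame
h, w = image.shape[:2]

# YOLOv2 expects a square, normalized input; 416x416 is the usual input size.
blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(out_layers)

boxes, confidences, class_ids = [], [], []
for output in outputs:
    for det in output:
        scores = det[5:]                  # class scores follow box + objectness
        class_id = int(np.argmax(scores))
        confidence = float(scores[class_id])
        if confidence > CONF_THRESHOLD:
            # Normalized center/size box -> pixel corner box.
            cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            confidences.append(confidence)
            class_ids.append(class_id)

# Non-maximum suppression removes duplicate boxes for the same object.
keep = cv2.dnn.NMSBoxes(boxes, confidences, CONF_THRESHOLD, NMS_THRESHOLD)
for i in np.array(keep).flatten():
    x, y, bw, bh = boxes[i]
    print(f"{CLASS_NAMES[class_ids[i]]}: {confidences[i]:.2f} at ({x}, {y}, {bw}, {bh})")
```

Detections produced this way can then be compared against ground-truth annotations to compute per-class average precision and the overall mAP reported in the paper.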
Burić, M., Pobar, M., & Ivašić-Kos, M. (2019). Adapting YOLO Network for Ball and Player Detection. In International Conference on Pattern Recognition Applications and Methods (Vol. 1, pp. 845–851). Science and Technology Publications, Lda. https://doi.org/10.5220/0007582008450851