Malicious or Benign? Towards Effective Content Moderation for Children’s Videos

Abstract

Online video platforms receive hundreds of hours of uploads every minute, making manual content moderation impossible. Unfortunately, the most vulnerable consumers of malicious video content are children aged 1-5, whose attention is easily captured by bursts of color and sound. Scammers attempting to monetize their content may craft malicious children's videos that are superficially similar to educational videos but include scary and disgusting characters, violent motions, loud music, and disturbing noises. Prominent video hosting platforms like YouTube have taken measures to mitigate malicious content, but these videos often go undetected by current content moderation tools, which focus on removing pornographic or copyrighted content. This paper introduces Malicious or Benign, our toolkit for promoting research on automated content moderation of children's videos. We present 1) a customizable annotation tool for videos, 2) a new dataset with difficult-to-detect test cases of malicious content, and 3) a benchmark suite of state-of-the-art video classification models.
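For context, the sketch below shows what a single entry in a benchmark suite of this kind might look like: adapting an off-the-shelf pretrained video classifier to a binary malicious-vs-benign task. This is a minimal illustration only; the model choice (torchvision's R3D-18), the clip dimensions, and the two-class label scheme are assumptions for the example, not the configuration used in the paper.

    # Illustrative sketch: a pretrained video classifier adapted to a
    # two-way malicious/benign task. Model, clip shape, and labels are
    # assumptions, not the paper's actual benchmark setup.
    import torch
    import torch.nn as nn
    from torchvision.models.video import r3d_18, R3D_18_Weights

    # Load a 3D-CNN pretrained on the Kinetics-400 action dataset.
    model = r3d_18(weights=R3D_18_Weights.DEFAULT)

    # Swap the 400-way action head for a 2-way malicious/benign head.
    model.fc = nn.Linear(model.fc.in_features, 2)
    model.eval()

    # A dummy batch of 4 clips: (batch, channels, frames, height, width).
    clips = torch.randn(4, 3, 16, 112, 112)

    with torch.no_grad():
        logits = model(clips)          # shape: (4, 2)
        preds = logits.argmax(dim=1)   # 0 = benign, 1 = malicious (assumed)
    print(preds)

In practice, the new classification head would be fine-tuned on labeled children's video clips before evaluation; the forward pass above only demonstrates the expected input and output shapes.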

Citation (APA)

Ahmed, S. H., Khan, M. J., Umer Qaisar, H. M., & Sukthankar, G. (2023). Malicious or Benign? Towards Effective Content Moderation for Children’s Videos. In Proceedings of the International Florida Artificial Intelligence Research Society Conference, FLAIRS (Vol. 36). Florida Online Journals, University of Florida. https://doi.org/10.32473/flairs.36.133315
