Domain-Adaptive Self-Supervised Face & Body Detection in Drawings


Abstract

Drawings are powerful means of pictorial abstraction and communication. Understanding diverse forms of drawings, including digital arts, cartoons, and comics, has been a major problem of interest for the computer vision and computer graphics communities. Although there are large amounts of digitized drawings from comic books and cartoons, they contain vast stylistic variations, which necessitate expensive manual labeling for training domain-specific recognizers. In this work, we show how self-supervised learning, based on a teacher-student network with a modified student network update design, can be used to build face and body detectors. Our setup allows exploiting large amounts of unlabeled data from the target domain when labels are provided for only a small subset of it. We further demonstrate that style transfer can be incorporated into our learning pipeline to bootstrap detectors using a vast amount of out-of-domain labeled images from natural images (i.e., images from the real world). Our combined architecture yields detectors with state-of-the-art (SOTA) and near-SOTA performance using minimal annotation effort. Our code can be accessed from https://github.com/barisbatuhan/DASS_Detector.
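To make the teacher-student idea concrete, the sketch below shows a generic pseudo-labelling loop in PyTorch: a frozen teacher produces confident pseudo-labels on unlabeled target-domain (drawing) images, the student trains on those plus the small labeled subset, and the teacher tracks the student via an exponential moving average. The toy classifier head, confidence threshold, and EMA schedule are illustrative assumptions; they do not reproduce the paper's modified student-update design or its actual detector, which are available in the linked repository.

```python
# Illustrative sketch only: a generic teacher-student pseudo-labelling loop.
# The backbone, threshold, and EMA decay are placeholder assumptions, not the
# authors' exact design (the paper modifies the student update itself).
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyDetectorHead(nn.Module):
    """Toy stand-in for a face/body detector: predicts per-image class scores."""
    def __init__(self, num_classes=2):  # e.g., face, body
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(16, num_classes)

    def forward(self, x):
        return self.head(self.backbone(x))

student = TinyDetectorHead()
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)  # teacher is never updated by gradients

optimizer = torch.optim.SGD(student.parameters(), lr=1e-3, momentum=0.9)
CONF_THRESH = 0.8   # keep only confident pseudo-labels (assumed value)
EMA_DECAY = 0.999   # teacher follows the student via EMA (assumed value)

def ema_update(teacher, student, decay=EMA_DECAY):
    # Teacher parameters become a slow exponential moving average of the student's.
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(decay).add_(s, alpha=1.0 - decay)

def train_step(labeled_imgs, labels, unlabeled_imgs):
    # 1) Teacher pseudo-labels unlabeled target-domain (drawing) images.
    with torch.no_grad():
        probs = teacher(unlabeled_imgs).softmax(dim=-1)
        conf, pseudo = probs.max(dim=-1)
        keep = conf > CONF_THRESH

    # 2) Student learns from the small labeled subset plus confident pseudo-labels.
    sup_loss = F.cross_entropy(student(labeled_imgs), labels)
    if keep.any():
        unsup_loss = F.cross_entropy(student(unlabeled_imgs[keep]), pseudo[keep])
    else:
        unsup_loss = torch.zeros((), device=labeled_imgs.device)
    loss = sup_loss + unsup_loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # 3) Teacher slowly tracks the updated student.
    ema_update(teacher, student)
    return loss.item()

# Dummy batch to show the call signature.
loss = train_step(torch.randn(4, 3, 64, 64), torch.randint(0, 2, (4,)),
                  torch.randn(8, 3, 64, 64))
```

In the same spirit, the style-transfer bootstrapping described in the abstract can be pictured as rendering labeled natural images in a drawing-like style before supervised pre-training, so the detector sees target-domain appearance while keeping the original bounding-box labels; the exact transfer model used is detailed in the paper and repository.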

Citation (APA)

Topal, B. B., Yuret, D., & Sezgin, T. M. (2023). Domain-Adaptive Self-Supervised Face & Body Detection in Drawings. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2023-August, pp. 1432–1439). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2023/159
