Forget and Diversify: Regularized Refinement for Weakly Supervised Object Detection

Abstract

We study weakly supervised learning for object detectors, where training images have only image-level class labels. This problem is often addressed by multiple instance learning, where pseudo-labels of proposals are constructed from image-level weak labels and detectors are learned from these potentially noisy labels. Since existing methods train models in a discriminative manner, they typically collapse onto salient object parts and fail to localize multiple instances within an image. To alleviate such limitations, we propose simple yet effective regularization techniques, weight reinitialization and label perturbation, which prevent overfitting to noisy labels by forgetting biased weights. We also introduce a graph-based mode-seeking technique that identifies multiple object instances in a principled way. The combination of the two proposed techniques reduces the overfitting frequently observed in the weakly supervised setting and greatly improves object localization performance on standard benchmarks.
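The sketch below is a minimal, illustrative interpretation of the two regularizers described in the abstract: reinitializing a refinement head between rounds (forgetting biased weights) and randomly perturbing pseudo-labels. It is not the authors' implementation; the model architecture, data shapes, perturbation rate, and training schedule are all hypothetical assumptions.

```python
# Illustrative sketch only (not the paper's code): reinitialize the refinement
# head between rounds and inject label noise into the pseudo-labels.
import torch
import torch.nn as nn

def perturb_pseudo_labels(labels: torch.Tensor, num_classes: int,
                          rate: float = 0.1) -> torch.Tensor:
    """Randomly replace a fraction of pseudo-labels (assumed perturbation scheme)."""
    mask = torch.rand(labels.shape) < rate
    random_labels = torch.randint(0, num_classes, labels.shape)
    return torch.where(mask, random_labels, labels)

def refine(features: torch.Tensor, pseudo_labels: torch.Tensor,
           num_classes: int, rounds: int = 3, epochs: int = 5) -> nn.Module:
    head = nn.Linear(features.shape[1], num_classes)  # proposal classification head
    for _ in range(rounds):
        # "Forget": reinitialize the head so it does not inherit biased weights.
        head.reset_parameters()
        optimizer = torch.optim.SGD(head.parameters(), lr=1e-2)
        for _ in range(epochs):
            noisy_labels = perturb_pseudo_labels(pseudo_labels, num_classes)
            loss = nn.functional.cross_entropy(head(features), noisy_labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        # In the actual method the pseudo-labels would be re-estimated here from
        # the current detector's predictions; omitted in this toy sketch.
    return head

# Toy usage with random proposal features and pseudo-labels.
feats = torch.randn(100, 128)
labels = torch.randint(0, 20, (100,))
refined_head = refine(feats, labels, num_classes=20)
```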

Cite (APA)

Son, J., Kim, D., Lee, S., Kwak, S., Cho, M., & Han, B. (2019). Forget and Diversify: Regularized Refinement for Weakly Supervised Object Detection. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11364 LNCS, pp. 632–648). Springer Verlag. https://doi.org/10.1007/978-3-030-20870-7_39
