Performing object detection on partially occluded objects is a challenging task due to the wide variation in the location, scale, and aspect ratio of real-world occlusions. A typical solution to this problem is to provide a sufficiently large dataset with ample occluded samples for feature learning. However, this is costly given the time and effort involved in data collection, and even with such a dataset, there is no guarantee that it covers all common cases of occlusion in the real world. In this paper, we propose an alternative approach that utilizes the power of adversarial learning to reinforce the training of common object detection models. More specifically, we propose a Generative Adversarial Occlusion Network (GAON) capable of generating partially occluded training samples that are challenging for the object detector to classify. We demonstrate the efficacy of this approach by conducting experiments with the Faster R-CNN detector, and the results indicate the superiority of our approach in improving the model's performance on occluded inputs.
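The abstract does not spell out how the occlusion network generates its hard samples, but the general adversarial-occlusion idea can be illustrated with a minimal sketch. The sketch below is an assumption, not the paper's method: instead of a learned generator, a hypothetical helper `hardest_occlusion_mask` greedily zeroes out the highest-activation window of a feature map, standing in for a network trained to produce the mask that most increases the detector's loss.

```python
import numpy as np

def hardest_occlusion_mask(feature_map, win=3):
    """Pick the win x win window with the highest total activation and
    zero it out -- a crude, greedy stand-in for a learned occlusion
    network, which would instead *predict* the mask that most hurts
    the downstream detector."""
    h, w = feature_map.shape
    best_score, best_pos = -np.inf, (0, 0)
    for i in range(h - win + 1):
        for j in range(w - win + 1):
            score = feature_map[i:i + win, j:j + win].sum()
            if score > best_score:
                best_score, best_pos = score, (i, j)
    mask = np.ones_like(feature_map)
    i, j = best_pos
    mask[i:i + win, j:j + win] = 0.0
    return mask

# Adversarial training step (sketch): the detector is trained on the
# occluded features, while the occlusion generator is rewarded when
# the detector's classification loss increases.
rng = np.random.default_rng(0)
features = rng.random((7, 7))          # toy stand-in for a ROI feature map
occluded = features * hardest_occlusion_mask(features)
```

In the full adversarial setup, the hand-crafted search above would be replaced by a small network whose loss is the negative of the detector's loss, so the two models are trained in competition.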
Citation
Li, F., Li, J., & Deng, Y. (2022). Faster R-CNN with Generative Adversarial Occlusion Network for Object Detection. In ACM International Conference Proceeding Series (pp. 526–531). Association for Computing Machinery. https://doi.org/10.1145/3529836.3529854