Perspective Transformation Data Augmentation for Object Detection

Abstract

One major reason for the success of convolutional neural networks (CNNs) is the availability of large-scale labeled data: effective training of CNNs relies on large annotated datasets. Unfortunately, in some real-world applications, large amounts of data with corresponding annotations are too expensive to obtain. One reasonable alternative is to use data augmentation techniques to automatically generate annotated samples. In this paper, a novel data augmentation framework based on perspective transformation is proposed. This method automatically generates new annotated data without extra manual labeling, thus effectively extending an inadequate dataset. Perspective transformation can produce new images as if captured from arbitrary camera viewpoints; our method can therefore mimic images taken at angles the camera cannot reach. Extensive experimental results on several datasets demonstrate that our perspective transformation data augmentation strategy is an effective tool when using deep CNNs on small or imbalanced datasets.
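The core of such an augmentation scheme is a 3×3 homography: warping the image simulates a new camera viewpoint, and warping each bounding box through the same matrix yields the new annotation for free. The sketch below, a minimal illustration rather than the authors' implementation, estimates a homography from four point correspondences via the direct linear transform and re-fits an axis-aligned box around the warped corners (both function names are hypothetical):

```python
import numpy as np

def homography_from_points(src, dst):
    # Solve for the 3x3 perspective matrix H mapping src -> dst
    # from four point correspondences (direct linear transform,
    # with H[2,2] fixed to 1, giving an 8x8 linear system).
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.extend([u, v])
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_box(H, box):
    # Warp an axis-aligned box (x1, y1, x2, y2) through H, then
    # re-fit the tightest axis-aligned box around the warped corners.
    x1, y1, x2, y2 = box
    corners = np.array([[x1, y1, 1], [x2, y1, 1],
                        [x2, y2, 1], [x1, y2, 1]], dtype=float).T
    warped = H @ corners
    warped = warped[:2] / warped[2]  # perspective divide
    return (warped[0].min(), warped[1].min(),
            warped[0].max(), warped[1].max())
```

In practice one would jitter the four image corners to simulate a random viewpoint, warp the pixels with the resulting matrix (e.g. `cv2.warpPerspective`), and pass every ground-truth box through `warp_box` to obtain the augmented annotations.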

Citation (APA)

Wang, K., Fang, B., Qian, J., Yang, S., Zhou, X., & Zhou, J. (2020). Perspective Transformation Data Augmentation for Object Detection. IEEE Access, 8, 4935–4943. https://doi.org/10.1109/ACCESS.2019.2962572
