Deep convolutional neural networks are highly effective for computer vision tasks when plenty of training data is available. However, small training datasets remain a problem. Addressing it requires a training pipeline that handles rare object types and an overall lack of training data in order to build well-performing models with stable predictions. This article reports on the comprehensive framework XtremeAugment, which provides an easy, reliable, and scalable way to collect image datasets and to efficiently label and augment the collected data. The presented framework consists of two augmentation techniques that can be used independently and that complement each other when applied together: Hardware Dataset Augmentation (HDA) and Object-Based Augmentation (OBA). HDA allows users to collect more data and spend less time on manual data labeling. OBA significantly increases the training data variability while keeping the distribution of the augmented images close to the original dataset. We assess the proposed approach on an apple spoilage segmentation scenario. Our results demonstrate a substantial increase in model accuracy, reaching a 0.91 F1-score and outperforming the baseline model by up to 0.62 F1-score in a few-shot learning case on in-the-wild data. The highest benefit of applying XtremeAugment is achieved when images are collected in a controlled indoor environment but the model has to be used in the wild.
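To make the object-based augmentation idea concrete, the sketch below shows a minimal copy-paste style augmentation: a labeled object crop is pasted onto a new background, and the corresponding segmentation mask is produced automatically. This is only an illustrative sketch, not the authors' XtremeAugment implementation; the function name `paste_object`, its parameters, and the NumPy-based layout are assumptions introduced here for demonstration.

```python
# Illustrative sketch of object-based (copy-paste) augmentation.
# Not the XtremeAugment OBA implementation; names and parameters are assumed.
import numpy as np


def paste_object(background: np.ndarray,
                 obj_image: np.ndarray,
                 obj_mask: np.ndarray,
                 top: int,
                 left: int) -> tuple[np.ndarray, np.ndarray]:
    """Paste a masked object crop onto a background image.

    background: H x W x 3 uint8 image used as the new scene.
    obj_image:  h x w x 3 uint8 crop containing the object.
    obj_mask:   h x w boolean mask of the object pixels inside the crop.
    Returns the augmented image and its pixel-level segmentation mask.
    """
    augmented = background.copy()
    seg_mask = np.zeros(background.shape[:2], dtype=np.uint8)

    h, w = obj_mask.shape
    region = augmented[top:top + h, left:left + w]
    # Overwrite only the object pixels, keeping the new background elsewhere.
    region[obj_mask] = obj_image[obj_mask]
    seg_mask[top:top + h, left:left + w][obj_mask] = 1
    return augmented, seg_mask


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    background = rng.integers(0, 255, size=(256, 256, 3), dtype=np.uint8)
    obj_image = rng.integers(0, 255, size=(64, 64, 3), dtype=np.uint8)
    # A circular mask standing in for a labeled object instance
    # (e.g. a spoiled region on an apple).
    yy, xx = np.mgrid[:64, :64]
    obj_mask = (yy - 32) ** 2 + (xx - 32) ** 2 < 28 ** 2

    image, mask = paste_object(background, obj_image, obj_mask, top=80, left=100)
    print(image.shape, mask.sum())  # new training pair: image + mask label
```

In this toy setup, each labeled object can be reused against many backgrounds and positions, which is how copy-paste style augmentation increases training variability without additional manual labeling.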