Overlapped Data Processing Scheme for Accelerating Training and Validation in Machine Learning

Abstract

For several years, machine learning (ML) technologies have opened up new opportunities to solve traditional problems by drawing on a rich set of hardware resources. Unfortunately, ML technologies sometimes waste available hardware resources (e.g., CPU and GPU) because they spend much of their time waiting for the previous step inside the ML procedure to finish. In this paper, we first study the data flows of the ML procedure in detail to find avoidable performance bottlenecks. Then, we propose ol.data, the first software-based data processing scheme that aims to (1) overlap training and validation steps within one epoch or across two adjacent epochs, and (2) perform validation steps in parallel, which significantly improves not only the computation time but also the resource utilization. To confirm the effectiveness of ol.data, we implemented a convolutional neural network (CNN) model with ol.data and compared it with the traditional approaches, NumPy (i.e., baseline) and tf.data, on three different datasets. As a result, we confirmed that ol.data reduces the inference time by up to 41.8% and increases the utilization of CPU and GPU resources by up to 75.7% and 38.7%, respectively.
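To illustrate the overlap idea described in the abstract, the following is a minimal, hypothetical sketch (not the authors' ol.data implementation): it dispatches one epoch's validation to a background thread so it can run while the next epoch's training proceeds. The names train_one_epoch and validate are placeholders, and the sleeps stand in for real training and validation work.

```python
# Hypothetical sketch of overlapping validation with the next epoch's training.
# This is an illustration of the general idea only, not the paper's ol.data API.
import concurrent.futures
import time

def train_one_epoch(epoch):
    time.sleep(0.5)            # stand-in for GPU-bound training work
    return f"weights@{epoch}"  # placeholder for the trained parameters

def validate(weights):
    time.sleep(0.3)            # stand-in for validation work on CPU/GPU
    return 0.9                 # dummy validation accuracy

with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
    pending = None
    for epoch in range(3):
        weights = train_one_epoch(epoch)
        if pending is not None:                    # collect the previous epoch's result
            print("val acc:", pending.result())
        pending = pool.submit(validate, weights)   # validation overlaps the next epoch
    print("val acc:", pending.result())            # drain the final validation
```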

Citation (APA)
Choi, J., & Kang, D. (2022). Overlapped Data Processing Scheme for Accelerating Training and Validation in Machine Learning. IEEE Access, 10, 72015–72023. https://doi.org/10.1109/ACCESS.2022.3189373
