Predicting GPU Failures With High Precision Under Deep Learning Workloads

Abstract

Graphics processing units (GPUs) are the de facto standard for processing deep learning (DL) tasks. In large-scale GPU clusters, GPU failures are inevitable and may cause severe consequences. For example, GPU failures disrupt distributed training, crash inference services, and result in service level agreement violations. In this paper, we study the problem of predicting GPU failures using machine learning (ML) models to mitigate their damage. We train prediction models on a four-month production dataset with 350 million entries at ByteDance. We observe that classic prediction models (GBDT, MLP, LSTM, and 1D-CNN) do not perform well: they are inaccurate for predictions and unstable over time. We propose several techniques to improve the precision and stability of predictions, including parallel and cascade model-ensemble mechanisms and a sliding training method. We evaluate the performance of our proposed techniques. The results show that our proposed techniques improve the prediction precision from 46.3% to 85.4% on production workloads.
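The abstract names two ensemble mechanisms and a sliding training method but gives no implementation details. The sketch below is a minimal illustration of what such mechanisms could look like, reconstructed from the abstract alone; all function names, thresholds, and window sizes are assumptions, not the authors' actual design.

```python
# Hypothetical sketch of the three techniques named in the abstract.
# Everything here (names, threshold=0.9, window sizes) is assumed for
# illustration; the paper's actual implementation may differ.

def parallel_ensemble(predictions):
    # Parallel ensemble (assumed form): run all base models independently
    # and flag a GPU as failing only when every model agrees, trading
    # recall for higher precision.
    return all(predictions)

def cascade_ensemble(models, features, threshold=0.9):
    # Cascade ensemble (assumed form): models are chained so later, more
    # expensive models only see samples the earlier ones scored above a
    # confidence threshold; any low score short-circuits to "no failure".
    for model in models:
        if model(features) < threshold:
            return False
    return True

def sliding_training_windows(samples, window=4, step=1):
    # Sliding training (assumed form): yield a moving window of recent
    # data so the model is periodically retrained and tracks drift in
    # the failure distribution over time.
    for start in range(0, len(samples) - window + 1, step):
        yield samples[start:start + window]
```

For instance, with a four-month dataset split into monthly chunks, `sliding_training_windows(months, window=3)` would retrain on months 1-3, then 2-4, keeping the model aligned with recent workload behavior.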

Citation (APA)

Liu, H., Li, Z., Tan, C., Yang, R., Cao, G., Liu, Z., & Guo, C. (2023). Predicting GPU Failures With High Precision Under Deep Learning Workloads. In Proceedings of the 16th ACM International Conference on Systems and Storage, SYSTOR 2023 (pp. 124–135). Association for Computing Machinery, Inc. https://doi.org/10.1145/3579370.3594777
