DeepScheduling: Grid Computing Job Scheduler Based on Deep Reinforcement Learning

Abstract

Grid systems are large-scale platforms that consume a considerable amount of energy. Several efficient resource/power management strategies have been proposed in the literature. However, most of them are rule-based policies that do not exploit workload patterns. Deploying the same set of rules on systems with different usage patterns and platform settings may lead to a suboptimal setup. Due to the complex nature of grid systems, tailoring such a system-specific policy is not a straightforward task. In this paper, we explore a Deep Reinforcement Learning (DRL) method to build an adaptive energy-aware scheduling policy. We trained our algorithm using real workload traces from the Grid’5000 platform. Our experiments showed energy savings of up to 7%, as well as a 27% reduction in the average request waiting time. Finally, the results highlight the importance of exploiting the workload to build system-specific policies.
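
To make the setting concrete, the sketch below shows one common way to cast energy-aware scheduling as a DRL problem: a softmax policy trained with REINFORCE on a toy cluster model, where the reward penalizes both active machines (energy) and queued jobs (waiting time). This is a minimal, assumption-laden illustration, not the paper's DeepScheduling implementation; the authors' actual state features, action space, reward weights, and training algorithm may differ.

# Minimal sketch (not the authors' code): a REINFORCE agent learning an
# energy-aware scheduling policy on a toy cluster. All state features,
# reward weights, and hyperparameters here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N_MACHINES, HORIZON = 4, 50

def reset():
    # State: remaining busy time per machine + a queue of job durations.
    return np.zeros(N_MACHINES), [int(rng.integers(1, 6)) for _ in range(10)]

def features(busy, queue):
    # Normalized machine loads, queue length, and a bias term.
    return np.concatenate([busy / HORIZON, [len(queue) / 10.0, 1.0]])

def step(busy, queue, action):
    # Actions 0..N_MACHINES-1: place the head job on that machine;
    # action N_MACHINES: defer (keep machines idle to save energy).
    if action < N_MACHINES and queue:
        busy[action] += queue.pop(0)
    busy = np.maximum(busy - 1, 0)            # one time step elapses
    energy = np.count_nonzero(busy)           # active machines draw power
    waiting = len(queue)                      # queued jobs keep waiting
    reward = -(0.5 * energy + 1.0 * waiting)  # assumed trade-off weights
    return busy, queue, reward

def policy(theta, x):
    # Softmax over action logits from a linear policy.
    logits = theta @ x
    p = np.exp(logits - logits.max())
    return p / p.sum()

n_actions, n_feats = N_MACHINES + 1, N_MACHINES + 2
theta = np.zeros((n_actions, n_feats))

for episode in range(500):                    # REINFORCE training loop
    busy, queue = reset()
    grads, rewards = [], []
    for _ in range(HORIZON):
        x = features(busy, queue)
        p = policy(theta, x)
        a = rng.choice(n_actions, p=p)
        busy, queue, r = step(busy, queue, a)
        g = -np.outer(p, x); g[a] += x        # grad of log pi(a|x)
        grads.append(g); rewards.append(r)
    G = np.flip(np.cumsum(np.flip(rewards)))  # undiscounted returns-to-go
    for g, ret in zip(grads, G):
        theta += 0.01 * ret * g               # policy-gradient ascent step

print("return of last episode:", sum(rewards))

The assumed reward weights (0.5 for energy, 1.0 for waiting) encode the energy/responsiveness trade-off that such a policy learns; in a real deployment, like the paper's Grid’5000 experiments, the reward and state would be derived from the actual platform and workload traces.
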

Citation (APA)

Casagrande, L. C., Koslovski, G. P., Miers, C. C., & Pillon, M. A. (2020). DeepScheduling: Grid Computing Job Scheduler Based on Deep Reinforcement Learning. In Advances in Intelligent Systems and Computing (Vol. 1151 AISC, pp. 1032–1044). Springer. https://doi.org/10.1007/978-3-030-44041-1_89
