High-performance computing (HPC) environments are used by many technology and research organizations to facilitate large-scale computations. HPC systems typically use job scheduling software that prioritizes jobs for submission, manages computational resources, and initiates submitted jobs to maximize efficiency. To assist the scheduler, data on execution times were collected and are used to classify new jobs into one of four classes (very fast, fast, moderate, or long). In this chapter we illustrate the model tuning and evaluation process in this context. We present the data splitting and modeling strategy (Section 17.1), model results (Section 17.2), and the corresponding computing code (Section 17.3).
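To make the four-class target concrete: execution times can be binned into the speed classes named above. The chapter's own analysis is done in R; the Python sketch below is purely illustrative, and the minute thresholds are hypothetical assumptions, not the chapter's actual cut points.

```python
def job_class(minutes: float) -> str:
    """Map a job's execution time to one of four speed classes.

    The thresholds are illustrative assumptions, not the cut points
    used in the chapter.
    """
    if minutes < 1:
        return "very fast"
    elif minutes < 5:
        return "fast"
    elif minutes < 30:
        return "moderate"
    return "long"

# Classify a few hypothetical job durations (in minutes).
for t in (0.5, 3.0, 12.0, 120.0):
    print(f"{t:6.1f} min -> {job_class(t)}")
```

In the case study itself, the class label is the response in a multiclass classification problem, with job characteristics (not the observed time) serving as predictors.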
Kuhn, M., & Johnson, K. (2013). Case Study: Job Scheduling. In Applied Predictive Modeling (pp. 445–460). Springer New York. https://doi.org/10.1007/978-1-4614-6849-3_17