Reinforcement learning (RL) is emerging as a powerful technique for solving complex code optimization tasks with an ample search space. While promising, existing solutions require a painstaking manual process to tune the right task-specific RL architecture, for which compiler developers need to determine the composition of the RL exploration algorithm, its supporting components such as the state, reward, and transition functions, and the hyperparameters of these models. This paper introduces SuperSonic, a new open-source framework that allows compiler developers to integrate RL into compilers easily, regardless of their RL expertise. SuperSonic supports customizable RL architecture compositions to target a wide range of optimization tasks. A key feature of SuperSonic is the use of deep RL and multi-task learning techniques to develop a meta-optimizer that automatically finds and tunes the right RL architecture from training benchmarks. The tuned RL architecture can then be deployed to optimize new programs. We demonstrate the efficacy and generality of SuperSonic by applying it to four code optimization problems and comparing it against eight auto-tuning frameworks. Experimental results show that SuperSonic consistently outperforms hand-tuned methods, delivering better overall performance and accelerating the deployment-stage search by 1.75x on average (up to 100x).
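The core idea in the abstract is a meta-optimizer that searches over compositions of RL components (state, reward, and transition functions, the exploration algorithm, and its hyperparameters) using training benchmarks. The sketch below is a minimal illustration of that idea only: the names (RLArchitecture, sample_architecture, meta_optimize) and the random-search stand-in are hypothetical and are not SuperSonic's actual API, which employs deep RL and multi-task learning for this step.

```python
# Illustrative sketch of RL-architecture search (hypothetical names, not
# SuperSonic's real API). An "architecture" is a composition of a state
# function, a reward function, an exploration algorithm, and hyperparameters.
import random
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass(frozen=True)
class RLArchitecture:
    state_fn: str            # e.g. a program-state representation
    reward_fn: str           # e.g. relative speedup or code-size delta
    algorithm: str           # e.g. "PPO", "DQN", "A2C"
    hyperparams: Dict[str, float]


# Assumed component choices; a real search space would be task-specific.
SEARCH_SPACE = {
    "state_fn": ["word2vec", "ir_features", "action_history"],
    "reward_fn": ["relative_speedup", "code_size_delta"],
    "algorithm": ["PPO", "DQN", "A2C"],
    "learning_rate": [1e-4, 5e-4, 1e-3],
}


def sample_architecture() -> RLArchitecture:
    """Draw one candidate composition from the search space."""
    return RLArchitecture(
        state_fn=random.choice(SEARCH_SPACE["state_fn"]),
        reward_fn=random.choice(SEARCH_SPACE["reward_fn"]),
        algorithm=random.choice(SEARCH_SPACE["algorithm"]),
        hyperparams={"learning_rate": random.choice(SEARCH_SPACE["learning_rate"])},
    )


def meta_optimize(evaluate: Callable[[RLArchitecture], float],
                  budget: int = 20) -> RLArchitecture:
    """Toy meta-optimizer: evaluate sampled compositions on training
    benchmarks and keep the best one. SuperSonic replaces this random
    search with deep RL and multi-task learning; only the interface
    (candidate in, benchmark score out) is illustrated here."""
    candidates: List[RLArchitecture] = [sample_architecture() for _ in range(budget)]
    return max(candidates, key=evaluate)


if __name__ == "__main__":
    # Stand-in objective: in practice this would train the candidate RL
    # architecture on the training benchmarks and report achieved speedup.
    def fake_benchmark_score(arch: RLArchitecture) -> float:
        return random.random()

    best = meta_optimize(fake_benchmark_score)
    print("Selected RL architecture:", best)
```

The selected architecture would then be deployed as-is to optimize unseen programs, which is where the paper reports the 1.75x average search speedup.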
Wang, H., Tang, Z., Zhang, C., Zhao, J., Cummins, C., Leather, H., & Wang, Z. (2022). Automating Reinforcement Learning Architecture Design for Code Optimization. In CC 2022 - Proceedings of the 31st ACM SIGPLAN International Conference on Compiler Construction (pp. 129–143). Association for Computing Machinery, Inc. https://doi.org/10.1145/3497776.3517769