Network planning with deep reinforcement learning

70 citations · 106 Mendeley readers

Abstract

Network planning is critical to the performance, reliability, and cost of web services. The problem is typically formulated as an Integer Linear Programming (ILP) problem, and today's practice relies on hand-tuned heuristics from human experts to address the scalability limits of ILP solvers. In this paper, we propose NeuroPlan, a deep reinforcement learning (RL) approach to the network planning problem. Network planning involves multi-step decision making and cost minimization, which can be naturally cast as a deep RL problem. We develop two important domain-specific techniques. First, we use a graph neural network (GNN) with a novel domain-specific node-link transformation for state encoding, in order to handle the dynamic nature of the evolving network topology during planning. Second, we leverage a two-stage hybrid approach that first uses deep RL to prune the search space and then uses an ILP solver to find the optimal solution within it. This approach resembles today's practice, but replaces the human experts with an RL agent in the first stage. Evaluation on real topologies and setups from large production networks demonstrates that NeuroPlan scales to large topologies beyond the capability of ILP solvers, and reduces the cost by up to 17% compared to hand-tuned heuristics.
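To make the two-stage hybrid concrete, the sketch below shows one way such a pipeline could be wired up in Python, using PuLP for the exact ILP stage. This is a minimal illustration under our own assumptions, not the paper's implementation: the function names (prune_candidates, plan_with_ilp), the candidate-link data layout, and the cost-based pruning stand-in for the RL agent are all hypothetical.

```python
# Hypothetical sketch of a NeuroPlan-style two-stage planner:
# stage 1 prunes the candidate space (a stand-in for the RL agent over a GNN
# state encoding), stage 2 solves a small ILP over the survivors with PuLP.
import pulp


def prune_candidates(candidate_links, keep_fraction=0.5):
    """Stage 1 stand-in: keep only the cheapest candidate links.

    In the paper this pruning is performed by a learned RL agent operating on
    a GNN encoding of the evolving topology; sorting by cost is purely
    illustrative.
    """
    kept = sorted(candidate_links, key=lambda l: l["cost_per_unit"])
    return kept[: max(1, int(len(kept) * keep_fraction))]


def plan_with_ilp(links, required_capacity):
    """Stage 2: choose integer capacity units per link at minimum total cost."""
    prob = pulp.LpProblem("network_plan", pulp.LpMinimize)
    units = {
        l["name"]: pulp.LpVariable(f"units_{l['name']}", lowBound=0, cat="Integer")
        for l in links
    }
    # Objective: total provisioning cost over the pruned candidate set.
    prob += pulp.lpSum(l["cost_per_unit"] * units[l["name"]] for l in links)
    # Toy constraint: provisioned capacity must cover the aggregate demand.
    # (A real formulation would include per-demand flow-conservation and
    # failure-scenario constraints.)
    prob += pulp.lpSum(
        l["unit_capacity"] * units[l["name"]] for l in links
    ) >= required_capacity
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return {name: int(var.value()) for name, var in units.items()}


if __name__ == "__main__":
    candidates = [
        {"name": "A-B", "cost_per_unit": 3.0, "unit_capacity": 100},
        {"name": "B-C", "cost_per_unit": 5.0, "unit_capacity": 100},
        {"name": "A-C", "cost_per_unit": 9.0, "unit_capacity": 100},
    ]
    pruned = prune_candidates(candidates)                  # stage 1: shrink the search space
    plan = plan_with_ilp(pruned, required_capacity=250)    # stage 2: exact ILP on the rest
    print(plan)
```

The design point this sketch captures is the division of labor described in the abstract: the learned first stage only has to narrow the search space, so the exact solver in the second stage runs on a problem small enough to remain tractable.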

Citation (APA)

Zhu, H., Gupta, V., Ahuja, S. S., Tian, Y., Zhang, Y., & Jin, X. (2021). Network planning with deep reinforcement learning. In SIGCOMM 2021 - Proceedings of the ACM SIGCOMM 2021 Conference (pp. 258–271). Association for Computing Machinery. https://doi.org/10.1145/3452296.3472902
