The long-standing push toward automation has also reached computer networks, giving them the ability to measure, analyze, and control themselves automatically, reacting to changes in the environment (e.g., demand) while exploiting existing flexibilities. Networks equipped with these capabilities are often referred to as "self-driving", with network virtualization and machine learning as the key enablers. In this regard, the provisioning and orchestration of physical and virtual resources are crucial for both Quality of Service guarantees and cost management in the edge/cloud computing ecosystem, and auto-scaling mechanisms are hence essential to effectively manage the lifecycle of network resources. In this poster, we propose Relevant, a distributed reinforcement learning approach that enables distributed automation for network orchestrators. Our solution addresses the congestion control problem within Software-Defined Networking infrastructures while remaining mindful of energy consumption, allowing resources to scale up and down as traffic demands fluctuate and energy optimization opportunities arise.
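The poster abstract does not include implementation details, so the sketch below is only a hedged illustration of the kind of per-node agent such an approach implies, not the authors' Relevant design: a minimal tabular Q-learning agent that chooses scale-up/hold/scale-down actions, with a reward that trades off congestion (high utilization) against energy (number of active instances). The state discretization, reward weights, class and function names, and the synthetic load trace are all assumptions made for illustration.

```python
# Illustrative sketch only (assumed design, not the Relevant system):
# a per-node tabular Q-learning agent for congestion- and energy-aware
# auto-scaling. In a distributed deployment, each edge node would run
# its own agent over locally observed utilization.
import random
from collections import defaultdict

ACTIONS = ("scale_down", "hold", "scale_up")

class ScalingAgent:
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)   # (state, action) -> estimated value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def _state(self, utilization, instances):
        # Coarse discretization: utilization bucket plus active instance count.
        return (min(int(utilization * 10), 9), instances)

    def act(self, utilization, instances):
        s = self._state(utilization, instances)
        if random.random() < self.epsilon:                 # explore
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(s, a)])  # exploit

    def learn(self, prev_obs, action, r, next_obs):
        s, s2 = self._state(*prev_obs), self._state(*next_obs)
        best_next = max(self.q[(s2, a)] for a in ACTIONS)
        td_target = r + self.gamma * best_next
        self.q[(s, action)] += self.alpha * (td_target - self.q[(s, action)])

def reward(utilization, instances, w_congestion=1.0, w_energy=0.3):
    # Penalize congestion (utilization above a threshold) and energy use
    # (proxied here by the number of active instances). Weights are assumed.
    congestion_penalty = max(0.0, utilization - 0.8)
    return -(w_congestion * congestion_penalty + w_energy * instances)

# Toy usage: one node's agent reacting to a synthetic load trace.
agent = ScalingAgent()
instances = 2
for load in [0.3, 0.5, 0.9, 1.2, 0.7, 0.4]:
    util = min(load / instances, 1.0)
    action = agent.act(util, instances)
    prev = (util, instances)
    if action == "scale_up":
        instances += 1
    elif action == "scale_down" and instances > 1:
        instances -= 1
    new_util = min(load / instances, 1.0)
    agent.learn(prev, action, reward(new_util, instances), (new_util, instances))
```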
CITATION STYLE
Sacco, A., Esposito, F., & Marchetto, G. (2020). A distributed reinforcement learning approach for energy and congestion-aware edge networks. In Proceedings of the 16th International Conference on Emerging Networking EXperiments and Technologies (CoNEXT 2020) (pp. 546–547). Association for Computing Machinery. https://doi.org/10.1145/3386367.3431670