PowerGridworld: A Framework for Multi-Agent Reinforcement Learning in Power Systems


Abstract

We present the PowerGridworld open-source software package, which provides users with a lightweight, modular, and customizable framework for creating power-systems-focused, multi-agent Gym environments that readily integrate with existing training frameworks for reinforcement learning (RL). Although many frameworks exist for training multi-agent RL (MARL) policies, few support rapid prototyping and development of the environments themselves, especially in the context of heterogeneous (composite, multi-device) power systems, where power flow solutions are required to define grid-level variables and costs. PowerGridworld helps to fill this gap. To highlight PowerGridworld's key features, we present two case studies and demonstrate learning MARL policies using both OpenAI's multi-agent deep deterministic policy gradient (MADDPG) and RLlib's proximal policy optimization (PPO) algorithms. In both cases, at least some subset of agents incorporates elements of the power flow solution at each time step as part of their reward (negative cost) structures.
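To make the reward structure described above concrete, the sketch below shows a toy multi-agent environment in the Gym style, where each agent controls one device and a stubbed "power flow" couples the agents through a shared grid-level cost term. All class, method, and agent names here are hypothetical illustrations of the pattern, not PowerGridworld's actual API.

```python
class ToyMultiAgentGridEnv:
    """Illustrative multi-agent environment in the Gym style.

    Each agent controls one device's power setpoint; a stubbed power
    flow step couples the agents through a shared grid-level cost,
    mirroring the reward (negative cost) structure described in the
    abstract. Names are hypothetical, not PowerGridworld's API.
    """

    def __init__(self, agent_ids=("battery", "hvac", "pv")):
        self.agent_ids = list(agent_ids)
        self.power = {a: 0.0 for a in self.agent_ids}

    def reset(self):
        # Per-agent observation: the device's current power injection (kW).
        self.power = {a: 0.0 for a in self.agent_ids}
        return dict(self.power)

    def _solve_power_flow(self):
        # Stub: a real framework would call a power flow solver here to
        # obtain grid-level variables. This toy cost grows with the
        # squared net injection at the feeder head.
        net = sum(self.power.values())
        return net ** 2

    def step(self, actions):
        # actions: dict mapping agent id -> change in power setpoint (kW)
        for agent_id, delta in actions.items():
            self.power[agent_id] += delta
        grid_cost = self._solve_power_flow()
        obs = dict(self.power)
        # Reward = negative cost: a device-local term plus the shared
        # grid-level term from the power flow solution.
        rewards = {a: -(abs(self.power[a]) + grid_cost)
                   for a in self.agent_ids}
        dones = {a: False for a in self.agent_ids}
        return obs, rewards, dones, {}
```

Because every agent's reward includes the shared `grid_cost`, MARL algorithms such as MADDPG or PPO must learn device-level policies that also account for their collective effect on the grid.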

Citation (APA)

Biagioni, D., Zhang, X., Wald, D., Vaidhynathan, D., Chintala, R., King, J., & Zamzam, A. S. (2022). PowerGridworld: A Framework for Multi-Agent Reinforcement Learning in Power Systems. In e-Energy 2022 - Proceedings of the 2022 13th ACM International Conference on Future Energy Systems (pp. 565–570). Association for Computing Machinery, Inc. https://doi.org/10.1145/3538637.3539616
