{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,11]],"date-time":"2026-03-11T04:17:27Z","timestamp":1773202647250,"version":"3.50.1"},"publisher-location":"New York, NY, USA","reference-count":25,"publisher":"ACM","license":[{"start":{"date-parts":[[2022,6,28]],"date-time":"2022-06-28T00:00:00Z","timestamp":1656374400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/100006233","name":"National Renewable Energy Laboratory","doi-asserted-by":"publisher","award":["LDRD-AUMC"],"award-info":[{"award-number":["LDRD-AUMC"]}],"id":[{"id":"10.13039\/100006233","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2022,6,28]]},"DOI":"10.1145\/3538637.3539616","type":"proceedings-article","created":{"date-parts":[[2022,6,22]],"date-time":"2022-06-22T16:33:05Z","timestamp":1655915585000},"page":"565-570","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":24,"title":["PowerGridworld"],"prefix":"10.1145","author":[{"given":"David","family":"Biagioni","sequence":"first","affiliation":[{"name":"National Renewable Energy Laboratory"}]},{"given":"Xiangyu","family":"Zhang","sequence":"additional","affiliation":[{"name":"National Renewable Energy Laboratory"}]},{"given":"Dylan","family":"Wald","sequence":"additional","affiliation":[{"name":"Colorado School of Mines"}]},{"given":"Deepthi","family":"Vaidhynathan","sequence":"additional","affiliation":[{"name":"National Renewable Energy Laboratory"}]},{"given":"Rohit","family":"Chintala","sequence":"additional","affiliation":[{"name":"National Renewable Energy Laboratory"}]},{"given":"Jennifer","family":"King","sequence":"additional","affiliation":[{"name":"National Renewable Energy 
Laboratory"}]},{"given":"Ahmed S.","family":"Zamzam","sequence":"additional","affiliation":[{"name":"National Renewable Energy Laboratory"}]}],"member":"320","published-online":{"date-parts":[[2022,6,28]]},"reference":[{"key":"e_1_3_2_1_1_1","unstructured":"Anyscale. 2021. RLLib: Scalable Reinforcement Learning. https:\/\/docs.ray.io\/en\/latest\/rllib.htm Accessed: 2021-11-03."},{"key":"e_1_3_2_1_2_1","volume-title":"arXiv preprint arXiv:1606.01540","author":"Brockman Greg","year":"2016","unstructured":"Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. 2016. OpenAI Gym. arXiv preprint arXiv:1606.01540 (2016)."},{"key":"e_1_3_2_1_3_1","volume-title":"PowerNet: Multi-agent Deep Reinforcement Learning for Scalable Powergrid Control","author":"Chen Dong","year":"2021","unstructured":"Dong Chen, Kaian Chen, Zhaojian Li, Tianshu Chu, Rui Yao, Feng Qiu, and Kaixiang Lin. 2021. PowerNet: Multi-agent Deep Reinforcement Learning for Scalable Powergrid Control. IEEE Transactions on Power Systems (2021)."},{"key":"e_1_3_2_1_4_1","doi-asserted-by":"publisher","DOI":"10.1109\/TSG.2016.2629450"},{"key":"e_1_3_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1016\/S0378-7788(00)00114-6"},{"key":"e_1_3_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.1109\/TPWRS.2019.2941134"},{"key":"e_1_3_2_1_7_1","unstructured":"Electric Power Research Institute (EPRI). 2021. OpenDSS: EPRI Distribution System Simulator. https:\/\/sourceforge.net\/projects\/electricdss Accessed: 2021-11-04."},{"key":"e_1_3_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.1109\/TSG.2021.3058996"},{"key":"e_1_3_2_1_9_1","volume-title":"International Conference on Machine Learning. PMLR, 3053--3062","author":"Liang Eric","year":"2018","unstructured":"Eric Liang, Richard Liaw, Robert Nishihara, Philipp Moritz, Roy Fox, Ken Goldberg, Joseph Gonzalez, Michael Jordan, and Ion Stoica. 2018. RLlib: Abstractions for distributed reinforcement learning. In International Conference on Machine Learning. PMLR, 3053--3062."},{"key":"e_1_3_2_1_10_1","volume-title":"Multi-agent actor-critic for mixed cooperative-competitive environments. arXiv preprint arXiv:1706.02275","author":"Lowe Ryan","year":"2017","unstructured":"Ryan Lowe, Yi Wu, Aviv Tamar, Jean Harb, Pieter Abbeel, and Igor Mordatch. 2017. Multi-agent actor-critic for mixed cooperative-competitive environments. arXiv preprint arXiv:1706.02275 (2017)."},{"key":"e_1_3_2_1_11_1","doi-asserted-by":"crossref","unstructured":"Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. 2015. Human-level control through deep reinforcement learning. Nature 518, 7540 (2015), 529--533.","DOI":"10.1038\/nature14236"},{"key":"e_1_3_2_1_12_1","volume-title":"13th USENIX Symposium on Operating Systems Design and Implementation (OSDI 18)","author":"Moritz Philipp","year":"2018","unstructured":"Philipp Moritz, Robert Nishihara, Stephanie Wang, Alexey Tumanov, Richard Liaw, Eric Liang, Melih Elibol, Zongheng Yang, William Paul, Michael I Jordan, et al. 2018. Ray: A distributed framework for emerging AI applications. In 13th USENIX Symposium on Operating Systems Design and Implementation (OSDI 18). 561--577."},{"key":"e_1_3_2_1_13_1","unstructured":"OpenAI. 2021. OpenAI. https:\/\/openai.com\/about\/ Accessed: 2021-11-04."},{"key":"e_1_3_2_1_14_1","unstructured":"OpenAI. 2021. OpenAI's MADDPG Implementation. https:\/\/github.com\/openai\/maddpg Accessed: 2021-11-03."},{"key":"e_1_3_2_1_15_1","volume-title":"GridLearn: Multiagent Reinforcement Learning for Grid-Aware Building Energy Management. arXiv preprint arXiv:2110.06396","author":"Pigott Aisling","year":"2021","unstructured":"Aisling Pigott, Constance Crozier, Kyri Baker, and Zoltan Nagy. 2021. GridLearn: Multiagent Reinforcement Learning for Grid-Aware Building Energy Management. arXiv preprint arXiv:2110.06396 (2021)."},{"key":"e_1_3_2_1_16_1","unstructured":"Antonin Raffin, Ashley Hill, Maximilian Ernestus, Adam Gleave, Anssi Kanervisto, and Noah Dormann. 2019. Stable baselines3. https:\/\/github.com\/hill-a\/stable-baselines. Accessed: 2021-11-02."},{"key":"e_1_3_2_1_17_1","volume-title":"Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347","author":"Schulman John","year":"2017","unstructured":"John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347 (2017)."},{"key":"e_1_3_2_1_18_1","volume-title":"multi-agent, reinforcement learning for autonomous driving. arXiv preprint arXiv:1610.03295","author":"Shalev-Shwartz Shai","year":"2016","unstructured":"Shai Shalev-Shwartz, Shaked Shammah, and Amnon Shashua. 2016. Safe, multi-agent, reinforcement learning for autonomous driving. arXiv preprint arXiv:1610.03295 (2016)."},{"key":"e_1_3_2_1_19_1","volume-title":"Pettingzoo: Gym for multi-agent reinforcement learning. arXiv preprint arXiv:2009.14471","author":"Terry Justin K","year":"2020","unstructured":"Justin K Terry, Benjamin Black, Nathaniel Grammel, Mario Jayakumar, Ananth Hari, Ryan Sullivan, Luis Santos, Rodrigo Perez, Caroline Horsch, Clemens Dieffendahl, et al. 2020. Pettingzoo: Gym for multi-agent reinforcement learning. arXiv preprint arXiv:2009.14471 (2020)."},{"key":"e_1_3_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.1109\/TPWRS.2018.2829021"},{"key":"e_1_3_2_1_21_1","volume-title":"CityLearn: Standardizing Research in Multi-Agent Reinforcement Learning for Demand Response and Urban Energy Management. arXiv preprint arXiv:2012.10504","author":"Vazquez-Canteli Jose R","year":"2020","unstructured":"Jose R Vazquez-Canteli, Sourav Dey, Gregor Henze, and Zoltan Nagy. 2020. CityLearn: Standardizing Research in Multi-Agent Reinforcement Learning for Demand Response and Urban Energy Management. arXiv preprint arXiv:2012.10504 (2020)."},{"key":"e_1_3_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.1145\/3360322.3360998"},{"key":"e_1_3_2_1_23_1","doi-asserted-by":"crossref","unstructured":"Oriol Vinyals, Igor Babuschkin, Wojciech M Czarnecki, Micha\u00ebl Mathieu, Andrew Dudzik, Junyoung Chung, David H Choi, Richard Powell, Timo Ewalds, Petko Georgiev, et al. 2019. Grandmaster level in StarCraft II using multi-agent reinforcement learning. Nature 575, 7782 (2019), 350--354.","DOI":"10.1038\/s41586-019-1724-z"},{"key":"e_1_3_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1109\/TPWRS.2019.2948132"},{"key":"e_1_3_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.1109\/TSG.2019.2951769"}],"event":{"name":"e-Energy '22: The Thirteenth ACM International Conference on Future Energy Systems","location":"Virtual Event","acronym":"e-Energy '22","sponsor":["SIGEnergy ACM Special Interest Group on Energy Systems and Informatics"]},"container-title":["Proceedings of the Thirteenth ACM International Conference on Future Energy Systems"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3538637.3539616","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3538637.3539616","content-type":"application\/pdf","content-version":"vor","intended-application":"syndication"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3538637.3539616","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T19:03:02Z","timestamp":1750186982000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3538637.3539616"}},"subtitle":["a framework for multi-agent reinforcement learning in power 
systems"],"short-title":[],"issued":{"date-parts":[[2022,6,28]]},"references-count":25,"alternative-id":["10.1145\/3538637.3539616","10.1145\/3538637"],"URL":"https:\/\/doi.org\/10.1145\/3538637.3539616","relation":{},"subject":[],"published":{"date-parts":[[2022,6,28]]},"assertion":[{"value":"2022-06-28","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}