{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,23]],"date-time":"2026-02-23T05:55:29Z","timestamp":1771826129927,"version":"3.50.1"},"reference-count":18,"publisher":"Springer Science and Business Media LLC","issue":"30","license":[{"start":{"date-parts":[[2025,1,6]],"date-time":"2025-01-06T00:00:00Z","timestamp":1736121600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,1,6]],"date-time":"2025-01-06T00:00:00Z","timestamp":1736121600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/100009917","name":"U.S. Naval Research Laboratory","doi-asserted-by":"publisher","award":["N00014-21-1-2175"],"award-info":[{"award-number":["N00014-21-1-2175"]}],"id":[{"id":"10.13039\/100009917","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Neural Comput &amp; Applic"],"published-print":{"date-parts":[[2025,10]]},"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>In recent years, multi-agent deep reinforcement learning (MADRL) has made significant strides in power system decision-making and control. However, there is a scarcity of high-fidelity, real-time platforms for testing various DRL control algorithms in detailed power systems. Motivated by EtherCAT communication and DRL features, this study presents a MADRL online testing platform for distributed real-time dynamic control of power systems. The platform utilizes the Opal-RT real-time simulator for real-time simulation of dynamic power system environments and uses multiple AI workstations for the implementation of MADRL control algorithms. 
The proposed platform facilitates real-time interaction between the AI workstations and the Opal-RT real-time simulator by leveraging the EtherCAT communication protocol to transmit system information and control signals. It enables the online and real-time training of distributed MADRL algorithms for power system dynamic control. The effectiveness and advantages of the proposed platform have been validated through detailed case studies testing distributed MADRL algorithms on classical power system control problems.<\/jats:p>","DOI":"10.1007\/s00521-024-10488-5","type":"journal-article","created":{"date-parts":[[2025,1,6]],"date-time":"2025-01-06T12:36:09Z","timestamp":1736166969000},"page":"24561-24574","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":4,"title":["Online multi-agent deep reinforcement learning platform for distributed real-time dynamic control of power systems"],"prefix":"10.1007","volume":"37","author":[{"given":"Fan","family":"Zhen","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2456-5618","authenticated-orcid":false,"given":"Tu","family":"Zhenghong","sequence":"additional","affiliation":[]},{"given":"Liu","family":"Wenxin","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,1,6]]},"reference":[{"key":"10488_CR1","doi-asserted-by":"publisher","unstructured":"Bohm P, Pounds P, Chapman AC (2022) Non-blocking asynchronous training for reinforcement learning in real-world environments. In: 2022 IEEE\/RSJ international conference on intelligent robots and systems (IROS), pp 10927\u201310934. 
https:\/\/doi.org\/10.1109\/IROS47612.2022.9981333","DOI":"10.1109\/IROS47612.2022.9981333"},{"issue":"1","key":"10488_CR2","doi-asserted-by":"publisher","first-page":"239","DOI":"10.1109\/TSG.2022.3198401","volume":"14","author":"AA Amer","year":"2023","unstructured":"Amer AA, Shaban K, Massoud AM (2023) Drl-hems: Deep reinforcement learning agent for demand response in home energy management systems considering customers and operators perspectives. IEEE Trans Smart Grid 14(1):239\u2013250. https:\/\/doi.org\/10.1109\/TSG.2022.3198401","journal-title":"IEEE Trans Smart Grid"},{"issue":"4","key":"10488_CR3","doi-asserted-by":"publisher","first-page":"3270","DOI":"10.1109\/TPWRS.2020.2987292","volume":"35","author":"Z Yan","year":"2020","unstructured":"Yan Z, Xu Y (2020) Real-time optimal power flow: a lagrangian based deep reinforcement learning approach. IEEE Trans Power Syst 35(4):3270\u20133273. https:\/\/doi.org\/10.1109\/TPWRS.2020.2987292","journal-title":"IEEE Trans Power Syst"},{"issue":"8","key":"10488_CR4","doi-asserted-by":"publisher","first-page":"6849","DOI":"10.1109\/TIE.2020.3005071","volume":"68","author":"M Gheisarnejad","year":"2021","unstructured":"Gheisarnejad M, Farsizadeh H, Khooban MH (2021) A novel nonlinear deep reinforcement learning controller for dc\u2013dc power buck converters. IEEE Trans Ind Electron 68(8):6849\u20136858. https:\/\/doi.org\/10.1109\/TIE.2020.3005071","journal-title":"IEEE Trans Ind Electron"},{"key":"10488_CR5","doi-asserted-by":"publisher","DOI":"10.1109\/TSG.2023.3237200","author":"Z Fan","year":"2023","unstructured":"Fan Z, Zhang W, Liu W (2023) Multi-agent deep reinforcement learning based distributed optimal generation control of dc microgrids. IEEE Trans Smart Grid. https:\/\/doi.org\/10.1109\/TSG.2023.3237200","journal-title":"IEEE Trans Smart Grid"},{"key":"10488_CR6","doi-asserted-by":"crossref","unstructured":"Brown T, Horsch J, Schlachtberger D (2017) Pypsa: Python for power system analysis. 
arXiv preprint arXiv:1707.09913","DOI":"10.5334\/jors.188"},{"issue":"4","key":"10488_CR7","doi-asserted-by":"publisher","first-page":"3216","DOI":"10.1109\/TPWRS.2020.3045102","volume":"36","author":"C Li","year":"2020","unstructured":"Li C, Wu Y, Zhang H, Ye H, Liu Y, Liu Y (2020) Steps: a portable dynamic simulation toolkit for electrical power system studies. IEEE Trans Power Syst 36(4):3216\u20133226","journal-title":"IEEE Trans Power Syst"},{"key":"10488_CR8","doi-asserted-by":"crossref","unstructured":"Wu D, Kalathil D, Begovic M, Xie L (2021) Pyprod: a machine learning friendly platform for protection analytics in distribution systems. arXiv preprint arXiv:2109.05802","DOI":"10.24251\/HICSS.2022.440"},{"key":"10488_CR9","doi-asserted-by":"crossref","unstructured":"Ding Z, Wu J, Wang Y, Shan S, Yuan K, Zhang K (2022) Drl-based frequency response of wind turbine generators adapting their variable regulation capabilities. IET Renewable Power Generation","DOI":"10.1049\/rpg2.12534"},{"key":"10488_CR10","doi-asserted-by":"publisher","DOI":"10.1016\/j.epsr.2022.108315","volume":"212","author":"P Barbalho","year":"2022","unstructured":"Barbalho P, Lacerda V, Fernandes R, Coury D (2022) Deep reinforcement learning-based secondary control for microgrids in islanded mode. Electric Power Syst Res 212:108315","journal-title":"Electric Power Syst Res"},{"key":"10488_CR11","doi-asserted-by":"publisher","unstructured":"Oshnoei A, Sadeghian O, Mohammadi-Ivatloo B, Blaabjerg F, Anvari-Moghaddam A (2021) Data-driven coordinated control of avr and pss in power systems: a deep reinforcement learning method. In: 2021 IEEE International conference on environment and electrical engineering and 2021 IEEE industrial and commercial power systems Europe (EEEIC\/ICPS Europe), pp 1\u20136. 
https:\/\/doi.org\/10.1109\/EEEIC\/ICPSEurope51590.2021.9584640","DOI":"10.1109\/EEEIC\/ICPSEurope51590.2021.9584640"},{"issue":"11","key":"10488_CR12","doi-asserted-by":"publisher","first-page":"845","DOI":"10.3390\/jmse8110845","volume":"8","author":"E Anderlini","year":"2020","unstructured":"Anderlini E, Husain S, Parker GG, Abusara M, Thomas G (2020) Towards real-time reinforcement learning control of a wave energy converter. J Mar Sci Eng 8(11):845","journal-title":"J Mar Sci Eng"},{"issue":"7","key":"10488_CR13","doi-asserted-by":"publisher","first-page":"7171","DOI":"10.1109\/TVT.2022.3168870","volume":"71","author":"Z Fu","year":"2022","unstructured":"Fu Z, Wang H, Tao F, Ji B, Dong Y, Song S (2022) Energy management strategy for fuel cell\/battery\/ultracapacitor hybrid electric vehicles using deep reinforcement learning with action trimming. IEEE Trans Veh Technol 71(7):7171\u20137185","journal-title":"IEEE Trans Veh Technol"},{"issue":"1","key":"10488_CR14","doi-asserted-by":"publisher","first-page":"29","DOI":"10.1109\/TSG.2022.3195681","volume":"14","author":"Z Tu","year":"2023","unstructured":"Tu Z, Zhang W, Liu W (2023) Deep reinforcement learning-based optimal control of dc shipboard power systems for pulsed power load accommodation. IEEE Trans Smart Grid 14(1):29\u201340. https:\/\/doi.org\/10.1109\/TSG.2022.3195681","journal-title":"IEEE Trans Smart Grid"},{"key":"10488_CR15","doi-asserted-by":"publisher","DOI":"10.1109\/TII.2022.3195701","author":"W Zhang","year":"2022","unstructured":"Zhang W, Tu Z, Liu W (2022) Optimal charging control of energy storage systems for pulse power load using deep reinforcement learning in shipboard integrated power systems. IEEE Trans Ind Inform. 
https:\/\/doi.org\/10.1109\/TII.2022.3195701","journal-title":"IEEE Trans Ind Inform"},{"key":"10488_CR16","doi-asserted-by":"publisher","DOI":"10.1109\/TPWRS.2022.3201370","author":"Z Tu","year":"2022","unstructured":"Tu Z, Zhang W, Liu W (2022) Deep reinforcement learning control for pulsed power load online deployment in dc shipboard integrated power system. IEEE Trans Power Syst. https:\/\/doi.org\/10.1109\/TPWRS.2022.3201370","journal-title":"IEEE Trans Power Syst"},{"key":"10488_CR17","doi-asserted-by":"publisher","unstructured":"Nguyen VQ, Jeon JW (2020) Develop an ethercat and devicenet gateway for a smart factory. In: 2020 IEEE international conference on consumer electronics - Asia (ICCE-Asia), pp 1\u20134. https:\/\/doi.org\/10.1109\/ICCE-Asia49877.2020.9277185","DOI":"10.1109\/ICCE-Asia49877.2020.9277185"},{"key":"10488_CR18","unstructured":"Fujimoto S, Hoof H, Meger D (2018) Addressing function approximation error in actor-critic methods. In: International conference on machine learning, pp 1587\u20131596. 
PMLR"}],"container-title":["Neural Computing and Applications"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s00521-024-10488-5.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s00521-024-10488-5\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s00521-024-10488-5.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,9]],"date-time":"2025-10-09T12:37:20Z","timestamp":1760013440000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s00521-024-10488-5"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,1,6]]},"references-count":18,"journal-issue":{"issue":"30","published-print":{"date-parts":[[2025,10]]}},"alternative-id":["10488"],"URL":"https:\/\/doi.org\/10.1007\/s00521-024-10488-5","relation":{},"ISSN":["0941-0643","1433-3058"],"issn-type":[{"value":"0941-0643","type":"print"},{"value":"1433-3058","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,1,6]]},"assertion":[{"value":"3 July 2023","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"19 September 2024","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"6 January 2025","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare no conflicts of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}