{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,18]],"date-time":"2025-12-18T14:13:35Z","timestamp":1766067215402,"version":"3.41.0"},"publisher-location":"New York, NY, USA","reference-count":19,"publisher":"ACM","license":[{"start":{"date-parts":[[2020,10,19]],"date-time":"2020-10-19T00:00:00Z","timestamp":1603065600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"Science Foundation Ireland","award":["12\/RC\/2289-P2"],"award-info":[{"award-number":["12\/RC\/2289-P2"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2020,10,20]]},"DOI":"10.1145\/3383652.3423868","type":"proceedings-article","created":{"date-parts":[[2020,10,19]],"date-time":"2020-10-19T22:10:31Z","timestamp":1603145431000},"page":"1-8","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":11,"title":["Go with the Flow"],"prefix":"10.1145","author":[{"given":"Elinga","family":"Pagalyte","sequence":"first","affiliation":[{"name":"School of Computer Science and Information Technology, University College Cork, Ireland"}]},{"given":"Maurizio","family":"Mancini","sequence":"additional","affiliation":[{"name":"Insight Centre for Data Analytics, School of Computer Science & IT, University College Cork, Ireland"}]},{"given":"Laura","family":"Climent","sequence":"additional","affiliation":[{"name":"Insight Centre for Data Analytics, School of Computer Science & IT, University College Cork, Ireland"}]}],"member":"320","published-online":{"date-parts":[[2020,10,19]]},"reference":[{"key":"e_1_3_2_1_1_1","volume-title":"Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems","volume":"1","author":"Amato Christopher","year":"2010","unstructured":"Christopher Amato and 
Guy Shani. 2010. High-level reinforcement learning in strategy games. In Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems: volume 1-Volume 1. International Foundation for Autonomous Agents and Multiagent Systems, 75--82."},{"key":"e_1_3_2_1_2_1","doi-asserted-by":"publisher","DOI":"10.1037\/10518-188"},{"key":"e_1_3_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-34644-7_13"},{"key":"e_1_3_2_1_4_1","doi-asserted-by":"publisher","DOI":"10.1145\/1178477.1178573"},{"key":"e_1_3_2_1_5_1","volume-title":"AAAI Workshop on Challenges in Game Artificial Intelligence. 91--96","author":"Hunicke Robin","year":"2004","unstructured":"Robin Hunicke and Vernell Chapman. 2004. AI for Dynamic Difficulty Adjustment in Games. In AAAI Workshop on Challenges in Game Artificial Intelligence, 91--96."},{"key":"e_1_3_2_1_6_1","unstructured":"Jarno Hyrk\u00e4s et al. 2015. Reinforcement learning in a turn-based strategy game. (2015)."},{"key":"e_1_3_2_1_7_1","volume-title":"Yvonne AW de Kort, and Karolien Poels","author":"IJsselsteijn Wijnand A","year":"2013","unstructured":"Wijnand A IJsselsteijn, Yvonne AW de Kort, and Karolien Poels. 2013. The game experience questionnaire. 
Eindhoven: Technische Universiteit Eindhoven (2013), 3--9."},{"key":"e_1_3_2_1_8_1","volume-title":"Optimal battle strategy in pokemon using reinforcement learning. Web: https:\/\/web.stanford.edu\/class\/aa228\/reports\/2018\/final151.pdf","author":"Kalose Akshay","year":"2018","unstructured":"Akshay Kalose, Kris Kaya, and Alvin Kim. 2018. Optimal battle strategy in pokemon using reinforcement learning. Web: https:\/\/web.stanford.edu\/class\/aa228\/reports\/2018\/final151.pdf (2018)."},{"key":"e_1_3_2_1_9_1","volume-title":"The Twenty-Ninth International Flairs Conference.","author":"Lora Diana","year":"2016","unstructured":"Diana Lora, Antonio A S\u00e1nchez-Ruiz, Pedro A Gonz\u00e1lez-Calero, and Marco A G\u00f3mez-Mart\u00edn. 2016. Dynamic difficulty adjustment in tetris. In The Twenty-Ninth International Flairs Conference."},{"key":"e_1_3_2_1_10_1","doi-asserted-by":"publisher","DOI":"10.5220\/0007395406930700"},{"key":"e_1_3_2_1_11_1","unstructured":"Rodrigo Rill-Garc\u00eda. [n.d.]. Reinforcement Learning for a Turn-Based Small Scale Attrition Game. ([n.d.])."},{"key":"e_1_3_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICoICT.2014.6914068"},{"key":"e_1_3_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.1145\/122344.122377"},{"volume-title":"Reinforcement learning: An introduction","author":"Sutton Richard S","key":"e_1_3_2_1_14_1","unstructured":"Richard S Sutton and Andrew G Barto. 2018. 
Reinforcement learning: An introduction. MIT Press."},{"key":"e_1_3_2_1_15_1","volume-title":"International Workshop on Multi-Agent Systems and Agent-Based Simulation. Springer, 159--172","author":"Takadama Keiki","year":"2004","unstructured":"Keiki Takadama and Hironori Fujita. 2004. Toward guidelines for modeling learning agents in multiagent-based simulation: Implications from q-learning and sarsa agents. In International Workshop on Multi-Agent Systems and Agent-Based Simulation. Springer, 159--172."},{"key":"e_1_3_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.1109\/CIG.2008.5035664"},{"key":"e_1_3_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1109\/CIG.2012.6374183"},{"key":"e_1_3_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.1145\/3041021.3054170"},{"key":"e_1_3_2_1_19_1","doi-asserted-by":"publisher","DOI":"10.1155\/2018\/5681652"}],"event":{"name":"IVA '20: ACM International Conference on Intelligent Virtual Agents","sponsor":["SIGAI ACM Special Interest Group on Artificial Intelligence"],"location":"Virtual Event Scotland UK","acronym":"IVA '20"},"container-title":["Proceedings of the 20th ACM International Conference on Intelligent Virtual 
Agents"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3383652.3423868","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3383652.3423868","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T21:31:57Z","timestamp":1750195917000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3383652.3423868"}},"subtitle":["Reinforcement Learning in Turn-based Battle Video Games"],"short-title":[],"issued":{"date-parts":[[2020,10,19]]},"references-count":19,"alternative-id":["10.1145\/3383652.3423868","10.1145\/3383652"],"URL":"https:\/\/doi.org\/10.1145\/3383652.3423868","relation":{},"subject":[],"published":{"date-parts":[[2020,10,19]]},"assertion":[{"value":"2020-10-19","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}