{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,7,11]],"date-time":"2025-07-11T10:50:39Z","timestamp":1752231039811,"version":"3.41.0"},"publisher-location":"New York, NY, USA","reference-count":61,"publisher":"ACM","license":[{"start":{"date-parts":[[2018,12,8]],"date-time":"2018-12-08T00:00:00Z","timestamp":1544227200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2018,12,8]]},"DOI":"10.1145\/3297156.3297188","type":"proceedings-article","created":{"date-parts":[[2019,2,28]],"date-time":"2019-02-28T13:07:04Z","timestamp":1551359224000},"page":"11-21","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":5,"title":["Artificial Intelligence Techniques on Real-time Strategy Games"],"prefix":"10.1145","author":[{"given":"Yang","family":"Zhen","sequence":"first","affiliation":[{"name":"National University of Defense Technology, Changsha, China"}]},{"given":"Zhang","family":"Wanpeng","sequence":"additional","affiliation":[{"name":"National University of Defense Technology, Changsha, China"}]},{"given":"Liu","family":"Hongfu","sequence":"additional","affiliation":[{"name":"National University of Defense Technology, Changsha, China"}]}],"member":"320","published-online":{"date-parts":[[2018,12,8]]},"reference":[{"key":"e_1_3_2_1_1_1","doi-asserted-by":"crossref","unstructured":"Wender S. & Watson I. (2014). Combining Case-Based Reasoning and Reinforcement Learning for Unit Navigation in Real-Time Strategy Game AI. Case-Based Reasoning Research and Development. Springer International Publishing.","DOI":"10.1007\/978-3-319-11209-1_36"},{"key":"e_1_3_2_1_2_1","unstructured":"Foerster J. Nardelli N. Farquhar G. Afouras T. Torr P. H. S. & Kohli P. et al. (2018). Stabilising experience replay for deep multi-agent reinforcement learning."},{"key":"e_1_3_2_1_3_1","unstructured":"Barriga N. A. Stanescu M. & Buro M. (2015). Puppet Search: Enhancing Scripted Behavior by Look-Ahead Search with Applications to Real-Time Strategy Games. AIIDE."},{"key":"e_1_3_2_1_4_1","doi-asserted-by":"publisher","DOI":"10.1038\/531284a"},{"key":"e_1_3_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1609\/aimag.v35i4.2478"},{"key":"e_1_3_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.1109\/TCIAIG.2013.2286295"},{"key":"e_1_3_2_1_7_1","doi-asserted-by":"crossref","unstructured":"Lara-Cabrera R. Cotta C. & Fern\u00e1ndez-Leiva A. J. (2013). A review of computational intelligence in RTS games. Foundations of Computational Intelligence (Vol.7979 pp.114--121). IEEE.","DOI":"10.1109\/FOCI.2013.6602463"},{"key":"e_1_3_2_1_8_1","unstructured":"Vinyals O. Ewalds T. Bartunov S. Georgiev P. Vezhnevets A. S. & Yeo M. et al. (2017). StarCraft II: a new challenge for reinforcement learning."},{"key":"e_1_3_2_1_9_1","unstructured":"Kong X. Xin B. Liu F. & Wang Y. (2017). 
Revisiting the master-slave architecture in multi-agent deep reinforcement learning."},{"key":"e_1_3_2_1_10_1","unstructured":"Wender S. (2015). A Multi-Layer Case-Based & Reinforcement Learning Approach to Adaptive Tactical Real-Time Strategy Game AI (Doctoral dissertation ResearchSpace@ Auckland)."},{"key":"e_1_3_2_1_11_1","unstructured":"Uriarte A. & Onta\u00f1\u00f3n S. (2014). Automatic Learning of Combat Models for RTS Games. AIIDE."},{"key":"e_1_3_2_1_12_1","article-title":"StarCraft micromanagement with reinforcement learning and curriculum transfer learning","author":"Shao K.","year":"2018","journal-title":"IEEE Transactions on Emerging Topics in Computational Intelligence, PP(99), 1--12."},{"key":"e_1_3_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.1109\/TCIAIG.2015.2487743"},{"key":"e_1_3_2_1_14_1","unstructured":"Brood War API: code.google.com\/p\/bwapi."},{"key":"e_1_3_2_1_15_1","doi-asserted-by":"crossref","unstructured":"Churchill D. Preuss M. Richoux F. Synnaeve G. Uriarte A. & Onta\u00f1\u00f3n S. et al. (2016). StarCraft Bots and Competitions. Encyclopedia of Computer Graphics and Games.","DOI":"10.1007\/978-3-319-08234-9_18-1"},{"key":"e_1_3_2_1_16_1","unstructured":"Wilson A. R. & Company T. T. (2012). Masters of war history's greatest strategic thinkers."},{"volume-title":"AAAI Fall Symposium.","year":"2011","author":"Weber B. G.","key":"e_1_3_2_1_17_1"},{"volume-title":"Stratified Strategy Selection for Unit Control in Real-Time Strategy Games. Twenty-Sixth International Joint Conference on Artificial Intelligence (pp.3735--3741)","year":"2017","author":"Lelis L. H. S.","key":"e_1_3_2_1_18_1"},{"volume-title":"Stratified Strategy Selection for Unit Control in Real-Time Strategy Games. Twenty-Sixth International Joint Conference on Artificial Intelligence (pp.3735--3741)","year":"2017","author":"Lelis L. H. S.","key":"e_1_3_2_1_19_1"},{"volume":"3","volume-title":"Proceedings of the 17th conference on Innovative applications of artificial intelligence -","author":"Marc J. V.","key":"e_1_3_2_1_20_1"},{"key":"e_1_3_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.1007\/11536406_4"},{"key":"e_1_3_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-540-85502-6_24"},{"key":"e_1_3_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-642-25324-9_10"},{"volume-title":"Proceedings of the 24th International Conference on Artificial Intelligence (IJCAI'15)","year":"2015","author":"Santiago Onta\u00f1\u00f3n","key":"e_1_3_2_1_24_1"},{"key":"e_1_3_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.3390\/app7090872"},{"volume-title":"Proceedings of the Tenth AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE'14)","year":"2014","author":"Graham Erickson","key":"e_1_3_2_1_26_1"},{"key":"e_1_3_2_1_27_1","doi-asserted-by":"crossref","unstructured":"Justesen N. & Risi S. (2017). Learning macromanagement in StarCraft from replays using deep learning.","DOI":"10.1109\/CIG.2017.8080430"},{"key":"e_1_3_2_1_28_1","doi-asserted-by":"crossref","unstructured":"Churchill D. & Buro M. (2013). 
Portfolio greedy search and simulation for large-scale combat in StarCraft. 1--8.","DOI":"10.1109\/CIG.2013.6633643"},{"key":"e_1_3_2_1_29_1","doi-asserted-by":"crossref","unstructured":"Uriarte A. & Onta\u00f1\u00f3n S. (2014). Game-tree search over high-level game states in RTS games. AIIDE (pp.73--79).","DOI":"10.1609\/aiide.v10i1.12706"},{"key":"e_1_3_2_1_30_1","article-title":"Game tree search based on non-deterministic action scripts in real-time strategy games","author":"Barriga N. A.","year":"2017","journal-title":"IEEE Transactions on Computational Intelligence & AI in Games, PP(99), 1--1."},{"key":"e_1_3_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-03680-9_28"},{"key":"e_1_3_2_1_32_1","doi-asserted-by":"publisher","DOI":"10.1145\/3071178.3071210"},{"key":"e_1_3_2_1_33_1","doi-asserted-by":"crossref","unstructured":"Gajurel A. Louis S. J. Mendez D. J. & Liu S. (2018). Neuroevolution for RTS micro.","DOI":"10.1109\/CIG.2018.8490457"},{"key":"e_1_3_2_1_34_1","doi-asserted-by":"publisher","DOI":"10.1109\/TAAI.2011.63"},{"issue":"1","key":"e_1_3_2_1_35_1","first-page":"4","article-title":"Combining genetic algorithm and swarm intelligence for task allocation in a real time strategy game","volume":"8","author":"Tavares A. R.","year":"2017","journal-title":"SBC Journal on Interactive Systems"},{"key":"e_1_3_2_1_36_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-642-37798-3_38"},{"key":"e_1_3_2_1_37_1","unstructured":"Hostetler J. Dereszynski E. W. Dietterich T. G. & Fern A. (2012). Inferring strategies from limited reconnaissance in real-time strategy games. 367--376."},{"key":"e_1_3_2_1_38_1","doi-asserted-by":"crossref","unstructured":"Wender S. & Watson I. (2014). Combining Case-Based Reasoning and Reinforcement Learning for Unit Navigation in Real-Time Strategy Game AI. Case-Based Reasoning Research and Development. Springer International Publishing.","DOI":"10.1007\/978-3-319-11209-1_36"},{"key":"e_1_3_2_1_39_1","doi-asserted-by":"crossref","unstructured":"Uriarte A. & Onta\u00f1\u00f3n S. (2014). Game-tree search over high-level game states in RTS games. AIIDE (pp.73--79).","DOI":"10.1609\/aiide.v10i1.12706"},{"issue":"2","key":"e_1_3_2_1_40_1","first-page":"185","article-title":"The combinatorial multi-armed bandit problem and its application to real-time strategy games","volume":"18","author":"Onta\u00f1\u00f3n S.","year":"2013","journal-title":"Journal of Essential Oil Research Jeor"},{"key":"e_1_3_2_1_41_1","doi-asserted-by":"crossref","unstructured":"Uriarte A. & Onta\u00f1\u00f3n S. (2017). Single believe state generation for partially observable real-time strategy games. Computational Intelligence and Games (pp.296--303). IEEE.","DOI":"10.1109\/CIG.2017.8080449"},{"key":"e_1_3_2_1_42_1","unstructured":"Uriarte A. & Onta\u00f1\u00f3n S. (2016). Improving Monte Carlo Tree Search Policies in StarCraft via Probabilistic Models Learned from Replay Data. AIIDE."},{"key":"e_1_3_2_1_43_1","doi-asserted-by":"publisher","DOI":"10.1109\/IJCNN.2011.6033442"},{"volume-title":"Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence (AAAI'10)","author":"Kshitij Judah","key":"e_1_3_2_1_44_1"},{"key":"e_1_3_2_1_45_1","unstructured":"Usunier N. Synnaeve G. Lin Z. & Chintala S. (2016). Episodic exploration for deep deterministic policies: an application to StarCraft micromanagement tasks."},{"key":"e_1_3_2_1_46_1","doi-asserted-by":"crossref","unstructured":"Foerster J. Farquhar G. Afouras T. Nardelli N. & Whiteson S. (2017). Counterfactual multi-agent policy gradients.","DOI":"10.1609\/aaai.v32i1.11794"},{"key":"e_1_3_2_1_47_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.procs.2015.06.030"},{"key":"e_1_3_2_1_48_1","doi-asserted-by":"crossref","unstructured":"Stanescu M. Barriga N. A. Hess A. & Buro M. (2017). Evaluating real-time strategy game states using convolutional neural networks. Computational Intelligence and Games. IEEE.","DOI":"10.1109\/CIG.2016.7860439"},{"key":"e_1_3_2_1_49_1","unstructured":"Barriga N. A. Stanescu M. & Buro M. (2017). Combining strategic learning and tactical search in real-time strategy games."},{"key":"e_1_3_2_1_50_1","unstructured":"Shleyfman A. Komenda A. & Domshlak C. (2014). 
On Combinatorial Actions and CMABs with Linear Side Information. ECAI (Vol.263 pp.825--830)."},{"key":"e_1_3_2_1_51_1","unstructured":"Tian Y. Gong Q. Shang W. Wu Y. & Zitnick L. (2017). ELF: an extensive lightweight and flexible research platform for real-time strategy games."},{"key":"e_1_3_2_1_52_1","unstructured":"Brockman G. Cheung V. Pettersson L. Schneider J. Schulman J. & Tang J. et al. (2016). OpenAI Gym."},{"volume-title":"Thirteenth Artificial Intelligence and Interactive Digital Entertainment Conference.","year":"2017","author":"Churchill D.","key":"e_1_3_2_1_53_1"},{"key":"e_1_3_2_1_54_1","unstructured":"Peng P. Wen Y. Yang Y. Yuan Q. Tang Z. & Long H. et al. (2017). Multiagent bidirectionally-coordinated nets: emergence of human-level coordination in learning to play StarCraft combat games."},{"key":"e_1_3_2_1_55_1","unstructured":"Lin Z. Gehring J. Khalidov V. & Synnaeve G. (2017). STARDATA: a StarCraft AI research dataset."},{"key":"e_1_3_2_1_56_1","doi-asserted-by":"crossref","unstructured":"Andersen P. A. Goodwin M. & Granmo O. C. (2018). Deep RTS: A Game Environment for Deep Reinforcement Learning in Real-Time Strategy Games. arXiv preprint arXiv:1808.05032.","DOI":"10.1109\/CIG.2018.8490409"},{"key":"e_1_3_2_1_57_1","unstructured":"Nair L. & Chernova S. (2018). Action categorization for computationally improved task learning and planning."},{"key":"e_1_3_2_1_58_1","unstructured":"Zambaldi V. Raposo D. Santoro A. Bapst V. Li Y. & Babuschkin I. et al. (2018). Relational deep reinforcement learning."},{"key":"e_1_3_2_1_59_1","unstructured":"Alghanem B. & G K. P. (2018). Asynchronous advantage actor-critic agent for StarCraft II."},{"key":"e_1_3_2_1_60_1","unstructured":"Sukhbaatar S. Szlam A. & Fergus R. (2016). Learning multiagent communication with backpropagation."},{"key":"e_1_3_2_1_61_1","unstructured":"Rashid T. Samvelyan M. Witt C. S. D. Farquhar G. Foerster J. & Whiteson S. (2018). QMIX: monotonic value function factorisation for deep multi-agent reinforcement learning."}],"event":{"name":"CSAI '18: 2018 2nd International Conference on Computer Science and Artificial Intelligence","sponsor":["Shenzhen University"],"location":"Shenzhen China","acronym":"CSAI '18"},"container-title":["Proceedings of the 2018 2nd International Conference on Computer Science and Artificial Intelligence"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3297156.3297188","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3297156.3297188","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T01:39:09Z","timestamp":1750210749000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3297156.3297188"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2018,12,8]]},"references-count":61,"alternative-id":["10.1145\/3297156.3297188","10.1145\/3297156"],"URL":"https:\/\/doi.org\/10.1145\/3297156.3297188","relation":{},"subject":[],"published":{"date-parts":[[2018,12,8]]},"assertion":[{"value":"2018-12-08","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}