{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,1]],"date-time":"2026-04-01T11:08:12Z","timestamp":1775041692059,"version":"3.50.1"},"reference-count":64,"publisher":"Association for Computing Machinery (ACM)","issue":"CHI PLAY","license":[{"start":{"date-parts":[[2021,10,5]],"date-time":"2021-10-05T00:00:00Z","timestamp":1633392000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100002341","name":"Academy of Finland","doi-asserted-by":"publisher","award":["(Finnish Center for Artificial Intelligence)"],"award-info":[{"award-number":["(Finnish Center for Artificial Intelligence)"]}],"id":[{"id":"10.13039\/501100002341","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["Proc. ACM Hum.-Comput. Interact."],"published-print":{"date-parts":[[2021,10,5]]},"abstract":"<jats:p>This paper presents a novel approach to automated playtesting for the prediction of human player behavior and experience. We have previously demonstrated that Deep Reinforcement Learning (DRL) game-playing agents can predict both game difficulty and player engagement, operationalized as average pass and churn rates. We improve this approach by enhancing DRL with Monte Carlo Tree Search (MCTS). We also motivate an enhanced selection strategy for predictor features, based on the observation that an AI agent's best-case performance can yield stronger correlations with human data than the agent's average performance. Both additions consistently improve the prediction accuracy, and the DRL-enhanced MCTS outperforms both DRL and vanilla MCTS in the hardest levels. We conclude that player modelling via automated playtesting can benefit from combining DRL and MCTS. 
Moreover, it can be worthwhile to investigate a subset of repeated best AI agent runs, if AI gameplay does not yield good predictions on average.<\/jats:p>","DOI":"10.1145\/3474658","type":"journal-article","created":{"date-parts":[[2021,10,6]],"date-time":"2021-10-06T22:59:48Z","timestamp":1633561188000},"page":"1-17","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":12,"title":["Predicting Game Difficulty and Engagement Using AI Players"],"prefix":"10.1145","volume":"5","author":[{"given":"Shaghayegh","family":"Roohi","sequence":"first","affiliation":[{"name":"Aalto University, Espoo, Finland"}]},{"given":"Christian","family":"Guckelsberger","sequence":"additional","affiliation":[{"name":"Aalto University, Espoo, Finland"}]},{"given":"Asko","family":"Relas","sequence":"additional","affiliation":[{"name":"Rovio Entertainment, Espoo, Finland"}]},{"given":"Henri","family":"Heiskanen","sequence":"additional","affiliation":[{"name":"Rovio Entertainment, Espoo, Finland"}]},{"given":"Jari","family":"Takatalo","sequence":"additional","affiliation":[{"name":"Rovio Entertainment, Espoo, Finland"}]},{"given":"Perttu","family":"H\u00e4m\u00e4l\u00e4inen","sequence":"additional","affiliation":[{"name":"Aalto University, Espoo, Finland"}]}],"member":"320","published-online":{"date-parts":[[2021,10,6]]},"reference":[{"key":"e_1_2_1_1_1","doi-asserted-by":"publisher","DOI":"10.1109\/TG.2020.3032796"},{"key":"e_1_2_1_2_1","doi-asserted-by":"publisher","DOI":"10.1109\/TG.2019.2947597"},{"key":"e_1_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.1109\/CoG47356.2020.9231670"},{"key":"e_1_2_1_4_1","first-page":"679","article-title":"A Markovian Decision Process","volume":"6","author":"Bellman Richard","year":"1957","unstructured":"Richard Bellman. 1957. A Markovian Decision Process. Journal of Mathematics and Mechanics , Vol. 
6, 5 (1957), 679--684.","journal-title":"Journal of Mathematics and Mechanics"},{"key":"e_1_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1109\/CoG47356.2020.9231552"},{"key":"e_1_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.1109\/CIG.2019.8848038"},{"key":"e_1_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.1145\/3173574.3173615"},{"key":"e_1_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.chb.2011.11.020"},{"key":"e_1_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.1109\/TCIAIG.2012.2186810"},{"key":"e_1_2_1_10_1","doi-asserted-by":"publisher","DOI":"10.1109\/CIG.2019.8848091"},{"key":"e_1_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.1145\/3313831.3376701"},{"key":"e_1_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1145\/1232743.1232769"},{"key":"e_1_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.1145\/2793107.2793147"},{"key":"e_1_2_1_14_1","doi-asserted-by":"publisher","DOI":"10.1145\/2207676.2207689"},{"key":"e_1_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.1145\/3290688.3290748"},{"key":"e_1_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.ijhcs.2019.102383"},{"key":"e_1_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1145\/3027063.3053209"},{"key":"e_1_2_1_18_1","unstructured":"Rovio Entertainment. 2018. Angry Birds Dream Blast . Game [Android iOS]."},{"key":"e_1_2_1_19_1","doi-asserted-by":"publisher","DOI":"10.1126\/science.aac6076"},{"key":"e_1_2_1_20_1","volume-title":"Deep learning","author":"Goodfellow Ian","unstructured":"Ian Goodfellow, Yoshua Bengio, Aaron Courville, and Yoshua Bengio. 2016. Deep learning. Vol. 1. 
MIT press Cambridge."},{"key":"e_1_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.1145\/3116595.3116631"},{"key":"e_1_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.1109\/CIG.2018.8490442"},{"key":"e_1_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.1109\/CIG.2017.8080424"},{"key":"e_1_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1109\/CoG47356.2020.9231762"},{"key":"e_1_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.1109\/TG.2018.2808198"},{"key":"e_1_2_1_26_1","doi-asserted-by":"publisher","DOI":"10.1109\/CIG.2017.8080427"},{"key":"e_1_2_1_27_1","doi-asserted-by":"publisher","DOI":"10.1145\/3025453.3025576"},{"key":"e_1_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.1109\/CIG.2018.8490363"},{"key":"e_1_2_1_29_1","doi-asserted-by":"publisher","DOI":"10.1007\/11871842_29"},{"key":"e_1_2_1_30_1","doi-asserted-by":"publisher","DOI":"10.1109\/CoG47356.2020.9231581"},{"key":"e_1_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.1111\/tops.12086"},{"key":"e_1_2_1_32_1","volume-title":"Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment","volume":"16","author":"Ling Carlos","year":"2020","unstructured":"Carlos Ling, Konrad Tollmar, and Linus Gissl\u00e9n. 2020. Using Deep Convolutional Neural Networks to Detect Rendered Glitches in Video Games. In Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, Vol. 16. 66--73. https:\/\/ojs.aaai.org\/index.php\/AIIDE\/article\/view\/7409"},{"key":"e_1_2_1_33_1","doi-asserted-by":"publisher","DOI":"10.1145\/2470654.2470668"},{"key":"e_1_2_1_34_1","volume-title":"Proceedings of The 33rd International Conference on Machine Learning (Proceedings of Machine Learning Research","volume":"1937","author":"Mnih Volodymyr","year":"2016","unstructured":"Volodymyr Mnih, Adri\u00e0 Puigdom\u00e8nech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. 2016. 
Asynchronous Methods for Deep Reinforcement Learning. In Proceedings of The 33rd International Conference on Machine Learning (Proceedings of Machine Learning Research, Vol. 48), Maria Florina Balcan and Kilian Q. Weinberger (Eds.). PMLR, New York, USA, 1928--1937. http:\/\/proceedings.mlr.press\/v48\/mniha16.html"},{"key":"e_1_2_1_35_1","doi-asserted-by":"publisher","DOI":"10.1038\/nature14236"},{"key":"e_1_2_1_36_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-94-017-9088-8_16"},{"key":"e_1_2_1_37_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-16549-3_30"},{"key":"e_1_2_1_38_1","doi-asserted-by":"publisher","DOI":"10.1002\/asi.20801"},{"key":"e_1_2_1_39_1","doi-asserted-by":"publisher","DOI":"10.1145\/3330340"},{"key":"e_1_2_1_40_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPRW.2017.70"},{"key":"e_1_2_1_41_1","volume-title":"Challenges and Opportunities. In Proceedings of the AAAI Conference on Artificial Intelligence","volume":"30","author":"Perez-Liebana Diego","year":"2016","unstructured":"Diego Perez-Liebana, Spyridon Samothrakis, Julian Togelius, Tom Schaul, and Simon Lucas. 2016. General Video Game AI: Competition, Challenges and Opportunities. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 30. AAAI press, 4335--4337."},{"key":"e_1_2_1_42_1","doi-asserted-by":"publisher","DOI":"10.1109\/CoG47356.2020.9231958"},{"key":"e_1_2_1_43_1","unstructured":"Erik Ragnar Poromaa. 2017. Crushing candy crush: predicting human success rate in a mobile game using Monte-Carlo tree search."},{"key":"e_1_2_1_44_1","doi-asserted-by":"publisher","DOI":"10.1002\/9780470316887"},{"key":"e_1_2_1_45_1","volume-title":"Iv\u00e1n Encinas, Manel Sort, Jan Wedekind, and Mari\u00e1n Bogun\u00e1.","author":"Reguera David","year":"2019","unstructured":"David Reguera, Pol Colomer-de Sim\u00f3n, Iv\u00e1n Encinas, Manel Sort, Jan Wedekind, and Mari\u00e1n Bogun\u00e1. 2019.
The Physics of Fun: Quantifying Human Engagement into Playful Activities. arXiv preprint arXiv:1911.01864 (2019)."},{"key":"e_1_2_1_46_1","doi-asserted-by":"publisher","DOI":"10.1145\/3410404.3414235"},{"key":"e_1_2_1_47_1","doi-asserted-by":"publisher","DOI":"10.1109\/TG.2020.2992282"},{"key":"e_1_2_1_48_1","doi-asserted-by":"publisher","DOI":"10.1006\/ceps.1999.1020"},{"key":"e_1_2_1_49_1","doi-asserted-by":"publisher","DOI":"10.1007\/s11031-006-9051-8"},{"key":"e_1_2_1_50_1","volume-title":"The Art of Game Design: A Book of Lenses","author":"Schell Jesse","unstructured":"Jesse Schell. 2008. The Art of Game Design: A Book of Lenses. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA."},{"key":"e_1_2_1_51_1","volume-title":"Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347","author":"Schulman John","year":"2017","unstructured":"John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347 (2017)."},{"key":"e_1_2_1_52_1","doi-asserted-by":"publisher","DOI":"10.24963\/ijcai.2020"},{"key":"e_1_2_1_53_1","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2020.2980380"},{"key":"e_1_2_1_54_1","doi-asserted-by":"publisher","DOI":"10.1038\/nature16961"},{"key":"e_1_2_1_55_1","doi-asserted-by":"publisher","DOI":"10.1109\/TCIAIG.2011.2161310"},{"key":"e_1_2_1_56_1","doi-asserted-by":"publisher","DOI":"10.1145\/3410404.3414249"},{"key":"e_1_2_1_57_1","volume-title":"Reinforcement learning: An introduction","author":"Sutton Richard S","unstructured":"Richard S Sutton and Andrew G Barto. 2018.
Reinforcement learning: An introduction. MIT press."},{"key":"e_1_2_1_58_1","doi-asserted-by":"publisher","DOI":"10.1145\/1077246.1077253"},{"key":"e_1_2_1_59_1","doi-asserted-by":"publisher","DOI":"10.1109\/CIG.2008.5035629"},{"key":"e_1_2_1_60_1","doi-asserted-by":"publisher","DOI":"10.1145\/3313831.3376723"},{"key":"e_1_2_1_61_1","doi-asserted-by":"publisher","DOI":"10.1109\/CIG.2015.7317913"},{"key":"e_1_2_1_62_1","doi-asserted-by":"publisher","DOI":"10.1080\/08839510701527580"},{"key":"e_1_2_1_63_1","doi-asserted-by":"publisher","DOI":"10.1109\/T-AFFC.2011.6"},{"key":"e_1_2_1_64_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-63519-4"}],"container-title":["Proceedings of the ACM on Human-Computer Interaction"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3474658","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3474658","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T20:18:42Z","timestamp":1750191522000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3474658"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,10,5]]},"references-count":64,"journal-issue":{"issue":"CHI PLAY","published-print":{"date-parts":[[2021,10,5]]}},"alternative-id":["10.1145\/3474658"],"URL":"https:\/\/doi.org\/10.1145\/3474658","relation":{},"ISSN":["2573-0142"],"issn-type":[{"value":"2573-0142","type":"electronic"}],"subject":[],"published":{"date-parts":[[2021,10,5]]},"assertion":[{"value":"2021-10-06","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}