{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,4]],"date-time":"2026-03-04T03:37:33Z","timestamp":1772595453085,"version":"3.50.1"},"reference-count":63,"publisher":"Association for Computing Machinery (ACM)","issue":"2","license":[{"start":{"date-parts":[[2020,5,30]],"date-time":"2020-05-30T00:00:00Z","timestamp":1590796800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/100000185","name":"Defense Advanced Research Projects Agency","doi-asserted-by":"publisher","award":["#N66001-17-2-4"],"award-info":[{"award-number":["#N66001-17-2-4"]}],"id":[{"id":"10.13039\/100000185","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Interact. Intell. Syst."],"published-print":{"date-parts":[[2020,6,30]]},"abstract":"<jats:p>\n            How should reinforcement learning (RL) agents explain themselves to humans not trained in AI? To gain insights into this question, we conducted a 124-participant, four-treatment experiment to compare participants\u2019 mental models of an RL agent in the context of a simple Real-Time Strategy (RTS) game. The four treatments isolated two types of explanations vs. neither vs. both together. The two types of explanations were as follows: (1) saliency maps (an \u201cInput Intelligibility Type\u201d that explains the AI\u2019s focus of attention) and (2) reward-decomposition bars (an \u201cOutput Intelligibility Type\u201d that explains the AI\u2019s predictions of future types of rewards). Our results show that a combined explanation that included saliency and reward bars was needed to achieve a statistically significant difference in participants\u2019 mental model scores over the no-explanation treatment. 
However, this combined explanation was far from a panacea: It exacted disproportionately high cognitive loads from the participants who received the combined explanation. Further, in some situations, participants who saw both explanations predicted the agent\u2019s next action\n            <jats:italic>worse<\/jats:italic>\n            than all other treatments\u2019 participants.\n          <\/jats:p>","DOI":"10.1145\/3366485","type":"journal-article","created":{"date-parts":[[2020,5,31]],"date-time":"2020-05-31T04:09:47Z","timestamp":1590898187000},"page":"1-37","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":37,"title":["Mental Models of Mere Mortals with Explanations of Reinforcement Learning"],"prefix":"10.1145","volume":"10","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-4964-6059","authenticated-orcid":false,"given":"Andrew","family":"Anderson","sequence":"first","affiliation":[{"name":"Oregon State University, SW Jefferson Way, Corvallis, OR"}]},{"given":"Jonathan","family":"Dodge","sequence":"additional","affiliation":[{"name":"Oregon State University, SW Jefferson Way, Corvallis, OR"}]},{"given":"Amrita","family":"Sadarangani","sequence":"additional","affiliation":[{"name":"Oregon State University, SW Jefferson Way, Corvallis, OR"}]},{"given":"Zoe","family":"Juozapaitis","sequence":"additional","affiliation":[{"name":"Oregon State University, SW Jefferson Way, Corvallis, OR"}]},{"given":"Evan","family":"Newman","sequence":"additional","affiliation":[{"name":"Oregon State University, SW Jefferson Way, Corvallis, OR"}]},{"given":"Jed","family":"Irvine","sequence":"additional","affiliation":[{"name":"Oregon State University, SW Jefferson Way, Corvallis, OR"}]},{"given":"Souti","family":"Chattopadhyay","sequence":"additional","affiliation":[{"name":"Oregon State University, SW Jefferson Way, Corvallis, 
OR"}]},{"given":"Matthew","family":"Olson","sequence":"additional","affiliation":[{"name":"Oregon State University, SW Jefferson Way, Corvallis, OR"}]},{"given":"Alan","family":"Fern","sequence":"additional","affiliation":[{"name":"Oregon State University, SW Jefferson Way, Corvallis, OR"}]},{"given":"Margaret","family":"Burnett","sequence":"additional","affiliation":[{"name":"Oregon State University, SW Jefferson Way, Corvallis, OR"}]}],"member":"320","published-online":{"date-parts":[[2020,5,30]]},"reference":[{"key":"e_1_2_1_1_1","volume-title":"Advances in Neural Information Processing Systems","volume":"31","author":"Adebayo Julius","year":"2018","unstructured":"Julius Adebayo , Justin Gilmer , Michael Muelly , Ian Goodfellow , Moritz Hardt , and Been Kim . 2018 . Sanity checks for saliency maps . In Advances in Neural Information Processing Systems , Vol. 31 , S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (Eds.). Curran Associates, Inc., 9505--9515. Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, and Been Kim. 2018. Sanity checks for saliency maps. In Advances in Neural Information Processing Systems, Vol. 31, S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (Eds.). Curran Associates, Inc., 9505--9515."},{"key":"e_1_2_1_2_1","volume-title":"Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems. International Foundation for Autonomous Agents and Multiagent Systems, 1168--1176","author":"Amir Dan","year":"2018","unstructured":"Dan Amir and Ofra Amir . 2018 . HIGHLIGHTS: Summarizing agent behavior to people . In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems. International Foundation for Autonomous Agents and Multiagent Systems, 1168--1176 . Dan Amir and Ofra Amir. 2018. HIGHLIGHTS: Summarizing agent behavior to people. 
In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems. International Foundation for Autonomous Agents and Multiagent Systems, 1168--1176."},{"key":"e_1_2_1_3_1","volume-title":"Proceedings of the International Conference on Learning Representations.","author":"Ancona Marco","year":"2018","unstructured":"Marco Ancona , Enea Ceolini , Cengiz \u00d6ztireli , and Markus Gross . 2018 . Towards better understanding of gradient-based attribution methods for deep neural networks . In Proceedings of the International Conference on Learning Representations. Marco Ancona, Enea Ceolini, Cengiz \u00d6ztireli, and Markus Gross. 2018. Towards better understanding of gradient-based attribution methods for deep neural networks. In Proceedings of the International Conference on Learning Representations."},{"key":"e_1_2_1_4_1","doi-asserted-by":"publisher","DOI":"10.1145\/2365952.2365964"},{"key":"e_1_2_1_5_1","volume-title":"Self-explanations: How students study and use examples in learning to solve problems. Cogn. Sci. 13, 2 (4","author":"Chi Michelene T. H.","year":"1989","unstructured":"Michelene T. H. Chi , Miriam Bassok , Matthew W. Lewis , Peter Reimann , and Robert Glaser . 1989 . Self-explanations: How students study and use examples in learning to solve problems. Cogn. Sci. 13, 2 (4 1989), 145--182. DOI:https:\/\/doi.org\/10.1207\/s15516709cog1302_1 Michelene T. H. Chi, Miriam Bassok, Matthew W. Lewis, Peter Reimann, and Robert Glaser. 1989. Self-explanations: How students study and use examples in learning to solve problems. Cogn. Sci. 13, 2 (4 1989), 145--182. DOI:https:\/\/doi.org\/10.1207\/s15516709cog1302_1"},{"key":"e_1_2_1_6_1","volume-title":"The Sage Dictionary of Statistics: A Practical Resource for Students in the Social Sciences","author":"Cramer Duncan","unstructured":"Duncan Cramer and Dennis Howitt . 2004. The Sage Dictionary of Statistics: A Practical Resource for Students in the Social Sciences . Sage . 
Duncan Cramer and Dennis Howitt. 2004. The Sage Dictionary of Statistics: A Practical Resource for Students in the Social Sciences. Sage."},{"key":"e_1_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.1145\/3173574.3174136"},{"key":"e_1_2_1_8_1","volume-title":"Proceedings of the IJCAI Workshop on Explainable Artificial Intelligence. 40--44","author":"Erwig Martin","year":"2018","unstructured":"Martin Erwig , Alan Fern , Magesh Murali , and Anurag Koul . 2018 . Explaining deep adaptive programs via reward decomposition . In Proceedings of the IJCAI Workshop on Explainable Artificial Intelligence. 40--44 . Martin Erwig, Alan Fern, Magesh Murali, and Anurag Koul. 2018. Explaining deep adaptive programs via reward decomposition. In Proceedings of the IJCAI Workshop on Explainable Artificial Intelligence. 40--44."},{"key":"e_1_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2017.371"},{"key":"e_1_2_1_10_1","volume-title":"Proceedings of the 35th International Conference on Machine Learning (Proceedings of Machine Learning Research), Jennifer Dy and Andreas Krause (Eds.)","volume":"80","author":"Greydanus Sam","year":"2018","unstructured":"Sam Greydanus , Anurag Koul , Jonathan Dodge , and Alan Fern . 2018 . Visualizing and understanding Atari agents . In Proceedings of the 35th International Conference on Machine Learning (Proceedings of Machine Learning Research), Jennifer Dy and Andreas Krause (Eds.) , Vol. 80 . PMLR, Stockholm Sweden, 1792-- 1801. Sam Greydanus, Anurag Koul, Jonathan Dodge, and Alan Fern. 2018. Visualizing and understanding Atari agents. In Proceedings of the 35th International Conference on Machine Learning (Proceedings of Machine Learning Research), Jennifer Dy and Andreas Krause (Eds.), Vol. 80. PMLR, Stockholm Sweden, 1792--1801."},{"key":"e_1_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.1177\/154193120605000909"},{"key":"e_1_2_1_12_1","volume-title":"Staveland","author":"Hart Sandra G.","year":"1988","unstructured":"Sandra G. 
Hart and Lowell E . Staveland . 1988 . Development of NASA-TLX (task load index): Results of empirical and theoretical research. In Advances in Psychology. Vol. 52 . Elsevier , 139--183. Sandra G. Hart and Lowell E. Staveland. 1988. Development of NASA-TLX (task load index): Results of empirical and theoretical research. In Advances in Psychology. Vol. 52. Elsevier, 139--183."},{"key":"e_1_2_1_13_1","volume-title":"Proceedings of the ACM\/IEEE International Conference on Human-Robot Interaction. ACM, 303--312","author":"Hayes Bradley","unstructured":"Bradley Hayes and Julie A. Shah . 2017. Improving robot controller transparency through autonomous policy explanation . In Proceedings of the ACM\/IEEE International Conference on Human-Robot Interaction. ACM, 303--312 . Bradley Hayes and Julie A. Shah. 2017. Improving robot controller transparency through autonomous policy explanation. In Proceedings of the ACM\/IEEE International Conference on Human-Robot Interaction. ACM, 303--312."},{"key":"e_1_2_1_14_1","volume-title":"Metrics for explainable AI: Challenges and prospects. arXiv:1812.04608","author":"Hoffman Robert","year":"2018","unstructured":"Robert Hoffman , Shane Mueller , Gary Klein , and Jordan Litman . 2018. Metrics for explainable AI: Challenges and prospects. arXiv:1812.04608 ( 2018 ). Robert Hoffman, Shane Mueller, Gary Klein, and Jordan Litman. 2018. Metrics for explainable AI: Challenges and prospects. arXiv:1812.04608 (2018)."},{"key":"e_1_2_1_15_1","volume-title":"Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, 579","author":"Hohman Fred","unstructured":"Fred Hohman , Andrew Head , Rich Caruana , Robert DeLine , and Steven M. Drucker . 2019. Gamut: A design probe to understand how data scientists understand machine learning models . In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, 579 . Fred Hohman, Andrew Head, Rich Caruana, Robert DeLine, and Steven M. Drucker. 2019. 
Gamut: A design probe to understand how data scientists understand machine learning models. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, 579."},{"key":"e_1_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.1177\/1049732305276687"},{"key":"e_1_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10514-018-9771-0"},{"key":"e_1_2_1_18_1","first-page":"223","article-title":"Nouvelles recherches sur la distribution florale","volume":"44","author":"Jaccard Paul","year":"1908","unstructured":"Paul Jaccard . 1908 . Nouvelles recherches sur la distribution florale . Bull. Soc. Vaud. Sci. Nat. 44 (1908), 223 -- 270 . Paul Jaccard. 1908. Nouvelles recherches sur la distribution florale. Bull. Soc. Vaud. Sci. Nat. 44 (1908), 223--270.","journal-title":"Bull. Soc. Vaud. Sci. Nat."},{"key":"e_1_2_1_19_1","volume-title":"Designing with the Mind in Mind: Simple Guide to Understanding User Interface Design Guidelines","author":"Johnson Jeff","unstructured":"Jeff Johnson . 2013. Designing with the Mind in Mind: Simple Guide to Understanding User Interface Design Guidelines . Elsevier . Jeff Johnson. 2013. Designing with the Mind in Mind: Simple Guide to Understanding User Interface Design Guidelines. Elsevier."},{"key":"e_1_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.1145\/3290605.3300487"},{"key":"e_1_2_1_21_1","volume-title":"Proceedings of the International Conference on Machine Learning (Proceedings of Machine Learning Research), Jennifer Dy and Andreas Krause (Eds.)","volume":"80","author":"Kim Been","year":"2018","unstructured":"Been Kim , Martin Wattenberg , Justin Gilmer , Carrie Cai , James Wexler , Fernanda Viegas , and Rory Sayres . 2018 . Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (TCAV) . In Proceedings of the International Conference on Machine Learning (Proceedings of Machine Learning Research), Jennifer Dy and Andreas Krause (Eds.) , Vol. 80 . 
PMLR, Stockholm Sweden, 2668--2677. http:\/\/proceedings.mlr.press\/v80\/kim18d.html Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viegas, and Rory Sayres. 2018. Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (TCAV). In Proceedings of the International Conference on Machine Learning (Proceedings of Machine Learning Research), Jennifer Dy and Andreas Krause (Eds.), Vol. 80. PMLR, Stockholm Sweden, 2668--2677. http:\/\/proceedings.mlr.press\/v80\/kim18d.html"},{"key":"e_1_2_1_22_1","volume-title":"Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems. ACM","author":"Kim Man-Je","unstructured":"Man-Je Kim , Kyung-Joong Kim , SeungJun Kim , and Anind K. Dey . 2016. Evaluation of starcraft artificial intelligence competition bots by experienced human players . In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems. ACM , 1915--1921. Man-Je Kim, Kyung-Joong Kim, SeungJun Kim, and Anind K. Dey. 2016. Evaluation of starcraft artificial intelligence competition bots by experienced human players. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems. ACM, 1915--1921."},{"key":"e_1_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2018.2800016"},{"key":"e_1_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1207\/s15326985ep4102_1"},{"key":"e_1_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.1145\/2678025.2701399"},{"key":"e_1_2_1_26_1","doi-asserted-by":"publisher","DOI":"10.1109\/VLHCC.2013.6645235"},{"key":"e_1_2_1_28_1","volume-title":"Proceedings of the 11th International Conference on Ubiquitous Computing. ACM, 195--204","author":"Brian","unstructured":"Brian Y. Lim and Anind K. Dey. 2009. Assessing demand for intelligibility in context-aware applications . In Proceedings of the 11th International Conference on Ubiquitous Computing. 
ACM, 195--204 . Brian Y. Lim and Anind K. Dey. 2009. Assessing demand for intelligibility in context-aware applications. In Proceedings of the 11th International Conference on Ubiquitous Computing. ACM, 195--204."},{"key":"e_1_2_1_29_1","volume-title":"Proceedings of the ACM International Conference on Intelligent User Interfaces (IUI) Workshops.","author":"Lim Brian Y.","year":"2019","unstructured":"Brian Y. Lim , Qian Yang , Ashraf M Abdul , and Danding Wang . 2019 . Why these explanations? Selecting intelligibility types for explanation goals . In Proceedings of the ACM International Conference on Intelligent User Interfaces (IUI) Workshops. Brian Y. Lim, Qian Yang, Ashraf M Abdul, and Danding Wang. 2019. Why these explanations? Selecting intelligibility types for explanation goals. In Proceedings of the ACM International Conference on Intelligent User Interfaces (IUI) Workshops."},{"key":"e_1_2_1_30_1","doi-asserted-by":"publisher","DOI":"10.1518\/001872008X250601"},{"key":"e_1_2_1_31_1","volume-title":"Proceedings of the ACM\/IEEE International Conference on Human-Robot Interaction (HRI\u201912)","author":"Lomas M.","unstructured":"M. Lomas , R. Chevalier , E. V. Cross , R. C. Garrett , J. Hoare , and M. Kopack . 2012. Explaining robot actions . In Proceedings of the ACM\/IEEE International Conference on Human-Robot Interaction (HRI\u201912) . 187--188. DOI:https:\/\/doi.org\/10.1145\/2157689.2157748 M. Lomas, R. Chevalier, E. V. Cross, R. C. Garrett, J. Hoare, and M. Kopack. 2012. Explaining robot actions. In Proceedings of the ACM\/IEEE International Conference on Human-Robot Interaction (HRI\u201912). 187--188. DOI:https:\/\/doi.org\/10.1145\/2157689.2157748"},{"key":"e_1_2_1_32_1","volume-title":"Explainable reinforcement learning through a causal lens. CoRR abs\/1905.10958","author":"Madumal Prashan","year":"2019","unstructured":"Prashan Madumal , Tim Miller , Liz Sonenberg , and Frank Vetere . 2019. 
Explainable reinforcement learning through a causal lens. CoRR abs\/1905.10958 ( 2019 ). arxiv:1905.10958 http:\/\/arxiv.org\/abs\/1905.10958 Prashan Madumal, Tim Miller, Liz Sonenberg, and Frank Vetere. 2019. Explainable reinforcement learning through a causal lens. CoRR abs\/1905.10958 (2019). arxiv:1905.10958 http:\/\/arxiv.org\/abs\/1905.10958"},{"key":"e_1_2_1_33_1","volume-title":"Explanation in artificial intelligence: Insights from the social sciences. CoRR abs\/1706.07269","author":"Miller Tim","year":"2017","unstructured":"Tim Miller . 2017. Explanation in artificial intelligence: Insights from the social sciences. CoRR abs\/1706.07269 ( 2017 ). arxiv:1706.07269 http:\/\/arxiv.org\/abs\/1706.07269 Tim Miller. 2017. Explanation in artificial intelligence: Insights from the social sciences. CoRR abs\/1706.07269 (2017). arxiv:1706.07269 http:\/\/arxiv.org\/abs\/1706.07269"},{"key":"e_1_2_1_34_1","doi-asserted-by":"publisher","DOI":"10.1145\/383952.383991"},{"key":"e_1_2_1_35_1","unstructured":"NASA. [n.d.]. NASA TLX: Task Load Index. Retrieved from https:\/\/humansystems.arc.nasa.gov\/groups\/TLX\/.  NASA. [n.d.]. NASA TLX: Task Load Index. Retrieved from https:\/\/humansystems.arc.nasa.gov\/groups\/TLX\/."},{"key":"e_1_2_1_36_1","doi-asserted-by":"publisher","DOI":"10.1145\/3116595.3116624"},{"key":"e_1_2_1_37_1","doi-asserted-by":"publisher","DOI":"10.1145\/2968120.2987740"},{"key":"e_1_2_1_38_1","volume-title":"Mental models","author":"Norman Donald","year":"1983","unstructured":"Donald Norman and Dedra Gentner . 1983. Mental models . Lawrence Erlbaum Associates , Hillsdale, NJ , 1983 . Donald Norman and Dedra Gentner. 1983. Mental models. Lawrence Erlbaum Associates, Hillsdale, NJ, 1983."},{"key":"e_1_2_1_39_1","doi-asserted-by":"publisher","DOI":"10.1109\/TCIAIG.2013.2286295"},{"key":"e_1_2_1_40_1","volume-title":"Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems. 
International Foundation for Autonomous Agents and Multiagent Systems, 957--965","author":"Peng Bei","unstructured":"Bei Peng , James MacGlashan , Robert Loftin , Michael L. Littman , David L. Roberts , and Matthew E. Taylor . 2016. A need for speed: Adapting agent action speed to improve task learning from non-expert humans . In Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems. International Foundation for Autonomous Agents and Multiagent Systems, 957--965 . Bei Peng, James MacGlashan, Robert Loftin, Michael L. Littman, David L. Roberts, and Matthew E. Taylor. 2016. A need for speed: Adapting agent action speed to improve task learning from non-expert humans. In Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems. International Foundation for Autonomous Agents and Multiagent Systems, 957--965."},{"key":"e_1_2_1_41_1","doi-asserted-by":"publisher","DOI":"10.1145\/3172944.3172946"},{"key":"e_1_2_1_42_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2013.147"},{"key":"e_1_2_1_43_1","volume-title":"Leveraging human knowledge in tabular reinforcement learning: A study of human subjects. Knowl. Eng. Rev. 33","author":"Rosenfeld Ariel","year":"2018","unstructured":"Ariel Rosenfeld , Moshe Cohen , Matthew E. Taylor , and Sarit Kraus . 2018. Leveraging human knowledge in tabular reinforcement learning: A study of human subjects. Knowl. Eng. Rev. 33 ( 2018 ). Ariel Rosenfeld, Moshe Cohen, Matthew E. Taylor, and Sarit Kraus. 2018. Leveraging human knowledge in tabular reinforcement learning: A study of human subjects. Knowl. Eng. Rev. 33 (2018)."},{"key":"e_1_2_1_44_1","volume-title":"Proceedings of the International Conference on Machine Learning. 656--663","author":"Russell Stuart","year":"2003","unstructured":"Stuart Russell and Andrew Zimdars . 2003 . Q-decomposition for reinforcement learning agents . In Proceedings of the International Conference on Machine Learning. 656--663 . 
Stuart Russell and Andrew Zimdars. 2003. Q-decomposition for reinforcement learning agents. In Proceedings of the International Conference on Machine Learning. 656--663."},{"key":"e_1_2_1_45_1","doi-asserted-by":"publisher","DOI":"10.22237\/jmasm\/1257035100"},{"key":"e_1_2_1_46_1","volume-title":"Deep inside convolutional networks: Visualising image classification models and saliency maps. CoRR abs\/1312.6034","author":"Simonyan Karen","year":"2013","unstructured":"Karen Simonyan , Andrea Vedaldi , and Andrew Zisserman . 2013. Deep inside convolutional networks: Visualising image classification models and saliency maps. CoRR abs\/1312.6034 ( 2013 ). Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. 2013. Deep inside convolutional networks: Visualising image classification models and saliency maps. CoRR abs\/1312.6034 (2013)."},{"key":"e_1_2_1_47_1","volume-title":"Riedmiller","author":"Springenberg Jost","year":"2014","unstructured":"Jost Springenberg , Alexey Dosovitskiy , Thomas Brox , and Martin A . Riedmiller . 2014 . Striving for simplicity: The all convolutional net. CoRR abs\/1412.6806 (2014). Jost Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin A. Riedmiller. 2014. Striving for simplicity: The all convolutional net. CoRR abs\/1412.6806 (2014)."},{"key":"e_1_2_1_48_1","volume-title":"Barto","author":"Sutton Richard S.","year":"2018","unstructured":"Richard S. Sutton and Andrew G . Barto . 2018 . Reinforcement Learning : An Introduction. MIT Press . Richard S. Sutton and Andrew G. Barto. 2018. Reinforcement Learning: An Introduction. MIT Press."},{"key":"e_1_2_1_49_1","doi-asserted-by":"publisher","DOI":"10.1037\/0022-0663.80.4.424"},{"key":"e_1_2_1_50_1","doi-asserted-by":"publisher","DOI":"10.1109\/ROMAN.2006.314459"},{"key":"e_1_2_1_51_1","volume-title":"Proceedings of the ACM Conference on Human Factors in Computing Systems. ACM, 31--40","author":"Tullio J.","unstructured":"J. Tullio , A. Dey , J. Chalecki , and J. Fogarty . 2007. 
How it works: A field study of non-technical users interacting with an intelligent system . In Proceedings of the ACM Conference on Human Factors in Computing Systems. ACM, 31--40 . J. Tullio, A. Dey, J. Chalecki, and J. Fogarty. 2007. How it works: A field study of non-technical users interacting with an intelligent system. In Proceedings of the ACM Conference on Human Factors in Computing Systems. ACM, 31--40."},{"key":"e_1_2_1_52_1","unstructured":"Jasper van der Waa Jurriaan van Diggelen Karel van den Bosch and Mark Neerincx. 2018. Contrastive explanations for reinforcement learning in terms of expected consequences. (2018).  Jasper van der Waa Jurriaan van Diggelen Karel van den Bosch and Mark Neerincx. 2018. Contrastive explanations for reinforcement learning in terms of expected consequences. (2018)."},{"key":"e_1_2_1_53_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10648-005-3951-0"},{"key":"e_1_2_1_54_1","unstructured":"Harm Van Seijen Mehdi Fatemi Joshua Romoff Romain Laroche Tavian Barnes and Jeffrey Tsang. 2017. Hybrid reward architecture for reinforcement learning. In Advances in Neural Information Processing Systems. 5392--5402.  Harm Van Seijen Mehdi Fatemi Joshua Romoff Romain Laroche Tavian Barnes and Jeffrey Tsang. 2017. Hybrid reward architecture for reinforcement learning. In Advances in Neural Information Processing Systems. 5392--5402."},{"key":"e_1_2_1_55_1","unstructured":"Oriol Vinyals David Silver etal 2019. AlphaStar: Mastering the Real-Time Strategy Game StarCraft II. Retrieved from shorturl.at\/dinL3.  Oriol Vinyals David Silver et al. 2019. AlphaStar: Mastering the Real-Time Strategy Game StarCraft II. Retrieved from shorturl.at\/dinL3."},{"key":"e_1_2_1_56_1","volume-title":"Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI\u201919)","volume":"19","author":"Wang Danding","unstructured":"Danding Wang , Qian Yang , Ashraf Abdul , and Brian Y. Lim . 2019. 
Designing theory-driven user-centric explainable AI . In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI\u201919) , Vol. 19 . Danding Wang, Qian Yang, Ashraf Abdul, and Brian Y. Lim. 2019. Designing theory-driven user-centric explainable AI. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI\u201919), Vol. 19."},{"key":"e_1_2_1_57_1","doi-asserted-by":"publisher","DOI":"10.1145\/3282486"},{"key":"e_1_2_1_58_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.ergon.2012.11.003"},{"key":"e_1_2_1_59_1","doi-asserted-by":"publisher","DOI":"10.5555\/330775"},{"key":"e_1_2_1_60_1","doi-asserted-by":"publisher","DOI":"10.1109\/ROMAN.2017.8172491"},{"key":"e_1_2_1_61_1","doi-asserted-by":"publisher","DOI":"10.1145\/2993901.2993908"},{"key":"e_1_2_1_62_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-10590-1_53"},{"key":"e_1_2_1_63_1","doi-asserted-by":"publisher","DOI":"10.1007\/s11263-017-1059-x"},{"key":"e_1_2_1_64_1","volume-title":"The Construction of a Scale to Measure Perceived Effort","author":"Zijlstra F. R. H.","unstructured":"F. R. H. Zijlstra and L. Van Doorn . 1985. The Construction of a Scale to Measure Perceived Effort . University of Technology . F. R. H. Zijlstra and L. Van Doorn. 1985. The Construction of a Scale to Measure Perceived Effort. 
University of Technology."}],"container-title":["ACM Transactions on Interactive Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3366485","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3366485","content-type":"application\/pdf","content-version":"vor","intended-application":"syndication"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3366485","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T22:32:54Z","timestamp":1750199574000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3366485"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,5,30]]},"references-count":63,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2020,6,30]]}},"alternative-id":["10.1145\/3366485"],"URL":"https:\/\/doi.org\/10.1145\/3366485","relation":{},"ISSN":["2160-6455","2160-6463"],"issn-type":[{"value":"2160-6455","type":"print"},{"value":"2160-6463","type":"electronic"}],"subject":[],"published":{"date-parts":[[2020,5,30]]},"assertion":[{"value":"2019-07-01","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2020-02-01","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2020-05-30","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}