{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,13]],"date-time":"2026-04-13T15:32:14Z","timestamp":1776094334393,"version":"3.50.1"},"reference-count":70,"publisher":"Springer Science and Business Media LLC","issue":"7","license":[{"start":{"date-parts":[[2024,6,5]],"date-time":"2024-06-05T00:00:00Z","timestamp":1717545600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2024,6,5]],"date-time":"2024-06-05T00:00:00Z","timestamp":1717545600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/100014013","name":"UK Research and Innovation","doi-asserted-by":"publisher","award":["EP\/S023356\/1"],"award-info":[{"award-number":["EP\/S023356\/1"]}],"id":[{"id":"10.13039\/100014013","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100014013","name":"UK Research and Innovation","doi-asserted-by":"publisher","award":["EP\/V062506\/1"],"award-info":[{"award-number":["EP\/V062506\/1"]}],"id":[{"id":"10.13039\/100014013","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100014013","name":"UK Research and Innovation","doi-asserted-by":"publisher","award":["EP\/V010875\/1"],"award-info":[{"award-number":["EP\/V010875\/1"]}],"id":[{"id":"10.13039\/100014013","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Int J of Soc Robotics"],"published-print":{"date-parts":[[2024,7]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>In recent years, explanations have become a pressing matter in AI research. This development was caused by the increased use of black-box models and a realization of the importance of trustworthy AI. 
In particular, explanations are necessary for human\u2013agent interactions to ensure that the user can trust the agent and that collaborations are effective. Human\u2013agent interactions are complex social scenarios involving a user, an autonomous agent, and an environment or task with its own distinct properties. Thus, such interactions require a wide variety of explanations, which are not covered by the methods of a single AI discipline, such as computer vision or natural language processing. In this paper, we map out what types of explanations are important for human\u2013agent interactions, surveying the field via a scoping review. In addition to the typical introspective explanation tackled by explainability researchers, we look at assistive explanations, aiming to support the user with their task. Secondly, we survey what causes the need for an explanation in the first place. We identify a variety of human\u2013agent interaction-specific causes and categorize them by whether they are centered on the agent\u2019s behavior, the user\u2019s mental state, or an external entity. 
Our overview aims to guide robotics practitioners in designing agents with more comprehensive explanation-related capacities, considering different explanation types and the concrete times when explanations should be given.<\/jats:p>","DOI":"10.1007\/s12369-024-01148-8","type":"journal-article","created":{"date-parts":[[2024,6,5]],"date-time":"2024-06-05T08:02:15Z","timestamp":1717574535000},"page":"1681-1692","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":5,"title":["A Taxonomy of Explanation Types and Need Indicators in Human\u2013Agent Collaborations"],"prefix":"10.1007","volume":"16","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-6105-609X","authenticated-orcid":false,"given":"Lennart","family":"Wachowiak","sequence":"first","affiliation":[]},{"given":"Andrew","family":"Coles","sequence":"additional","affiliation":[]},{"given":"Gerard","family":"Canal","sequence":"additional","affiliation":[]},{"given":"Oya","family":"Celiktutan","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,6,5]]},"reference":[{"key":"1148_CR1","doi-asserted-by":"crossref","unstructured":"Rosenfeld A, Richardson A (2019) Explainability in human\u2013agent systems. AAMAS 33","DOI":"10.1007\/s10458-019-09408-y"},{"key":"1148_CR2","unstructured":"Anjomshoae S, Najjar A, Calvaresi D, Fr\u00e4mling K (2019) Explainable agents and robots: results from a systematic literature review. In: AAMAS"},{"key":"1148_CR3","doi-asserted-by":"crossref","unstructured":"Sakai T, Nagai T (2022) Explainable autonomous robots: a survey and perspective. Adv Robot 36","DOI":"10.1080\/01691864.2022.2029720"},{"key":"1148_CR4","doi-asserted-by":"crossref","unstructured":"Setchi R, Dehkordi MB, Khan JS (2020) Explainable robotics in human\u2013robot interactions. 
Procedia Comput Sci","DOI":"10.1016\/j.procs.2020.09.198"},{"key":"1148_CR5","doi-asserted-by":"crossref","unstructured":"Sado F, Loo CK, Liew WS, Kerzel M, Wermter S (2022) Explainable goal-driven agents and robots - a comprehensive review. ACM Comput Surv","DOI":"10.1145\/3564240"},{"key":"1148_CR6","doi-asserted-by":"crossref","unstructured":"Papagni G, Koeszegi S (2021) Understandable and trustworthy explainable robots: a sensemaking perspective. Paladyn","DOI":"10.1515\/pjbr-2021-0002"},{"key":"1148_CR7","doi-asserted-by":"crossref","unstructured":"Adadi A, Berrada M (2018) Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access 6","DOI":"10.1109\/ACCESS.2018.2870052"},{"key":"1148_CR8","volume-title":"Humans and automation: system design and research issues","author":"TB Sheridan","year":"2002","unstructured":"Sheridan TB (2002) Humans and automation: system design and research issues. Wiley, Hoboken"},{"key":"1148_CR9","doi-asserted-by":"crossref","unstructured":"Soni U, Sreedharan S, Kambhampati S (2021) Not all users are the same: providing personalized explanations for sequential decision making problems. In: IROS","DOI":"10.1109\/IROS51168.2021.9636331"},{"key":"1148_CR10","unstructured":"Kopecka H, Such J (2020) Explainable ai for cultural minds. In: Workshop on Dialogue, Expl. and Argumentation for HAI"},{"key":"1148_CR11","doi-asserted-by":"crossref","unstructured":"Miller T (2019) Explanation in artificial intelligence: insights from the social sciences. Artif Intell 267","DOI":"10.1016\/j.artint.2018.07.007"},{"key":"1148_CR12","unstructured":"Marr D (1982) Vision: a computational investigation into the human representation and processing of visual information"},{"key":"1148_CR13","doi-asserted-by":"crossref","unstructured":"Srinivasan R, Chander A (2021) Explanation perspectives from the cognitive sciences-a survey. 
In: IJCAI","DOI":"10.24963\/ijcai.2020\/670"},{"key":"1148_CR14","volume-title":"How the mind explains behavior: folk explanations, meaning, and social interaction","author":"BF Malle","year":"2006","unstructured":"Malle BF (2006) How the mind explains behavior: folk explanations, meaning, and social interaction. MIT Press, London"},{"key":"1148_CR15","unstructured":"Malle B (2014) A coding scheme for folk explanations of behavior"},{"key":"1148_CR16","doi-asserted-by":"crossref","unstructured":"Panisson AR, Engelmann DC, Bordini RH (2022) Engineering explainable agents: an argumentation-based approach. In: Engineering multi-agent systems: 9th international workshop. Springer","DOI":"10.1007\/978-3-030-97457-2_16"},{"key":"1148_CR17","doi-asserted-by":"publisher","DOI":"10.1017\/CBO9780511802034","volume-title":"Argumentation schemes","author":"D Walton","year":"2008","unstructured":"Walton D, Reed C, Macagno F (2008) Argumentation schemes. Cambridge University Press, London"},{"key":"1148_CR18","doi-asserted-by":"crossref","unstructured":"Lombrozo T (2006) The structure and function of explanations. Trends Cogn Sci 10","DOI":"10.1016\/j.tics.2006.08.004"},{"key":"1148_CR19","unstructured":"Explanation, Cambridge Dictionary. Cambridge Dictionary. Accessed Nov 2022. https:\/\/dictionary.cambridge.org\/dictionary\/english\/explanation"},{"key":"1148_CR20","unstructured":"Explaining, Merriam Webster. Accessed Nov 2022. https:\/\/www.merriam-webster.com\/dictionary\/explaining"},{"key":"1148_CR21","doi-asserted-by":"crossref","unstructured":"Ginet C (2016) Reasons explanation: further defense of a non-causal account. J Ethics 20","DOI":"10.1007\/s10892-016-9232-y"},{"key":"1148_CR22","unstructured":"Faye J (2007) The pragmatic-rhetorical theory of explanation. In: Rethinking explanation. Springer, Dordrecht"},{"key":"1148_CR23","unstructured":"Ehsan U, Riedl MO (2022) Social construction of XAI: Do we need one definition to rule them all? 
Preprint arXiv:2211.06499"},{"key":"1148_CR24","doi-asserted-by":"publisher","DOI":"10.1145\/3457188","author":"S Wallk\u00f6tter","year":"2021","unstructured":"Wallk\u00f6tter S, Tulli S, Castellano G, Paiva A, Chetouani M (2021) Explainable embodied agents through social cues: a review. J Hum-Robot Interact. https:\/\/doi.org\/10.1145\/3457188","journal-title":"J Hum-Robot Interact"},{"key":"1148_CR25","doi-asserted-by":"crossref","unstructured":"Wilkinson S (2014) Levels and kinds of explanation: lessons from neuropsychiatry. Front Psychol","DOI":"10.3389\/fpsyg.2014.00373"},{"key":"1148_CR26","doi-asserted-by":"crossref","unstructured":"Keil FC (2006) Explanation and understanding. Annu Rev Psychol","DOI":"10.1146\/annurev.psych.57.102904.190100"},{"key":"1148_CR27","unstructured":"Liquin E, Lombrozo T (2018) Determinants and consequences of the need for explanation. In: CogSci"},{"key":"1148_CR28","unstructured":"Krause L, Vossen P (2020) When to explain: identifying explanation triggers in human\u2013agent interaction. In: INLT for XAI"},{"issue":"7","key":"1148_CR29","doi-asserted-by":"publisher","first-page":"467","DOI":"10.7326\/M18-0850","volume":"169","author":"AC Tricco","year":"2018","unstructured":"Tricco AC, Lillie E, Zarin W, O\u2019Brien KK, Colquhoun H, Levac D, Moher D, Peters MD, Horsley T, Weeks L et al (2018) Prisma extension for scoping reviews (prisma-scr): checklist and explanation. Ann Intern Med 169(7):467\u2013473","journal-title":"Ann Intern Med"},{"key":"1148_CR30","doi-asserted-by":"crossref","unstructured":"Chari S, Seneviratne O, Gruen DM, Foreman MA, Das AK, McGuinness DL (2020) Explanation ontology: a model of explanations for user-centered ai. In: International Semantic Web Conference","DOI":"10.1007\/978-3-030-62466-8_15"},{"key":"1148_CR31","unstructured":"Wilson JR, Aung PT, Boucher I (2021) Enabling a social robot to process social cues to detect when to help a user. 
arXiv:2110.11075"},{"key":"1148_CR32","unstructured":"Raymond A, Gunes H, Prorok A (2020) Culture-based explainable human\u2013agent deconfliction. AAMAS"},{"key":"1148_CR33","doi-asserted-by":"publisher","unstructured":"Gao X, Gong R, Zhao Y, Wang S, Shu T, Zhu S-C (2020) Joint mind modeling for explanation generation in complex human\u2013robot collaborative tasks. In: IEEE international conference on robot and human interactive communication (RO-MAN), pp 1119\u20131126. https:\/\/doi.org\/10.1109\/RO-MAN47096.2020.9223595","DOI":"10.1109\/RO-MAN47096.2020.9223595"},{"key":"1148_CR34","doi-asserted-by":"crossref","unstructured":"Hu S, Chew E (2020) The investigation and novel trinity modeling for museum robots. In: Eighth international conference on technological ecosystems for enhancing multiculturality","DOI":"10.1145\/3434780.3436541"},{"key":"1148_CR35","doi-asserted-by":"crossref","unstructured":"Schodde T, Hoffmann L, Stange S, Kopp S (2019) Adapt, explain, engage-a study on how social robots can scaffold second-language learning of children. ACM THRI","DOI":"10.1145\/3366422"},{"key":"1148_CR36","doi-asserted-by":"crossref","unstructured":"Gunning D, Aha D (2019) DARPA\u2019s Explainable Artificial Intelligence (XAI) Program. AI Magazine","DOI":"10.1145\/3301275.3308446"},{"key":"1148_CR37","unstructured":"Fox M, Long D, Magazzeni D (2017) Explainable planning. In: IJCAI workshop on explainable planning"},{"key":"1148_CR38","doi-asserted-by":"crossref","unstructured":"Puiutta E, Veith EM (2020) Explainable reinforcement learning: a survey. In: ML and knowledge extraction. Springer","DOI":"10.1007\/978-3-030-57321-8_5"},{"key":"1148_CR39","unstructured":"Brandao M, Mansouri M, Mohammed A, Luff P, Coles A (2022) Explainability in multi-agent path\/motion planning: user-study-driven taxonomy and requirements. 
In: AAMAS"},{"key":"1148_CR40","doi-asserted-by":"publisher","unstructured":"Brand\u00e3o M, Canal G, Krivi\u0107 S, Magazzeni D (2021) Towards providing explanations for robot motion planning. In: ICRA. https:\/\/doi.org\/10.1109\/ICRA48506.2021.9562003","DOI":"10.1109\/ICRA48506.2021.9562003"},{"key":"1148_CR41","doi-asserted-by":"crossref","unstructured":"Buhrmester V, M\u00fcnch D, Arens M (2021) Analysis of explainers of black box deep neural networks for computer vision: a survey. ML and knowledge extraction","DOI":"10.3390\/make3040048"},{"key":"1148_CR42","unstructured":"Tan H (2023) Fractual projection forest: fast and explainable point cloud classifier. In: Winter conference on applications of computer vision"},{"key":"1148_CR43","doi-asserted-by":"crossref","unstructured":"Gao R, Tian T, Lin Z, Wu Y (2021) On explainability and sensor-adaptability of a robot tactile texture representation using a two-stage recurrent networks. In: IROS. IEEE","DOI":"10.1109\/IROS51168.2021.9636380"},{"key":"1148_CR44","doi-asserted-by":"crossref","unstructured":"Antonucci A, Papini GPR, Bevilacqua P, Palopoli L, Fontanelli D (2021) Efficient prediction of human motion for real-time robotics applications with physics-inspired neural networks. IEEE Access","DOI":"10.1109\/ACCESS.2021.3138614"},{"key":"1148_CR45","doi-asserted-by":"crossref","unstructured":"Vice J, Khan MM (2022) Toward accountable and explainable artificial intelligence part two: The framework implementation. IEEE Access","DOI":"10.36227\/techrxiv.19102094"},{"key":"1148_CR46","doi-asserted-by":"crossref","unstructured":"Bharadhwaj H (2018) Layer-wise relevance propagation for explainable deep learning based speech recognition. In: ISSPIT","DOI":"10.1109\/ISSPIT.2018.8642691"},{"key":"1148_CR47","unstructured":"Danilevsky M, Qian K, Aharonov R, Katsis Y, Kawas B, Sen P (2020) A survey of the state of explainable ai for natural language processing. 
In: AACL-IJCNLP"},{"issue":"4","key":"1148_CR48","doi-asserted-by":"publisher","first-page":"242","DOI":"10.1007\/s42979-021-00573-0","volume":"2","author":"T Mota","year":"2021","unstructured":"Mota T, Sridharan M, Leonardis A (2021) Integrated commonsense reasoning and deep learning for transparent decision making in robotics. SN Comput Sci 2(4):242","journal-title":"SN Comput Sci"},{"issue":"4","key":"1148_CR49","doi-asserted-by":"publisher","first-page":"2495","DOI":"10.1109\/TRO.2021.3123840","volume":"38","author":"G Coruhlu","year":"2021","unstructured":"Coruhlu G, Erdem E, Patoglu V (2021) Explainable robotic plan execution monitoring under partial observability. IEEE Trans Robot 38(4):2495\u20132515","journal-title":"IEEE Trans Robot"},{"key":"1148_CR50","doi-asserted-by":"crossref","unstructured":"Alvanpour A, Das SK, Robinson CK, Nasraoui O, Popa D (2020) Robot failure mode prediction with explainable machine learning. In: CASE. IEEE","DOI":"10.1109\/CASE48305.2020.9216965"},{"key":"1148_CR51","doi-asserted-by":"crossref","unstructured":"Kaptein F, Broekens J, Hindriks K, Neerincx M (2019) Evaluating cognitive and affective intelligent agent explanations in a long-term health-support application for children with type 1 diabetes. In: 2019 8th international conference on affective computing and intelligent interaction (ACII). IEEE, pp 1\u20137","DOI":"10.1109\/ACII.2019.8925526"},{"key":"1148_CR52","doi-asserted-by":"crossref","unstructured":"Abdulrahman A, Richards D (2019) Modelling therapeutic alliance using a user-aware explainable embodied conversational agent to promote treatment adherence. In: Proceedings of the 19th ACM international conference on intelligent virtual agents, pp 248\u2013251","DOI":"10.1145\/3308532.3329413"},{"key":"1148_CR53","unstructured":"Pal P, Clark G, Williams T (2021) Givenness hierarchy theoretic referential choice in situated contexts. 
In: Proceedings of the Annual Meeting of the Cognitive Science Society"},{"key":"1148_CR54","doi-asserted-by":"crossref","unstructured":"Kontogiorgos D, van Waveren S, Wallberg O, Pereira A, Leite I, Gustafson J (2020) Embodiment effects in interactions with failing robots. In: Conference on human factors in computing systems","DOI":"10.1145\/3313831.3376372"},{"key":"1148_CR55","doi-asserted-by":"crossref","unstructured":"Wachowiak L, Tisnikar P, Canal G, Coles A, Leonetti M, Celiktutan O (2022) Analysing eye gaze patterns during confusion and errors in human\u2013agent collaborations. In: RO-MAN. IEEE","DOI":"10.1109\/RO-MAN53752.2022.9900589"},{"key":"1148_CR56","doi-asserted-by":"crossref","unstructured":"Mirnig N, Stollnberger G, Miksch M, Stadler S, Giuliani M, Tscheligi M (2017) To err is robot: how humans assess and act toward an erroneous social robot. Front Robot AI","DOI":"10.3389\/frobt.2017.00021"},{"key":"1148_CR57","doi-asserted-by":"crossref","unstructured":"Kim T, Hinds P (2006) Who should I blame? Effects of autonomy and transparency on attributions in human\u2013robot interaction. In: RO-MAN","DOI":"10.1109\/ROMAN.2006.314398"},{"key":"1148_CR58","doi-asserted-by":"crossref","unstructured":"Das D, Banerjee S, Chernova S (2021) Explainable ai for robot failures: generating explanations that improve user assistance in fault recovery. In: HRI","DOI":"10.1145\/3434073.3444657"},{"key":"1148_CR59","doi-asserted-by":"crossref","unstructured":"Sreedharan S, Srivastava S, Smith D, Kambhampati S (2019) Why can\u2019t you do that HAL? Explaining unsolvability of planning tasks. In: IJCAI","DOI":"10.24963\/ijcai.2019\/197"},{"key":"1148_CR60","doi-asserted-by":"crossref","unstructured":"Han Z, Phillips E, Yanco HA (2021) The need for verbal robot explanations and how people would like a robot to explain itself. 
Trans Human-Robot Interact","DOI":"10.1145\/3469652"},{"key":"1148_CR61","doi-asserted-by":"crossref","unstructured":"Molineaux M, Klenk M, Aha D (2010) Goal-driven autonomy in a navy strategy simulation. In: 24th conference on AI","DOI":"10.1609\/aaai.v24i1.7576"},{"key":"1148_CR62","doi-asserted-by":"crossref","unstructured":"Mirnig N, Giuliani M, Stollnberger G, Stadler S, Buchner R, Tscheligi M (2015) Impact of robot actions on social signals and reaction times in HRI error situations. In: Social robotics","DOI":"10.1007\/978-3-319-25554-5_46"},{"key":"1148_CR63","doi-asserted-by":"publisher","DOI":"10.4324\/9780203781036","volume-title":"Scripts, plans, goals, and understanding: an inquiry into human knowledge structures","author":"R Schank","year":"2013","unstructured":"Schank R, Abelson R (2013) Scripts, plans, goals, and understanding: an inquiry into human knowledge structures. Psychology Press, New York"},{"key":"1148_CR64","doi-asserted-by":"crossref","unstructured":"Pelikan H, Hofstetter E (2022) Managing delays in human\u2013robot interaction. ACM Trans Comput\u2013Human Interact","DOI":"10.1145\/3569890"},{"key":"1148_CR65","unstructured":"Rosenfeld A, Kraus S (2016) Strategical argumentative agent for human persuasion. In: 22nd European conference on artificial intelligence"},{"key":"1148_CR66","doi-asserted-by":"crossref","unstructured":"Sreedharan S, Chakraborti T, Kambhampati S (2021) Foundations of explanations as model reconciliation. Artif Intell","DOI":"10.1016\/j.artint.2021.103558"},{"key":"1148_CR67","unstructured":"V\u00e4\u00e4n\u00e4nen K, Pohjola H, Ahtinen H-LA (2019) Exploring the user experience of artificial intelligence applications: user survey and human-ai relationship model. In: CHI\u201919 workshop where is the human? Bridging the gap between AI and HCI"},{"key":"1148_CR68","doi-asserted-by":"crossref","unstructured":"Roque A, Damodaran SK (2022) Explainable ai for security of human\u2013interactive robots. 
Int J Human\u2013Comput Interact 1789\u20131807","DOI":"10.1080\/10447318.2022.2066246"},{"key":"1148_CR69","unstructured":"Brown T, Mann B, Ryder N, Subbiah M, Kaplan JD, Dhariwal P, Neelakantan A, Shyam P, Sastry G, Askell A et al (2020) Language models are few-shot learners. NeurIPS"},{"key":"1148_CR70","doi-asserted-by":"publisher","unstructured":"Wachowiak L, Fenn A, Kamran H, Coles A, Celiktutan O, Canal G (2024) When do people want an explanation from a robot? In: Proceedings of the 2024 ACM\/IEEE international conference on human\u2013robot interaction. HRI \u201924. Association for Computing Machinery, New York, NY, USA, pp 752\u2013761. https:\/\/doi.org\/10.1145\/3610977.3634990","DOI":"10.1145\/3610977.3634990"}],"container-title":["International Journal of Social Robotics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s12369-024-01148-8.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s12369-024-01148-8\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s12369-024-01148-8.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,7,20]],"date-time":"2024-07-20T17:13:16Z","timestamp":1721495596000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s12369-024-01148-8"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,6,5]]},"references-count":70,"journal-issue":{"issue":"7","published-print":{"date-parts":[[2024,7]]}},"alternative-id":["1148"],"URL":"https:\/\/doi.org\/10.1007\/s12369-024-01148-8","relation":{},"ISSN":["1875-4791","1875-4805"],"issn-type":[{"value":"1875-4791","type":"print"},{"value":"1875-4805","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,6,5]]},"assertion":[{"value":"8 May 2024","order":1,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"5 June 2024","order":2,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declaration"}},{"value":"There are no conflicts of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}