{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,5,12]],"date-time":"2026-05-12T16:29:01Z","timestamp":1778603341427,"version":"3.51.4"},"reference-count":90,"publisher":"Springer Science and Business Media LLC","issue":"3","license":[{"start":{"date-parts":[[2022,10,4]],"date-time":"2022-10-04T00:00:00Z","timestamp":1664841600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2022,10,4]],"date-time":"2022-10-04T00:00:00Z","timestamp":1664841600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100014438","name":"Business Finland","doi-asserted-by":"crossref","id":[{"id":"10.13039\/501100014438","id-type":"DOI","asserted-by":"crossref"}]},{"name":"University of Turku (UTU) including Turku University Central Hospital"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["DISO"],"published-print":{"date-parts":[[2022,12]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Artificial intelligence (AI), which refers to both a research field and a set of technologies, is rapidly growing and has already spread to application areas ranging from policing to healthcare and transport. The increasing AI capabilities bring novel risks and potential harms to individuals and societies, which auditing of AI seeks to address. However, traditional periodic or cyclical auditing is challenged by the learning and adaptive nature of AI systems. Meanwhile, continuous auditing (CA) has been discussed since the 1980s but has not been explicitly connected to auditing of AI. In this paper, we connect the research on auditing of AI and CA to introduce CA of AI (CAAI). 
We define CAAI as a (nearly) real-time electronic support system for auditors that continuously and automatically audits an AI system to assess its consistency with relevant norms and standards. We adopt a bottom-up approach and investigate the CAAI tools and methods found in the academic and grey literature. The suitability of tools and methods for CA is assessed based on criteria derived from CA definitions. Our study findings indicate that few existing frameworks are directly suitable for CAAI and that many have limited scope within a particular sector or problem area. Hence, further work on CAAI frameworks is needed, and researchers can draw lessons from existing CA frameworks; however, this requires consideration of the scope of CAAI, the human\u2013machine division of labour, and the emerging institutional landscape in AI governance. Our work also lays the foundation for continued research and practical applications within the field of CAAI.<\/jats:p>","DOI":"10.1007\/s44206-022-00022-2","type":"journal-article","created":{"date-parts":[[2022,10,4]],"date-time":"2022-10-04T20:26:04Z","timestamp":1664915164000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":55,"title":["Continuous Auditing of Artificial Intelligence: a Conceptualization and Assessment of Tools and Frameworks"],"prefix":"10.1007","volume":"1","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-6406-8093","authenticated-orcid":false,"given":"Matti","family":"Minkkinen","sequence":"first","affiliation":[]},{"given":"Joakim","family":"Laine","sequence":"additional","affiliation":[]},{"given":"Matti","family":"M\u00e4ntym\u00e4ki","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2022,10,4]]},"reference":[{"key":"22_CR1","doi-asserted-by":"publisher","unstructured":"AI Ethics Impact Group. (2020). From principles to practice\u2014An interdisciplinary framework to operationalise AI ethics. 
AI Ethics Impact Group, VDE Association for Electrical Electronic & Information Technologies e.V., Bertelsmann Stiftung, 1\u201356. https:\/\/doi.org\/10.11586\/2020013","DOI":"10.11586\/2020013"},{"key":"22_CR2","unstructured":"American Institute of Certified Public Accountants. (1999). Continuous auditing research report. American Institute of Certified Public Accountants."},{"key":"22_CR3","doi-asserted-by":"publisher","unstructured":"Bellamy, R. K. E., Dey, K., Hind, M., Hoffman, S. C., Houde, S., Kannan, K., Lohia, P., Martino, J., Mehta, S., Mojsilovi\u0107, A., Nagar, S., Ramamurthy, K. N., Richards, J., Saha, D., Sattigeri, P., Singh, M., Varshney, K. R., & Zhang, Y. (2019). AI Fairness 360: An extensible toolkit for detecting and mitigating algorithmic bias. IBM Journal of Research and Development, 63(4\/5), 4:1\u20134:15. https:\/\/doi.org\/10.1147\/JRD.2019.2942287","DOI":"10.1147\/JRD.2019.2942287"},{"issue":"1","key":"22_CR4","doi-asserted-by":"publisher","first-page":"49","DOI":"10.1007\/s43681-020-00012-5","volume":"1","author":"R Benjamins","year":"2021","unstructured":"Benjamins, R. (2021). A choices framework for the responsible use of AI. AI and Ethics, 1(1), 49\u201353. https:\/\/doi.org\/10.1007\/s43681-020-00012-5","journal-title":"AI and Ethics"},{"key":"22_CR5","unstructured":"Bird, S., Dud\u00edk, M., Edgar, R., Horn, B., Lutz, R., Milan, V., Sameki, M., Wallach, H., Walker, K., & Design, A. (2020). Fairlearn: A toolkit for assessing and improving fairness in AI. 7."},{"key":"22_CR6","doi-asserted-by":"publisher","unstructured":"Black, E., Yeom, S., & Fredrikson, M. (2020). FlipTest: Fairness testing via optimal transport. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 111\u2013121. 
https:\/\/doi.org\/10.1145\/3351095.3372845","DOI":"10.1145\/3351095.3372845"},{"issue":"1","key":"22_CR7","doi-asserted-by":"publisher","first-page":"205395172098386","DOI":"10.1177\/2053951720983865","volume":"8","author":"S Brown","year":"2021","unstructured":"Brown, S., Davidovic, J., & Hasan, A. (2021). The algorithm audit: Scoring the algorithms that score us. Big Data & Society, 8(1), 2053951720983865. https:\/\/doi.org\/10.1177\/2053951720983865","journal-title":"Big Data & Society"},{"key":"22_CR8","doi-asserted-by":"publisher","unstructured":"Byrnes, P. E., Al-Awadhi, A., Gullvist, B., Brown-Liburd, H., Teeter, R., Warren, J. D., & Vasarhelyi, M. (2018). Evolution of auditing: From the traditional approach to the future audit. In D. Y. Chan, V. Chiu, & M. A. Vasarhelyi (Eds.), Continuous Auditing (pp. 285\u2013297). Emerald Publishing Limited. https:\/\/doi.org\/10.1108\/978-1-78743-413-420181014","DOI":"10.1108\/978-1-78743-413-420181014"},{"key":"22_CR9","doi-asserted-by":"publisher","unstructured":"Cabrera, \u00c1. A., Epperson, W., Hohman, F., Kahng, M., Morgenstern, J., & Chau, D. H. (2019). FairVis: Visual analytics for discovering intersectional bias in machine learning. https:\/\/doi.org\/10.48550\/ARXIV.1904.05419","DOI":"10.48550\/ARXIV.1904.05419"},{"key":"22_CR10","doi-asserted-by":"publisher","unstructured":"Cobbe, J., Lee, M. S. A., & Singh, J. (2021). Reviewable automated decision-making: A framework for accountable algorithmic systems. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 598\u2013609. https:\/\/doi.org\/10.1145\/3442188.3445921","DOI":"10.1145\/3442188.3445921"},{"key":"22_CR11","unstructured":"Coderre, D. (2005). Continuous auditing: Implications for assurance, monitoring, and risk assessment. Global technology audit guide. 
The Institute of Internal Auditors."},{"key":"22_CR12","doi-asserted-by":"publisher","unstructured":"D\u2019Amour, A., Srinivasan, H., Atwood, J., Baljekar, P., Sculley, D., & Halpern, Y. (2020). Fairness is not static: Deeper understanding of long term fairness via simulation studies. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 525\u2013534. https:\/\/doi.org\/10.1145\/3351095.3372878","DOI":"10.1145\/3351095.3372878"},{"key":"22_CR13","unstructured":"Dawson, D., Schleiger, E., Horton, J., McLaughlin, J., Robinson, C., Quezada, G., Scowcroft, J., & Hajkowicz, S. (2019). Artificial Intelligence: Australia\u2019s Ethics Framework. Data61 CSIRO, Australia. Retrieved February 11, 2021, from https:\/\/www.csiro.au\/en\/research\/technology-space\/ai\/AIEthics-Framework"},{"key":"22_CR14","doi-asserted-by":"publisher","unstructured":"Dignum, V. (2020). Responsibility and artificial intelligence. In M. D. Dubber, F. Pasquale, & S. Das (Eds.), The Oxford handbook of ethics of AI (pp. 213\u2013231). Oxford University Press. https:\/\/doi.org\/10.1093\/oxfordhb\/9780190067397.013.12","DOI":"10.1093\/oxfordhb\/9780190067397.013.12"},{"issue":"4","key":"22_CR15","doi-asserted-by":"publisher","first-page":"60","DOI":"10.1109\/MIC.2016.79","volume":"20","author":"D Doneda","year":"2016","unstructured":"Doneda, D., & Almeida, V. A. F. (2016). What is algorithm governance? IEEE Internet Computing, 20(4), 60\u201363. https:\/\/doi.org\/10.1109\/MIC.2016.79","journal-title":"IEEE Internet Computing"},{"key":"22_CR16","doi-asserted-by":"publisher","unstructured":"Drakonakis, K., Ioannidis, S., & Polakis, J. (2020). The Cookie Hunter: Automated black-box auditing for web authentication and authorization flaws. Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security, 1953\u20131970. https:\/\/doi.org\/10.1145\/3372297.3417869","DOI":"10.1145\/3372297.3417869"},{"key":"22_CR17","unstructured":"ECP. (2018). 
Artificial Intelligence Impact Assessment (English version).\u00a0Retrieved February 20, 2021, from https:\/\/ecp.nl\/publicatie\/artificial-intelligence-impactassessment-english-version\/"},{"key":"22_CR18","doi-asserted-by":"publisher","unstructured":"Epstein, Z., Payne, B. H., Shen, J. H., Hong, C. J., Felbo, B., Dubey, A., Groh, M., Obradovich, N., Cebrian, M., & Rahwan, I. (2018). TuringBox: An experimental platform for the evaluation of AI systems. Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, 5826\u20135828. https:\/\/doi.org\/10.24963\/ijcai.2018\/851","DOI":"10.24963\/ijcai.2018\/851"},{"issue":"3","key":"22_CR19","doi-asserted-by":"publisher","first-page":"31","DOI":"10.2308\/isys-51813","volume":"32","author":"M Eulerich","year":"2018","unstructured":"Eulerich, M., & Kalinichenko, A. (2018). The current state and future directions of continuous auditing research: An analysis of the existing literature. Journal of Information Systems, 32(3), 31\u201351. https:\/\/doi.org\/10.2308\/isys-51813","journal-title":"Journal of Information Systems"},{"key":"22_CR20","doi-asserted-by":"publisher","unstructured":"Eulerich, M., Pawlowski, J., Waddoups, N. J., & Wood, D. A. (2022). A framework for using robotic process automation for audit tasks. Contemporary Accounting Research, 39(1), 691\u2013720. https:\/\/doi.org\/10.1111\/1911-3846.12723","DOI":"10.1111\/1911-3846.12723"},{"key":"22_CR21","unstructured":"European Commission. (2021). Proposal for a regulation of the European parliament and of the council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain union legislative acts com\/2021\/206 final. 
Retrieved August 1, 2022, from\u00a0https:\/\/digital-strategy.ec.europa.eu\/en\/library\/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence-artificial-intelligence"},{"issue":"7","key":"22_CR22","doi-asserted-by":"publisher","first-page":"566","DOI":"10.1038\/s42256-021-00370-7","volume":"3","author":"G Falco","year":"2021","unstructured":"Falco, G., Shneiderman, B., Badger, J., Carrier, R., Dahbura, A., Danks, D., Eling, M., Goodloe, A., Gupta, J., Hart, C., Jirotka, M., Johnson, H., LaPointe, C., Llorens, A. J., Mackworth, A. K., Maple, C., P\u00e1lsson, S. E., Pasquale, F., Winfield, A., & Yeong, Z. K. (2021). Governing AI safety through independent audits. Nature Machine Intelligence, 3(7), 566\u2013571. https:\/\/doi.org\/10.1038\/s42256-021-00370-7","journal-title":"Nature Machine Intelligence"},{"key":"22_CR23","unstructured":"Financial Services Agency of Japan. (2021). Principles for model risk management. https:\/\/www.fsa.go.jp\/common\/law\/ginkou\/pdf_03.pdf"},{"issue":"4","key":"22_CR24","doi-asserted-by":"publisher","first-page":"689","DOI":"10.1007\/s11023-018-9482-5","volume":"28","author":"L Floridi","year":"2018","unstructured":"Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena, E. (2018). AI4People\u2014An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689\u2013707. https:\/\/doi.org\/10.1007\/s11023-018-9482-5","journal-title":"Minds and Machines"},{"key":"22_CR25","doi-asserted-by":"publisher","DOI":"10.2139\/ssrn.4064091","author":"L Floridi","year":"2022","unstructured":"Floridi, L., Holweg, M., Taddeo, M., Amaya Silva, J., M\u00f6kander, J., & Wen, Y. (2022). CapAI\u2014A procedure for conducting conformity assessment of AI systems in line with the EU Artificial Intelligence Act (SSRN Scholarly Paper ID 4064091). 
Social Science Research Network. https:\/\/doi.org\/10.2139\/ssrn.4064091","journal-title":"Social Science Research Network"},{"key":"22_CR26","doi-asserted-by":"publisher","unstructured":"Galdon Clavell, G., Mart\u00edn Zamorano, M., Castillo, C., Smith, O., & Matic, A. (2020, February). Auditing algorithms: On lessons learned and the risks of data minimization. In\u00a0Proceedings of the AAAI\/ACM Conference on AI, Ethics, and Society,\u00a0265\u2013271.\u00a0https:\/\/doi.org\/10.1145\/3375627.3375852","DOI":"10.1145\/3375627.3375852"},{"key":"22_CR27","doi-asserted-by":"publisher","unstructured":"Gregor, S., & Hevner, A. R. (2013). Positioning and presenting design science research for maximum impact. MIS Quarterly, 37(2), 337\u2013355. https:\/\/doi.org\/10.25300\/MISQ\/2013\/37.2.01","DOI":"10.25300\/MISQ\/2013\/37.2.01"},{"issue":"2","key":"22_CR28","first-page":"53","volume":"3","author":"SM Groomer","year":"1989","unstructured":"Groomer, S. M., & Murthy, U. S. (1989). Continuous auditing of database applications: An embedded audit module approach. Journal of Information Systems, 3(2), 53.","journal-title":"Journal of Information Systems"},{"key":"22_CR29","unstructured":"High-Level Expert Group on Artificial Intelligence. (2019). Ethics guidelines for trustworthy AI. European Commission. Retrieved September 10, 2020, from\u00a0https:\/\/ec.europa.eu\/newsroom\/dae\/document.cfm?doc_id=60419"},{"key":"22_CR30","unstructured":"Information Commissioner\u2019s Office. (2020). Guidance on the AI auditing framework: Draft guidance for consultation. Retrieved February 11, 2021, from https:\/\/ico.org.uk\/media\/about-the-ico\/consultations\/2617219\/guidance-on-the-ai-auditing-framework-draft-for-consultation.pdf"},{"key":"22_CR31","unstructured":"Institute of Internal Auditors. (2020). The IIA\u2019s three lines model: An update of the three lines of defense. 
Retrieved August 1, 2022, from\u00a0https:\/\/www.theiia.org\/globalassets\/site\/about-us\/advocacy\/three-lines-model-updated.pdf"},{"key":"22_CR32","unstructured":"Institute of Internal Auditors. (2022). About internal audit. Retrieved August 22, 2022, from\u00a0https:\/\/www.theiia.org\/en\/about-us\/about-internal-audit\/"},{"key":"22_CR33","doi-asserted-by":"publisher","unstructured":"Javadi, S. A., Cloete, R., Cobbe, J., Lee, M. S. A., & Singh, J. (2020). Monitoring misuse for accountable \u2018Artificial Intelligence as a Service\u2019. In\u00a0Proceedings of the AAAI\/ACM Conference on AI, Ethics, and Society,\u00a0300\u2013306.\u00a0https:\/\/doi.org\/10.1145\/3375627.3375873","DOI":"10.1145\/3375627.3375873"},{"issue":"1","key":"22_CR34","doi-asserted-by":"publisher","first-page":"15","DOI":"10.1016\/j.bushor.2018.08.004","volume":"62","author":"A Kaplan","year":"2019","unstructured":"Kaplan, A., & Haenlein, M. (2019). Siri, Siri, in my hand: Who\u2019s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Business Horizons, 62(1), 15\u201325. https:\/\/doi.org\/10.1016\/j.bushor.2018.08.004","journal-title":"Business Horizons"},{"key":"22_CR35","doi-asserted-by":"publisher","unstructured":"Katell, M., Young, M., Dailey, D., Herman, B., Guetler, V., Tam, A., Bintz, C., Raz, D., & Krafft, P. M. (2020). Toward situated interventions for algorithmic equity: Lessons from the field. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 45\u201355. https:\/\/doi.org\/10.1145\/3351095.3372874","DOI":"10.1145\/3351095.3372874"},{"key":"22_CR36","unstructured":"Kiesow, A., Zarvic, N., & Thomas, O. (2014). Continuous auditing in big data computing environments: Towards an integrated audit approach by using CAATTs. GI-Jahrestagung."},{"key":"22_CR37","doi-asserted-by":"publisher","unstructured":"Kim, M. P., Ghorbani, A., & Zou, J. (2019). 
Multiaccuracy: Black-box post-processing for fairness in classification. Proceedings of the 2019 AAAI\/ACM Conference on AI, Ethics, and Society, 247\u2013254. https:\/\/doi.org\/10.1145\/3306618.3314287","DOI":"10.1145\/3306618.3314287"},{"key":"22_CR38","doi-asserted-by":"publisher","unstructured":"Kokina, J., & Davenport, T. H. (2017). The emergence of artificial intelligence: How automation is changing auditing. Journal of Emerging Technologies in Accounting, 14(1), 115\u2013122. https:\/\/doi.org\/10.2308\/jeta-51730","DOI":"10.2308\/jeta-51730"},{"key":"22_CR39","doi-asserted-by":"publisher","DOI":"10.2139\/ssrn.3778998","author":"A Koshiyama","year":"2021","unstructured":"Koshiyama, A., Kazim, E., Treleaven, P., Rai, P., Szpruch, L., Pavey, G., Ahamat, G., Leutner, F., Goebel, R., Knight, A., Adams, J., Hitrova, C., Barnett, J., Nachev, P., Barber, D., Chamorro-Premuzic, T., Klemmer, K., Gregorovic, M., Khan, S., & Lomas, E. (2021). Towards algorithm auditing: A survey on managing legal, ethical and technological risks of AI, ML and associated algorithms (SSRN Scholarly Paper ID 3778998). Social Science Research Network. https:\/\/doi.org\/10.2139\/ssrn.3778998","journal-title":"Social Science Research Network"},{"key":"22_CR40","doi-asserted-by":"crossref","unstructured":"Laato, S., Birkstedt, T., M\u00e4ntym\u00e4ki, M., Minkkinen, M., & Mikkonen, T. (2022a). AI governance in the system development life cycle: Insights on responsible machine learning engineering. Proceedings of the 1st Conference on AI Engineering\u2014Software Engineering for AI.","DOI":"10.1145\/3522664.3528598"},{"key":"22_CR41","doi-asserted-by":"crossref","unstructured":"Laato, S., M\u00e4ntym\u00e4ki, M., Minkkinen, M., Birkstedt, T., Islam, A. K. M. N., & Dennehy, D. (2022b). Integrating machine learning with software development lifecycles: Insights from experts. ECIS 2022 Proceedings. 
ECIS, Timi\u0219oara, Romania.","DOI":"10.1145\/3522664.3528598"},{"key":"22_CR42","unstructured":"LaBrie, R., & Steinke, G. (2019). Towards a Framework for Ethical Audits of AI Algorithms. AMCIS 2019 Proceedings. https:\/\/aisel.aisnet.org\/amcis2019\/data_science_analytics_for_decision_support\/data_science_analytics_for_decision_support\/24"},{"key":"22_CR43","doi-asserted-by":"publisher","unstructured":"Lee, M. S. Ah., Floridi, L., & Denev, A. (2020). Innovating with confidence: Embedding AI governance and fairness in a financial services risk management framework. In L. Floridi (Ed.), Ethics, governance, and policies in artificial intelligence (Vol. 144, pp. 353\u2013371). Springer International Publishing. https:\/\/doi.org\/10.1007\/978-3-030-81907-1_20","DOI":"10.1007\/978-3-030-81907-1_20"},{"issue":"2","key":"22_CR44","doi-asserted-by":"publisher","first-page":"304","DOI":"10.1108\/14637151211225216","volume":"18","author":"M Majdalawieh","year":"2012","unstructured":"Majdalawieh, M., Sahraoui, S., & Barkhi, R. (2012). Intra\/inter process continuous auditing (IIPCA), integrating CA within an enterprise system environment. Business Process Management Journal, 18(2), 304\u2013327. https:\/\/doi.org\/10.1108\/14637151211225216","journal-title":"Business Process Management Journal"},{"key":"22_CR45","doi-asserted-by":"publisher","DOI":"10.1007\/s43681-022-00143-x","author":"M M\u00e4ntym\u00e4ki","year":"2022","unstructured":"M\u00e4ntym\u00e4ki, M., Minkkinen, M., Birkstedt, T., & Viljanen, M. (2022a). Defining organizational AI governance. AI and Ethics. https:\/\/doi.org\/10.1007\/s43681-022-00143-x","journal-title":"AI and Ethics"},{"key":"22_CR46","doi-asserted-by":"publisher","unstructured":"M\u00e4ntym\u00e4ki, M., Minkkinen, M., Birkstedt, T., & Viljanen, M. (2022b). Putting AI ethics into practice: The hourglass model of organizational AI governance (arXiv:2206.00335). arXiv. 
https:\/\/doi.org\/10.48550\/arXiv.2206.00335","DOI":"10.48550\/arXiv.2206.00335"},{"key":"22_CR47","doi-asserted-by":"publisher","unstructured":"Marques, R. P., & Santos, C. (2017). Research on continuous auditing: A bibliometric analysis. 2017 12th Iberian Conference on Information Systems and Technologies (CISTI), 1\u20134. https:\/\/doi.org\/10.23919\/CISTI.2017.7976048","DOI":"10.23919\/CISTI.2017.7976048"},{"issue":"4","key":"22_CR48","doi-asserted-by":"publisher","first-page":"835","DOI":"10.1007\/s10551-018-3921-3","volume":"160","author":"K Martin","year":"2019","unstructured":"Martin, K. (2019). Ethical implications and accountability of algorithms. Journal of Business Ethics, 160(4), 835\u2013850. https:\/\/doi.org\/10.1007\/s10551-018-3921-3","journal-title":"Journal of Business Ethics"},{"key":"22_CR49","doi-asserted-by":"publisher","unstructured":"Metcalf, J., Moss, E., Watkins, E. A., Singh, R., & Elish, M. C. (2021). Algorithmic impact assessments and accountability: The co-construction of impacts. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 735\u2013746. https:\/\/doi.org\/10.1145\/3442188.3445935","DOI":"10.1145\/3442188.3445935"},{"key":"22_CR50","doi-asserted-by":"publisher","DOI":"10.1007\/s00146-022-01415-0","volume-title":"What about investors?","author":"M Minkkinen","year":"2022","unstructured":"Minkkinen, M., Niukkanen, A., & M\u00e4ntym\u00e4ki, M. (2022a). What about investors? AI & SOCIETY. https:\/\/doi.org\/10.1007\/s00146-022-01415-0"},{"key":"22_CR51","doi-asserted-by":"publisher","DOI":"10.1007\/s10796-022-10269-2","author":"M Minkkinen","year":"2022","unstructured":"Minkkinen, M., Zimmer, M. P., & M\u00e4ntym\u00e4ki, M. (2022b). Co-shaping an ecosystem for responsible AI: Five types of expectation work in response to a technological frame. Information Systems Frontiers. 
https:\/\/doi.org\/10.1007\/s10796-022-10269-2","journal-title":"Information Systems Frontiers"},{"key":"22_CR52","doi-asserted-by":"publisher","first-page":"241","DOI":"10.1007\/s11023-021-09577-4","volume":"32","author":"J M\u00f6kander","year":"2022","unstructured":"M\u00f6kander, J., Axente, M., Casolari, F., & Floridi, L. (2022). Conformity assessments and post-market monitoring: A guide to the role of auditing in the proposed European AI regulation. Minds and Machines, 32, 241\u2013268. https:\/\/doi.org\/10.1007\/s11023-021-09577-4","journal-title":"Minds and Machines"},{"issue":"4","key":"22_CR53","doi-asserted-by":"publisher","first-page":"44","DOI":"10.1007\/s11948-021-00319-4","volume":"27","author":"J M\u00f6kander","year":"2021","unstructured":"M\u00f6kander, J., Morley, J., Taddeo, M., & Floridi, L. (2021). Ethics-based auditing of automated decision-making systems: Nature, scope, and limitations. Science and Engineering Ethics, 27(4), 44. https:\/\/doi.org\/10.1007\/s11948-021-00319-4","journal-title":"Science and Engineering Ethics"},{"key":"22_CR54","doi-asserted-by":"publisher","DOI":"10.1007\/s00146-021-01285-y","author":"I Nandutu","year":"2021","unstructured":"Nandutu, I., Atemkeng, M., & Okouma, P. (2021). Integrating AI ethics in wildlife conservation AI systems in South Africa: A review, challenges, and future research agenda. AI & SOCIETY. https:\/\/doi.org\/10.1007\/s00146-021-01285-y","journal-title":"AI & SOCIETY"},{"issue":"12","key":"22_CR55","doi-asserted-by":"publisher","first-page":"105","DOI":"10.1007\/s10916-021-01783-y","volume":"45","author":"L Oala","year":"2021","unstructured":"Oala, L., Murchison, A. G., Balachandran, P., Choudhary, S., Fehr, J., Leite, A. W., Goldschmidt, P. G., Johner, C., Sch\u00f6rverth, E. D. M., Nakasi, R., Meyer, M., Cabitza, F., Baird, P., Prabhu, C., Weicken, E., Liu, X., Wenzel, M., Vogler, S., Akogo, D., & Wiegand, T. (2021). Machine learning for health: Algorithm auditing & quality control. 
Journal of Medical Systems, 45(12), 105. https:\/\/doi.org\/10.1007\/s10916-021-01783-y","journal-title":"Journal of Medical Systems"},{"issue":"5","key":"22_CR56","doi-asserted-by":"publisher","DOI":"10.1016\/j.ipm.2021.102657","volume":"58","author":"C Panigutti","year":"2021","unstructured":"Panigutti, C., Perotti, A., Panisson, A., Bajardi, P., & Pedreschi, D. (2021). FairLens: Auditing black-box clinical decision support systems. Information Processing & Management, 58(5), 102657. https:\/\/doi.org\/10.1016\/j.ipm.2021.102657","journal-title":"Information Processing & Management"},{"key":"22_CR57","doi-asserted-by":"crossref","unstructured":"Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.","DOI":"10.4159\/harvard.9780674736061"},{"key":"22_CR58","doi-asserted-by":"publisher","unstructured":"Pasquier, T. F. J.-M., Singh, J., Bacon, J., & Eyers, D. (2016). Information flow audit for PaaS clouds. 2016 IEEE International Conference on Cloud Engineering (IC2E), 42\u201351. https:\/\/doi.org\/10.1109\/IC2E.2016.19","DOI":"10.1109\/IC2E.2016.19"},{"key":"22_CR59","unstructured":"PDPC. (2020). PDPC Model AI Governance Framework, Second Edition. Retrieved February 11, 2021, from https:\/\/iapp.org\/resources\/article\/pdpc-model-ai-governance-framework-second-edition\/"},{"key":"22_CR60","unstructured":"PwC. (2019). Responsible AI Toolkit. Retrieved August 1, 2022, from https:\/\/www.pwc.com\/gx\/en\/issues\/data-and-analytics\/artificial-intelligence\/what-isresponsible-ai.html"},{"key":"22_CR61","doi-asserted-by":"publisher","unstructured":"Raji, I. D., Gebru, T., Mitchell, M., Buolamwini, J., Lee, J., & Denton, E. (2020). Saving face: Investigating the ethical concerns of facial recognition auditing. 
In\u00a0Proceedings of the AAAI\/ACM Conference on AI, Ethics, and Society,\u00a0145\u2013151.\u00a0https:\/\/doi.org\/10.1145\/3375627.3375820","DOI":"10.1145\/3375627.3375820"},{"key":"22_CR62","doi-asserted-by":"publisher","unstructured":"Raji, I. D., Smart, A., White, R. N., Mitchell, M., Gebru, T., Hutchinson, B., Smith-Loud, J., Theron, D., & Barnes, P. (2020b). Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 33\u201344. https:\/\/doi.org\/10.1145\/3351095.3372873","DOI":"10.1145\/3351095.3372873"},{"key":"22_CR63","unstructured":"Reisman, D., Schultz, J., Crawford, K., & Whittaker, M. (2018). Algorithmic impact assessments: A practical framework for public agency accountability. AI Now. Retrieved August 22, 2022, from\u00a0http:\/\/www.tandfonline.com\/doi\/abs\/10.1080\/07349165.1995.9726076"},{"issue":"3","key":"22_CR64","doi-asserted-by":"publisher","first-page":"375","DOI":"10.1177\/2032284420948161","volume":"11","author":"IN Rezende","year":"2020","unstructured":"Rezende, I. N. (2020). Facial recognition in police hands: Assessing the \u2018Clearview case\u2019 from a European perspective. New Journal of European Criminal Law, 11(3), 375\u2013389.","journal-title":"New Journal of European Criminal Law"},{"key":"22_CR65","unstructured":"Russell, S. J., & Norvig, P. (2021). Artificial intelligence: A modern approach (4th ed.). Pearson."},{"key":"22_CR66","doi-asserted-by":"publisher","unstructured":"Saleiro, P., Kuester, B., Hinkson, L., London, J., Stevens, A., Anisfeld, A., Rodolfa, K. T., & Ghani, R. (2018). Aequitas: A bias and fairness audit toolkit. https:\/\/doi.org\/10.48550\/ARXIV.1811.05577","DOI":"10.48550\/ARXIV.1811.05577"},{"key":"22_CR67","unstructured":"Sandvig, C., Hamilton, K., Karahalios, K., & Langbort, C. (2014). Auditing algorithms: Research methods for detecting discrimination on internet platforms. 
Data and discrimination: Converting critical concerns into productive inquiry: A preconference at the 64th Annual Meeting of the International Communication Association."},{"key":"22_CR68","doi-asserted-by":"publisher","unstructured":"Sapiezynski, P., Zeng, W., E Robertson, R., Mislove, A., & Wilson, C. (2019). Quantifying the impact of user attention on fair group representation in ranked lists. Companion proceedings of the 2019 World Wide Web Conference, 553\u2013562. https:\/\/doi.org\/10.1145\/3308560.3317595","DOI":"10.1145\/3308560.3317595"},{"key":"22_CR69","doi-asserted-by":"publisher","DOI":"10.1080\/10580530.2022.2085825","author":"J Schneider","year":"2022","unstructured":"Schneider, J., Abraham, R., Meske, C., & Vom Brocke, J. (2022). Artificial intelligence governance for businesses. Information Systems Management. https:\/\/doi.org\/10.1080\/10580530.2022.2085825","journal-title":"Information Systems Management"},{"key":"22_CR70","unstructured":"Sepp\u00e4l\u00e4, A., Birkstedt, T., & M\u00e4ntym\u00e4ki, M. (2021). From ethical AI principles to governed AI. Proceedings of the 42nd International Conference on Information Systems (ICIS2021). International Conference on Information Systems (ICIS), Austin, Texas. Retrieved March 3, 2022, from\u00a0https:\/\/aisel.aisnet.org\/icis2021\/ai_business\/ai_business\/10\/"},{"issue":"2128","key":"22_CR71","doi-asserted-by":"publisher","first-page":"20170362","DOI":"10.1098\/rsta.2017.0362","volume":"376","author":"H Shah","year":"2018","unstructured":"Shah, H. (2018). Algorithmic accountability. Philosophical Transactions of the Royal Society a: Mathematical, Physical and Engineering Sciences, 376(2128), 20170362. https:\/\/doi.org\/10.1098\/rsta.2017.0362","journal-title":"Philosophical Transactions of the Royal Society a: Mathematical, Physical and Engineering Sciences"},{"key":"22_CR72","doi-asserted-by":"publisher","unstructured":"Sharma, S., Henderson, J., & Ghosh, J. (2019). 
CERTIFAI: Counterfactual Explanations for Robustness, Transparency, Interpretability, and Fairness of Artificial Intelligence models. https:\/\/doi.org\/10.48550\/ARXIV.1905.07857","DOI":"10.48550\/ARXIV.1905.07857"},{"issue":"5\u20136","key":"22_CR73","doi-asserted-by":"publisher","first-page":"269","DOI":"10.1002\/mcda.1758","volume":"28","author":"W Shiue","year":"2021","unstructured":"Shiue, W., Liu, J. Y., & Li, Z. Y. (2021). Strategic multiple criteria group decision-making model for continuous auditing system. Journal of Multi-Criteria Decision Analysis, 28(5\u20136), 269\u2013282. https:\/\/doi.org\/10.1002\/mcda.1758","journal-title":"Journal of Multi-Criteria Decision Analysis"},{"issue":"4","key":"22_CR74","doi-asserted-by":"publisher","first-page":"26","DOI":"10.1145\/3419764","volume":"10","author":"B Shneiderman","year":"2020","unstructured":"Shneiderman, B. (2020). Bridging the gap between ethics and practice: Guidelines for reliable, safe, and trustworthy human-centered AI systems. ACM Transactions on Interactive Intelligent Systems, 10(4), 26. https:\/\/doi.org\/10.1145\/3419764","journal-title":"ACM Transactions on Interactive Intelligent Systems"},{"key":"22_CR75","unstructured":"Smart Dubai. (2019). AI ethics principles and guidelines.\u00a0Retrieved August 1, 2022, from https:\/\/www.digitaldubai.ae\/docs\/default-source\/ai-principlesresources\/ai-ethics.pdf"},{"key":"22_CR76","doi-asserted-by":"publisher","unstructured":"Stilgoe, J. (2018). Machine learning, social learning and the governance of self-driving cars. Social Studies of Science, 48(1), 25\u201356. https:\/\/doi.org\/10.1177\/0306312717741687","DOI":"10.1177\/0306312717741687"},{"key":"22_CR77","unstructured":"Stix, C. (forthcoming). The ghost of AI governance past, present and future: AI governance in the European Union. In J. Bullock & V. Hudson (Eds.), Oxford University Press handbook on AI governance. 
Oxford University Press."},{"key":"22_CR78","doi-asserted-by":"publisher","unstructured":"Sulaimon, I. A., Ghoneim, A., & Alrashoud, M. (2019). A new reinforcement learning-based framework for unbiased autonomous software systems. 2019 8th International Conference on Modeling Simulation and Applied Optimization (ICMSAO), 1\u20136. https:\/\/doi.org\/10.1109\/ICMSAO.2019.8880288","DOI":"10.1109\/ICMSAO.2019.8880288"},{"key":"22_CR79","doi-asserted-by":"publisher","unstructured":"Sutton, A., & Samavi, R. (2018). Tamper-proof privacy auditing for artificial intelligence systems. Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, 5374\u20135378. https:\/\/doi.org\/10.24963\/ijcai.2018\/756","DOI":"10.24963\/ijcai.2018\/756"},{"key":"22_CR80","unstructured":"Tewari, G. (2022). Council post: The future of AI: 5 things to expect in the next 10 years. Forbes. Retrieved August 11, 2022, from\u00a0https:\/\/www.forbes.com\/sites\/forbesbusinesscouncil\/2022\/05\/05\/the-future-of-ai-5-things-to-expect-in-the-next-10-years\/"},{"key":"22_CR81","doi-asserted-by":"publisher","unstructured":"Thangavel, M., & Varalakshmi, P. (2020). Enabling Ternary Hash Tree Based Integrity Verification for Secure Cloud Data Storage. IEEE Transactions on Knowledge and Data Engineering, 32(12), 2351\u20132362. https:\/\/doi.org\/10.1109\/TKDE.2019.2922357","DOI":"10.1109\/TKDE.2019.2922357"},{"key":"22_CR82","doi-asserted-by":"publisher","DOI":"10.1007\/s10796-021-10146-4","author":"C Trocin","year":"2021","unstructured":"Trocin, C., Mikalef, P., Papamitsiou, Z., & Conboy, K. (2021). Responsible AI for digital health: A synthesis and a research agenda. Information Systems Frontiers. https:\/\/doi.org\/10.1007\/s10796-021-10146-4","journal-title":"Information Systems Frontiers"},{"key":"22_CR83","doi-asserted-by":"publisher","unstructured":"Tronto, S., & Killingsworth, B. L. (2021). 
How internal audit can champion continuous monitoring in a business operation via visual reporting and overcome barriers to success.\u00a0The International Journal of Digital Accounting Research,\u00a021(27), 23-59.\u00a0https:\/\/doi.org\/10.4192\/1577-8517-v21_2","DOI":"10.4192\/1577-8517-v21_2"},{"key":"22_CR84","unstructured":"Vasarhelyi, M. A., & Halper, F. (1991). The continuous audit of online systems. Auditing: A Journal of Practice & Theory, 10(1)."},{"issue":"3","key":"22_CR85","doi-asserted-by":"publisher","first-page":"23","DOI":"10.3390\/bdcc4030023","volume":"4","author":"K Wang","year":"2020","unstructured":"Wang, K., Zipperle, M., Becherer, M., Gottwalt, F., & Zhang, Y. (2020). An AI-based automated continuous compliance awareness framework (CoCAF) for procurement auditing. Big Data and Cognitive Computing, 4(3), 23. https:\/\/doi.org\/10.3390\/bdcc4030023","journal-title":"Big Data and Cognitive Computing"},{"key":"22_CR86","unstructured":"WEF (World Economic Forum). (2020). A Framework for Responsible Limits on Facial Recognition Use Case: Flow Management. Retrieved February 20, 2021, from http:\/\/www3.weforum.org\/docs\/WEF_Framework_for_action_Facial_recognition_2020.pdf"},{"key":"22_CR87","doi-asserted-by":"publisher","unstructured":"Wexler, J., Pushkarna, M., Bolukbasi, T., Wattenberg, M., Vi\u00e9gas, F., & Wilson, J. (2020). The What-If Tool: Interactive Probing of Machine Learning Models. IEEE Transactions on Visualization and Computer Graphics, 26(1), 56\u201365. https:\/\/doi.org\/10.1109\/TVCG.2019.2934619","DOI":"10.1109\/TVCG.2019.2934619"},{"key":"22_CR88","doi-asserted-by":"publisher","unstructured":"Yeung, K., Howes, A., & Pogrebna, G. (2020). AI governance by human rights-centered design, deliberation, and oversight: An end to ethics washing. In M. D. Dubber, F. Pasquale, & S. Das (Eds.), The Oxford handbook of ethics of AI (pp. 75\u2013106). Oxford University Press. 
https:\/\/doi.org\/10.1093\/oxfordhb\/9780190067397.013.5","DOI":"10.1093\/oxfordhb\/9780190067397.013.5"},{"key":"22_CR89","doi-asserted-by":"publisher","unstructured":"Yoon, K., Liu, Y., Chiu, T., & Vasarhelyi, M. A. (2021). Design and evaluation of an advanced continuous data level auditing system: A three-layer structure.\u00a0International Journal of Accounting Information Systems,\u00a042, 100524.\u00a0https:\/\/doi.org\/10.1016\/j.accinf.2021.100524","DOI":"10.1016\/j.accinf.2021.100524"},{"issue":"2","key":"22_CR90","doi-asserted-by":"publisher","first-page":"83","DOI":"10.1109\/TTS.2021.3066209","volume":"2","author":"RV Zicari","year":"2021","unstructured":"Zicari, R. V., Brodersen, J., Brusseau, J., Dudder, B., Eichhorn, T., Ivanov, T., Kararigas, G., Kringen, P., McCullough, M., Moslein, F., Mushtaq, N., Roig, G., Sturtz, N., Tolle, K., Tithi, J. J., van Halem, I., & Westerlund, M. (2021). Z-Inspection: A process to assess trustworthy AI. IEEE Transactions on Technology and Society, 2(2), 83\u201397. 
https:\/\/doi.org\/10.1109\/TTS.2021.3066209","journal-title":"IEEE Transactions on Technology and Society"}],"container-title":["Digital Society"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s44206-022-00022-2.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s44206-022-00022-2\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s44206-022-00022-2.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2022,12,20]],"date-time":"2022-12-20T12:06:02Z","timestamp":1671537962000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s44206-022-00022-2"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,10,4]]},"references-count":90,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2022,12]]}},"alternative-id":["22"],"URL":"https:\/\/doi.org\/10.1007\/s44206-022-00022-2","relation":{},"ISSN":["2731-4650","2731-4669"],"issn-type":[{"value":"2731-4650","type":"print"},{"value":"2731-4669","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,10,4]]},"assertion":[{"value":"24 March 2022","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"15 September 2022","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"4 October 2022","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare no competing 
interests.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of Interest"}}],"article-number":"21"}}