{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,5,6]],"date-time":"2026-05-06T15:28:52Z","timestamp":1778081332227,"version":"3.51.4"},"reference-count":37,"publisher":"Association for Computing Machinery (ACM)","issue":"1","license":[{"start":{"date-parts":[[2025,2,13]],"date-time":"2025-02-13T00:00:00Z","timestamp":1739404800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["Digit. Gov.: Res. Pract."],"published-print":{"date-parts":[[2025,3,31]]},"abstract":"<jats:p>Much like cars, AI technologies must undergo rigorous testing to ensure their safety and reliability. However, just as a 16-wheel truck\u2019s brakes are different from that of a standard hatchback, AI models too may need distinct analyses based on their risk, size, application domain, and other factors. Prior research has attempted to do this, by identifying areas of concern for AI\/ML applications and tools needed to simulate the effect of adversarial actors. However, currently, a variety of frameworks exist which poses challenges due to inconsistent terminology, focus, complexity, and interoperability issues, hindering effective threat discovery. In this article, we present a meta-analysis of 14 AI threat modeling frameworks, providing a streamlined set of questions for AI\/ML threat analysis. We then review this library, incorporating feedback from 10 experts to refine the questions. 
This refined set of questions allows practitioners to seamlessly integrate threat analysis for comprehensive manual evaluation of a wide range of AI\/ML applications.<\/jats:p>","DOI":"10.1145\/3674845","type":"journal-article","created":{"date-parts":[[2024,6,26]],"date-time":"2024-06-26T11:25:13Z","timestamp":1719401113000},"page":"1-18","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":7,"title":["Building Guardrails in AI Systems with Threat Modeling"],"prefix":"10.1145","volume":"6","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-2673-5964","authenticated-orcid":false,"given":"Jayati","family":"Dev","sequence":"first","affiliation":[{"name":"Comcast Corporation, Philadelphia, United States"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4710-1939","authenticated-orcid":false,"given":"Nuray Baltaci","family":"Akhuseyinoglu","sequence":"additional","affiliation":[{"name":"Comcast Corporation, Philadelphia, United States"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7186-3442","authenticated-orcid":false,"given":"Golam","family":"Kayas","sequence":"additional","affiliation":[{"name":"Comcast Corporation, Philadelphia, United States"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8817-0757","authenticated-orcid":false,"given":"Bahman","family":"Rashidi","sequence":"additional","affiliation":[{"name":"Comcast Corporation, Philadelphia, United States"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1796-0177","authenticated-orcid":false,"given":"Vaibhav","family":"Garg","sequence":"additional","affiliation":[{"name":"Comcast Corporation, Philadelphia, United States"}]}],"member":"320","published-online":{"date-parts":[[2025,2,13]]},"reference":[{"key":"e_1_3_5_2_2","doi-asserted-by":"crossref","unstructured":"Tegjyot Singh Sethi and Mehmed Kantardzic. 2018. Data driven exploratory attacks on black box classifiers in adversarial domains. Neurocomput. 
289 C (May 2018) 129\u2013143.","DOI":"10.1016\/j.neucom.2018.02.007"},{"key":"e_1_3_5_3_2","unstructured":"2020. NVD - CVE-2019-20634. Retrieved January 13 2024 from https:\/\/nvd.nist.gov\/vuln\/detail\/CVE-2019-20634"},{"key":"e_1_3_5_4_2","unstructured":"2022. OECD AI Policy Observatory Portal. Retrieved January 13 2024 from https:\/\/oecd.ai\/en\/catalogue\/tools\/ai-vulnerability-database"},{"key":"e_1_3_5_5_2","doi-asserted-by":"crossref","first-page":"4226","DOI":"10.1109\/BigData55660.2022.10020368","volume-title":"Proceedings of the 2022 IEEE International Conference on Big Data","author":"Alatwi Huda Ali","year":"2022","unstructured":"Huda Ali Alatwi and Charles Morisset. 2022. Threat modeling for machine learning-based network intrusion detection systems. In Proceedings of the 2022 IEEE International Conference on Big Data. IEEE, 4226\u20134235."},{"key":"e_1_3_5_6_2","article-title":"Understanding AI technology","author":"Allen Greg","year":"2020","unstructured":"Greg Allen. 2020. Understanding AI technology. Joint Artificial Intelligence Center (JAIC) The Pentagon United States (2020).","journal-title":"Joint Artificial Intelligence Center (JAIC) The Pentagon United States"},{"key":"e_1_3_5_7_2","doi-asserted-by":"crossref","unstructured":"Saleema Amershi Dan Weld Mihaela Vorvoreanu Adam Fourney Besmira Nushi Penny Collisson Jina Suh Shamsi Iqbal Paul N. Bennett Kori Inkpen Jaime Teevan Ruth Kikin-Gil and Eric Horvitz. 2019. Guidelines for Human-AI Interaction. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI\u201919). Association for Computing Machinery New York NY USA Paper 3 1\u201313.","DOI":"10.1145\/3290605.3300233"},{"key":"e_1_3_5_8_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-58592-1_29"},{"key":"e_1_3_5_9_2","unstructured":"Ruby Annette Aisha Banu Sharon Priya S Subash Chandran. 2023. Taxonomy of AISecOps threat modeling for cloud based medical chatbots. 
Retrieved January 13 2024 from https:\/\/arxiv.org\/abs\/2305.11189"},{"key":"e_1_3_5_10_2","unstructured":"Isabel Barber\u00e1. 2020. Privacy Library Of Threats for AI (PLOT4AI). Retrieved January 13 2024 from https:\/\/plot4.ai\/"},{"key":"e_1_3_5_11_2","unstructured":"P. Bezombes S. Brunessaux and S. Cadzow. 2023. Cybersecurity of AI and Standardisation. Retrieved January 13 2024 from https:\/\/www.enisa.europa.eu\/publications\/cybersecurity-of-ai-and-standardisation"},{"key":"e_1_3_5_12_2","volume-title":"Threat Modeling Manifesto","author":"Braiterman Zoe","year":"2021","unstructured":"Zoe Braiterman, Adam Shostack, Jonathan Marcil, Stephen de Vries, Irene Michlin, Kim Wuyts, Robert Hurlbut, Brook S. E. Schoenfield, Fraser Scott, Matthew Coles, Chris Romeo, Alyssa Miller, Izar Tarandach, Avi Douglen, and Marc French. 2021. Threat Modeling Manifesto. Retrieved from http:\/\/www.threatmodelingmanifesto.org\/"},{"key":"e_1_3_5_13_2","doi-asserted-by":"publisher","DOI":"10.3386\/w31161"},{"key":"e_1_3_5_14_2","unstructured":"Yihan Cao Siyu Li Yixin Liu Zhiling Yan Yutong Dai Philip S. Yu and Lichao Sun. 2023. A comprehensive survey of ai-generated content (aigc): A history of generative ai from gan to chatgpt. arXiv:2303.04226. Retrieved from https:\/\/arxiv.org\/abs\/2303.04226"},{"key":"e_1_3_5_15_2","doi-asserted-by":"publisher","DOI":"10.1145\/3128572.3140448"},{"key":"e_1_3_5_16_2","volume-title":"MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems)","author":"Corporation MITRE","year":"2020","unstructured":"MITRE Corporation. 2020. MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems). Retrieved January 13, 2024 from https:\/\/atlas.mitre.org\/"},{"key":"e_1_3_5_17_2","unstructured":"Lauren Feiner. 2023. OpenAI faces complaint to FTC that seeks investigation and suspension of ChatGPT releases. 
Retrieved January 13 2024 from https:\/\/www.cnbc.com\/2023\/03\/30\/openai-faces-complaint-to-ftc-that-seeks-suspension-of-chatgpt-updates.html"},{"key":"e_1_3_5_18_2","doi-asserted-by":"publisher","DOI":"10.1145\/3458723"},{"key":"e_1_3_5_19_2","doi-asserted-by":"crossref","unstructured":"Subhadip Ghosh Aydin Zaboli Junho Hong and Jaerock Kwon. 2023. An integrated approach of threat analysis for autonomous vehicles perception system. IEEE Access 11 (2023) 14752\u201314777.","DOI":"10.1109\/ACCESS.2023.3243906"},{"key":"e_1_3_5_20_2","unstructured":"Roberto Gozalo-Brizuela and Eduardo C. Garrido-Merchan. 2023. ChatGPT is not all you need. A state of the art review of large generative AI models. arXiv:2301.04655. Retrieved from https:\/\/arxiv.org\/abs\/2301.04655"},{"key":"e_1_3_5_21_2","doi-asserted-by":"publisher","DOI":"10.1002\/cl2.1230"},{"key":"e_1_3_5_22_2","doi-asserted-by":"publisher","DOI":"10.5555\/1202957"},{"key":"e_1_3_5_23_2","volume-title":"Securing Artificial Intelligence (SAI); Problem Statement","author":"(ISG) Secure AI (SAI) ETSI Industry Specification Group","year":"2020","unstructured":"Secure AI (SAI) ETSI Industry Specification Group (ISG). 2020. Securing Artificial Intelligence (SAI); Problem Statement. Retrieved January 13, 2024 from https:\/\/www.etsi.org\/deliver\/etsi_gr\/SAI\/001_099\/004\/01.01.01_60\/gr_SAI004v010101p.pdf"},{"key":"e_1_3_5_24_2","first-page":"293","volume-title":"Proceedings of the International Conference on Intelligent Information Technologies for Industry","author":"Kotenko Igor","year":"2022","unstructured":"Igor Kotenko, Igor Saenko, Oleg Lauta, Nikita Vasiliev, and Ksenia Kribel. 2022. Attacks against artificial intelligence systems: Classification, the threat model and the approach to protection. In Proceedings of the International Conference on Intelligent Information Technologies for Industry. 
Springer, 293\u2013302."},{"key":"e_1_3_5_25_2","volume-title":"Not with a Bug, But with a Sticker: Attacks on Machine Learning Systems and What To Do About Them","author":"Kumar Ram Shankar Siva","year":"2023","unstructured":"Ram Shankar Siva Kumar and Hyrum Anderson. 2023. Not with a Bug, But with a Sticker: Attacks on Machine Learning Systems and What To Do About Them. John Wiley and Sons."},{"key":"e_1_3_5_26_2","unstructured":"Ram Shankar Siva Kumar David O Brien Kendra Albert Salom\u00e9 Vilj\u00f6en and Jeffrey Snover. 2019. Failure Modes in Machine Learning Systems. arXiv:1911.11034. Retrieved from https:\/\/arxiv.org\/abs\/1911.11034"},{"key":"e_1_3_5_27_2","doi-asserted-by":"crossref","first-page":"69","DOI":"10.1109\/SPW50608.2020.00028","volume-title":"Proceedings of the 2020 IEEE Security and Privacy Workshops","author":"Kumar Ram Shankar Siva","year":"2020","unstructured":"Ram Shankar Siva Kumar, Magnus Nystr\u00f6m, John Lambert, Andrew Marshall, Mario Goertzel, Andi Comissoneru, Matt Swann, and Sharon Xia. 2020. Adversarial machine learning-industry perspectives. In Proceedings of the 2020 IEEE Security and Privacy Workshops. IEEE, 69\u201375."},{"key":"e_1_3_5_28_2","unstructured":"Avivah Litan. 2021. Use Gartner\u2019s MOST Framework for AI Trust and Risk Management. Retrieved January 13 2024 from https:\/\/www.gartner.com\/en\/documents\/4001144"},{"key":"e_1_3_5_29_2","unstructured":"Andrew Marshall Jugal Parikh E. Kiciman and R. S. S. Kumar. 2019. Threat modeling AI\/ML systems and dependencies. Security and Documentation. (2019). Retrieved January 13 2024 from https:\/\/learn.microsoft.com\/en-us\/security\/engineering\/threat-modeling-aiml"},{"key":"e_1_3_5_30_2","doi-asserted-by":"publisher","DOI":"10.3390\/s22176662"},{"key":"e_1_3_5_31_2","first-page":"2133","volume-title":"Proceedings of the 32nd USENIX Security Symposium","author":"Niu Liang","year":"2023","unstructured":"Liang Niu, Shujaat Mirza, Zayd Maradni, and Christina P\u00f6pper. 
2023. CodexLeaks: Privacy leaks from code generation language models in GitHub copilot. In Proceedings of the 32nd USENIX Security Symposium. USENIX Association, Anaheim, CA, 2133\u20132150. Retrieved from https:\/\/www.usenix.org\/conference\/usenixsecurity23\/presentation\/niu"},{"key":"e_1_3_5_32_2","doi-asserted-by":"crossref","DOI":"10.6028\/NIST.AI.100-2e2023.ipd","volume-title":"Adversarial machine learning: A taxonomy and terminology of attacks and mitigations","author":"Oprea Alina","year":"2023","unstructured":"Alina Oprea and Apostol Vassilev. 2023. Adversarial machine learning: A taxonomy and terminology of attacks and mitigations. Technical Report. National Institute of Standards and Technology."},{"key":"e_1_3_5_33_2","article-title":"EU \u201cin touching distance\u201d of world\u2019s first laws regulating artificial intelligence","author":"O\u2019Carroll Lisa","year":"2023","unstructured":"Lisa O\u2019Carroll. 2023. EU \u201cin touching distance\u201d of world\u2019s first laws regulating artificial intelligence. The Guardian (Oct2023). Retrieved January 13, 2024 from https:\/\/www.theguardian.com\/technology\/2023\/oct\/24\/eu-touching-distance-world-first-law-regulating-artificial-intelligence-dragos-tudorache","journal-title":"The Guardian"},{"key":"e_1_3_5_34_2","first-page":"8446","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"Rahmati Ali","year":"2020","unstructured":"Ali Rahmati, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard, and Huaiyu Dai. 2020. Geoda: A geometric framework for black-box adversarial attacks. In Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition. 8446\u20138455."},{"key":"e_1_3_5_35_2","article-title":"Governments race to regulate AI tools","year":"2023","unstructured":"Reuters. 2023. Governments race to regulate AI tools. Reuters (Oct2023). 
Retrieved January 13, 2024 from https:\/\/www.reuters.com\/technology\/governments-race-regulate-ai-tools-2023-10-13\/","journal-title":"Reuters"},{"issue":"2","key":"e_1_3_5_36_2","doi-asserted-by":"crossref","first-page":"158","DOI":"10.1007\/s42979-022-01043-x","article-title":"Ai-based modeling: Techniques, applications and research issues towards automation, intelligent and smart systems","volume":"3","author":"Sarker Iqbal H.","year":"2022","unstructured":"Iqbal H. Sarker. 2022. Ai-based modeling: Techniques, applications and research issues towards automation, intelligent and smart systems. SN Computer Science 3, 2 (2022), 158.","journal-title":"SN Computer Science"},{"key":"e_1_3_5_37_2","volume-title":"Proceedings of the Workshop on Artificial Intelligence Safety 2020 co-located with the 29th International Joint Conference on Artificial Intelligence and the 17th Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI 2020), Yokohama, Japan, January, 2021.","author":"Vargas Danilo Vasconcellos","year":"2020","unstructured":"Danilo Vasconcellos Vargas and Jiawei Su. 2020. Understanding the One pixel attack: Propagation maps and locality analysis. In Proceedings of the Workshop on Artificial Intelligence Safety 2020 co-located with the 29th International Joint Conference on Artificial Intelligence and the 17th Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI 2020), Yokohama, Japan, January, 2021.Hu\u00e1scar Espinoza, John A. McDermid, Xiaowei Huang, Mauricio Castillo-Effen, Xin Cynthia Chen, Jos\u00e9 Hern\u00e1ndez-Orallo, Se\u00e1n \u00d3 h\u00c9igeartaigh, and Richard Mallah (Eds.), CEUR-WS.org. Retrieved from https:\/\/ceur-ws.org\/Vol-2640\/paper_4.pdf"},{"key":"e_1_3_5_38_2","unstructured":"Chloe Xiang. 2022. Scientists Increasingly Can\u2019t Explain How AI Works. 
Retrieved January 13 2024 from https:\/\/www.vice.com\/en\/article\/y3pezm\/scientists-increasingly-cant-explain-how-ai-works"}],"container-title":["Digital Government: Research and Practice"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3674845","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3674845","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T00:05:56Z","timestamp":1750291556000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3674845"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,2,13]]},"references-count":37,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2025,3,31]]}},"alternative-id":["10.1145\/3674845"],"URL":"https:\/\/doi.org\/10.1145\/3674845","relation":{},"ISSN":["2691-199X","2639-0175"],"issn-type":[{"value":"2691-199X","type":"print"},{"value":"2639-0175","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,2,13]]},"assertion":[{"value":"2024-01-16","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-06-11","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-02-13","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}