{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,21]],"date-time":"2026-03-21T09:07:44Z","timestamp":1774084064655,"version":"3.50.1"},"reference-count":99,"publisher":"Association for Computing Machinery (ACM)","issue":"CSCW2","license":[{"start":{"date-parts":[[2024,11,7]],"date-time":"2024-11-07T00:00:00Z","timestamp":1730937600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["Proc. ACM Hum.-Comput. Interact."],"published-print":{"date-parts":[[2024,11,7]]},"abstract":"<jats:p>Many guidelines for responsible AI have been suggested to help AI practitioners in the development of ethical and responsible AI systems. However, these guidelines are often neither grounded in regulation nor usable by different roles, from developers to decision makers. To bridge this gap, we developed a four-step method to generate a list of responsible AI guidelines; these steps are: (1) manual coding of 17 papers on responsible AI; (2) compiling an initial catalog of responsible AI guidelines; (3) refining the catalog through interviews and expert panels; and (4) finalizing the catalog. To evaluate the resulting 22 guidelines, we incorporated them into an interactive tool and assessed them in a user study with 14 AI researchers, engineers, designers, and managers from a large technology company. Through interviews with these practitioners, we found that the guidelines were grounded in current regulations and usable across roles, encouraging self-reflection on ethical considerations at early stages of development. 
This significantly contributes to the concept of 'Responsible AI by Design', a design-first approach that embeds responsible AI values throughout the development lifecycle and across various business roles.<\/jats:p>","DOI":"10.1145\/3686927","type":"journal-article","created":{"date-parts":[[2024,11,8]],"date-time":"2024-11-08T15:52:40Z","timestamp":1731081160000},"page":"1-28","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":21,"title":["RAI Guidelines: Method for Generating Responsible AI Guidelines Grounded in Regulations and Usable by (Non-)Technical Roles"],"prefix":"10.1145","volume":"8","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-1454-0641","authenticated-orcid":false,"given":"Marios","family":"Constantinides","sequence":"first","affiliation":[{"name":"Nokia Bell Labs, Cambridge, United Kingdom"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8774-2386","authenticated-orcid":false,"given":"Edyta","family":"Bogucka","sequence":"additional","affiliation":[{"name":"Nokia Bell Labs, Cambridge, United Kingdom"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9461-5804","authenticated-orcid":false,"given":"Daniele","family":"Quercia","sequence":"additional","affiliation":[{"name":"Nokia Bell Labs, Cambridge, United Kingdom"}]},{"ORCID":"https:\/\/orcid.org\/0009-0006-1988-5373","authenticated-orcid":false,"given":"Susanna","family":"Kallio","sequence":"additional","affiliation":[{"name":"Nokia Bell Labs, Espoo, Finland"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9666-2663","authenticated-orcid":false,"given":"Mohammad","family":"Tahaei","sequence":"additional","affiliation":[{"name":"Nokia Bell Labs, Cambridge, United Kingdom"}]}],"member":"320","published-online":{"date-parts":[[2024,11,8]]},"reference":[{"key":"e_1_2_1_1_1","doi-asserted-by":"publisher","DOI":"10.1207\/S15327051HCI1523_5"},{"key":"e_1_2_1_2_1","unstructured":"EU AI Act. 2021. 
Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts. https:\/\/artificialintelligenceact.eu\/the-act\/"},{"key":"e_1_2_1_3_1","unstructured":"K.K. Aggarwal and Yogesh Singh. 2008. Software Engineering. New Age International."},{"key":"e_1_2_1_4_1","doi-asserted-by":"publisher","DOI":"10.1145\/3290605.3300233"},{"key":"e_1_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.inffus.2019.12.012"},{"key":"e_1_2_1_6_1","unstructured":"Vijay Arya Rachel K.E. Bellamy Pin-Yu Chen Amit Dhurandhar Michael Hind Samuel C. Hoffman Stephanie Houde Q. Vera Liao Ronny Luss Aleksandra Mojsilovi\u0107 et al. 2019. One Explanation Does Not Fit All: A Toolkit And Taxonomy Of AI Explainability Techniques. arxiv: 1909.03012"},{"key":"e_1_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.1145\/3209581"},{"key":"e_1_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.1145\/3600211.3604674"},{"key":"e_1_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.1108\/ET-11-2016-0169"},{"key":"e_1_2_1_10_1","doi-asserted-by":"publisher","DOI":"10.1162\/tacl_a_00041"},{"key":"e_1_2_1_11_1","unstructured":"Sarah Bird Miro Dud\u00edk Richard Edgar Brandon Horn Roman Lutz Vanessa Milan Mehrnoosh Sameki Hanna Wallach and Kathleen Walker. 2020. Fairlearn: A Toolkit for Assessing and Improving Fairness in AI. Technical Report. Microsoft. https:\/\/www.microsoft.com\/en-us\/research\/publication\/fairlearn-a-toolkit-for-assessing-and-improving-fairness-in-ai\/"},{"key":"e_1_2_1_12_1","unstructured":"Michael Boone Nikki Pope Chaowei Xiao and Anima Anandkumar. 2022. Enhancing AI Transparency and Ethical Considerations with Model Card. Nvidia. 
https:\/\/developer.nvidia.com\/blog\/enhancing-ai-transparency-and-ethical-considerations-with-model-card\/"},{"key":"e_1_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.1191\/1478088706qp063oa"},{"key":"e_1_2_1_14_1","volume-title":"SUS: A 'Quick and Dirty' Usability Scale. In Usability Evaluation In Industry, Patrick W","author":"Brooke John","year":"1996","unstructured":"John Brooke. 1996. SUS: A 'Quick and Dirty' Usability Scale. In Usability Evaluation In Industry, Patrick W. Jordan, B. Thomas, Ian L. McClelland, and Bernard Weerdmeester (Eds.). CRC Press, Chapter 12, 107--114."},{"key":"e_1_2_1_15_1","volume-title":"Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (FAccT). 77--91","author":"Buolamwini Joy","year":"2018","unstructured":"Joy Buolamwini and Timnit Gebru. 2018. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. In Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (FAccT). 77--91. https:\/\/proceedings.mlr.press\/v81\/buolamwini18a.html"},{"key":"e_1_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.1145\/3290605.3300733"},{"key":"e_1_2_1_17_1","unstructured":"Ann Cavoukian. 2009. Privacy by Design: The 7 Foundational Principles. Information & Privacy Commissioner of Ontario Canada. https:\/\/iab.org\/wp-content\/IAB-uploads\/2011\/03\/fred_carter.pdf"},{"key":"e_1_2_1_18_1","unstructured":"CEN-CENELEC. 2023. European Committee for Electrotechnical Standardization. 
https:\/\/www.cencenelec.eu\/areas-of-work\/cen-cenelec-topics\/artificial-intelligence\/"},{"key":"e_1_2_1_19_1","doi-asserted-by":"publisher","DOI":"10.1161\/CIRCULATIONAHA.114.014508"},{"key":"e_1_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.1145\/3290607.3299057"},{"key":"e_1_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.1145\/3531146.3533113"},{"key":"e_1_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.1145\/3593013.3594037"},{"key":"e_1_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.1145\/3134674"},{"key":"e_1_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1145\/3278721.3278729"},{"key":"e_1_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-540-79228-4_1"},{"key":"e_1_2_1_26_1","volume-title":"HCI International","author":"Ehsan Upol","unstructured":"Upol Ehsan and Mark O Riedl. 2020. Human-Centered Explainable AI: Towards a Reflective Sociotechnical Approach. In HCI International. Springer, 449--466."},{"key":"e_1_2_1_27_1","doi-asserted-by":"publisher","DOI":"10.1145\/3544548.3580771"},{"key":"e_1_2_1_28_1","unstructured":"Equal Employment Opportunity Commission. 1977. Prohibited Employment Policies\/Practices. https:\/\/www.eeoc.gov\/prohibited-employment-policiespractices"},{"key":"e_1_2_1_29_1","unstructured":"European Union. 2018. General Data Protection Regulation. https:\/\/gdpr-info.eu\/"},{"key":"e_1_2_1_30_1","unstructured":"Fairlearn. 2022. Improve Fairness of AI Systems. https:\/\/fairlearn.org"},{"key":"e_1_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.2139\/ssrn.3518482"},{"key":"e_1_2_1_32_1","doi-asserted-by":"publisher","DOI":"10.1046\/j.1440-1614.2002.01100.x"},{"key":"e_1_2_1_33_1","doi-asserted-by":"publisher","DOI":"10.1145\/3555190"},{"key":"e_1_2_1_34_1","doi-asserted-by":"publisher","DOI":"10.1145\/3458723"},{"key":"e_1_2_1_35_1","unstructured":"Brent Gleeson. 2013. The Silo Mentality: How To Break Down The Barriers. 
https:\/\/www.forbes.com\/sites\/brentgleeson\/2013\/10\/02\/the-silo-mentality-how-to-break-down-the-barriers\/"},{"key":"e_1_2_1_36_1","unstructured":"Google. 2022. AI Explorables. https:\/\/pair.withgoogle.com\/explorables\/"},{"key":"e_1_2_1_37_1","unstructured":"Google. 2022. Fairness Indicators. https:\/\/github.com\/tensorflow\/fairness-indicators"},{"key":"e_1_2_1_38_1","unstructured":"Government Equalities Office and Equality and Human Rights Commission. 2010. Equality Act 2010: Guidance. https:\/\/www.gov.uk\/guidance\/equality-act-2010-guidance"},{"key":"e_1_2_1_39_1","doi-asserted-by":"publisher","DOI":"10.1177\/1525822X05279903"},{"key":"e_1_2_1_40_1","doi-asserted-by":"publisher","DOI":"10.1126\/scirobotics.aay7120"},{"key":"e_1_2_1_41_1","volume-title":"Training a Single AI Model Can Emit as Much Carbon as Five Cars in Their Lifetimes. MIT Technology Review","author":"Hao Karen","year":"2019","unstructured":"Karen Hao. 2019. Training a Single AI Model Can Emit as Much Carbon as Five Cars in Their Lifetimes. MIT Technology Review (2019). https:\/\/www.technologyreview.com\/2019\/06\/06\/239031\/training-a-single-ai-model-can-emit-as-much-carbon-as-five-cars-in-their-lifetimes\/"},{"key":"e_1_2_1_42_1","doi-asserted-by":"publisher","DOI":"10.5555\/3157382.3157469"},{"key":"e_1_2_1_43_1","volume-title":"Proceedings of the Second Workshop on Gender Bias in Natural Language Processing. Association for Computational Linguistics, 107--124","author":"Havens Lucy","year":"2020","unstructured":"Lucy Havens, Melissa Terras, Benjamin Bach, and Beatrice Alex. 2020. Situated Data, Situated Systems: A Methodology to Engage with Power Relations in Natural Language Processing Research. In Proceedings of the Second Workshop on Gender Bias in Natural Language Processing. Association for Computational Linguistics, 107--124. 
https:\/\/aclanthology.org\/2020.gebnlp-1.10"},{"key":"e_1_2_1_44_1","doi-asserted-by":"publisher","DOI":"10.5040\/9781509932771.ch-001"},{"key":"e_1_2_1_45_1","unstructured":"The White House. 2023. Blueprint for an AI Bill of Rights. https:\/\/www.whitehouse.gov\/ostp\/ai-bill-of-rights\/"},{"key":"e_1_2_1_46_1","unstructured":"IBM. 2022. AI Fairness 360. https:\/\/aif360.mybluemix.net"},{"key":"e_1_2_1_47_1","unstructured":"IDEO. 2019. AI needs an ethical compass. This tool can help. https:\/\/www.ideo.com\/blog\/ai-needs-an-ethical-compass-this-tool-can-help"},{"key":"e_1_2_1_48_1","doi-asserted-by":"publisher","DOI":"10.1038\/s42256-019-0088-2"},{"key":"e_1_2_1_49_1","doi-asserted-by":"publisher","DOI":"10.1145\/3313831.3376219"},{"key":"e_1_2_1_50_1","unstructured":"Christopher M. Kelty. 2018. The Participatory Development Toolkit. https:\/\/limn.it\/articles\/the-participatory-development-toolkit\/"},{"key":"e_1_2_1_51_1","volume-title":"The Age of AI: And Our Human Future","author":"Kissinger Henry","year":"2021","unstructured":"Henry Kissinger, Eric Schmidt, and Daniel P. Huttenlocher. 2021. The Age of AI: And Our Human Future. John Murray London."},{"key":"e_1_2_1_52_1","unstructured":"Knowledge Centre Data and Society. 2019. AI Blindspots Card Set 1.0. https:\/\/data-en-maatschappij.ai\/en\/tools\/ai-blindspot"},{"key":"e_1_2_1_53_1","unstructured":"Knowledge Centre Data and Society. 2019. AI Blindspots Card Set 2.0. https:\/\/data-en-maatschappij.ai\/en\/tools\/ai-blindspots-2.0"},{"key":"e_1_2_1_54_1","doi-asserted-by":"publisher","DOI":"10.1145\/2678025.2701399"},{"key":"e_1_2_1_55_1","unstructured":"Superrr Lab. 2022. The Feminist Tech Card Deck. https:\/\/superrr.net\/feministtech\/deck"},{"key":"e_1_2_1_56_1","volume-title":"Human-Centered Explainable AI (XAI): From Algorithms to User Experiences","author":"Vera Liao Q.","year":"2021","unstructured":"Q. Vera Liao and Kush R. Varshney. 2021. Human-Centered Explainable AI (XAI): From Algorithms to User Experiences. 
arxiv: 2110.10790"},{"key":"e_1_2_1_57_1","doi-asserted-by":"publisher","DOI":"10.1145\/3626234"},{"key":"e_1_2_1_58_1","doi-asserted-by":"publisher","DOI":"10.5555\/3295222.3295230"},{"key":"e_1_2_1_59_1","doi-asserted-by":"publisher","DOI":"10.1145\/3313831.3376445"},{"key":"e_1_2_1_60_1","unstructured":"Shannon Mattern. 2021. Unboxing the Toolkit. https:\/\/tool-shed.org\/unboxing-the-toolkit\/"},{"key":"e_1_2_1_61_1","doi-asserted-by":"publisher","DOI":"10.1145\/3359174"},{"key":"e_1_2_1_62_1","doi-asserted-by":"publisher","DOI":"10.1186\/1748-5908-6-10"},{"key":"e_1_2_1_63_1","doi-asserted-by":"publisher","DOI":"10.1007\/s12160-013-9486-6"},{"key":"e_1_2_1_64_1","volume-title":"Qualitative Data Analysis: A Methods Sourcebook","author":"Miles Matthew","unstructured":"Matthew Miles and Michael Huberman. 1994. Qualitative Data Analysis: A Methods Sourcebook. Sage."},{"key":"e_1_2_1_65_1","unstructured":"Miro. 2022. Miro | Online Whiteboard for Visual Collaboration. https:\/\/miro.com\/"},{"key":"e_1_2_1_66_1","doi-asserted-by":"publisher","DOI":"10.1145\/3287560.3287596"},{"key":"e_1_2_1_67_1","volume-title":"Prediction-Based Decisions and Fairness: A Catalogue of Choices, Assumptions, and Definitions. arxiv","author":"Mitchell Shira","year":"2018","unstructured":"Shira Mitchell, Eric Potash, Solon Barocas, Alexander D'Amour, and Kristian Lum. 2018. Prediction-Based Decisions and Fairness: A Catalogue of Choices, Assumptions, and Definitions. arxiv: 1811.07867"},{"key":"e_1_2_1_68_1","doi-asserted-by":"publisher","DOI":"10.1177\/2053951716679679"},{"key":"e_1_2_1_69_1","unstructured":"Interpret ML. 2019. Interpret ML. https:\/\/interpret.ml\/"},{"key":"e_1_2_1_70_1","unstructured":"Aleksandra Mojsilovic. 2019. Introducing AI Explainability 360. IBM. 
https:\/\/www.ibm.com\/blogs\/research\/2019\/08\/ai-explainability-360\/"},{"key":"e_1_2_1_71_1","doi-asserted-by":"publisher","DOI":"10.1145\/3510003.3510209"},{"key":"e_1_2_1_72_1","unstructured":"Arvind Narayanan. 2018. 21 Fairness definitions and their politics. In Tutorial presented at the ACM Conference on Fairness Accountability and Transparency (FAccT)."},{"key":"e_1_2_1_73_1","unstructured":"National Institute of Standards and Technology. 2023. AI Risk Management Framework. https:\/\/www.nist.gov\/itl\/ai-risk-management-framework"},{"key":"e_1_2_1_74_1","unstructured":"OECD. 2024. Catalogue of Tools & Metrics for Trustworthy AI. https:\/\/oecd.ai\/en\/catalogue\/tools"},{"key":"e_1_2_1_75_1","volume-title":"Stack Overflow Developer Survey","author":"Overflow Stack","year":"2022","unstructured":"Stack Overflow. 2022. Stack Overflow Developer Survey 2022. https:\/\/survey.stackoverflow.co\/2022\/"},{"key":"e_1_2_1_76_1","doi-asserted-by":"publisher","DOI":"10.1145\/3449205"},{"key":"e_1_2_1_77_1","doi-asserted-by":"publisher","DOI":"10.1145\/3351095.3372873"},{"key":"e_1_2_1_78_1","doi-asserted-by":"publisher","DOI":"10.1145\/3449081"},{"key":"e_1_2_1_79_1","volume-title":"The Coding Manual for Qualitative Researchers","author":"Johnny Salda\u00f1a","unstructured":"Johnny Salda\u00f1a. 2015. The Coding Manual for Qualitative Researchers. Sage."},{"key":"e_1_2_1_80_1","doi-asserted-by":"publisher","DOI":"10.1109\/TTS.2023.3257303"},{"key":"e_1_2_1_81_1","unstructured":"Jeff Sauro. 2011. A Practical Guide to the System Usability Scale: Background Benchmarks & Best Practices. Measuring Usability LLC."},{"key":"e_1_2_1_82_1","doi-asserted-by":"publisher","DOI":"10.1007\/s12394-010-0055-x"},{"key":"e_1_2_1_83_1","doi-asserted-by":"publisher","DOI":"10.1145\/3287560.3287598"},{"key":"e_1_2_1_84_1","volume-title":"The Cost of Training NLP Models: A Concise Overview. arxiv","author":"Sharir Or","year":"2020","unstructured":"Or Sharir, Barak Peleg, and Yoav Shoham. 2020. 
The Cost of Training NLP Models: A Concise Overview. arxiv: 2004.08900"},{"key":"e_1_2_1_85_1","doi-asserted-by":"publisher","DOI":"10.1145\/3445973"},{"key":"e_1_2_1_86_1","doi-asserted-by":"crossref","unstructured":"Ben Shneiderman. 2022. Human-centered AI. Oxford University Press.","DOI":"10.1093\/oso\/9780192845290.001.0001"},{"key":"e_1_2_1_87_1","doi-asserted-by":"publisher","DOI":"10.1038\/s41598-023-34622-w"},{"key":"e_1_2_1_88_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v34i09.7123"},{"key":"e_1_2_1_89_1","doi-asserted-by":"publisher","DOI":"10.1145\/3491102.3517537"},{"key":"e_1_2_1_90_1","doi-asserted-by":"publisher","DOI":"10.1145\/3544549.3583178"},{"key":"e_1_2_1_91_1","unstructured":"Mohammad Tahaei Marios Constantinides Daniele Quercia and Michael Muller. 2023. A Systematic Literature Review of Human-Centered Ethical and Responsible AI. arxiv: 2302.05284"},{"key":"e_1_2_1_92_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.jss.2021.111067"},{"key":"e_1_2_1_93_1","doi-asserted-by":"publisher","DOI":"10.1145\/3194770.3194776"},{"key":"e_1_2_1_94_1","doi-asserted-by":"publisher","DOI":"10.1007\/s11263-022-01625-5"},{"key":"e_1_2_1_95_1","doi-asserted-by":"publisher","DOI":"10.1145\/3544548.3581278"},{"key":"e_1_2_1_96_1","doi-asserted-by":"publisher","DOI":"10.23860\/JMLE-2020-12-3-8"},{"key":"e_1_2_1_97_1","doi-asserted-by":"publisher","DOI":"10.1145\/3579621"},{"key":"e_1_2_1_98_1","doi-asserted-by":"publisher","DOI":"10.1145\/3544548.3580900"},{"key":"e_1_2_1_99_1","doi-asserted-by":"publisher","DOI":"10.1145\/3392826"}],"container-title":["Proceedings of the ACM on Human-Computer 
Interaction"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3686927","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3686927","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,8,21]],"date-time":"2025-08-21T00:59:17Z","timestamp":1755737957000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3686927"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,11,7]]},"references-count":99,"journal-issue":{"issue":"CSCW2","published-print":{"date-parts":[[2024,11,7]]}},"alternative-id":["10.1145\/3686927"],"URL":"https:\/\/doi.org\/10.1145\/3686927","relation":{},"ISSN":["2573-0142"],"issn-type":[{"value":"2573-0142","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,11,7]]},"assertion":[{"value":"2024-11-08","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}