{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,8,21]],"date-time":"2025-08-21T18:45:17Z","timestamp":1755801917351,"version":"3.44.0"},"reference-count":77,"publisher":"Association for Computing Machinery (ACM)","issue":"CSCW2","license":[{"start":{"date-parts":[[2024,11,7]],"date-time":"2024-11-07T00:00:00Z","timestamp":1730937600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["Proc. ACM Hum.-Comput. Interact."],"published-print":{"date-parts":[[2024,11,7]]},"abstract":"<jats:p>\n            AI regulations are expected to prohibit machine learning models from using sensitive attributes during training. However, the latest Natural Language Processing (NLP) classifiers, which rely on deep learning, operate as black-box systems, complicating the detection and remediation of such misuse. Traditional bias mitigation methods in NLP aim for comparable performance across different groups based on attributes like gender or race but fail to address the underlying issue of reliance on protected attributes. To partly fix that, we introduce\n            <jats:sc>NLPGuard,<\/jats:sc>\n            a framework for mitigating the reliance on protected attributes in NLP classifiers.\n            <jats:sc>NLPGuard<\/jats:sc>\n            takes an unlabeled dataset, an existing NLP classifier, and its training data as input, producing a modified training dataset that significantly reduces dependence on protected attributes without compromising accuracy.\n            <jats:sc>NLPGuard<\/jats:sc>\n            is applied to three classification tasks: identifying toxic language, sentiment analysis, and occupation classification. 
Our evaluation shows that current NLP classifiers heavily depend on protected attributes, with up to 23% of the most predictive words associated with these attributes. However,\n            <jats:sc>NLPGuard<\/jats:sc>\n            effectively reduces this reliance by up to 79%, while slightly improving accuracy.\n          <\/jats:p>","DOI":"10.1145\/3686924","type":"journal-article","created":{"date-parts":[[2024,11,8]],"date-time":"2024-11-08T15:52:40Z","timestamp":1731081160000},"page":"1-25","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":4,"title":["NLPGuard: A Framework for Mitigating the Use of Protected Attributes by NLP Classifiers"],"prefix":"10.1145","volume":"8","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-7239-9602","authenticated-orcid":false,"given":"Salvatore","family":"Greco","sequence":"first","affiliation":[{"name":"Politecnico di Torino, Turin, Italy"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7177-9152","authenticated-orcid":false,"given":"Ke","family":"Zhou","sequence":"additional","affiliation":[{"name":"Nokia Bell Labs, Cambridge, United Kingdom"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1425-3837","authenticated-orcid":false,"given":"Licia","family":"Capra","sequence":"additional","affiliation":[{"name":"University College London, London, United Kingdom"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9039-6226","authenticated-orcid":false,"given":"Tania","family":"Cerquitelli","sequence":"additional","affiliation":[{"name":"Politecnico di Torino, Turin, Italy"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9461-5804","authenticated-orcid":false,"given":"Daniele","family":"Quercia","sequence":"additional","affiliation":[{"name":"Nokia Bell Labs, Cambridge, United 
Kingdom"}]}],"member":"320","published-online":{"date-parts":[[2024,11,8]]},"reference":[{"key":"e_1_2_2_1_1","doi-asserted-by":"publisher","DOI":"10.1109\/MSEC.2018.2888775"},{"key":"e_1_2_2_2_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/W16-1601"},{"key":"e_1_2_2_3_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.emnlp-main.263"},{"key":"e_1_2_2_4_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2022.findings-acl.88"},{"key":"e_1_2_2_5_1","volume-title":"Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations. Association for Computational Linguistics.","author":"Attanasio Giuseppe","year":"2023","unstructured":"Giuseppe Attanasio, Eliana Pastor, Chiara Di Bonaventura, and Debora Nozza. 2023. ferret: a Framework for Benchmarking Explainers on Transformers. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations. Association for Computational Linguistics."},{"key":"e_1_2_2_6_1","doi-asserted-by":"publisher","DOI":"10.1145\/3308558"},{"key":"e_1_2_2_7_1","volume-title":"Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. CoRR abs\/1607.06520","author":"Bolukbasi Tolga","year":"2016","unstructured":"Tolga Bolukbasi, Kai-Wei Chang, James Y. Zou, Venkatesh Saligrama, and Adam Kalai. 2016. Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. CoRR abs\/1607.06520 (2016). arXiv:1607.06520 http:\/\/arxiv.org\/abs\/1607.06520"},{"key":"e_1_2_2_8_1","volume-title":"Language Models are Few-Shot Learners. CoRR abs\/2005.14165","author":"Brown Tom B.","year":"2020","unstructured":"Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M.
Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language Models are Few-Shot Learners. CoRR abs\/2005.14165 (2020). arXiv:2005.14165 https:\/\/arxiv.org\/abs\/2005.14165"},{"key":"e_1_2_2_9_1","doi-asserted-by":"publisher","DOI":"10.1126\/science.aal4230"},{"key":"e_1_2_2_10_1","doi-asserted-by":"publisher","DOI":"10.1145\/3376898"},{"key":"e_1_2_2_11_1","doi-asserted-by":"publisher","DOI":"10.1177\/001316446002000104"},{"key":"e_1_2_2_12_1","volume-title":"Prohibited Employment Policies\/Practices. https:\/\/www.eeoc.gov\/prohibited-employment-policiespractices Accessed","author":"Equal Employment Opportunity Commission","year":"2023","unstructured":"Equal Employment Opportunity Commission. 1977. Prohibited Employment Policies\/Practices. https:\/\/www.eeoc.gov\/prohibited-employment-policiespractices Accessed: June 2023."},{"volume-title":"Shaping the Future of ICT Research","author":"Crowston Kevin","key":"e_1_2_2_13_1","unstructured":"Kevin Crowston. 2012. Amazon Mechanical Turk: A Research Tool for Organizations and Information Systems Scholars. In Shaping the Future of ICT Research. Methods and Approaches, Anol Bhattacherjee and Brian Fitzgerald (Eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 210--221."},{"key":"e_1_2_2_14_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.aacl-main.46"},{"key":"e_1_2_2_15_1","volume-title":"Racial Bias in Hate Speech and Abusive Language Detection Datasets. CoRR abs\/1905.12516","author":"Davidson Thomas","year":"2019","unstructured":"Thomas Davidson, Debasmita Bhattacharya, and Ingmar Weber. 2019. Racial Bias in Hate Speech and Abusive Language Detection Datasets. CoRR abs\/1905.12516 (2019).
arXiv:1905.12516 http:\/\/arxiv.org\/abs\/1905.12516"},{"key":"e_1_2_2_16_1","volume-title":"Automated Hate Speech Detection and the Problem of Offensive Language. CoRR abs\/1703.04009","author":"Davidson Thomas","year":"2017","unstructured":"Thomas Davidson, Dana Warmsley, Michael W. Macy, and Ingmar Weber. 2017. Automated Hate Speech Detection and the Problem of Offensive Language. CoRR abs\/1703.04009 (2017). arXiv:1703.04009 http:\/\/arxiv.org\/abs\/1703.04009"},{"key":"e_1_2_2_17_1","doi-asserted-by":"publisher","DOI":"10.1145\/3287560.3287572"},{"key":"e_1_2_2_18_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/N19-1423"},{"key":"e_1_2_2_19_1","volume-title":"Build it break it fix it for dialogue safety: Robustness from adversarial human attack. arXiv preprint arXiv:1908.06083","author":"Dinan Emily","year":"2019","unstructured":"Emily Dinan, Samuel Humeau, Bharath Chintagunta, and Jason Weston. 2019. Build it break it fix it for dialogue safety: Robustness from adversarial human attack. arXiv preprint arXiv:1908.06083 (2019)."},{"key":"e_1_2_2_20_1","doi-asserted-by":"publisher","DOI":"10.1145\/3278721.3278729"},{"key":"e_1_2_2_21_1","doi-asserted-by":"publisher","DOI":"10.2139\/ssrn.4064091"},{"key":"e_1_2_2_22_1","doi-asserted-by":"publisher","DOI":"10.3390\/app11073184"},{"key":"e_1_2_2_23_1","volume-title":"ChatGPT outperforms crowd-workers for text-annotation tasks. arXiv preprint arXiv:2303.15056","author":"Gilardi Fabrizio","year":"2023","unstructured":"Fabrizio Gilardi, Meysam Alizadeh, and Ma\u00ebl Kubli. 2023. ChatGPT outperforms crowd-workers for text-annotation tasks.
arXiv preprint arXiv:2303.15056 (2023)."},{"key":"e_1_2_2_24_1","doi-asserted-by":"publisher","DOI":"10.24963\/ijcai.2023\/742"},{"key":"e_1_2_2_25_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/N19-1061"},{"key":"e_1_2_2_26_1","doi-asserted-by":"publisher","DOI":"10.1609\/aimag.v38i3.2741"},{"key":"e_1_2_2_27_1","volume-title":"Proceedings of the NIPS Symposium on Machine Learning and the Law","volume":"1","author":"Grgic-Hlaca Nina","year":"2016","unstructured":"Nina Grgic-Hlaca, Muhammad Bilal Zafar, Krishna P. Gummadi, and Adrian Weller. 2016. The case for process fairness in learning: Feature selection for fair decision making. In Proceedings of the NIPS Symposium on Machine Learning and the Law, Vol. 1. 2."},{"key":"e_1_2_2_28_1","doi-asserted-by":"publisher","DOI":"10.1145\/3236009"},{"key":"e_1_2_2_29_1","doi-asserted-by":"publisher","DOI":"10.1145\/3465416.3483299"},{"key":"e_1_2_2_30_1","unstructured":"Laura Hanu and Unitary team. 2020. Detoxify. GitHub. https:\/\/github.com\/unitaryai\/detoxify."},{"key":"e_1_2_2_31_1","doi-asserted-by":"publisher","DOI":"10.1145\/3531146.3533144"},{"key":"e_1_2_2_32_1","volume-title":"Blueprint for an AI Bill of Rights. https:\/\/www.whitehouse.gov\/ostp\/ai-bill-of-rights\/#discrimination Accessed","author":"House White","year":"2023","unstructured":"White House. 2023. Blueprint for an AI Bill of Rights. https:\/\/www.whitehouse.gov\/ostp\/ai-bill-of-rights\/#discrimination Accessed: June 2023."},{"key":"e_1_2_2_33_1","volume-title":"Proceedings of the First Conference on Causal Learning and Reasoning (Proceedings of Machine Learning Research","volume":"351","author":"Idrissi Badr Youbi","year":"2022","unstructured":"Badr Youbi Idrissi, Martin Arjovsky, Mohammad Pezeshki, and David Lopez-Paz. 2022. Simple data balancing achieves competitive worst-group-accuracy. In Proceedings of the First Conference on Causal Learning and Reasoning (Proceedings of Machine Learning Research, Vol.
177), Bernhard Sch\u00f6lkopf, Caroline Uhler, and Kun Zhang (Eds.). PMLR, 336--351. https:\/\/proceedings.mlr.press\/v177\/idrissi22a.html"},{"key":"e_1_2_2_34_1","doi-asserted-by":"publisher","DOI":"10.1038\/s42256-019-0088-2"},{"key":"e_1_2_2_35_1","volume-title":"Equality Act 2010: guidance. https:\/\/www.gov.uk\/guidance\/equality-act-2010-guidance Accessed","author":"United Kingdom. 2010.","year":"2023","unstructured":"United Kingdom. 2010. Equality Act 2010: guidance. https:\/\/www.gov.uk\/guidance\/equality-act-2010-guidance Accessed: June 2023."},{"key":"e_1_2_2_36_1","volume-title":"Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980","author":"Kingma Diederik P","year":"2014","unstructured":"Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)."},{"key":"e_1_2_2_37_1","doi-asserted-by":"publisher","DOI":"10.1017\/jme.2022.13"},{"key":"e_1_2_2_38_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.ejor.2021.06.023"},{"key":"e_1_2_2_39_1","volume-title":"Seventeenth Symposium on Usable Privacy and Security (SOUPS","author":"Kumar Deepak","year":"2021","unstructured":"Deepak Kumar, Patrick Gage Kelley, Sunny Consolvo, Joshua Mason, Elie Bursztein, Zakir Durumeric, Kurt Thomas, and Michael Bailey. 2021. Designing toxic content classification for a diversity of perspectives. In Seventeenth Symposium on Usable Privacy and Security (SOUPS 2021). 299--318."},{"key":"e_1_2_2_40_1","volume-title":"Advances in Neural Information Processing Systems","volume":"30","author":"Kusner Matt J","year":"2017","unstructured":"Matt J Kusner, Joshua Loftus, Chris Russell, and Ricardo Silva. 2017. Counterfactual Fairness. In Advances in Neural Information Processing Systems, Vol. 30. Curran Associates, Inc.
https:\/\/proceedings.neurips.cc\/paper_files\/paper\/2017\/file\/a486cd07e4ac3d270571622f4f316ec5-Paper.pdf"},{"key":"e_1_2_2_41_1","volume-title":"The measurement of observer agreement for categorical data. Biometrics","author":"Richard Landis J","year":"1977","unstructured":"J Richard Landis and Gary G Koch. 1977. The measurement of observer agreement for categorical data. Biometrics (1977), 159--174."},{"key":"e_1_2_2_42_1","unstructured":"European Union Law. 2023. Proposal for a Regulation laying down harmonised rules on Artificial Intelligence and amending certain union legislative acts. https:\/\/eur-lex.europa.eu\/legal-content\/EN\/TXT\/?uri=celex%3A52021PC0206 Accessed: June 2023."},{"key":"e_1_2_2_43_1","unstructured":"Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, and Percy Liang. 2023. Lost in the Middle: How Language Models Use Long Contexts. arXiv:2307.03172 [cs.CL]"},{"key":"e_1_2_2_44_1","first-page":"I","article-title":"A Unified Approach to Interpreting Model Predictions","volume":"30","author":"Lundberg Scott M","year":"2017","unstructured":"Scott M Lundberg and Su-In Lee. 2017. A Unified Approach to Interpreting Model Predictions. In Advances in Neural Information Processing Systems 30, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.). Curran Associates, Inc., 4765--4774. http:\/\/papers.nips.cc\/paper\/7062-a-unified-approach-to-interpreting-model-predictions.pdf","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_2_2_45_1","volume-title":"Article 115 (jul","author":"Mehrabi Ninareh","year":"2021","unstructured":"Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. 2021. A Survey on Bias and Fairness in Machine Learning. ACM Comput. Surv. 54, 6, Article 115 (jul 2021), 35 pages. https:\/\/doi.org\/10.1145\/3457607"},
{"key":"e_1_2_2_46_1","doi-asserted-by":"publisher","DOI":"10.1145\/219717.219748"},{"key":"e_1_2_2_47_1","doi-asserted-by":"publisher","DOI":"10.1145\/3287560.3287574"},{"key":"e_1_2_2_48_1","doi-asserted-by":"crossref","unstructured":"Brent Mittelstadt, Sandra Wachter, and Chris Russell. 2023. The Unfairness of Fair Machine Learning: Levelling down and strict egalitarianism by default. arXiv:2302.02404 [cs.AI]","DOI":"10.36645\/mtlr.30.1.unfairness"},{"key":"e_1_2_2_49_1","doi-asserted-by":"publisher","DOI":"10.1177\/2053951716679679"},{"key":"e_1_2_2_50_1","unstructured":"EQUINET European Network of Equality Bodies. 2022. EXPANDING THE LIST OF PROTECTED GROUNDS WITHIN ANTI-DISCRIMINATION LAW IN THE EU: AN EQUINET REPORT. https:\/\/equineteurope.org\/expanding-the-list-of-protected-grounds-within-anti-discrimination-law-in-the-eu-an-equinet-report\/ Accessed: January 2024."},{"key":"e_1_2_2_51_1","doi-asserted-by":"publisher","DOI":"10.1145\/3593013.3594069"},{"key":"e_1_2_2_52_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/D18-1302"},{"key":"e_1_2_2_53_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v36i11.21468"},{"key":"e_1_2_2_54_1","doi-asserted-by":"publisher","DOI":"10.1609\/hcomp.v7i1.5281"},{"key":"e_1_2_2_55_1","volume-title":"Manning","author":"Pennington Jeffrey","year":"2014","unstructured":"Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global Vectors for Word Representation. In Empirical Methods in Natural Language Processing (EMNLP). 1532--1543. http:\/\/www.aclweb.org\/anthology\/D14-1162"},{"key":"e_1_2_2_56_1","unstructured":"Chengwei Qin, Aston Zhang, Zhuosheng Zhang, Jiaao Chen, Michihiro Yasunaga, and Diyi Yang. 2023. Is ChatGPT a General-Purpose Natural Language Processing Task Solver?
arXiv:2302.06476 [cs.CL]"},{"key":"e_1_2_2_57_1","volume-title":"Liu","author":"Raffel Colin","year":"2019","unstructured":"Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. CoRR abs\/1910.10683 (2019). arXiv:1910.10683 http:\/\/arxiv.org\/abs\/1910.10683"},{"key":"e_1_2_2_58_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.acl-main.647"},{"key":"e_1_2_2_59_1","doi-asserted-by":"publisher","DOI":"10.1145\/2939672.2939778"},{"key":"e_1_2_2_60_1","doi-asserted-by":"publisher","DOI":"10.1145\/1753846.1753873"},{"key":"e_1_2_2_61_1","doi-asserted-by":"publisher","DOI":"10.5555\/3524938.3525711"},{"key":"e_1_2_2_62_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/P19-1163"},{"key":"e_1_2_2_63_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2017.74"},{"key":"e_1_2_2_64_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2022.naacl-main.347"},{"key":"e_1_2_2_65_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.blackboxnlp-1.33"},{"key":"e_1_2_2_66_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/P19-1164"},{"key":"e_1_2_2_67_1","volume-title":"Mitigating gender bias in natural language processing: Literature review. arXiv preprint arXiv:1906.08976","author":"Sun Tony","year":"2019","unstructured":"Tony Sun, Andrew Gaut, Shirlyn Tang, Yuxin Huang, Mai ElSherief, Jieyu Zhao, Diba Mirza, Elizabeth Belding, Kai-Wei Chang, and William Yang Wang. 2019. Mitigating gender bias in natural language processing: Literature review. arXiv preprint arXiv:1906.08976 (2019)."},{"key":"e_1_2_2_68_1","doi-asserted-by":"publisher","DOI":"10.1145\/3492854"},{"key":"e_1_2_2_69_1","unstructured":"Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic Attribution for Deep Networks (ICML'17). JMLR.org, 3319--3328."},{"key":"e_1_2_2_70_1","volume-title":"General Data Protection Regulation.
https:\/\/gdpr-info.eu\/ Accessed","author":"Union European","year":"2023","unstructured":"European Union. 2018. General Data Protection Regulation. https:\/\/gdpr-info.eu\/ Accessed: June 2023."},{"key":"e_1_2_2_71_1","volume-title":"The AI Act. https:\/\/artificialintelligenceact.eu\/ Accessed","author":"Union European","year":"2023","unstructured":"European Union. 2023. The AI Act. https:\/\/artificialintelligenceact.eu\/ Accessed: June 2023."},{"key":"e_1_2_2_72_1","unstructured":"Ilse van der Linden, Hinda Haned, and Evangelos Kanoulas. 2019. Global Aggregations of Local Explanations for Black Box models. arXiv:1907.03039 [cs.IR]"},{"key":"e_1_2_2_73_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10115-022-01690-9"},{"key":"e_1_2_2_74_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.clsr.2021.105567"},{"key":"e_1_2_2_75_1","volume-title":"Recommendations for Bias Mitigation Methods: Applicability and Legality. CEUR Workshop Proceedings.","author":"Waller Madeleine","year":"2023","unstructured":"Madeleine Waller, Odinaldo Rodrigues, and Oana Cocarascu. 2023. Recommendations for Bias Mitigation Methods: Applicability and Legality.
CEUR Workshop Proceedings."},{"key":"e_1_2_2_76_1","doi-asserted-by":"publisher","DOI":"10.1145\/3415179"},{"key":"e_1_2_2_77_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.acl-main.380"}],"container-title":["Proceedings of the ACM on Human-Computer Interaction"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3686924","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3686924","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,8,21]],"date-time":"2025-08-21T01:00:14Z","timestamp":1755738014000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3686924"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,11,7]]},"references-count":77,"journal-issue":{"issue":"CSCW2","published-print":{"date-parts":[[2024,11,7]]}},"alternative-id":["10.1145\/3686924"],"URL":"https:\/\/doi.org\/10.1145\/3686924","relation":{},"ISSN":["2573-0142"],"issn-type":[{"type":"electronic","value":"2573-0142"}],"subject":[],"published":{"date-parts":[[2024,11,7]]},"assertion":[{"value":"2024-11-08","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}