{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,9]],"date-time":"2026-02-09T18:52:25Z","timestamp":1770663145075,"version":"3.49.0"},"publisher-location":"New York, NY, USA","reference-count":82,"publisher":"ACM","license":[{"start":{"date-parts":[[2021,7,11]],"date-time":"2021-07-11T00:00:00Z","timestamp":1625961600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"Sloan Foundation, Sloan Fellowship 2019"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2021,7,11]]},"DOI":"10.1145\/3404835.3462850","type":"proceedings-article","created":{"date-parts":[[2021,7,12]],"date-time":"2021-07-12T02:41:52Z","timestamp":1626057712000},"page":"1033-1043","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":28,"title":["When Fair Ranking Meets Uncertain Inference"],"prefix":"10.1145","author":[{"given":"Avijit","family":"Ghosh","sequence":"first","affiliation":[{"name":"Northeastern University, Boston, MA, USA"}]},{"given":"Ritam","family":"Dutt","sequence":"additional","affiliation":[{"name":"Carnegie Mellon University, Pittsburgh, PA, USA"}]},{"given":"Christo","family":"Wilson","sequence":"additional","affiliation":[{"name":"Northeastern University, Boston, MA, USA"}]}],"member":"320","published-online":{"date-parts":[[2021,7,11]]},"reference":[{"key":"e_1_3_2_2_1_1","doi-asserted-by":"crossref","unstructured":"Dzifa Adjaye-Gbewonyo Robert A Bednarczyk Robert L Davis and Saad BOmer. 2014. Using the Bayesian Improved Surname Geocoding Method (BISG)to create a working classification of race and ethnicity in a diverse managed care population: a validation study. 
Health services research 49, 1 (2014), 268--283.","DOI":"10.1111\/1475-6773.12089"},{"key":"e_1_3_2_2_2_1","unstructured":"Alekh Agarwal, Miroslav Dud\u00edk, and Zhiwei Steven Wu. 2019. Fair regression: Quantitative definitions and reduction-based algorithms. arXiv preprint arXiv:1905.12843 (2019)."},{"key":"e_1_3_2_2_3_1","doi-asserted-by":"publisher","DOI":"10.1145\/3442188.3445888"},{"key":"e_1_3_2_2_4_1","volume-title":"Selbst","author":"Barocas Solon","year":"2016","unstructured":"Solon Barocas and Andrew D. Selbst. 2016. Big Data's Disparate Impact. 104 California Law Review 671 (2016)."},{"key":"e_1_3_2_2_5_1","unstructured":"Sid Basu, Ruthie Berman, Adam Bloomston, John Campbell, Anne Diaz, Nanako Era, Benjamin Evans, Sukhada Palkar, and Skyler Wharton. 2020. Measuring discrepancies in Airbnb guest acceptance rates using anonymized demographic data. Airbnb. https:\/\/news.airbnb.com\/wp-content\/uploads\/sites\/4\/2020\/06\/Project-Lighthouse-Airbnb-2020-06--12.pdf."},{"key":"e_1_3_2_2_6_1","volume-title":"2018. AI Fairness 360: An extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias. arXiv preprint arXiv","author":"Bellamy Rachel KE","year":"2018","unstructured":"Rachel KE Bellamy, Kuntal Dey, Michael Hind, Samuel C Hoffman, Stephanie Houde, Kalapriya Kannan, Pranay Lohia, Jacquelyn Martino, Sameep Mehta, Aleksandra Mojsilovic, et al. 2018. AI Fairness 360: An extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias. arXiv preprint arXiv:1810.01943 (2018)."},{"key":"e_1_3_2_2_7_1","unstructured":"Richard Berk, Hoda Heidari, Shahin Jabbari, Matthew Joseph, Michael Kearns, Jamie Morgenstern, Seth Neel, and Aaron Roth. 2017. A convex framework for fair regression. 
arXiv preprint arXiv:1706.02409 (2017)."},{"key":"e_1_3_2_2_8_1","volume-title":"Chi, and Cristos Goodrow","author":"Beutel Alex","year":"2019","unstructured":"Alex Beutel, Jilin Chen, Tulsee Doshi, Hai Qian, Li Wei, Yi Wu, Lukasz Heldt, Zhe Zhao, Lichan Hong, Ed H. Chi, and Cristos Goodrow. 2019. Fairness in Recommendation Ranking through Pairwise Comparisons. In KDD. https:\/\/arxiv.org\/pdf\/1903.00780.pdf"},{"key":"e_1_3_2_2_9_1","volume-title":"The 41st international acm sigir conference on research & development in information retrieval. 405--414.","author":"Biega Asia J","unstructured":"Asia J Biega, Krishna P Gummadi, and Gerhard Weikum. 2018. Equity of attention: Amortizing individual fairness in rankings. In The 41st international acm sigir conference on research & development in information retrieval. 405--414."},{"key":"e_1_3_2_2_10_1","doi-asserted-by":"publisher","DOI":"10.1145\/3351095.3372877"},{"key":"e_1_3_2_2_11_1","unstructured":"Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In Advances in neural information processing systems. 4349--4357."},{"key":"e_1_3_2_2_12_1","volume-title":"International Conference on Machine Learning. 803--811","author":"Brunet Marc-Etienne","year":"2019","unstructured":"Marc-Etienne Brunet, Colleen Alkalay-Houlihan, Ashton Anderson, and Richard Zemel. 2019. Understanding the origins of bias in word embeddings. In International Conference on Machine Learning. 803--811."},{"key":"e_1_3_2_2_13_1","volume-title":"Conference on fairness, accountability and transparency. 77--91","author":"Buolamwini Joy","year":"2018","unstructured":"Joy Buolamwini and Timnit Gebru. 2018. Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on fairness, accountability and transparency. 77--91."},{"key":"e_1_3_2_2_14_1","unstructured":"Consumer Financial Protection Bureau. 
2014. Using publicly available information to proxy for unidentified race and ethnicity. Report available at http:\/\/files.consumerfinance.gov\/f\/201409_cfpb_report_proxy-methodology.pdf (2014)."},{"key":"e_1_3_2_2_15_1","doi-asserted-by":"publisher","DOI":"10.1145\/3287560.3287586"},{"key":"e_1_3_2_2_16_1","unstructured":"L Elisa Celis, Lingxiao Huang, and Nisheeth K Vishnoi. 2020. Fair Classification with Noisy Protected Attributes. arXiv preprint arXiv:2006.04778 (2020)."},{"key":"e_1_3_2_2_17_1","doi-asserted-by":"publisher","DOI":"10.1145\/3415210"},{"key":"e_1_3_2_2_18_1","volume-title":"45th International Colloquium on Automata, Languages, and Programming (ICALP","author":"Celis L Elisa","year":"2018","unstructured":"L Elisa Celis, Damian Straszak, and Nisheeth K Vishnoi. 2018. Ranking with Fairness Constraints. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik."},{"key":"e_1_3_2_2_19_1","doi-asserted-by":"publisher","DOI":"10.1145\/3287560.3287594"},{"key":"e_1_3_2_2_20_1","doi-asserted-by":"publisher","DOI":"10.1145\/3173574.3174225"},{"key":"e_1_3_2_2_21_1","unstructured":"Nicholas Diakopoulos, Daniel Trielli, Jennifer Stark, and Sean Mussenden. 2018. I Vote For-How Search Informs Our Choice of Candidate. In Digital Dominance: The Power of Google, Amazon, Facebook and Apple, M. Moore and D. Tambini (Eds.). 22."},{"key":"e_1_3_2_2_22_1","doi-asserted-by":"publisher","DOI":"10.1145\/3340531.3411962"},{"key":"e_1_3_2_2_23_1","doi-asserted-by":"publisher","DOI":"10.1145\/2090236.2090255"},{"key":"e_1_3_2_2_24_1","volume-title":"Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI","author":"Fjeld Jessica","year":"2020","unstructured":"Jessica Fjeld, Nele Achten, Hannah Hilligoss, Adam Nagy, and Madhulika Srikumar. 2020. 
Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI. Berkman Klein Center Research Publication 2020, 1 (2020). https:\/\/ssrn.com\/abstract=3518482"},{"key":"e_1_3_2_2_25_1","volume-title":"Equalizing gender biases in neural machine translation with word embeddings techniques. arXiv preprint arXiv","author":"Font Joel Escud\u00e9","year":"2019","unstructured":"Joel Escud\u00e9 Font and Marta R Costa-Jussa. 2019. Equalizing gender biases in neural machine translation with word embeddings techniques. arXiv preprint arXiv:1901.03116 (2019)."},{"key":"e_1_3_2_2_26_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICDE48307.2020.00203"},{"key":"e_1_3_2_2_27_1","doi-asserted-by":"publisher","DOI":"10.1145\/3287560.3287589"},{"key":"e_1_3_2_2_28_1","doi-asserted-by":"publisher","DOI":"10.1145\/3351095.3372862"},{"key":"e_1_3_2_2_29_1","doi-asserted-by":"publisher","DOI":"10.1145\/3292500.3330691"},{"key":"e_1_3_2_2_30_1","unstructured":"Avijit Ghosh, Lea Genuit, and Mary Reagan. 2021. Characterizing Intersectional Group Fairness with Worst-Case Comparisons. arXiv:2101.01673 [cs.LG]"},{"key":"e_1_3_2_2_31_1","doi-asserted-by":"publisher","DOI":"10.1145\/3278721.3278722"},{"key":"e_1_3_2_2_32_1","doi-asserted-by":"publisher","DOI":"10.1145\/3351095.3372826"},{"key":"e_1_3_2_2_33_1","volume-title":"Proc. of CSCW.","author":"Hann\u00e1k Anik\u00f3","year":"2017","unstructured":"Anik\u00f3 Hann\u00e1k, Claudia Wagner, David Garcia, Alan Mislove, Markus Strohmaier, and Christo Wilson. 2017. Bias in Online Freelance Marketplaces: Evidence from Task Rabbit and Fiverr. In Proc. of CSCW."},{"key":"e_1_3_2_2_34_1","unstructured":"Moritz Hardt, Eric Price, and Nati Srebro. 2016. Equality of opportunity in supervised learning. In Advances in neural information processing systems. 
3315--3323."},{"key":"e_1_3_2_2_35_1","volume-title":"Terms of inclusion: Data, discourse, violence","author":"Hoffmann Anna Lauren","year":"2020","unstructured":"Anna Lauren Hoffmann. 2020. Terms of inclusion: Data, discourse, violence. New Media & Society (Sept. 2020)."},{"key":"e_1_3_2_2_36_1","volume-title":"Bryan He, Dan Jurafsky, and Daniel A McFarland.","author":"Hofstra Bas","year":"2020","unstructured":"Bas Hofstra, Vivek V Kulkarni, Sebastian Munoz-Najar Galvez, Bryan He, Dan Jurafsky, and Daniel A McFarland. 2020. The Diversity--Innovation Paradox in Science. Proceedings of the National Academy of Sciences 117, 17 (2020), 9284--9291."},{"key":"e_1_3_2_2_37_1","unstructured":"Lily Hu and Issa Kohler-Hausmann. 2020. What's Sex Got To Do With Machine Learning. arXiv preprint arXiv:2006.01770 (2020)."},{"key":"e_1_3_2_2_38_1","unstructured":"Lingxiao Huang and Nisheeth K Vishnoi. 2019. Stable and fair classification. arXiv preprint arXiv:1902.07823 (2019)."},{"key":"e_1_3_2_2_39_1","doi-asserted-by":"publisher","DOI":"10.1145\/3392854"},{"key":"e_1_3_2_2_40_1","volume-title":"International Conference on Machine Learning. PMLR, 1617--1626","author":"Jabbari Shahin","year":"2017","unstructured":"Shahin Jabbari, Matthew Joseph, Michael Kearns, Jamie Morgenstern, and Aaron Roth. 2017. Fairness in reinforcement learning. In International Conference on Machine Learning. PMLR, 1617--1626."},{"key":"e_1_3_2_2_41_1","volume-title":"Cumulated gain-based evaluation of IR techniques. ACM Transactions on Information Systems (TOIS) 20, 4","author":"J\u00e4rvelin Kalervo","year":"2002","unstructured":"Kalervo J\u00e4rvelin and Jaana Kek\u00e4l\u00e4inen. 2002. Cumulated gain-based evaluation of IR techniques. 
ACM Transactions on Information Systems (TOIS) 20, 4 (2002), 422--446."},{"key":"e_1_3_2_2_42_1","doi-asserted-by":"publisher","DOI":"10.1145\/3351095.3372829"},{"key":"e_1_3_2_2_43_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-642-33486-3_3"},{"key":"e_1_3_2_2_44_1","volume-title":"Fairface: Face attribute dataset for balanced race, gender, and age. arXiv preprint arXiv:1908.04913 (2019).","author":"K\u00e4rkk\u00e4inen Kimmo","year":"2019","unstructured":"Kimmo K\u00e4rkk\u00e4inen and Jungseock Joo. 2019. Fairface: Face attribute dataset for balanced race, gender, and age. arXiv preprint arXiv:1908.04913 (2019)."},{"key":"e_1_3_2_2_45_1","volume-title":"Proc. of HT.","author":"Kawakami Anna","year":"2020","unstructured":"Anna Kawakami, Khonzoda Umarova, Dongchen Huang, and Eni Mustafaraj. 2020. The 'Fairness Doctrine' Lives on? Theorizing about the Algorithmic News Curation of Google's Top Stories. In Proc. of HT."},{"key":"e_1_3_2_2_46_1","volume-title":"Munson","author":"Kay Matthew","year":"2015","unstructured":"Matthew Kay, Cynthia Matuszek, and Sean A. Munson. 2015. Unequal Representation and Gender Stereotypes in Image Search Results for Occupations. In Proc. of CHI."},{"key":"e_1_3_2_2_47_1","doi-asserted-by":"publisher","DOI":"10.3115\/v1"},{"key":"e_1_3_2_2_48_1","doi-asserted-by":"publisher","DOI":"10.1145\/3308558.3313443"},{"key":"e_1_3_2_2_49_1","doi-asserted-by":"publisher","DOI":"10.1145\/2998181.2998321"},{"key":"e_1_3_2_2_50_1","doi-asserted-by":"publisher","DOI":"10.1145\/1367497.1367620"},{"key":"e_1_3_2_2_51_1","unstructured":"Joshua R Loftus, Chris Russell, Matt J Kusner, and Ricardo Silva. 2018. Causal reasoning for algorithmic fairness. arXiv preprint arXiv:1805.05859 (2018)."},{"key":"e_1_3_2_2_52_1","first-page":"I","article-title":"A Unified Approach to Interpreting Model Predictions","volume":"30","author":"Lundberg Scott M","year":"2017","unstructured":"Scott M Lundberg and Su-In Lee. 2017. 
A Unified Approach to Interpreting Model Predictions. In Advances in Neural Information Processing Systems 30, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.). Curran Associates, Inc., 4765--4774. http:\/\/papers.nips.cc\/paper\/7062-a-unified-approach-to-interpreting-model-predictions.pdf","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_2_53_1","doi-asserted-by":"publisher","DOI":"10.1145\/3201064.3201095"},{"key":"e_1_3_2_2_54_1","volume-title":"Evaluation in information retrieval","author":"Manning Christopher D.","unstructured":"Christopher D. Manning, Prabhakar Raghavan, and Hinrich Sch\u00fctze. 2009. Evaluation in information retrieval. Cambridge University Press, Chapter 8, 151--175."},{"key":"e_1_3_2_2_55_1","unstructured":"Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. 2019. A survey on bias and fairness in machine learning. arXiv preprint arXiv:1908.09635 (2019)."},{"key":"e_1_3_2_2_56_1","doi-asserted-by":"crossref","unstructured":"Anay Mehrotra and L Elisa Celis. 2020. Mitigating Bias in Set Selection with Noisy Protected Attributes. arXiv preprint arXiv:2011.04219 (2020).","DOI":"10.1145\/3442188.3445887"},{"key":"e_1_3_2_2_57_1","volume-title":"Conference on Fairness, Accountability and Transparency. 107--118","author":"Menon Aditya Krishna","year":"2018","unstructured":"Aditya Krishna Menon and Robert C Williamson. 2018. The cost of fairness in binary classification. In Conference on Fairness, Accountability and Transparency. 107--118."},{"key":"e_1_3_2_2_58_1","doi-asserted-by":"crossref","unstructured":"Marco Morik, Ashudeep Singh, Jessica Hong, and Thorsten Joachims. 2020. 
Controlling Fairness and Bias in Dynamic Learning-to-Rank. arXiv preprint arXiv:2005.14713 (2020).","DOI":"10.24963\/ijcai.2021\/655"},{"key":"e_1_3_2_2_59_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-15719-7_23"},{"key":"e_1_3_2_2_60_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v32i1.11553"},{"key":"e_1_3_2_2_61_1","unstructured":"Jakob Nielsen. 2003. Usability 101: introduction to usability. Jakob Nielsen's Alertbox."},{"key":"e_1_3_2_2_62_1","volume-title":"Dissecting racial bias in an algorithm used to manage the health of populations. Science 366 (Oct","author":"Obermeyer Ziad","year":"2019","unstructured":"Ziad Obermeyer, Brian Powers, Christine Vogeli, and Sendhil Mullainathan. 2019. Dissecting racial bias in an algorithm used to manage the health of populations. Science 366 (Oct. 2019)."},{"key":"e_1_3_2_2_63_1","volume-title":"An intelligence in our image: The risks of bias and errors in artificial intelligence","author":"Osoba Osonde A","unstructured":"Osonde A Osoba and William Welser IV. 2017. An intelligence in our image: The risks of bias and errors in artificial intelligence. Rand Corporation."},{"key":"e_1_3_2_2_64_1","unstructured":"Amifa Raj, Connor Wood, Ananda Montoly, and Michael D Ekstrand. 2020. Comparing Fair Ranking Metrics. arXiv preprint arXiv:2009.01311 (2020)."},{"key":"e_1_3_2_2_65_1","doi-asserted-by":"publisher","DOI":"10.1145\/2939672.2939778"},{"key":"e_1_3_2_2_66_1","volume-title":"CSCW (November","author":"Robertson Ronald E","year":"2018","unstructured":"Ronald E Robertson, Shan Jiang, Kenneth Joseph, Lisa Friedland, David Lazer, and Christo Wilson. 2018. 
Auditing Partisan Audience Bias within Google Search. Proceedings of the ACM: Human-Computer Interaction 2, CSCW (November 2018)."},{"key":"e_1_3_2_2_67_1","doi-asserted-by":"publisher","DOI":"10.1145\/3292522.3326047"},{"key":"e_1_3_2_2_68_1","doi-asserted-by":"publisher","DOI":"10.1145\/3178876.3186143"},{"key":"e_1_3_2_2_69_1","volume-title":"Tackling the problem of classification with noisy data using multiple classifier systems: Analysis of the performance and robustness. Information Sciences 247","author":"S\u00e1ez Jos\u00e9 A","year":"2013","unstructured":"Jos\u00e9 A S\u00e1ez, Mikel Galar, Juli\u00e1n Luengo, and Francisco Herrera. 2013. Tackling the problem of classification with noisy data using multiple classifier systems: Analysis of the performance and robustness. Information Sciences 247 (2013), 1--20."},{"key":"e_1_3_2_2_70_1","doi-asserted-by":"publisher","DOI":"10.7717\/peerj-cs.156"},{"key":"e_1_3_2_2_71_1","volume-title":"Quantifying the Impact of User Attention on Fair Group Representation in Ranked Lists. In Companion Proceedings of The 2019 World Wide Web Conference. 553--562","author":"Sapiezynski Piotr","year":"2019","unstructured":"Piotr Sapiezynski, Wesley Zeng, Ronald E Robertson, Alan Mislove, and Christo Wilson. 2019. Quantifying the Impact of User Attention on Fair Group Representation in Ranked Lists. In Companion Proceedings of The 2019 World Wide Web Conference. 553--562."},{"key":"e_1_3_2_2_72_1","volume-title":"Light Face: A Hybrid Deep Face Recognition Framework. In 2020 Innovations in Intelligent Systems and Applications Conference (ASYU). IEEE.","author":"Serengil Sefik Ilkin","year":"2020","unstructured":"Sefik Ilkin Serengil and Alper Ozpinar. 2020. Light Face: A Hybrid Deep Face Recognition Framework. In 2020 Innovations in Intelligent Systems and Applications Conference (ASYU). 
IEEE."},{"key":"e_1_3_2_2_73_1","doi-asserted-by":"publisher","DOI":"10.1145\/3219819.3220088"},{"key":"e_1_3_2_2_74_1","unstructured":"Gaurav Sood and Suriyan Laohaprapanon. 2018. Predicting Race and Ethnicity From the Sequence of Characters in a Name. arXiv:1805.02109 [stat.AP]"},{"key":"e_1_3_2_2_75_1","unstructured":"Lisa Stryjewski. 2010. 40 years of box plots. (2010)."},{"key":"e_1_3_2_2_76_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2014.220"},{"key":"e_1_3_2_2_77_1","volume-title":"The what-if tool: Interactive probing of machine learning models","author":"Wexler James","year":"2019","unstructured":"James Wexler, Mahima Pushkarna, Tolga Bolukbasi, Martin Wattenberg, Fernanda Vi\u00e9gas, and Jimbo Wilson. 2019. The what-if tool: Interactive probing of machine learning models. IEEE transactions on visualization and computer graphics 26, 1 (2019), 56--65."},{"key":"e_1_3_2_2_78_1","doi-asserted-by":"publisher","DOI":"10.1145\/3085504.3085526"},{"key":"e_1_3_2_2_79_1","doi-asserted-by":"publisher","DOI":"10.1145\/3132847.3133008"},{"key":"e_1_3_2_2_80_1","volume-title":"Manuel Gomez Rodriguez, and Krishna P Gummadi","author":"Zafar Muhammad Bilal","year":"2017","unstructured":"Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, and Krishna P Gummadi. 2017. Fairness constraints: Mechanisms for fair classification. In Artificial Intelligence and Statistics. 
PMLR, 962--970."},{"key":"e_1_3_2_2_81_1","doi-asserted-by":"publisher","DOI":"10.1145\/3132847.3132938"},{"key":"e_1_3_2_2_82_1","doi-asserted-by":"publisher","DOI":"10.1145\/3366424.3380048"}],"event":{"name":"SIGIR '21: The 44th International ACM SIGIR Conference on Research and Development in Information Retrieval","location":"Virtual Event Canada","acronym":"SIGIR '21","sponsor":["SIGIR ACM Special Interest Group on Information Retrieval"]},"container-title":["Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3404835.3462850","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3404835.3462850","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T20:47:17Z","timestamp":1750193237000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3404835.3462850"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,7,11]]},"references-count":82,"alternative-id":["10.1145\/3404835.3462850","10.1145\/3404835"],"URL":"https:\/\/doi.org\/10.1145\/3404835.3462850","relation":{},"subject":[],"published":{"date-parts":[[2021,7,11]]},"assertion":[{"value":"2021-07-11","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}