{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,14]],"date-time":"2026-04-14T00:41:34Z","timestamp":1776127294322,"version":"3.50.1"},"reference-count":89,"publisher":"Association for Computing Machinery (ACM)","issue":"3","license":[{"start":{"date-parts":[[2023,3,20]],"date-time":"2023-03-20T00:00:00Z","timestamp":1679270400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"NSF","award":["IIS-1939716, and IIS-1900990"],"award-info":[{"award-number":["IIS-1939716, and IIS-1900990"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Knowl. Discov. Data"],"published-print":{"date-parts":[[2023,4,30]]},"abstract":"<jats:p>Machine learning models are becoming pervasive in high-stakes applications. Despite their clear benefits in terms of performance, the models could show discrimination against minority groups and result in fairness issues in a decision-making process, leading to severe negative impacts on the individuals and the society. In recent years, various techniques have been developed to mitigate the unfairness for machine learning models. Among them, in-processing methods have drawn increasing attention from the community, where fairness is directly taken into consideration during model design to induce intrinsically fair models and fundamentally mitigate fairness issues in outputs and representations. In this survey, we review the current progress of in-processing fairness mitigation techniques. Based on where the fairness is achieved in the model, we categorize them into explicit and implicit methods, where the former directly incorporates fairness metrics in training objectives, and the latter focuses on refining latent representation learning. 
Finally, we conclude the survey with a discussion of the research challenges in this community to motivate future exploration.<\/jats:p>","DOI":"10.1145\/3551390","type":"journal-article","created":{"date-parts":[[2022,7,30]],"date-time":"2022-07-30T11:03:01Z","timestamp":1659178981000},"page":"1-27","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":81,"title":["In-Processing Modeling Techniques for Machine Learning Fairness: A Survey"],"prefix":"10.1145","volume":"17","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-1536-3643","authenticated-orcid":false,"given":"Mingyang","family":"Wan","sequence":"first","affiliation":[{"name":"Department of Computer Science and Engineering, Texas A&amp;M University"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6677-7504","authenticated-orcid":false,"given":"Daochen","family":"Zha","sequence":"additional","affiliation":[{"name":"Department of Computer Science, Rice University"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9170-2424","authenticated-orcid":false,"given":"Ninghao","family":"Liu","sequence":"additional","affiliation":[{"name":"Department of Computer Science, University of Georgia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1984-795X","authenticated-orcid":false,"given":"Na","family":"Zou","sequence":"additional","affiliation":[{"name":"Department of Engineering Technology and Industrial Distribution, Texas A&amp;M University"}]}],"member":"320","published-online":{"date-parts":[[2023,3,20]]},"reference":[{"key":"e_1_3_2_2_2","doi-asserted-by":"publisher","DOI":"10.1109\/HPCA51647.2021.00072"},{"key":"e_1_3_2_3_2","first-page":"50","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Adel Tameem","year":"2018","unstructured":"Tameem Adel, Zoubin Ghahramani, and Adrian Weller. 2018. Discovering interpretable representations for both deep generative and discriminative models. 
In Proceedings of the International Conference on Machine Learning. PMLR, 50\u201359."},{"key":"e_1_3_2_4_2","first-page":"120","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Agarwal Alekh","year":"2019","unstructured":"Alekh Agarwal, Miroslav Dud\u00edk, and Zhiwei Steven Wu. 2019. Fair regression: Quantitative definitions and reduction-based algorithms. In Proceedings of the International Conference on Machine Learning. PMLR, 120\u2013129."},{"key":"e_1_3_2_5_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v33i01.33011418"},{"issue":"2016","key":"e_1_3_2_6_2","first-page":"139","article-title":"Machine bias","volume":"23","author":"Angwin Julia","year":"2016","unstructured":"Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner. 2016. Machine bias. ProPublica, May 23, 2016 (2016), 139\u2013159.","journal-title":"ProPublica, May"},{"key":"e_1_3_2_7_2","first-page":"405","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Backurs Arturs","year":"2019","unstructured":"Arturs Backurs, Piotr Indyk, Krzysztof Onak, Baruch Schieber, Ali Vakilian, and Tal Wagner. 2019. Scalable fair clustering. In Proceedings of the International Conference on Machine Learning. PMLR, 405\u2013413."},{"key":"e_1_3_2_8_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2013.50"},{"key":"e_1_3_2_9_2","unstructured":"Richard Berk Hoda Heidari Shahin Jabbari Matthew Joseph Michael Kearns Jamie Morgenstern Seth Neel and Aaron Roth. 2017. A convex framework for fair regression. arXiv:1706.02409. 
Retrieved from https:\/\/arxiv.org\/abs\/1706.02409."},{"key":"e_1_3_2_10_2","doi-asserted-by":"publisher","DOI":"10.1177\/0049124118782533"},{"key":"e_1_3_2_11_2","doi-asserted-by":"publisher","DOI":"10.1145\/3292500.3330745"},{"key":"e_1_3_2_12_2","doi-asserted-by":"publisher","DOI":"10.1145\/3306618.3314234"},{"key":"e_1_3_2_13_2","doi-asserted-by":"publisher","DOI":"10.1145\/3351095.3372864"},{"key":"e_1_3_2_14_2","doi-asserted-by":"publisher","DOI":"10.1145\/3468264.3468536"},{"key":"e_1_3_2_15_2","doi-asserted-by":"publisher","DOI":"10.1109\/MSP.2021.3106615"},{"key":"e_1_3_2_16_2","first-page":"715","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Bose Avishek","year":"2019","unstructured":"Avishek Bose and William Hamilton. 2019. Compositional fairness constraints for graph embeddings. In Proceedings of the International Conference on Machine Learning. PMLR, 715\u2013724."},{"key":"e_1_3_2_17_2","unstructured":"Sabri Boughorbel Fethi Jarray and Abdou Kadri. 2021. Fairness in TabNet model by disentangled representation for the prediction of hospital no-show. arXiv:2103.04048. Retrieved from https:\/\/arxiv.org\/abs\/2103.04048."},{"key":"e_1_3_2_18_2","doi-asserted-by":"publisher","DOI":"10.24963\/ijcai.2019\/285"},{"key":"e_1_3_2_19_2","unstructured":"Simon Caton and Christian Haas. 2020. Fairness in machine learning: A survey. arXiv:2010.04053. Retrieved from https:\/\/arxiv.org\/abs\/2010.04053."},{"key":"e_1_3_2_20_2","volume-title":"Proceedings of the International Conference on Learning Representations","author":"Cheng Pengyu","year":"2021","unstructured":"Pengyu Cheng, Weituo Hao, Siyang Yuan, Shijing Si, and Lawrence Carin. 2021. FairFil: Contrastive neural debiasing method for pretrained text encoders. 
In Proceedings of the International Conference on Learning Representations."},{"key":"e_1_3_2_21_2","article-title":"Fair clustering through fairlets","volume":"30","author":"Chierichetti Flavio","year":"2017","unstructured":"Flavio Chierichetti, Ravi Kumar, Silvio Lattanzi, and Sergei Vassilvitskii. 2017. Fair clustering through fairlets. Advances in Neural Information Processing Systems 30 (2017).","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_22_2","doi-asserted-by":"publisher","DOI":"10.1089\/big.2016.0047"},{"key":"e_1_3_2_23_2","doi-asserted-by":"publisher","DOI":"10.5555\/1050985"},{"key":"e_1_3_2_24_2","doi-asserted-by":"publisher","DOI":"10.1145\/3097983.3098095"},{"key":"e_1_3_2_25_2","doi-asserted-by":"crossref","unstructured":"Anupam Datta Matt Fredrikson Gihyuk Ko Piotr Mardziel and Shayak Sen. 2017. Proxy non-discrimination in data-driven systems. arXiv:1707.08120. Retrieved from https:\/\/arxiv.org\/abs\/1707.08120.","DOI":"10.1145\/3133956.3134097"},{"key":"e_1_3_2_26_2","volume-title":"Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics","author":"Devlin Jacob","year":"2019","unstructured":"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics."},{"key":"e_1_3_2_27_2","unstructured":"Pietro G. Di Stefano James M. Hickey and Vlasios Vasileiou. 2020. Counterfactual fairness: Removing direct effects through regularization. arXiv:2002.10774. 
Retrieved from https:\/\/arxiv.org\/abs\/2002.10774."},{"key":"e_1_3_2_28_2","doi-asserted-by":"publisher","DOI":"10.1145\/3359786"},{"key":"e_1_3_2_29_2","article-title":"Fairness via representation neutralization","volume":"34","author":"Du Mengnan","year":"2021","unstructured":"Mengnan Du, Subhabrata Mukherjee, Guanchu Wang, Ruixiang Tang, Ahmed Awadallah, and Xia Hu. 2021. Fairness via representation neutralization. Advances in Neural Information Processing Systems 34 (2021).","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_30_2","article-title":"Fairness in deep learning: A computational perspective","author":"Du Mengnan","year":"2020","unstructured":"Mengnan Du, Fan Yang, Na Zou, and Xia Hu. 2020. Fairness in deep learning: A computational perspective. IEEE Intelligent Systems (2020).","journal-title":"IEEE Intelligent Systems"},{"key":"e_1_3_2_31_2","doi-asserted-by":"publisher","DOI":"10.1145\/2090236.2090255"},{"key":"e_1_3_2_32_2","unstructured":"Harrison Edwards and Amos Storkey. 2015. Censoring representations with an adversary. arXiv:1511.05897. Retrieved from https:\/\/arxiv.org\/abs\/1511.05897."},{"key":"e_1_3_2_33_2","doi-asserted-by":"crossref","unstructured":"Yanai Elazar and Yoav Goldberg. 2018. Adversarial removal of demographic attributes from text data. arXiv:1808.06640. Retrieved from https:\/\/arxiv.org\/abs\/1808.06640.","DOI":"10.18653\/v1\/D18-1002"},{"key":"e_1_3_2_34_2","doi-asserted-by":"publisher","DOI":"10.1145\/2783258.2783311"},{"key":"e_1_3_2_35_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICDE48307.2020.00203"},{"key":"e_1_3_2_36_2","doi-asserted-by":"publisher","DOI":"10.1145\/3447548.3467349"},{"key":"e_1_3_2_37_2","doi-asserted-by":"publisher","DOI":"10.1145\/3306618.3317950"},{"key":"e_1_3_2_38_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-981-33-6546-9"},{"key":"e_1_3_2_39_2","unstructured":"Maya Gupta Andrew Cotter Mahdi Milani Fard and Serena Wang. 2018. Proxy fairness. 
arXiv:1806.11212. Retrieved from https:\/\/arxiv.org\/abs\/1806.11212."},{"key":"e_1_3_2_40_2","article-title":"Equality of opportunity in supervised learning","volume":"29","author":"Hardt Moritz","year":"2016","unstructured":"Moritz Hardt, Eric Price, and Nati Srebro. 2016. Equality of opportunity in supervised learning. Advances in Neural Information Processing Systems 29 (2016).","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_41_2","first-page":"1929","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Hashimoto Tatsunori","year":"2018","unstructured":"Tatsunori Hashimoto, Megha Srivastava, Hongseok Namkoong, and Percy Liang. 2018. Fairness without demographics in repeated loss minimization. In Proceedings of the International Conference on Machine Learning. PMLR, 1929\u20131938."},{"key":"e_1_3_2_42_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.90"},{"key":"e_1_3_2_43_2","doi-asserted-by":"publisher","DOI":"10.1145\/3038912.3052569"},{"key":"e_1_3_2_44_2","first-page":"862","volume-title":"Uncertainty in Artificial Intelligence","author":"Jiang Ray","year":"2020","unstructured":"Ray Jiang, Aldo Pacchiano, Tom Stepleton, Heinrich Jiang, and Silvia Chiappa. 2020. Wasserstein fair classification. In Uncertainty in Artificial Intelligence. PMLR, 862\u2013872."},{"key":"e_1_3_2_45_2","unstructured":"Zhimeng Jiang Xiaotian Han Chao Fan Zirui Liu Na Zou Ali Mostafavi and Xia Hu. 2022. FMP: Toward fair graph message passing against topology bias. arXiv:2202.04187. 
Retrieved from https:\/\/arxiv.org\/abs\/2202.04187."},{"key":"e_1_3_2_46_2","doi-asserted-by":"publisher","DOI":"10.1145\/3391403.3399473"},{"key":"e_1_3_2_47_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10115-011-0463-8"},{"key":"e_1_3_2_48_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICDMW.2011.83"},{"key":"e_1_3_2_49_2","first-page":"2564","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Kearns Michael","year":"2018","unstructured":"Michael Kearns, Seth Neel, Aaron Roth, and Zhiwei Steven Wu. 2018. Preventing fairness gerrymandering: Auditing and learning for subgroup fairness. In Proceedings of the International Conference on Machine Learning. PMLR, 2564\u20132572."},{"key":"e_1_3_2_50_2","doi-asserted-by":"publisher","DOI":"10.1145\/3287560.3287592"},{"key":"e_1_3_2_51_2","volume-title":"Proceedings of the International Conference on Neural Information Processing Systems","author":"Kim Michael P.","year":"2018","unstructured":"Michael P. Kim, Omer Reingold, and Guy N. Rothblum. 2018. Fairness through computationally-bounded awareness. In Proceedings of the International Conference on Neural Information Processing Systems."},{"key":"e_1_3_2_52_2","doi-asserted-by":"crossref","unstructured":"Svetlana Kiritchenko and Saif M. Mohammad. 2018. Examining gender and race bias in two hundred sentiment analysis systems. arXiv:1805.04508. Retrieved from https:\/\/arxiv.org\/abs\/1805.04508.","DOI":"10.18653\/v1\/S18-2005"},{"key":"e_1_3_2_53_2","doi-asserted-by":"publisher","DOI":"10.1145\/3292500.3330899"},{"key":"e_1_3_2_54_2","unstructured":"\u00d6yk\u00fc Deniz K\u00f6se and Yanning Shen. 2021. Fairness-aware node representation learning. arXiv:2106.05391. Retrieved from https:\/\/arxiv.org\/abs\/2106.05391."},{"key":"e_1_3_2_55_2","unstructured":"Matt J. Kusner Joshua R. Loftus Chris Russell and Ricardo Silva. 2017. Counterfactual fairness. arXiv:1703.06856. 
Retrieved from https:\/\/arxiv.org\/abs\/1703.06856."},{"key":"e_1_3_2_56_2","volume-title":"Proceedings of the International Conference on Neural Information Processing Systems","author":"Lahoti Preethi","year":"2020","unstructured":"Preethi Lahoti, Alex Beutel, Jilin Chen, Kang Lee, Flavien Prost, Nithum Thain, Xuezhi Wang, and Ed H. Chi. 2020. Fairness without demographics through adversarially reweighted learning. In Proceedings of the International Conference on Neural Information Processing Systems."},{"key":"e_1_3_2_57_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.00909"},{"key":"e_1_3_2_58_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/P19-1631"},{"key":"e_1_3_2_59_2","first-page":"14611","volume-title":"Proceedings of the Advances in Neural Information Processing Systems","volume":"32","author":"Locatello Francesco","year":"2019","unstructured":"Francesco Locatello, Gabriele Abbati, Thomas Rainforth, Stefan Bauer, Bernhard Sch\u00f6lkopf, and Olivier Bachem. 2019. On the fairness of disentangled representations. In Proceedings of the Advances in Neural Information Processing Systems, Vol. 32. 14611\u201314624."},{"key":"e_1_3_2_60_2","doi-asserted-by":"publisher","DOI":"10.1145\/2020408.2020488"},{"key":"e_1_3_2_61_2","first-page":"3384","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Madras David","year":"2018","unstructured":"David Madras, Elliot Creager, Toniann Pitassi, and Richard Zemel. 2018. Learning adversarially fair and transferable representations. In Proceedings of the International Conference on Machine Learning. 
PMLR, 3384\u20133393."},{"key":"e_1_3_2_62_2","doi-asserted-by":"publisher","DOI":"10.1145\/3457607"},{"key":"e_1_3_2_63_2","doi-asserted-by":"publisher","DOI":"10.1111\/1475-3995.00375"},{"key":"e_1_3_2_64_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v32i1.11553"},{"key":"e_1_3_2_65_2","doi-asserted-by":"publisher","DOI":"10.1145\/3306618.3314277"},{"key":"e_1_3_2_66_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v35i3.16341"},{"key":"e_1_3_2_67_2","doi-asserted-by":"publisher","DOI":"10.5555\/1642718"},{"key":"e_1_3_2_68_2","unstructured":"Dana Pessach and Erez Shmueli. 2020. Algorithmic fairness. arXiv:2001.09784. Retrieved from https:\/\/arxiv.org\/abs\/2001.09784."},{"key":"e_1_3_2_69_2","volume-title":"Proceedings of the International Conference on Neural Information Processing Systems","author":"Pleiss Geoff","year":"2017","unstructured":"Geoff Pleiss, Manish Raghavan, Felix Wu, Jon Kleinberg, and Kilian Q. Weinberger. 2017. On fairness and calibration. In Proceedings of the International Conference on Neural Information Processing Systems."},{"key":"e_1_3_2_70_2","doi-asserted-by":"publisher","DOI":"10.1145\/3351095.3372828"},{"key":"e_1_3_2_71_2","doi-asserted-by":"publisher","DOI":"10.2307\/j.ctv31xf5v0"},{"key":"e_1_3_2_72_2","unstructured":"Yuji Roh Kangwook Lee Steven Euijong Whang and Changho Suh. 2020. Fairbatch: Batch selection for model fairness. arXiv:2012.01696. Retrieved from https:\/\/arxiv.org\/abs\/2012.01696."},{"key":"e_1_3_2_73_2","volume-title":"Proceedings of the International Joint Conference on Artificial Intelligence","author":"Ross Andrew Slavin","year":"2017","unstructured":"Andrew Slavin Ross, Michael C. Hughes, and Finale Doshi-Velez. 2017. Right for the right reasons: Training differentiable models by constraining their explanations. In Proceedings of the International Joint Conference on Artificial Intelligence."},{"key":"e_1_3_2_74_2","unstructured":"Cropanzano Russell. 2001. 
Three roads to organizational justice. (2001)."},{"key":"e_1_3_2_75_2","unstructured":"Shiori Sagawa Pang Wei Koh Tatsunori B. Hashimoto and Percy Liang. 2019. Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization. arXiv:1911.08731. Retrieved from https:\/\/arxiv.org\/abs\/1911.08731."},{"key":"e_1_3_2_76_2","unstructured":"Melanie Schmidt Chris Schwiegelshohn and Christian Sohler. 2018. Fair coresets and streaming algorithms for fair k-means clustering. arXiv:1812.10854. Retrieved from https:\/\/arxiv.org\/abs\/1812.10854."},{"key":"e_1_3_2_77_2","doi-asserted-by":"publisher","DOI":"10.1109\/CDC.2016.7798400"},{"key":"e_1_3_2_78_2","article-title":"Interfacegan: Interpreting the disentangled face representation learned by gans","author":"Shen Yujun","year":"2020","unstructured":"Yujun Shen, Ceyuan Yang, Xiaoou Tang, and Bolei Zhou. 2020. Interfacegan: Interpreting the disentangled face representation learned by gans. IEEE Transactions on Pattern Analysis and Machine Intelligence (2020).","journal-title":"IEEE Transactions on Pattern Analysis and Machine Intelligence"},{"key":"e_1_3_2_79_2","doi-asserted-by":"publisher","DOI":"10.1145\/3351095.3372837"},{"key":"e_1_3_2_80_2","unstructured":"Yao-Hung Hubert Tsai Martin Q. Ma Han Zhao Kun Zhang Louis-Philippe Morency and Ruslan Salakhutdinov. 2021. Conditional contrastive learning: Removing undesirable information in self-supervised representations. arXiv:2106.02866. Retrieved from https:\/\/arxiv.org\/abs\/2106.02866."},{"key":"e_1_3_2_81_2","unstructured":"Christina Wadsworth Francesca Vera and Chris Piech. 2018. Achieving fairness through adversarial learning: An application to recidivism prediction. arXiv:1807.00199. Retrieved from https:\/\/arxiv.org\/abs\/1807.00199."},{"key":"e_1_3_2_82_2","unstructured":"Angelina Wang and Olga Russakovsky. 2021. Directional bias amplification. arXiv:2102.12594. 
Retrieved from https:\/\/arxiv.org\/abs\/2102.12594."},{"key":"e_1_3_2_83_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2019.00541"},{"key":"e_1_3_2_84_2","first-page":"8783","volume-title":"Proceedings of the International Conference on Neural Information Processing Systems","author":"Wick Michael","year":"2019","unstructured":"Michael Wick, Swetasudha Panda, and Jean-Baptiste Tristan. 2019. Unlocking fairness: A trade-off revisited. In Proceedings of the International Conference on Neural Information Processing Systems. 8783\u20138792."},{"key":"e_1_3_2_85_2","doi-asserted-by":"publisher","DOI":"10.1145\/3038912.3052660"},{"key":"e_1_3_2_86_2","doi-asserted-by":"publisher","DOI":"10.5555\/3322706.3362016"},{"key":"e_1_3_2_87_2","first-page":"962","volume-title":"Proceedings of the Artificial Intelligence and Statistics","author":"Zafar Muhammad Bilal","year":"2017","unstructured":"Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rogriguez, and Krishna P. Gummadi. 2017. Fairness constraints: Mechanisms for fair classification. In Proceedings of the Artificial Intelligence and Statistics. 
PMLR, 962\u2013970."},{"key":"e_1_3_2_88_2","doi-asserted-by":"publisher","DOI":"10.1145\/3278721.3278779"},{"key":"e_1_3_2_89_2","doi-asserted-by":"publisher","DOI":"10.1109\/TKDE.2020.3002567"},{"key":"e_1_3_2_90_2","doi-asserted-by":"publisher","DOI":"10.1145\/3447548.3467102"}],"container-title":["ACM Transactions on Knowledge Discovery from Data"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3551390","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3551390","content-type":"application\/pdf","content-version":"vor","intended-application":"syndication"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3551390","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T19:00:25Z","timestamp":1750186825000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3551390"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,3,20]]},"references-count":89,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2023,4,30]]}},"alternative-id":["10.1145\/3551390"],"URL":"https:\/\/doi.org\/10.1145\/3551390","relation":{},"ISSN":["1556-4681","1556-472X"],"issn-type":[{"value":"1556-4681","type":"print"},{"value":"1556-472X","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,3,20]]},"assertion":[{"value":"2021-11-01","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2022-07-04","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-03-20","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}