{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,14]],"date-time":"2026-03-14T09:49:44Z","timestamp":1773481784186,"version":"3.50.1"},"reference-count":82,"publisher":"Association for Computing Machinery (ACM)","issue":"CSCW2","license":[{"start":{"date-parts":[[2021,10,13]],"date-time":"2021-10-13T00:00:00Z","timestamp":1634083200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100006769","name":"Russian Science Foundation","doi-asserted-by":"crossref","award":["Project No. 19-18-00282"],"award-info":[{"award-number":["Project No. 19-18-00282"]}],"id":[{"id":"10.13039\/501100006769","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["Proc. ACM Hum.-Comput. Interact."],"published-print":{"date-parts":[[2021,10,13]]},"abstract":"<jats:p>Crowdsourcing is being increasingly adopted as a platform to run studies with human subjects. Running a crowdsourcing experiment involves several choices and strategies to successfully port an experimental design into an otherwise uncontrolled research environment, e.g., sampling crowd workers, mapping experimental conditions to micro-tasks, or ensuring quality contributions. While several guidelines inform researchers in these choices, guidance on how and what to report from crowdsourcing experiments has been largely overlooked. If under-reported, implementation choices constitute variability sources that can affect the experiment's reproducibility and prevent a fair assessment of research outcomes. In this paper, we examine the current state of reporting of crowdsourcing experiments and offer guidance to address associated reporting issues. 
We start by identifying sensible implementation choices, relying on existing literature and interviews with experts, to then extensively analyze the reporting of 171 crowdsourcing experiments. Informed by this process, we propose a checklist for reporting crowdsourcing experiments.<\/jats:p>","DOI":"10.1145\/3479531","type":"journal-article","created":{"date-parts":[[2021,10,19]],"date-time":"2021-10-19T02:32:07Z","timestamp":1634610727000},"page":"1-34","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":16,"title":["On the State of Reporting in Crowdsourcing Experiments and a Checklist to Aid Current Practices"],"prefix":"10.1145","volume":"5","author":[{"given":"Jorge","family":"Ram\u00edrez","sequence":"first","affiliation":[{"name":"University of Trento, Trento, Italy"}]},{"given":"Burcu","family":"Sayin","sequence":"additional","affiliation":[{"name":"University of Trento, Trento, Italy"}]},{"given":"Marcos","family":"Baez","sequence":"additional","affiliation":[{"name":"Universit\u00e9 Claude Bernard Lyon 1, Villeurbanne, France"}]},{"given":"Fabio","family":"Casati","sequence":"additional","affiliation":[{"name":"Servicenow, Santa Clara, CA, USA"}]},{"given":"Luca","family":"Cernuzzi","sequence":"additional","affiliation":[{"name":"Catholic University of Asuncion, Asuncion, Paraguay"}]},{"given":"Boualem","family":"Benatallah","sequence":"additional","affiliation":[{"name":"University of New South Wales, Sydney, NSW, Australia"}]},{"given":"Gianluca","family":"Demartini","sequence":"additional","affiliation":[{"name":"University of Queensland, Brisbane, QLD, Australia"}]}],"member":"320","published-online":{"date-parts":[[2021,10,18]]},"reference":[{"key":"e_1_2_1_1_1","volume-title":"Statistics As Principled Argument","author":"Abelson Robert P.","unstructured":"Robert P. Abelson. 1995. Statistics As Principled Argument. Psychology Press. 
238 pages."},{"key":"e_1_2_1_2_1","doi-asserted-by":"publisher","DOI":"10.1147\/JRD.2019.2942288"},{"key":"e_1_2_1_3_1","volume-title":"Publication manual of the American Psychological Association","author":"American Psychological Association","unstructured":"American Psychological Association. 2010. Publication manual of the American Psychological Association (sixth ed.). American Psychological Association."},{"key":"e_1_2_1_4_1","volume-title":"Guidelines for performing Systematic Literature Reviews in Software Engineering. 2 (01","author":"Barbara Kitchenham","year":"2007","unstructured":"Kitchenham Barbara and Stuart Charters. 2007. Guidelines for performing Systematic Literature Reviews in Software Engineering. 2 (01 2007)."},{"key":"e_1_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1145\/3290605.3300773"},{"key":"e_1_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.1162\/tacl_a_00041"},{"key":"e_1_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.1177\/1745691617706516"},{"key":"e_1_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.1001\/jama.291.20.2457"},{"key":"e_1_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.3758\/s13428-013-0365--7"},{"key":"e_1_2_1_10_1","volume-title":"Weld","author":"Chen Quanze","year":"2018","unstructured":"Quanze Chen, Jonathan Bragg, Lydia B. Chilton, and Daniel S. Weld. 2018. Cicero: Multi-Turn, Contextual Argumentation for Accurate Crowdsourcing. CoRR abs\/1810.10733 (2018). arXiv:1810.10733 http:\/\/arxiv.org\/abs\/1810.10733"},{"key":"e_1_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.1371\/journal.pone.0057410"},{"key":"e_1_2_1_12_1","doi-asserted-by":"crossref","unstructured":"A. P. Dawid and A. M. Skene. 1979. Maximum Likelihood Estimation of Observer Error-Rates Using the EM Algorithm. Journal of the Royal Statistical Society. 
Series C Applied Statistics 28 1 (1979).","DOI":"10.2307\/2346806"},{"key":"e_1_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.1609\/hcomp.v2i1.13154"},{"key":"e_1_2_1_14_1","doi-asserted-by":"publisher","DOI":"10.1145\/3025453.3025870"},{"key":"e_1_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.1007\/978--3--642--36257--6"},{"key":"e_1_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.1609\/hcomp.v4i1.13270"},{"key":"e_1_2_1_17_1","volume-title":"Effective Crowd Annotation for Relation Extraction. In NAACL HLT","author":"Angli","year":"2016","unstructured":"Angli Liu et al. 2016. Effective Crowd Annotation for Relation Extraction. In NAACL HLT 2016."},{"key":"e_1_2_1_18_1","volume-title":"Incentivizing High Quality Crowdwork. In WWW","author":"Chien-Ju","year":"2015","unstructured":"Chien-Ju Ho et al. 2015. Incentivizing High Quality Crowdwork. In WWW 2015."},{"key":"e_1_2_1_19_1","volume-title":"Demographics and Dynamics of Mechanical Turk Workers. In WSDM","author":"Eddine Djellel","year":"2018","unstructured":"Djellel Eddine Difallah et al. 2018. Demographics and Dynamics of Mechanical Turk Workers. In WSDM 2018. 135--143."},{"key":"e_1_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.1145\/3148148"},{"key":"e_1_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.1145\/3289600.3291035"},{"key":"e_1_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.1145\/2145204.2145355"},{"key":"e_1_2_1_23_1","first-page":"143","article-title":"Working the crowd: employment and labor law in the crowdsourcing industry","author":"Felstiner Alek","year":"2011","unstructured":"Alek Felstiner. 2011. Working the crowd: employment and labor law in the crowdsourcing industry. Berkeley J. Emp. & Lab. L. 32 (2011), 143.","journal-title":"Berkeley J. Emp. 
& Lab."},{"key":"e_1_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1145\/3130914"},{"key":"e_1_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.1145\/3027385.3027402"},{"key":"e_1_2_1_26_1","doi-asserted-by":"publisher","DOI":"10.1145\/2631775.2631819"},{"key":"e_1_2_1_27_1","doi-asserted-by":"publisher","DOI":"10.1007\/978--3--319--66435--4_2"},{"key":"e_1_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.1145\/3078714.3078715"},{"key":"e_1_2_1_29_1","volume-title":"Hanna M. Wallach, Hal Daum\u00e9 III, and Kate Crawford.","author":"Gebru Timnit","year":"2018","unstructured":"Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna M. Wallach, Hal Daum\u00e9 III, and Kate Crawford. 2018. Datasheets for Datasets. CoRR abs\/1803.09010 (2018). arXiv:1803.09010 http:\/\/arxiv.org\/abs\/1803.09010"},{"key":"e_1_2_1_30_1","doi-asserted-by":"publisher","DOI":"10.1007\/978--1--4939-0378--8_9"},{"key":"e_1_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.1136\/medethics-2012--100798"},{"key":"e_1_2_1_32_1","doi-asserted-by":"publisher","DOI":"10.1145\/2441776.2441848"},{"key":"e_1_2_1_33_1","doi-asserted-by":"publisher","DOI":"10.1145\/3173574.3174023"},{"key":"e_1_2_1_34_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10683-011--9273--9"},{"key":"e_1_2_1_35_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.cosrev.2015.05.001"},{"key":"e_1_2_1_36_1","unstructured":"Christoph Hube Besnik Fetahu and Ujwal Gadiraju. 2019. Understanding and Mitigating Worker Biases in the Crowdsourced Collection of Subjective Judgments. (2019)."},{"key":"e_1_2_1_37_1","unstructured":"Panos Ipeirotis. 2010 (accessed August 26 2020). Mechanical Turk Low Wages and the Market for Lemons. 
https:\/\/www.behind-the-enemy-lines.com\/2010\/07\/mechanical-turk-low-wages-and-market.html"},{"key":"e_1_2_1_38_1","doi-asserted-by":"publisher","DOI":"10.1145\/1357054.1357127"},{"key":"e_1_2_1_39_1","doi-asserted-by":"publisher","DOI":"10.1145\/2441776.2441923"},{"key":"e_1_2_1_40_1","doi-asserted-by":"publisher","DOI":"10.1145\/2736277.2741681"},{"key":"e_1_2_1_41_1","doi-asserted-by":"publisher","DOI":"10.1145\/2858036.2858115"},{"key":"e_1_2_1_42_1","doi-asserted-by":"publisher","DOI":"10.1609\/hcomp.v4i1.13284"},{"key":"e_1_2_1_43_1","volume-title":"TurkServer: Enabling Synchronous and Longitudinal Online Experiments. In The 4th Human Computation Workshop, HCOMP@AAAI 2012","author":"Mao Andrew","year":"2012","unstructured":"Andrew Mao, Yiling Chen, Krzysztof Z. Gajos, David C. Parkes, Ariel D. Procaccia, and Haoqi Zhang. 2012. TurkServer: Enabling Synchronous and Longitudinal Online Experiments. In The 4th Human Computation Workshop, HCOMP@AAAI 2012, Toronto, Ontario, Canada, July 23, 2012. http:\/\/www.aaai.org\/ocs\/index.php\/WS\/AAAIW12\/paper\/view\/5315"},{"key":"e_1_2_1_44_1","doi-asserted-by":"publisher","DOI":"10.1145\/2464464.2464485"},{"key":"e_1_2_1_45_1","doi-asserted-by":"publisher","DOI":"10.1145\/2531602.2531663"},{"key":"e_1_2_1_46_1","doi-asserted-by":"publisher","DOI":"10.3758\/s13428-011-0124--6"},{"key":"e_1_2_1_47_1","doi-asserted-by":"publisher","DOI":"10.1145\/1809400.1809422"},{"key":"e_1_2_1_48_1","doi-asserted-by":"publisher","DOI":"10.1371\/journal.pone.0039116"},{"key":"e_1_2_1_49_1","doi-asserted-by":"publisher","DOI":"10.1145\/3287560.3287596"},{"key":"e_1_2_1_50_1","volume-title":"Mortensen et al","author":"Michael","year":"2016","unstructured":"Michael L. Mortensen et al. 2016. An exploration of crowdsourcing citation screening for systematic reviews. Research Synthesis Methods (2016)."},{"key":"e_1_2_1_51_1","volume-title":"Kellogg","author":"Olson Judith S.","year":"2014","unstructured":"Judith S. Olson and Wendy A. 
Kellogg. 2014. Ways of Knowing in HCI. Springer Publishing Company, Incorporated."},{"key":"e_1_2_1_52_1","volume-title":"Identifying and avoiding bias in research. Plastic and reconstructive surgery 126, 2","author":"Pannucci Christopher J","year":"2010","unstructured":"Christopher J Pannucci and Edwin G Wilkins. 2010. Identifying and avoiding bias in research. Plastic and reconstructive surgery 126, 2 (2010), 619."},{"key":"e_1_2_1_53_1","volume-title":"Ipeirotis","author":"Paolacci Gabriele","year":"2010","unstructured":"Gabriele Paolacci, Jesse Chandler, and Panagiotis G. Ipeirotis. 2010. Running experiments on Amazon Mechanical Turk."},{"key":"e_1_2_1_54_1","volume-title":"Proceedings of the First International Workshop on Crowdsourcing Web Search","author":"Paritosh Praveen","year":"2012","unstructured":"Praveen Paritosh. 2012. Human Computation Must Be Reproducible. In Proceedings of the First International Workshop on Crowdsourcing Web Search, Lyon, France, April 17, 2012. 20--25. http:\/\/ceur-ws.org\/Vol-842\/crowdsearch-paritosh.pdf"},{"key":"e_1_2_1_55_1","volume-title":"Improving Reproducibility in Machine Learning Research (A Report from the NeurIPS 2019 Reproducibility Program). arXiv preprint arXiv:2003.12206","author":"Pineau Joelle","year":"2020","unstructured":"Joelle Pineau, Philippe Vincent-Lamarre, Koustuv Sinha, Vincent Larivi\u00e8re, Alina Beygelzimer, Florence d'Alch\u00e9-Buc, Emily B. Fox, and Hugo Larochelle. 2020. Improving Reproducibility in Machine Learning Research (A Report from the NeurIPS 2019 Reproducibility Program). arXiv preprint arXiv:2003.12206 (2020). arXiv:2003.12206 https:\/\/arxiv.org\/abs\/2003.12206"},{"key":"e_1_2_1_56_1","volume-title":"Reproducibility vs. replicability: a brief history of a confused terminology. Frontiers in neuroinformatics 11","author":"Plesser Hans E","year":"2018","unstructured":"Hans E Plesser. 2018. Reproducibility vs. replicability: a brief history of a confused terminology. 
Frontiers in neuroinformatics 11 (2018), 76."},{"key":"e_1_2_1_57_1","doi-asserted-by":"publisher","DOI":"10.1371\/journal.pone.0233154"},{"key":"e_1_2_1_58_1","volume-title":"Platform-Related Factors in Repeatability and Reproducibility of Crowdsourcing Tasks. In HCOMP","author":"Qarout Rehab Kamal","year":"2019","unstructured":"Rehab Kamal Qarout, Alessandro Checco, Gianluca Demartini, and Kalina Bontcheva. 2019. Platform-Related Factors in Repeatability and Reproducibility of Crowdsourcing Tasks. In HCOMP 2019."},{"key":"e_1_2_1_59_1","doi-asserted-by":"publisher","DOI":"10.1186\/s13104-019-4858-z"},{"key":"e_1_2_1_60_1","volume-title":"Understanding the Impact of Text Highlighting in Crowdsourcing Tasks. In HCOMP","volume":"7","author":"Ram\u00edrez Jorge","year":"2019","unstructured":"Jorge Ram\u00edrez, Marcos Baez, Fabio Casati, and Boualem Benatallah. 2019. Understanding the Impact of Text Highlighting in Crowdsourcing Tasks. In HCOMP 2019, Vol. 7. AAAI, 144--152."},{"key":"e_1_2_1_61_1","doi-asserted-by":"publisher","DOI":"10.1145\/3406865.3418318"},{"key":"e_1_2_1_62_1","volume-title":"Crowd-Hub: Extending crowdsourcing platforms for the controlled evaluation of tasks designs. arXiv preprint arXiv:1909.02800","author":"Ram\u00edrez Jorge","year":"2019","unstructured":"Jorge Ram\u00edrez, Simone Degiacomi, Davide Zanella, Marcos B\u00e1ez, Fabio Casati, and Boualem Benatallah. 2019. Crowd-Hub: Extending crowdsourcing platforms for the controlled evaluation of tasks designs. arXiv preprint arXiv:1909.02800 (2019). arXiv:1909.02800 http:\/\/arxiv.org\/abs\/1909.02800"},{"key":"e_1_2_1_63_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.jtbi.2011.03.004"},{"key":"e_1_2_1_64_1","volume-title":"Proceedings of the Fifth International Conference on Weblogs and Social Media","author":"Rogstadius Jakob","year":"2011","unstructured":"Jakob Rogstadius, Vassilis Kostakos, Aniket Kittur, Boris Smus, Jim Laredo, and Maja Vukovic. 2011. 
An Assessment of Intrinsic and Extrinsic Motivation on Task Performance in Crowdsourcing Markets. In Proceedings of the Fifth International Conference on Weblogs and Social Media, Barcelona, Catalonia, Spain, July 17--21, 2011. http:\/\/www.aaai.org\/ocs\/index.php\/ICWSM\/ICWSM11\/paper\/view\/2778"},{"key":"e_1_2_1_65_1","doi-asserted-by":"publisher","DOI":"10.1145\/2556288.2557155"},{"key":"e_1_2_1_66_1","volume-title":"Saturation in Qualitative Research: Exploring its Conceptualization and Operationalization. Quality & quantity 52, 4","author":"Saunders Benjamin","year":"2018","unstructured":"Benjamin Saunders, Julius Sim, Tom Kingstone, Shula Baker, Jackie Waterfield, Bernadette Bartlam, Heather Burroughs, and Clare Jinks. 2018. Saturation in Qualitative Research: Exploring its Conceptualization and Operationalization. Quality & quantity 52, 4 (2018), 1893--1907."},{"key":"e_1_2_1_67_1","volume-title":"Irresolvable Disagreement: A Study on Worker Deliberation in Crowd Work. CSCW 2018","author":"Schaekermann Mike","year":"2018","unstructured":"Mike Schaekermann, Joslin Goh, Kate Larson, and Edith Law. 2018. Resolvable vs. Irresolvable Disagreement: A Study on Worker Deliberation in Crowd Work. CSCW 2018 (2018)."},{"key":"e_1_2_1_68_1","first-page":"1","volume-title":"CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. BMC medicine 8","author":"Schulz K.F.","year":"2010","unstructured":"K.F. Schulz, D.G. Altman, D. Moher, et al. 2010. CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. BMC medicine 8, 1 (2010), 18."},{"key":"e_1_2_1_69_1","doi-asserted-by":"publisher","DOI":"10.1136\/bmj.h4672"},{"key":"e_1_2_1_70_1","unstructured":"W.R. Shadish T.D. Cook and D.T. Campbell. 2002. Experimental and quasi-experimental designs for generalized causal inference. 
Houghton Mifflin and Company."},{"key":"e_1_2_1_71_1","doi-asserted-by":"publisher","DOI":"10.1136\/bmj.g7647"},{"key":"e_1_2_1_72_1","volume-title":"WWW","author":"Wilson","year":"2018","unstructured":"Wilson Shomir and et al. 2016. Crowdsourcing Annotations for Websites' Privacy Policies: Can It Really Work?. In WWW 2018."},{"key":"e_1_2_1_73_1","volume-title":"Ng","author":"Snow Rion","year":"2008","unstructured":"Rion Snow, Brendan O'Connor, Daniel Jurafsky, and Andrew Y. Ng. 2008. Cheap and Fast - But is it Good? Evaluating Non-Expert Annotations for Natural Language Tasks. In 2008 Conference on Empirical Methods in Natural Language Processing, EMNLP 2008, Proceedings of the Conference, 25--27 October 2008, Honolulu, Hawaii, USA, A meeting of SIGDAT, a Special Interest Group of the ACL. 254--263. http:\/\/www.aclweb.org\/anthology\/D08--1027"},{"key":"e_1_2_1_74_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPRW.2008.4562953"},{"key":"e_1_2_1_75_1","doi-asserted-by":"publisher","DOI":"10.1145\/2702123.2702541"},{"key":"e_1_2_1_76_1","volume-title":"Making Better Use of the Crowd: How Crowdsourcing Can Advance Machine Learning Research. Journal of Machine Learning Research 18","author":"Vaughan Jennifer Wortman","year":"2017","unstructured":"Jennifer Wortman Vaughan. 2017. Making Better Use of the Crowd: How Crowdsourcing Can Advance Machine Learning Research. Journal of Machine Learning Research 18 (2017). http:\/\/jmlr.org\/papers\/v18\/17--234.html"},{"key":"e_1_2_1_77_1","doi-asserted-by":"publisher","DOI":"10.1145\/3313831.3376448"},{"key":"e_1_2_1_78_1","volume-title":"Whose vote should count more: Optimal integration of labels from labelers of unknown expertise. (2009)","author":"Whitehill Jacob","year":"2035","unstructured":"Jacob Whitehill, Ting-fan Wu, Jacob Bergsma, Javier R Movellan, and Paul L Ruvolo. 2009. Whose vote should count more: Optimal integration of labels from labelers of unknown expertise. 
(2009), 2035--2043."},{"key":"e_1_2_1_79_1","volume-title":"Bernstein","author":"Whiting Mark E.","year":"2019","unstructured":"Mark E. Whiting, Grant Hugh, and Michael S. Bernstein. 2019. Fair Work: Crowd Work Minimum Wage with One Line of Code. In HCOMP 2019."},{"key":"e_1_2_1_80_1","volume-title":"Quinn","author":"Wu Meng-Han","year":"2017","unstructured":"Meng-Han Wu and Alexander J. Quinn. 2017. Confusing the Crowd: Task Instruction Quality on Amazon Mechanical Turk. In HCOMP 2017. https:\/\/aaai.org\/ocs\/index.php\/HCOMP\/HCOMP17\/paper\/view\/15943"},{"key":"e_1_2_1_81_1","volume-title":"Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. 4086--4097","author":"Wayne Wu Y","year":"2016","unstructured":"Y Wayne Wu and Brian P Bailey. 2016. Novices Who Focused or Experts Who Didn't?. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. 4086--4097."},{"key":"e_1_2_1_82_1","doi-asserted-by":"publisher","DOI":"10.1145\/3331184.3331353"}],"container-title":["Proceedings of the ACM on Human-Computer 
Interaction"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3479531","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3479531","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,7,14]],"date-time":"2025-07-14T05:01:59Z","timestamp":1752469319000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3479531"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,10,13]]},"references-count":82,"journal-issue":{"issue":"CSCW2","published-print":{"date-parts":[[2021,10,13]]}},"alternative-id":["10.1145\/3479531"],"URL":"https:\/\/doi.org\/10.1145\/3479531","relation":{},"ISSN":["2573-0142"],"issn-type":[{"value":"2573-0142","type":"electronic"}],"subject":[],"published":{"date-parts":[[2021,10,13]]},"assertion":[{"value":"2021-10-18","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}