{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,2]],"date-time":"2026-04-02T11:18:08Z","timestamp":1775128688091,"version":"3.50.1"},"reference-count":72,"publisher":"Association for Computing Machinery (ACM)","issue":"FSE","license":[{"start":{"date-parts":[[2024,7,12]],"date-time":"2024-07-12T00:00:00Z","timestamp":1720742400000},"content-version":"vor","delay-in-days":0,"URL":"http:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100001691","name":"Japan Society for the Promotion of Science","doi-asserted-by":"publisher","award":["JP23KJ1589, JP20H05706"],"award-info":[{"award-number":["JP23KJ1589, JP20H05706"]}],"id":[{"id":"10.13039\/501100001691","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100002241","name":"Japan Science and Technology Agency","doi-asserted-by":"publisher","award":["JPMJPR22P6"],"award-info":[{"award-number":["JPMJPR22P6"]}],"id":[{"id":"10.13039\/501100002241","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Proc. ACM Softw. Eng."],"published-print":{"date-parts":[[2024,7,12]]},"abstract":"<jats:p>GitHub\u2019s Copilot for Pull Requests (PRs) is a promising service aiming to automate various developer tasks related to PRs, such as generating summaries of changes or providing complete walkthroughs with links to the relevant code. As this innovative technology gains traction in the Open Source Software (OSS) community, it is crucial to examine its early adoption and its impact on the development process. Additionally, it offers a unique opportunity to observe how developers respond when they disagree with the generated content. In our study, we employ a mixed-methods approach, blending quantitative analysis with qualitative insights, to examine 18,256 PRs in which parts of the descriptions were crafted by generative AI. Our findings indicate that: (1) Copilot for PRs, though in its infancy, is seeing a marked uptick in adoption. (2) PRs enhanced by Copilot for PRs require less review time and have a higher likelihood of being merged. (3) Developers using Copilot for PRs often complement the automated descriptions with their manual input. These results offer valuable insights into the growing integration of generative AI in software development.<\/jats:p>","DOI":"10.1145\/3643773","type":"journal-article","created":{"date-parts":[[2024,7,12]],"date-time":"2024-07-12T10:22:09Z","timestamp":1720779729000},"page":"1043-1065","source":"Crossref","is-referenced-by-count":11,"title":["Generative AI for Pull Request Descriptions: Adoption, Impact, and Developer Interventions"],"prefix":"10.1145","volume":"1","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-4070-585X","authenticated-orcid":false,"given":"Tao","family":"Xiao","sequence":"first","affiliation":[{"name":"Nara Institute of Science and Technology, Ikoma, Japan"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0708-5222","authenticated-orcid":false,"given":"Hideaki","family":"Hata","sequence":"additional","affiliation":[{"name":"Shinshu University, Nagano, Japan"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6919-2149","authenticated-orcid":false,"given":"Christoph","family":"Treude","sequence":"additional","affiliation":[{"name":"Singapore Management University, Singapore, Singapore"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7418-9323","authenticated-orcid":false,"given":"Kenichi","family":"Matsumoto","sequence":"additional","affiliation":[{"name":"Nara Institute of Science and Technology, Ikoma, Japan"}]}],"member":"320","published-online":{"date-parts":[[2024,7,12]]},"reference":[{"key":"e_1_3_1_2_1","article-title":"Improving Few-Shot Prompts with Relevant Static Analysis Products","author":"Ahmed Toufique","year":"2023","unstructured":"Toufique Ahmed, Kunal Suresh Pai, Premkumar Devanbu, and Earl T Barr. 2023. Improving Few-Shot Prompts with Relevant Static Analysis Products. arXiv preprint arXiv:2304.06815 (2023).","journal-title":"arXiv preprint arXiv:2304.06815"},{"key":"e_1_3_1_3_1","article-title":"Exploring Distributional Shifts in Large Language Models for Code Analysis","author":"Arakelyan Shushan","year":"2023","unstructured":"Shushan Arakelyan, Rocktim Jyoti Das, Yi Mao, and Xiang Ren. 2023. Exploring Distributional Shifts in Large Language Models for Code Analysis. arXiv preprint arXiv:2303.09128 (2023).","journal-title":"arXiv preprint arXiv:2303.09128"},{"key":"e_1_3_1_4_1","doi-asserted-by":"publisher","DOI":"10.1002\/sim.9519"},{"key":"e_1_3_1_5_1","doi-asserted-by":"publisher","DOI":"10.1177\/0962280215601134"},{"key":"e_1_3_1_6_1","article-title":"Neural machine translation by jointly learning to align and translate","author":"Bahdanau Dzmitry","year":"2014","unstructured":"Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 (2014).","journal-title":"arXiv preprint arXiv:1409.0473"},{"key":"e_1_3_1_7_1","article-title":"Code generation tools (almost) for free? a study of few-shot, pre-trained language models on code","author":"Barei\u00df Patrick","year":"2022","unstructured":"Patrick Barei\u00df, Beatriz Souza, Marcelo d\u2019Amorim, and Michael Pradel. 2022. Code generation tools (almost) for free? a study of few-shot, pre-trained language models on code. arXiv preprint arXiv:2206.01335 (2022).","journal-title":"arXiv preprint arXiv:2206.01335"},{"key":"e_1_3_1_8_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10664-015-9366-8"},{"key":"e_1_3_1_9_1","doi-asserted-by":"publisher","DOI":"10.1145\/3524610.3527917"},{"key":"e_1_3_1_10_1","article-title":"Evaluating large language models trained on code","author":"Chen Mark","year":"2021","unstructured":"Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374 (2021).","journal-title":"arXiv preprint arXiv:2107.03374"},{"key":"e_1_3_1_11_1","doi-asserted-by":"publisher","DOI":"10.1145\/3324884.3416538"},{"key":"e_1_3_1_12_1","article-title":"Teaching large language models to self-debug","author":"Chen Xinyun","year":"2023","unstructured":"Xinyun Chen, Maxwell Lin, Nathanael Sch\u00e4rli, and Denny Zhou. 2023. Teaching large language models to self-debug. arXiv preprint arXiv:2304.05128 (2023).","journal-title":"arXiv preprint arXiv:2304.05128"},{"key":"e_1_3_1_13_1","article-title":"Piloting Copilot and Codex: Hot Temperature, Cold Prompts, or Black Magic?","author":"D\u00f6derlein Jean-Baptiste","year":"2022","unstructured":"Jean-Baptiste D\u00f6derlein, Mathieu Acher, Djamel Eddine Khelladi, and Benoit Combemale. 2022. Piloting Copilot and Codex: Hot Temperature, Cold Prompts, or Black Magic? arXiv preprint arXiv:2210.14699 (2022).","journal-title":"arXiv preprint arXiv:2210.14699"},{"key":"e_1_3_1_14_1","article-title":"Self-collaboration Code Generation via ChatGPT","author":"Dong Yihong","year":"2023","unstructured":"Yihong Dong, Xue Jiang, Zhi Jin, and Ge Li. 2023. Self-collaboration Code Generation via ChatGPT. arXiv preprint arXiv:2304.07590 (2023).","journal-title":"arXiv preprint arXiv:2304.07590"},{"key":"e_1_3_1_15_1","doi-asserted-by":"publisher","DOI":"10.1109\/MS.2023.3265877"},{"key":"e_1_3_1_16_1","doi-asserted-by":"publisher","DOI":"10.1613\/jair.1523"},{"key":"e_1_3_1_17_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.jss.2021.111160"},{"key":"e_1_3_1_18_1","article-title":"Constructing Effective In-Context Demonstration for Code Intelligence Tasks: An Empirical Study","author":"Gao Shuzheng","year":"2023","unstructured":"Shuzheng Gao, Xin-Cheng Wen, Cuiyun Gao, Wenxuan Wang, and Michael R Lyu. 2023. Constructing Effective In-Context Demonstration for Code Intelligence Tasks: An Empirical Study. arXiv preprint arXiv:2304.07575 (2023).","journal-title":"arXiv preprint arXiv:2304.07575"},{"key":"e_1_3_1_19_1","article-title":"Semantic Compression With Large Language Models","author":"Gilbert Henry","year":"2023","unstructured":"Henry Gilbert, Michael Sandborn, Douglas C Schmidt, Jesse Spencer-Smith, and Jules White. 2023. Semantic Compression With Large Language Models. arXiv preprint arXiv:2304.12512 (2023).","journal-title":"arXiv preprint arXiv:2304.12512"},{"key":"e_1_3_1_20_1","unstructured":"GitHub Next. 2023. GitHub Next | Copilot for Pull Requests \u2014 githubnext.com. https:\/\/githubnext.com\/projects\/copilot-for-pull-requests. [Accessed 23-09-2023]."},{"key":"e_1_3_1_21_1","doi-asserted-by":"publisher","DOI":"10.1145\/3528228.3528406"},{"key":"e_1_3_1_22_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.jss.2021.110911"},{"key":"e_1_3_1_23_1","doi-asserted-by":"publisher","DOI":"10.1145\/2884781.2884826"},{"key":"e_1_3_1_24_1","doi-asserted-by":"publisher","DOI":"10.1109\/SANER53432.2022.00112"},{"key":"e_1_3_1_25_1","doi-asserted-by":"publisher","DOI":"10.1093\/pan\/mpr025"},{"key":"e_1_3_1_26_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10664-021-10058-6"},{"key":"e_1_3_1_27_1","article-title":"Large Language Models for Software Engineering: A Systematic Literature Review","author":"Hou Xinyi","year":"2023","unstructured":"Xinyi Hou, Yanjie Zhao, Yue Liu, Zhou Yang, Kailong Wang, Li Li, Xiapu Luo, David Lo, John Grundy, and Haoyu Wang. 2023. Large Language Models for Software Engineering: A Systematic Literature Review. arXiv preprint arXiv:2308.10620 (2023).","journal-title":"arXiv preprint arXiv:2308.10620"},{"key":"e_1_3_1_28_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICSME55016.2022.00058"},{"key":"e_1_3_1_29_1","article-title":"SelfEvolve: A Code Evolution Framework via Large Language Models","author":"Jiang Shuyang","year":"2023","unstructured":"Shuyang Jiang, Yuhao Wang, and Yu Wang. 2023. SelfEvolve: A Code Evolution Framework via Large Language Models. arXiv preprint arXiv:2306.02907 (2023).","journal-title":"arXiv preprint arXiv:2306.02907"},{"key":"e_1_3_1_30_1","article-title":"Explainable Automated Debugging via Large Language Model-driven Scientific Debugging","author":"Kang Sungmin","year":"2023","unstructured":"Sungmin Kang, Bei Chen, Shin Yoo, and Jian-Guang Lou. 2023. Explainable Automated Debugging via Large Language Model-driven Scientific Debugging. arXiv preprint arXiv:2304.02195 (2023).","journal-title":"arXiv preprint arXiv:2304.02195"},{"key":"e_1_3_1_31_1","doi-asserted-by":"publisher","DOI":"10.1109\/SANER53432.2022.00094"},{"key":"e_1_3_1_32_1","doi-asserted-by":"publisher","DOI":"10.1145\/3183519.3183542"},{"key":"e_1_3_1_33_1","first-page":"18319","article-title":"DS-1000: A natural and reliable benchmark for data science code generation","author":"Lai Yuhang","year":"2023","unstructured":"Yuhang Lai, Chengxi Li, Yiming Wang, Tianyi Zhang, Ruiqi Zhong, Luke Zettlemoyer, Wen-tau Yih, Daniel Fried, Sida Wang, and Tao Yu. 2023. DS-1000: A natural and reliable benchmark for data science code generation. In International Conference on Machine Learning. 18319\u201318345.","journal-title":"International Conference on Machine Learning"},{"key":"e_1_3_1_34_1","article-title":"Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension","author":"Lewis Mike","year":"2019","unstructured":"Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461 (2019).","journal-title":"arXiv preprint arXiv:1910.13461"},{"key":"e_1_3_1_35_1","doi-asserted-by":"publisher","DOI":"10.1080\/01621459.2016.1260466"},{"key":"e_1_3_1_36_1","article-title":"Enabling Programming Thinking in Large Language Models Toward Code Generation","author":"Li Jia","year":"2023","unstructured":"Jia Li, Ge Li, Yongmin Li, and Zhi Jin. 2023a. Enabling Programming Thinking in Large Language Models Toward Code Generation. arXiv preprint arXiv:2305.06599 (2023).","journal-title":"arXiv preprint arXiv:2305.06599"},{"key":"e_1_3_1_37_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICSE48619.2023.00110"},{"key":"e_1_3_1_38_1","doi-asserted-by":"publisher","DOI":"10.1109\/TSE.2022.3224053"},{"key":"e_1_3_1_39_1","first-page":"74","article-title":"Rouge: A package for automatic evaluation of summaries","author":"Lin Chin-Yew","year":"2004","unstructured":"Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out. 74\u201381.","journal-title":"Text summarization branches out."},{"key":"e_1_3_1_40_1","article-title":"Improving ChatGPT Prompt for Code Generation","author":"Liu Chao","year":"2023","unstructured":"Chao Liu, Xuanlin Bao, Hongyu Zhang, Neng Zhang, Haibo Hu, Xiaohong Zhang, and Meng Yan. 2023a. Improving ChatGPT Prompt for Code Generation. arXiv preprint arXiv:2305.08360 (2023).","journal-title":"arXiv preprint arXiv:2305.08360"},{"key":"e_1_3_1_41_1","article-title":"Is your code generated by chatgpt really correct? rigorous evaluation of large language models for code generation","author":"Liu Jiawei","year":"2023","unstructured":"Jiawei Liu, Chunqiu Steven Xia, Yuyao Wang, and Lingming Zhang. 2023b. Is your code generated by chatgpt really correct? rigorous evaluation of large language models for code generation. arXiv preprint arXiv:2305.01210 (2023).","journal-title":"arXiv preprint arXiv:2305.01210"},{"key":"e_1_3_1_42_1","article-title":"Text summarization with pretrained encoders","author":"Liu Yang","year":"2019","unstructured":"Yang Liu and Mirella Lapata. 2019. Text summarization with pretrained encoders. arXiv preprint arXiv:1908.08345 (2019).","journal-title":"arXiv preprint arXiv:1908.08345"},{"key":"e_1_3_1_43_1","doi-asserted-by":"publisher","DOI":"10.1109\/ASE.2019.00026"},{"key":"e_1_3_1_44_1","doi-asserted-by":"publisher","DOI":"10.1145\/3510003.3511561"},{"key":"e_1_3_1_45_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICSE43902.2021.00041"},{"key":"e_1_3_1_46_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10664-015-9381-9"},{"key":"e_1_3_1_47_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICSM.2000.883028"},{"key":"e_1_3_1_48_1","article-title":"Comparing Software Developers with ChatGPT: An Empirical Investigation","author":"Nascimento Nathalia","year":"2023","unstructured":"Nathalia Nascimento, Paulo Alencar, and Donald Cowan. 2023. Comparing Software Developers with ChatGPT: An Empirical Investigation. arXiv preprint arXiv:2305.11837 (2023).","journal-title":"arXiv preprint arXiv:2305.11837"},{"key":"e_1_3_1_49_1","doi-asserted-by":"publisher","DOI":"10.1109\/SANER56733.2023.00096"},{"key":"e_1_3_1_50_1","doi-asserted-by":"publisher","DOI":"10.1109\/SP46215.2023.10179324"},{"key":"e_1_3_1_51_1","article-title":"From Copilot to Pilot: Towards AI Supported Software Development","author":"Pudari Rohith","year":"2023","unstructured":"Rohith Pudari and Neil A Ernst. 2023. From Copilot to Pilot: Towards AI Supported Software Development. arXiv preprint arXiv:2303.04142 (2023).","journal-title":"arXiv preprint arXiv:2303.04142"},{"key":"e_1_3_1_52_1","first-page":"5485","article-title":"Exploring the limits of transfer learning with a unified text-to-text transformer","volume":"1","author":"Raffel Colin","year":"2020","unstructured":"Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 1 (2020), 5485\u20135551.","journal-title":"The Journal of Machine Learning Research"},{"key":"e_1_3_1_53_1","doi-asserted-by":"publisher","DOI":"10.1093\/biomet\/70.1.41"},{"key":"e_1_3_1_54_1","doi-asserted-by":"publisher","DOI":"10.1023\/A:1020363010465"},{"key":"e_1_3_1_55_1","doi-asserted-by":"publisher","DOI":"10.1109\/SANER56733.2023.00049"},{"key":"e_1_3_1_56_1","article-title":"ChatGPT: A Study on its Utility for Ubiquitous Software Engineering Tasks","author":"Sridhara Giriprasad","year":"2023","unstructured":"Giriprasad Sridhara, Sourav Mazumdar, et al. 2023. ChatGPT: A Study on its Utility for Ubiquitous Software Engineering Tasks. arXiv preprint arXiv:2305.16837 (2023).","journal-title":"arXiv preprint arXiv:2305.16837"},{"key":"e_1_3_1_57_1","doi-asserted-by":"publisher","DOI":"10.1145\/2884781.2884852"},{"key":"e_1_3_1_58_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10664-016-9452-6"},{"key":"e_1_3_1_59_1","doi-asserted-by":"publisher","DOI":"10.1145\/2568225.2568315"},{"key":"e_1_3_1_60_1","article-title":"Attention is all you need","author":"Vaswani Ashish","year":"2017","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems (2017).","journal-title":"Advances in neural information processing systems"},{"key":"e_1_3_1_61_1","first-page":"360","article-title":"Understanding interobserver agreement: the kappa statistic","author":"Viera Anthony J","year":"2005","unstructured":"Anthony J Viera, Joanne M Garrett, et al. 2005. Understanding interobserver agreement: the kappa statistic. Fam med 5 (2005), 360\u2013363.","journal-title":"Fam med 5"},{"key":"e_1_3_1_62_1","doi-asserted-by":"publisher","DOI":"10.1145\/3334480.3381069"},{"key":"e_1_3_1_63_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10664-021-09997-x"},{"key":"e_1_3_1_64_1","article-title":"Evaluating AIGC Detectors on Code Content","author":"Wang Jian","year":"2023","unstructured":"Jian Wang, Shangqing Liu, Xiaofei Xie, and Yi Li. 2023. Evaluating AIGC Detectors on Code Content. arXiv preprint arXiv:2304.05193 (2023).","journal-title":"arXiv preprint arXiv:2304.05193"},{"key":"e_1_3_1_65_1","article-title":"How Effective Are Neural Networks for Fixing Security Vulnerabilities","author":"Wu Yi","year":"2023","unstructured":"Yi Wu, Nan Jiang, Hung Viet Pham, Thibaud Lutellier, Jordan Davis, Lin Tan, Petr Babkin, and Sameena Shah. 2023. How Effective Are Neural Networks for Fixing Security Vulnerabilities. arXiv preprint arXiv:2305.18607 (2023).","journal-title":"arXiv preprint arXiv:2305.18607"},{"key":"e_1_3_1_66_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-78462-1_13"},{"key":"e_1_3_1_67_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICSE48619.2023.00129"},{"key":"e_1_3_1_68_1","article-title":"Conversational automated program repair","author":"Xia Chunqiu Steven","year":"2023","unstructured":"Chunqiu Steven Xia and Lingming Zhang. 2023. Conversational automated program repair. arXiv preprint arXiv:2301.13246 (2023).","journal-title":"arXiv preprint arXiv:2301.13246"},{"key":"e_1_3_1_69_1","doi-asserted-by":"publisher","unstructured":"Tao Xiao Hideaki Hata Christoph Treude and Kenichi Matsumoto. 2024. Research Artifact - Generative AI for Pull Request Descriptions: Adoption Impact and Developer Interventions. https:\/\/doi.org\/10.5281\/zenodo.10656106 10.5281\/zenodo.10656106","DOI":"10.5281\/zenodo.10656106"},{"key":"e_1_3_1_70_1","doi-asserted-by":"publisher","DOI":"10.1145\/3520312.3534862"},{"key":"e_1_3_1_71_1","article-title":"Evaluating the Code Quality of AI-Assisted Code Generation Tools: An Empirical Study on GitHub Copilot, Amazon CodeWhisperer, and ChatGPT","author":"Yeti\u015ftiren Burak","year":"2023","unstructured":"Burak Yeti\u015ftiren, I\u015fik \u00d6zsoy, Miray Ayerdem, and Eray T\u00fcz\u00fcn. 2023. Evaluating the Code Quality of AI-Assisted Code Generation Tools: An Empirical Study on GitHub Copilot, Amazon CodeWhisperer, and ChatGPT. arXiv preprint arXiv:2304.10778 (2023).","journal-title":"arXiv preprint arXiv:2304.10778"},{"key":"e_1_3_1_72_1","doi-asserted-by":"publisher","DOI":"10.1145\/3533767.3534390"},{"key":"e_1_3_1_73_1","doi-asserted-by":"publisher","DOI":"10.1109\/TSE.2022.3165056"}],"container-title":["Proceedings of the ACM on Software Engineering"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3643773","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3643773","content-type":"application\/pdf","content-version":"vor","intended-application":"syndication"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3643773","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,2,4]],"date-time":"2026-02-04T08:04:03Z","timestamp":1770192243000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3643773"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,7,12]]},"references-count":72,"journal-issue":{"issue":"FSE","published-print":{"date-parts":[[2024,7,12]]}},"alternative-id":["10.1145\/3643773"],"URL":"https:\/\/doi.org\/10.1145\/3643773","relation":{},"ISSN":["2994-970X"],"issn-type":[{"value":"2994-970X","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,7,12]]}}}