{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,8]],"date-time":"2026-04-08T10:46:17Z","timestamp":1775645177609,"version":"3.50.1"},"reference-count":52,"publisher":"Association for Computing Machinery (ACM)","issue":"FSE","license":[{"start":{"date-parts":[[2024,7,12]],"date-time":"2024-07-12T00:00:00Z","timestamp":1720742400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100012166","name":"National Key R&D Program of China","doi-asserted-by":"crossref","award":["2023YFB2703600"],"award-info":[{"award-number":["2023YFB2703600"]}],"id":[{"id":"10.13039\/501100012166","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/501100003453","name":"Natural Science Foundation of Guangdong Province","doi-asserted-by":"publisher","award":["2023A1515010746"],"award-info":[{"award-number":["2023A1515010746"]}],"id":[{"id":"10.13039\/501100003453","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Proc. ACM Softw. Eng."],"published-print":{"date-parts":[[2024,7,12]]},"abstract":"<jats:p>\n                    Just-in-time defect prediction (JIT-DP) is used to predict the defect-proneness of a commit and just-in-time defect localization (JIT-DL) is used to locate the exact buggy positions (defective lines) in a commit. Recently, various JIT-DP and JIT-DL techniques have been proposed, while most of them use a post-mortem way (e.g., code entropy, attention weight, LIME) to achieve the JIT-DL goal based on the prediction results in JIT-DP. These methods do not utilize the label information of the defective code lines during model building. 
In this paper, we propose a unified model, JIT-Smart, which makes the training of the just-in-time defect prediction and localization tasks a mutually reinforcing multi-task learning process. Specifically, we design a novel defect localization network (DLN), which explicitly introduces the label information of defective code lines for supervised learning in JIT-DL while accounting for the class imbalance issue. To further investigate the accuracy and cost-effectiveness of JIT-Smart, we compare it with 7 state-of-the-art baselines under 5 commit-level and 5 line-level evaluation metrics in JIT-DP and JIT-DL. The results demonstrate that JIT-Smart is statistically better than all the state-of-the-art baselines in both JIT-DP and JIT-DL. In JIT-DP, at the median value, JIT-Smart achieves an F1-Score of 0.475, an AUC of 0.886, a Recall@20%Effort of 0.823, an Effort@20%Recall of 0.01 and a Popt of 0.942, improving over the baselines by 19.89%-702.74%, 1.23%-31.34%, 9.44%-33.16%, 21.6%-53.82% and 1.94%-34.89%, respectively. In JIT-DL, at the median value, JIT-Smart achieves a Top-5 Accuracy of 0.539, a Top-10 Accuracy of 0.396, a Recall@20%Effort_line of 0.726, an Effort@20%Recall_line of 0.087 and an IFA_line of 0.098, improving over the baselines by 101.83%-178.35%, 101.01%-277.31%, 257.88%-404.63%, 71.91%-74.31% and 99.11%-99.41%, respectively. Statistical analysis shows that JIT-Smart performs more stably than the best-performing baseline model. 
Besides, JIT-Smart also achieves the best performance compared with the state-of-the-art baselines in cross-project evaluation.\n                  <\/jats:p>","DOI":"10.1145\/3643727","type":"journal-article","created":{"date-parts":[[2024,7,12]],"date-time":"2024-07-12T10:22:09Z","timestamp":1720779729000},"page":"1-23","source":"Crossref","is-referenced-by-count":11,"title":["JIT-Smart: A Multi-task Learning Framework for Just-in-Time Defect Prediction and Localization"],"prefix":"10.1145","volume":"1","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-8234-3186","authenticated-orcid":false,"given":"Xiangping","family":"Chen","sequence":"first","affiliation":[{"name":"Sun Yat-sen University, Guangzhou, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-6258-5233","authenticated-orcid":false,"given":"Furen","family":"Xu","sequence":"additional","affiliation":[{"name":"Sun Yat-sen University, Zhuhai, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9548-0208","authenticated-orcid":false,"given":"Yuan","family":"Huang","sequence":"additional","affiliation":[{"name":"Sun Yat-sen University, Zhuhai, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8662-5690","authenticated-orcid":false,"given":"Neng","family":"Zhang","sequence":"additional","affiliation":[{"name":"Sun Yat-sen University, Zhuhai, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7878-4330","authenticated-orcid":false,"given":"Zibin","family":"Zheng","sequence":"additional","affiliation":[{"name":"Sun Yat-sen University, Guangzhou, 
China"}]}],"member":"320","published-online":{"date-parts":[[2024,7,12]]},"reference":[{"key":"e_1_3_1_2_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.jss.2009.06.055"},{"key":"e_1_3_1_3_2","doi-asserted-by":"publisher","DOI":"10.1023\/A:1010933404324"},{"key":"e_1_3_1_4_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICSE.2019.00076"},{"key":"e_1_3_1_5_2","doi-asserted-by":"publisher","DOI":"10.5555\/1622407.1622416"},{"key":"e_1_3_1_6_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.infsof.2017.08.004"},{"key":"e_1_3_1_7_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/N19-1423"},{"key":"e_1_3_1_8_2","article-title":"Codebert: A pre-trained model for programming and natural languages","author":"Feng Zhangyin","year":"2020","unstructured":"Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, et al. 2020. Codebert: A pre-trained model for programming and natural languages. arXiv preprint arXiv:2002.08155 (2020).","journal-title":"arXiv preprint arXiv:2002.08155"},{"key":"e_1_3_1_9_2","doi-asserted-by":"publisher","DOI":"10.5555\/2502692"},{"key":"e_1_3_1_10_2","doi-asserted-by":"publisher","DOI":"10.1145\/3533767.3534368"},{"key":"e_1_3_1_11_2","article-title":"Graphcodebert: Pre-training code representations with data flow","author":"Guo Daya","year":"2020","unstructured":"Daya Guo, Shuo Ren, Shuai Lu, Zhangyin Feng, Duyu Tang, Shujie Liu, Long Zhou, Nan Duan, Alexey Svyatkovskiy, Shengyu Fu, et al. 2020. Graphcodebert: Pre-training code representations with data flow. 
arXiv preprint arXiv:2009.08366 (2020).","journal-title":"arXiv preprint arXiv:2009.08366"},{"key":"e_1_3_1_12_2","doi-asserted-by":"publisher","DOI":"10.1148\/radiology.143.1.7063747"},{"key":"e_1_3_1_13_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10664-021-10083-5"},{"key":"e_1_3_1_14_2","doi-asserted-by":"publisher","DOI":"10.1109\/MSR.2019.00016"},{"key":"e_1_3_1_15_2","doi-asserted-by":"crossref","unstructured":"Thong Hoang Hong Jin Kang David Lo and Julia Lawall. 2020. Cc2vec: Distributed representations of code changes. In Proceedings of the ACM\/IEEE 42nd International Conference on Software Engineering. 518\u2013529.","DOI":"10.1145\/3377811.3380361"},{"key":"e_1_3_1_16_2","doi-asserted-by":"publisher","DOI":"10.1162\/neco.1997.9.8.1735"},{"key":"e_1_3_1_17_2","doi-asserted-by":"publisher","DOI":"10.1145\/3196321.3196334"},{"key":"e_1_3_1_18_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10664-019-09730-9"},{"key":"e_1_3_1_19_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.jss.2020.110754"},{"key":"e_1_3_1_20_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.infsof.2020.106373"},{"key":"e_1_3_1_21_2","doi-asserted-by":"crossref","unstructured":"Yuan Huang Nan Jia Xiangping Chen Kai Hong and Zibin Zheng. 2020. Code review knowledge perception: Fusing multi-features for salient-class location. IEEE Transactions on Software Engineering 48 5 (2020) 1463\u20131479.","DOI":"10.1109\/TSE.2020.3021902"},{"issue":"7","key":"e_1_3_1_22_2","doi-asserted-by":"crossref","first-page":"2376","DOI":"10.1109\/TSE.2021.3059481","article-title":"Change-patterns mapping: A boosting way for change impact analysis","volume":"48","author":"Huang Yuan","year":"2021","unstructured":"Yuan Huang, Jinyu Jiang, Xiapu Luo, Xiangping Chen, Zibin Zheng, Nan Jia, and Gang Huang. 2021. Change-patterns mapping: A boosting way for change impact analysis. 
IEEE Transactions on Software Engineering 48, 7 (2021), 2376\u20132398.","journal-title":"IEEE Transactions on Software Engineering"},{"key":"e_1_3_1_23_2","doi-asserted-by":"publisher","DOI":"10.1109\/TSE.2012.70"},{"key":"e_1_3_1_24_2","doi-asserted-by":"crossref","unstructured":"Sunghun Kim and E James Whitehead Jr. 2006. How long did it take to fix bugs?. In Proceedings of the 2006 international workshop on Mining software repositories. 173\u2013174.","DOI":"10.1145\/1137983.1138027"},{"key":"e_1_3_1_25_2","doi-asserted-by":"publisher","DOI":"10.3115\/v1\/D14-1181"},{"key":"e_1_3_1_26_2","doi-asserted-by":"crossref","first-page":"111","DOI":"10.1109\/ICSM.2015.7332457","volume-title":"2015 IEEE international conference on software maintenance and evolution (ICSME)","author":"Kononenko Oleksii","year":"2015","unstructured":"Oleksii Kononenko, Olga Baysal, Latifa Guerrouj, Yaxin Cao, and Michael W Godfrey. 2015. Investigating code review quality: Do people and participation matter?. In 2015 IEEE international conference on software maintenance and evolution (ICSME). IEEE, 111\u2013120."},{"key":"e_1_3_1_27_2","unstructured":"Tsung-Yi Lin Priya Goyal Ross Girshick Kaiming He and Piotr Doll\u00e1r. 2017. Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision. 2980\u20132988."},{"key":"e_1_3_1_28_2","doi-asserted-by":"crossref","unstructured":"Fang Liu Ge Li Yunfei Zhao and Zhi Jin. 2020. Multi-task learning based pre-trained language model for code completion. In Proceedings of the 35th IEEE\/ACM International Conference on Automated Software Engineering. 473\u2013485.","DOI":"10.1145\/3324884.3416591"},{"key":"e_1_3_1_29_2","first-page":"11","volume-title":"2017 ACM\/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM)","author":"Liu Jinping","year":"2017","unstructured":"Jinping Liu, Yuming Zhou, Yibiao Yang, Hongmin Lu, and Baowen Xu. 2017. 
Code churn: A neglected metric in effort-aware just-in-time defect prediction. In 2017 ACM\/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM). IEEE, 11\u201319."},{"key":"e_1_3_1_30_2","doi-asserted-by":"crossref","unstructured":"Audris Mockus and David M Weiss. 2000. Predicting risk of software changes. Bell Labs Technical Journal 5 2 (2000) 169\u2013180.","DOI":"10.1002\/bltj.2229"},{"key":"e_1_3_1_31_2","unstructured":"Chao Ni Wei Wang Kaiwen Yang Xin Xia Kui Liu and David Lo. 2022. The Best of Both Worlds: Integrating Semantic Features with Expert Features for Defect Prediction and Localization. In Proceedings of the 30th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering. 1\u201312."},{"key":"e_1_3_1_32_2","doi-asserted-by":"publisher","DOI":"10.3390\/app11114793"},{"key":"e_1_3_1_33_2","doi-asserted-by":"publisher","unstructured":"Chanathip Pornprasit and Chakkrit Tantithamthavorn. 2022. DeepLineDP: Towards a Deep Learning Approach for Line-Level Defect Prediction. IEEE Transactions on Software Engineering (2022) 1\u20131. https:\/\/doi.org\/10.1109\/TSE.2022.3144348 10.1109\/TSE.2022.3144348","DOI":"10.1109\/TSE.2022.3144348"},{"key":"e_1_3_1_34_2","doi-asserted-by":"crossref","first-page":"369","DOI":"10.1109\/MSR52588.2021.00049","volume-title":"2021 IEEE\/ACM 18th International Conference on Mining Software Repositories (MSR)","author":"Pornprasit Chanathip","year":"2021","unstructured":"Chanathip Pornprasit and Chakkrit Kla Tantithamthavorn. 2021. Jitline: A simpler, better, faster, finer-grained just-in-time defect prediction. In 2021 IEEE\/ACM 18th International Conference on Mining Software Repositories (MSR). IEEE, 369\u2013379."},{"key":"e_1_3_1_35_2","doi-asserted-by":"crossref","unstructured":"Fangcheng Qiu Meng Yan Xin Xia Xinyu Wang Yuanrui Fan Ahmed E Hassan and David Lo. 2020. 
JITO: a tool for just-in-time defect identification and localization. In Proceedings of the 28th ACM joint meeting on european software engineering conference and symposium on the foundations of software engineering. 1586\u20131590.","DOI":"10.1145\/3368089.3417927"},{"key":"e_1_3_1_36_2","first-page":"428","volume-title":"2016 IEEE\/ACM 38th International Conference on Software Engineering (ICSE)","author":"Ray Baishakhi","year":"2016","unstructured":"Baishakhi Ray, Vincent Hellendoorn, Saheel Godhane, Zhaopeng Tu, Alberto Bacchelli, and Premkumar Devanbu. 2016. On the\" naturalness\" of buggy code. In 2016 IEEE\/ACM 38th International Conference on Software Engineering (ICSE). IEEE, 428\u2013439."},{"key":"e_1_3_1_37_2","doi-asserted-by":"crossref","unstructured":"Marco Tulio Ribeiro Sameer Singh and Carlos Guestrin. 2016. \"Why should i trust you?\" Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. 1135\u20131144.","DOI":"10.1145\/2939672.2939778"},{"key":"e_1_3_1_38_2","doi-asserted-by":"crossref","unstructured":"Christoffer Rosen Ben Grawi and Emad Shihab. 2015. Commit guru: analytics and risk prediction of software commits. In Proceedings of the 2015 10th joint meeting on foundations of software engineering. 966\u2013969.","DOI":"10.1145\/2786805.2803183"},{"key":"e_1_3_1_39_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/P16-1162"},{"key":"e_1_3_1_40_2","doi-asserted-by":"crossref","unstructured":"Ensheng Shi Yanlin Wang Lun Du Junjie Chen Shi Han Hongyu Zhang Dongmei Zhang and Hongbin Sun. 2022. On the evaluation of neural code summarization. In Proceedings of the 44th International Conference on Software Engineering. 
1597\u20131608.","DOI":"10.1145\/3510003.3510060"},{"key":"e_1_3_1_41_2","doi-asserted-by":"publisher","DOI":"10.1109\/TSE.2018.2876537"},{"key":"e_1_3_1_42_2","article-title":"The need to report effect size estimates revisited","author":"Tomczak Maciej","year":"2014","unstructured":"Maciej Tomczak and Ewa Tomczak. 2014. The need to report effect size estimates revisited. An overview of some recommended measures of effect size. (2014).","journal-title":"An overview of some recommended measures of effect size"},{"key":"e_1_3_1_43_2","article-title":"Attention is all you need","volume":"30","author":"Vaswani Ashish","year":"2017","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems 30 (2017).","journal-title":"Advances in neural information processing systems"},{"key":"e_1_3_1_44_2","doi-asserted-by":"publisher","DOI":"10.1109\/TR.2020.3047396"},{"key":"e_1_3_1_45_2","unstructured":"Supatsara Wattanakriengkrai Patanamon Thongtanunam Chakkrit Tantithamthavorn Hideaki Hata and Kenichi Matsumoto. 2020. Predicting defective lines using a model-agnostic technique. IEEE Transactions on Software Engineering (2020)."},{"key":"e_1_3_1_46_2","doi-asserted-by":"publisher","DOI":"10.1145\/3487569"},{"key":"e_1_3_1_47_2","unstructured":"Meng Yan Xin Xia Yuanrui Fan Ahmed E Hassan David Lo and Shanping Li. 2020. Just-in-time defect identification and localization: A two-phase framework. IEEE Transactions on Software Engineering (2020)."},{"key":"e_1_3_1_48_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.infsof.2017.03.007"},{"key":"e_1_3_1_49_2","doi-asserted-by":"publisher","DOI":"10.1109\/QRS.2015.14"},{"key":"e_1_3_1_50_2","doi-asserted-by":"crossref","unstructured":"Zichao Yang Diyi Yang Chris Dyer Xiaodong He Alex Smola and Eduard Hovy. 2016. 
Hierarchical attention networks for document classification. In Proceedings of the 2016 conference of the North American chapter of the association for computational linguistics: human language technologies. 1480\u20131489.","DOI":"10.18653\/v1\/N16-1174"},{"key":"e_1_3_1_51_2","doi-asserted-by":"crossref","unstructured":"Steven Young Tamer Abdou and Ayse Bener. 2018. A replication study: just-in-time defect prediction with ensemble learning. In Proceedings of the 6th International Workshop on Realizing Artificial Intelligence Synergies in Software Engineering. 42\u201347.","DOI":"10.1145\/3194104.3194110"},{"key":"e_1_3_1_52_2","doi-asserted-by":"crossref","unstructured":"Zhengran Zeng Yuqun Zhang Haotian Zhang and Lingming Zhang. 2021. Deep just-in-time defect prediction: how far are we?. In Proceedings of the 30th ACM SIGSOFT International Symposium on Software Testing and Analysis. 427\u2013438.","DOI":"10.1145\/3460319.3464819"},{"key":"e_1_3_1_53_2","doi-asserted-by":"crossref","unstructured":"Thomas Zimmermann Nachiappan Nagappan Harald Gall Emanuel Giger and Brendan Murphy. 2009. Cross-project defect prediction: a large scale experiment on data vs. domain vs. process. In Proceedings of the 7th joint meeting of the European software engineering conference and the ACM SIGSOFT symposium on The foundations of software engineering. 
91\u2013100.","DOI":"10.1145\/1595696.1595713"}],"container-title":["Proceedings of the ACM on Software Engineering"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3643727","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3643727","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,2,4]],"date-time":"2026-02-04T07:52:10Z","timestamp":1770191530000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3643727"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,7,12]]},"references-count":52,"journal-issue":{"issue":"FSE","published-print":{"date-parts":[[2024,7,12]]}},"alternative-id":["10.1145\/3643727"],"URL":"https:\/\/doi.org\/10.1145\/3643727","relation":{},"ISSN":["2994-970X"],"issn-type":[{"value":"2994-970X","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,7,12]]}}}