{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,17]],"date-time":"2026-03-17T00:53:55Z","timestamp":1773708835791,"version":"3.50.1"},"reference-count":54,"publisher":"Association for Computing Machinery (ACM)","issue":"3","license":[{"start":{"date-parts":[[2024,3,15]],"date-time":"2024-03-15T00:00:00Z","timestamp":1710460800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"Australian Research Council\u2019s Discovery Early Career Researcher Award (DECRA) funding scheme","award":["DE200100941"],"award-info":[{"award-number":["DE200100941"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Softw. Eng. Methodol."],"published-print":{"date-parts":[[2024,3,31]]},"abstract":"<jats:p>\n            Recently, automated vulnerability repair (AVR) approaches have been widely adopted to combat increasing software security issues. In particular, transformer-based encoder-decoder models achieve competitive results. Although vulnerable programs may contain only a few vulnerable code areas that need repair, existing AVR approaches lack a mechanism that guides their models to pay more attention to those areas during repair generation. In this article, we propose a novel vulnerability repair framework inspired by Vision Transformer-based approaches for object detection in the computer vision domain. Similar to the object queries used to locate objects in computer vision, we introduce and leverage vulnerability queries (VQs) to locate vulnerable code areas and then suggest their repairs. In particular, we leverage the cross-attention mechanism to achieve the cross-match between VQs and their corresponding vulnerable code areas. 
To strengthen our cross-match and generate more accurate vulnerability repairs, we propose to learn a novel vulnerability mask (VM) and integrate it into decoders\u2019 cross-attention, which makes our VQs pay more attention to vulnerable code areas during repair generation. In addition, we incorporate our VM into encoders\u2019 self-attention to learn embeddings that emphasize the vulnerable areas of a program. Through an extensive evaluation on 5,417 real-world vulnerabilities, our approach outperforms all of the automated vulnerability repair baseline methods by 2.68% to 32.33%. Additionally, our analysis of the cross-attention map of our approach confirms the design rationale of our VM and its effectiveness.\n            <jats:styled-content style=\"color:#000000\">Finally, our survey study with 71 software practitioners highlights the significance and usefulness of AI-generated vulnerability repairs in the realm of software security.<\/jats:styled-content>\n            The training code and pre-trained models are available at https:\/\/github.com\/awsm-research\/VQM.\n          <\/jats:p>","DOI":"10.1145\/3632746","type":"journal-article","created":{"date-parts":[[2023,11,13]],"date-time":"2023-11-13T11:49:08Z","timestamp":1699876148000},"page":"1-29","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":31,"title":["Vision Transformer Inspired Automated Vulnerability Repair"],"prefix":"10.1145","volume":"33","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-7211-3491","authenticated-orcid":false,"given":"Michael","family":"Fu","sequence":"first","affiliation":[{"name":"Monash University, Clayton, Australia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5838-3409","authenticated-orcid":false,"given":"Van","family":"Nguyen","sequence":"additional","affiliation":[{"name":"Monash University, Clayton, 
Australia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5516-9984","authenticated-orcid":false,"given":"Chakkrit","family":"Tantithamthavorn","sequence":"additional","affiliation":[{"name":"Monash University, Clayton, Australia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9977-8247","authenticated-orcid":false,"given":"Dinh","family":"Phung","sequence":"additional","affiliation":[{"name":"Monash University, Clayton, Australia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0414-9067","authenticated-orcid":false,"given":"Trung","family":"Le","sequence":"additional","affiliation":[{"name":"Monash University, Clayton, Australia"}]}],"member":"320","published-online":{"date-parts":[[2024,3,15]]},"reference":[{"key":"e_1_3_1_2_1","article-title":"A3Test: Assertion-augmented automated test case generation","author":"Alagarsamy Saranya","year":"2023","unstructured":"Saranya Alagarsamy, Chakkrit Tantithamthavorn, and Aldeida Aleti. 2023. A3Test: Assertion-augmented automated test case generation. arXiv preprint arXiv:2302.10352 (2023).","journal-title":"arXiv preprint arXiv:2302.10352"},{"key":"e_1_3_1_3_1","first-page":"780","volume-title":"Proceedings of the International Conference on Machine Learning (ICML\u201921)","author":"Berabi Berkay","year":"2021","unstructured":"Berkay Berabi, Jingxuan He, Veselin Raychev, and Martin Vechev. 2021. TFix: Learning to fix coding errors with a text-to-text transformer. In Proceedings of the International Conference on Machine Learning (ICML\u201921). 780\u2013791."},{"key":"e_1_3_1_4_1","doi-asserted-by":"publisher","DOI":"10.1145\/3475960.3475985"},{"key":"e_1_3_1_5_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-58452-8_13"},{"key":"e_1_3_1_6_1","article-title":"Neural transfer learning for repairing security vulnerabilities in C code","author":"Chen Zimin","year":"2023","unstructured":"Zimin Chen, Steve Kommrusch, and Martin Monperrus. 2023. Neural transfer learning for repairing security vulnerabilities in C code. 
IEEE Transactions on Software Engineering 49, 1 (2023), 147\u2013165.","journal-title":"IEEE Transactions on Software Engineering"},{"issue":"9","key":"e_1_3_1_7_1","first-page":"1943","article-title":"Sequencer: Sequence-to-sequence learning for end-to-end program repair","volume":"47","author":"Chen Zimin","year":"2019","unstructured":"Zimin Chen, Steve Kommrusch, Michele Tufano, Louis-No\u00ebl Pouchet, Denys Poshyvanyk, and Martin Monperrus. 2019. Sequencer: Sequence-to-sequence learning for end-to-end program repair. IEEE Transactions on Software Engineering 47, 9 (2019), 1943\u20131959.","journal-title":"IEEE Transactions on Software Engineering"},{"key":"e_1_3_1_8_1","unstructured":"Jianlei Chi Yu Qu Ting Liu Qinghua Zheng and Heng Yin. 2022. SeqTrans: Automatic vulnerability fix via sequence to sequence learning. IEEE Transactions on Software Engineering. Published online March 7 2022."},{"key":"e_1_3_1_9_1","article-title":"Cppcheck: A Tool for Static C\/C++ Code Analysis","unstructured":"Cppcheck. n.d. Cppcheck: A Tool for Static C\/C++ Code Analysis. Retrieved November 22, 2023 from https:\/\/cppcheck.sourceforge.io\/","journal-title":"https:\/\/cppcheck.sourceforge.io\/"},{"key":"e_1_3_1_10_1","article-title":"Software Vulnerability","year":"2020","unstructured":"CSRC. 2020. Software Vulnerability. Retrieved November 22, 2023 from https:\/\/csrc.nist.gov\/glossary\/term\/software_vulnerability","journal-title":"Retrieved November 22, 2023 from https:\/\/csrc.nist.gov\/glossary\/term\/software_vulnerability"},{"key":"e_1_3_1_11_1","article-title":"2022 CWE Top 25 Most Dangerous Software Weaknesses","year":"2022","unstructured":"CWE. 2022. 2022 CWE Top 25 Most Dangerous Software Weaknesses. 
Retrieved November 22, 2023 from https:\/\/cwe.mitre.org\/top25\/archive\/2022\/2022_cwe_top25.html","journal-title":"Retrieved November 22, 2023 from https:\/\/cwe.mitre.org\/top25\/archive\/2022\/2022_cwe_top25.html"},{"key":"e_1_3_1_12_1","doi-asserted-by":"publisher","DOI":"10.1109\/DSAA53316.2021.9564227"},{"key":"e_1_3_1_13_1","volume-title":"Proceedings of the International Conference on Learning Representations (ICLR\u201920)","author":"Dinella Elizabeth","year":"2020","unstructured":"Elizabeth Dinella, Hanjun Dai, Ziyang Li, Mayur Naik, Le Song, and Ke Wang. 2020. Hoppity: Learning graph transformations to detect and fix bugs in programs. In Proceedings of the International Conference on Learning Representations (ICLR\u201920)."},{"key":"e_1_3_1_14_1","doi-asserted-by":"publisher","DOI":"10.1109\/SANER53432.2022.00114"},{"key":"e_1_3_1_15_1","article-title":"2022 Vulnerability Statistic Report","year":"2022","unstructured":"Edgescan. 2022. 2022 Vulnerability Statistic Report. Retrieved November 22, 2023 from https:\/\/www.edgescan.com\/2022-vulnerability-statistics-report-lp\/","journal-title":"Retrieved November 22, 2023 from https:\/\/www.edgescan.com\/2022-vulnerability-statistics-report-lp\/"},{"key":"e_1_3_1_16_1","doi-asserted-by":"publisher","DOI":"10.1145\/3379597.3387501"},{"key":"e_1_3_1_17_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.findings-emnlp.139"},{"key":"e_1_3_1_18_1","article-title":"Learning to quantize vulnerability patterns and match to locate statement-level vulnerabilities","author":"Fu Michael","year":"2023","unstructured":"Michael Fu, Trung Le, Van Nguyen, Chakkrit Tantithamthavorn, and Dinh Phung. 2023a. Learning to quantize vulnerability patterns and match to locate statement-level vulnerabilities. 
arXiv preprint arXiv:2306.06109 (2023).","journal-title":"arXiv preprint arXiv:2306.06109"},{"key":"e_1_3_1_19_1","doi-asserted-by":"publisher","DOI":"10.1109\/TSE.2023.3305244"},{"key":"e_1_3_1_20_1","doi-asserted-by":"publisher","DOI":"10.1109\/TSE.2022.3158252"},{"key":"e_1_3_1_21_1","doi-asserted-by":"publisher","DOI":"10.1145\/3524842.3528452"},{"key":"e_1_3_1_22_1","article-title":"AIBugHunter: A practical tool for predicting, classifying and repairing software vulnerabilities","author":"Fu Michael","year":"2023","unstructured":"Michael Fu, Chakkrit Tantithamthavorn, Trung Le, Yuki Kume, Van Nguyen, Dinh Phung, and John Grundy. 2023c. AIBugHunter: A practical tool for predicting, classifying and repairing software vulnerabilities. arXiv preprint arXiv:2305.16615 (2023).","journal-title":"arXiv preprint arXiv:2305.16615"},{"key":"e_1_3_1_23_1","doi-asserted-by":"publisher","DOI":"10.1145\/3540250.3549098"},{"key":"e_1_3_1_24_1","article-title":"An example software vulnerability from GoPro systems. GitHub. Retrieved November 22, 2023 from","year":"2019","unstructured":"GoPro. 2019. An example software vulnerability from GoPro systems. GitHub. Retrieved November 22, 2023 from https:\/\/github.com\/gopro\/gpmf-parser\/commit\/341f12cd5b97ab419e53853ca00176457c9f1681","journal-title":"https:\/\/github.com\/gopro\/gpmf-parser\/commit\/341f12cd5b97ab419e53853ca00176457c9f1681"},{"key":"e_1_3_1_25_1","volume-title":"Proceedings of the International Conference on Learning Representations (ICLR\u201921)","author":"Guo Daya","year":"2021","unstructured":"Daya Guo, Shuo Ren, Shuai Lu, Zhangyin Feng, Duyu Tang, Shujie Liu, Long Zhou, Nan Duan, Alexey Svyatkovskiy, Shengyu Fu, et\u00a0al. 2021. GraphCodeBERT: Pre-training code representations with data flow. 
In Proceedings of the International Conference on Learning Representations (ICLR\u201921)."},{"key":"e_1_3_1_26_1","doi-asserted-by":"publisher","DOI":"10.1109\/SANER53432.2022.00121"},{"key":"e_1_3_1_27_1","doi-asserted-by":"publisher","DOI":"10.1145\/3540250.3549119"},{"key":"e_1_3_1_28_1","article-title":"ImageMagick","year":"2016","unstructured":"ImageMagick. 2016. ImageMagick. GitHub. Retrieved November 22, 2023 from https:\/\/github.com\/ADVAN-ELAA-8QM-PRC1\/platform-external-ImageMagick\/commit\/d8ab7f046587f2e9f734b687ba7e6e10147c294b","journal-title":"https:\/\/github.com\/ADVAN-ELAA-8QM-PRC1\/platform-external-ImageMagick\/commit\/d8ab7f046587f2e9f734b687ba7e6e10147c294b"},{"key":"e_1_3_1_29_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICSE43902.2021.00107"},{"key":"e_1_3_1_30_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-1-84800-044-5_3"},{"key":"e_1_3_1_31_1","doi-asserted-by":"publisher","DOI":"10.1145\/3377811.3380345"},{"key":"e_1_3_1_32_1","doi-asserted-by":"publisher","DOI":"10.1145\/3468264.3468597"},{"key":"e_1_3_1_33_1","article-title":"AutoUpdate: Automatically recommend code updates for Android apps","author":"Liu Yue","year":"2022","unstructured":"Yue Liu, Chakkrit Tantithamthavorn, Yonghui Liu, Patanamon Thongtanunam, and Li Li. 2022. AutoUpdate: Automatically recommend code updates for Android apps. arXiv preprint arXiv:2209.07048 (2022).","journal-title":"arXiv preprint arXiv:2209.07048"},{"key":"e_1_3_1_34_1","doi-asserted-by":"publisher","DOI":"10.1145\/3395363.3397369"},{"key":"e_1_3_1_35_1","doi-asserted-by":"publisher","DOI":"10.1109\/MSR52588.2021.00063"},{"key":"e_1_3_1_36_1","article-title":"Distributed representations of words and phrases and their compositionality","volume":"26","author":"Mikolov Tomas","year":"2013","unstructured":"Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. 
Advances in Neural Information Processing Systems 26 (2013), 3111\u20133119.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_1_37_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-47426-3_54"},{"key":"e_1_3_1_38_1","doi-asserted-by":"publisher","DOI":"10.1109\/IJCNN52387.2021.9533907"},{"key":"e_1_3_1_39_1","doi-asserted-by":"publisher","DOI":"10.1109\/IJCNN.2019.8851923"},{"key":"e_1_3_1_40_1","article-title":"Cross project software vulnerability detection via domain adaptation and max-margin principle","author":"Nguyen Van","year":"2022","unstructured":"Van Nguyen, Trung Le, Chakkrit Tantithamthavorn, John Grundy, Hung Nguyen, and Dinh Phung. 2022a. Cross project software vulnerability detection via domain adaptation and max-margin principle. arXiv preprint arXiv:2209.10406 (2022).","journal-title":"arXiv preprint arXiv:2209.10406"},{"key":"e_1_3_1_41_1","article-title":"An information-theoretic and contrastive learning-based approach for identifying code statements causing software vulnerability","author":"Nguyen Van","year":"2022","unstructured":"Van Nguyen, Trung Le, Chakkrit Tantithamthavorn, John Grundy, Hung Nguyen, Seyit Camtepe, Paul Quirk, and Dinh Phung. 2022b. An information-theoretic and contrastive learning-based approach for identifying code statements causing software vulnerability. 
arXiv preprint arXiv:2209.10414 (2022).","journal-title":"arXiv preprint arXiv:2209.10414"},{"key":"e_1_3_1_42_1","doi-asserted-by":"publisher","DOI":"10.1109\/SP46215.2023.10179420"},{"key":"e_1_3_1_43_1","doi-asserted-by":"publisher","DOI":"10.1109\/ASE51524.2021.9678763"},{"key":"e_1_3_1_44_1","doi-asserted-by":"publisher","DOI":"10.1109\/SANER56733.2023.00036"},{"key":"e_1_3_1_45_1","doi-asserted-by":"publisher","DOI":"10.1109\/MSR52588.2021.00049"},{"key":"e_1_3_1_46_1","doi-asserted-by":"publisher","DOI":"10.1109\/TSE.2022.3144348"},{"issue":"140","key":"e_1_3_1_47_1","first-page":"1","article-title":"Exploring the limits of transfer learning with a unified text-to-text transformer.","volume":"21","author":"Raffel Colin","year":"2020","unstructured":"Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu, et\u00a0al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research 21, 140 (2020), 1\u201367.","journal-title":"Journal of Machine Learning Research"},{"key":"e_1_3_1_48_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/P16-1162"},{"key":"e_1_3_1_49_1","article-title":"Syntax-aware on-the-fly code completion","author":"Takerngsaksiri Wannita","year":"2022","unstructured":"Wannita Takerngsaksiri, Chakkrit Tantithamthavorn, and Yuan-Fang Li. 2022. Syntax-aware on-the-fly code completion. arXiv preprint arXiv:2211.04673 (2022).","journal-title":"arXiv preprint arXiv:2211.04673"},{"key":"e_1_3_1_50_1","doi-asserted-by":"publisher","DOI":"10.1145\/3510003.3510067"},{"key":"e_1_3_1_51_1","volume-title":"Proceedings of the Advances in Neural Information Processing Systems (NIPS\u201917)","volume":"30","author":"Vaswani Ashish","year":"2017","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. 
In Proceedings of the Advances in Neural Information Processing Systems (NIPS\u201917), Vol. 30."},{"key":"e_1_3_1_52_1","doi-asserted-by":"publisher","DOI":"10.1109\/CNS48642.2020.9162237"},{"key":"e_1_3_1_53_1","first-page":"8696","volume-title":"Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP\u201921)","author":"Wang Yue","unstructured":"Yue Wang, Weishi Wang, Shafiq Joty, and Steven C. H. Hoi. 2021a. CodeT5: Identifier-aware unified pre-trained encoder-decoder models for code understanding and generation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP\u201921). 8696\u20138708."},{"key":"e_1_3_1_54_1","volume-title":"arXiv preprint arXiv:2109.07107","author":"Wang Yingming","unstructured":"Yingming Wang, Xiangyu Zhang, Tong Yang, and Jian Sun. 2021b. Anchor DETR: Query design for transformer-based object detection. arXiv preprint arXiv:2109.07107 (2021)."},{"key":"e_1_3_1_55_1","volume-title":"Proceedings of the International Conference on Learning Representations (ICLR\u201920)","author":"Zhu Xizhou","year":"2020","unstructured":"Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, and Jifeng Dai. 2020. Deformable DETR: Deformable transformers for end-to-end object detection. 
In Proceedings of the International Conference on Learning Representations (ICLR\u201920)."}],"container-title":["ACM Transactions on Software Engineering and Methodology"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3632746","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3632746","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T22:51:04Z","timestamp":1750287064000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3632746"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,3,15]]},"references-count":54,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2024,3,31]]}},"alternative-id":["10.1145\/3632746"],"URL":"https:\/\/doi.org\/10.1145\/3632746","relation":{},"ISSN":["1049-331X","1557-7392"],"issn-type":[{"value":"1049-331X","type":"print"},{"value":"1557-7392","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,3,15]]},"assertion":[{"value":"2023-06-01","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-11-08","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-03-15","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}