{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,7]],"date-time":"2026-04-07T16:29:18Z","timestamp":1775579358037,"version":"3.50.1"},"publisher-location":"New York, NY, USA","reference-count":84,"publisher":"ACM","license":[{"start":{"date-parts":[[2022,7,18]],"date-time":"2022-07-18T00:00:00Z","timestamp":1658102400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2022,7,18]]},"DOI":"10.1145\/3533767.3534390","type":"proceedings-article","created":{"date-parts":[[2022,7,15]],"date-time":"2022-07-15T14:28:50Z","timestamp":1657895330000},"page":"39-51","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":124,"title":["An extensive study on pre-trained models for program understanding and generation"],"prefix":"10.1145","author":[{"given":"Zhengran","family":"Zeng","sequence":"first","affiliation":[{"name":"Southern University of Science and Technology, China"}]},{"given":"Hanzhuo","family":"Tan","sequence":"additional","affiliation":[{"name":"Southern University of Science and Technology, China \/ Hong Kong Polytechnic University, China"}]},{"given":"Haotian","family":"Zhang","sequence":"additional","affiliation":[{"name":"Kwai, China"}]},{"given":"Jing","family":"Li","sequence":"additional","affiliation":[{"name":"Hong Kong Polytechnic University, China"}]},{"given":"Yuqun","family":"Zhang","sequence":"additional","affiliation":[{"name":"Southern University of Science and Technology, China"}]},{"given":"Lingming","family":"Zhang","sequence":"additional","affiliation":[{"name":"University of Illinois at Urbana-Champaign, 
USA"}]}],"member":"320","published-online":{"date-parts":[[2022,7,18]]},"reference":[{"key":"e_1_3_2_1_1_1","unstructured":"2021. Google BigQuery. Website. https:\/\/console.cloud.google.com\/marketplace\/details\/github\/github-repos"},{"key":"e_1_3_2_1_2_1","unstructured":"2022. ISSTA\u201922 CodeStudy. Github. https:\/\/github.com\/ZZR0\/ISSTA22-CodeStudy"},{"key":"e_1_3_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.naacl-main.211"},{"key":"e_1_3_2_1_4_1","volume-title":"ICML 2016 (JMLR Workshop and Conference Proceedings","volume":"2100","author":"Allamanis Miltiadis","year":"2016","unstructured":"Miltiadis Allamanis, Hao Peng, and Charles Sutton. 2016. A Convolutional Attention Network for Extreme Summarization of Source Code. In ICML 2016 (JMLR Workshop and Conference Proceedings, Vol. 48). JMLR.org, 2091\u20132100. http:\/\/proceedings.mlr.press\/v48\/allamanis16.html"},{"key":"e_1_3_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1145\/3290353"},{"key":"e_1_3_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-72240-1_1"},{"key":"e_1_3_2_1_7_1","unstructured":"Junyi Ao, Rui Wang, Long Zhou, Shujie Liu, Shuo Ren, Yu Wu, Tom Ko, Qing Li, Yu Zhang, and Zhihua Wei. 2021. SpeechT5: Unified-modal encoder-decoder pre-training for spoken language processing. arXiv:2110.07205."},{"key":"e_1_3_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICSE43902.2021.00109"},{"key":"e_1_3_2_1_9_1","volume-title":"Tree2Tree Neural Translation Model for Learning Source Code Changes. CoRR, abs\/1810.00314","author":"Chakraborty Saikat","year":"2018","unstructured":"Saikat Chakraborty, Miltiadis Allamanis, and Baishakhi Ray. 2018. Tree2Tree Neural Translation Model for Learning Source Code Changes. CoRR, abs\/1810.00314 (2018), arxiv:1810.00314."},{"key":"e_1_3_2_1_10_1","doi-asserted-by":"publisher","DOI":"10.1109\/TSE.2021.3087402"},{"key":"e_1_3_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.21437\/Interspeech.2014-564"},{"key":"e_1_3_2_1_12_1","volume-title":"Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, and Greg Brockman.","author":"Chen Mark","year":"2021","unstructured":"Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, and Greg Brockman. 2021. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374."},{"key":"e_1_3_2_1_13_1","volume-title":"ICLR","author":"Clark Kevin","year":"2020","unstructured":"Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. In ICLR 2020. OpenReview.net. 
https:\/\/openreview.net\/forum?id=r1xMH1BtvB"},{"key":"e_1_3_2_1_14_1","volume-title":"Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555.","author":"Clark Kevin","year":"2020","unstructured":"Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. 2020. Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555."},{"key":"e_1_3_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/n19-1423"},{"key":"e_1_3_2_1_16_1","volume-title":"CodeTrans: Towards Cracking the Language of Silicone\u2019s Code Through Self-Supervised Deep Learning and High Performance Computing. CoRR, abs\/2104.02443","author":"Elnaggar Ahmed","year":"2021","unstructured":"Ahmed Elnaggar, Wei Ding, Llion Jones, Tom Gibbs, Tamas Feher, Christoph Angerer, Silvia Severini, Florian Matthes, and Burkhard Rost. 2021. CodeTrans: Towards Cracking the Language of Silicone\u2019s Code Through Self-Supervised Deep Learning and High Performance Computing. CoRR, abs\/2104.02443 (2021), arxiv:2104.02443."},{"key":"e_1_3_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1145\/3395363.3397362"},{"key":"e_1_3_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.findings-emnlp.139"},{"key":"e_1_3_2_1_19_1","first-page":"6174","article-title":"BAE","volume":"2020","author":"Garg Siddhant","year":"2020","unstructured":"Siddhant Garg and Goutham Ramakrishnan. 2020. BAE: BERT-based Adversarial Examples for Text Classification. In EMNLP 2020. 6174\u20136181.","journal-title":"BERT-based Adversarial Examples for Text Classification. In EMNLP"},{"key":"e_1_3_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.1145\/3293882.3330559"},{"key":"e_1_3_2_1_21_1","volume-title":"GraphCodeBERT: Pre-training Code Representations with Data Flow. In ICLR","author":"Guo Daya","year":"2021","unstructured":"Daya Guo, Shuo Ren, Shuai Lu, Zhangyin Feng, Duyu Tang, Shujie Liu, Long Zhou, Nan Duan, Alexey Svyatkovskiy, Shengyu Fu, Michele Tufano, Shao Kun Deng, Colin B. Clement, Dawn Drain, Neel Sundaresan, Jian Yin, Daxin Jiang, and Ming Zhou. 2021. GraphCodeBERT: Pre-training Code Representations with Data Flow. In ICLR 2021. OpenReview.net. https:\/\/openreview.net\/forum?id=jLoC4ez43PZ"},{"key":"e_1_3_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.1145\/3236024.3236051"},{"key":"e_1_3_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.1145\/3377811.3380361"},{"key":"e_1_3_2_1_24_1","volume-title":"CodeSearchNet Challenge: Evaluating the State of Semantic Code Search. CoRR, abs\/1909.09436","author":"Husain Hamel","year":"2019","unstructured":"Hamel Husain, Ho-Hsiang Wu, Tiferet Gazit, Miltiadis Allamanis, and Marc Brockschmidt. 2019. CodeSearchNet Challenge: Evaluating the State of Semantic Code Search. CoRR, abs\/1909.09436 (2019), arxiv:1909.09436"},{"key":"e_1_3_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/p16-1195"},{"key":"e_1_3_2_1_26_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/d18-1192"},{"key":"e_1_3_2_1_27_1","volume-title":"Contrastive Code Representation Learning. CoRR, abs\/2007.04973","author":"Jain Paras","year":"2020","unstructured":"Paras Jain, Ajay Jain, Tianjun Zhang, Pieter Abbeel, Joseph E. Gonzalez, and Ion Stoica. 2020. Contrastive Code Representation Learning. CoRR, abs\/2007.04973 (2020), arxiv:2007.04973."},{"key":"e_1_3_2_1_28_1","first-page":"1161","article-title":"CURE","volume":"2021","author":"Jiang Nan","year":"2021","unstructured":"Nan Jiang, Thibaud Lutellier, and Lin Tan. 2021. CURE: Code-Aware Neural Machine Translation for Automatic Program Repair. In ICSE 2021. 1161\u20131173.","journal-title":"Code-Aware Neural Machine Translation for Automatic Program Repair. In ICSE"},{"key":"e_1_3_2_1_29_1","volume-title":"Joey Tianyi Zhou, and Peter Szolovits","author":"Jin Di","year":"2020","unstructured":"Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2020. Is BERT really robust? A strong baseline for natural language attack on text classification and entailment. In AAAI 2020. 34, 8018\u20138025."},{"key":"e_1_3_2_1_30_1","doi-asserted-by":"publisher","DOI":"10.1162\/tacl_a_00300"},{"key":"e_1_3_2_1_31_1","volume-title":"International Conference on Machine Learning. 5110\u20135121","author":"Kanade Aditya","year":"2020","unstructured":"Aditya Kanade, Petros Maniatis, Gogul Balakrishnan, and Kensen Shi. 2020. Learning and evaluating contextual embedding of source code. In International Conference on Machine Learning. 5110\u20135121."},{"key":"e_1_3_2_1_32_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2017.320"},{"key":"e_1_3_2_1_33_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.acl-main.703"},{"key":"e_1_3_2_1_34_1","unstructured":"Hongyu Li, Seohyun Kim, and Satish Chandra. 2019. Neural code search evaluation dataset. arXiv preprint arXiv:1908.09804."},{"key":"e_1_3_2_1_35_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.emnlp-main.500"},{"key":"e_1_3_2_1_36_1","doi-asserted-by":"publisher","DOI":"10.1145\/3293882.3330574"},{"key":"e_1_3_2_1_37_1","doi-asserted-by":"publisher","DOI":"10.1145\/3447571"},{"key":"e_1_3_2_1_38_1","doi-asserted-by":"publisher","DOI":"10.1145\/3324884.3416591"},{"key":"e_1_3_2_1_39_1","volume-title":"RoBERTa: A Robustly Optimized BERT Pretraining Approach. CoRR, abs\/1907.11692","author":"Liu Yinhan","year":"2019","unstructured":"Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. CoRR, abs\/1907.11692 (2019), arxiv:1907.11692."},{"key":"e_1_3_2_1_40_1","doi-asserted-by":"publisher","DOI":"10.1145\/3395363.3397351"},{"key":"e_1_3_2_1_41_1","doi-asserted-by":"publisher","DOI":"10.1145\/3468264.3468580"},{"key":"e_1_3_2_1_42_1","first-page":"13","article-title":"ViLBERT","volume":"2019","author":"Lu Jiasen","year":"2019","unstructured":"Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks. In NeurIPS 2019. 13\u201323. https:\/\/proceedings.neurips.cc\/paper\/2019\/hash\/c74d97b01eae257e44aa9d5bade97baf-Abstract.html","journal-title":"Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks. In NeurIPS"},{"key":"e_1_3_2_1_43_1","volume-title":"Shengyu Fu, and Shujie Liu.","author":"Lu Shuai","year":"2021","unstructured":"Shuai Lu, Daya Guo, Shuo Ren, Junjie Huang, Alexey Svyatkovskiy, Ambrosio Blanco, Colin B. Clement, Dawn Drain, Daxin Jiang, Duyu Tang, Ge Li, Lidong Zhou, Linjun Shou, Long Zhou, Michele Tufano, Ming Gong, Ming Zhou, Nan Duan, Neel Sundaresan, Shao Kun Deng, Shengyu Fu, and Shujie Liu. 2021. CodeXGLUE: A Machine Learning Benchmark Dataset for Code Understanding and Generation. CoRR, abs\/2102.04664 (2021), arxiv:2102.04664"},{"key":"e_1_3_2_1_44_1","doi-asserted-by":"publisher","DOI":"10.1145\/3395363.3397369"},{"key":"e_1_3_2_1_45_1","doi-asserted-by":"crossref","unstructured":"Rishabh Maheshwary, Saket Maheshwary, and Vikram Pudi. 2021. Generating natural language attacks in a hard label black box setting.","DOI":"10.1609\/aaai.v35i15.17595"},{"key":"e_1_3_2_1_46_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.emnlp-main.661"},{"key":"e_1_3_2_1_47_1","doi-asserted-by":"publisher","DOI":"10.1145\/3196398.3196464"},{"key":"e_1_3_2_1_48_1","unstructured":"Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. 
Efficient estimation of word representations in vector space. arXiv:1301.3781."},{"key":"e_1_3_2_1_49_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.emnlp-demos.16"},{"key":"e_1_3_2_1_50_1","doi-asserted-by":"publisher","DOI":"10.5555\/3015812.3016002"},{"key":"e_1_3_2_1_51_1","doi-asserted-by":"publisher","DOI":"10.5555\/3015812.3016002"},{"key":"e_1_3_2_1_52_1","doi-asserted-by":"publisher","DOI":"10.1145\/3324884.3416545"},{"key":"e_1_3_2_1_53_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.nlp4prog-1.5"},{"key":"e_1_3_2_1_54_1","doi-asserted-by":"publisher","DOI":"10.1145\/3385412.3386001"},{"key":"e_1_3_2_1_55_1","volume-title":"Language models are unsupervised multitask learners. OpenAI blog, 1, 8","author":"Radford Alec","year":"2019","unstructured":"Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1, 8 (2019), 9."},{"key":"e_1_3_2_1_56_1","article-title":"Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer","volume":"21","author":"Raffel Colin","year":"2020","unstructured":"Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. J. Mach. Learn. Res., 21 (2020), 140:1\u2013140:67. http:\/\/jmlr.org\/papers\/v21\/20-074.html","journal-title":"J. Mach. Learn. Res."},{"key":"e_1_3_2_1_57_1","unstructured":"Goutham Ramakrishnan, Jordan Henkel, Zi Wang, Aws Albarghouthi, Somesh Jha, and Thomas Reps. 2020. Semantic robustness of models of source code. arXiv preprint arXiv:2002.03043."},{"key":"e_1_3_2_1_58_1","doi-asserted-by":"publisher","DOI":"10.1145\/2983990.2984041"},{"key":"e_1_3_2_1_59_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.acl-main.442"},{"key":"e_1_3_2_1_60_1","unstructured":"Baptiste Roziere, Marie-Anne Lachaux, Lowik Chanussot, and Guillaume Lample. 2020. Unsupervised Translation of Programming Languages. In NeurIPS."},{"key":"e_1_3_2_1_61_1","volume-title":"A survey of data leakage detection and prevention solutions","author":"Shabtai Asaf","unstructured":"Asaf Shabtai, Yuval Elovici, and Lior Rokach. 2012. A survey of data leakage detection and prevention solutions. Springer Science & Business Media."},{"key":"e_1_3_2_1_62_1","unstructured":"Ensheng Shi, Yanlin Wang, Lun Du, Junjie Chen, Shi Han, Hongyu Zhang, Dongmei Zhang, and Hongbin Sun. 2021. Neural Code Summarization: How Far Are We? arXiv preprint arXiv:2107.07112."},{"key":"e_1_3_2_1_63_1","unstructured":"Shashank Srikant, Sijia Liu, Tamara Mitrovska, Shiyu Chang, Quanfu Fan, Gaoyuan Zhang, and Una-May O\u2019Reilly. 2020. Generating Adversarial Computer Programs using Optimized Obfuscations. In ICLR."},{"key":"e_1_3_2_1_64_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2019.00756"},{"key":"e_1_3_2_1_65_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICSME.2014.77"},{"key":"e_1_3_2_1_66_1","doi-asserted-by":"publisher","DOI":"10.1145\/3340544"},{"key":"e_1_3_2_1_67_1","doi-asserted-by":"publisher","DOI":"10.1145\/3340544"},{"key":"e_1_3_2_1_68_1","volume-title":"Representation Learning with Contrastive Predictive Coding. CoRR, abs\/1807.03748","author":"van den Oord A\u00e4ron","year":"2018","unstructured":"A\u00e4ron van den Oord, Yazhe Li, and Oriol Vinyals. 2018. Representation Learning with Contrastive Predictive Coding. CoRR, abs\/1807.03748 (2018), arxiv:1807.03748."},{"key":"e_1_3_2_1_69_1","volume-title":"Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017","author":"Vaswani Ashish","year":"2017","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017. 5998\u20136008. 
https:\/\/proceedings.neurips.cc\/paper\/2017\/hash\/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html"},{"key":"e_1_3_2_1_70_1","doi-asserted-by":"publisher","DOI":"10.1109\/TSE.2020.2979701"},{"key":"e_1_3_2_1_71_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.emnlp-main.685"},{"key":"e_1_3_2_1_72_1","unstructured":"Anjiang Wei, Yinlin Deng, Chenyuan Yang, and Lingming Zhang. 2022. Free Lunch for Testing: Fuzzing Deep-Learning Libraries from Open Source. In ICSE."},{"key":"e_1_3_2_1_73_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.acl-main.538"},{"key":"e_1_3_2_1_74_1","doi-asserted-by":"crossref","unstructured":"Han Xu, Zhang Zhengyan, Ding Ning, Gu Yuxian, Liu Xiao, Huo Yuqi, Qiu Jiezhong, Zhang Liang, Han Wentao, and Huang Minlie. 2021. Pre-Trained Models: Past, Present and Future. arXiv preprint arXiv:2106.07139.","DOI":"10.1016\/j.aiopen.2021.08.002"},{"key":"e_1_3_2_1_75_1","doi-asserted-by":"publisher","DOI":"10.1145\/3178876.3186081"},{"key":"e_1_3_2_1_76_1","first-page":"1","article-title":"Adversarial examples for models of code","volume":"4","author":"Yefet Noam","year":"2020","unstructured":"Noam Yefet, Uri Alon, and Eran Yahav. 2020. Adversarial examples for models of code. 
OOPSLA, 4 (2020), 1\u201330.","journal-title":"OOPSLA"},{"key":"e_1_3_2_1_77_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.blackboxnlp-1.30"},{"key":"e_1_3_2_1_78_1","doi-asserted-by":"publisher","DOI":"10.1145\/3460319.3464819"},{"key":"e_1_3_2_1_79_1","doi-asserted-by":"publisher","DOI":"10.1145\/3377811.3380383"},{"key":"e_1_3_2_1_80_1","doi-asserted-by":"publisher","DOI":"10.1145\/2544173.2509551"},{"key":"e_1_3_2_1_81_1","doi-asserted-by":"publisher","DOI":"10.1145\/3238147.3238187"},{"key":"e_1_3_2_1_82_1","doi-asserted-by":"publisher","DOI":"10.1145\/3377811.3380422"},{"key":"e_1_3_2_1_83_1","volume-title":"Devign: Effective vulnerability identification by learning comprehensive program semantics via graph neural networks. arXiv preprint arXiv:1909.03496.","author":"Zhou Yaqin","year":"2019","unstructured":"Yaqin Zhou, Shangqing Liu, Jingkai Siow, Xiaoning Du, and Yang Liu. 2019. Devign: Effective vulnerability identification by learning comprehensive program semantics via graph neural networks. arXiv preprint arXiv:1909.03496."},{"key":"e_1_3_2_1_84_1","volume-title":"Xiaoning Du, and Yang Liu.","author":"Zhou Yaqin","year":"2019","unstructured":"Yaqin Zhou, Shangqing Liu, Jing Kai Siow, Xiaoning Du, and Yang Liu. 2019. Devign: Effective Vulnerability Identification by Learning Comprehensive Program Semantics via Graph Neural Networks. In NeurIPS 2019. 10197\u201310207."}],"event":{"name":"ISSTA '22: 31st ACM SIGSOFT International Symposium on Software Testing and Analysis","location":"Virtual South Korea","acronym":"ISSTA '22","sponsor":["SIGSOFT ACM Special Interest Group on Software Engineering"]},"container-title":["Proceedings of the 31st ACM SIGSOFT International Symposium on Software Testing and Analysis"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3533767.3534390","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3533767.3534390","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T18:43:41Z","timestamp":1750272221000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3533767.3534390"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,7,18]]},"references-count":84,"alternative-id":["10.1145\/3533767.3534390","10.1145\/3533767"],"URL":"https:\/\/doi.org\/10.1145\/3533767.3534390","relation":{},"subject":[],"published":{"date-parts":[[2022,7,18]]},"assertion":[{"value":"2022-07-18","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}