{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,10]],"date-time":"2026-04-10T21:21:24Z","timestamp":1775856084194,"version":"3.50.1"},"reference-count":66,"publisher":"Springer Science and Business Media LLC","issue":"3","license":[{"start":{"date-parts":[[2024,2,29]],"date-time":"2024-02-29T00:00:00Z","timestamp":1709164800000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2024,2,29]],"date-time":"2024-02-29T00:00:00Z","timestamp":1709164800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100002386","name":"Cairo University","doi-asserted-by":"crossref","id":[{"id":"10.13039\/501100002386","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Complex Intell. Syst."],"published-print":{"date-parts":[[2024,6]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Code assistance refers to the utilization of various tools, techniques, and models to help developers in the process of software development. As coding tasks become increasingly complex, code assistant plays a pivotal role in enhancing developer productivity, reducing errors, and facilitating a more efficient coding workflow. This assistance can manifest in various forms, including code autocompletion, error detection and correction, code generation, documentation support, and context-aware suggestions. Language models have emerged as integral components of code assistance, offering developers the capability to receive intelligent suggestions, generate code snippets, and enhance overall coding proficiency. In this paper, we propose new hybrid models for code generation by leveraging pre-trained language models BERT, RoBERTa, ELECTRA, and LUKE with the Marian Causal Language Model. Selecting these models based on their strong performance in various natural language processing tasks. We evaluate the performance of these models on two datasets CoNaLa and DJANGO and compare them to existing state-of-the-art models. We aim to investigate the potential of pre-trained transformer language models to revolutionize code generation, offering improved precision and efficiency in navigating complex coding scenarios. Additionally, conducting error analysis and refining the generated code. Our results show that these models, when combined with the Marian Decoder, significantly improve code generation accuracy and efficiency. Notably, the RoBERTaMarian model achieved a maximum BLEU score of 35.74 and an exact match accuracy of 13.8% on CoNaLa, while LUKE-Marian attained a BLEU score of 89.34 and an exact match accuracy of 78.50% on DJANGO. 
Implementation of this work is available at <jats:ext-link xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" ext-link-type=\"uri\" xlink:href=\"https:\/\/github.com\/AhmedSSoliman\/Leveraging-Pretrained-Language-Models-for-Code-Generation\">https:\/\/github.com\/AhmedSSoliman\/Leveraging-Pretrained-Language-Models-for-Code-Generation<\/jats:ext-link>.<\/jats:p>","DOI":"10.1007\/s40747-024-01373-8","type":"journal-article","created":{"date-parts":[[2024,2,29]],"date-time":"2024-02-29T09:03:13Z","timestamp":1709197393000},"page":"3955-3980","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":12,"title":["Leveraging pre-trained language models for code generation"],"prefix":"10.1007","volume":"10","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-4752-3557","authenticated-orcid":false,"given":"Ahmed","family":"Soliman","sequence":"first","affiliation":[]},{"given":"Samir","family":"Shaheen","sequence":"additional","affiliation":[]},{"given":"Mayada","family":"Hadhoud","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,2,29]]},"reference":[{"issue":"7553","key":"1373_CR1","doi-asserted-by":"publisher","first-page":"436","DOI":"10.1038\/nature14539","volume":"521","author":"Y LeCun","year":"2015","unstructured":"LeCun Y, Bengio Y, Hinton G (2015) Deep learning. Nature 521(7553):436\u2013444","journal-title":"Nature"},{"key":"1373_CR2","unstructured":"Dai AM, Le QV (2015) Semi-supervised sequence learning. ArXiv arXiv:1511.01432"},{"key":"1373_CR3","doi-asserted-by":"publisher","first-page":"1407","DOI":"10.1162\/tacl_x_00455","volume":"9","author":"Y Elazar","year":"2021","unstructured":"Elazar Y, Kassner N, Ravfogel S, Ravichander A, Hovy E, Sch\u00fctze H, Goldberg Y (2021) Erratum: measuring and improving consistency in pretrained language models. Trans Assoc Comput Linguist 9:1407. https:\/\/doi.org\/10.1162\/tacl_x_00455","journal-title":"Trans Assoc Comput Linguist"},{"key":"1373_CR4","unstructured":"Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Kaiser \u0141, Polosukhin I (2017) Attention is all you need. In: Advances in neural information processing systems, vol 30"},{"key":"1373_CR5","doi-asserted-by":"crossref","unstructured":"Peters ME, Neumann M, Iyyer M, Gardner M, Clark C, Lee K, Zettlemoyer L (2018) Deep contextualized word representations. ArXiv arXiv:1802.05365","DOI":"10.18653\/v1\/N18-1202"},{"key":"1373_CR6","doi-asserted-by":"crossref","unstructured":"Howard J, Ruder S (2018) Universal language model fine-tuning for text classification. In: Annual meeting of the association for computational linguistics. https:\/\/api.semanticscholar.org\/CorpusID:40100965","DOI":"10.18653\/v1\/P18-1031"},{"key":"1373_CR7","unstructured":"Raffel C, Shazeer NM, Roberts A, Lee K, Narang S, Matena M, Zhou Y, Li W, Liu PJ (2020) Exploring the limits of transfer learning with a unified text-to-text transformer. ArXiv arXiv:1910.10683"},{"key":"1373_CR8","doi-asserted-by":"crossref","unstructured":"Lewis M, Liu Y, Goyal N, Ghazvininejad M, Mohamed A, Levy O, Stoyanov V, Zettlemoyer L (2019) Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. 
arXiv preprint arXiv:1910.13461","DOI":"10.18653\/v1\/2020.acl-main.703"},{"key":"1373_CR9","doi-asserted-by":"publisher","first-page":"264","DOI":"10.1162\/tacl_a_00313","volume":"8","author":"S Rothe","year":"2020","unstructured":"Rothe S, Narayan S, Severyn A (2020) Leveraging pre-trained checkpoints for sequence generation tasks. Trans Assoc Comput Linguist 8:264\u2013280","journal-title":"Trans Assoc Comput Linguist"},{"key":"1373_CR10","doi-asserted-by":"publisher","unstructured":"LeClair A, Jiang S, McMillan C (2019) A neural model for generating natural language summaries of program subroutines. In: Proceedings of the 41st international conference on software engineering. ICSE \u201919. IEEE Press, pp 795\u2013806. https:\/\/doi.org\/10.1109\/ICSE.2019.00087","DOI":"10.1109\/ICSE.2019.00087"},{"key":"1373_CR11","doi-asserted-by":"publisher","first-page":"3117","DOI":"10.32604\/cmc.2022.019884","volume":"70","author":"W Gad","year":"2021","unstructured":"Gad W, Alokla A, Nazih W, Salem AB, Aref M (2021) Dlbt: deep learning-based transformer to generate pseudo-code from source code. CMC 70:3117\u20133123. https:\/\/doi.org\/10.32604\/cmc.2022.019884","journal-title":"CMC"},{"issue":"4","key":"1373_CR12","doi-asserted-by":"publisher","first-page":"604","DOI":"10.3390\/math10040604","volume":"10","author":"A Alokla","year":"2022","unstructured":"Alokla A, Gad W, Nazih W, Aref M, Salem AB (2022) Retrieval-based transformer pseudocode generation. Mathematics 10(4):604. https:\/\/doi.org\/10.3390\/math10040604","journal-title":"Mathematics"},{"key":"1373_CR13","doi-asserted-by":"publisher","first-page":"301","DOI":"10.1016\/j.ijcce.2023.09.002","volume":"4","author":"P Kaur","year":"2023","unstructured":"Kaur P, Kumar H, Kaushal S (2023) Technology-assisted language learning adaptive systems: a comprehensive review. Int J Cogn Comput Eng 4:301\u2013313. https:\/\/doi.org\/10.1016\/j.ijcce.2023.09.002","journal-title":"Int J Cogn Comput Eng"},{"key":"1373_CR14","doi-asserted-by":"publisher","first-page":"165","DOI":"10.1016\/j.ijcce.2021.09.003","volume":"2","author":"M Javidpanah","year":"2021","unstructured":"Javidpanah M, Javadpour A, Rezaei S (2021) ROOA: CloudIDE framework for extension development. Int J Cogn Comput Eng 2:165\u2013170. https:\/\/doi.org\/10.1016\/j.ijcce.2021.09.003","journal-title":"Int J Cogn Comput Eng"},{"key":"1373_CR15","doi-asserted-by":"publisher","first-page":"47","DOI":"10.1007\/11561347_5","volume-title":"Generative programming and component engineering","author":"A Moss","year":"2005","unstructured":"Moss A, Muller H (2005) Efficient code generation for a domain specific language. In: Gl\u00fcck R, Lowry M (eds) Generative programming and component engineering. Springer, Berlin, pp 47\u201362"},{"key":"1373_CR16","doi-asserted-by":"publisher","first-page":"19","DOI":"10.1007\/s10664-023-10385-w","volume":"29","author":"G Guizzo","year":"2023","unstructured":"Guizzo G, Zhang J, Sarro F, Treude C, Harman M (2023) Mutation analysis for evaluating code translation. Empir Softw Eng 29:19","journal-title":"Empir Softw Eng"},{"key":"1373_CR17","unstructured":"Athiwaratkun B, Gouda SK, Wang Z, Li X, Tian Y, Tan M, Ahmad WU, Wang S, Sun Q, Shang M, Gonugondla SK, Ding H, Kumar V, Fulton N, Farahani A, Jain S, Giaquinto R, Qian H, Ramanathan MK, Nallapati R, Ray B, Bhatia P, Sengupta S, Roth D, Xiang B (2023) Multi-lingual evaluation of code generation models. 
arXiv preprint arXiv:2210.14868"},{"key":"1373_CR18","doi-asserted-by":"crossref","unstructured":"Dahal S, Maharana A, Bansal M (2021) Analysis of tree-structured architectures for code generation. In: Findings of the association for computational linguistics: ACL-IJCNLP 2021, pp 4382\u20134391","DOI":"10.18653\/v1\/2021.findings-acl.384"},{"key":"1373_CR19","doi-asserted-by":"publisher","first-page":"11699","DOI":"10.3390\/app112411699","volume":"24","author":"P Qin","year":"2021","unstructured":"Qin P, Tan W, Guo J, Shen B, Tang Q (2021) Achieving semantic consistency for multilingual sentence representation using an explainable machine natural language parser (mparser). Appl Sci 24:11699","journal-title":"Appl Sci"},{"key":"1373_CR20","doi-asserted-by":"crossref","unstructured":"Tang Z, Shen X, Li C, Ge J, Huang L, Zhu Z, Luo B (2022) Ast-trans: code summarization with efficient tree-structured attention. In: 2022 IEEE\/ACM 44th international conference on software engineering (ICSE), pp 150\u2013162","DOI":"10.1145\/3510003.3510224"},{"key":"1373_CR21","doi-asserted-by":"crossref","unstructured":"Shin R, Lin CH, Thomson S, Chen C, Roy S, Platanios EA, Pauls A, Klein D, Eisner J, Van\u00a0Durme B (2021) Constrained language models yield few-shot semantic parsers. arXiv preprint arXiv:2104.08768","DOI":"10.18653\/v1\/2021.emnlp-main.608"},{"key":"1373_CR22","doi-asserted-by":"crossref","unstructured":"Dong L, Lapata M (2016) Language to logical form with neural attention. arXiv preprint arXiv:1601.01280","DOI":"10.18653\/v1\/P16-1004"},{"key":"1373_CR23","doi-asserted-by":"crossref","unstructured":"Yin P, Neubig G (2017) A syntactic neural model for general-purpose code generation. arXiv preprint arXiv:1704.01696","DOI":"10.18653\/v1\/P17-1041"},{"key":"1373_CR24","doi-asserted-by":"crossref","unstructured":"Rabinovich M, Stern M, Klein D (2017) Abstract syntax networks for code generation and semantic parsing. arXiv preprint arXiv:1704.07535","DOI":"10.18653\/v1\/P17-1105"},{"key":"1373_CR25","doi-asserted-by":"crossref","unstructured":"Yin P, Neubig G (2018) Tranx: A transition-based neural abstract syntax parser for semantic parsing and code generation. arXiv preprint arXiv:1810.02720","DOI":"10.18653\/v1\/D18-2002"},{"key":"1373_CR26","doi-asserted-by":"crossref","unstructured":"Yin P, Neubig G (2019) Reranking for neural semantic parsing. In: Proceedings of the 57th annual meeting of the association for computational linguistics","DOI":"10.18653\/v1\/P19-1447"},{"key":"1373_CR27","unstructured":"Shin EC, Allamanis M, Brockschmidt M, Polozov A (2019) Program synthesis and semantic parsing with learned code idioms. In: Advances in neural information processing systems, vol 32"},{"key":"1373_CR28","doi-asserted-by":"crossref","unstructured":"Sun Z, Zhu Q, Xiong Y, Sun Y, Mou L, Zhang L (2020) Treegen: a tree-based transformer architecture for code generation. In: Proceedings of the AAAI conference on artificial intelligence, vol 34, pp 8984\u20138991","DOI":"10.1609\/aaai.v34i05.6430"},{"key":"1373_CR29","doi-asserted-by":"crossref","unstructured":"Xu FF, Jiang Z, Yin P, Vasilescu B, Neubig G (2020) Incorporating external knowledge through pre-training for natural language to code generation. 
arXiv preprint arXiv:2004.09015","DOI":"10.18653\/v1\/2020.acl-main.538"},{"key":"1373_CR30","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1007\/s42979-022-01573-4","volume":"4","author":"K Lano","year":"2023","unstructured":"Lano K, Xue Q (2023) Code generation by example using symbolic machine learning. SN Comput Sci 4:1\u201323","journal-title":"SN Comput Sci"},{"key":"1373_CR31","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3383458","volume":"53","author":"THM Le","year":"2020","unstructured":"Le THM, Chen H, Babar MA (2020) Deep learning for source code modeling and generation. ACM Comput Surv (CSUR) 53:1\u201338","journal-title":"ACM Comput Surv (CSUR)"},{"key":"1373_CR32","doi-asserted-by":"crossref","unstructured":"Norouzi S, Tang K, Cao Y (2021) Code generation from natural language with less prior knowledge and more monolingual data. In: Proceedings of the 59th Annual meeting of the association for computational linguistics and the 11th international joint conference on natural language processing (volume 2: short papers), pp 776\u2013785","DOI":"10.18653\/v1\/2021.acl-short.98"},{"key":"1373_CR33","doi-asserted-by":"crossref","unstructured":"Orlanski G, Gittens A (2021) Reading stackoverflow encourages cheating: adding question text improves extractive code generation. arXiv preprint arXiv:2106.04447","DOI":"10.18653\/v1\/2021.nlp4prog-1.8"},{"key":"1373_CR34","doi-asserted-by":"crossref","unstructured":"Beau N, Crabb\u00e9 B (2022) The impact of lexical and grammatical processing on generating code from natural language. arXiv preprint arXiv:2202.13972","DOI":"10.18653\/v1\/2022.findings-acl.173"},{"key":"1373_CR35","doi-asserted-by":"crossref","unstructured":"Wang Z, Cuenca G, Zhou S, Xu FF, Neubig G (2022) Mconala: a benchmark for code generation from multiple natural languages. arXiv preprint arXiv:2203.08388","DOI":"10.18653\/v1\/2023.findings-eacl.20"},{"key":"1373_CR36","unstructured":"Kusupati U, Ailavarapu VRT (2022) Natural language to code using transformers. ArXiv arXiv:2202.00367"},{"key":"1373_CR37","unstructured":"Al-Hossami E, Shaikh S (2022) A survey on artificial intelligence for source code: a dialogue systems perspective. ArXiv arXiv:2202.04847"},{"key":"1373_CR38","first-page":"1","volume":"2022","author":"P Ni","year":"2022","unstructured":"Ni P, Okhrati R, Guan S, Chang VI (2022) Knowledge graph and deep learning-based text-to-graphQL model for intelligent medical consultation chatbot. Inf Syst Front 2022:1\u201320","journal-title":"Inf Syst Front"},{"key":"1373_CR39","unstructured":"Kamath A, Das R (2018) A survey on semantic parsing. ArXiv arXiv:1812.00978"},{"key":"1373_CR40","doi-asserted-by":"crossref","unstructured":"Gu J, Lu Z, Li H, Li VOK (2016) Incorporating copying mechanism in sequence-to-sequence learning. ArXiv arXiv:1603.06393","DOI":"10.18653\/v1\/P16-1154"},{"key":"1373_CR41","doi-asserted-by":"crossref","unstructured":"Iyer S, Konstas I, Cheung A, Zettlemoyer L (2018) Mapping language to code in programmatic context. In: Conference on empirical methods in natural language processing. https:\/\/api.semanticscholar.org\/CorpusID:52125417","DOI":"10.18653\/v1\/D18-1192"},{"key":"1373_CR42","doi-asserted-by":"crossref","unstructured":"Xiao C, Dymetman M, Gardent C (2016) Sequence-based structured prediction for semantic parsing. In: Annual meeting of the association for computational linguistics. 
https:\/\/api.semanticscholar.org\/CorpusID:16911296","DOI":"10.18653\/v1\/P16-1127"},{"key":"1373_CR43","doi-asserted-by":"crossref","unstructured":"Krishnamurthy J, Dasigi P, Gardner M (2017) Neural semantic parsing with type constraints for semi-structured tables. In: Conference on empirical methods in natural language processing. https:\/\/api.semanticscholar.org\/CorpusID:1675452","DOI":"10.18653\/v1\/D17-1160"},{"key":"1373_CR44","doi-asserted-by":"crossref","unstructured":"Ling W, Blunsom P, Grefenstette E, Hermann KM, Kocisk\u00fd T, Wang F, Senior AW (2016) Latent predictor networks for code generation. ArXiv arXiv:1603.06744","DOI":"10.18653\/v1\/P16-1057"},{"key":"1373_CR45","doi-asserted-by":"crossref","unstructured":"Iyer S, Cheung A, Zettlemoyer L (2019) Learning programmatic idioms for scalable semantic parsing. In: Conference on empirical methods in natural language processing. https:\/\/api.semanticscholar.org\/CorpusID:125969731","DOI":"10.18653\/v1\/D19-1545"},{"key":"1373_CR46","unstructured":"Nye M, Hewitt LB, Tenenbaum JB, Solar-Lezama A (2019) Learning to infer program sketches. ArXiv arXiv:1902.06349"},{"key":"1373_CR47","doi-asserted-by":"crossref","unstructured":"Dong L, Quirk C, Lapata M (2018) Confidence modeling for neural semantic parsing. In: Annual meeting of the association for computational linguistics. https:\/\/api.semanticscholar.org\/CorpusID:13686145","DOI":"10.18653\/v1\/P18-1069"},{"key":"1373_CR48","unstructured":"Chaurasia S, Mooney RJ (2017) Dialog for language to code. In: International joint conference on natural language processing. https:\/\/api.semanticscholar.org\/CorpusID:217279086"},{"key":"1373_CR49","doi-asserted-by":"crossref","unstructured":"Andreas J, Bufe J, Burkett D, Chen CC, Clausman J, Crawford J, Crim K, DeLoach J, Dorner L, Eisner J, Fang H, Guo A, Hall DLW, Hayes KD, Hill K, Ho D, Iwaszuk W, Jha S, Klein D, Krishnamurthy J, Lanman T, Liang P, Lin CH, Lintsbakh I, McGovern A, Nisnevich A, Pauls A, Petters D, Read B, Roth D, Roy S, Rusak J, Short BA, Slomin D, Snyder B, Striplin S, Su Y, Tellman Z, Thomson S, Vorobev AA, Witoszko I, Wolfe J, Wray AG, Zhang Y, Zotov A (2020) Task-oriented dialogue as dataflow synthesis. Trans Assoc Comput Linguist 8:556\u2013571","DOI":"10.1162\/tacl_a_00333"},{"key":"1373_CR50","doi-asserted-by":"crossref","unstructured":"Polozov O, Gulwani S (2015) Flashmeta: a framework for inductive program synthesis. In: Proceedings of the 2015 ACM SIGPLAN international conference on object-oriented programming, systems, languages, and applications","DOI":"10.1145\/2814270.2814310"},{"key":"1373_CR51","unstructured":"Parisotto E, Mohamed A, Singh R, Li L, Zhou D, Kohli P (2017) Neuro-symbolic program synthesis. ArXiv arXiv:1611.01855"},{"key":"1373_CR52","unstructured":"Bhupatiraju S, Singh R, Mohamed Ar, Kohli P (2017) Deep api programmer: Learning to program with apis. ArXiv arXiv:1704.04327"},{"key":"1373_CR53","unstructured":"Balog M, Gaunt AL, Brockschmidt M, Nowozin S, Tarlow D (2017) Deepcoder: learning to write programs. ArXiv arXiv:1611.01989"},{"key":"1373_CR54","unstructured":"Devlin J, Uesato J, Bhupatiraju S, Singh R, Mohamed Ar, Kohli P (2017) Robustfill: Neural program learning under noisy i\/o. ArXiv arXiv:1703.07469"},{"key":"1373_CR55","unstructured":"Xu Y, Dai L, Singh U, Zhang K, Tu Z (2019) Neural program synthesis by self-learning. 
ArXiv arXiv:1910.05865"},{"key":"1373_CR56","unstructured":"Polosukhin I, Skidanov A (2018) Neural program search: Solving data processing tasks from description and examples. In: ICLR 2018"},{"key":"1373_CR57","doi-asserted-by":"publisher","first-page":"1772","DOI":"10.3390\/electronics12081772","volume":"12","author":"T Li","year":"2023","unstructured":"Li T, Zhang S, Li Z (2023) Sp-nlg: a semantic-parsing-guided natural language generation framework. Electronics 12:1772","journal-title":"Electronics"},{"key":"1373_CR58","unstructured":"Brown TB, Mann B, Ryder N, Subbiah M, Kaplan J, Dhariwal P, Neelakantan A, Shyam P, Sastry G, Askell A et al (2020) Language models are few-shot learners. arXiv preprint arXiv:2005.14165"},{"key":"1373_CR59","unstructured":"Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, Levy O, Lewis M, Zettlemoyer L, Stoyanov V (2019) Roberta: A robustly optimized bert pretraining approach. ArXiv arXiv:1907.11692"},{"key":"1373_CR60","unstructured":"Devlin J, Chang MW, Lee K, Toutanova K (2019) Bert: Pre-training of deep bidirectional transformers for language understanding. ArXiv arXiv:1810.04805"},{"key":"1373_CR61","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1186\/s44147-022-00159-4","volume":"69","author":"AS Soliman","year":"2022","unstructured":"Soliman AS, Hadhoud MM, Shaheen SI (2022) Mariancg: a code generation transformer model inspired by machine translation. J Eng Appl Sci 69:1\u201323","journal-title":"J Eng Appl Sci"},{"key":"1373_CR62","unstructured":"Sanh V, Debut L, Chaumond J, Wolf T (2019) Distilbert, a distilled version of BERT: smaller, faster, cheaper and lighter. CoRR arXiv:1910.01108"},{"key":"1373_CR63","unstructured":"Clark K, Luong MT, Le QV, Manning CD (2020) Electra: Pre-training text encoders as discriminators rather than generators. ArXiv arXiv:2003.10555"},{"key":"1373_CR64","doi-asserted-by":"crossref","unstructured":"Yamada I, Asai A, Shindo H, Takeda H, Matsumoto Y (2020) Luke: deep contextualized entity representations with entity-aware self-attention. In: EMNLP","DOI":"10.18653\/v1\/2020.emnlp-main.523"},{"key":"1373_CR65","doi-asserted-by":"publisher","unstructured":"Ross SI, Martinez F, Houde S, Muller M, Weisz JD (2023) The programmer\u2019s assistant: Conversational interaction with a large language model for software development. 
https:\/\/doi.org\/10.1145\/3581641.3584037","DOI":"10.1145\/3581641.3584037"},{"key":"1373_CR66","unstructured":"Poldrack RA, Lu T, Begu\u0161 G (2023) AI-assisted coding: experiments with GPT-4"}],"container-title":["Complex & Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-024-01373-8.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s40747-024-01373-8\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-024-01373-8.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,5,16]],"date-time":"2024-05-16T18:20:41Z","timestamp":1715883641000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s40747-024-01373-8"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,2,29]]},"references-count":66,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2024,6]]}},"alternative-id":["1373"],"URL":"https:\/\/doi.org\/10.1007\/s40747-024-01373-8","relation":{},"ISSN":["2199-4536","2198-6053"],"issn-type":[{"value":"2199-4536","type":"print"},{"value":"2198-6053","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,2,29]]},"assertion":[{"value":"13 December 2023","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"31 January 2024","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"29 February 2024","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"Not applicable.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}},{"value":"Not applicable.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethics approval"}},{"value":"Not applicable.","order":4,"name":"Ethics","group":{"name":"EthicsHeading","label":"Consent to participate"}},{"value":"Not applicable.","order":5,"name":"Ethics","group":{"name":"EthicsHeading","label":"Consent for publication"}},{"value":"Implementation and availability of the code are in the following repository: https:\/\/github.com\/AhmedSSoliman\/Leveraging-Pretrained-Language-Models-for-Code-Generation.","order":6,"name":"Ethics","group":{"name":"EthicsHeading","label":"Code availability"}}]}}
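
The abstract above describes hybrid models that pair a pre-trained language-model encoder (BERT, RoBERTa, ELECTRA, or LUKE) with a Marian causal-LM decoder. Below is a minimal sketch of how such a pairing can be wired together with the Hugging Face transformers library; the checkpoint names, token-id glue, and generation call are illustrative assumptions, not the authors' exact configuration (the linked repository holds that).

# Hedged sketch: composing a RoBERTa encoder with a Marian decoder via
# transformers' EncoderDecoderModel, in the spirit of the RoBERTa-Marian
# hybrid described in the abstract. Checkpoints below are illustrative
# assumptions, not the ones used in the paper.
from transformers import AutoTokenizer, EncoderDecoderModel

# from_encoder_decoder_pretrained loads the decoder as a causal LM
# (MarianForCausalLM) and adds cross-attention over the encoder states.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "roberta-base",                 # encoder for the natural-language intent
    "Helsinki-NLP/opus-mt-en-de",   # Marian checkpoint supplying the decoder
)

enc_tok = AutoTokenizer.from_pretrained("roberta-base")
dec_tok = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")

# generate() needs these ids on the top-level config; Marian starts
# decoding from its pad token.
model.config.decoder_start_token_id = dec_tok.pad_token_id
model.config.pad_token_id = enc_tok.pad_token_id

# After fine-tuning on NL-intent -> code pairs (e.g. CoNaLa or DJANGO),
# generation would look like this; without fine-tuning the output is noise.
inputs = enc_tok("sort list `x` in descending order", return_tensors="pt")
ids = model.generate(**inputs, max_length=64)
print(dec_tok.batch_decode(ids, skip_special_tokens=True)[0])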