{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,13]],"date-time":"2026-04-13T15:32:48Z","timestamp":1776094368969,"version":"3.50.1"},"reference-count":54,"publisher":"Association for Computing Machinery (ACM)","issue":"1","license":[{"start":{"date-parts":[[2024,12,28]],"date-time":"2024-12-28T00:00:00Z","timestamp":1735344000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["62072309"],"award-info":[{"award-number":["62072309"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"name":"CAS Project for Young Scientists in Basic Research","award":["YSBR-040"],"award-info":[{"award-number":["YSBR-040"]}]},{"name":"ISCAS New Cultivation Project","award":["ISCAS-PYFX-202201"],"award-info":[{"award-number":["ISCAS-PYFX-202201"]}]},{"name":"ISCAS Fundamental Research Project","award":["ISCAS-JCZD-202302"],"award-info":[{"award-number":["ISCAS-JCZD-202302"]}]},{"name":"Research Foundation from NUDT","award":["ZK24-05"],"award-info":[{"award-number":["ZK24-05"]}]},{"name":"Xiaoning Du\u2019s Google Research Scholar Program Award, and National Research Foundation","award":["NRF-NRFI08-2022-0002"],"award-info":[{"award-number":["NRF-NRFI08-2022-0002"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Softw. Eng. Methodol."],"published-print":{"date-parts":[[2025,1,31]]},"abstract":"<jats:p>Currently, large pre-trained language models are widely applied in neural code completion systems. Though large code models significantly outperform their smaller counterparts, around 70% of displayed code completions from Github Copilot are not accepted by developers. Being reviewed but not accepted, their help to developer productivity is considerably limited and may conversely aggravate the workload of developers, as the code completions are automatically and actively generated in state-of-the-art code completion systems as developers type out once the service is enabled. Even worse, considering the high cost of the large code models, it is a huge waste of computing resources and energy, which severely goes against the sustainable development principle of AI technologies. However, such waste has never been realized, not to mention effectively addressed, in the research community for neural code completion. Hence, preventing such unhelpful code completions from happening in a cost-friendly way is of urgent need. To fill this significant gap, we first investigate the prompts of unhelpful code completions, called \u201clow-return prompts.\u201d We empirically identify four observable patterns in low-return prompts, each lacking necessary information, making it difficult to address through enhancements to the model\u2019s accuracy alone. This demonstrates the feasibility of identifying such low-return prompts based on the prompts themselves. Motivated by this finding, we propose an early-rejection mechanism to turn down low-return prompts by foretelling the code completion qualities. The prompts that are estimated to receive unhelpful code completions will not be sent to the model. Furthermore, we investigated five types of estimators to demonstrate the feasibility of the mechanism. 
The experimental results show that the estimator can reject 20% of code completion requests with a 97.4% precision. To the best of our knowledge, it is the first systemic approach to address the problem of unhelpful code completions and this work also sheds light on an important research direction of large code models.<\/jats:p>","DOI":"10.1145\/3688831","type":"journal-article","created":{"date-parts":[[2024,8,16]],"date-time":"2024-08-16T15:45:14Z","timestamp":1723823114000},"page":"1-22","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":4,"title":["Don\u2019t Complete It! Preventing Unhelpful Code Completion for Productive and Sustainable Neural Code Completion Systems"],"prefix":"10.1145","volume":"34","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-5393-7858","authenticated-orcid":false,"given":"Zhensu","family":"Sun","sequence":"first","affiliation":[{"name":"Singapore Management University, Singapore, Singapore"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3728-9541","authenticated-orcid":false,"given":"Xiaoning","family":"Du","sequence":"additional","affiliation":[{"name":"Monash University, Melbourne, Australia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0581-2679","authenticated-orcid":false,"given":"Fu","family":"Song","sequence":"additional","affiliation":[{"name":"Key Laboratory of System Software (Chinese Academy of Sciences), State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1469-2063","authenticated-orcid":false,"given":"Shangwen","family":"Wang","sequence":"additional","affiliation":[{"name":"National University of Defense Technology, Changsha, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1220-365X","authenticated-orcid":false,"given":"Mingze","family":"Ni","sequence":"additional","affiliation":[{"name":"University of Technology Sydney, Sydney, Australia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2990-1614","authenticated-orcid":false,"given":"Li","family":"Li","sequence":"additional","affiliation":[{"name":"Beihang University, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4367-7201","authenticated-orcid":false,"given":"David","family":"Lo","sequence":"additional","affiliation":[{"name":"Singapore Management University, Singapore, Singapore"}]}],"member":"320","published-online":{"date-parts":[[2024,12,28]]},"reference":[{"key":"e_1_3_1_2_2","unstructured":"Oracle. 1999. Code Conventions for the Java Programming Language: 9. Naming Conventions. Retrieved December 28 2022 from https:\/\/www.oracle.com\/java\/technologies\/javase\/codeconventions-namingconventions.html"},{"key":"e_1_3_1_3_2","unstructured":"aiXcoder. 2022. Retrieved December 28 2022 from https:\/\/www.aixcoder.com\/en\/"},{"key":"e_1_3_1_4_2","unstructured":"TabNine. 2022. Code Faster with AI Completions. Retrieved December 28 2022 from https:\/\/www.tabnine.com\/"},{"key":"e_1_3_1_5_2","unstructured":"GitHub Copilot. 2022. Your AI Pair Programmer. Retrieved December 28 2022 from https:\/\/copilot.github.com\/"},{"key":"e_1_3_1_6_2","unstructured":"Amazon. 2022. ML-Powered Coding Companion \u2013 Amazon CodeWhisperer \u2013 Amazon Web Services. Retrieved December 28 2022 from https:\/\/aws.amazon.com\/codewhisperer\/"},{"key":"e_1_3_1_7_2","unstructured":"Google DeepMind AlphaCode Team. 2023. AlphaCode 2 Technical Report. 
Retrieved from https:\/\/storage.googleapis.com\/deepmind-media\/AlphaCode2\/AlphaCode2_Tech_Report.pdf"},{"key":"e_1_3_1_8_2","unstructured":"Ibrahim Alshubaily. 2021. Efficient neural architecture search with performance prediction. arXiv:2108.01854. Retrieved from https:\/\/arxiv.org\/abs\/2108.01854"},{"key":"e_1_3_1_9_2","unstructured":"Shraddha Barke Michael B. James and Nadia Polikarpova. 2022. Grounded copilot: How programmers interact with code-generating models. arXiv:2206.15000. Retrieved from https:\/\/arxiv.org\/abs\/2108.01854"},{"key":"e_1_3_1_10_2","unstructured":"Mohammad Bavarian Heewoo Jun Nikolas Tezak John Schulman Christine McLeavey Jerry Tworek and Mark Chen. 2022. Efficient training of language models to fill in the middle. arXiv:2207.14255. Retrieved from https:\/\/arxiv.org\/abs\/2207.14255"},{"key":"e_1_3_1_11_2","unstructured":"Christopher Canel Thomas Kim Giulio Zhou Conglong Li Hyeontaek Lim David G. Andersen Michael Kaminsky and Subramanya R. Dulloor. 2019. Scaling video analytics on constrained edge nodes. Retrieved from https:\/\/api.semanticscholar.org\/CorpusID:92991297"},{"key":"e_1_3_1_12_2","unstructured":"Mark Chen Jerry Tworek Heewoo Jun Qiming Yuan Henrique Ponde Jared Kaplan Harrison Edwards Yura Burda Nicholas Joseph Greg Brockman Alex Ray Raul Puri Gretchen Krueger Michael Petrov Heidy Khlaaf Girish Sastry Pamela Mishkin Brooke Chan Scott Gray Nick Ryder Mikhail Pavlov Alethea Power Lukasz Kaiser Mohammad Bavarian Clemens Winter Philippe Tillet Felipe Petroski Such David W. Cummings Matthias Plappert Fotios Chantzis Elizabeth Barnes Ariel Herbert-Voss William H. Guss Alex Nichol Igor Babuschkin S. Arun Balaji Shantanu Jain Andrew Carr Jan Leike Joshua Achiam Vedant Misra Evan Morikawa Alec Radford Matthew M. Knight Miles Brundage Mira Murati Katie Mayer Peter Welinder Bob McGrew Dario Amodei Sam McCandlish Ilya Sutskever and Wojciech Zaremba. 2021. Evaluating large language models trained on code. arXiv:2107.03374. Retrieved from https:\/\/arxiv.org\/abs\/2107.03374"},{"key":"e_1_3_1_13_2","unstructured":"Radosvet Desislavov Fernando Mart\u2019inez-Plumed and Jos\u2019e Hern\u2019andez-Orallo. 2021. Compute and energy consumption trends in deep learning inference. arXiv:2109.05472. Retrieved from https:\/\/arxiv.org\/abs\/2109.05472"},{"key":"e_1_3_1_14_2","first-page":"4171","article-title":"BERT: Pre-training of deep bidirectional transformers for language understanding","author":"Devlin Jacob","year":"2019","unstructured":"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In North American Chapter of the Association for Computational Linguistics, 4171\u20134186. Retrieved from https:\/\/api.semanticscholar.org\/CorpusID:52967399","journal-title":"North American Chapter of the Association for Computational Linguistics"},{"key":"e_1_3_1_15_2","doi-asserted-by":"publisher","DOI":"10.1145\/3510454.3528648"},{"key":"e_1_3_1_16_2","doi-asserted-by":"crossref","unstructured":"Mikhail Evtikhiev Egor Bogomolov Yaroslav Sokolov and Timofey Bryksin. 2022. Out of the BLEU: How should we assess quality of the code generation models? arXiv:2208.03133. Retrieved from https:\/\/arxiv.org\/abs\/2208.03133","DOI":"10.2139\/ssrn.4201043"},{"key":"e_1_3_1_17_2","doi-asserted-by":"crossref","unstructured":"Zhangyin Feng Daya Guo Duyu Tang Nan Duan Xiaocheng Feng Ming Gong Linjun Shou Bing Qin Ting Liu Daxin Jiang and Ming Zhou. 2020. 
CodeBERT: A pre-trained model for programming and natural languages. arXiv:2002.08155. Retrieved from https:\/\/arxiv.org\/abs\/2002.08155","DOI":"10.18653\/v1\/2020.findings-emnlp.139"},{"key":"e_1_3_1_18_2","doi-asserted-by":"publisher","DOI":"10.1006\/jcss.1997.1504"},{"key":"e_1_3_1_19_2","unstructured":"Jianping Gou B. Yu Stephen J. Maybank and Dacheng Tao. 2021. Knowledge distillation: A survey. arXiv:2006.05525. Retrieved from https:\/\/arxiv.org\/abs\/2006.05525"},{"key":"e_1_3_1_20_2","doi-asserted-by":"publisher","DOI":"10.1145\/3180155.3180167"},{"key":"e_1_3_1_21_2","unstructured":"Xinying Hou Yanjie Zhao Yue Liu Zhou Yang Kailong Wang Li Li Xiapu Luo David Lo John C. Grundy and Haoyu Wang. 2023. Large language models for software engineering: A systematic literature review. Retrieved from https:\/\/api.semanticscholar.org\/CorpusID:261048648"},{"key":"e_1_3_1_22_2","doi-asserted-by":"publisher","DOI":"10.1145\/3196321.3196334"},{"key":"e_1_3_1_23_2","unstructured":"Hamel Husain Hongqi Wu Tiferet Gazit Miltiadis Allamanis and Marc Brockschmidt. 2019. CodeSearchNet challenge: Evaluating the state of semantic code search. arXiv:1909.09436. Retrieved from https:\/\/arxiv.org\/abs\/1909.09436"},{"key":"e_1_3_1_24_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICST.2015.7102609"},{"key":"e_1_3_1_25_2","doi-asserted-by":"publisher","DOI":"10.1016\/B978-1-55860-377-6.50045-1"},{"key":"e_1_3_1_26_2","doi-asserted-by":"crossref","unstructured":"Robert V. Krejcie and Daryle W. Morgan. 1970. Determining sample size for research activities. Educational and Psychological Measurement 30 (1970) 607\u2013610.","DOI":"10.1177\/001316447003000308"},{"key":"e_1_3_1_27_2","unstructured":"Jian Li Yue Wang Michael R. Lyu and Irwin King. 2018. Code completion with neural attention and pointer networks. arXiv:1711.09573. Retrieved from https:\/\/arxiv.org\/abs\/1711.09573"},{"key":"e_1_3_1_28_2","unstructured":"Raymond Li Loubna Ben Allal Yangtian Zi Niklas Muennighoff Denis Kocetkov Chenghao Mou Marc Marone Christopher Akiki Jia Li Jenny Chim Qian Liu Evgenii Zheltonozhskii Terry Yue Zhuo Thomas Wang Olivier Dehaene Mishig Davaadorj Joel Lamy-Poirier Jo\u00e3o Monteiro Oleh Shliazhko Nicolas Gontier Nicholas Meade Armel Zebaze Ming-Ho Yee Logesh Kumar Umapathi Jian Zhu Benjamin Lipkin Muhtasham Oblokulov Zhiruo Wang Rudra Murthy Jason Stillerman Siva Sankalp Patel Dmitry Abulkhanov Marco Zocca Manan Dey Zhihan Zhang Nourhan Fahmy Urvashi Bhattacharyya W. Yu Swayam Singh Sasha Luccioni Paulo Villegas Maxim Kunakov Fedor Zhdanov Manuel Romero Tony Lee Nadav Timor Jennifer Ding Claire Schlesinger Hailey Schoelkopf Jana Ebert Tri Dao Mayank Mishra Alexander Gu Jennifer Robinson Carolyn Jane Anderson Brendan Dolan-Gavitt Danish Contractor Siva Reddy Daniel Fried Dzmitry Bahdanau Yacine Jernite Carlos Mu\u00f1oz Ferrandis Sean M. Hughes Thomas Wolf Arjun Guha Leandro von Werra and Harm de Vries. 2023. StarCoder: May the source be with you! Retrieved from https:\/\/api.semanticscholar.org\/CorpusID:258588247"},{"key":"e_1_3_1_29_2","unstructured":"Yujia Li David H. Choi Junyoung Chung Nate Kushman Julian Schrittwieser R\u00e9mi Leblond Tom Eccles James Keeling Felix Gimeno Agustin Dal Lago Thomas Hubert Peter Choy Cyprien de Masson d\u2019Autume Igor Babuschkin Xinyun Chen Po-Sen Huang Johannes Welbl Sven Gowal Alexey Cherepanov James Molloy Daniel Jaymin Mankowitz Esme Sutherland Robson Pushmeet Kohli Nando de Freitas Koray Kavukcuoglu and Oriol Vinyals. 2022. 
Competition-Level code generation with AlphaCode. arXiv:2203.07814. Retrieved from https:\/\/arxiv.org\/abs\/2203.07814"},{"key":"e_1_3_1_30_2","doi-asserted-by":"publisher","DOI":"10.3115\/1220355.1220427"},{"key":"e_1_3_1_31_2","doi-asserted-by":"publisher","DOI":"10.1007\/BF01211648"},{"key":"e_1_3_1_32_2","unstructured":"Shuai Lu Daya Guo Shuo Ren Junjie Huang Alexey Svyatkovskiy Ambrosio Blanco Colin B. Clement Dawn Drain Daxin Jiang Duyu Tang Ge Li Lidong Zhou Linjun Shou Long Zhou Michele Tufano Ming Gong Ming Zhou Nan Duan Neel Sundaresan Shao Kun Deng Shengyu Fu and Shujie Liu. 2021. CodeXGLUE: A machine learning benchmark dataset for code understanding and generation. arXiv:2102.04664. Retrieved from https:\/\/arxiv.org\/abs\/2102.04664"},{"key":"e_1_3_1_33_2","unstructured":"Ziyang Luo Can Xu Pu Zhao Qingfeng Sun Xiubo Geng Wenxiang Hu Chongyang Tao Jing Ma Qingwei Lin and Daxin Jiang. 2023. WizardCoder: Empowering code large language models with Evol-Instruct. Retrieved from https:\/\/api.semanticscholar.org\/CorpusID:259164815"},{"key":"e_1_3_1_34_2","doi-asserted-by":"publisher","DOI":"10.1093\/bioinformatics\/bti499"},{"key":"e_1_3_1_35_2","unstructured":"Erik Nijkamp Hiroaki Hayashi Caiming Xiong Silvio Savarese and Yingbo Zhou. 2023. CodeGen2: Lessons for training LLMs on programming and natural languages. Retrieved from https:\/\/api.semanticscholar.org\/CorpusID:258461229"},{"key":"e_1_3_1_36_2","unstructured":"Erik Nijkamp Bo Pang Hiroaki Hayashi Lifu Tu Haiquan Wang Yingbo Zhou Silvio Savarese and Caiming Xiong. 2022a. CodeGen: An open large language model for code with multi-turn program synthesis. arXiv:2102.04664. Retrieved from https:\/\/arxiv.org\/abs\/2203.13474"},{"key":"e_1_3_1_37_2","first-page":"311","article-title":"Bleu: A method for automatic evaluation of machine translation","author":"Papineni Kishore","year":"2002","unstructured":"Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: A method for automatic evaluation of machine translation. In 40th Annual Meeting on Association for Computational Linguistics, 311\u2013318.","journal-title":"40th Annual Meeting on Association for Computational Linguistics"},{"key":"e_1_3_1_38_2","unstructured":"Alec Radford Jeff Wu Rewon Child David Luan Dario Amodei and Ilya Sutskever. 2019. Language Models Are Unsupervised Multitask Learners. OpenAI blog. Retrieved from https:\/\/insightcivic.s3.us-east-1.amazonaws.com\/language-models.pdf"},{"key":"e_1_3_1_39_2","doi-asserted-by":"publisher","DOI":"10.1162\/tacl_a_00313"},{"key":"e_1_3_1_40_2","doi-asserted-by":"publisher","DOI":"10.1145\/3468264.3468588"},{"key":"e_1_3_1_41_2","first-page":"291","article-title":"Compression of neural machine translation models via pruning","author":"See A.","year":"2016","unstructured":"A. See, Minh-Thang Luong, and Christopher D. Manning. 2016. Compression of neural machine translation models via pruning. In Conference on Natural Language Learning, 291\u2013301.","journal-title":"Conference on Natural Language Learning"},{"key":"e_1_3_1_42_2","doi-asserted-by":"crossref","unstructured":"Rico Sennrich Barry Haddow and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. arXiv:1508.07909. Retrieved from https:\/\/arxiv.org\/abs\/1508.07909","DOI":"10.18653\/v1\/P16-1162"},{"key":"e_1_3_1_43_2","doi-asserted-by":"crossref","unstructured":"Emma Strubell Ananya Ganesh and Andrew McCallum. 2019. Energy and policy considerations for deep learning in NLP. arXiv:1906.02243. 
Retrieved from https:\/\/arxiv.org\/abs\/1906.02243","DOI":"10.18653\/v1\/P19-1355"},{"key":"e_1_3_1_44_2","doi-asserted-by":"publisher","DOI":"10.1145\/3510003.3510160"},{"key":"e_1_3_1_45_2","doi-asserted-by":"publisher","DOI":"10.1145\/3292500.3330699"},{"key":"e_1_3_1_46_2","doi-asserted-by":"publisher","DOI":"10.1145\/3491101.3519665"},{"key":"e_1_3_1_47_2","unstructured":"Ashish Vaswani Noam M. Shazeer Niki Parmar Jakob Uszkoreit Llion Jones Aidan N. Gomez Lukasz Kaiser and Illia Polosukhin. 2017. Attention is all you need. arXiv:1706.03762. Retrieved from https:\/\/arxiv.org\/abs\/1706.03762"},{"key":"e_1_3_1_48_2","doi-asserted-by":"crossref","unstructured":"Yue Wang Weishi Wang Shafiq R. Joty and Steven C. H. Hoi. 2021. CodeT5: Identifier-aware unified pre-trained encoder-decoder models for code understanding and generation. arXiv:2109.00859. Retrieved from https:\/\/arxiv.org\/abs\/2109.00859","DOI":"10.18653\/v1\/2021.emnlp-main.685"},{"key":"e_1_3_1_49_2","first-page":"11058","article-title":"Meta-learning hyperparameter performance prediction with neural processes","author":"Wei Ying","year":"2021","unstructured":"Ying Wei, Peilin Zhao, and Junzhou Huang. 2021. Meta-learning hyperparameter performance prediction with neural processes. In International Conference on Machine Learning (ICML), 11058\u201311067.","journal-title":"International Conference on Machine Learning (ICML)"},{"key":"e_1_3_1_50_2","doi-asserted-by":"publisher","DOI":"10.1007\/s43681-021-00043-6"},{"key":"e_1_3_1_51_2","doi-asserted-by":"publisher","unstructured":"M. Xia Antonios Anastasopoulos Ruochen Xu Yiming Yang and Graham Neubig. 2020. Predicting performance for natural language processing tasks. arXiv:2005.00870. Retrieved from https:\/\/doi.org\/10.48550\/arXiv.2005.00870","DOI":"10.48550\/arXiv.2005.00870"},{"key":"e_1_3_1_52_2","doi-asserted-by":"publisher","DOI":"10.1145\/3510003.3510146"},{"key":"e_1_3_1_53_2","doi-asserted-by":"publisher","DOI":"10.1109\/TSE.2024.3361661"},{"key":"e_1_3_1_54_2","unstructured":"Zibin Zheng Kaiwen Ning Yanlin Wang Jingwen Zhang Dewu Zheng Mingxi Ye and Jiachi Chen. 2023. A survey of large language models for code: Evolution benchmarking and future trends. Retrieved from https:\/\/api.semanticscholar.org\/CorpusID:265281389"},{"key":"e_1_3_1_55_2","doi-asserted-by":"publisher","unstructured":"Albert Ziegler Eirini Kalliamvakou Shawn Simister Ganesh Sittampalam Alice Li Andrew SC Rice Devon Rifkin and Edward Aftandilian. 2022. Productivity assessment of neural code completion. arXiv:2205.06537. 
Retrieved from https:\/\/doi.org\/10.48550\/arXiv.2205.06537","DOI":"10.48550\/arXiv.2205.06537"}],"container-title":["ACM Transactions on Software Engineering and Methodology"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3688831","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3688831","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T00:04:10Z","timestamp":1750291450000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3688831"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,12,28]]},"references-count":54,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2025,1,31]]}},"alternative-id":["10.1145\/3688831"],"URL":"https:\/\/doi.org\/10.1145\/3688831","relation":{},"ISSN":["1049-331X","1557-7392"],"issn-type":[{"value":"1049-331X","type":"print"},{"value":"1557-7392","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,12,28]]},"assertion":[{"value":"2023-02-01","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-07-29","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-12-28","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}