{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,3]],"date-time":"2026-03-03T16:11:33Z","timestamp":1772554293028,"version":"3.50.1"},"reference-count":69,"publisher":"Association for Computing Machinery (ACM)","issue":"2","license":[{"start":{"date-parts":[[2025,1,17]],"date-time":"2025-01-17T00:00:00Z","timestamp":1737072000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100003593","name":"CNPq","doi-asserted-by":"crossref","id":[{"id":"10.13039\/501100003593","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/501100002322","name":"CAPES","doi-asserted-by":"crossref","id":[{"id":"10.13039\/501100002322","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/501100004901","name":"FAPEMIG","doi-asserted-by":"crossref","id":[{"id":"10.13039\/501100004901","id-type":"DOI","asserted-by":"crossref"}]},{"name":"AWS"},{"DOI":"10.13039\/100007065","name":"NVIDIA","doi-asserted-by":"crossref","id":[{"id":"10.13039\/100007065","id-type":"DOI","asserted-by":"crossref"}]},{"name":"CIIA-Sa\u00fade"},{"DOI":"10.13039\/501100001807","name":"FAPESP","doi-asserted-by":"crossref","id":[{"id":"10.13039\/501100001807","id-type":"DOI","asserted-by":"crossref"}]},{"name":"Italian Ministry of University and Research"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Inf. Syst."],"published-print":{"date-parts":[[2025,3,31]]},"abstract":"<jats:p>\n            Fine-tuning transformer-based deep-learning models are currently at the forefront of natural language processing (NLP) and information retrieval (IR) tasks. However, fine-tuning these\n            <jats:italic>transformers<\/jats:italic>\n            for specific tasks, especially when dealing with ever-expanding volumes of data, constant retraining requirements, and budget constraints, can be computationally and financially costly, requiring substantial energy consumption and contributing to carbon dioxide emissions. This article focuses on advancing the state-of-the-art (SOTA) on\n            <jats:italic>instance selection<\/jats:italic>\n            (IS)\u2014a range of document filtering techniques designed to select the most representative documents for the sake of training. The objective is to either maintain or enhance classification effectiveness while reducing the overall training (fine-tuning) total processing time. In our prior research, we introduced the E2SC framework, a redundancy-oriented IS method focused on transformers and large datasets\u2014currently the state-of-the-art in IS. Nonetheless, important research questions remained unanswered in our previous work, mostly due to E2SC\u2019s sole emphasis on redundancy. In this article, we take our research a step further by proposing\n            <jats:italic>biO-IS\u2014<\/jats:italic>\n            an extended\n            <jats:bold>bi<\/jats:bold>\n            -\n            <jats:bold>o<\/jats:bold>\n            bjective\n            <jats:bold>i<\/jats:bold>\n            nstance\n            <jats:bold>s<\/jats:bold>\n            election solution, a novel IS framework aimed at simultaneously removing redundant and\n            <jats:italic>noisy<\/jats:italic>\n            instances from the training. 
biO-IS estimates redundancy based on scalable, fast, and calibrated weak classifiers and captures noise with the support of a new entropy-based step. We also propose a novel iterative process to estimate near-optimum reduction rates for both steps. Our extended solution is able to reduce the training sets by 41% on average (up to 60%) while maintaining the effectiveness in\n            <jats:italic>all<\/jats:italic>\n            tested datasets, with speedup gains of 1.67 on average (up to 2.46x). No other baseline, not even our previous SOTA solution, was capable of achieving results with this level of quality, considering the tradeoff among training reduction, effectiveness, and speedup. To ensure reproducibility, our documentation, code, and datasets can be accessed on GitHub\u2014\n            <jats:ext-link xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" ext-link-type=\"uri\" xlink:href=\"https:\/\/github.com\/waashk\/bio-is\">https:\/\/github.com\/waashk\/bio-is<\/jats:ext-link>\n            .\n          <\/jats:p>","DOI":"10.1145\/3705000","type":"journal-article","created":{"date-parts":[[2024,11,20]],"date-time":"2024-11-20T15:07:22Z","timestamp":1732115242000},"page":"1-33","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":18,"title":["A Noise-Oriented and Redundancy-Aware Instance Selection Framework"],"prefix":"10.1145","volume":"43","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-1988-8412","authenticated-orcid":false,"given":"Washington","family":"Cunha","sequence":"first","affiliation":[{"name":"Federal University of Minas Gerais, Belo Horizonte, Brazil"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0377-1025","authenticated-orcid":false,"given":"Alejandro","family":"Moreo Fern\u00e1ndez","sequence":"additional","affiliation":[{"name":"Istituto di Scienza e Tecnologie dell\u2019Informazione, Consiglio Nazionale delle Ricerche, Pisa, Italy"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5725-4322","authenticated-orcid":false,"given":"Andrea","family":"Esuli","sequence":"additional","affiliation":[{"name":"Istituto di Scienza e Tecnologie dell\u2019Informazione, Consiglio Nazionale delle Ricerche, Pisa, Italy"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4221-6427","authenticated-orcid":false,"given":"Fabrizio","family":"Sebastiani","sequence":"additional","affiliation":[{"name":"Istituto di Scienza e Tecnologie dell\u2019Informazione, Consiglio Nazionale delle Ricerche, Pisa, Italy"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4913-4902","authenticated-orcid":false,"given":"Leonardo","family":"Rocha","sequence":"additional","affiliation":[{"name":"Federal University of S\u00e3o Jo\u00e3o Del-Rei, S\u00e3o Jo\u00e3o Del Rei, Brazil"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2075-3363","authenticated-orcid":false,"given":"Marcos Andr\u00e9","family":"Gon\u00e7alves","sequence":"additional","affiliation":[{"name":"Federal University of Minas Gerais, Belo Horizonte, Brazil"}]}],"member":"320","published-online":{"date-parts":[[2025,1,17]]},"reference":[{"key":"e_1_3_2_2_2","doi-asserted-by":"publisher","DOI":"10.1007\/BF00153759"},{"key":"e_1_3_2_3_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10515-023-00397-7"},{"key":"e_1_3_2_4_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2024.conll-1.31"},{"key":"e_1_3_2_5_2","unstructured":"Lasse F. Wolff Anthony Benjamin Kanding and Raghavendra Selvan. 2020. Carbontracker: Tracking and predicting the carbon footprint of training deep learning models. arXiv:2007.03051. 
Retrieved from https:\/\/arxiv.org\/abs\/2007.03051"},{"key":"e_1_3_2_6_2","doi-asserted-by":"crossref","unstructured":"Fabiano Bel\u00e9m Washington Cunha Celso Fran\u00e7a Claudio Andrade Leonardo Rocha and Marcos Andr\u00e9 Gon\u00e7alves. 2024. A novel two-step fine-tuning pipeline for cold-start active learning in text classification tasks. arXiv:2407.17284. Retrieved from https:\/\/arxiv.org\/abs\/2407.17284","DOI":"10.22541\/au.172469241.14369813\/v1"},{"key":"e_1_3_2_7_2","doi-asserted-by":"publisher","DOI":"10.1175\/1520-0493(1950)078<0001:VOFEIT>2.0.CO;2"},{"key":"e_1_3_2_8_2","first-page":"1877","volume-title":"Advances in Neural Information Processing Systems","author":"Brown Tom","year":"2020","unstructured":"Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, Vol. 33, 1877\u20131901."},{"key":"e_1_3_2_9_2","unstructured":"Martin Juan Jos\u00e9 Bucher and Marco Martini. 2024. Fine-tuned \u2018small\u2019 LLMs (Still) significantly outperform zero-shot generative AI models in text classification. arXiv:2406.08660. Retrieved from https:\/\/arxiv.org\/abs\/2406.08660"},{"key":"e_1_3_2_10_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/W18-5406"},{"key":"e_1_3_2_11_2","doi-asserted-by":"publisher","DOI":"10.1145\/3331184.3331239"},{"key":"e_1_3_2_12_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.ipm.2020.102263"},{"key":"e_1_3_2_13_2","doi-asserted-by":"publisher","DOI":"10.1145\/3539618.3591638"},{"key":"e_1_3_2_14_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.ipm.2020.102481"},{"key":"e_1_3_2_15_2","doi-asserted-by":"publisher","DOI":"10.1145\/3582000"},{"key":"e_1_3_2_16_2","unstructured":"Claudio de Andrade Washington Cunha Davi Reis Adriana Silvina Pagano Leonardo Rocha and Marcos Andr\u00e9 Gon\u00e7alves. 2024. A strategy to combine 1stGen transformers and open LLMs for automatic text classification. arXiv:2408.09629. Retrieved from https:\/\/arxiv.org\/abs\/2408.09629"},{"key":"e_1_3_2_17_2","doi-asserted-by":"publisher","DOI":"10.1016\/J.IPM.2023.103336"},{"key":"e_1_3_2_18_2","first-page":"4171","volume-title":"Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics HLT-NAACL","author":"Devlin Jacob","year":"2019","unstructured":"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics HLT-NAACL, 4171\u20134186."},{"key":"e_1_3_2_19_2","unstructured":"Abhimanyu Dubey Abhinav Jauhri Abhinav Pandey Abhishek Kadian Ahmad Al-Dahle Aiesha Letman Akhil Mathur Alan Schelten Amy Yang Angela Fan et al. 2024. The llama 3 herd of models. arXiv:2407.21783. Retrieved from https:\/\/arxiv.org\/abs\/2407.21783"},{"key":"e_1_3_2_20_2","doi-asserted-by":"publisher","DOI":"10.1145\/2516889"},{"key":"e_1_3_2_21_2","doi-asserted-by":"publisher","DOI":"10.1145\/3583780.3614789"},{"key":"e_1_3_2_22_2","doi-asserted-by":"publisher","DOI":"10.1145\/1321440.1321466"},{"key":"e_1_3_2_23_2","first-page":"358","volume-title":"European Conference on Information Retrieval","author":"Dacrema Maurizio Ferrari","year":"2024","unstructured":"Maurizio Ferrari Dacrema, Andrea Pasin, Paolo Cremonesi, and Nicola Ferro. 2024. 
Quantum computing for information retrieval and recommender systems. In European Conference on Information Retrieval. Springer, 358\u2013362."},{"key":"e_1_3_2_24_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-1-4899-7502-7_900-1"},{"key":"e_1_3_2_25_2","doi-asserted-by":"publisher","DOI":"10.1145\/3634912"},{"key":"e_1_3_2_26_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2024.128172"},{"key":"e_1_3_2_27_2","doi-asserted-by":"crossref","unstructured":"Andrea Gasparetto Matteo Marcuzzo Alessandro Zangari and Andrea Albarelli. 2022. A survey on text classification algorithms: From text to predictions. Information 13 (2022) 83. Retrieved from https:\/\/api.semanticscholar.org\/CorpusID:246840078","DOI":"10.3390\/info13020083"},{"key":"e_1_3_2_28_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2023.acl-industry.46"},{"key":"e_1_3_2_29_2","doi-asserted-by":"publisher","DOI":"10.1145\/3471158.3472261"},{"key":"e_1_3_2_30_2","doi-asserted-by":"publisher","DOI":"10.1093\/biomet\/75.4.800"},{"key":"e_1_3_2_31_2","doi-asserted-by":"publisher","DOI":"10.1145\/160688.160758"},{"key":"e_1_3_2_32_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-1-4614-7138-7"},{"key":"e_1_3_2_33_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/E17-2068"},{"key":"e_1_3_2_34_2","unstructured":"Zhenzhong Lan Mingda Chen Sebastian Goodman Kevin Gimpel Piyush Sharma and Radu Soricut. 2019. Albert: A lite bert for self-supervised learning of language representations. arXiv:1909.11942. Retrieved from https:\/\/arxiv.org\/abs\/1909.11942"},{"key":"e_1_3_2_35_2","doi-asserted-by":"publisher","DOI":"10.1002\/advs.202100707"},{"key":"e_1_3_2_36_2","unstructured":"Teven Le Scao Angela Fan Christopher Akiki Ellie Pavlick Suzana Ili\u0107 Daniel Hesslow Roman Castagn\u00e9 Alexandra Sasha Luccioni Fran\u00e7ois Yvon Matthias Gall\u00e9 et al. 2023. Bloom: A 176b-parameter open-access multilingual language model. arXiv: 2211.05100. Retrieved from https:\/\/doi.org\/10.48550\/arXiv.2211.05100"},{"key":"e_1_3_2_37_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.acl-main.703"},{"key":"e_1_3_2_38_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.patcog.2014.10.001"},{"key":"e_1_3_2_39_2","doi-asserted-by":"publisher","DOI":"10.1145\/3495162"},{"key":"e_1_3_2_40_2","first-page":"461","volume-title":"Neurocomputing","author":"Liang Tailin","year":"2021","unstructured":"Tailin Liang, John Glossner, Lei Wang, Shaobo Shi, and Xiaotong Zhang. 2021. Pruning and quantization for deep neural network acceleration: A survey. Neurocomputing 461 (2021), 370\u2013403."},{"key":"e_1_3_2_41_2","doi-asserted-by":"publisher","DOI":"10.1145\/3616855.3635859"},{"key":"e_1_3_2_42_2","unstructured":"Yinhan Liu Myle Ott Naman Goyal Jingfei Du Mandar Joshi Danqi Chen Omer Levy Mike Lewis Luke Zettlemoyer and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint 1907.11692. 
Retrieved from https:\/\/doi.org\/10.48550\/arXiv.1907.11692"},{"key":"e_1_3_2_43_2","doi-asserted-by":"publisher","DOI":"10.1145\/3178876.3186168"},{"key":"e_1_3_2_44_2","doi-asserted-by":"publisher","DOI":"10.1145\/3459637.3482286"},{"key":"e_1_3_2_45_2","doi-asserted-by":"publisher","DOI":"10.1145\/3397271.3401093"},{"key":"e_1_3_2_46_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.eswa.2020.113297"},{"key":"e_1_3_2_47_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2018.2889473"},{"key":"e_1_3_2_48_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.naacl-main.143"},{"key":"e_1_3_2_49_2","doi-asserted-by":"publisher","DOI":"10.1145\/3397271.3401266"},{"key":"e_1_3_2_50_2","first-page":"1125","volume-title":"Proceedings of Conference on Information and Knowledge (CIKM \u201920)","author":"Mendes Luiz Felipe","year":"2020","unstructured":"Luiz Felipe Mendes, Marcos Andr\u00e9 Gon\u00e7alves, Washington Cunha, Leonardo C. da Rocha, Thierson Couto Rosa, and Wellington Martins. 2020. \u201cKeep it Simple, Lazy\u201d MetaLazy: A new metastrategy for lazy text classification. In Proceedings of Conference on Information and Knowledge (CIKM \u201920), 1125\u20131134."},{"issue":"3","key":"e_1_3_2_51_2","first-page":"40","article-title":"Deep learning\u2013based text classification: A comprehensive review","volume":"54","author":"Minaee Shervin","year":"2021","unstructured":"Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, and Jianfeng Gao. 2021. Deep learning\u2013based text classification: A comprehensive review. ACM Computing Surveys 54, 3, Article 62 (Apr. 2021), 40 pages.","journal-title":"ACM Computing Surveys"},{"key":"e_1_3_2_52_2","unstructured":"Franco Maria Nardini Cosimo Rulli Salvatore Trani and Rossano Venturini. 2023. Neural network compression using binarization and few full-precision weights. arXiv:2306.08960. Retrieved from https:\/\/arxiv.org\/abs\/2306.08960"},{"key":"e_1_3_2_53_2","first-page":"1","volume-title":"Proceedings of the 14th Italian Information Retrieval Workshop","author":"Pasin Andrea","year":"2022","unstructured":"Andrea Pasin, Washington Cunha, Marcos Andr\u00e9 Gon\u00e7alves, and Nicola Ferro. 2022. A quantum annealing-based instance selection approach for transformer fine-tuning. In Proceedings of the 14th Italian Information Retrieval Workshop, 1\u20135."},{"key":"e_1_3_2_54_2","doi-asserted-by":"publisher","DOI":"10.1145\/3664190.3672515"},{"key":"e_1_3_2_55_2","doi-asserted-by":"publisher","DOI":"10.1140\/epjds\/s13688-016-0085-1"},{"key":"e_1_3_2_56_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.knosys.2022.108346"},{"key":"e_1_3_2_57_2","unstructured":"Noveen Sachdeva Benjamin Coleman Wang-Cheng Kang Jianmo Ni Lichan Hong Ed H. Chi James Caverlee Julian McAuley and Derek Zhiyuan Cheng. 2024. How to train data-efficient LLMs. arXiv:2402.09668. Retrieved from https:\/\/arxiv.org\/abs\/2402.09668"},{"key":"e_1_3_2_58_2","unstructured":"Victor Sanh Lysandre Debut Julien Chaumond and Thomas Wolf. 2019. DistilBERT a distilled version of BERT: Smaller faster cheaper and lighter. arXiv:1910.01108. 
Retrieved from https:\/\/arxiv.org\/abs\/1910.01108"},{"key":"e_1_3_2_59_2","doi-asserted-by":"publisher","DOI":"10.1145\/505282.505283"},{"key":"e_1_3_2_60_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.is.2023.102342"},{"key":"e_1_3_2_61_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-58621-8_13"},{"key":"e_1_3_2_62_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.ipm.2009.03.002"},{"key":"e_1_3_2_63_2","unstructured":"Hugo Touvron Thibaut Lavril Gautier Izacard Xavier Martinet Marie-Anne Lachaux Timoth\u00e9e Lacroix Baptiste Rozi\u00e8re Naman Goyal Eric Hambro Faisal Azhar et al. 2023. LLaMA: Open and efficient foundation language models. arXiv:2302.13971. Retrieved from https:\/\/doi.org\/10.48550\/arXiv.2302.13971"},{"key":"e_1_3_2_64_2","doi-asserted-by":"publisher","DOI":"10.1145\/3331184.3331259"},{"key":"e_1_3_2_65_2","doi-asserted-by":"crossref","unstructured":"Felipe Viegas Antonio Pereira Washington Cunha Celso Fran\u00e7a Claudio Andrade Elisa Tuler Leonardo Rocha and Marcos Andr\u00e9 Gon\u00e7alves. 2024. Exploiting contextual embeddings in hierarchical topic modeling and investigating the limits of the current evaluation metrics. Computational Linguistics (2024) 1\u201359. Rerieved from https:\/\/doi.org\/10.1162\/coli_a_00543","DOI":"10.1162\/coli_a_00543"},{"key":"e_1_3_2_66_2","first-page":"5776","article-title":"Minilm: Deep self-attention distillation for task-agnostic compression of pre-trained transformers","volume":"33","author":"Wang Wenhui","year":"2020","unstructured":"Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, and Ming Zhou. 2020. Minilm: Deep self-attention distillation for task-agnostic compression of pre-trained transformers. In Advances in Neural Information Processing Systems, Vol. 33, 5776\u20135788.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_67_2","first-page":"5754","article-title":"XLNet: Generalized autoregressive pretraining for language understanding","volume":"32","author":"Yang Zhilin","year":"2019","unstructured":"Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R. Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In Neural Information Processing Systems, Vol. 32, 5754\u20135764.","journal-title":"Neural Information Processing Systems"},{"key":"e_1_3_2_68_2","first-page":"649","article-title":"Character-level convolutional networks for text classification","volume":"28","author":"Zhang Xiang","year":"2016","unstructured":"Xiang Zhang, Junbo Zhao, and Yann LeCun. 2016. Character-level convolutional networks for text classification. In Neural Information Processing Systems (NIPS\u201916). Vol. 28, 649\u2013657.","journal-title":"Neural Information Processing Systems (NIPS\u201916)"},{"key":"e_1_3_2_69_2","unstructured":"Yazhou Zhang Mengyao Wang Chenyu Ren Qiuchi Li Prayag Tiwari Benyou Wang and Jing Qin. 2024. Pushing the limit of LLM capacity for text classification. arXiv:2402.07470. 
Retrieved from https:\/\/arxiv.org\/abs\/2402.07470"},{"key":"e_1_3_2_70_2","doi-asserted-by":"publisher","DOI":"10.1145\/3539618.3591752"}],"container-title":["ACM Transactions on Information Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3705000","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3705000","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T01:18:02Z","timestamp":1750295882000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3705000"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,1,17]]},"references-count":69,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2025,3,31]]}},"alternative-id":["10.1145\/3705000"],"URL":"https:\/\/doi.org\/10.1145\/3705000","relation":{},"ISSN":["1046-8188","1558-2868"],"issn-type":[{"value":"1046-8188","type":"print"},{"value":"1558-2868","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,1,17]]},"assertion":[{"value":"2024-07-16","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-11-11","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-01-17","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}
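
The record above is a standard Crossref REST API "work" response. As a minimal sketch (not part of the record itself), the same metadata can be fetched and a few of its fields read as follows. This assumes Python with the third-party requests package and uses the public Crossref endpoint https://api.crossref.org/works/{DOI}; the response envelope ({"status": "ok", ..., "message": {...}}) and the field names match the record shown here.

import requests

DOI = "10.1145/3705000"

# Fetch the work record from the public Crossref REST API.
resp = requests.get(f"https://api.crossref.org/works/{DOI}", timeout=30)
resp.raise_for_status()
work = resp.json()["message"]  # unwrap the {"status": "ok", "message": {...}} envelope

# Read a few fields that appear in the record above.
print(work["title"][0])            # article title
print(work["container-title"][0])  # journal name
print(work["DOI"], work["volume"], work["issue"], work["page"])
for author in work["author"]:      # given/family names; ORCIDs and affiliations also present
    print(f'{author["given"]} {author["family"]}')
print("cited references:", work["references-count"])

Note that the "abstract" field carries JATS markup (<jats:p>, <jats:italic>, ...), so it needs tag stripping, for example with an XML parser, before plain-text display.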