{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,1]],"date-time":"2026-04-01T18:54:11Z","timestamp":1775069651526,"version":"3.50.1"},"reference-count":69,"publisher":"Association for Computing Machinery (ACM)","issue":"7","funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["U2469205"],"award-info":[{"award-number":["U2469205"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/501100012226","name":"Fundamental Research Funds for the Central Universities of China","doi-asserted-by":"crossref","award":["KF-20240769"],"award-info":[{"award-number":["KF-20240769"]}],"id":[{"id":"10.13039\/501100012226","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Knowl. Discov. Data"],"published-print":{"date-parts":[[2025,8,31]]},"abstract":"<jats:p>\n            With the increasing collection of electronic health records (EHRs), deep learning has become a crucial tool for real-time treatment analysis. However, due to patient privacy concerns, the scarcity of labeled data limits the end-to-end models that rely on large training data. Self-supervised pretraining offers a promising solution. Nevertheless, applying pretraining to EHRs faces two key issues: (1) EHRs exhibit multimodality, including monitoring data and recorded clinical note. For multimodal pretraining, designing a self-supervised task that can establish cross-modal associations while preserving all modal-unique information remains challenging. (2) Both modalities are sequential and irregular, with varying intervals between monitoring or records. Aligning monitoring times with recorded times poses a significant issue for fine-grained cross-modal pretraining. 
Existing pretraining models either focus on a single modality or only model regular data, failing to address both issues together. To fill this gap and fully utilize unlabeled EHR data, we propose a\n            <jats:italic toggle=\"yes\">p<\/jats:italic>\n            retraining model to learn patient\n            <jats:italic toggle=\"yes\">r<\/jats:italic>\n            epresentation using unlabeled\n            <jats:italic toggle=\"yes\">i<\/jats:italic>\n            rregular\n            <jats:italic toggle=\"yes\">m<\/jats:italic>\n            ultimodal\n            <jats:italic toggle=\"yes\">E<\/jats:italic>\n            HRs, named PRIME. We first utilize a multi-element encoding module to extract patient condition snapshots from both modalities. Then, to construct multiple aligned cross-modal positive sample pairs that span the entire treatment process from irregular data, we employ patient condition alignment modules that integrate time-aware and feature-aware components to transfer snapshots to the aligned timestamps. Next, to preserve both shared and unique information of each modality, our decoupled representation learning strategy first uses a constraint matrix to separate shared information. We then employ contrastive-based cross-modal learning and reconstruction-based intra-modal learning to model shared and complete information, respectively. 
Extensive experiments on two real-world tasks demonstrate the superiority of PRIME over the state-of-the-art models, especially with limited labels.\n          <\/jats:p>","DOI":"10.1145\/3744251","type":"journal-article","created":{"date-parts":[[2025,6,10]],"date-time":"2025-06-10T12:20:40Z","timestamp":1749558040000},"page":"1-39","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":2,"title":["PRIME: Pretraining for Patient Condition Representation with Irregular Multimodal Electronic Health Records"],"prefix":"10.1145","volume":"19","author":[{"ORCID":"https:\/\/orcid.org\/0009-0005-6279-6002","authenticated-orcid":false,"given":"Bohao","family":"Li","sequence":"first","affiliation":[{"name":"CCSE Lab, Beihang University, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6297-5212","authenticated-orcid":false,"given":"Bowen","family":"Du","sequence":"additional","affiliation":[{"name":"School of Transportation Science and Engineering, Beihang University, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2677-0751","authenticated-orcid":false,"given":"Junchen","family":"Ye","sequence":"additional","affiliation":[{"name":"School of Transportation Science and Engineering, Beihang University, Beijing, China"}]}],"member":"320","published-online":{"date-parts":[[2025,8,7]]},"reference":[{"key":"e_1_3_2_2_2","doi-asserted-by":"publisher","DOI":"10.1377\/hlthaff.2015.0992"},{"key":"e_1_3_2_3_2","doi-asserted-by":"crossref","unstructured":"Emily Alsentzer John R. Murphy Willie Boag Wei-Hung Weng Di Jin Tristan Naumann and Matthew McDermott. 2019. Publicly available clinical BERT embeddings. arXiv:1904.03323. 
Retrieved from https:\/\/arxiv.org\/abs\/1904.03323","DOI":"10.18653\/v1\/W19-1909"},{"key":"e_1_3_2_4_2","first-page":"1877","article-title":"Language models are few-shot learners","volume":"33","author":"Brown Tom","year":"2020","unstructured":"Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, Vol. 33, 1877\u20131901.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_5_2","doi-asserted-by":"publisher","DOI":"10.1038\/s41597-023-01960-3"},{"key":"e_1_3_2_6_2","doi-asserted-by":"publisher","DOI":"10.1038\/s41598-018-24271-9"},{"key":"e_1_3_2_7_2","article-title":"Contiformer: Continuous-time transformer for irregular time series modeling","volume":"36","author":"Chen Yuqi","year":"2024","unstructured":"Yuqi Chen, Kan Ren, Yansen Wang, Yuchen Fang, Weiwei Sun, and Dongsheng Li. 2024. Contiformer: Continuous-time transformer for irregular time series modeling. In Advances in Neural Information Processing Systems, Vol. 36.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_8_2","doi-asserted-by":"publisher","DOI":"10.1145\/3097983.3098126"},{"key":"e_1_3_2_9_2","unstructured":"Jacob Devlin Ming-Wei Chang Kenton Lee and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv:1810.04805. 
Retrieved from https:\/\/arxiv.org\/abs\/1810.04805"},{"key":"e_1_3_2_10_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-64148-1_13"},{"key":"e_1_3_2_11_2","doi-asserted-by":"publisher","DOI":"10.1145\/3394486.3403123"},{"key":"e_1_3_2_12_2","doi-asserted-by":"publisher","DOI":"10.1038\/s41597-019-0103-9"},{"key":"e_1_3_2_13_2","doi-asserted-by":"publisher","DOI":"10.1145\/3442381.3449944"},{"key":"e_1_3_2_14_2","doi-asserted-by":"publisher","DOI":"10.1038\/s41598-020-78888-w"},{"key":"e_1_3_2_15_2","doi-asserted-by":"publisher","DOI":"10.1038\/s41597-022-01899-x"},{"key":"e_1_3_2_16_2","doi-asserted-by":"publisher","DOI":"10.1038\/sdata.2016.35"},{"key":"e_1_3_2_17_2","unstructured":"Seyed Mehran Kazemi Rishab Goel Sepehr Eghbali Janahan Ramanan Jaspreet Sahota Sanjay Thakur Stella Wu Cathal Smyth Pascal Poupart and Marcus Brubaker. 2019. Time2vec: Learning a vector representation of time. arXiv:1907.05321. Retrieved from https:\/\/arxiv.org\/abs\/1907.05321"},{"key":"e_1_3_2_18_2","doi-asserted-by":"crossref","unstructured":"Swaraj Khadanga Karan Aggarwal Shafiq Joty and Jaideep Srivastava. 2019. Using clinical notes with time series data for ICU management. arXiv:1909.09702. Retrieved from https:\/\/arxiv.org\/abs\/1909.09702","DOI":"10.18653\/v1\/D19-1678"},{"key":"e_1_3_2_19_2","first-page":"244","volume-title":"Machine Learning for Health (ML4H)","author":"King Ryan","year":"2023","unstructured":"Ryan King, Tianbao Yang, and Bobak J. Mortazavi. 2023. Multimodal pretraining of medical time series and notes. In Machine Learning for Health (ML4H). PMLR, 244\u2013255."},{"key":"e_1_3_2_20_2","unstructured":"Yeonsu Kwon Jiho Kim Gyubok Lee Seongsu Bae Daeun Kyung Wonchul Cha Tom Pollard Alistair Johnson and Edward Choi. 2024. EHRCon: Dataset for checking consistency between unstructured notes and structured tables in electronic health records. arXiv:2406.16341. 
Retrieved from https:\/\/arxiv.org\/abs\/2406.16341"},{"key":"e_1_3_2_21_2","article-title":"ECG representation learning with multi-modal EHR data","author":"Lalam Sravan Kumar","year":"2023","unstructured":"Sravan Kumar Lalam, Hari Krishna Kunderu, Shayan Ghosh, Harish Kumar, Samir Awasthi, Ashim Prasad, Francisco Lopez-Jimenez, Zachi I. Attia, Samuel Asirvatham, Paul Friedman et al. 2023. ECG representation learning with multi-modal EHR data. Transactions on Machine Learning Research (Nov. 2023).","journal-title":"Transactions on Machine Learning Research (Nov. 2023)."},{"key":"e_1_3_2_22_2","first-page":"423","volume-title":"Machine Learning for Healthcare Conference","author":"Lee Kwanhyung","year":"2023","unstructured":"Kwanhyung Lee, Soojeong Lee, Sangchul Hahn, Heejung Hyun, Edward Choi, Byungeun Ahn, and Joohyung Lee. 2023. Learning missing modal electronic health records with unified multi-modal data embedding and modality-aware attention. In Machine Learning for Healthcare Conference. PMLR, 423\u2013442."},{"key":"e_1_3_2_23_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-031-43895-0_61"},{"key":"e_1_3_2_24_2","unstructured":"Yikuan Li Ramsey M. Wehbe Faraz S. Ahmad Hanyin Wang and Yuan Luo. 2022. Clinical-Longformer and Clinical-BigBird: Transformers for long clinical sequences. arXiv:2201.11838. Retrieved from https:\/\/arxiv.org\/abs\/2201.11838"},{"key":"e_1_3_2_25_2","doi-asserted-by":"publisher","DOI":"10.1093\/jamia\/ocac225"},{"issue":"1","key":"e_1_3_2_26_2","first-page":"1","article-title":"HGV4risk: Hierarchical global view-guided sequence representation learning for risk prediction","volume":"18","author":"Li Youru","year":"2023","unstructured":"Youru Li, Zhenfeng Zhu, Xiaobo Guo, Shaoshuai Li, Yuchen Yang, and Yao Zhao. 2023. HGV4risk: Hierarchical global view-guided sequence representation learning for risk prediction. 
ACM Transactions on Knowledge Discovery from Data 18, 1 (2023), 1\u201321.","journal-title":"ACM Transactions on Knowledge Discovery from Data"},{"key":"e_1_3_2_27_2","article-title":"Time series as images: Vision transformer for irregularly sampled time series","volume":"36","author":"Li Zekun","year":"2024","unstructured":"Zekun Li, Shiyang Li, and Xifeng Yan. 2024. Time series as images: Vision transformer for irregularly sampled time series. In Advances in Neural Information Processing Systems, Vol. 36.","journal-title":"Advances in Neural Information Processing Systems, Vol"},{"key":"e_1_3_2_28_2","article-title":"Factorized contrastive learning: Going beyond multi-view redundancy","volume":"36","author":"Liang Paul Pu","year":"2024","unstructured":"Paul Pu Liang, Zihao Deng, Martin Q. Ma, James Y. Zou, Louis-Philippe Morency, and Ruslan Salakhutdinov. 2024. Factorized contrastive learning: Going beyond multi-view redundancy. In Advances in Neural Information Processing Systems, Vol. 36.","journal-title":"Advances in Neural Information Processing Systems, Vol"},{"key":"e_1_3_2_29_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v38i12.29299"},{"key":"e_1_3_2_30_2","doi-asserted-by":"publisher","DOI":"10.3390\/biomedicines10102594"},{"key":"e_1_3_2_31_2","unstructured":"Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. arXiv:1711.05101. Retrieved from https:\/\/arxiv.org\/abs\/1711.05101"},{"key":"e_1_3_2_32_2","unstructured":"Chang Lu Chandan K. Reddy Prithwish Chakraborty Samantha Kleinberg and Yue Ning. 2021. Collaborative graph learning with auxiliary text for temporal event prediction in healthcare. arXiv:2105.07542. 
Retrieved from https:\/\/arxiv.org\/abs\/2105.07542"},{"key":"e_1_3_2_33_2","doi-asserted-by":"publisher","DOI":"10.1145\/3442381.3449855"},{"key":"e_1_3_2_34_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v34i01.5428"},{"key":"e_1_3_2_35_2","doi-asserted-by":"publisher","DOI":"10.1145\/3450439.3451877"},{"key":"e_1_3_2_36_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.inffus.2023.102069"},{"issue":"3","key":"e_1_3_2_37_2","first-page":"81","article-title":"Preventable adverse drug events in hospitalized patients","volume":"126","author":"Otero-L\u00f3pez Mar\u00eda Jos\u00e9","year":"2006","unstructured":"Mar\u00eda Jos\u00e9 Otero-L\u00f3pez, Pablo Alonso-Hern\u00e1ndez, Jos\u00e9 Angel Maderuelo-Fern\u00e1ndez, Beatriz Garrido-Corro, Alfonso Dom\u00ednguez-Gil, and Angel S\u00e1nchez-Rodr\u00edguez. 2006. Preventable adverse drug events in hospitalized patients. Medicina Clinica 126, 3 (2006), 81\u201387.","journal-title":"Medicina Clinica"},{"key":"e_1_3_2_38_2","doi-asserted-by":"publisher","DOI":"10.1109\/ASRU46091.2019.9003958"},{"key":"e_1_3_2_39_2","first-page":"32","article-title":"Pytorch: An imperative style, high-performance deep learning library","author":"Paszke Adam","year":"2019","unstructured":"Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, Vol. 32.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_40_2","first-page":"8024","volume-title":"Advances in Neural Information Processing Systems","author":"Paszke Adam","year":"2019","unstructured":"Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. PyTorch: An imperative style, high-performance deep learning library. 
In Advances in Neural Information Processing Systems. H. Wallach, H. Larochelle, A. Beygelzimer, F. d\u2019Alch\u00e9 Buc, E. Fox, and R. Garnett (Eds.), Vol. 32. Curran Associates, Inc., 8024\u20138035. Retrieved from http:\/\/papers.neurips.cc\/paper\/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf"},{"key":"e_1_3_2_41_2","unstructured":"Mandela Patrick Yuki M. Asano Polina Kuznetsova Ruth Fong Jo\u00e3o F. Henriques Geoffrey Zweig and Andrea Vedaldi. 2021. Multi-modal self-supervision from generalized data transformations. In International Conference on Computer Vision (ICCV)."},{"key":"e_1_3_2_42_2","first-page":"8748","volume-title":"International Conference on Machine Learning","author":"Radford Alec","year":"2021","unstructured":"Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning. PMLR, 8748\u20138763."},{"key":"e_1_3_2_43_2","first-page":"28531","volume-title":"International Conference on Machine Learning","author":"Raghu Aniruddh","year":"2023","unstructured":"Aniruddh Raghu, Payal Chandak, Ridwan Alam, John Guttag, and Collin Stultz. 2023. Sequential multi-dimensional self-supervised learning for clinical time series. In International Conference on Machine Learning. PMLR, 28531\u201328548."},{"key":"e_1_3_2_44_2","doi-asserted-by":"publisher","DOI":"10.1093\/shm\/5.2.183"},{"key":"e_1_3_2_45_2","unstructured":"Satya Narayan Shukla and Benjamin M. Marlin. 2021. Multi-time attention networks for irregularly sampled time series. arXiv:2101.10318. Retrieved from https:\/\/arxiv.org\/abs\/2101.10318"},{"key":"e_1_3_2_46_2","unstructured":"Yuqi Si and Kirk Roberts. 2021. Three-level hierarchical transformer networks for long-sequence and multiple clinical documents classification. arXiv:2104.08444. 
Retrieved from https:\/\/arxiv.org\/abs\/2104.08444"},{"key":"e_1_3_2_47_2","unstructured":"Chenxi Sun Shenda Hong Moxian Song and Hongyan Li. 2020. A review of deep learning methods for irregularly sampled medical time series data. arXiv:2010.12493. Retrieved from https:\/\/arxiv.org\/abs\/2010.12493"},{"key":"e_1_3_2_48_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v34i01.5440"},{"key":"e_1_3_2_49_2","doi-asserted-by":"publisher","DOI":"10.1145\/3627673.3679962"},{"key":"e_1_3_2_50_2","doi-asserted-by":"publisher","DOI":"10.1145\/3516367"},{"key":"e_1_3_2_51_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/P19-1656"},{"key":"e_1_3_2_52_2","first-page":"30","article-title":"Attention is all you need","author":"Vaswani Ashish","year":"2017","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, Vol. 30.","journal-title":"Advances in Neural Information Processing Systems, Vol"},{"key":"e_1_3_2_53_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2023.emnlp-main.171"},{"key":"e_1_3_2_54_2","article-title":"Contrast everything: A hierarchical contrastive framework for medical time-series","volume":"36","author":"Wang Yihe","year":"2024","unstructured":"Yihe Wang, Yu Han, Haishuai Wang, and Xiang Zhang. 2024. Contrast everything: A hierarchical contrastive framework for medical time-series. In Advances in Neural Information Processing Systems, Vol. 
36.","journal-title":"Advances in Neural Information Processing Systems, Vol"},{"key":"e_1_3_2_55_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2021.02.046"},{"key":"e_1_3_2_56_2","volume-title":"13th International Conference on Learning Representations","author":"Wornow Michael","unstructured":"Michael Wornow, Suhana Bedi, Miguel Angel Fuentes Hernandez, Ethan Steinberg, Jason Alan Fries, Christopher Re, Sanmi Koyejo, and Nigam Shah. n.d. Context clues: Evaluating long context models for clinical prediction tasks on EHR data. In 13th International Conference on Learning Representations."},{"key":"e_1_3_2_57_2","doi-asserted-by":"publisher","DOI":"10.1093\/jamia\/ocy068"},{"key":"e_1_3_2_58_2","doi-asserted-by":"publisher","DOI":"10.1145\/3219819.3220051"},{"key":"e_1_3_2_59_2","doi-asserted-by":"publisher","DOI":"10.24963\/ijcai.2023\/547"},{"key":"e_1_3_2_60_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v37i4.25667"},{"key":"e_1_3_2_61_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v38i15.29578"},{"key":"e_1_3_2_62_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v36i8.20881"},{"key":"e_1_3_2_63_2","doi-asserted-by":"publisher","DOI":"10.1145\/3534678.3539388"},{"key":"e_1_3_2_64_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v35i1.16152"},{"key":"e_1_3_2_65_2","first-page":"566","volume-title":"Machine Learning for Healthcare Conference","author":"Zhang Dongyu","year":"2020","unstructured":"Dongyu Zhang, Jidapa Thadajarassiri, Cansu Sen, and Elke Rundensteiner. 2020. Time-aware transformer-based network for clinical notes series prediction. In Machine Learning for Healthcare Conference. PMLR, 566\u2013588."},{"key":"e_1_3_2_66_2","doi-asserted-by":"publisher","DOI":"10.1145\/3580305.3599543"},{"key":"e_1_3_2_67_2","first-page":"41300","volume-title":"International Conference on Machine Learning","author":"Zhang Xinlu","year":"2023","unstructured":"Xinlu Zhang, Shiyang Li, Zhiyu Chen, Xifeng Yan, and Linda Ruth Petzold. 
2023. Improving medical predictions by irregular multimodal electronic health records modeling. In International Conference on Machine Learning. PMLR, 41300\u201341313."},{"key":"e_1_3_2_68_2","unstructured":"Xiang Zhang Marko Zeman Theodoros Tsiligkaridis and Marinka Zitnik. 2021. Graph-guided network for irregularly sampled multivariate time series. arXiv:2110.05357. Retrieved from https:\/\/arxiv.org\/abs\/2110.05357"},{"key":"e_1_3_2_69_2","unstructured":"Yilan Zhang Yingxue Xu Jianqi Chen Fengying Xie and Hao Chen. 2024. Prototypical information bottlenecking and disentangling for multimodal cancer survival prediction. arXiv:2401.01646. Retrieved from https:\/\/arxiv.org\/abs\/2401.01646"},{"key":"e_1_3_2_70_2","doi-asserted-by":"publisher","DOI":"10.1145\/3627673.3679582"}],"container-title":["ACM Transactions on Knowledge Discovery from Data"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3744251","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,8,7]],"date-time":"2025-08-07T21:45:57Z","timestamp":1754603157000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3744251"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,8,7]]},"references-count":69,"journal-issue":{"issue":"7","published-print":{"date-parts":[[2025,8,31]]}},"alternative-id":["10.1145\/3744251"],"URL":"https:\/\/doi.org\/10.1145\/3744251","relation":{},"ISSN":["1556-4681","1556-472X"],"issn-type":[{"value":"1556-4681","type":"print"},{"value":"1556-472X","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,8,7]]},"assertion":[{"value":"2024-12-26","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-06-02","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication 
History"}},{"value":"2025-08-07","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}