{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,7]],"date-time":"2026-03-07T05:50:10Z","timestamp":1772862610447,"version":"3.50.1"},"reference-count":59,"publisher":"Association for Computing Machinery (ACM)","issue":"4","license":[{"start":{"date-parts":[[2025,4,9]],"date-time":"2025-04-09T00:00:00Z","timestamp":1744156800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"research project QG.23.32 of Vietnam National University"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Knowl. Discov. Data"],"published-print":{"date-parts":[[2025,5,31]]},"abstract":"<jats:p>\n            Dynamic graph learning is a rapidly developing area of research due to its widespread application in various real-world networks. Most existing works combine graph neural networks and sequential models to exploit the graph topology and the temporal information of dynamic graphs. However, these methods exhibit certain limitations in extracting local and global information and capturing fine-grained temporal structure in dynamic graphs. In this article, we present our novel framework, Dynamic Graph Subtree Attention, which is centralized by a learnable temporal edge sampling module and a lightweight attention operator to address the aforementioned issues. Our approach first constructs a temporal union graph for each time step using an adaptive edge sampling module, which preserves relevant interactions for our graph encoder to directly exploit fine-grained interactions across different times. Based on the temporal union graph, we further propose a subtree attention module that leverages the multi-hop representation and the self-attention mechanism to properly extract the local and global information from first- to high-order neighborhoods. 
To further reduce the computational complexity, the subtree module is equipped with a kernelized attention operation, which scales linearly with respect to the number of edges. By performing extensive experiments, we demonstrate the superiority of our proposed model in dynamic graph representation learning, as it consistently outperforms existing methods in future link prediction tasks. The code is publicly available at:\n            <jats:ext-link xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" ext-link-type=\"uri\" xlink:href=\"https:\/\/github.com\/minhduc1122002\/DySubTree\">https:\/\/github.com\/minhduc1122002\/DySubTree<\/jats:ext-link>\n            .\n          <\/jats:p>","DOI":"10.1145\/3720549","type":"journal-article","created":{"date-parts":[[2025,2,27]],"date-time":"2025-02-27T14:47:16Z","timestamp":1740667636000},"page":"1-24","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":1,"title":["Temporal Structural Preserving with Subtree Attention in Dynamic Graph Transformers"],"prefix":"10.1145","volume":"19","author":[{"ORCID":"https:\/\/orcid.org\/0009-0009-4824-311X","authenticated-orcid":false,"given":"Minh Duc","family":"Nguyen","sequence":"first","affiliation":[{"name":"HMI Laboratory, VNU University of Engineering and Technology Faculty of Information Technology, Hanoi, Vietnam"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8058-5915","authenticated-orcid":false,"given":"Viet Cuong","family":"Ta","sequence":"additional","affiliation":[{"name":"HMI Laboratory, VNU University of Engineering and Technology Faculty of Information Technology, Hanoi, Vietnam"}]}],"member":"320","published-online":{"date-parts":[[2025,4,9]]},"reference":[{"key":"e_1_3_2_2_2","volume-title":"Proceedings of the International Conference on Learning Representations","author":"Alon Uri","year":"2021","unstructured":"Uri Alon and Eran Yahav. 2021. On the bottleneck of graph neural networks and its practical implications. 
In Proceedings of the International Conference on Learning Representations. Retrieved from https:\/\/openreview.net\/forum?id=i80OPhOCVH2"},{"key":"e_1_3_2_3_2","volume-title":"Proceedings of the Advances in Neural Information Processing Systems","author":"Arora Raman","year":"2019","unstructured":"Raman Arora and Jalaj Upadhyay. 2019. On Differentially Private Graph Sparsification and Applications. In Proceedings of the Advances in Neural Information Processing Systems. H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch\u00e9-Buc, E. Fox, and R. Garnett (Eds.), Vol. 32. Curran Associates, Inc. Retrieved from https:\/\/proceedings.neurips.cc\/paper_files\/paper\/2019\/file\/e44e875c12109e4fa3716c05008048b2-Paper.pdf"},{"key":"e_1_3_2_4_2","unstructured":"Jimmy Lei Ba Jamie Ryan Kiros and Geoffrey E. Hinton. 2016. Layer normalization. arXiv:1607.06450. Retrieved from https:\/\/doi.org\/10.48550\/arXiv.1607.06450"},{"key":"e_1_3_2_5_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2023.127044"},{"key":"e_1_3_2_6_2","volume-title":"Proceedings of the 40th International Conference on Machine Learning (ICML\u201923)","author":"Cai Chen","year":"2023","unstructured":"Chen Cai, Truong Son Hy, Rose Yu, and Yusu Wang. 2023. On the connection between MPNN and graph transformer. In Proceedings of the 40th International Conference on Machine Learning (ICML\u201923). JMLR.org, Article 138, 23 pages."},{"key":"e_1_3_2_7_2","unstructured":"Deli Chen Yankai Lin Wei Li Peng Li Jie Zhou and Xu Sun. 2019. Measuring and relieving the over-smoothing problem for graph neural networks from the topological view. arXiv:1909.03211. Retrieved from http:\/\/arxiv.org\/abs\/1909.03211"},{"key":"e_1_3_2_8_2","unstructured":"Jinyin Chen Xuanheng Xu Yangyang Wu and Haibin Zheng. 2018. GC-LSTM: Graph convolution embedded LSTM for dynamic link prediction. arXiv:1812.04206. 
Retrieved from http:\/\/arxiv.org\/abs\/1812.04206"},{"key":"e_1_3_2_9_2","unstructured":"Eli Chien Jianhao Peng Pan Li and Olgica Milenkovic. 2020. Joint adaptive feature smoothing and topology extraction via generalized PageRank GNNs. arXiv:2006.07988. Retrieved from https:\/\/arxiv.org\/abs\/2006.07988"},{"key":"e_1_3_2_10_2","volume-title":"Proceedings of the International Conference on Learning Representations","author":"Choromanski Krzysztof Marcin","year":"2021","unstructured":"Krzysztof Marcin Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Quincy Davis, Afroz Mohiuddin, Lukasz Kaiser, David Benjamin Belanger, Lucy J Colwell, and Adrian Weller. 2021. Rethinking attention with performers. In Proceedings of the International Conference on Learning Representations. Retrieved from https:\/\/openreview.net\/forum?id=Ua6zuk0WRH"},{"key":"e_1_3_2_11_2","doi-asserted-by":"crossref","unstructured":"Qiang Cui Shu Wu Yan Huang and Liang Wang. 2019. A hierarchical contextual attention-based GRU network for sequential recommendation. Neurocomputing 358 (2019) 141\u2013149.","DOI":"10.1016\/j.neucom.2019.04.073"},{"key":"e_1_3_2_12_2","unstructured":"Micha\u00ebl Defferrard Xavier Bresson and Pierre Vandergheynst. 2016. Convolutional neural networks on graphs with fast localized spectral filtering. arXiv:1606.09375. Retrieved from http:\/\/arxiv.org\/abs\/1606.09375"},{"key":"e_1_3_2_13_2","doi-asserted-by":"publisher","unstructured":"Hao Dong Pengyang Wang Meng Xiao Zhiyuan Ning Pengfei Wang and Yuanchun Zhou. 2024. Temporal inductive path neural network for temporal knowledge graph reasoning. Artificial Intelligence 329 (2024) 104085. DOI: 10.1016\/j.artint.2024.104085.","DOI":"10.1016\/j.artint.2024.104085"},{"key":"e_1_3_2_14_2","unstructured":"Chelsea Finn Pieter Abbeel and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. arXiv:1703.03400. 
Retrieved from http:\/\/arxiv.org\/abs\/1703.03400"},{"key":"e_1_3_2_15_2","unstructured":"William L. Hamilton Rex Ying and Jure Leskovec. 2017. Inductive representation learning on large graphs. arXiv:1706.02216. Retrieved from http:\/\/arxiv.org\/abs\/1706.02216"},{"key":"e_1_3_2_16_2","doi-asserted-by":"publisher","unstructured":"Shengxiang Hu Guobing Zou Shiyi Lin Liangrui Wu Chenyang Zhou Bofeng Zhang and Yixin Chen. 2023. Recurrent transformer for dynamic graph representation learning with edge temporal states. arXiv:2304.10079. Retrieved from 10.48550\/arXiv.2304.10079","DOI":"10.48550\/arXiv.2304.10079"},{"key":"e_1_3_2_17_2","unstructured":"Siyuan Huang Yunchong Song Jiayue Zhou and Zhouhan Lin. 2023. Tailoring self-attention for graph via rooted subtrees. In Proceedings of the 37th Conference on Neural Information Processing Systems. Retrieved from https:\/\/openreview.net\/forum?id=t2hEZadBBk"},{"key":"e_1_3_2_18_2","first-page":"10","volume-title":"Proceedings of the 37th International Conference on Machine Learning (ICML\u201920). JMLR.org, Article 478","author":"Katharopoulos Angelos","year":"2020","unstructured":"Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, and Fran\u00e7ois Fleuret. 2020. Transformers are RNNs: fast autoregressive transformers with linear attention. In Proceedings of the 37th International Conference on Machine Learning (ICML\u201920). JMLR.org, Article 478, 10 pages."},{"key":"e_1_3_2_19_2","unstructured":"Thomas N. Kipf and Max Welling. 2016. Semi-supervised classification with graph convolutional networks. arXiv:1609.02907. Retrieved from http:\/\/arxiv.org\/abs\/1609.02907"},{"key":"e_1_3_2_20_2","unstructured":"Johannes Klicpera Aleksandar Bojchevski and Stephan G\u00fcnnemann. 2018. Personalized embedding propagation: Combining neural networks on graphs with personalized PageRank. arXiv:1810.05997. 
Retrieved from http:\/\/arxiv.org\/abs\/1810.05997"},{"key":"e_1_3_2_21_2","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Kool Wouter","year":"2019","unstructured":"Wouter Kool, Herke van Hoof, and Max Welling. 2019. Stochastic beams and where to find them: The gumbel-top-k trick for sampling sequences without replacement. In Proceedings of the International Conference on Machine Learning."},{"key":"e_1_3_2_22_2","unstructured":"Devin Kreuzer Dominique Beaini William L. Hamilton Vincent L\u00e9tourneau and Prudencio Tossou. 2021. Rethinking graph transformers with spectral attention. arXiv:2106.03893. Retrieved from https:\/\/arxiv.org\/abs\/2106.03893"},{"key":"e_1_3_2_23_2","doi-asserted-by":"crossref","first-page":"333","DOI":"10.1145\/3159652.3159729","volume-title":"Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining. ACM","author":"Kumar Srijan","year":"2018","unstructured":"Srijan Kumar, Bryan Hooi, Disha Makhija, Mohit Kumar, Christos Faloutsos, and VS Subrahmanian. 2018. Rev2: Fraudulent user prediction in rating platforms. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining. ACM, 333\u2013341."},{"key":"e_1_3_2_24_2","first-page":"221","volume-title":"Proceedings of the IEEE 16th International Conference on Data Mining (ICDM \u201916)","author":"Kumar Srijan","year":"2016","unstructured":"Srijan Kumar, Francesca Spezzano, V. S. Subrahmanian, and Christos Faloutsos. 2016. Edge weight prediction in weighted signed networks. In Proceedings of the IEEE 16th International Conference on Data Mining (ICDM \u201916). 
IEEE, 221\u2013230."},{"key":"e_1_3_2_25_2","doi-asserted-by":"publisher","DOI":"10.1145\/3292500.3330895"},{"key":"e_1_3_2_26_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v37i7.26021"},{"key":"e_1_3_2_27_2","doi-asserted-by":"publisher","DOI":"10.1145\/3539618.3591723"},{"key":"e_1_3_2_28_2","doi-asserted-by":"publisher","unstructured":"Jintang Li Sheng Tian Ruofan Wu Liang Zhu Welong Zhao Changhua Meng Liang Chen Zibin Zheng and Hongzhi Yin. 2023. Less can be more: Unsupervised graph pruning for large-scale dynamic graphs. arXiv:2305.10673. Retrieved from 10.48550\/arXiv.2305.10673","DOI":"10.48550\/arXiv.2305.10673"},{"key":"e_1_3_2_29_2","volume-title":"Proceedings of the 38th Annual Conference on Neural Information Processing Systems","author":"Li Jintang","year":"2024","unstructured":"Jintang Li, Ruofan Wu, Xinzhou Jin, Boqun Ma, Liang Chen, and Zibin Zheng. 2024. State space models on temporal graphs: A first-principles study. In Proceedings of the 38th Annual Conference on Neural Information Processing Systems. Retrieved from https:\/\/openreview.net\/forum?id=UaJErAOssN"},{"key":"e_1_3_2_30_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICDM50108.2020.00041"},{"key":"e_1_3_2_31_2","unstructured":"Gr\u00e9goire Mialon Dexiong Chen Margot Selosse and Julien Mairal. 2021. GraphiT: Encoding graph structure in transformers. arXiv:2106.05667. Retrieved from https:\/\/arxiv.org\/abs\/2106.05667"},{"issue":"5","key":"e_1_3_2_32_2","doi-asserted-by":"crossref","first-page":"911","DOI":"10.1002\/asi.21015","article-title":"Patterns and dynamics of users\u2019 behavior and interaction: Network analysis of an online community","volume":"60","author":"Panzarasa Pietro","year":"2009","unstructured":"Pietro Panzarasa, Tore Opsahl, and Kathleen M. Carley. 2009. Patterns and dynamics of users\u2019 behavior and interaction: Network analysis of an online community. J. Am. Soc. Inf. Sci. Technol. 60, 5 (May 2009), 911\u2013932.","journal-title":"J. Am. Soc. Inf. 
Sci. Technol"},{"key":"e_1_3_2_33_2","unstructured":"Ashwin Paranjape Austin R. Benson and Jure Leskovec. 2016. Motifs in temporal networks. arXiv:1612.09259. Retrieved from http:\/\/arxiv.org\/abs\/1612.09259"},{"key":"e_1_3_2_34_2","article-title":"EvolveGCN: Evolving graph convolutional networks for dynamic graphs","author":"Pareja Aldo","year":"2020","unstructured":"Aldo Pareja, Giacomo Domeniconi, Jie Chen, Tengfei Ma, Toyotaro Suzumura, Hiroki Kanezashi, Tim Kaler, Tao B. Schardl, and Charles E. Leiserson. 2020. EvolveGCN: Evolving graph convolutional networks for dynamic graphs. In Proceedings of the 34th AAAI Conference on Artificial Intelligence.","journal-title":"In Proceedings of the 34th AAAI Conference on Artificial Intelligence"},{"key":"e_1_3_2_35_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2022.emnlp-main.473"},{"key":"e_1_3_2_36_2","volume-title":"Advances in Neural Information Processing Systems","author":"Rampasek Ladislav","year":"2022","unstructured":"Ladislav Rampasek, Mikhail Galkin, Vijay Prakash Dwivedi, Anh Tuan Luu, Guy Wolf, and Dominique Beaini. 2022. Recipe for a general, powerful, scalable graph transformer. In Advances in Neural Information Processing Systems. Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (Eds.), Curran Associates, Inc. Retrieved from https:\/\/openreview.net\/forum?id=lMMaNf6oxKM"},{"key":"e_1_3_2_37_2","volume-title":"In Proceedings of the International Conference on Learning Representations","author":"Rong Yu","year":"2020","unstructured":"Yu Rong, Wenbing Huang, Tingyang Xu, and Junzhou Huang. 2020. DropEdge: Towards deep graph convolutional networks on node classification. In Proceedings of the International Conference on Learning Representations. Retrieved from https:\/\/openreview.net\/forum?id=Hkx1qkrKPr"},{"key":"e_1_3_2_38_2","unstructured":"Emanuele Rossi Ben Chamberlain Fabrizio Frasca Davide Eynard Federico Monti and Michael M. Bronstein. 2020. 
Temporal graph networks for deep learning on dynamic graphs. arXiv:2006.10637. Retrieved from https:\/\/arxiv.org\/abs\/2006.10637"},{"key":"e_1_3_2_39_2","article-title":"The network data repository with interactive graph analytics and visualization","author":"Rossi Ryan A.","year":"2015","unstructured":"Ryan A. Rossi and Nesreen K. Ahmed. 2015. The network data repository with interactive graph analytics and visualization. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI \u201915). Retrieved from https:\/\/networkrepository.com","journal-title":"Proceedings of the AAAI Conference on Artificial Intelligence (AAAI \u201915)"},{"key":"e_1_3_2_40_2","doi-asserted-by":"publisher","DOI":"10.1145\/3336191.3371845"},{"key":"e_1_3_2_41_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-04167-0_33"},{"key":"e_1_3_2_42_2","volume-title":"Proceedings of the 40th International Conference on Machine Learning (ICML\u201923)","author":"Shirzad Hamed","year":"2023","unstructured":"Hamed Shirzad, Ameya Velingker, Balaji Venkatachalam, Danica J. Sutherland, and Ali Kemal Sinop. 2023. Exphormer: Sparse transformers for graphs. In Proceedings of the 40th International Conference on Machine Learning (ICML\u201923). JMLR.org, Article 1310, 20 pages."},{"key":"e_1_3_2_43_2","volume-title":"Proceedings of the Advances in Neural Information Processing Systems","author":"Vaswani Ashish","year":"2017","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the Advances in Neural Information Processing Systems. I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.), Vol. 30. Curran Associates, Inc. 
Retrieved from https:\/\/proceedings.neurips.cc\/paper_files\/paper\/2017\/file\/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf"},{"key":"e_1_3_2_44_2","volume-title":"Proceedings of the International Conference on Learning Representations","author":"Veli\u010dkovi\u0107 Petar","year":"2018","unstructured":"Petar Veli\u010dkovi\u0107, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Li\u00f2, and Yoshua Bengio. 2018. Graph attention networks. In Proceedings of the International Conference on Learning Representations. Retrieved from https:\/\/openreview.net\/forum?id=rJXMpikCZ"},{"key":"e_1_3_2_45_2","doi-asserted-by":"publisher","DOI":"10.24963\/ijcai.2021\/425"},{"key":"e_1_3_2_46_2","unstructured":"Felix Wu Tianyi Zhang Amauri H. Souza Jr. Christopher Fifty Tao Yu and Kilian Q. Weinberger. 2019. Simplifying graph convolutional networks. arXiv:1902.07153. Retrieved from http:\/\/arxiv.org\/abs\/1902.07153"},{"key":"e_1_3_2_47_2","volume-title":"Proceedings of the Advances in Neural Information Processing Systems","author":"Wu Qitian","year":"2022","unstructured":"Qitian Wu, Wentao Zhao, Zenan Li, David Wipf, and Junchi Yan. 2022. NodeFormer: A scalable graph structure learning transformer for node classification. In Proceedings of the Advances in Neural Information Processing Systems. Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (Eds.), Curran Associates, Inc. Retrieved from https:\/\/openreview.net\/forum?id=sMezXGG5So"},{"key":"e_1_3_2_48_2","unstructured":"Da Xu Chuanwei Ruan Evren K\u00f6rpeoglu Sushant Kumar and Kannan Achan. 2020. Inductive representation learning on temporal graphs. arXiv:2002.07962. Retrieved from https:\/\/arxiv.org\/abs\/2002.07962"},{"key":"e_1_3_2_49_2","doi-asserted-by":"publisher","unstructured":"Leshanshui Yang Cl\u00e9ment Chatelain and S\u00e9bastien Adam. 2024. Dynamic graph representation learning with neural networks: A survey. IEEE Access 12 (2024) 43460\u201343484. 
DOI: 10.1109\/ACCESS.2024.3378111","DOI":"10.1109\/ACCESS.2024.3378111"},{"key":"e_1_3_2_50_2","doi-asserted-by":"publisher","DOI":"10.1145\/3447548.3467422"},{"key":"e_1_3_2_51_2","unstructured":"Chengxuan Ying Tianle Cai Shengjie Luo Shuxin Zheng Guolin Ke Di He Yanming Shen and Tie-Yan Liu. 2021. Do transformers really perform bad for graph representation? arXiv:2106.05234. Retrieved from https:\/\/arxiv.org\/abs\/2106.05234"},{"key":"e_1_3_2_52_2","doi-asserted-by":"publisher","DOI":"10.1145\/3219819.3219890"},{"key":"e_1_3_2_53_2","doi-asserted-by":"publisher","DOI":"10.1145\/3534678.3539300"},{"key":"e_1_3_2_54_2","unstructured":"Jiaxuan You Yichen Wang Aditya Pal Pong Eksombatchai Chuck Rosenberg and Jure Leskovec. 2019. Hierarchical temporal convolutional networks for dynamic recommender systems. arXiv:1904.04381. Retrieved from http:\/\/arxiv.org\/abs\/1904.04381"},{"key":"e_1_3_2_55_2","first-page":"17283","volume-title":"Proceedings of the Advances in Neural Information Processing Systems","author":"Zaheer Manzil","year":"2020","unstructured":"Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, and Amr Ahmed. 2020. Big Bird: Transformers for longer sequences. In Proceedings of the Advances in Neural Information Processing Systems. H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (Eds.), Vol. 33. Curran Associates, Inc., 17283\u201317297. Retrieved from https:\/\/proceedings.neurips.cc\/paper_files\/paper\/2020\/file\/c8512d142a2d849725f31a9a7a361ab9-Paper.pdf"},{"key":"e_1_3_2_56_2","unstructured":"Biao Zhang and Rico Sennrich. 2019. Root mean square layer normalization. In Proceedings of the Advances in Neural Information Processing Systems Vol. 32. Vancouver Canada. 
Retrieved from https:\/\/openreview.net\/references\/pdf?id=S1qBAf6rr"},{"key":"e_1_3_2_57_2","unstructured":"Zeyang Zhang Xin Wang Ziwei Zhang Haoyang Li Zhou Qin and Wenwu Zhu. 2022. Dynamic graph neural networks under spatio-temporal distribution shift. In Proceedings of the Advances in Neural Information Processing Systems. Alice H. Oh Alekh Agarwal Danielle Belgrave and Kyunghyun Cho (Eds.) Curran Associates Inc. Retrieved from https:\/\/openreview.net\/forum?id=1tIUqrUuJxx"},{"key":"e_1_3_2_58_2","doi-asserted-by":"publisher","DOI":"10.1109\/TITS.2019.2935152"},{"key":"e_1_3_2_59_2","doi-asserted-by":"publisher","DOI":"10.5555\/3524938.3526000"},{"key":"e_1_3_2_60_2","doi-asserted-by":"publisher","DOI":"10.1145\/3580305.3599551"}],"container-title":["ACM Transactions on Knowledge Discovery from Data"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3720549","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3720549","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T01:57:38Z","timestamp":1750298258000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3720549"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,4,9]]},"references-count":59,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2025,5,31]]}},"alternative-id":["10.1145\/3720549"],"URL":"https:\/\/doi.org\/10.1145\/3720549","relation":{},"ISSN":["1556-4681","1556-472X"],"issn-type":[{"value":"1556-4681","type":"print"},{"value":"1556-472X","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,4,9]]},"assertion":[{"value":"2024-04-10","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication 
History"}},{"value":"2025-02-22","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-04-09","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}