{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,31]],"date-time":"2026-03-31T19:50:38Z","timestamp":1774986638308,"version":"3.50.1"},"reference-count":62,"publisher":"Association for Computing Machinery (ACM)","issue":"3","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Proc. ACM Manag. Data"],"published-print":{"date-parts":[[2025,6,17]]},"abstract":"<jats:p>\n                    Dynamic Graph Neural Networks (DGNNs) are effective at capturing multidimensional data and enable many important applications. As model training is computationally intensive, distributed DGNN training is employed to accommodate large data. Also, when training DGNNs, so-called sliding window training is used predominantly, as it enhances both accuracy and efficiency. However, current distributed frameworks, such as snapshot partitioning, chunk-based partitioning, and\n                    <jats:italic toggle=\"yes\">L<\/jats:italic>\n                    -hop cache-based communication-free vertex partitioning, are inherently incompatible with sliding window training. While communication-based vertex partitioning supports sliding window training, its design for static graphs limits its effectiveness in distributed DGNN training. Specifically, existing partitioning strategies fail to optimize communication across snapshots, while existing cache reuse and communication scheduling strategies ignore opportunities for optimization between sliding windows. 
To support distributed sliding window training, we present\n                    <jats:italic toggle=\"yes\">SWASH,<\/jats:italic>\n                    a scalable and flexible communication framework that utilizes a\n                    <jats:bold>S<\/jats:bold>\n                    liding\n                    <jats:bold>W<\/jats:bold>\n                    indow-based c\n                    <jats:bold>A<\/jats:bold>\n                    che\n                    <jats:bold>SH<\/jats:bold>\n                    aring technique. Specifically, we propose a flexible communication framework that supports ratio adjustment and timing selection, as well as hyperparameter settings and adaptive scheduling. We also propose a lightweight partitioning strategy tailored to sliding window-based DGNN training to reduce both partitioning and communication overheads. Finally, to alleviate decreases in accuracy due to reduced communication, we propose a cache-sharing technique based on sliding windows for sharing boundary vertex embeddings. 
Comprehensive experiments show that\n                    <jats:italic toggle=\"yes\">SWASH<\/jats:italic>\n                    achieves an average training speedup of 9.44\u00d7 over state-of-the-art frameworks while maintaining the accuracy of fully communicating, non-caching training frameworks.\n                  <\/jats:p>","DOI":"10.1145\/3725360","type":"journal-article","created":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T21:23:29Z","timestamp":1750281809000},"page":"1-26","source":"Crossref","is-referenced-by-count":1,"title":["SWASH: A Flexible Communication Framework with Sliding Window-Based Cache Sharing for Scalable DGNN Training"],"prefix":"10.1145","volume":"3","author":[{"ORCID":"https:\/\/orcid.org\/0009-0004-8737-2727","authenticated-orcid":false,"given":"Zhen","family":"Song","sequence":"first","affiliation":[{"name":"Northeastern University, Shenyang, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7422-6254","authenticated-orcid":false,"given":"Yu","family":"Gu","sequence":"additional","affiliation":[{"name":"Northeastern University, Shenyang, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5424-6442","authenticated-orcid":false,"given":"Tianyi","family":"Li","sequence":"additional","affiliation":[{"name":"Aalborg University, Aalborg, Denmark"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3043-3777","authenticated-orcid":false,"given":"Yushuai","family":"Li","sequence":"additional","affiliation":[{"name":"Aalborg University, Aalborg, Denmark"}]},{"ORCID":"https:\/\/orcid.org\/0009-0006-2365-9766","authenticated-orcid":false,"given":"Qing","family":"Sun","sequence":"additional","affiliation":[{"name":"Northeastern University, Shenyang, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9871-0304","authenticated-orcid":false,"given":"Yanfeng","family":"Zhang","sequence":"additional","affiliation":[{"name":"Northeastern University, Shenyang, 
China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9697-7670","authenticated-orcid":false,"given":"Christian S.","family":"Jensen","sequence":"additional","affiliation":[{"name":"Aalborg University, Aalborg, Denmark"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3171-8889","authenticated-orcid":false,"given":"Ge","family":"Yu","sequence":"additional","affiliation":[{"name":"Northeastern University, Shenyang, China"}]}],"member":"320","published-online":{"date-parts":[[2025,6,18]]},"reference":[{"key":"e_1_2_1_1_1","doi-asserted-by":"publisher","DOI":"10.3390\/ijgi10070485"},{"key":"e_1_2_1_2_1","unstructured":"Kaidi Cao Rui Deng Shirley Wu Edward W Huang Karthik Subbian and Jure Leskovec. 2023. Communication-Free Distributed GNN Training with Vertex Cut. arXiv preprint arXiv:2308.03209 (2023)."},{"key":"e_1_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.1145\/3458817.3480858"},{"key":"e_1_2_1_4_1","doi-asserted-by":"publisher","DOI":"10.14778\/3632093.3632108"},{"key":"e_1_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1145\/3626724"},{"key":"e_1_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10489-021-02518-9"},{"key":"e_1_2_1_7_1","first-page":"3837","article-title":"Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering","author":"Defferrard Micha\u00ebl","year":"2016","unstructured":"Micha\u00ebl Defferrard, Xavier Bresson, and Pierre Vandergheynst. 2016. Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering. In NeurIPS. 3837-3845.","journal-title":"NeurIPS."},{"key":"e_1_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.1145\/3292500.3330919"},{"key":"e_1_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.14778\/3446095.3446097"},{"key":"e_1_2_1_10_1","volume-title":"Frameworks, Benchmarks, Experiments and Challenges. arXiv preprint arXiv:2405.00476","author":"Feng ZhengZhao","year":"2024","unstructured":"ZhengZhao Feng, Rui Wang, TianXing Wang, Mingli Song, Sai Wu, and Shuibing He. 2024. 
A Comprehensive Survey of Dynamic Graph Neural Networks: Models, Frameworks, Benchmarks, Experiments and Challenges. arXiv preprint arXiv:2405.00476, (2024)."},{"key":"e_1_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.1145\/3581784.3607040"},{"key":"e_1_2_1_12_1","volume-title":"15th {USENIX} Symposium on Operating Systems Design and Implementation ({OSDI} 21). 551-568.","author":"Gandhi Swapnil","unstructured":"Swapnil Gandhi and Anand Padmanabha Iyer. 2021. P3: Distributed deep graph learning at scale. In 15th {USENIX} Symposium on Operating Systems Design and Implementation ({OSDI} 21). 551-568."},{"key":"e_1_2_1_13_1","first-page":"1","article-title":"DynaGraph: dynamic graph neural networks at scale","volume":"6","author":"Guan Mingyu","year":"2022","unstructured":"Mingyu Guan, Anand Padmanabha Iyer, and Taesoo Kim. 2022. DynaGraph: dynamic graph neural networks at scale. In GRADES-NDA. 6:1-6:10.","journal-title":"GRADES-NDA."},{"key":"e_1_2_1_14_1","first-page":"1024","article-title":"Inductive Representation Learning on Large Graphs","author":"Hamilton William L.","year":"2017","unstructured":"William L. Hamilton, Zhitao Ying, and Jure Leskovec. 2017. Inductive Representation Learning on Large Graphs. In NeurIPS. 1024-1034.","journal-title":"NeurIPS."},{"key":"e_1_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.1145\/2588555.2610495"},{"key":"e_1_2_1_16_1","volume-title":"International Conference on Machine Learning. PMLR, 14679-14690","author":"Jaiswal Ajay Kumar","year":"2023","unstructured":"Ajay Kumar Jaiswal, Shiwei Liu, Tianlong Chen, Ying Ding, and Zhangyang Wang. 2023. Graph ladling: Shockingly simple parallel gnn training without intermediate communication. In International Conference on Machine Learning. 
PMLR, 14679-14690."},{"key":"e_1_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1137\/S1064827595287997"},{"key":"e_1_2_1_18_1","first-page":"1269","article-title":"Predicting Dynamic Embedding Trajectory in Temporal Interaction Networks","author":"Kumar Srijan","year":"2019","unstructured":"Srijan Kumar, Xikun Zhang, and Jure Leskovec. 2019. Predicting Dynamic Embedding Trajectory in Temporal Interaction Networks. In SIGKDD. 1269-1278.","journal-title":"SIGKDD."},{"key":"e_1_2_1_19_1","first-page":"937","article-title":"Cache-based GNN System for Dynamic Graphs","author":"Li Haoyang","year":"2021","unstructured":"Haoyang Li and Lei Chen. 2021. Cache-based GNN System for Dynamic Graphs. In CIKM. 937-946.","journal-title":"CIKM."},{"key":"e_1_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.1145\/3589308"},{"key":"e_1_2_1_21_1","first-page":"1279","article-title":"Predicting Path Failure In Time-Evolving Graphs","author":"Li Jia","year":"2019","unstructured":"Jia Li, Zhichao Han, Hong Cheng, Jiao Su, Pengyun Wang, Jianfeng Zhang, and Lujia Pan. 2019. Predicting Path Failure In Time-Evolving Graphs. In SIGKDD. ACM, 1279-1289.","journal-title":"SIGKDD. ACM"},{"key":"e_1_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.14778\/3450980.3450987"},{"key":"e_1_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICDE53745.2022.00225"},{"key":"e_1_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.14778\/3384345.3384353"},{"key":"e_1_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.1145\/3588737"},{"key":"e_1_2_1_26_1","unstructured":"Yaguang Li Rose Yu Cyrus Shahabi and Yan Liu. 2018. Diffusion Convolutional Recurrent Neural Network: Data-Driven Traffic Forecasting. In ICLR."},{"key":"e_1_2_1_27_1","unstructured":"Antonio Longa Veronica Lachi Gabriele Santin Monica Bianchini Bruno Lepri Pietro Lio Franco Scarselli and Andrea Passerini. 2023. Graph neural networks for temporal graphs: State of the art open challenges and opportunities. 
arXiv preprint arXiv:2302.01018 (2023)."},{"key":"e_1_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.1145\/3458817.3480856"},{"key":"e_1_2_1_29_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v35i6.16616"},{"key":"e_1_2_1_30_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v35i6.16616"},{"key":"e_1_2_1_31_1","first-page":"5363","article-title":"EvolveGCN","author":"Pareja Aldo","year":"2020","unstructured":"Aldo Pareja, Giacomo Domeniconi, Jie Chen, Tengfei Ma, Toyotaro Suzumura, Hiroki Kanezashi, Tim Kaler, Tao B. Schardl, and Charles E. Leiserson. 2020. EvolveGCN: Evolving Graph Convolutional Networks for Dynamic Graphs. In AAAI. 5363-5370.","journal-title":"Evolving Graph Convolutional Networks for Dynamic Graphs. In AAAI."},{"key":"e_1_2_1_32_1","doi-asserted-by":"publisher","DOI":"10.14778\/3538598.3538614"},{"key":"e_1_2_1_33_1","volume-title":"SEIGN: A Simple and Efficient Graph Neural Network for Large Dynamic Graphs. In 2023 IEEE 39th International Conference on Data Engineering (ICDE). IEEE, 2850-2863","author":"Qin Xiao","year":"2023","unstructured":"Xiao Qin, Nasrullah Sheikh, Chuan Lei, Berthold Reinwald, and Giacomo Domeniconi. 2023. SEIGN: A Simple and Efficient Graph Neural Network for Large Dynamic Graphs. In 2023 IEEE 39th International Conference on Data Engineering (ICDE). IEEE, 2850-2863."},{"key":"e_1_2_1_34_1","first-page":"4564","article-title":"PyTorch Geometric Temporal","author":"Rozemberczki Benedek","year":"2021","unstructured":"Benedek Rozemberczki, Paul Scherer, Yixuan He, George Panagopoulos, Alexander Riedel, Maria Sinziana Astefanoaei, Oliver Kiss, Ferenc B\u00e9res, Guzm\u00e1n L\u00f3pez, Nicolas Collignon, and Rik Sarkar. 2021. PyTorch Geometric Temporal: Spatiotemporal Signal Processing with Neural Machine Learning Models. In CIKM. 4564-4573.","journal-title":"Spatiotemporal Signal Processing with Neural Machine Learning Models. 
In CIKM."},{"key":"e_1_2_1_35_1","first-page":"362","volume-title":"ICONIP","volume":"11301","author":"Seo Youngjoo","year":"2018","unstructured":"Youngjoo Seo, Micha\u00ebl Defferrard, Pierre Vandergheynst, and Xavier Bresson. 2018. Structured Sequence Modeling with Graph Convolutional Recurrent Networks. In ICONIP, Vol. 11301. 362-373."},{"key":"e_1_2_1_36_1","volume-title":"CHGNN: a semi-supervised contrastive hypergraph learning network","author":"Song Yumeng","year":"2024","unstructured":"Yumeng Song, Yu Gu, Tianyi Li, Jianzhong Qi, Zhenghao Liu, Christian S Jensen, and Ge Yu. 2024a. CHGNN: a semi-supervised contrastive hypergraph learning network. IEEE Transactions on Knowledge and Data Engineering, (2024)."},{"key":"e_1_2_1_37_1","doi-asserted-by":"publisher","DOI":"10.1145\/3626716"},{"key":"e_1_2_1_38_1","first-page":"648","article-title":"EC-Graph","volume":"2022","author":"Song Zhen","year":"2022","unstructured":"Zhen Song, Yu Gu, Jianzhong Qi, Zhigang Wang, and Ge Yu. 2022. EC-Graph: A Distributed Graph Neural Network System with Error-Compensated Compression. In ICDE 2022. 648-660.","journal-title":"In ICDE"},{"key":"e_1_2_1_39_1","doi-asserted-by":"publisher","DOI":"10.14778\/3681954.3682008"},{"key":"e_1_2_1_40_1","first-page":"57","article-title":"Predictive temporal embedding of dynamic graphs. In ASONAM","author":"Taheri Aynaz","year":"2019","unstructured":"Aynaz Taheri and Tanya Y. Berger-Wolf. 2019. Predictive temporal embedding of dynamic graphs. In ASONAM,. ACM, 57-64.","journal-title":"ACM"},{"key":"e_1_2_1_41_1","first-page":"495","volume-title":"15th USENIX Symposium on Operating Systems Design and Implementation (OSDI 21)","author":"Thorpe John","year":"2021","unstructured":"John Thorpe, Yifan Qiao, Jonathan Eyolfson, Shen Teng, Guanzhou Hu, Zhihao Jia, Jinliang Wei, Keval Vora, Ravi Netravali, Miryung Kim, et al., 2021. Dorylus: Affordable, scalable, and accurate {GNN} training with distributed {CPU} servers and serverless threads. 
In 15th USENIX Symposium on Operating Systems Design and Implementation (OSDI 21). 495-514."},{"key":"e_1_2_1_42_1","doi-asserted-by":"publisher","DOI":"10.1145\/3597428"},{"key":"e_1_2_1_43_1","unstructured":"Petar Veli\u010dkovi\u0107 Guillem Cucurull Arantxa Casanova Adriana Romero Pietro Lio and Yoshua Bengio. 2017. Graph attention networks. arXiv preprint arXiv:1710.10903."},{"key":"e_1_2_1_44_1","first-page":"673","article-title":"Bns-gcn: Efficient full-graph training of graph convolutional networks with partition-parallelism and random boundary node sampling","volume":"4","author":"Wan Cheng","year":"2022","unstructured":"Cheng Wan, Youjie Li, Ang Li, Nam Sung Kim, and Yingyan Lin. 2022. Bns-gcn: Efficient full-graph training of graph convolutional networks with partition-parallelism and random boundary node sampling. Proceedings of Machine Learning and Systems, Vol. 4 (2022), 673-693.","journal-title":"Proceedings of Machine Learning and Systems"},{"key":"e_1_2_1_45_1","first-page":"405","article-title":"PiPAD","author":"Wang Chunyang","year":"2023","unstructured":"Chunyang Wang, Desen Sun, and Yuebin Bai. 2023. PiPAD: Pipelined and Parallel Dynamic GNN Training on GPUs. In PPoPP. 405-418.","journal-title":"Pipelined and Parallel Dynamic GNN Training on GPUs. In PPoPP."},{"key":"e_1_2_1_46_1","doi-asserted-by":"publisher","DOI":"10.1145\/3514221.3526134"},{"key":"e_1_2_1_47_1","doi-asserted-by":"publisher","DOI":"10.1145\/3448016.3457564"},{"key":"e_1_2_1_48_1","unstructured":"Yanbang Wang Yen-Yu Chang Yunyu Liu Jure Leskovec and Pan Li. 2021a. Inductive representation learning in temporal networks via causal anonymous walks. 
arXiv preprint arXiv:2101.05974 (2021)."},{"key":"e_1_2_1_49_1","doi-asserted-by":"publisher","DOI":"10.1145\/3572848.3577490"},{"key":"e_1_2_1_50_1","doi-asserted-by":"publisher","DOI":"10.1145\/3539618.3591816"},{"key":"e_1_2_1_51_1","doi-asserted-by":"publisher","DOI":"10.1145\/3588195.3592990"},{"key":"e_1_2_1_52_1","unstructured":"Da Xu Chuanwei Ruan Evren K\u00f6rpeoglu Sushant Kumar and Kannan Achan. 2020. Inductive representation learning on temporal graphs. In ICLR."},{"key":"e_1_2_1_53_1","doi-asserted-by":"publisher","DOI":"10.1145\/3534678.3539300"},{"key":"e_1_2_1_54_1","volume-title":"Advances in Neural Information Processing Systems","author":"Yu Le","year":"2023","unstructured":"Le Yu, Leilei Sun, Bowen Du, and Weifeng Lv. 2023a. Towards Better Dynamic Graph Learning: New Architecture and Unified Library. Advances in Neural Information Processing Systems, (2023)."},{"key":"e_1_2_1_55_1","doi-asserted-by":"publisher","DOI":"10.1145\/3579815"},{"key":"e_1_2_1_56_1","doi-asserted-by":"publisher","DOI":"10.1109\/TITS.2019.2935152"},{"key":"e_1_2_1_57_1","first-page":"1234","article-title":"GMAN","author":"Zheng Chuanpan","year":"2020","unstructured":"Chuanpan Zheng, Xiaoliang Fan, Cheng Wang, and Jianzhong Qi. 2020. GMAN: A Graph Multi-Attention Network for Traffic Prediction. In AAAI. 1234-1241.","journal-title":"A Graph Multi-Attention Network for Traffic Prediction. In AAAI."},{"key":"e_1_2_1_58_1","doi-asserted-by":"crossref","unstructured":"Yanping Zheng Lu Yi and Zhewei Wei. 2024. A survey of dynamic graph neural networks. arXiv preprint arXiv:2404.18211 (2024).","DOI":"10.1007\/s11704-024-3853-2"},{"key":"e_1_2_1_59_1","unstructured":"Yuchen Zhong Guangming Sheng Tianzuo Qin Minjie Wang Quan Gan and Chuan Wu. 2023. GNNFlow: A Distributed Framework for Continuous Temporal GNN Learning on Dynamic Graphs. 
arXiv preprint arXiv:2311.17410."},{"key":"e_1_2_1_60_1","doi-asserted-by":"publisher","DOI":"10.14778\/3529337.3529342"},{"key":"e_1_2_1_61_1","first-page":"1","article-title":"DistTGL","volume":"39","author":"Zhou Hongkuan","year":"2023","unstructured":"Hongkuan Zhou, Da Zheng, Xiang Song, George Karypis, and Viktor K. Prasanna. 2023. DistTGL: Distributed Memory-Based Temporal Graph Neural Network Training. In SC. 39:1-39:12.","journal-title":"Distributed Memory-Based Temporal Graph Neural Network Training. In SC."},{"key":"e_1_2_1_62_1","first-page":"3650","article-title":"WinGNN","author":"Zhu Yifan","year":"2023","unstructured":"Yifan Zhu, Fangpeng Cong, Dan Zhang, Wenwen Gong, Qika Lin, Wenzheng Feng, Yuxiao Dong, and Jie Tang. 2023. WinGNN: Dynamic Graph Neural Networks with Random Gradient Aggregation Window. In SIGKDD. 3650-3662.","journal-title":"In SIGKDD."}],"container-title":["Proceedings of the ACM on Management of Data"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3725360","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,3,31]],"date-time":"2026-03-31T18:58:48Z","timestamp":1774983528000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3725360"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,6,17]]},"references-count":62,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2025,6,17]]}},"alternative-id":["10.1145\/3725360"],"URL":"https:\/\/doi.org\/10.1145\/3725360","relation":{},"ISSN":["2836-6573"],"issn-type":[{"value":"2836-6573","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,6,17]]}}}