{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,9,26]],"date-time":"2025-09-26T16:45:28Z","timestamp":1758905128938,"version":"3.44.0"},"reference-count":43,"publisher":"Association for Computing Machinery (ACM)","issue":"3","license":[{"start":{"date-parts":[[2024,5,29]],"date-time":"2024-05-29T00:00:00Z","timestamp":1716940800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Proc. ACM Manag. Data"],"published-print":{"date-parts":[[2024,5,29]]},"abstract":"<jats:p>Dynamic graphs are essential in real-world scenarios like social media and e-commerce for tasks such as predicting links and classifying nodes. Temporal Graph Neural Networks (T-GNNs) stand out as a prime solution for managing dynamic graphs, employing temporal message passing to compute node embeddings at specific timestamps. Nonetheless, the high CPU-GPU data loading overhead has become the bottleneck for efficient training of T-GNNs over large-scale dynamic graphs. In this work, we present SIMPLE, a versatile system designed to address the major efficiency bottleneck in training existing T-GNNs on a large scale. It incorporates a dynamic data placement mechanism, which maintains a small buffer space in available GPU memory and dynamically manages its content during T-GNN training. SIMPLE is also empowered by systematic optimizations towards data processing flow. We compare SIMPLE to the state-of-the-art generic T-GNN training system TGL on four large-scale dynamic graphs with different underlying T-GNN models. 
Extensive experimental results show that SIMPLE effectively cuts down 80.5% ~ 96.8% data loading cost, and accelerates T-GNN training by 1.8\u00d7 ~ 3.8\u00d7 (2.6\u00d7 on average) compared to TGL.<\/jats:p>","DOI":"10.1145\/3654977","type":"journal-article","created":{"date-parts":[[2024,5,30]],"date-time":"2024-05-30T09:44:53Z","timestamp":1717062293000},"page":"1-25","source":"Crossref","is-referenced-by-count":9,"title":["SIMPLE: Efficient Temporal Graph Neural Network Training at Scale with Dynamic Data Placement"],"prefix":"10.1145","volume":"2","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-0413-9005","authenticated-orcid":false,"given":"Shihong","family":"Gao","sequence":"first","affiliation":[{"name":"The Hong Kong University of Science and Technology, Hong Kong, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8376-6244","authenticated-orcid":false,"given":"Yiming","family":"Li","sequence":"additional","affiliation":[{"name":"The Hong Kong University of Science and Technology, Hong Kong, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8560-5006","authenticated-orcid":false,"given":"Xin","family":"Zhang","sequence":"additional","affiliation":[{"name":"The Hong Kong University of Science and Technology, Hong Kong, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8364-3674","authenticated-orcid":false,"given":"Yanyan","family":"Shen","sequence":"additional","affiliation":[{"name":"Shanghai Jiao Tong University, Shanghai, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8559-2628","authenticated-orcid":false,"given":"Yingxia","family":"Shao","sequence":"additional","affiliation":[{"name":"Beijing University of Posts and Telecommunications, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8257-5806","authenticated-orcid":false,"given":"Lei","family":"Chen","sequence":"additional","affiliation":[{"name":"The Hong Kong University of Science and Technology (GZ), Guangzhou, 
China"}]}],"member":"320","published-online":{"date-parts":[[2024,5,30]]},"reference":[{"key":"e_1_2_1_1_1","unstructured":"[2023]. Stack-Overflow. https:\/\/snap.stanford.edu\/data\/sx-stackoverflow.html"},{"key":"e_1_2_1_2_1","unstructured":"[2023]. Wiki-Talk. http:\/\/snap.stanford.edu\/data\/wiki-talk-temporal.html"},{"key":"e_1_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.cor.2021.105693"},{"key":"e_1_2_1_4_1","doi-asserted-by":"publisher","DOI":"10.1145\/3458817.3480858"},{"key":"e_1_2_1_5_1","volume-title":"International Conference on Learning Representations.","author":"Cong Weilin","year":"2023","unstructured":"Weilin Cong, Si Zhang, Jian Kang, Baichuan Yuan, Hao Wu, Xin Zhou, Hanghang Tong, and Mehrdad Mahdavi. 2023. Do We Really Need Complicated Model Architectures For Temporal Networks?. In International Conference on Learning Representations."},{"key":"e_1_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.1016\/S0377-2217(03)00274-1"},{"key":"e_1_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.14778\/3641204.3641215"},{"key":"e_1_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.knosys.2019.06.024"},{"key":"e_1_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.1145\/3534540.3534691"},{"key":"e_1_2_1_10_1","volume-title":"Variational graph recurrent neural networks. Advances in neural information processing systems","author":"Hajiramezanali Ehsan","year":"2019","unstructured":"Ehsan Hajiramezanali, Arman Hasanzadeh, Krishna Narayanan, Nick Duffield, Mingyuan Zhou, and Xiaoning Qian. 2019. Variational graph recurrent neural networks. Advances in neural information processing systems, Vol. 32 (2019)."},{"key":"e_1_2_1_11_1","volume-title":"Inductive representation learning on large graphs. Advances in neural information processing systems","author":"Hamilton Will","year":"2017","unstructured":"Will Hamilton, Zhitao Ying, and Jure Leskovec. 2017. Inductive representation learning on large graphs. 
Advances in neural information processing systems, Vol. 30 (2017)."},{"key":"e_1_2_1_12_1","unstructured":"Ming Jin Yuan-Fang Li and Shirui Pan. 2022. Neural Temporal Walks: Motif-Aware Representation Learning on Continuous-Time Dynamic Graphs. In Advances in Neural Information Processing Systems."},{"key":"e_1_2_1_13_1","volume-title":"Semi-Supervised Classification with Graph Convolutional Networks. In International Conference on Learning Representations.","author":"Kipf Thomas N","year":"2017","unstructured":"Thomas N Kipf and Max Welling. 2017. Semi-Supervised Classification with Graph Convolutional Networks. In International Conference on Learning Representations."},{"key":"e_1_2_1_14_1","doi-asserted-by":"publisher","DOI":"10.1145\/3292500.3330895"},{"key":"e_1_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.1145\/2833157.2833162"},{"key":"e_1_2_1_16_1","volume-title":"Yookun Cho, and Chong Sang Kim.","author":"Lee Donghee","year":"2001","unstructured":"Donghee Lee, Jongmoo Choi, Jong-Hun Kim, Sam H Noh, Sang Lyul Min, Yookun Cho, and Chong Sang Kim. 2001. LRFU: A spectrum of policies that subsumes the least recently used and least frequently used policies. IEEE transactions on Computers, Vol. 50, 12 (2001), 1352--1361."},{"key":"e_1_2_1_17_1","volume-title":"Gdelt: Global data on events, location, and tone","author":"Leetaru Kalev","year":"2013","unstructured":"Kalev Leetaru and Philip A Schrodt. 2013. Gdelt: Global data on events, location, and tone, 1979--2012. In ISA annual convention, Vol. 2. Citeseer, 1--49."},{"key":"e_1_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.1145\/3448016.3457542"},{"key":"e_1_2_1_19_1","doi-asserted-by":"publisher","DOI":"10.1145\/3459637.3482237"},{"key":"e_1_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.1145\/3588737"},{"key":"e_1_2_1_21_1","volume-title":"Zebra: When Temporal Graph Neural Networks Meet Temporal Personalized PageRank. 
Proceedings of the VLDB Endowment","volume":"16","author":"Li Yiming","year":"2023","unstructured":"Yiming Li, Yanyan Shen, Lei Chen, and Mingxuan Yuan. 2023 b. Zebra: When Temporal Graph Neural Networks Meet Temporal Personalized PageRank. Proceedings of the VLDB Endowment, Vol. 16, 6 (2023), 1332--1345."},{"key":"e_1_2_1_22_1","first-page":"1364","article-title":"DAHA","volume":"17","author":"Li Zhiyuan","year":"2024","unstructured":"Zhiyuan Li, Xun Jian, Yue Wang, Yingxia Shao, and Lei Chen. 2024. DAHA: Accelerating GNN Training with Data and Hardware Aware Execution Planning., Vol. 17, 6 (2024), 1364--1376.","journal-title":"Accelerating GNN Training with Data and Hardware Aware Execution Planning."},{"key":"e_1_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.1145\/3419111.3421281"},{"volume-title":"Knapsack problems: algorithms and computer implementations","author":"Martello Silvano","key":"e_1_2_1_24_1","unstructured":"Silvano Martello and Paolo Toth. 1990. Knapsack problems: algorithms and computer implementations. John Wiley & Sons, Inc."},{"key":"e_1_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.1145\/3534678.3539038"},{"key":"e_1_2_1_26_1","doi-asserted-by":"publisher","DOI":"10.14778\/3476249.3476264"},{"key":"e_1_2_1_27_1","doi-asserted-by":"publisher","DOI":"10.1145\/170036.170081"},{"key":"e_1_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v34i04.5984"},{"key":"e_1_2_1_29_1","volume-title":"Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems","author":"Paszke Adam","year":"2019","unstructured":"Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems, Vol. 
32 (2019)."},{"key":"e_1_2_1_30_1","doi-asserted-by":"publisher","DOI":"10.14778\/3538598.3538614"},{"key":"e_1_2_1_31_1","volume-title":"Temporal Graph Networks for Deep Learning on Dynamic Graphs. In ICML 2020 Workshop on Graph Representation Learning.","author":"Rossi Emanuele","year":"2020","unstructured":"Emanuele Rossi, Ben Chamberlain, Fabrizio Frasca, Davide Eynard, Federico Monti, and Michael Bronstein. 2020. Temporal Graph Networks for Deep Learning on Dynamic Graphs. In ICML 2020 Workshop on Graph Representation Learning."},{"key":"e_1_2_1_32_1","volume-title":"International conference on learning representations.","author":"Trivedi Rakshit","year":"2019","unstructured":"Rakshit Trivedi, Mehrdad Farajtabar, Prasenjeet Biswal, and Hongyuan Zha. 2019. Dyrep: Learning representations over dynamic graphs. In International conference on learning representations."},{"key":"e_1_2_1_33_1","volume-title":"Graph Attention Networks. In International Conference on Learning Representations.","author":"Veli\u010dkovi\u0107 Petar","year":"2018","unstructured":"Petar Veli\u010dkovi\u0107, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Li\u00f2, and Yoshua Bengio. 2018. Graph Attention Networks. In International Conference on Learning Representations."},{"key":"e_1_2_1_34_1","volume-title":"ICLR workshop on representation learning on graphs and manifolds.","author":"Wang Minjie Yu","year":"2019","unstructured":"Minjie Yu Wang. 2019. Deep graph library: Towards efficient and scalable deep learning on graphs. In ICLR workshop on representation learning on graphs and manifolds."},{"key":"e_1_2_1_35_1","doi-asserted-by":"publisher","DOI":"10.1145\/3448016.3457564"},{"key":"e_1_2_1_36_1","volume-title":"International Conference on Learning Representations.","author":"Wang Yanbang","year":"2021","unstructured":"Yanbang Wang, Yen-Yu Chang, Yunyu Liu, Jure Leskovec, and Pan Li. 2021a. Inductive Representation Learning in Temporal Networks via Causal Anonymous Walks. 
In International Conference on Learning Representations."},{"key":"e_1_2_1_37_1","volume-title":"International Conference on Learning Representations.","author":"Xu Da","year":"2020","unstructured":"Da Xu, Chuanwei Ruan, Evren Korpeoglu, Sushant Kumar, and Kannan Achan. 2020. Inductive representation learning on temporal graphs. In International Conference on Learning Representations."},{"key":"e_1_2_1_38_1","doi-asserted-by":"publisher","DOI":"10.1145\/3492321.3519557"},{"key":"e_1_2_1_39_1","volume-title":"Feature-Oriented Sampling for Fast and Scalable GNN Training. In 2022 IEEE International Conference on Data Mining (ICDM). IEEE, 723--732","author":"Zhang Xin","year":"2022","unstructured":"Xin Zhang, Yanyan Shen, and Lei Chen. 2022. Feature-Oriented Sampling for Fast and Scalable GNN Training. In 2022 IEEE International Conference on Data Mining (ICDM). IEEE, 723--732."},{"key":"e_1_2_1_40_1","doi-asserted-by":"publisher","DOI":"10.1145\/3589311"},{"key":"e_1_2_1_41_1","doi-asserted-by":"publisher","DOI":"10.14778\/3514061.3514069"},{"key":"e_1_2_1_42_1","doi-asserted-by":"publisher","DOI":"10.14778\/3529337.3529342"},{"key":"e_1_2_1_43_1","doi-asserted-by":"publisher","DOI":"10.14778\/3352063.3352127"}],"container-title":["Proceedings of the ACM on Management of 
Data"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3654977","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3654977","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,8,21]],"date-time":"2025-08-21T14:36:43Z","timestamp":1755787003000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3654977"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,5,29]]},"references-count":43,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2024,5,29]]}},"alternative-id":["10.1145\/3654977"],"URL":"https:\/\/doi.org\/10.1145\/3654977","relation":{},"ISSN":["2836-6573"],"issn-type":[{"type":"electronic","value":"2836-6573"}],"subject":[],"published":{"date-parts":[[2024,5,29]]}}}