{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,31]],"date-time":"2026-03-31T19:01:25Z","timestamp":1774983685947,"version":"3.50.1"},"reference-count":96,"publisher":"Association for Computing Machinery (ACM)","issue":"1","license":[{"start":{"date-parts":[[2025,2,10]],"date-time":"2025-02-10T00:00:00Z","timestamp":1739145600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100006374","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["62472400 and 62072428"],"award-info":[{"award-number":["62472400 and 62072428"]}],"id":[{"id":"10.13039\/501100006374","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Proc. ACM Manag. Data"],"published-print":{"date-parts":[[2025,2,10]]},"abstract":"<jats:p>Cutting-edge platforms of graph neural networks (GNNs), such as DGL and PyG, harness the parallel processing power of GPUs to extract structural information from graph data, achieving state-of-the-art (SOTA) performance in fields such as recommendation systems, knowledge graphs, and bioinformatics. Despite the computational advantages provided by GPUs, these GNN platforms struggle with scalability challenges due to the colossal graphical structures processed and the limited memory capacities of GPUs. In response, this work introduces Capsule, a new out-of-core mechanism for large-scale GNN training. Unlike existing out-of-core GNN systems, which use main or secondary memory as operative memory and use CPU kernels during non-backpropagation computation, Capsule uses GPU memory and GPU kernels. By substantially leveraging the parallelization capabilities of GPUs, Capsule significantly enhances GNN training efficiency. 
In addition, Capsule can be smoothly integrated into mainstream open-source GNN frameworks, DGL and PyG, in a plug-and-play manner. Through a prototype implementation and comprehensive experiments on real datasets, we demonstrate that Capsule can achieve up to a 12.02\u00d7 improvement in runtime efficiency, while using only 22.24% of the main memory, compared to SOTA out-of-core GNN systems.<\/jats:p>","DOI":"10.1145\/3709669","type":"journal-article","created":{"date-parts":[[2025,2,11]],"date-time":"2025-02-11T15:45:06Z","timestamp":1739288706000},"page":"1-30","source":"Crossref","is-referenced-by-count":4,"title":["Capsule: An Out-of-Core Training Mechanism for Colossal GNNs"],"prefix":"10.1145","volume":"3","author":[{"ORCID":"https:\/\/orcid.org\/0009-0001-0572-5479","authenticated-orcid":false,"given":"Yongan","family":"Xiang","sequence":"first","affiliation":[{"name":"School of Computer Science and Technology, University of Science and Technology of China (USTC), Hefei, Anhui, China, &amp; Data Darkness Lab, Suzhou Institute for Advanced Research, USTC, Suzhou, Jiangsu, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6286-8679","authenticated-orcid":false,"given":"Zezhong","family":"Ding","sequence":"additional","affiliation":[{"name":"School of Artificial Intelligence and Data Science, USTC, Hefei, Anhui, China, &amp; Data Darkness Lab, Suzhou Institute for Advanced Research, USTC, Suzhou, Jiangsu, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0005-4033-3816","authenticated-orcid":false,"given":"Rui","family":"Guo","sequence":"additional","affiliation":[{"name":"School of Artificial Intelligence and Data Science, USTC, Hefei, Anhui, China, &amp; Data Darkness Lab, Suzhou Institute for Advanced Research, USTC, Suzhou, Jiangsu, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0001-5738-985X","authenticated-orcid":false,"given":"Shangyou","family":"Wang","sequence":"additional","affiliation":[{"name":"School of Artificial Intelligence and Data Science, USTC, 
Hefei, Anhui, China, &amp; Data Darkness Lab, Suzhou Institute for Advanced Research, USTC, Suzhou, Jiangsu, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5290-5408","authenticated-orcid":false,"given":"Xike","family":"Xie","sequence":"additional","affiliation":[{"name":"School of Biomedical Engineering, USTC, Suzhou, Jiangsu, China, &amp; Data Darkness Lab, Suzhou Institute for Advanced Research, USTC, Suzhou, Jiangsu, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6881-4444","authenticated-orcid":false,"given":"S. Kevin","family":"Zhou","sequence":"additional","affiliation":[{"name":"School of Biomedical Engineering, USTC, Suzhou, Jiangsu, China, &amp; MIRACLE Center, Suzhou Institute for Advanced Research, USTC, Suzhou, Jiangsu, China"}]}],"member":"320","published-online":{"date-parts":[[2025,2,11]]},"reference":[{"key":"e_1_2_1_1_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v38i10.28948"},{"key":"e_1_2_1_2_1","unstructured":"Amazon. 2024. Best Sellers in Computer Graphics Cards. https:\/\/www.amazon.com\/Best-Sellers-Computer-Graphics-Cards\/zgbs\/pc\/284822"},{"key":"e_1_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.1109\/TPDS.2021.3065737"},{"key":"e_1_2_1_4_1","volume-title":"Layer-Neighbor Sampling - Defusing Neighborhood Explosion in GNNs. In Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems, NeurIPS 2023","author":"Fatih Muhammed","year":"2023","unstructured":"Muhammed Fatih Balin and \u00dcmit V. \u00c7ataly\u00fcrek. 2023. Layer-Neighbor Sampling - Defusing Neighborhood Explosion in GNNs. In Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023. 
https:\/\/papers.nips.cc\/paper_files\/paper\/2023\/hash\/51f9036d5e7ae822da8f6d4adda1fb39-Abstract-Conference.html"},{"key":"e_1_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1145\/3160017"},{"key":"e_1_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.1145\/1963405.1963488"},{"key":"e_1_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.1145\/988672.988752"},{"key":"e_1_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.socnet.2007.04.002"},{"key":"e_1_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.1145\/2623330.2623660"},{"key":"e_1_2_1_10_1","doi-asserted-by":"publisher","DOI":"10.1080\/0022250X.2001.9990249"},{"key":"e_1_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.1016\/S0169--7552(98)00110-X"},{"key":"e_1_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1145\/3572848.3577528"},{"key":"e_1_2_1_13_1","volume-title":"International Conference on Learning Representations. https:\/\/openreview.net\/forum?id=rytstxWAW","author":"Chen Jie","year":"2018","unstructured":"Jie Chen, Tengfei Ma, and Cao Xiao. 2018a. FastGCN: Fast Learning with Graph Convolutional Networks via Importance Sampling. In International Conference on Learning Representations. https:\/\/openreview.net\/forum?id=rytstxWAW"},{"key":"e_1_2_1_14_1","volume-title":"Proceedings of the 35th International Conference on Machine Learning (Proceedings of Machine Learning Research","volume":"950","author":"Chen Jianfei","year":"2018","unstructured":"Jianfei Chen, Jun Zhu, and Le Song. 2018b. Stochastic Training of Graph Convolutional Networks with Variance Reduction. In Proceedings of the 35th International Conference on Machine Learning (Proceedings of Machine Learning Research, Vol. 80). PMLR, 942--950. 
https:\/\/proceedings.mlr.press\/v80\/chen18p.html"},{"key":"e_1_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.1145\/3292500.3330925"},{"key":"e_1_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.1145\/3394486.3403192"},{"key":"e_1_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1145\/3394486.3403192"},{"key":"e_1_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.5555\/3157382.3157527"},{"key":"e_1_2_1_19_1","doi-asserted-by":"publisher","DOI":"10.1109\/TC.2024.3475568"},{"key":"e_1_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.1145\/3654965"},{"key":"e_1_2_1_21_1","article-title":"Improving Lipschitz-Constrained Neural Networks by Learning Activation Functions","volume":"25","author":"Ducotterd Stanislas","year":"2024","unstructured":"Stanislas Ducotterd, Alexis Goujon, Pakshal Bohra, Dimitris Perdios, Sebastian Neumayer, and Michael Unser. 2024. Improving Lipschitz-Constrained Neural Networks by Learning Activation Functions. J. Mach. Learn. Res., Vol. 25 (2024), 65:1--65:30. https:\/\/jmlr.org\/papers\/v25\/22--1347.html","journal-title":"J. Mach. Learn. Res."},{"key":"e_1_2_1_22_1","unstructured":"Leo Egghe et al. 2006. An improvement of the h-index: The g-index. ISSI newsletter Vol. 2 1 (2006) 8--9. http:\/\/www2.stat-athens.aueb.gr\/ jpan\/Egghe-ISSI-2006.pdf"},{"key":"e_1_2_1_23_1","volume-title":"Fast graph representation learning with PyTorch Geometric. (2019). arxiv","author":"Fey Matthias","year":"1903","unstructured":"Matthias Fey and Jan Eric Lenssen. 2019. Fast graph representation learning with PyTorch Geometric. (2019). arxiv: 1903.02428 [cs.LG] http:\/\/arxiv.org\/abs\/1903.02428"},{"key":"e_1_2_1_24_1","volume-title":"15th USENIX Symposium on Operating Systems Design and Implementation (OSDI 21)","author":"Gandhi Swapnil","year":"2021","unstructured":"Swapnil Gandhi and Anand Padmanabha Iyer. 2021. P3: Distributed Deep Graph Learning at Scale. In 15th USENIX Symposium on Operating Systems Design and Implementation (OSDI 21). 
USENIX Association, 551--568. https:\/\/www.usenix.org\/conference\/osdi21\/presentation\/gandhi"},{"key":"e_1_2_1_25_1","volume-title":"Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics (Proceedings of Machine Learning Research","volume":"323","author":"Glorot Xavier","year":"2011","unstructured":"Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Deep Sparse Rectifier Neural Networks. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics (Proceedings of Machine Learning Research, Vol. 15). PMLR, Fort Lauderdale, FL, USA, 315--323. https:\/\/proceedings.mlr.press\/v15\/glorot11a.html"},{"key":"e_1_2_1_26_1","volume-title":"PowerGraph: Distributed Graph-Parallel Computation on Natural Graphs. In 10th USENIX Symposium on Operating Systems Design and Implementation (OSDI 12)","author":"Gonzalez Joseph E.","year":"2012","unstructured":"Joseph E. Gonzalez, Yucheng Low, Haijie Gu, Danny Bickson, and Carlos Guestrin. 2012. PowerGraph: Distributed Graph-Parallel Computation on Natural Graphs. In 10th USENIX Symposium on Operating Systems Design and Implementation (OSDI 12). USENIX Association, Hollywood, CA, 17--30. https:\/\/www.usenix.org\/conference\/osdi12\/technical-sessions\/presentation\/gonzalez"},{"key":"e_1_2_1_27_1","volume-title":"Proceedings of the 36th International Conference on Machine Learning (Proceedings of Machine Learning Research","volume":"5209","author":"Gower Robert Mansel","year":"2019","unstructured":"Robert Mansel Gower, Nicolas Loizou, Xun Qian, Alibek Sailanbayev, Egor Shulgin, and Peter Richt\u00e1rik. 2019. SGD: General Analysis and Improved Rates. In Proceedings of the 36th International Conference on Machine Learning (Proceedings of Machine Learning Research, Vol. 97). PMLR, 5200--5209. https:\/\/proceedings.mlr.press\/v97\/qian19b.html"},{"key":"e_1_2_1_28_1","volume-title":"Inductive Representation Learning on Large Graphs. 
In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems, December 4--9, 2017","author":"Hamilton William L.","year":"2017","unstructured":"William L. Hamilton, Zhitao Ying, and Jure Leskovec. 2017. Inductive Representation Learning on Large Graphs. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems, December 4--9, 2017, Long Beach, CA, USA. 1024--1034. https:\/\/proceedings.neurips.cc\/paper\/2017\/hash\/5dd9db5e033da9c6fb5ba83c7a7ebea9-Abstract.html"},{"key":"e_1_2_1_29_1","doi-asserted-by":"publisher","DOI":"10.14778\/3358701.3358706"},{"key":"e_1_2_1_30_1","doi-asserted-by":"publisher","DOI":"10.1145\/3539618.3591647"},{"key":"e_1_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.1145\/3397271.3401063"},{"key":"e_1_2_1_32_1","volume-title":"Open Graph Benchmark: Datasets for Machine Learning on Graphs. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems, NeurIPS 2020","author":"Hu Weihua","year":"2020","unstructured":"Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. 2020. Open Graph Benchmark: Datasets for Machine Learning on Graphs. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems, NeurIPS 2020, December 6--12, 2020, virtual. https:\/\/proceedings.neurips.cc\/paper\/2020\/hash\/fb60d411a5c5b72b2e7d3527cfc84fd0-Abstract.html"},{"key":"e_1_2_1_33_1","doi-asserted-by":"publisher","DOI":"10.1109\/TPDS.2018.2890515"},{"key":"e_1_2_1_34_1","doi-asserted-by":"publisher","DOI":"10.1145\/3539618.3591720"},{"key":"e_1_2_1_35_1","doi-asserted-by":"publisher","DOI":"10.1145\/3534540.3534710"},{"key":"e_1_2_1_36_1","volume-title":"Adaptive Sampling Towards Fast Graph Representation Learning. 
In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems, NeurIPS 2018","author":"Zhang Tong","year":"2018","unstructured":"Wen-bing Huang, Tong Zhang, Yu Rong, and Junzhou Huang. 2018. Adaptive Sampling Towards Fast Graph Representation Learning. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems, NeurIPS 2018, December 3--8, 2018, Montr\u00e9al, Canada. 4563--4572. https:\/\/proceedings.neurips.cc\/paper\/2018\/hash\/01eee509ee2f68dc6014898c309e86bf-Abstract.html"},{"key":"e_1_2_1_37_1","doi-asserted-by":"publisher","DOI":"10.1038\/s41598-022--12201--9"},{"key":"e_1_2_1_38_1","volume-title":"Proceedings of Machine Learning and Systems 2020","author":"Jia Zhihao","year":"2020","unstructured":"Zhihao Jia, Sina Lin, Mingyu Gao, Matei Zaharia, and Alex Aiken. 2020a. Improving the Accuracy, Scalability, and Performance of Graph Neural Networks with Roc. In Proceedings of Machine Learning and Systems 2020, MLSys 2020, Austin, TX, USA, March 2--4, 2020. mlsys.org. https:\/\/proceedings.mlsys.org\/paper_files\/paper\/2020\/hash\/91fc23ceccb664ebb0cf4257e1ba9c51-Abstract.html"},{"key":"e_1_2_1_39_1","first-page":"187","article-title":"Improving the accuracy, scalability, and performance of graph neural networks with roc","volume":"2","author":"Jia Zhihao","year":"2020","unstructured":"Zhihao Jia, Sina Lin, Mingyu Gao, Matei Zaharia, and Alex Aiken. 2020b. Improving the accuracy, scalability, and performance of graph neural networks with roc. Proceedings of Machine Learning and Systems , Vol. 2 (2020), 187--198. 
https:\/\/proceedings.mlsys.org\/paper_files\/paper\/2020\/hash\/91fc23ceccb664ebb0cf4257e1ba9c51-Abstract.html","journal-title":"Proceedings of Machine Learning and Systems"},{"key":"e_1_2_1_40_1","doi-asserted-by":"publisher","DOI":"10.1016\/J.JCSS.2015.06.003"},{"key":"e_1_2_1_41_1","doi-asserted-by":"publisher","DOI":"10.1137\/S1064827595287997"},{"key":"e_1_2_1_42_1","volume-title":"Kipf and Max Welling","author":"Thomas","year":"2017","unstructured":"Thomas N. Kipf and Max Welling. 2017. Semi-Supervised Classification with Graph Convolutional Networks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24--26, 2017, Conference Track Proceedings. OpenReview.net. https:\/\/openreview.net\/forum?id=SJU4ayYgl"},{"key":"e_1_2_1_43_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICDE53745.2022.00049"},{"key":"e_1_2_1_44_1","unstructured":"Jure Leskovec and Andrej Krevl. 2014. SNAP Datasets: Stanford Large Network Dataset Collection. http:\/\/snap.stanford.edu\/data."},{"key":"e_1_2_1_45_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICDE55515.2023.00379"},{"key":"e_1_2_1_46_1","doi-asserted-by":"publisher","DOI":"10.1145\/3419111.3421281"},{"key":"e_1_2_1_47_1","unstructured":"Renjie Liu Yichuan Wang Xiao Yan Zhenkun Cai Minjie Wang Haitian Jiang Bo Tang and Jinyang Li. 2024. DiskGNN: Bridging I\/O Efficiency and Model Accuracy for Out-of-Core GNN Training. (2024). arxiv: 2405.05231 [cs.LG] https:\/\/arxiv.org\/abs\/2405.05231"},{"key":"e_1_2_1_48_1","volume-title":"20th USENIX Symposium on Networked Systems Design and Implementation (NSDI 23)","author":"Liu Tianfeng","year":"2023","unstructured":"Tianfeng Liu, Yangrui Chen, Dan Li, Chuan Wu, Yibo Zhu, Jun He, Yanghua Peng, Hongzheng Chen, Hongzhi Chen, and Chuanxiong Guo. 2023. BGL: GPU-Efficient GNN Training by Optimizing Graph Data I\/O and Preprocessing. In 20th USENIX Symposium on Networked Systems Design and Implementation (NSDI 23). 
USENIX Association, Boston, MA, 103--118. https:\/\/www.usenix.org\/conference\/nsdi23\/presentation\/liu-tianfeng"},{"key":"e_1_2_1_49_1","volume-title":"NeuGraph: Parallel Deep Neural Network Computation on Large Graphs. In 2019 USENIX Annual Technical Conference (USENIX ATC 19)","author":"Ma Lingxiao","year":"2019","unstructured":"Lingxiao Ma, Zhi Yang, Youshan Miao, Jilong Xue, Ming Wu, Lidong Zhou, and Yafei Dai. 2019. NeuGraph: Parallel Deep Neural Network Computation on Large Graphs. In 2019 USENIX Annual Technical Conference (USENIX ATC 19). USENIX Association, Renton, WA, 443--458. https:\/\/www.usenix.org\/conference\/atc19\/presentation\/ma"},{"key":"e_1_2_1_50_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICDCS.2018.00072"},{"key":"e_1_2_1_51_1","doi-asserted-by":"publisher","DOI":"10.1145\/3448016.3457300"},{"key":"e_1_2_1_52_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICDE53745.2022.00242"},{"key":"e_1_2_1_53_1","doi-asserted-by":"publisher","DOI":"10.1145\/3458817.3480856"},{"key":"e_1_2_1_54_1","volume-title":"Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5--8, 2013","author":"Mikolov Tom\u00e1s","year":"2013","unstructured":"Tom\u00e1s Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. Distributed Representations of Words and Phrases and their Compositionality. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5--8, 2013, Lake Tahoe, Nevada, United States,, Christopher J. C. Burges, L\u00e9on Bottou, Zoubin Ghahramani, and Kilian Q. Weinberger (Eds.). 3111--3119. 
https:\/\/proceedings.neurips.cc\/paper\/2013\/hash\/9aa42b31882ec039965f3c4923ce901b-Abstract.html"},{"key":"e_1_2_1_55_1","volume-title":"15th USENIX Symposium on Operating Systems Design and Implementation (OSDI 21)","author":"Mohoney Jason","year":"2021","unstructured":"Jason Mohoney, Roger Waleffe, Henry Xu, Theodoros Rekatsinas, and Shivaram Venkataraman. 2021. Marius: Learning Massive Graph Embeddings on a Single Machine. In 15th USENIX Symposium on Operating Systems Design and Implementation (OSDI 21). USENIX Association, 533--549. https:\/\/www.usenix.org\/conference\/osdi21\/presentation\/mohoney"},{"key":"e_1_2_1_56_1","doi-asserted-by":"publisher","DOI":"10.1145\/2739008"},{"key":"e_1_2_1_57_1","unstructured":"NVIDIA. 2016. GeForce GTX 1050 Ti. https:\/\/www.nvidia.com\/en-gb\/geforce\/graphics-cards\/geforce-gtx-1050-ti\/specifications\/. Accessed: 2024-07--12."},{"key":"e_1_2_1_58_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.jpdc.2012.01.004"},{"key":"e_1_2_1_59_1","doi-asserted-by":"publisher","DOI":"10.1007\/978--3--540--69311--621"},{"key":"e_1_2_1_60_1","volume-title":"Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1, NeurIPS Datasets and Benchmarks 2021","author":"Paetzold Johannes C.","year":"2021","unstructured":"Johannes C. Paetzold, Julian McGinnis, Suprosanna Shit, Ivan Ezhov, Paul B\u00fcschl, Chinmay Prabhakar, Anjany Sekuboyina, Mihail I. Todorov, Georgios Kaissis, Ali Ert\u00fcrk, Stephan G\u00fcnnemann, and Bjoern H. Menze. 2021. Whole Brain Vessel Graphs: A Dataset and Benchmark for Graph Learning and Neuroscience. In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1, NeurIPS Datasets and Benchmarks 2021, December 2021, virtual, Joaquin Vanschoren and Sai-Kit Yeung (Eds.). 
https:\/\/datasets-benchmarks-proceedings.neurips.cc\/paper\/2021\/hash\/c9f0f895fb98ab9159f51fd0297e236d-Abstract-round2.html"},{"key":"e_1_2_1_61_1","doi-asserted-by":"publisher","DOI":"10.1145\/3394486.3403280"},{"key":"e_1_2_1_62_1","doi-asserted-by":"publisher","DOI":"10.14778\/3551793.3551819"},{"key":"e_1_2_1_63_1","volume-title":"High-Performance Deep Learning Library. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems, NeurIPS 2019","author":"Paszke Adam","year":"2019","unstructured":"Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas K\u00f6pf, Edward Z. Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems, NeurIPS 2019, December 8--14, 2019, Vancouver, BC, Canada. 8024--8035. 
https:\/\/proceedings.neurips.cc\/paper\/2019\/hash\/bdbca288fee7f92f2bfa9f7012727740-Abstract.html"},{"key":"e_1_2_1_64_1","doi-asserted-by":"publisher","DOI":"10.1145\/2806416.2806424"},{"key":"e_1_2_1_65_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICDE55515.2023.00083"},{"key":"e_1_2_1_66_1","doi-asserted-by":"publisher","DOI":"10.14778\/2536258.2536264"},{"key":"e_1_2_1_67_1","doi-asserted-by":"publisher","DOI":"10.1038\/323533a0"},{"key":"e_1_2_1_68_1","doi-asserted-by":"publisher","DOI":"10.1109\/FOCS.2007.56"},{"key":"e_1_2_1_69_1","doi-asserted-by":"publisher","DOI":"10.48550\/ARXIV.2310.00837"},{"key":"e_1_2_1_70_1","doi-asserted-by":"publisher","DOI":"10.1186\/s40537-019-0257--5"},{"key":"e_1_2_1_71_1","doi-asserted-by":"publisher","DOI":"10.1038\/s41592-020-0792--1"},{"key":"e_1_2_1_72_1","doi-asserted-by":"publisher","DOI":"10.1016\/J.ARTINT.2016.08.001"},{"key":"e_1_2_1_73_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.hm.2020.04.003"},{"key":"e_1_2_1_74_1","volume-title":"6th International Conference on Learning Representations, ICLR","author":"Velickovic Petar","year":"2018","unstructured":"Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Li\u00f2, and Yoshua Bengio. 2018. Graph Attention Networks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net. https:\/\/openreview.net\/forum?id=rJXMpikCZ"},{"key":"e_1_2_1_75_1","doi-asserted-by":"publisher","DOI":"10.1145\/3552326.3567501"},{"key":"e_1_2_1_76_1","volume-title":"Proceedings of Machine Learning and Systems, MLSys 2022","author":"Wan Cheng","year":"2022","unstructured":"Cheng Wan, Youjie Li, Ang Li, Nam Sung Kim, and Yingyan Lin. 2022. BNS-GCN: Efficient Full-Graph Training of Graph Convolutional Networks with Partition-Parallelism and Random Boundary Node Sampling. 
In Proceedings of Machine Learning and Systems, MLSys 2022, Santa Clara, CA, USA, August 29 - September 1, 2022. mlsys.org. https:\/\/proceedings.mlsys.org\/paper_files\/paper\/2022\/hash\/676638b91bc90529e09b22e58abb01d6-Abstract.html"},{"key":"e_1_2_1_77_1","doi-asserted-by":"publisher","DOI":"10.1145\/3589288"},{"key":"e_1_2_1_78_1","doi-asserted-by":"publisher","DOI":"10.1162\/QSS_A_00021"},{"key":"e_1_2_1_79_1","volume-title":"Highly-Performant Package for Graph Neural Networks.","author":"Wang Minjie","year":"2019","unstructured":"Minjie Wang, Da Zheng, Zihao Ye, Quan Gan, Mufei Li, Xiang Song, Jinjing Zhou, Chao Ma, Lingfan Yu, Yu Gai, Tianjun Xiao, Tong He, George Karypis, Jinyang Li, and Zheng Zhang. 2019. Deep Graph Library: A Graph-Centric, Highly-Performant Package for Graph Neural Networks. (2019). arxiv: 1909.01315 [cs.LG] http:\/\/arxiv.org\/abs\/1909.01315"},{"key":"e_1_2_1_80_1","doi-asserted-by":"publisher","DOI":"10.1145\/3539618.3591634"},{"key":"e_1_2_1_81_1","doi-asserted-by":"publisher","DOI":"10.1109\/IJCNN.1992.287150"},{"key":"e_1_2_1_82_1","doi-asserted-by":"publisher","DOI":"10.1109\/TC.2023.3257507"},{"key":"e_1_2_1_83_1","volume-title":"Capsule: An Out-of-Core Training Mechanism for Colossal GNNs (Supplementary Materials). Technical Report. GitHub Repository. https:\/\/github.com\/USTC-DataDarknessLab\/Capsule\/blob\/master\/supp.pdf","author":"Xiang Yongan","year":"2025","unstructured":"Yongan Xiang, Zezhong Ding, Rui Guo, Shangyou Wang, Xike Xie, and S. Kevin Zhou. 2025. Capsule: An Out-of-Core Training Mechanism for Colossal GNNs (Supplementary Materials). Technical Report. GitHub Repository. https:\/\/github.com\/USTC-DataDarknessLab\/Capsule\/blob\/master\/supp.pdf"},{"key":"e_1_2_1_84_1","volume-title":"Proceedings of the 27th International Conference on Neural Information Processing Systems -","volume":"1","author":"Xie Cong","year":"2014","unstructured":"Cong Xie, Ling Yan, Wu-Jun Li, and Zhihua Zhang. 2014. 
Distributed power-law graph computing: theoretical and empirical analysis. In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 1 (Montreal, Canada) (NIPS'14). MIT Press, Cambridge, MA, USA, 1673--1681."},{"key":"e_1_2_1_85_1","volume-title":"7th International Conference on Learning Representations, ICLR 2019","author":"Xu Keyulu","year":"2019","unstructured":"Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. 2019. How Powerful are Graph Neural Networks?. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6--9, 2019. OpenReview.net. https:\/\/openreview.net\/forum?id=ryGs6iA5Km"},{"key":"e_1_2_1_86_1","doi-asserted-by":"publisher","DOI":"10.1145\/2350190.2350193"},{"key":"e_1_2_1_87_1","doi-asserted-by":"publisher","DOI":"10.1007\/978--3--319--49178--336"},{"key":"e_1_2_1_88_1","doi-asserted-by":"publisher","DOI":"10.1145\/3219819.3219890"},{"key":"e_1_2_1_89_1","volume-title":"The Eleventh International Conference on Learning Representations, ICLR 2023","author":"Zaidi Sheheryar","year":"2023","unstructured":"Sheheryar Zaidi, Michael Schaarschmidt, James Martens, Hyunjik Kim, Yee Whye Teh, Alvaro Sanchez-Gonzalez, Peter W. Battaglia, Razvan Pascanu, and Jonathan Godwin. 2023. Pre-training via Denoising for Molecular Property Prediction. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1--5, 2023. OpenReview.net. https:\/\/openreview.net\/forum?id=tYIMtogyee"},{"key":"e_1_2_1_90_1","volume-title":"GraphSAINT: Graph Sampling Based Inductive Learning Method. In 8th International Conference on Learning Representations, ICLR 2020","author":"Zeng Hanqing","year":"2020","unstructured":"Hanqing Zeng, Hongkuan Zhou, Ajitesh Srivastava, Rajgopal Kannan, and Viktor K. Prasanna. 2020. GraphSAINT: Graph Sampling Based Inductive Learning Method. 
In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26--30, 2020. OpenReview.net. https:\/\/openreview.net\/forum?id=BJe8pkHFwS"},{"key":"e_1_2_1_91_1","doi-asserted-by":"publisher","DOI":"10.1145\/3097983.3098033"},{"key":"e_1_2_1_92_1","doi-asserted-by":"publisher","DOI":"10.1145\/3580305.3599404"},{"key":"e_1_2_1_93_1","doi-asserted-by":"publisher","DOI":"10.1109\/IA351965.2020.00011"},{"key":"e_1_2_1_94_1","doi-asserted-by":"publisher","DOI":"10.14778\/3352063.3352127"},{"key":"e_1_2_1_95_1","volume-title":"Layer-Dependent Importance Sampling for Training Deep and Large Graph Convolutional Networks. In Annual Conference on Neural Information Processing Systems (NeurIPS 2019","author":"Zou Difan","year":"2019","unstructured":"Difan Zou, Ziniu Hu, Yewen Wang, Song Jiang, Yizhou Sun, and Quanquan Gu. 2019. Layer-Dependent Importance Sampling for Training Deep and Large Graph Convolutional Networks. In Annual Conference on Neural Information Processing Systems (NeurIPS 2019). Vancouver, BC, Canada, 11247--11256. 
https:\/\/proceedings.neurips.cc\/paper\/2019\/hash\/91ba4a4478a66bee9812b0804b6f9d1b-Abstract.html"},{"key":"e_1_2_1_96_1","doi-asserted-by":"publisher","DOI":"10.1145\/3533702.3534920"}],"container-title":["Proceedings of the ACM on Management of Data"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3709669","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3709669","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,3,31]],"date-time":"2026-03-31T18:19:10Z","timestamp":1774981150000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3709669"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,2,10]]},"references-count":96,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2025,2,10]]}},"alternative-id":["10.1145\/3709669"],"URL":"https:\/\/doi.org\/10.1145\/3709669","relation":{},"ISSN":["2836-6573"],"issn-type":[{"value":"2836-6573","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,2,10]]}}}