{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,3]],"date-time":"2026-04-03T15:46:14Z","timestamp":1775231174938,"version":"3.50.1"},"reference-count":56,"publisher":"Association for Computing Machinery (ACM)","issue":"11","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Proc. VLDB Endow."],"published-print":{"date-parts":[[2022,7]]},"abstract":"<jats:p>\n            Graph Neural Networks (GNNs) are receiving a spotlight as a powerful tool that can effectively serve various inference tasks on graph structured data. As the size of real-world graphs continues to scale, the GNN training system faces a scalability challenge. Distributed training is a popular approach to address this challenge by scaling out CPU nodes. However, not much attention has been paid to\n            <jats:italic>disk-based<\/jats:italic>\n            GNN training, which can scale up the single-node system in a more cost-effective manner by leveraging high-performance storage devices like NVMe SSDs. We observe that the data movement between the main memory and the disk is the primary bottleneck in the SSD-based training system, and that the conventional GNN training pipeline is sub-optimal without taking this overhead into account. Thus, we propose Ginex, the first SSD-based GNN training system that can process billion-scale graph datasets on a single machine. Inspired by the inspector-executor execution model in compiler optimization, Ginex restructures the GNN training pipeline by separating\n            <jats:italic>sample<\/jats:italic>\n            and\n            <jats:italic>gather<\/jats:italic>\n            stages. This separation enables Ginex to realize a provably optimal replacement algorithm, known as\n            <jats:italic>Belady's algorithm<\/jats:italic>\n            , for caching feature vectors in memory, which account for the dominant portion of I\/O accesses. According to our evaluation with four billion-scale graph datasets and two GNN models, Ginex achieves 2.11X higher training throughput on average (2.67X at maximum) than the SSD-extended PyTorch Geometric.\n          <\/jats:p>","DOI":"10.14778\/3551793.3551819","type":"journal-article","created":{"date-parts":[[2022,9,29]],"date-time":"2022-09-29T22:25:03Z","timestamp":1664490303000},"page":"2626-2639","source":"Crossref","is-referenced-by-count":37,"title":["Ginex"],"prefix":"10.14778","volume":"15","author":[{"given":"Yeonhong","family":"Park","sequence":"first","affiliation":[{"name":"Seoul National University, Seoul, Korea"}]},{"given":"Sunhong","family":"Min","sequence":"additional","affiliation":[{"name":"Seoul National University, Seoul, Korea"}]},{"given":"Jae W.","family":"Lee","sequence":"additional","affiliation":[{"name":"Seoul National University, Seoul, Korea"}]}],"member":"320","published-online":{"date-parts":[[2022,9,29]]},"reference":[{"key":"e_1_2_1_1_1","volume-title":"FlashNeuron: SSD-Enabled Large-Batch Training of Very Deep Neural Networks. In 19th USENIX Conference on File and Storage Technologies (FAST 21)","author":"Bae Jonghyun","unstructured":"Jonghyun Bae , Jongsung Lee , Yunho Jin , Sam Son , Shine Kim , Hakbeom Jang , Tae Jun Ham , and Jae W. Lee . 2021 . FlashNeuron: SSD-Enabled Large-Batch Training of Very Deep Neural Networks. In 19th USENIX Conference on File and Storage Technologies (FAST 21) . USENIX Association, 387--401. 
https:\/\/www.usenix.org\/conference\/fast21\/presentation\/bae Jonghyun Bae, Jongsung Lee, Yunho Jin, Sam Son, Shine Kim, Hakbeom Jang, Tae Jun Ham, and Jae W. Lee. 2021. FlashNeuron: SSD-Enabled Large-Batch Training of Very Deep Neural Networks. In 19th USENIX Conference on File and Storage Technologies (FAST 21). USENIX Association, 387--401. https:\/\/www.usenix.org\/conference\/fast21\/presentation\/bae"},{"key":"e_1_2_1_2_1","doi-asserted-by":"publisher","DOI":"10.1109\/HPCA51647.2021.00062"},{"key":"e_1_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.1147\/sj.52.0078"},{"key":"e_1_2_1_4_1","volume-title":"Scalable realistic recommendation datasets through fractal expansions. arXiv preprint arXiv:1901.08910","author":"Belletti Francois","year":"2019","unstructured":"Francois Belletti , Karthik Lakshmanan , Walid Krichene , Yi-Fan Chen , and John Anderson . 2019. Scalable realistic recommendation datasets through fractal expansions. arXiv preprint arXiv:1901.08910 ( 2019 ). Francois Belletti, Karthik Lakshmanan, Walid Krichene, Yi-Fan Chen, and John Anderson. 2019. Scalable realistic recommendation datasets through fractal expansions. arXiv preprint arXiv:1901.08910 (2019)."},{"key":"e_1_2_1_5_1","volume-title":"International Conference on Learning Representations.","author":"Chen Jie","year":"2018","unstructured":"Jie Chen , Tengfei Ma , and Cao Xiao . 2018 . FastGCN: Fast Learning with Graph Convolutional Networks via Importance Sampling . In International Conference on Learning Representations. Jie Chen, Tengfei Ma, and Cao Xiao. 2018. FastGCN: Fast Learning with Graph Convolutional Networks via Importance Sampling. In International Conference on Learning Representations."},{"key":"e_1_2_1_6_1","volume-title":"Stochastic Training of Graph Convolutional Networks with Variance Reduction. In International Conference on Machine Learning. 941--949","author":"Chen Jianfei","year":"2018","unstructured":"Jianfei Chen , Jun Zhu , and Le Song . 2018 . Stochastic Training of Graph Convolutional Networks with Variance Reduction. In International Conference on Machine Learning. 941--949 . Jianfei Chen, Jun Zhu, and Le Song. 2018. Stochastic Training of Graph Convolutional Networks with Variance Reduction. In International Conference on Machine Learning. 941--949."},{"key":"e_1_2_1_7_1","doi-asserted-by":"crossref","unstructured":"Weilin Cong Rana Forsati Mahmut Kandemir and Mehrdad Mahdavi. 2020. Minimal Variance Sampling with Provable Guarantees for Fast Training of Graph Neural Networks. 1393--1403.  Weilin Cong Rana Forsati Mahmut Kandemir and Mehrdad Mahdavi. 2020. Minimal Variance Sampling with Provable Guarantees for Fast Training of Graph Neural Networks. 1393--1403.","DOI":"10.1145\/3394486.3403192"},{"key":"e_1_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.5555\/3157382.3157527"},{"key":"e_1_2_1_9_1","volume-title":"Domain-Specialized Cache Management for Graph Analytics. In International Symposium on High-Performance Computer Architecture (HPCA).","author":"Faldu Priyank","year":"2020","unstructured":"Priyank Faldu , Jeff Diamond , and Boris Grot . 2020 . Domain-Specialized Cache Management for Graph Analytics. In International Symposium on High-Performance Computer Architecture (HPCA). Priyank Faldu, Jeff Diamond, and Boris Grot. 2020. Domain-Specialized Cache Management for Graph Analytics. In International Symposium on High-Performance Computer Architecture (HPCA)."},{"key":"e_1_2_1_10_1","volume-title":"Fast Graph Representation Learning with PyTorch Geometric. 
In ICLR Workshop on Representation Learning on Graphs and Manifolds.","author":"Fey Matthias","unstructured":"Matthias Fey and Jan E. Lenssen . 2019 . Fast Graph Representation Learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds. Matthias Fey and Jan E. Lenssen. 2019. Fast Graph Representation Learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds."},{"key":"e_1_2_1_11_1","volume-title":"Proceedings of the 15th USENIX Symposium on Operating Systems Design and Implementation. USENIX Association, 551--568","author":"Gandhi Swapnil","year":"2021","unstructured":"Swapnil Gandhi and Anand Padmanabha Iyer . 2021 . P3: Distributed Deep Graph Learning at Scale . In Proceedings of the 15th USENIX Symposium on Operating Systems Design and Implementation. USENIX Association, 551--568 . Swapnil Gandhi and Anand Padmanabha Iyer. 2021. P3: Distributed Deep Graph Learning at Scale. In Proceedings of the 15th USENIX Symposium on Operating Systems Design and Implementation. USENIX Association, 551--568."},{"key":"e_1_2_1_12_1","unstructured":"Will Hamilton Zhitao Ying and Jure Leskovec. 2017. Inductive Representation Learning on Large Graphs. In Advances in Neural Information Processing Systems I. Guyon U. V. Luxburg S. Bengio H. Wallach R. Fergus S. Vishwanathan and R. Garnett (Eds.).  Will Hamilton Zhitao Ying and Jure Leskovec. 2017. Inductive Representation Learning on Large Graphs. In Advances in Neural Information Processing Systems I. Guyon U. V. Luxburg S. Bengio H. Wallach R. Fergus S. Vishwanathan and R. Garnett (Eds.)."},{"key":"e_1_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.1038\/s41586-020-2649-2"},{"key":"e_1_2_1_14_1","unstructured":"Weihua Hu Matthias Fey Marinka Zitnik Yuxiao Dong Hongyu Ren Bowen Liu Michele Catasta and Jure Leskovec. 2021. Open Graph Benchmark: Datasets for Machine Learning on Graphs. arXiv:2005.00687 [cs.LG]  Weihua Hu Matthias Fey Marinka Zitnik Yuxiao Dong Hongyu Ren Bowen Liu Michele Catasta and Jure Leskovec. 2021. Open Graph Benchmark: Datasets for Machine Learning on Graphs. arXiv:2005.00687 [cs.LG]"},{"key":"e_1_2_1_15_1","unstructured":"Yaochen Hu Amit Levi Ishaan Kumar Yingxue Zhang and Mark Coates. 2021. On Batch-size Selection for Stochastic Training for Graph Neural Networks. https:\/\/openreview.net\/forum?id=HeEzgm-f4g1  Yaochen Hu Amit Levi Ishaan Kumar Yingxue Zhang and Mark Coates. 2021. On Batch-size Selection for Stochastic Training for Graph Neural Networks. https:\/\/openreview.net\/forum?id=HeEzgm-f4g1"},{"key":"e_1_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.1109\/ISCA.2016.17"},{"key":"e_1_2_1_17_1","volume-title":"Proceedings of Machine Learning and Systems, I. Dhillon, D. Papailiopoulos, and V. Sze (Eds.)","volume":"2","author":"Jia Zhihao","year":"2020","unstructured":"Zhihao Jia , Sina Lin , Mingyu Gao , Matei Zaharia , and Alex Aiken . 2020 . Improving the Accuracy, Scalability, and Performance of Graph Neural Networks with Roc . In Proceedings of Machine Learning and Systems, I. Dhillon, D. Papailiopoulos, and V. Sze (Eds.) , Vol. 2 . 187--198. https:\/\/proceedings.mlsys.org\/paper\/2020\/file\/fe9fc289c3ff0af142b6d3bead98a923-Paper.pdf Zhihao Jia, Sina Lin, Mingyu Gao, Matei Zaharia, and Alex Aiken. 2020. Improving the Accuracy, Scalability, and Performance of Graph Neural Networks with Roc. In Proceedings of Machine Learning and Systems, I. Dhillon, D. Papailiopoulos, and V. Sze (Eds.), Vol. 2. 187--198. 
https:\/\/proceedings.mlsys.org\/paper\/2020\/file\/fe9fc289c3ff0af142b6d3bead98a923-Paper.pdf"},{"key":"e_1_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.1137\/S1064827595287997"},{"key":"e_1_2_1_19_1","doi-asserted-by":"publisher","DOI":"10.1145\/2882903.2915204"},{"key":"e_1_2_1_20_1","volume-title":"Behemoth: A Flash-centric Training Accelerator for Extreme-scale DNNs. In 19th USENIX Conference on File and Storage Technologies (FAST 21)","author":"Kim Shine","unstructured":"Shine Kim , Yunho Jin , Gina Sohn , Jonghyun Bae , Tae Jun Ham , and Jae W. Lee . 2021 . Behemoth: A Flash-centric Training Accelerator for Extreme-scale DNNs. In 19th USENIX Conference on File and Storage Technologies (FAST 21) . USENIX Association, 371--385. https:\/\/www.usenix.org\/conference\/fast21\/presentation\/kim Shine Kim, Yunho Jin, Gina Sohn, Jonghyun Bae, Tae Jun Ham, and Jae W. Lee. 2021. Behemoth: A Flash-centric Training Accelerator for Extreme-scale DNNs. In 19th USENIX Conference on File and Storage Technologies (FAST 21). USENIX Association, 371--385. https:\/\/www.usenix.org\/conference\/fast21\/presentation\/kim"},{"key":"e_1_2_1_21_1","volume-title":"Semi-Supervised Classification with Graph Convolutional Networks. In International Conference on Learning Representations (ICLR).","author":"Thomas","unstructured":"Thomas N. Kipf and Max Welling. 2017 . Semi-Supervised Classification with Graph Convolutional Networks. In International Conference on Learning Representations (ICLR). Thomas N. Kipf and Max Welling. 2017. Semi-Supervised Classification with Graph Convolutional Networks. In International Conference on Learning Representations (ICLR)."},{"key":"e_1_2_1_22_1","volume-title":"Proceedings of the 10th USENIX Symposium on Operating Systems Design and Implementation. USENIX Association, 31--46","author":"Kyrola Aapo","year":"2012","unstructured":"Aapo Kyrola , Guy Blelloch , and Carlos Guestrin . 2012 . GraphChi: Large-Scale Graph Computation on Just a PC . In Proceedings of the 10th USENIX Symposium on Operating Systems Design and Implementation. USENIX Association, 31--46 . Aapo Kyrola, Guy Blelloch, and Carlos Guestrin. 2012. GraphChi: Large-Scale Graph Computation on Just a PC. In Proceedings of the 10th USENIX Symposium on Operating Systems Design and Implementation. USENIX Association, 31--46."},{"key":"e_1_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.1109\/LCA.2021.3098943"},{"key":"e_1_2_1_24_1","article-title":"Kronecker graphs: an approach to modeling networks","volume":"11","author":"Leskovec Jure","year":"2010","unstructured":"Jure Leskovec , Deepayan Chakrabarti , Jon Kleinberg , Christos Faloutsos , and Zoubin Ghahramani . 2010 . Kronecker graphs: an approach to modeling networks . Journal of Machine Learning Research 11 , 2 (2010). Jure Leskovec, Deepayan Chakrabarti, Jon Kleinberg, Christos Faloutsos, and Zoubin Ghahramani. 2010. Kronecker graphs: an approach to modeling networks. Journal of Machine Learning Research 11, 2 (2010).","journal-title":"Journal of Machine Learning Research"},{"key":"e_1_2_1_25_1","unstructured":"Jure Leskovec and Andrej Krevl. 2014. SNAP Datasets: Stanford large network dataset collection.  Jure Leskovec and Andrej Krevl. 2014. SNAP Datasets: Stanford large network dataset collection."},{"key":"e_1_2_1_26_1","volume-title":"Proceedings of the 2021 USENIX Annual Technical Conference. 
USENIX Association, 225--238","author":"Li Cangyuan","year":"2021","unstructured":"Cangyuan Li , Ying Wang , Cheng Liu , Shengwen Liang , Huawei Li , and Xiaowei Li . 2021 . GLIST: Towards In-Storage Graph Learning . In Proceedings of the 2021 USENIX Annual Technical Conference. USENIX Association, 225--238 . Cangyuan Li, Ying Wang, Cheng Liu, Shengwen Liang, Huawei Li, and Xiaowei Li. 2021. GLIST: Towards In-Storage Graph Learning. In Proceedings of the 2021 USENIX Annual Technical Conference. USENIX Association, 225--238."},{"key":"e_1_2_1_27_1","doi-asserted-by":"publisher","DOI":"10.1145\/3419111.3421281"},{"key":"e_1_2_1_28_1","volume-title":"Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13--18","volume":"119","author":"Liu Evan Zheran","year":"2020","unstructured":"Evan Zheran Liu , Milad Hashemi , Kevin Swersky , Parthasarathy Ranganathan , and Junwhan Ahn . 2020 . An Imitation Learning Approach for Cache Replacement . In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13--18 July 2020, Virtual Event (Proceedings of Machine Learning Research) , Vol. 119 . PMLR, 6237--6247. http:\/\/proceedings.mlr.press\/v119\/liu20f.html Evan Zheran Liu, Milad Hashemi, Kevin Swersky, Parthasarathy Ranganathan, and Junwhan Ahn. 2020. An Imitation Learning Approach for Cache Replacement. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13--18 July 2020, Virtual Event (Proceedings of Machine Learning Research), Vol. 119. PMLR, 6237--6247. http:\/\/proceedings.mlr.press\/v119\/liu20f.html"},{"key":"e_1_2_1_29_1","volume-title":"Proceedings of the 2019 USENIX Annual Technical Conference. USENIX Association, 443--458","author":"Ma Lingxiao","year":"2019","unstructured":"Lingxiao Ma , Zhi Yang , Youshan Miao , Jilong Xue , Ming Wu , Lidong Zhou , and Yafei Dai . 2019 . NeuGraph: Parallel Deep Neural Network Computation on Large Graphs . In Proceedings of the 2019 USENIX Annual Technical Conference. USENIX Association, 443--458 . Lingxiao Ma, Zhi Yang, Youshan Miao, Jilong Xue, Ming Wu, Lidong Zhou, and Yafei Dai. 2019. NeuGraph: Parallel Deep Neural Network Computation on Large Graphs. In Proceedings of the 2019 USENIX Annual Technical Conference. USENIX Association, 443--458."},{"key":"e_1_2_1_30_1","doi-asserted-by":"publisher","DOI":"10.1145\/3064176.3064191"},{"key":"e_1_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.1109\/TC.2022.3157525"},{"key":"e_1_2_1_32_1","doi-asserted-by":"publisher","DOI":"10.1109\/SC.2018.00035"},{"key":"e_1_2_1_33_1","doi-asserted-by":"publisher","DOI":"10.14778\/3476249.3476264"},{"key":"e_1_2_1_34_1","doi-asserted-by":"publisher","DOI":"10.1109\/12.88484"},{"key":"e_1_2_1_35_1","volume-title":"Proceedings of the 15th USENIX Symposium on Operating Systems Design and Implementation. USENIX Association, 533--549","author":"Mohoney Jason","year":"2021","unstructured":"Jason Mohoney , Roger Waleffe , Henry Xu , Theodoros Rekatsinas , and Shivaram Venkataraman . 2021 . Marius: Learning Massive Graph Embeddings on a Single Machine . In Proceedings of the 15th USENIX Symposium on Operating Systems Design and Implementation. USENIX Association, 533--549 . Jason Mohoney, Roger Waleffe, Henry Xu, Theodoros Rekatsinas, and Shivaram Venkataraman. 2021. Marius: Learning Massive Graph Embeddings on a Single Machine. In Proceedings of the 15th USENIX Symposium on Operating Systems Design and Implementation. 
USENIX Association, 533--549."},{"key":"e_1_2_1_36_1","doi-asserted-by":"publisher","DOI":"10.1145\/3394486.3403280"},{"key":"e_1_2_1_37_1","volume-title":"Garnett (Eds.)","volume":"32","author":"Paszke Adam","year":"2019","unstructured":"Adam Paszke , Sam Gross , Francisco Massa , Adam Lerer , James Bradbury , Gregory Chanan , Trevor Killeen , Zeming Lin , Natalia Gimelshein , Luca Antiga , Alban Desmaison , Andreas Kopf , Edward Yang , Zachary DeVito , Martin Raison , Alykhan Tejani , Sasank Chilamkurthy , Benoit Steiner , Lu Fang , Junjie Bai , and Soumith Chintala . 2019 . PyTorch: An Imperative Style, High-Performance Deep Learning Library. In Advances in Neural Information Processing Systems, H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch\u00e9-Buc, E. Fox, and R . Garnett (Eds.) , Vol. 32 . Curran Associates, Inc. https:\/\/proceedings.neurips.cc\/paper\/ 2019\/file\/bdbca288fee7f92f2bfa9f7012727740-Paper.pdf Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In Advances in Neural Information Processing Systems, H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch\u00e9-Buc, E. Fox, and R. Garnett (Eds.), Vol. 32. Curran Associates, Inc. https:\/\/proceedings.neurips.cc\/paper\/2019\/file\/bdbca288fee7f92f2bfa9f7012727740-Paper.pdf"},{"key":"e_1_2_1_38_1","doi-asserted-by":"publisher","DOI":"10.1145\/2517349.2522740"},{"key":"e_1_2_1_39_1","doi-asserted-by":"publisher","DOI":"10.1145\/3352460.3358319"},{"key":"e_1_2_1_40_1","doi-asserted-by":"publisher","DOI":"10.1109\/JPROC.2018.2857721"},{"key":"e_1_2_1_41_1","volume-title":"Graph Attention Networks. International Conference on Learning Representations","author":"Veli\u010dkovi\u0107 Petar","year":"2018","unstructured":"Petar Veli\u010dkovi\u0107 , Guillem Cucurull , Arantxa Casanova , Adriana Romero , Pietro Li\u00f2 , and Yoshua Bengio . 2018 . Graph Attention Networks. International Conference on Learning Representations (2018). Petar Veli\u010dkovi\u0107, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Li\u00f2, and Yoshua Bengio. 2018. Graph Attention Networks. International Conference on Learning Representations (2018)."},{"key":"e_1_2_1_42_1","volume-title":"Highly-Performant Package for Graph Neural Networks. arXiv preprint arXiv:1909.01315","author":"Wang Minjie","year":"2019","unstructured":"Minjie Wang , Da Zheng , Zihao Ye , Quan Gan , Mufei Li , Xiang Song , Jinjing Zhou , Chao Ma , Lingfan Yu , Yu Gai , Tianjun Xiao , Tong He , George Karypis , Jinyang Li , and Zheng Zhang . 2019. Deep Graph Library: A Graph-Centric , Highly-Performant Package for Graph Neural Networks. arXiv preprint arXiv:1909.01315 ( 2019 ). Minjie Wang, Da Zheng, Zihao Ye, Quan Gan, Mufei Li, Xiang Song, Jinjing Zhou, Chao Ma, Lingfan Yu, Yu Gai, Tianjun Xiao, Tong He, George Karypis, Jinyang Li, and Zheng Zhang. 2019. Deep Graph Library: A Graph-Centric, Highly-Performant Package for Graph Neural Networks. 
arXiv preprint arXiv:1909.01315 (2019)."},{"key":"e_1_2_1_43_1","doi-asserted-by":"publisher","DOI":"10.1109\/TNNLS.2020.2978386"},{"key":"e_1_2_1_44_1","doi-asserted-by":"publisher","DOI":"10.1145\/3292500.3340404"},{"key":"e_1_2_1_45_1","doi-asserted-by":"publisher","DOI":"10.1145\/3219819.3219890"},{"key":"e_1_2_1_46_1","doi-asserted-by":"publisher","DOI":"10.14778\/3415478.3415539"},{"key":"e_1_2_1_47_1","doi-asserted-by":"publisher","DOI":"10.5555\/3327345.3327423"},{"key":"e_1_2_1_48_1","doi-asserted-by":"publisher","DOI":"10.1109\/TKDE.2020.2981333"},{"key":"e_1_2_1_49_1","doi-asserted-by":"publisher","DOI":"10.1109\/BigData52589.2021.9671931"},{"key":"e_1_2_1_50_1","doi-asserted-by":"publisher","DOI":"10.1145\/2808233"},{"key":"e_1_2_1_51_1","doi-asserted-by":"crossref","unstructured":"Da Zheng Chao Ma Minjie Wang Jinjing Zhou Qidong Su Xiang Song Quan Gan Zheng Zhang and George Karypis. 2021. DistDGL: Distributed Graph Neural Network Training for Billion-Scale Graphs. arXiv:2010.05337 [cs.LG]  Da Zheng Chao Ma Minjie Wang Jinjing Zhou Qidong Su Xiang Song Quan Gan Zheng Zhang and George Karypis. 2021. DistDGL: Distributed Graph Neural Network Training for Billion-Scale Graphs. arXiv:2010.05337 [cs.LG]","DOI":"10.1109\/IA351965.2020.00011"},{"key":"e_1_2_1_52_1","volume-title":"Szalay","author":"Zheng Da","year":"2015","unstructured":"Da Zheng , Disa Mhembere , Randal Burns , Joshua Vogelstein , Carey E. Priebe , and Alexander S . Szalay . 2015 . FlashGraph: Processing Billion-Node Graphs on an Array of Commodity SSDs. In Proceedings of the 13th USENIX Conference on File and Storage Technologies. USENIX Association , 45--58. Da Zheng, Disa Mhembere, Randal Burns, Joshua Vogelstein, Carey E. Priebe, and Alexander S. Szalay. 2015. FlashGraph: Processing Billion-Node Graphs on an Array of Commodity SSDs. In Proceedings of the 13th USENIX Conference on File and Storage Technologies. USENIX Association, 45--58."},{"key":"e_1_2_1_53_1","doi-asserted-by":"crossref","unstructured":"Da Zheng Xiang Song Chengru Yang Qidong Su Minjie Wang Chao Ma and George Karypis. 2022. Distributed Hybrid CPU and GPU training for Graph Neural Networks on Billion-Scale Graphs. arXiv:2112.15345 [cs.DC]  Da Zheng Xiang Song Chengru Yang Qidong Su Minjie Wang Chao Ma and George Karypis. 2022. Distributed Hybrid CPU and GPU training for Graph Neural Networks on Billion-Scale Graphs. arXiv:2112.15345 [cs.DC]","DOI":"10.1145\/3534678.3539177"},{"key":"e_1_2_1_54_1","unstructured":"Jie Zhou Ganqu Cui Shengding Hu Zhengyan Zhang Cheng Yang Zhiyuan Liu Lifeng Wang Changcheng Li and Maosong Sun. 2021. Graph Neural Networks: A Review of Methods and Applications. arXiv:1812.08434 [cs.LG]  Jie Zhou Ganqu Cui Shengding Hu Zhengyan Zhang Cheng Yang Zhiyuan Liu Lifeng Wang Changcheng Li and Maosong Sun. 2021. Graph Neural Networks: A Review of Methods and Applications. arXiv:1812.08434 [cs.LG]"},{"key":"e_1_2_1_55_1","doi-asserted-by":"publisher","DOI":"10.14778\/3352063.3352127"},{"key":"e_1_2_1_56_1","volume-title":"Proceedings of the 2015 USENIX Annual Technical Conference. USENIX Association, 375--386","author":"Zhu Xiaowei","year":"2015","unstructured":"Xiaowei Zhu , Wentao Han , and Wenguang Chen . 2015 . GridGraph: Large-Scale Graph Processing on a Single Machine Using 2-Level Hierarchical Partitioning . In Proceedings of the 2015 USENIX Annual Technical Conference. USENIX Association, 375--386 . Xiaowei Zhu, Wentao Han, and Wenguang Chen. 2015. 
GridGraph: Large-Scale Graph Processing on a Single Machine Using 2-Level Hierarchical Partitioning. In Proceedings of the 2015 USENIX Annual Technical Conference. USENIX Association, 375--386."}],"container-title":["Proceedings of the VLDB Endowment"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.14778\/3551793.3551819","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2022,12,28]],"date-time":"2022-12-28T10:33:45Z","timestamp":1672223625000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.14778\/3551793.3551819"}},"subtitle":["SSD-enabled billion-scale graph neural network training on a single machine via provably optimal in-memory caching"],"short-title":[],"issued":{"date-parts":[[2022,7]]},"references-count":56,"journal-issue":{"issue":"11","published-print":{"date-parts":[[2022,7]]}},"alternative-id":["10.14778\/3551793.3551819"],"URL":"https:\/\/doi.org\/10.14778\/3551793.3551819","relation":{},"ISSN":["2150-8097"],"issn-type":[{"value":"2150-8097","type":"print"}],"subject":[],"published":{"date-parts":[[2022,7]]}}}