{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,16]],"date-time":"2026-04-16T23:08:30Z","timestamp":1776380910058,"version":"3.51.2"},"reference-count":58,"publisher":"Association for Computing Machinery (ACM)","issue":"8","content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["Proc. VLDB Endow."],"published-print":{"date-parts":[[2024,4]]},"abstract":"<jats:p>Graph Neural Networks (GNNs) have shown exceptional performance across a wide range of applications. Current frameworks leverage CPU-GPU heterogeneous environments for GNN model training, incorporating mini-batch and sampling techniques to mitigate GPU memory constraints. In such settings, sample-based GNN training can be divided into three phases: sampling, gathering, and training. Existing GNN systems deploy various task orchestration methods to execute each phase on either the CPU or GPU. However, through comprehensive experimentation and analysis, we observe that these task orchestration approaches do not optimally exploit the available heterogeneous resources, hindered by either inefficient CPU processing or GPU resource bottlenecks.<\/jats:p>\n          <jats:p>In this paper, we propose NeutronOrch, a system for sample-based GNN training that ensures balanced utilization of the CPU and GPU. NeutronOrch decouples the training process by layer and pushes down the training task of the bottom layer to the CPU. This significantly reduces the computational load and memory footprint of GPU training. To avoid inefficient CPU processing, NeutronOrch only offloads the training of frequently accessed vertices to the CPU and lets GPU reuse their embeddings with bounded staleness. Furthermore, NeutronOrch provides a fine-grained pipeline design for the layer-based task orchestrating method. 
The experimental results show that compared with the state-of-the-art GNN systems, NeutronOrch can achieve up to 11.51\u00d7 performance speedup.<\/jats:p>","DOI":"10.14778\/3659437.3659453","type":"journal-article","created":{"date-parts":[[2024,5,31]],"date-time":"2024-05-31T16:22:27Z","timestamp":1717172547000},"page":"1995-2008","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":12,"title":["NeutronOrch: Rethinking Sample-Based GNN Training under CPU-GPU Heterogeneous Environments"],"prefix":"10.14778","volume":"17","author":[{"given":"Xin","family":"Ai","sequence":"first","affiliation":[{"name":"Northeastern Univ., China"}]},{"given":"Qiange","family":"Wang","sequence":"additional","affiliation":[{"name":"National University of Singapore, Singapore"}]},{"given":"Chunyu","family":"Cao","sequence":"additional","affiliation":[{"name":"Northeastern Univ., China"}]},{"given":"Yanfeng","family":"Zhang","sequence":"additional","affiliation":[{"name":"Northeastern Univ., China"}]},{"given":"Chaoyi","family":"Chen","sequence":"additional","affiliation":[{"name":"Northeastern Univ., China"}]},{"given":"Hao","family":"Yuan","sequence":"additional","affiliation":[{"name":"Northeastern Univ., China"}]},{"given":"Yu","family":"Gu","sequence":"additional","affiliation":[{"name":"Northeastern Univ., China"}]},{"given":"Ge","family":"Yu","sequence":"additional","affiliation":[{"name":"Northeastern Univ., China"}]}],"member":"320","published-online":{"date-parts":[[2024,5,31]]},"reference":[{"key":"e_1_2_1_1_1","volume-title":"Proceedings of the Twelfth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD'06","author":"Backstrom Lars","year":"2006","unstructured":"Lars Backstrom, Daniel P. Huttenlocher, Jon M. Kleinberg, and Xiangyang Lan. 2006. Group formation in large social networks: membership, growth, and evolution. 
In Proceedings of the Twelfth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD'06, Philadelphia, PA, USA. 44--54."},{"key":"e_1_2_1_2_1","volume-title":"Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP'17","author":"Bastings Jasmijn","year":"2017","unstructured":"Jasmijn Bastings, Ivan Titov, Wilker Aziz, Diego Marcheggiani, and Khalil Sima'an. 2017. Graph Convolutional Encoders for Syntax-aware Neural Machine Translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP'17, Copenhagen, Denmark. Association for Computational Linguistics, 1957--1967."},{"key":"e_1_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.1145\/3572848.3577528"},{"key":"e_1_2_1_4_1","doi-asserted-by":"publisher","DOI":"10.1137\/1.9781611972740.43"},{"key":"e_1_2_1_5_1","volume-title":"6th International Conference on Learning Representations, ICLR'18","author":"Chen Jie","year":"2018","unstructured":"Jie Chen, Tengfei Ma, and Cao Xiao. 2018. FastGCN: Fast Learning with Graph Convolutional Networks via Importance Sampling. In 6th International Conference on Learning Representations, ICLR'18, Vancouver, BC, Canada. OpenReview.net."},{"key":"e_1_2_1_6_1","volume-title":"Proceedings of the 35th International Conference on Machine Learning, ICML'18, Stockholmsm\u00e4ssan","volume":"80","author":"Chen Jianfei","year":"2018","unstructured":"Jianfei Chen, Jun Zhu, and Le Song. 2018. Stochastic Training of Graph Convolutional Networks with Variance Reduction. In Proceedings of the 35th International Conference on Machine Learning, ICML'18, Stockholmsm\u00e4ssan, Stockholm, Sweden (Proceedings of Machine Learning Research), Vol. 80. PMLR, 941--949."},{"key":"e_1_2_1_7_1","volume-title":"Graph-based Representation Learning for Web-scale Recommender Systems. 
In The 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD'22","author":"El-Kishky Ahmed","year":"2022","unstructured":"Ahmed El-Kishky, Michael M. Bronstein, Ying Xiao, and Aria Haghighi. 2022. Graph-based Representation Learning for Web-scale Recommender Systems. In The 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD'22, Washington, DC, USA, Aidong Zhang and Huzefa Rangwala (Eds.). ACM, 4784--4785."},{"key":"e_1_2_1_8_1","volume-title":"Graph Neural Networks for Social Recommendation. In The World Wide Web Conference, WWW'19","author":"Fan Wenqi","year":"2019","unstructured":"Wenqi Fan, Yao Ma, Qing Li, Yuan He, Yihong Eric Zhao, Jiliang Tang, and Dawei Yin. 2019. Graph Neural Networks for Social Recommendation. In The World Wide Web Conference, WWW'19, San Francisco, CA, USA. ACM, 417--426."},{"key":"e_1_2_1_9_1","volume-title":"Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18--24","volume":"139","author":"Fey Matthias","year":"2021","unstructured":"Matthias Fey, Jan Eric Lenssen, Frank Weichert, and Jure Leskovec. 2021. GNNAutoScale: Scalable and Expressive Graph Neural Networks via Historical Embeddings. In Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18--24 July 2021, Virtual Event (Proceedings of Machine Learning Research), Marina Meila and Tong Zhang (Eds.), Vol. 139. PMLR, 3294--3304."},{"key":"e_1_2_1_10_1","doi-asserted-by":"publisher","DOI":"10.1145\/3470496.3527403"},{"key":"e_1_2_1_11_1","volume-title":"Inductive Representation Learning on Large Graphs. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems","author":"Hamilton William L.","year":"2017","unstructured":"William L. Hamilton, Zhitao Ying, and Jure Leskovec. 2017. Inductive Representation Learning on Large Graphs. 
In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, NeurIPS'17, Long Beach, CA, USA. 1024--1034."},{"key":"e_1_2_1_12_1","volume-title":"Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems","author":"Ho Qirong","year":"2013","unstructured":"Qirong Ho, James Cipar, Henggang Cui, Seunghak Lee, Jin Kyu Kim, Phillip B. Gibbons, Garth A. Gibson, Gregory R. Ganger, and Eric P. Xing. 2013. More Effective Distributed ML via a Stale Synchronous Parallel Parameter Server. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013, NeurIPS'13, Lake Tahoe, Nevada, United States. 1223--1231."},{"key":"e_1_2_1_13_1","volume-title":"Open Graph Benchmark: Datasets for Machine Learning on Graphs. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems","author":"Hu Weihua","year":"2020","unstructured":"Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. 2020. Open Graph Benchmark: Datasets for Machine Learning on Graphs. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS'20, December 6--12."},{"key":"e_1_2_1_14_1","volume-title":"ReFresh: Reducing Memory Access from Exploiting Stable Historical Embeddings for Graph Neural Network Training. CoRR abs\/2301.07482","author":"Huang Kezhao","year":"2023","unstructured":"Kezhao Huang, Haitian Jiang, Minjie Wang, Guangxuan Xiao, David Wipf, Xiang Song, Quan Gan, Zengfeng Huang, Jidong Zhai, and Zheng Zhang. 2023. ReFresh: Reducing Memory Access from Exploiting Stable Historical Embeddings for Graph Neural Network Training. 
CoRR abs\/2301.07482 (2023)."},{"key":"e_1_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.1145\/3437801.3441585"},{"key":"e_1_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.1145\/3437801.3441585"},{"key":"e_1_2_1_17_1","volume-title":"International Conference on Management of Data, SIGMOD'22","author":"Ilyas Ihab F.","unstructured":"Ihab F. Ilyas, Theodoros Rekatsinas, Vishnu Konda, Jeffrey Pound, Xiaoguang Qi, and Mohamed A. Soliman. 2022. Saga: A Platform for Continuous Construction and Serving of Knowledge at Scale. In International Conference on Management of Data, SIGMOD'22, Philadelphia, PA, USA. ACM, 2259--2272."},{"key":"e_1_2_1_18_1","unstructured":"Intel. 2022. Analyzing CPU Utilization. https:\/\/www.intel.com\/content\/www\/us\/en\/developer\/articles\/tool\/performance-counter-monitor.html."},{"key":"e_1_2_1_19_1","doi-asserted-by":"publisher","DOI":"10.1145\/3447786.3456244"},{"key":"e_1_2_1_20_1","volume-title":"Proceedings of Machine Learning and Systems","author":"Jia Zhihao","year":"2020","unstructured":"Zhihao Jia, Sina Lin, Mingyu Gao, Matei Zaharia, and Alex Aiken. 2020. Improving the Accuracy, Scalability, and Performance of Graph Neural Networks with Roc. In Proceedings of Machine Learning and Systems 2020, MLSys'20, Austin, TX, USA, Inderjit S. Dhillon, Dimitris S. Papailiopoulos, and Vivienne Sze (Eds.). mlsys.org."},{"key":"e_1_2_1_21_1","volume-title":"Semi-Supervised Classification with Graph Convolutional Networks. In 5th International Conference on Learning Representations, ICLR'17","author":"Thomas","unstructured":"Thomas N. Kipf and Max Welling. 2017. Semi-Supervised Classification with Graph Convolutional Networks. In 5th International Conference on Learning Representations, ICLR'17, Toulon, France, Conference Track Proceedings. 
OpenReview.net."},{"key":"e_1_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.1145\/2487788.2488173"},{"key":"e_1_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.1145\/3419111.3421281"},{"key":"e_1_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1145\/3064176.3064191"},{"key":"e_1_2_1_25_1","unstructured":"Linux man pages. 2023. htop(1) --- Linux manual page. https:\/\/man7.org\/linux\/man-pages\/man1\/htop.1.html."},{"key":"e_1_2_1_26_1","unstructured":"Linux man pages. 2023. top(1) --- Linux manual page. https:\/\/man7.org\/linux\/man-pages\/man1\/top.1.html."},{"key":"e_1_2_1_27_1","unstructured":"Microsoft. 2020. Extreme-scale model training for everyone. https:\/\/www.microsoft.com\/en-us\/research\/blog\/deepspeed-extreme-scalemodel-training-for-everyone."},{"key":"e_1_2_1_28_1","volume-title":"Graph Neural Network Training and Data Tiering. In The 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD'22","author":"Min Seungwon","year":"2022","unstructured":"Seungwon Min, Kun Wu, Mert Hidayetoglu, Jinjun Xiong, Xiang Song, and Wen-Mei Hwu. 2022. Graph Neural Network Training and Data Tiering. In The 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD'22, Washington, DC, USA. ACM, 3555--3565."},{"key":"e_1_2_1_29_1","doi-asserted-by":"publisher","DOI":"10.14778\/3476249.3476264"},{"key":"e_1_2_1_30_1","volume-title":"15th USENIX Symposium on Operating Systems Design and Implementation, OSDI'21","author":"Mohoney Jason","year":"2021","unstructured":"Jason Mohoney, Roger Waleffe, Henry Xu, Theodoros Rekatsinas, and Shivaram Venkataraman. 2021. Marius: Learning Massive Graph Embeddings on a Single Machine. In 15th USENIX Symposium on Operating Systems Design and Implementation, OSDI'21. USENIX Association, 533--549."},{"key":"e_1_2_1_31_1","unstructured":"NVIDIA. 2018. gpu-monitoring-tools. https:\/\/github.com\/NVIDIA\/gpu-monitoring-tools."},{"key":"e_1_2_1_32_1","unstructured":"NVIDIA. 2022. DGX Systems. 
https:\/\/www.nvidia.com\/en-sg\/data-center\/dgx-systems\/dgx-1."},{"key":"e_1_2_1_33_1","doi-asserted-by":"publisher","DOI":"10.14778\/3538598.3538614"},{"key":"e_1_2_1_34_1","volume-title":"Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, SC'20","author":"Rajbhandari Samyam","year":"2020","unstructured":"Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. 2020. ZeRO: memory optimizations toward training trillion parameter models. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, SC'20, Virtual Event \/ Atlanta, Georgia, USA. IEEE\/ACM, 20."},{"key":"e_1_2_1_35_1","doi-asserted-by":"publisher","DOI":"10.1145\/3458817.3476205"},{"key":"e_1_2_1_36_1","volume-title":"Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems","author":"Ramezani Morteza","year":"2020","unstructured":"Morteza Ramezani, Weilin Cong, Mehrdad Mahdavi, Anand Sivasubramaniam, and Mahmut T. Kandemir. 2020. GCN meets GPU: Decoupling \"When to Sample\" from \"How to Sample\". In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS'20, virtual, Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (Eds.)."},{"key":"e_1_2_1_37_1","volume-title":"ZeRO-Offload: Democratizing Billion-Scale Model Training. In 2021 USENIX Annual Technical Conference, ATC'21","author":"Ren Jie","year":"2021","unstructured":"Jie Ren, Samyam Rajbhandari, Reza Yazdani Aminabadi, Olatunji Ruwase, Shuangyan Yang, Minjia Zhang, Dong Li, and Yuxiong He. 2021. ZeRO-Offload: Democratizing Billion-Scale Model Training. In 2021 USENIX Annual Technical Conference, ATC'21. USENIX Association, 551--564."},{"key":"e_1_2_1_38_1","volume-title":"Graph Neural Network-Based Short-Term Load Forecasting with Temporal Convolution. 
Data Science and Engineering","author":"Sun Chenchen","year":"2023","unstructured":"Chenchen Sun, Yan Ning, Derong Shen, and Tiezheng Nie. 2023. Graph Neural Network-Based Short-Term Load Forecasting with Temporal Convolution. Data Science and Engineering (2023), 1--20."},{"key":"e_1_2_1_39_1","volume-title":"Legion: Automatically Pushing the Envelope of Multi-GPU System for Billion-Scale GNN Training. In USENIX Annual Technical Conference, USENIX ATC 2023","author":"Sun Jie","year":"2023","unstructured":"Jie Sun, Li Su, Zuocheng Shi, Wenting Shen, Zeke Wang, Lei Wang, Jie Zhang, Yong Li, Wenyuan Yu, Jingren Zhou, and Fei Wu. 2023. Legion: Automatically Pushing the Envelope of Multi-GPU System for Billion-Scale GNN Training. In USENIX Annual Technical Conference, USENIX ATC 2023, Boston, MA, USA, July 10--12, 2023, Julia Lawall and Dan Williams (Eds.). USENIX Association, 165--179."},{"key":"e_1_2_1_40_1","volume-title":"Quiver: Supporting GPUs for Low-Latency, High-Throughput GNN Serving with Workload Awareness. CoRR abs\/2305.10863","author":"Tan Zeyuan","year":"2023","unstructured":"Zeyuan Tan, Xiulong Yuan, Congjie He, Man-Kit Sit, Guo Li, Xiaoze Liu, Baole Ai, Kai Zeng, Peter R. Pietzuch, and Luo Mai. 2023. Quiver: Supporting GPUs for Low-Latency, High-Throughput GNN Serving with Workload Awareness. CoRR abs\/2305.10863 (2023)."},{"key":"e_1_2_1_41_1","volume-title":"Graph Attention Networks. In 6th International Conference on Learning Representations, ICLR'18","author":"Velickovic Petar","year":"2018","unstructured":"Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Li\u00f2, and Yoshua Bengio. 2018. Graph Attention Networks. In 6th International Conference on Learning Representations, ICLR'18, Vancouver, BC, Canada, Conference Track Proceedings. OpenReview.net."},{"key":"e_1_2_1_42_1","volume-title":"Large-Scale Training of Graph Neural Networks on a Single Machine. 
CoRR abs\/2202.02365","author":"Waleffe Roger","year":"2022","unstructured":"Roger Waleffe, Jason Mohoney, Theodoros Rekatsinas, and Shivaram Venkataraman. 2022. Marius++: Large-Scale Training of Graph Neural Networks on a Single Machine. CoRR abs\/2202.02365 (2022)."},{"key":"e_1_2_1_43_1","volume-title":"Deep Graph Library: Towards Efficient and Scalable Deep Learning on Graphs. CoRR abs\/1909.01315","author":"Wang Minjie","year":"2019","unstructured":"Minjie Wang, Lingfan Yu, Da Zheng, Quan Gan, Yu Gai, Zihao Ye, Mufei Li, Jinjing Zhou, Qi Huang, Chao Ma, Ziyue Huang, Qipeng Guo, Hao Zhang, Haibin Lin, Junbo Zhao, Jinyang Li, Alexander J. Smola, and Zheng Zhang. 2019. Deep Graph Library: Towards Efficient and Scalable Deep Learning on Graphs. CoRR abs\/1909.01315 (2019)."},{"key":"e_1_2_1_44_1","volume-title":"HongTu: Scalable Full-Graph GNN Training on Multiple GPUs (via communication-optimized CPU data offloading). CoRR abs\/2311.14898","author":"Wang Qiange","year":"2023","unstructured":"Qiange Wang, Yao Chen, Weng-Fai Wong, and Bingsheng He. 2023. HongTu: Scalable Full-Graph GNN Training on Multiple GPUs (via communication-optimized CPU data offloading). CoRR abs\/2311.14898 (2023)."},{"key":"e_1_2_1_45_1","volume-title":"NeutronStar: Distributed GNN Training with Hybrid Dependency Management. In International Conference on Management of Data","author":"Wang Qiange","year":"2022","unstructured":"Qiange Wang, Yanfeng Zhang, Hao Wang, Chaoyi Chen, Xiaodong Zhang, and Ge Yu. 2022. NeutronStar: Distributed GNN Training with Hybrid Dependency Management. In International Conference on Management of Data, SIGMOD'22, Philadelphia, PA, USA, Zachary Ives, Angela Bonifati, and Amr El Abbadi (Eds.). ACM, 1301--1315."},{"key":"e_1_2_1_46_1","volume-title":"Hashing-Accelerated Graph Neural Networks for Link Prediction. In The Web Conference 2021, WWW'21, Virtual Event \/ Ljubljana, Slovenia. 
ACM \/ IW3C2, 2910--2920","author":"Wu Wei","year":"2021","unstructured":"Wei Wu, Bin Li, Chuan Luo, and Wolfgang Nejdl. 2021. Hashing-Accelerated Graph Neural Networks for Link Prediction. In The Web Conference 2021, WWW'21, Virtual Event \/ Ljubljana, Slovenia. ACM \/ IW3C2, 2910--2920."},{"key":"e_1_2_1_47_1","volume-title":"TurboGNN: Improving the End-to-End Performance for Sampling-Based GNN Training on GPUs","author":"Wu Wenchao","year":"2023","unstructured":"Wenchao Wu, Xuanhua Shi, Ligang He, and Hai Jin. 2023. TurboGNN: Improving the End-to-End Performance for Sampling-Based GNN Training on GPUs. IEEE Trans. Comput. (2023)."},{"key":"e_1_2_1_48_1","doi-asserted-by":"publisher","DOI":"10.1109\/TNNLS.2020.2978386"},{"key":"e_1_2_1_49_1","volume-title":"7th International Conference on Learning Representations, ICLR 2019","author":"Xu Keyulu","year":"2019","unstructured":"Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. 2019. How Powerful are Graph Neural Networks?. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6--9, 2019. https:\/\/openreview.net\/forum?id=ryGs6iA5Km"},{"key":"e_1_2_1_50_1","volume-title":"International Conference for High Performance Computing, Networking, Storage and Analysis, SC'22","author":"Yang Dongxu","unstructured":"Dongxu Yang, Junhong Liu, Jiaxing Qi, and Junjie Lai. 2022. WholeGraph: A Fast Graph Neural Network Training Framework with Multi-GPU Distributed Shared Memory Architecture. In International Conference for High Performance Computing, Networking, Storage and Analysis, SC'22, Dallas, TX, USA. IEEE, 1--14."},{"key":"e_1_2_1_51_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10115-013-0693-z"},{"key":"e_1_2_1_52_1","volume-title":"Seventeenth European Conference on Computer Systems, EuroSys '22","author":"Yang Jianbang","year":"2022","unstructured":"Jianbang Yang, Dahai Tang, Xiaoniu Song, Lei Wang, Qiang Yin, Rong Chen, Wenyuan Yu, and Jingren Zhou. 2022. 
GNNLab: a factored system for sample-based GNN training over GPUs. In Seventeenth European Conference on Computer Systems, EuroSys '22, Rennes, France. ACM, 417--434."},{"key":"e_1_2_1_53_1","doi-asserted-by":"publisher","DOI":"10.1145\/3219819.3219890"},{"key":"e_1_2_1_54_1","volume-title":"Link Prediction Based on Graph Neural Networks. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems","author":"Zhang Muhan","year":"2018","unstructured":"Muhan Zhang and Yixin Chen. 2018. Link Prediction Based on Graph Neural Networks. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS'18, Montr\u00e9al, Canada. 5171--5181."},{"key":"e_1_2_1_55_1","doi-asserted-by":"publisher","DOI":"10.1145\/3589311"},{"key":"e_1_2_1_56_1","doi-asserted-by":"publisher","DOI":"10.1109\/TKDE.2020.2981333"},{"key":"e_1_2_1_57_1","volume-title":"DistDGL: Distributed Graph Neural Network Training for Billion-Scale Graphs. In 10th IEEE\/ACM Workshop on Irregular Applications: Architectures and Algorithms, IA3'20","author":"Zheng Da","year":"2020","unstructured":"Da Zheng, Chao Ma, Minjie Wang, Jinjing Zhou, Qidong Su, Xiang Song, Quan Gan, Zheng Zhang, and George Karypis. 2020. DistDGL: Distributed Graph Neural Network Training for Billion-Scale Graphs. In 10th IEEE\/ACM Workshop on Irregular Applications: Architectures and Algorithms, IA3'20, Atlanta, GA, USA. 
IEEE, 36--44."},{"key":"e_1_2_1_58_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.aiopen.2021.01.001"}],"container-title":["Proceedings of the VLDB Endowment"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.14778\/3659437.3659453","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,5,31]],"date-time":"2024-05-31T16:32:07Z","timestamp":1717173127000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.14778\/3659437.3659453"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,4]]},"references-count":58,"journal-issue":{"issue":"8","published-print":{"date-parts":[[2024,4]]}},"alternative-id":["10.14778\/3659437.3659453"],"URL":"https:\/\/doi.org\/10.14778\/3659437.3659453","relation":{},"ISSN":["2150-8097"],"issn-type":[{"value":"2150-8097","type":"print"}],"subject":[],"published":{"date-parts":[[2024,4]]},"assertion":[{"value":"2024-05-31","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}