{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,5,3]],"date-time":"2026-05-03T09:43:03Z","timestamp":1777801383528,"version":"3.51.4"},"reference-count":174,"publisher":"Association for Computing Machinery (ACM)","issue":"8","license":[{"start":{"date-parts":[[2024,4,10]],"date-time":"2024-04-10T00:00:00Z","timestamp":1712707200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100018537","name":"National Science and Technology Major Project","doi-asserted-by":"crossref","award":["2022ZD0116315"],"award-info":[{"award-number":["2022ZD0116315"]}],"id":[{"id":"10.13039\/501100018537","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["62272054, 62192784, U23B2048, and U22B2037"],"award-info":[{"award-number":["62272054, 62192784, U23B2048, and U22B2037"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/501100005090","name":"Beijing Nova Program","doi-asserted-by":"crossref","award":["20230484319"],"award-info":[{"award-number":["20230484319"]}],"id":[{"id":"10.13039\/501100005090","id-type":"DOI","asserted-by":"crossref"}]},{"name":"Xiaomi Young Talents Program"},{"DOI":"10.13039\/501100001809","name":"National Science Foundation of China","doi-asserted-by":"crossref","award":["U22B2060"],"award-info":[{"award-number":["U22B2060"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"name":"Hong Kong RGC GRF Project","award":["16209519"],"award-info":[{"award-number":["16209519"]}]},{"name":"CRF Project","award":["C6030-18G, C2004-21GF"],"award-info":[{"award-number":["C6030-18G, C2004-21GF"]}]},{"name":"AOE 
Project","award":["AoE\/E-603\/18"],"award-info":[{"award-number":["AoE\/E-603\/18"]}]},{"name":"RIF Project","award":["R6020-19"],"award-info":[{"award-number":["R6020-19"]}]},{"name":"Theme-based project","award":["TRS T41-603\/20R"],"award-info":[{"award-number":["TRS T41-603\/20R"]}]},{"DOI":"10.13039\/501100021171","name":"Guangdong Basic and Applied Basic Research Foundation","doi-asserted-by":"crossref","award":["2019B151530001"],"award-info":[{"award-number":["2019B151530001"]}],"id":[{"id":"10.13039\/501100021171","id-type":"DOI","asserted-by":"crossref"}]},{"name":"Hong Kong ITC ITF","award":["MHX\/078\/21 and PRP\/004\/22FX"],"award-info":[{"award-number":["MHX\/078\/21 and PRP\/004\/22FX"]}]},{"name":"Microsoft Research Asia Collaborative Research Grant, HKUST-Webank joint research lab grant and HKUST Global Strategic Partnership Fund","award":["2021 SJTU-HKUST"],"award-info":[{"award-number":["2021 SJTU-HKUST"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Comput. Surv."],"published-print":{"date-parts":[[2024,8,31]]},"abstract":"<jats:p>Graph neural networks (GNNs) are a class of deep learning models that are trained on graphs and have been successfully applied in various domains. Despite the effectiveness of GNNs, it is still challenging for GNNs to efficiently scale to large graphs. As a remedy, distributed computing becomes a promising solution for training large-scale GNNs, since it is able to provide abundant computing resources. However, the dependencies within the graph structure increase the difficulty of achieving high-efficiency distributed GNN training, which suffers from massive communication overhead and workload imbalance. In recent years, many efforts have been made on distributed GNN training, and an array of training algorithms and systems have been proposed. Yet, there is a lack of a systematic review of the optimization techniques for the distributed execution of GNN training. 
In this survey, we analyze three major challenges in distributed GNN training: massive feature communication, the loss of model accuracy, and workload imbalance. Then, we introduce a new taxonomy for the optimization techniques in distributed GNN training that address the above challenges. The new taxonomy classifies existing techniques into four categories: GNN data partition, GNN batch generation, GNN execution model, and GNN communication protocol. We carefully discuss the techniques in each category. In the conclusion, we summarize existing distributed GNN systems for multi\u2013graphics processing units (GPUs), GPU-clusters and central processing unit (CPU)-clusters, respectively, and present a discussion about the future direction of distributed GNN training.<\/jats:p>","DOI":"10.1145\/3648358","type":"journal-article","created":{"date-parts":[[2024,2,16]],"date-time":"2024-02-16T12:32:59Z","timestamp":1708086779000},"page":"1-39","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":74,"title":["Distributed Graph Neural Network Training: A Survey"],"prefix":"10.1145","volume":"56","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-8559-2628","authenticated-orcid":false,"given":"Yingxia","family":"Shao","sequence":"first","affiliation":[{"name":"Beijing University of Posts and Telecommunications, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8377-5367","authenticated-orcid":false,"given":"Hongzheng","family":"Li","sequence":"additional","affiliation":[{"name":"Beijing University of Posts and Telecommunications, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0009-8883-6139","authenticated-orcid":false,"given":"Xizhi","family":"Gu","sequence":"additional","affiliation":[{"name":"Beijing University of Posts and Telecommunications, Beijing, 
China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2888-7630","authenticated-orcid":false,"given":"Hongbo","family":"Yin","sequence":"additional","affiliation":[{"name":"Beijing University of Posts and Telecommunications, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2662-3444","authenticated-orcid":false,"given":"Yawen","family":"Li","sequence":"additional","affiliation":[{"name":"Beijing University of Posts and Telecommunications, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9371-8358","authenticated-orcid":false,"given":"Xupeng","family":"Miao","sequence":"additional","affiliation":[{"name":"Carnegie Mellon University, Pittsburgh, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7532-5550","authenticated-orcid":false,"given":"Wentao","family":"Zhang","sequence":"additional","affiliation":[{"name":"Mila \u2013 Qu\u00e9bec AI Institute, HEC Montr\u00e9al, Montreal, Canada"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1681-4677","authenticated-orcid":false,"given":"Bin","family":"Cui","sequence":"additional","affiliation":[{"name":"Peking University, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8257-5806","authenticated-orcid":false,"given":"Lei","family":"Chen","sequence":"additional","affiliation":[{"name":"The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China"}]}],"member":"320","published-online":{"date-parts":[[2024,4,10]]},"reference":[{"key":"e_1_3_3_2_2","doi-asserted-by":"publisher","DOI":"10.1145\/3477141"},{"key":"e_1_3_3_3_2","first-page":"265","volume-title":"12th USENIX Symposium on Operating Systems Design and Implementation","author":"Abadi Martin","year":"2016","unstructured":"Martin Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, Manjunath Kudlur, Josh Levenberg, Rajat Monga, Sherry Moore, Derek G. 
Murray, Benoit Steiner, Paul Tucker, Vijay Vasudevan, Pete Warden, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. 2016. TensorFlow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation. 265\u2013283."},{"key":"e_1_3_3_4_2","doi-asserted-by":"publisher","DOI":"10.3390\/s21144758"},{"key":"e_1_3_3_5_2","first-page":"1","article-title":"Distributed training of graph convolutional networks using subgraph approximation","author":"Angerd Alexandra","year":"2020","unstructured":"Alexandra Angerd, Keshav Balasubramanian, and Murali Annavaram. 2020. Distributed training of graph convolutional networks using subgraph approximation. arXiv preprint arXiv:2012.04930 (2020), 1\u201314.","journal-title":"arXiv preprint arXiv:2012.04930"},{"key":"e_1_3_3_6_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR46437.2021.00937"},{"key":"e_1_3_3_7_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPDS.2021.3065737"},{"key":"e_1_3_3_8_2","doi-asserted-by":"publisher","DOI":"10.1145\/1458082.1458122"},{"key":"e_1_3_3_9_2","first-page":"13","volume-title":"2021 IEEE International Symposium on Performance Analysis of Systems and Software","author":"Baruah Trinayan","year":"2021","unstructured":"Trinayan Baruah, Kaustubh Shivdikar, Shi Dong, Yifan Sun, Saiful A. Mojumder, Kihoon Jung, Jos\u00e9 L Abell\u00e1n, Yash Ukidave, Ajay Joshi, John Kim, et\u00a0al. 2021. GNNMark: A benchmark suite to characterize graph neural network training on GPUs. In 2021 IEEE International Symposium on Performance Analysis of Systems and Software. 13\u201323."},{"key":"e_1_3_3_10_2","first-page":"1","article-title":"Relational inductive biases, deep learning, and graph networks","author":"Battaglia Peter W.","year":"2018","unstructured":"Peter W. Battaglia, Jessica B. Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, et\u00a0al. 2018. 
Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261 (2018), 1\u201340.","journal-title":"arXiv preprint arXiv:1806.01261"},{"key":"e_1_3_3_11_2","first-page":"1","article-title":"Parallel and distributed graph neural networks: An in-depth concurrency analysis","author":"Besta Maciej","year":"2022","unstructured":"Maciej Besta and Torsten Hoefler. 2022. Parallel and distributed graph neural networks: An in-depth concurrency analysis. arXiv preprint arXiv:2205.09702 (2022), 1\u201327.","journal-title":"arXiv preprint arXiv:2205.09702"},{"key":"e_1_3_3_12_2","doi-asserted-by":"publisher","DOI":"10.1145\/2503210.2503293"},{"key":"e_1_3_3_13_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2021.04.039"},{"key":"e_1_3_3_14_2","first-page":"1","article-title":"GPU performance analysis and optimisation","author":"Bradley Thomas","year":"2012","unstructured":"Thomas Bradley. 2012. GPU performance analysis and optimisation. NVIDIA Corporation (2012), 1\u2013117.","journal-title":"NVIDIA Corporation"},{"key":"e_1_3_3_15_2","doi-asserted-by":"publisher","DOI":"10.1145\/3447786.3456233"},{"key":"e_1_3_3_16_2","doi-asserted-by":"publisher","DOI":"10.1145\/3572848.3577528"},{"key":"e_1_3_3_17_2","doi-asserted-by":"publisher","DOI":"10.1109\/71.780863"},{"key":"e_1_3_3_18_2","first-page":"1","article-title":"CogDL: An extensive toolkit for deep learning on graphs","author":"Cen Yukuo","year":"2021","unstructured":"Yukuo Cen, Zhenyu Hou, Yan Wang, Qibin Chen, Yizhen Luo, Xingcheng Yao, Aohan Zeng, Shiguang Guo, Peng Zhang, Guohao Dai, et\u00a0al. 2021. CogDL: An extensive toolkit for deep learning on graphs. 
arXiv preprint arXiv:2103.00959 (2021), 1\u201311.","journal-title":"arXiv preprint arXiv:2103.00959"},{"key":"e_1_3_3_19_2","first-page":"1","article-title":"Distributed graph neural network training with periodic historical embedding synchronization","author":"Chai Zheng","year":"2022","unstructured":"Zheng Chai, Guangji Bai, Liang Zhao, and Yue Cheng. 2022. Distributed graph neural network training with periodic historical embedding synchronization. arXiv preprint arXiv:2206.00057 (2022), 1\u201320.","journal-title":"arXiv preprint arXiv:2206.00057"},{"key":"e_1_3_3_20_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.knosys.2021.107611"},{"key":"e_1_3_3_21_2","first-page":"1","volume-title":"International Conference on Learning Representations","author":"Chen Jie","year":"2018","unstructured":"Jie Chen, Tengfei Ma, and Cao Xiao. 2018. FastGCN: Fast learning with graph convolutional networks via importance sampling. In International Conference on Learning Representations. 1\u201315."},{"key":"e_1_3_3_22_2","first-page":"1","article-title":"Revisiting distributed synchronous SGD","author":"Chen Jianmin","year":"2016","unstructured":"Jianmin Chen, Xinghao Pan, Rajat Monga, Samy Bengio, and Rafal Jozefowicz. 2016. Revisiting distributed synchronous SGD. arXiv preprint arXiv:1604.00981 (2016), 1\u201310.","journal-title":"arXiv preprint arXiv:1604.00981"},{"key":"e_1_3_3_23_2","series-title":"Proceedings of Machine Learning Research","first-page":"942","volume-title":"Proceedings of the 35th International Conference on Machine Learning","volume":"80","author":"Chen Jianfei","year":"2018","unstructured":"Jianfei Chen, Jun Zhu, and Le Song. 2018. Stochastic training of graph convolutional networks with variance reduction. In Proceedings of the 35th International Conference on Machine Learning(Proceedings of Machine Learning Research, Vol. 80). 
942\u2013950."},{"key":"e_1_3_3_24_2","doi-asserted-by":"publisher","DOI":"10.1109\/TCAD.2021.3079142"},{"key":"e_1_3_3_25_2","doi-asserted-by":"publisher","DOI":"10.1145\/3292500.3330925"},{"key":"e_1_3_3_26_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v34i01.5400"},{"issue":"101","key":"e_1_3_3_27_2","first-page":"102","article-title":"Dawnbench: An end-to-end deep learning benchmark and competition","volume":"100","author":"Coleman Cody","year":"2017","unstructured":"Cody Coleman, Deepak Narayanan, Daniel Kang, Tian Zhao, Jian Zhang, Luigi Nardi, Peter Bailis, Kunle Olukotun, Chris R\u00e9, and Matei Zaharia. 2017. Dawnbench: An end-to-end deep learning benchmark and competition. Training 100, 101 (2017), 102.","journal-title":"Training"},{"key":"e_1_3_3_28_2","doi-asserted-by":"publisher","DOI":"10.1145\/3394486.3403192"},{"key":"e_1_3_3_29_2","first-page":"13260","volume-title":"Advances in Neural Information Processing Systems","author":"Corso Gabriele","year":"2020","unstructured":"Gabriele Corso, Luca Cavalleri, Dominique Beaini, Pietro Li\u00f2, and Petar Velickovic. 2020. Principal neighbourhood aggregation for graph nets. In Advances in Neural Information Processing Systems. 13260\u201313271."},{"key":"e_1_3_3_30_2","first-page":"711","volume-title":"Proceedings of the VLDB Endowment","volume":"16","author":"Demirci Gunduz Vehbi","year":"2023","unstructured":"Gunduz Vehbi Demirci, Aparajita Haldar, and Hakan Ferhatosmanoglu. 2023. Scalable graph convolutional network training on distributed-memory systems. In Proceedings of the VLDB Endowment, Vol. 16. 
711\u2013724."},{"key":"e_1_3_3_31_2","doi-asserted-by":"publisher","DOI":"10.24963\/ijcai.2021\/320"},{"key":"e_1_3_3_32_2","doi-asserted-by":"publisher","DOI":"10.1145\/3292500.3330958"},{"key":"e_1_3_3_33_2","doi-asserted-by":"publisher","DOI":"10.1145\/3447548.3467437"},{"key":"e_1_3_3_34_2","doi-asserted-by":"publisher","DOI":"10.1145\/3038228.3038239"},{"key":"e_1_3_3_35_2","first-page":"1","volume-title":"35th Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)","author":"Du Yuanqi","year":"2021","unstructured":"Yuanqi Du, Shiyu Wang, Xiaojie Guo, Hengning Cao, Shujie Hu, Junji Jiang, Aishwarya Varala, Abhinav Angirekula, and Liang Zhao. 2021. GraphGT: Machine learning datasets for graph generation and transformation. In 35th Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2). 1\u201317."},{"key":"e_1_3_3_36_2","first-page":"1","article-title":"Benchmarking graph neural networks","author":"Dwivedi Vijay Prakash","year":"2020","unstructured":"Vijay Prakash Dwivedi, Chaitanya K. Joshi, Thomas Laurent, Yoshua Bengio, and Xavier Bresson. 2020. Benchmarking graph neural networks. arXiv preprint arXiv:2003.00982 (2020), 1\u201347.","journal-title":"arXiv preprint arXiv:2003.00982"},{"key":"e_1_3_3_37_2","doi-asserted-by":"publisher","DOI":"10.1145\/3318464.3389745"},{"key":"e_1_3_3_38_2","doi-asserted-by":"publisher","DOI":"10.1145\/3308558.3313488"},{"key":"e_1_3_3_39_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICTAI50040.2020.00198"},{"key":"e_1_3_3_40_2","first-page":"1","article-title":"Fast graph representation learning with PyTorch geometric","author":"Fey Matthias","year":"2019","unstructured":"Matthias Fey and Jan Eric Lenssen. 2019. Fast graph representation learning with PyTorch geometric. 
arXiv preprint arXiv:1903.02428 (2019), 1\u20139.","journal-title":"arXiv preprint arXiv:1903.02428"},{"key":"e_1_3_3_41_2","doi-asserted-by":"publisher","DOI":"10.5555\/3295222.3295399"},{"key":"e_1_3_3_42_2","first-page":"1","article-title":"SIGN: Scalable inception graph neural networks","author":"Frasca Fabrizio","year":"2020","unstructured":"Fabrizio Frasca, Emanuele Rossi, Davide Eynard, Ben Chamberlain, Michael Bronstein, and Federico Monti. 2020. SIGN: Scalable inception graph neural networks. arXiv preprint arXiv:2004.11198 (2020), 1\u201317.","journal-title":"arXiv preprint arXiv:2004.11198"},{"key":"e_1_3_3_43_2","first-page":"1","volume-title":"Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1)","author":"Freitas Scott","year":"2021","unstructured":"Scott Freitas, Yuxiao Dong, Joshua Neil, and Duen Horng Chau. 2021. A large-scale database for graph representation learning. In Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1). 1\u201313."},{"key":"e_1_3_3_44_2","doi-asserted-by":"publisher","DOI":"10.1145\/3502181.3531467"},{"key":"e_1_3_3_45_2","doi-asserted-by":"publisher","DOI":"10.1145\/3366423.3380297"},{"key":"e_1_3_3_46_2","doi-asserted-by":"publisher","DOI":"10.1007\/s41019-023-00222-x"},{"key":"e_1_3_3_47_2","first-page":"551","volume-title":"15th USENIX Symposium on Operating Systems Design and Implementation","author":"Gandhi Swapnil","year":"2021","unstructured":"Swapnil Gandhi and Anand Padmanabha Iyer. 2021. P3: Distributed deep graph learning at scale. In 15th USENIX Symposium on Operating Systems Design and Implementation. 551\u2013568."},{"key":"e_1_3_3_48_2","first-page":"1263-1272","volume-title":"Proceedings of the 34th International Conference on Machine Learning","author":"Gilmer Justin","year":"2017","unstructured":"Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. 2017. 
Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning. 1263-1272."},{"key":"e_1_3_3_49_2","doi-asserted-by":"publisher","DOI":"10.1145\/3470496.3527403"},{"key":"e_1_3_3_50_2","first-page":"17","volume-title":"10th USENIX Symposium on Operating Systems Design and Implementation (OSDI 12)","author":"Gonzalez Joseph E.","year":"2012","unstructured":"Joseph E. Gonzalez, Yucheng Low, Haijie Gu, Danny Bickson, and Carlos Guestrin. 2012. PowerGraph: Distributed graph-parallel computation on natural graphs. In 10th USENIX Symposium on Operating Systems Design and Implementation (OSDI 12). 17\u201330."},{"key":"e_1_3_3_51_2","doi-asserted-by":"publisher","DOI":"10.1109\/MCI.2020.3039072"},{"key":"e_1_3_3_52_2","doi-asserted-by":"publisher","DOI":"10.1145\/3534540.3534691"},{"key":"e_1_3_3_53_2","doi-asserted-by":"publisher","DOI":"10.5555\/3294771.3294869"},{"key":"e_1_3_3_54_2","doi-asserted-by":"publisher","DOI":"10.1145\/3397271.3401063"},{"key":"e_1_3_3_55_2","first-page":"1","volume-title":"Proceedings of the 1st MLSys Workshop on Graph Neural Networks and Systems","author":"Hoang Loc","year":"2021","unstructured":"Loc Hoang, Xuhao Chen, Hochan Lee, Roshan Dathathri, Gurbinder Gill, and Keshav Pingali. 2021. Efficient distribution for deep learning on large graphs. In Proceedings of the 1st MLSys Workshop on Graph Neural Networks and Systems. 1\u20139."},{"key":"e_1_3_3_56_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.acl-main.392"},{"key":"e_1_3_3_57_2","first-page":"22118","article-title":"Open graph benchmark: Datasets for machine learning on graphs","volume":"33","author":"Hu Weihua","year":"2020","unstructured":"Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. 2020. Open graph benchmark: Datasets for machine learning on graphs. 
Advances in Neural Information Processing Systems 33 (2020), 22118\u201322133.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_3_58_2","doi-asserted-by":"publisher","DOI":"10.1109\/SC41405.2020.00075"},{"key":"e_1_3_3_59_2","doi-asserted-by":"publisher","DOI":"10.1109\/SC41405.2020.00076"},{"key":"e_1_3_3_60_2","doi-asserted-by":"publisher","DOI":"10.1145\/3437801.3441585"},{"key":"e_1_3_3_61_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2022.06.097"},{"key":"e_1_3_3_62_2","doi-asserted-by":"publisher","DOI":"10.1145\/3447548.3467408"},{"key":"e_1_3_3_63_2","doi-asserted-by":"publisher","DOI":"10.5555\/3327345.3327367"},{"key":"e_1_3_3_64_2","first-page":"1","volume-title":"Advances in Neural Information Processing Systems","author":"Huang Yanping","year":"2019","unstructured":"Yanping Huang, Youlong Cheng, Ankur Bapna, Orhan Firat, Mia Xu Chen, Dehao Chen, Hyoukjoong Lee, Jiquan Ngiam, Quoc V. Le, Yonghui Wu, and Zhifeng Chen. 2019. GPipe: Efficient training of giant neural networks using pipeline parallelism. In Advances in Neural Information Processing Systems, Vol. 32. 1\u201310."},{"key":"e_1_3_3_65_2","doi-asserted-by":"publisher","DOI":"10.1145\/3447786.3456244"},{"key":"e_1_3_3_66_2","first-page":"187","volume-title":"Proceedings of Machine Learning and Systems","author":"Jia Zhihao","year":"2020","unstructured":"Zhihao Jia, Sina Lin, Mingyu Gao, Matei Zaharia, and Alex Aiken. 2020. Improving the accuracy, scalability, and performance of graph neural networks with Roc. In Proceedings of Machine Learning and Systems. 187\u2013198."},{"key":"e_1_3_3_67_2","first-page":"1","article-title":"Communication-efficient sampling for distributed training of graph convolutional networks","author":"Jiang Peng","year":"2021","unstructured":"Peng Jiang and Masuma Akter Rumi. 2021. Communication-efficient sampling for distributed training of graph convolutional networks. 
arXiv preprint arXiv:2101.07706 (2021), 1\u201311.","journal-title":"arXiv preprint arXiv:2101.07706"},{"key":"e_1_3_3_68_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.eswa.2022.117921"},{"key":"e_1_3_3_69_2","unstructured":"Chaitanya K. Joshi. 2022. Recent advances in efficient and scalable graph neural networks. Retrieved from https:\/\/www.chaitjo.com\/post\/efficient-gnns\/ (2022)."},{"key":"e_1_3_3_70_2","first-page":"1","volume-title":"Proceedings of Machine Learning and Systems","author":"Kaler Tim","year":"2023","unstructured":"Tim Kaler, Alexandros Iliopoulos, Philip Murzynowski, Tao Schardl, Charles E. Leiserson, and Jie Chen. 2023. Communication-efficient graph neural networks with probabilistic neighborhood expansion analysis and caching. In Proceedings of Machine Learning and Systems. 1\u201314."},{"key":"e_1_3_3_71_2","doi-asserted-by":"publisher","DOI":"10.1137\/S1064827595287997"},{"key":"e_1_3_3_72_2","doi-asserted-by":"publisher","DOI":"10.1109\/IPDPS49936.2021.00108"},{"key":"e_1_3_3_73_2","first-page":"512","volume-title":"IEEE International Parallel and Distributed Processing Symposium","author":"Kurt S\u00fcreyya Emre","year":"2023","unstructured":"S\u00fcreyya Emre Kurt, Jinghua Yan, Aravind Sukumaran-Rajam, Prashant Pandey, and P. Sadayappan. 2023. Communication optimization for distributed execution of graph neural networks. In IEEE International Parallel and Distributed Processing Symposium. 512\u2013523."},{"key":"e_1_3_3_74_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPDS.2020.3003307"},{"key":"e_1_3_3_75_2","doi-asserted-by":"publisher","DOI":"10.1145\/3459637.3482237"},{"key":"e_1_3_3_76_2","first-page":"1","article-title":"GraphTheta: A distributed graph neural network learning system with flexible training strategy","author":"Li Houyi","year":"2021","unstructured":"Houyi Li, Yongchao Liu, Yongyong Li, Bin Huang, Peng Zhang, Guowei Zhang, Xintan Zeng, Kefeng Deng, Wenguang Chen, and Changhua He. 2021. 
GraphTheta: A distributed graph neural network learning system with flexible training strategy. arXiv preprint arXiv:2104.10569 (2021), 1\u201318.","journal-title":"arXiv preprint arXiv:2104.10569"},{"key":"e_1_3_3_77_2","doi-asserted-by":"publisher","DOI":"10.14778\/3529337.3529346"},{"key":"e_1_3_3_78_2","doi-asserted-by":"publisher","DOI":"10.1007\/s41019-023-00207-w"},{"key":"e_1_3_3_79_2","doi-asserted-by":"publisher","DOI":"10.14778\/3415478.3415530"},{"key":"e_1_3_3_80_2","doi-asserted-by":"publisher","DOI":"10.1145\/3447548.3467311"},{"key":"e_1_3_3_81_2","doi-asserted-by":"publisher","DOI":"10.1145\/3419111.3421281"},{"key":"e_1_3_3_82_2","first-page":"1","article-title":"DIG: A turnkey library for diving into graph deep learning research","volume":"22","author":"Liu Meng","year":"2021","unstructured":"Meng Liu, Youzhi Luo, Limei Wang, Yaochen Xie, Hao Yuan, Shurui Gui, Haiyang Yu, Zhao Xu, Jingtun Zhang, Yi Liu, et\u00a0al. 2021. DIG: A turnkey library for diving into graph deep learning research. Journal of Machine Learning Research 22 (2021), 1\u20139.","journal-title":"Journal of Machine Learning Research"},{"key":"e_1_3_3_83_2","first-page":"103","volume-title":"Proceedings of the 20th USENIX Symposium on Networked Systems Design and Implementation","author":"Liu Tianfeng","year":"2023","unstructured":"Tianfeng Liu, Yangrui Chen, Dan Li, Chuan Wu, Yibo Zhu, Jun He, Yanghua Peng, Hongzheng Chen, Hongzhi Chen, and Chuanxiong Guo. 2023. BGL: GPU-efficient GNN training by optimizing graph data I\/O and preprocessing. In Proceedings of the 20th USENIX Symposium on Networked Systems Design and Implementation. 103\u2013118."},{"key":"e_1_3_3_84_2","doi-asserted-by":"publisher","DOI":"10.24963\/ijcai.2022\/772"},{"key":"e_1_3_3_85_2","first-page":"1","volume-title":"International Conference on Learning Representations","author":"Liu Zirui","year":"2021","unstructured":"Zirui Liu, Kaixiong Zhou, Fan Yang, Li Li, Rui Chen, and Xia Hu. 2021. 
EXACT: Scalable graph neural networks training via extreme activation compression. In International Conference on Learning Representations. 1\u201332."},{"key":"e_1_3_3_86_2","doi-asserted-by":"publisher","DOI":"10.14778\/2212351.2212354"},{"key":"e_1_3_3_87_2","first-page":"443","volume-title":"2019 USENIX Annual Technical Conference","author":"Ma Lingxiao","year":"2019","unstructured":"Lingxiao Ma, Zhi Yang, Youshan Miao, Jilong Xue, Ming Wu, Lidong Zhou, and Yafei Dai. 2019. NeuGraph: Parallel deep neural network computation on large graphs. In 2019 USENIX Annual Technical Conference. 443\u2013458."},{"key":"e_1_3_3_88_2","doi-asserted-by":"publisher","DOI":"10.1145\/1807167.1807184"},{"key":"e_1_3_3_89_2","first-page":"336","volume-title":"Proceedings of Machine Learning and Systems","author":"Mattson Peter","year":"2020","unstructured":"Peter Mattson, Christine Cheng, Gregory Diamos, Cody Coleman, Paulius Micikevicius, David Patterson, Hanlin Tang, Gu-Yeon Wei, Peter Bailis, Victor Bittorf, David Brooks, Dehao Chen, Debo Dutta, Udit Gupta, Kim Hazelwood, Andy Hock, Xinyuan Huang, Daniel Kang, David Kanter, Naveen Kumar, Jeffery Liao, Deepak Narayanan, Tayo Oguntebi, Gennady Pekhimenko, Lillian Pentecost, Vijay Janapa Reddi, Taylor Robie, Tom St. John, Carole-Jean Wu, Lingjie Xu, Cliff Young, and Matei Zaharia. 2020. MLPerf training benchmark. In Proceedings of Machine Learning and Systems. 336\u2013349."},{"key":"e_1_3_3_90_2","doi-asserted-by":"publisher","DOI":"10.1145\/3458817.3480856"},{"key":"e_1_3_3_91_2","doi-asserted-by":"publisher","DOI":"10.1145\/3534678.3539038"},{"key":"e_1_3_3_92_2","first-page":"265","volume-title":"Proceedings of Machine Learning and Systems","author":"Mostafa Hesham","year":"2022","unstructured":"Hesham Mostafa. 2022. Sequential aggregation and rematerialization: Distributed full-batch training of graph neural networks on large graphs. In Proceedings of Machine Learning and Systems. 
265\u2013275."},{"key":"e_1_3_3_93_2","doi-asserted-by":"publisher","DOI":"10.1145\/3341301.3359646"},{"key":"e_1_3_3_94_2","doi-asserted-by":"publisher","DOI":"10.1109\/SC41405.2020.00060"},{"key":"e_1_3_3_95_2","doi-asserted-by":"publisher","DOI":"10.14778\/3538598.3538614"},{"key":"e_1_3_3_96_2","first-page":"256","volume-title":"2021 IEEE International Parallel and Distributed Processing Symposium","author":"Rahman Md Khaledur","year":"2021","unstructured":"Md Khaledur Rahman, Majedul Haque Sujon, and Ariful Azad. 2021. FusedMM: A unified SDDMM-SpMM kernel for graph embedding and graph neural networks. In 2021 IEEE International Parallel and Distributed Processing Symposium. 256\u2013266."},{"key":"e_1_3_3_97_2","first-page":"1","article-title":"Learn locally, correct globally: A distributed algorithm for training graph neural networks","author":"Ramezani Morteza","year":"2021","unstructured":"Morteza Ramezani, Weilin Cong, Mehrdad Mahdavi, Mahmut T. Kandemir, and Anand Sivasubramaniam. 2021. Learn locally, correct globally: A distributed algorithm for training graph neural networks. arXiv preprint arXiv:2111.08202 (2021), 1\u201332.","journal-title":"arXiv preprint arXiv:2111.08202"},{"key":"e_1_3_3_98_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.isci.2021.102393"},{"key":"e_1_3_3_99_2","first-page":"1","volume-title":"Advances in Neural Information Processing Systems","author":"Recht Benjamin","year":"2011","unstructured":"Benjamin Recht, Christopher Re, Stephen Wright, and Feng Niu. 2011. Hogwild!: A lock-free approach to parallelizing stochastic gradient descent. In Advances in Neural Information Processing Systems. 1\u20139."},{"key":"e_1_3_3_100_2","first-page":"4470","volume-title":"Proceedings of the 35th International Conference on Machine Learning","author":"Sanchez-Gonzalez Alvaro","year":"2018","unstructured":"Alvaro Sanchez-Gonzalez, Nicolas Heess, Jost Tobias Springenberg, Josh Merel, Martin Riedmiller, Raia Hadsell, and Peter Battaglia. 
2018. Graph networks as learnable physics engines for inference and control. In Proceedings of the 35th International Conference on Machine Learning. 4470\u20134479."},{"key":"e_1_3_3_101_2","doi-asserted-by":"publisher","DOI":"10.1145\/3447818.3461472"},{"key":"e_1_3_3_102_2","doi-asserted-by":"publisher","DOI":"10.1145\/3524059.3532384"},{"key":"e_1_3_3_103_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICBAIE52039.2021.9390066"},{"key":"e_1_3_3_104_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICDE53745.2022.00053"},{"key":"e_1_3_3_105_2","doi-asserted-by":"publisher","DOI":"10.1145\/2339530.2339722"},{"key":"e_1_3_3_106_2","first-page":"165\u2014179","volume-title":"Proceedings of the USENIX Annual Technical Conference","author":"Sun Jie","year":"2023","unstructured":"Jie Sun, Li Su, Zuocheng Shi, Wenting Shen, Zeke Wang, Lei Wang, Jie Zhang, Yong Li, Wenyuan Yu, Jingren Zhou, and Fei Wu. 2023. Legion: Automatically pushing the envelope of multi-GPU system for billion-scale GNN training. In Proceedings of the USENIX Annual Technical Conference. 165\u2014179."},{"key":"e_1_3_3_107_2","first-page":"1","article-title":"Degree-Quant: Quantization-aware training for graph neural networks","author":"Tailor Shyam A.","year":"2020","unstructured":"Shyam A. Tailor, Javier Fernandez-Marques, and Nicholas D. Lane. 2020. Degree-Quant: Quantization-aware training for graph neural networks. arXiv preprint arXiv:2008.05000 (2020), 1\u201322.","journal-title":"arXiv preprint arXiv:2008.05000"},{"key":"e_1_3_3_108_2","doi-asserted-by":"publisher","DOI":"10.3389\/fdata.2019.00002"},{"key":"e_1_3_3_109_2","first-page":"495","volume-title":"USENIX Symposium on Operating Systems Design and Implementation","author":"Thorpe John","year":"2021","unstructured":"John Thorpe, Yifan Qiao, Jonathan Eyolfson, Shen Teng, Guanzhou Hu, Zhihao Jia, Jinliang Wei, Keval Vora, Ravi Netravali, Miryung Kim, et\u00a0al. 2021. 
Dorylus: Affordable, scalable, and accurate GNN training with distributed CPU servers and serverless threads. In USENIX Symposium on Operating Systems Design and Implementation. 495\u2013514."},{"key":"e_1_3_3_110_2","doi-asserted-by":"publisher","DOI":"10.1109\/IPDPS47924.2020.00100"},{"key":"e_1_3_3_111_2","doi-asserted-by":"publisher","DOI":"10.5555\/3433701.3433794"},{"key":"e_1_3_3_112_2","first-page":"1","article-title":"The evolution of distributed systems for graph neural networks and their origin in graph processing and deep learning: A survey","author":"Vatter Jana","year":"2023","unstructured":"Jana Vatter, Ruben Mayer, and Hans-Arno Jacobsen. 2023. The evolution of distributed systems for graph neural networks and their origin in graph processing and deep learning: A survey. Comput. Surveys (2023), 1\u201335.","journal-title":"Comput. Surveys"},{"key":"e_1_3_3_113_2","first-page":"1","volume-title":"International Conference on Learning Representations","year":"2018","unstructured":"Petar Veli\u010dkovi\u0107, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Li\u00f2, and Yoshua Bengio. 2018. Graph attention networks. In International Conference on Learning Representations. 1\u201312."},{"key":"e_1_3_3_114_2","doi-asserted-by":"publisher","DOI":"10.1145\/3377454"},{"key":"e_1_3_3_115_2","doi-asserted-by":"publisher","DOI":"10.1145\/3352460.3358307"},{"key":"e_1_3_3_116_2","first-page":"1","volume-title":"Proceedings of Machine Learning and Systems","author":"Wan Borui","year":"2023","unstructured":"Borui Wan, Juntao Zhao, and Chuan Wu. 2023. Adaptive message quantization and parallelization for distributed full-graph GNN training. In Proceedings of Machine Learning and Systems. 1\u201315."},{"key":"e_1_3_3_117_2","first-page":"673","volume-title":"Proceedings of Machine Learning and Systems","author":"Wan Cheng","year":"2022","unstructured":"Cheng Wan, Youjie Li, Ang Li, Nam Sung Kim, and Yingyan Lin. 2022. 
BNS-GCN: Efficient full-graph training of graph convolutional networks with partition-parallelism and random boundary node sampling. In Proceedings of Machine Learning and Systems. 673\u2013693."},{"key":"e_1_3_3_118_2","first-page":"1","volume-title":"International Conference on Learning Representations","author":"Wan Cheng","year":"2022","unstructured":"Cheng Wan, Youjie Li, Cameron R. Wolfe, Anastasios Kyrillidis, Nam Sung Kim, and Yingyan Lin. 2022. PipeGCN: Efficient full-graph training of graph convolutional networks with pipelined feature communication. In International Conference on Learning Representations. 1\u201324."},{"key":"e_1_3_3_119_2","doi-asserted-by":"publisher","DOI":"10.1145\/3589288"},{"key":"e_1_3_3_120_2","doi-asserted-by":"publisher","DOI":"10.1007\/s11280-021-00878-3"},{"key":"e_1_3_3_121_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR46437.2021.00161"},{"key":"e_1_3_3_122_2","doi-asserted-by":"publisher","DOI":"10.1145\/3447786.3456229"},{"key":"e_1_3_3_123_2","doi-asserted-by":"publisher","DOI":"10.1109\/TKDE.2020.3015777"},{"key":"e_1_3_3_124_2","first-page":"1","volume-title":"ICLR Workshop on Representation Learning on Graphs and Manifolds","author":"Wang Minjie","year":"2019","unstructured":"Minjie Wang, Lingfan Yu, Da Zheng, Quan Gan, Yu Gai, Zihao Ye, Mufei Li, Jinjing Zhou, Qi Huang, Chao Ma, Ziyue Huang, Qipeng Guo, Hao Zhang, Haibin Lin, Junbo Zhao, Jinyang Li, Alexander J. Smola, and Zheng Zhang. 2019. Deep graph library: Towards efficient and scalable deep learning on graphs. In ICLR Workshop on Representation Learning on Graphs and Manifolds. 1\u20137."},{"key":"e_1_3_3_125_2","first-page":"304","volume-title":"Proceedings of the International Conference on Parallel Architectures and Compilation Techniques","author":"Wang Pengyu","year":"2021","unstructured":"Pengyu Wang, Chao Li, Jing Wang, Taolei Wang, Lu Zhang, Jingwen Leng, Quan Chen, and Minyi Guo. 2021. 
Skywalker: Efficient alias-method-based graph sampling and random walk on GPUs. In Proceedings of the International Conference on Parallel Architectures and Compilation Techniques. 304\u2013317."},{"key":"e_1_3_3_126_2","doi-asserted-by":"publisher","DOI":"10.1145\/3308558.3313562"},{"key":"e_1_3_3_127_2","doi-asserted-by":"publisher","DOI":"10.1145\/3503221.3508408"},{"key":"e_1_3_3_128_2","first-page":"515","volume-title":"15th USENIX Symposium on Operating Systems Design and Implementation","author":"Wang Yuke","year":"2021","unstructured":"Yuke Wang, Boyuan Feng, Gushu Li, Shuangchen Li, Lei Deng, Yuan Xie, and Yufei Ding. 2021. GNNAdvisor: An adaptive and efficient runtime system for GNN acceleration on GPUs. In 15th USENIX Symposium on Operating Systems Design and Implementation. 515\u2013531."},{"key":"e_1_3_3_129_2","first-page":"1","volume-title":"International Conference on Learning Representations","author":"Welling Max","year":"2017","unstructured":"Max Welling and Thomas N. Kipf. 2017. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations. 1\u201314."},{"key":"e_1_3_3_130_2","first-page":"1","article-title":"GIST: Distributed training for large-scale graph convolutional networks","author":"Wolfe Cameron R.","year":"2021","unstructured":"Cameron R. Wolfe, Jingkang Yang, Arindam Chowdhury, Chen Dun, Artun Bayer, Santiago Segarra, and Anastasios Kyrillidis. 2021. GIST: Distributed training for large-scale graph convolutional networks. arXiv preprint arXiv:2102.10424 (2021), 1\u201328.","journal-title":"arXiv preprint arXiv:2102.10424"},{"key":"e_1_3_3_131_2","first-page":"6861","volume-title":"Proceedings of the 36th International Conference on Machine Learning","author":"Wu Felix","year":"2019","unstructured":"Felix Wu, Amauri Souza, Tianyi Zhang, Christopher Fifty, Tao Yu, and Kilian Weinberger. 2019. Simplifying graph convolutional networks. 
In Proceedings of the 36th International Conference on Machine Learning. 6861\u20136871."},{"key":"e_1_3_3_132_2","first-page":"1","article-title":"Graph neural networks in recommender systems: A survey","author":"Wu Shiwen","year":"2022","unstructured":"Shiwen Wu, Fei Sun, Wentao Zhang, Xu Xie, and Bin Cui. 2022. Graph neural networks in recommender systems: A survey. Comput. Surveys (2022), 1\u201337.","journal-title":"Comput. Surveys"},{"key":"e_1_3_3_133_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v34i01.5455"},{"key":"e_1_3_3_134_2","doi-asserted-by":"publisher","DOI":"10.1145\/3447786.3456247"},{"key":"e_1_3_3_135_2","doi-asserted-by":"publisher","DOI":"10.1109\/tnnls.2020.2978386"},{"key":"e_1_3_3_136_2","doi-asserted-by":"publisher","DOI":"10.1007\/s41019-023-00226-7"},{"issue":"1","key":"e_1_3_3_137_2","first-page":"1","article-title":"Self-supervised learning of graph neural networks: A unified review","author":"Xie Yaochen","year":"2022","unstructured":"Yaochen Xie, Zhao Xu, Jingtun Zhang, Zhengyang Wang, and Shuiwang Ji. 2022. Self-supervised learning of graph neural networks: A unified review. IEEE Transactions on Pattern Analysis and Machine Intelligence 1 (2022), 1\u20131.","journal-title":"IEEE Transactions on Pattern Analysis and Machine Intelligence"},{"key":"e_1_3_3_138_2","first-page":"515","volume-title":"Proceedings of Machine Learning and Systems","author":"Xie Zhiqiang","year":"2022","unstructured":"Zhiqiang Xie, Minjie Wang, Zihao Ye, Zheng Zhang, and Rui Fan. 2022. Graphiler: Optimizing graph neural networks with message passing data flow graph. In Proceedings of Machine Learning and Systems. 515\u2013528."},{"key":"e_1_3_3_139_2","first-page":"1","volume-title":"International Conference on Learning Representations","author":"Xu Keyulu","year":"2019","unstructured":"Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. 2019. How powerful are graph neural networks? In International Conference on Learning Representations. 
1\u201317."},{"key":"e_1_3_3_140_2","doi-asserted-by":"publisher","DOI":"10.1109\/TKDE.2014.2377743"},{"key":"e_1_3_3_141_2","doi-asserted-by":"publisher","DOI":"10.48550\/ARXIV.2202.00075"},{"key":"e_1_3_3_142_2","doi-asserted-by":"publisher","DOI":"10.1145\/3394486.3403236"},{"key":"e_1_3_3_143_2","doi-asserted-by":"publisher","DOI":"10.1145\/3492321.3519557"},{"key":"e_1_3_3_144_2","doi-asserted-by":"publisher","DOI":"10.1145\/3341301.3359634"},{"key":"e_1_3_3_145_2","doi-asserted-by":"publisher","DOI":"10.1145\/3511808.3557443"},{"key":"e_1_3_3_146_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.00220"},{"key":"e_1_3_3_147_2","doi-asserted-by":"publisher","DOI":"10.1145\/3373087.3375312"},{"key":"e_1_3_3_148_2","first-page":"1","volume-title":"International Conference on Learning Representations","author":"Zeng Hanqing","year":"2020","unstructured":"Hanqing Zeng, Hongkuan Zhou, Ajitesh Srivastava, Rajgopal Kannan, and Viktor Prasanna. 2020. GraphSAINT: Graph sampling based inductive learning method. In International Conference on Learning Representations. 1\u201319."},{"key":"e_1_3_3_149_2","doi-asserted-by":"publisher","DOI":"10.1145\/3292500.3330961"},{"key":"e_1_3_3_150_2","doi-asserted-by":"publisher","DOI":"10.14778\/3415478.3415539"},{"key":"e_1_3_3_151_2","first-page":"7364","volume-title":"Proceedings of the 36th International Conference on Machine Learning","author":"Zhang Guo","year":"2019","unstructured":"Guo Zhang, Hao He, and Dina Katabi. 2019. Circuit-GNN: Graph neural networks for distributed circuit design. In Proceedings of the 36th International Conference on Machine Learning. 7364\u20137373."},{"key":"e_1_3_3_152_2","first-page":"467","volume-title":"Proceedings of Machine Learning and Systems","author":"Zhang Hengrui","year":"2022","unstructured":"Hengrui Zhang, Zhongming Yu, Guohao Dai, Guyue Huang, Yufei Ding, Yuan Xie, and Yu Wang. 2022. 
Understanding GNN computational graph: A coordinated computation, IO, and memory perspective. In Proceedings of Machine Learning and Systems. 467\u2013484."},{"key":"e_1_3_3_153_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v34i01.5471"},{"key":"e_1_3_3_154_2","doi-asserted-by":"publisher","DOI":"10.1145\/3318464.3389706"},{"key":"e_1_3_3_155_2","doi-asserted-by":"publisher","DOI":"10.1145\/3589311"},{"key":"e_1_3_3_156_2","first-page":"1","article-title":"Graph neural networks and their current applications in bioinformatics","volume":"12","author":"Zhang Xiao-Meng","year":"2021","unstructured":"Xiao-Meng Zhang, Li Liang, Lin Liu, and Ming-Jing Tang. 2021. Graph neural networks and their current applications in bioinformatics. Frontiers in Genetics 12 (2021), 1\u201322.","journal-title":"Frontiers in Genetics"},{"key":"e_1_3_3_157_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.acl-main.31"},{"key":"e_1_3_3_158_2","doi-asserted-by":"publisher","DOI":"10.1109\/TKDE.2020.2981333"},{"key":"e_1_3_3_159_2","doi-asserted-by":"publisher","DOI":"10.1109\/BigData52589.2021.9671931"},{"key":"e_1_3_3_160_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v35i5.16600"},{"key":"e_1_3_3_161_2","first-page":"1","article-title":"Distributed optimization of graph convolutional network using subgraph variance","author":"Zhao Taige","year":"2021","unstructured":"Taige Zhao, Xiangyu Song, Jianxin Li, Wei Luo, and Imran Razzak. 2021. Distributed optimization of graph convolutional network using subgraph variance. arXiv preprint arXiv:2110.02987 (2021), 1\u201312.","journal-title":"arXiv preprint arXiv:2110.02987"},{"key":"e_1_3_3_162_2","first-page":"1","article-title":"Learned low precision graph neural networks","author":"Zhao Yiren","year":"2020","unstructured":"Yiren Zhao, Duo Wang, Daniel Bates, Robert Mullins, Mateja Jamnik, and Pietro Lio. 2020. Learned low precision graph neural networks. 
arXiv preprint arXiv:2009.09232 (2020), 1\u201314.","journal-title":"arXiv preprint arXiv:2009.09232"},{"key":"e_1_3_3_163_2","doi-asserted-by":"publisher","DOI":"10.14778\/3514061.3514069"},{"key":"e_1_3_3_164_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v34i01.5477"},{"key":"e_1_3_3_165_2","doi-asserted-by":"publisher","DOI":"10.1109\/IA351965.2020.00011"},{"key":"e_1_3_3_166_2","doi-asserted-by":"publisher","DOI":"10.1145\/3534678.3539177"},{"key":"e_1_3_3_167_2","first-page":"3414","volume-title":"The 23rd International Conference on Artificial Intelligence and Statistics","author":"Zheng Xun","year":"2020","unstructured":"Xun Zheng, Chen Dan, Bryon Aragam, Pradeep Ravikumar, and Eric P. Xing. 2020. Learning sparse nonparametric DAGs. In The 23rd International Conference on Artificial Intelligence and Statistics. 3414\u20133425."},{"key":"e_1_3_3_168_2","doi-asserted-by":"publisher","DOI":"10.1007\/s41019-023-00230-x"},{"key":"e_1_3_3_169_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.acl-main.549"},{"key":"e_1_3_3_170_2","doi-asserted-by":"publisher","DOI":"10.14778\/3461535.3461547"},{"key":"e_1_3_3_171_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.aiopen.2021.01.001"},{"key":"e_1_3_3_172_2","first-page":"1","article-title":"GCNear: A hybrid architecture for efficient GCN training with near-memory processing","author":"Zhou Zhe","year":"2021","unstructured":"Zhe Zhou, Cong Li, Xuechao Wei, and Guangyu Sun. 2021. GCNear: A hybrid architecture for efficient GCN training with near-memory processing. 
arXiv preprint arXiv:2111.00680 (2021), 1\u201315.","journal-title":"arXiv preprint arXiv:2111.00680"},{"key":"e_1_3_3_173_2","doi-asserted-by":"publisher","DOI":"10.1109\/IISWC.2018.8573476"},{"key":"e_1_3_3_174_2","doi-asserted-by":"publisher","DOI":"10.14778\/3352063.3352127"},{"key":"e_1_3_3_175_2","doi-asserted-by":"publisher","DOI":"10.5555\/3454287.3455296"}],"container-title":["ACM Computing Surveys"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3648358","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3648358","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T00:04:13Z","timestamp":1750291453000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3648358"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,4,10]]},"references-count":174,"journal-issue":{"issue":"8","published-print":{"date-parts":[[2024,8,31]]}},"alternative-id":["10.1145\/3648358"],"URL":"https:\/\/doi.org\/10.1145\/3648358","relation":{},"ISSN":["0360-0300","1557-7341"],"issn-type":[{"value":"0360-0300","type":"print"},{"value":"1557-7341","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,4,10]]},"assertion":[{"value":"2022-10-31","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-01-31","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-04-10","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}