{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,2]],"date-time":"2026-04-02T09:42:22Z","timestamp":1775122942303,"version":"3.50.1"},"reference-count":45,"publisher":"Association for Computing Machinery (ACM)","issue":"9","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Proc. VLDB Endow."],"published-print":{"date-parts":[[2022,5]]},"abstract":"<jats:p>Graph neural networks (GNNs) have emerged due to their success at modeling graph data. Yet, it is challenging for GNNs to efficiently scale to large graphs. Thus, distributed GNNs come into play. To avoid communication caused by expensive data movement between workers, we propose Sancus, a staleness-aware communication-avoiding decentralized GNN system. By introducing a set of novel bounded embedding staleness metrics and adaptively skipping broadcasts, Sancus abstracts decentralized GNN processing as sequential matrix multiplication and uses historical embeddings via cache. Theoretically, we show bounded approximation errors of embeddings and gradients with convergence guarantee. Empirically, we evaluate Sancus with common GNN models via different system setups on large-scale benchmark datasets. 
Compared to SOTA works, Sancus can avoid up to 74% communication with at least 1.86X faster throughput on average without accuracy loss.<\/jats:p>","DOI":"10.14778\/3538598.3538614","type":"journal-article","created":{"date-parts":[[2022,7,27]],"date-time":"2022-07-27T17:12:31Z","timestamp":1658941951000},"page":"1937-1950","source":"Crossref","is-referenced-by-count":66,"title":["Sancus"],"prefix":"10.14778","volume":"15","author":[{"given":"Jingshu","family":"Peng","sequence":"first","affiliation":[{"name":"The Hong Kong University of Science and Technology"}]},{"given":"Zhao","family":"Chen","sequence":"additional","affiliation":[{"name":"The Hong Kong University of Science and Technology"}]},{"given":"Yingxia","family":"Shao","sequence":"additional","affiliation":[{"name":"Beijing University of Posts and Telecommunications"}]},{"given":"Yanyan","family":"Shen","sequence":"additional","affiliation":[{"name":"Shanghai Jiao Tong University"}]},{"given":"Lei","family":"Chen","sequence":"additional","affiliation":[{"name":"The Hong Kong University of Science and Technology"}]},{"given":"Jiannong","family":"Cao","sequence":"additional","affiliation":[{"name":"The Hong Kong Polytechnic University"}]}],"member":"320","published-online":{"date-parts":[[2022,7,27]]},"reference":[{"key":"e_1_2_1_1_1","unstructured":"2022. Lambda. Retrieved May 15, 2022 from https:\/\/aws.amazon.com\/lambda\/"},{"key":"e_1_2_1_2_1","doi-asserted-by":"publisher","DOI":"10.1145\/3477141"},{"key":"e_1_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.1145\/3447786.3456233"},{"key":"e_1_2_1_4_1","doi-asserted-by":"publisher","DOI":"10.5555\/1285358.1285359"},{"key":"e_1_2_1_5_1","volume-title":"Stochastic Training of Graph Convolutional Networks with Variance Reduction. In ICML 2018","author":"Chen Jianfei","year":"2018","unstructured":"Jianfei Chen, Jun Zhu, and Le Song. 2018. 
Stochastic Training of Graph Convolutional Networks with Variance Reduction. In ICML 2018, Stockholmsm\u00e4ssan, Stockholm, Sweden, July 10-15, 2018 (Proceedings of Machine Learning Research), Vol. 80. PMLR, 941--949. http:\/\/proceedings.mlr.press\/v80\/chen18p.html"},{"key":"e_1_2_1_6_1","article-title":"PowerLyra: Differentiated Graph Computation and Partitioning on Skewed Graphs","volume":"5","author":"Chen Rong","year":"2018","unstructured":"Rong Chen, Jiaxin Shi, Yanzhe Chen, Binyu Zang, Haibing Guan, and Haibo Chen. 2018. PowerLyra: Differentiated Graph Computation and Partitioning on Skewed Graphs. ACM Trans. Parallel Comput. 5, 3 (2018).","journal-title":"ACM Trans. Parallel Comput."},{"key":"e_1_2_1_7_1","volume-title":"Solving the Straggler Problem with Bounded Staleness. In HotOS XIV","author":"Cipar James","year":"2013","unstructured":"James Cipar, Qirong Ho, Jin Kyu Kim, Seunghak Lee, Gregory R. Ganger, Garth Gibson, Kimberly Keeton, and Eric P. Xing. 2013. Solving the Straggler Problem with Bounded Staleness. In HotOS XIV, Santa Ana Pueblo, New Mexico, USA, May 13-15, 2013. 
USENIX Association."},{"key":"e_1_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.1145\/3394486.3403192"},{"key":"e_1_2_1_9_1","volume-title":"Toward Understanding the Impact of Staleness in Distributed Machine Learning. In ICLR 2019","author":"Dai Wei","year":"2019","unstructured":"Wei Dai, Yi Zhou, Nanqing Dong, Hao Zhang, and Eric P. Xing. 2019. Toward Understanding the Impact of Staleness in Distributed Machine Learning. In ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net. https:\/\/openreview.net\/forum?id=BylQV305YQ"},{"key":"e_1_2_1_10_1","doi-asserted-by":"publisher","DOI":"10.14778\/3137765.3137801"},{"key":"e_1_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.1145\/3035918.3035942"},{"key":"e_1_2_1_12_1","volume-title":"ICML 2021","volume":"139","author":"Fey Matthias","year":"2021","unstructured":"Matthias Fey, Jan Eric Lenssen, Frank Weichert, and Jure Leskovec. 2021. GNNAutoScale: Scalable and Expressive Graph Neural Networks via Historical Embeddings. In ICML 2021, 18-24 July 2021, Virtual Event (Proceedings of Machine Learning Research), Vol. 139. PMLR, 3294--3304. http:\/\/proceedings.mlr.press\/v139\/fey21a.html"},{"key":"e_1_2_1_13_1","volume-title":"15th USENIX Symposium on Operating Systems Design and Implementation, OSDI 2021","author":"Gandhi Swapnil","year":"2021","unstructured":"Swapnil Gandhi and Anand Padmanabha Iyer. 2021. 
P3: Distributed Deep Graph Learning at Scale. In 15th USENIX Symposium on Operating Systems Design and Implementation, OSDI 2021, July 14-16, 2021. USENIX Association, 551--568. https:\/\/www.usenix.org\/conference\/osdi21\/presentation\/gandhi"},{"key":"e_1_2_1_14_1","doi-asserted-by":"publisher","DOI":"10.1145\/3210377.3210394"},{"key":"e_1_2_1_15_1","volume-title":"Open Graph Benchmark: Datasets for Machine Learning on Graphs. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020","author":"Hu Weihua","year":"2020","unstructured":"Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. 2020. Open Graph Benchmark: Datasets for Machine Learning on Graphs. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. 
https:\/\/proceedings.neurips.cc\/paper\/2020\/hash\/fb60d411a5c5b72b2e7d3527cfc84fd0-Abstract.html"},{"key":"e_1_2_1_16_1","volume-title":"Proceedings of Machine Learning and Systems 2020","author":"Jia Zhihao","year":"2020","unstructured":"Zhihao Jia, Sina Lin, Mingyu Gao, Matei Zaharia, and Alex Aiken. 2020. Improving the Accuracy, Scalability, and Performance of Graph Neural Networks with Roc. In Proceedings of Machine Learning and Systems 2020, MLSys 2020, Austin, TX, USA, March 2-4, 2020. mlsys.org. https:\/\/proceedings.mlsys.org\/book\/300.pdf"},{"key":"e_1_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1145\/3035918.3035933"},{"key":"e_1_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.1137\/S1064827595287997"},{"key":"e_1_2_1_19_1","doi-asserted-by":"crossref","unstructured":"Won Kim and Frederick H. Lochovsky (Eds.). 1989. Object-Oriented Concepts, Databases, and Applications. ACM Press and Addison-Wesley.","DOI":"10.1145\/63320"},{"key":"e_1_2_1_20_1","volume-title":"Semi-Supervised Classification with Graph Convolutional Networks. In ICLR 2017","author":"Kipf Thomas N.","year":"2017","unstructured":"Thomas N. Kipf and Max Welling. 2017. Semi-Supervised Classification with Graph Convolutional Networks. 
In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net. https:\/\/openreview.net\/forum?id=SJU4ayYgl"},{"key":"e_1_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.1109\/ISPASS.2019.00029"},{"key":"e_1_2_1_22_1","volume-title":"EuroSys 2015","author":"Li Hao","year":"2015","unstructured":"Hao Li, Asim Kadav, Erik Kruus, and Cristian Ungureanu. 2015. MALT: distributed data-parallelism for existing ML applications. In EuroSys 2015, Bordeaux, France, April 21-24, 2015. ACM."},{"key":"e_1_2_1_23_1","volume-title":"Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017","author":"Lian Xiangru","year":"2017","unstructured":"Xiangru Lian, Ce Zhang, Huan Zhang, Cho-Jui Hsieh, Wei Zhang, and Ji Liu. 2017. Can Decentralized Algorithms Outperform Centralized Algorithms? A Case Study for Decentralized Parallel Stochastic Gradient Descent. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA. 5330--5340. 
https:\/\/proceedings.neurips.cc\/paper\/2017\/hash\/f75526659f31040afeb61cb7133e4e6d-Abstract.html"},{"key":"e_1_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1145\/3419111.3421281"},{"key":"e_1_2_1_25_1","volume-title":"Proc. VLDB Endow. 13","author":"Liu Husong","year":"2020","unstructured":"Husong Liu, Shengliang Lu, Xinyu Chen, and Bingsheng He. 2020. G3: When Graph Neural Networks Meet Parallel Graph Processing Systems on GPUs. Proc. VLDB Endow. 13, 12 (2020). http:\/\/www.vldb.org\/pvldb\/vol13\/p2813-liu.pdf"},{"key":"e_1_2_1_26_1","article-title":"Prague: High-Performance Heterogeneity-Aware Asynchronous Decentralized Training","author":"Luo Qinyi","year":"2020","unstructured":"Qinyi Luo, Jiaao He, Youwei Zhuo, and Xuehai Qian. 2020. Prague: High-Performance Heterogeneity-Aware Asynchronous Decentralized Training. In ASPLOS '20: Architectural Support for Programming Languages and Operating Systems, Lausanne, Switzerland, March 16-20, 2020. ACM, 401--416."},{"key":"e_1_2_1_27_1","volume-title":"NeuGraph: Parallel Deep Neural Network Computation on Large Graphs. In 2019 USENIX Annual Technical Conference, USENIX ATC 2019","author":"Ma Lingxiao","year":"2019","unstructured":"Lingxiao Ma, Zhi Yang, Youshan Miao, Jilong Xue, Ming Wu, Lidong Zhou, and Yafei Dai. 2019. 
NeuGraph: Parallel Deep Neural Network Computation on Large Graphs. In 2019 USENIX Annual Technical Conference, USENIX ATC 2019, Renton, WA, USA, July 10-12, 2019. USENIX Association."},{"key":"e_1_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.1145\/1807167.1807184"},{"key":"e_1_2_1_29_1","doi-asserted-by":"publisher","DOI":"10.1145\/3448016.3452773"},{"key":"e_1_2_1_30_1","doi-asserted-by":"publisher","DOI":"10.14778\/3476249.3476264"},{"key":"e_1_2_1_31_1","volume-title":"High-Performance Deep Learning Library. In NeurIPS 2019","author":"Paszke Adam","year":"2019","unstructured":"Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas K\u00f6pf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada. 
8024--8035."},{"key":"e_1_2_1_32_1","doi-asserted-by":"publisher","DOI":"10.14778\/3151113.3151122"},{"key":"e_1_2_1_33_1","volume-title":"OSDI 2021","author":"Thorpe John","year":"2021","unstructured":"John Thorpe, Yifan Qiao, Jonathan Eyolfson, Shen Teng, Guanzhou Hu, Zhihao Jia, Jinliang Wei, Keval Vora, Ravi Netravali, Miryung Kim, and Guoqing Harry Xu. 2021. Dorylus: Affordable, Scalable, and Accurate GNN Training with Distributed CPU Servers and Serverless Threads. In OSDI 2021, July 14-16, 2021. USENIX Association, 495--514."},{"key":"e_1_2_1_34_1","doi-asserted-by":"publisher","DOI":"10.1109\/SC41405.2020.00074"},{"key":"e_1_2_1_35_1","volume-title":"6th International Conference on Learning Representations, ICLR","author":"Velickovic Petar","year":"2018","unstructured":"Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Li\u00f2, and Yoshua Bengio. 2018. Graph Attention Networks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net. 
https:\/\/openreview.net\/forum?id=rJXMpikCZ"},{"key":"e_1_2_1_36_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-981-15-8135-9_6"},{"key":"e_1_2_1_37_1","doi-asserted-by":"publisher","DOI":"10.1109\/TNNLS.2020.2978386"},{"key":"e_1_2_1_38_1","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2021.3060154"},{"key":"e_1_2_1_39_1","volume-title":"GraphSAINT: Graph Sampling Based Inductive Learning Method. In 8th International Conference on Learning Representations, ICLR 2020","author":"Zeng Hanqing","year":"2020","unstructured":"Hanqing Zeng, Hongkuan Zhou, Ajitesh Srivastava, Rajgopal Kannan, and Viktor K. Prasanna. 2020. GraphSAINT: Graph Sampling Based Inductive Learning Method. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. 
https:\/\/openreview.net\/forum?id=BJe8pkHFwS"},{"key":"e_1_2_1_40_1","doi-asserted-by":"publisher","DOI":"10.14778\/3415478.3415539"},{"key":"e_1_2_1_41_1","doi-asserted-by":"publisher","DOI":"10.1145\/3318464.3389706"},{"key":"e_1_2_1_42_1","doi-asserted-by":"publisher","DOI":"10.14778\/3476249.3476295"},{"key":"e_1_2_1_43_1","doi-asserted-by":"publisher","DOI":"10.1109\/IA351965.2020.00011"},{"key":"e_1_2_1_44_1","doi-asserted-by":"publisher","DOI":"10.14778\/3461535.3461547"},{"key":"e_1_2_1_45_1","doi-asserted-by":"publisher","DOI":"10.14778\/3352063.3352127"}],"container-title":["Proceedings of the VLDB Endowment"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.14778\/3538598.3538614","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2022,12,28]],"date-time":"2022-12-28T09:30:04Z","timestamp":1672219804000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.14778\/3538598.3538614"}},"subtitle":["<i>s<\/i>t<i>a<\/i>le<i>n<\/i>ess-aware <i>c<\/i>omm<i>u<\/i>nication-avoiding full-graph decentralized training in large-scale graph neural networks"],"short-title":[],"issued":{"date-parts":[[2022,5]]},"references-count":45,"journal-issue":{"issue":"9","published-print":{"date-parts":[[2022,5]]}},"alternative-id":["10.14778\/3538598.3538614"],"URL":"https:\/\/doi.org\/10.14778\/3538598.3538614","relation":{},"ISSN":["2150-8097"],"issn-type":[{"value":"2150-8097","type":"print"}],"subject":[],"published":{"date-parts":[[2022,5]]}}}