{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,24]],"date-time":"2026-03-24T15:25:57Z","timestamp":1774365957165,"version":"3.50.1"},"reference-count":38,"publisher":"Association for Computing Machinery (ACM)","issue":"5","license":[{"start":{"date-parts":[[2022,6,11]],"date-time":"2022-06-11T00:00:00Z","timestamp":1654905600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["62132017"],"award-info":[{"award-number":["62132017"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"name":"Natural Science Funding of Zhejiang Province","award":["LZ22F020015"],"award-info":[{"award-number":["LZ22F020015"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Intell. Syst. Technol."],"published-print":{"date-parts":[[2022,10,31]]},"abstract":"<jats:p>\n            Distributed processing and analysis of large-scale graph data remain challenging because of the high-level discrepancy among graphs. This study investigates a novel subproblem: the distributed multi-task learning on the graph, which jointly learns multiple analysis tasks from decentralized graphs. We propose a\n            <jats:bold>federated multi-task graph learning (FMTGL)<\/jats:bold>\n            framework to solve the problem within a privacy-preserving and scalable scheme. Its core is an innovative data-fusion mechanism and a low-latency distributed optimization method. The former captures multi-source data relatedness and generates universal task representation for local task analysis. The latter enables the quick update of our framework with gradients sparsification and tree-based aggregation. 
As a theoretical result, the proposed optimization method has a convergence rate that interpolates between\n            <jats:inline-formula content-type=\"math\/tex\">\n              <jats:tex-math notation=\"LaTeX\" version=\"MathJax\">\\( \\mathcal {O}(1\/T) \\)<\/jats:tex-math>\n            <\/jats:inline-formula>\n            and\n            <jats:inline-formula content-type=\"math\/tex\">\n              <jats:tex-math notation=\"LaTeX\" version=\"MathJax\">\\( \\mathcal {O}(1\/\\sqrt {T}) \\)<\/jats:tex-math>\n            <\/jats:inline-formula>\n            , up to logarithmic terms. Unlike previous studies, our work analyzes the convergence behavior with adaptive stepsize selection under a non-convex assumption. Experimental results on three graph datasets verify the effectiveness and scalability of FMTGL.\n          <\/jats:p>","DOI":"10.1145\/3527622","type":"journal-article","created":{"date-parts":[[2022,4,22]],"date-time":"2022-04-22T15:36:39Z","timestamp":1650641799000},"page":"1-27","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":9,"title":["Federated Multi-task Graph Learning"],"prefix":"10.1145","volume":"13","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-8420-2213","authenticated-orcid":false,"given":"Yijing","family":"Liu","sequence":"first","affiliation":[{"name":"State Key Laboratory of CAD and CG, Hangzhou, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4969-3007","authenticated-orcid":false,"given":"Dongming","family":"Han","sequence":"additional","affiliation":[{"name":"State Key Laboratory of CAD and CG, Hangzhou, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8358-6278","authenticated-orcid":false,"given":"Jianwei","family":"Zhang","sequence":"additional","affiliation":[{"name":"State Key Laboratory of CAD and CG, Hangzhou, 
China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4782-5654","authenticated-orcid":false,"given":"Haiyang","family":"Zhu","sequence":"additional","affiliation":[{"name":"State Key Laboratory of CAD and CG, Hangzhou, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6885-3451","authenticated-orcid":false,"given":"Mingliang","family":"Xu","sequence":"additional","affiliation":[{"name":"School of Computer and Artificial Intelligence, Zhengzhou University, Zhengzhou, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8365-4741","authenticated-orcid":false,"given":"Wei","family":"Chen","sequence":"additional","affiliation":[{"name":"State Key Laboratory of CAD and CG, Hangzhou, China"}]}],"member":"320","published-online":{"date-parts":[[2022,6,11]]},"reference":[{"key":"e_1_3_2_2_2","doi-asserted-by":"publisher","DOI":"10.1145\/2556195.2556264"},{"key":"e_1_3_2_3_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICDM.2016.0012"},{"key":"e_1_3_2_4_2","article-title":"Practical secure aggregation for federated learning on user-held data","author":"Bonawitz Keith","year":"2016","unstructured":"Keith Bonawitz, Vladimir Ivanov, Ben Kreuter, Antonio Marcedone, H. Brendan McMahan, Sarvar Patel, Daniel Ramage, Aaron Segal, and Karn Seth. 2016. Practical secure aggregation for federated learning on user-held data. arXiv preprint arXiv:1611.04482 (2016).","journal-title":"arXiv preprint arXiv:1611.04482"},{"key":"e_1_3_2_5_2","article-title":"FastGCN: Fast learning with graph convolutional networks via importance sampling","author":"Chen Jie","year":"2018","unstructured":"Jie Chen, Tengfei Ma, and Cao Xiao. 2018. FastGCN: Fast learning with graph convolutional networks via importance sampling. 
arXiv preprint arXiv:1801.10247 (2018).","journal-title":"arXiv preprint arXiv:1801.10247"},{"key":"e_1_3_2_6_2","article-title":"Revisiting distributed synchronous SGD","author":"Chen Jianmin","year":"2016","unstructured":"Jianmin Chen, Xinghao Pan, Rajat Monga, Samy Bengio, and Rafal Jozefowicz. 2016. Revisiting distributed synchronous SGD. arXiv preprint arXiv:1604.00981 (2016).","journal-title":"arXiv preprint arXiv:1604.00981"},{"key":"e_1_3_2_7_2","doi-asserted-by":"publisher","DOI":"10.1109\/TNN.2010.2095882"},{"issue":"7","key":"e_1_3_2_8_2","article-title":"Adaptive subgradient methods for online learning and stochastic optimization.","volume":"12","author":"Duchi John","year":"2011","unstructured":"John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research 12, 7 (2011).","journal-title":"Journal of Machine Learning Research"},{"key":"e_1_3_2_9_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2015.169"},{"key":"e_1_3_2_10_2","article-title":"On the convergence of local descent methods in federated learning","author":"Haddadpour Farzin","year":"2019","unstructured":"Farzin Haddadpour and Mehrdad Mahdavi. 2019. On the convergence of local descent methods in federated learning. arXiv preprint arXiv:1910.14425 (2019).","journal-title":"arXiv preprint arXiv:1910.14425"},{"key":"e_1_3_2_11_2","first-page":"1024","volume-title":"Advances in Neural Information Processing Systems","author":"Hamilton Will","year":"2017","unstructured":"Will Hamilton, Zhitao Ying, and Jure Leskovec. 2017. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems. 
1024\u20131034."},{"key":"e_1_3_2_12_2","article-title":"Federated learning for mobile keyboard prediction","author":"Hard Andrew","year":"2018","unstructured":"Andrew Hard, Kanishka Rao, Rajiv Mathews, Swaroop Ramaswamy, Fran\u00e7oise Beaufays, Sean Augenstein, Hubert Eichner, Chlo\u00e9 Kiddon, and Daniel Ramage. 2018. Federated learning for mobile keyboard prediction. arXiv preprint arXiv:1811.03604 (2018).","journal-title":"arXiv preprint arXiv:1811.03604"},{"key":"e_1_3_2_13_2","article-title":"FedGraphNN: A federated learning system and benchmark for graph neural networks","author":"He Chaoyang","year":"2021","unstructured":"Chaoyang He, Keshav Balasubramanian, Emir Ceyani, Yu Rong, Peilin Zhao, Junzhou Huang, Murali Annavaram, and Salman Avestimehr. 2021. FedGraphNN: A federated learning system and benchmark for graph neural networks. arXiv preprint arXiv:2104.07145 (2021).","journal-title":"arXiv preprint arXiv:2104.07145"},{"key":"e_1_3_2_14_2","first-page":"2525","volume-title":"Advances in Neural Information Processing Systems","author":"Jiang Peng","year":"2018","unstructured":"Peng Jiang and Gagan Agrawal. 2018. A linear speedup analysis of distributed deep learning with sparse and quantized communication. In Advances in Neural Information Processing Systems. 2525\u20132536."},{"key":"e_1_3_2_15_2","article-title":"Semi-supervised classification with graph convolutional networks","author":"Kipf Thomas N.","year":"2016","unstructured":"Thomas N. Kipf and Max Welling. 2016. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016).","journal-title":"arXiv preprint arXiv:1609.02907"},{"key":"e_1_3_2_16_2","article-title":"Federated optimization: Distributed optimization beyond the datacenter","author":"Kone\u010dn\u1ef3 Jakub","year":"2015","unstructured":"Jakub Kone\u010dn\u1ef3, Brendan McMahan, and Daniel Ramage. 2015. Federated optimization: Distributed optimization beyond the datacenter. 
arXiv preprint arXiv:1511.03575 (2015).","journal-title":"arXiv preprint arXiv:1511.03575"},{"key":"e_1_3_2_17_2","article-title":"Federated learning: Strategies for improving communication efficiency","author":"Kone\u010dn\u1ef3 Jakub","year":"2016","unstructured":"Jakub Kone\u010dn\u1ef3, H. Brendan McMahan, Felix X. Yu, Peter Richt\u00e1rik, Ananda Theertha Suresh, and Dave Bacon. 2016. Federated learning: Strategies for improving communication efficiency. arXiv preprint arXiv:1610.05492 (2016).","journal-title":"arXiv preprint arXiv:1610.05492"},{"key":"e_1_3_2_18_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP.2019.8683546"},{"key":"e_1_3_2_19_2","doi-asserted-by":"publisher","DOI":"10.1145\/1081870.1081893"},{"key":"e_1_3_2_20_2","first-page":"539","volume-title":"Advances in Neural Information Processing Systems","author":"Leskovec Jure","year":"2012","unstructured":"Jure Leskovec and Julian J. Mcauley. 2012. Learning to discover social circles in ego networks. In Advances in Neural Information Processing Systems. 539\u2013547."},{"key":"e_1_3_2_21_2","article-title":"On the convergence of stochastic gradient descent with adaptive stepsizes","author":"Li Xiaoyu","year":"2018","unstructured":"Xiaoyu Li and Francesco Orabona. 2018. On the convergence of stochastic gradient descent with adaptive stepsizes. arXiv preprint arXiv:1805.08114 (2018).","journal-title":"arXiv preprint arXiv:1805.08114"},{"key":"e_1_3_2_22_2","doi-asserted-by":"publisher","DOI":"10.1145\/3097983.3098136"},{"key":"e_1_3_2_23_2","doi-asserted-by":"publisher","DOI":"10.1145\/2766462.2767755"},{"key":"e_1_3_2_24_2","first-page":"1273","volume-title":"Artificial Intelligence and Statistics","author":"McMahan Brendan","year":"2017","unstructured":"Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. 2017. Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics. 
PMLR, 1273\u20131282."},{"key":"e_1_3_2_25_2","doi-asserted-by":"publisher","DOI":"10.1109\/BigData47090.2019.9005983"},{"key":"e_1_3_2_26_2","volume-title":"Introductory Lectures on Convex Optimization: A Basic Course","author":"Nesterov Yurii","year":"2003","unstructured":"Yurii Nesterov. 2003. Introductory Lectures on Convex Optimization: A Basic Course. Vol. 87. Springer Science & Business Media."},{"key":"e_1_3_2_27_2","article-title":"An overview of multi-task learning in deep neural networks","author":"Ruder Sebastian","year":"2017","unstructured":"Sebastian Ruder. 2017. An overview of multi-task learning in deep neural networks. arXiv preprint arXiv:1706.05098 (2017).","journal-title":"arXiv preprint arXiv:1706.05098"},{"key":"e_1_3_2_28_2","article-title":"On the convergence of federated optimization in heterogeneous networks","volume":"3","author":"Sahu Anit Kumar","year":"2018","unstructured":"Anit Kumar Sahu, Tian Li, Maziar Sanjabi, Manzil Zaheer, Ameet Talwalkar, and Virginia Smith. 2018. On the convergence of federated optimization in heterogeneous networks. arXiv preprint arXiv:1812.06127 3 (2018).","journal-title":"arXiv preprint arXiv:1812.06127"},{"key":"e_1_3_2_29_2","doi-asserted-by":"publisher","DOI":"10.1109\/TNNLS.2019.2944481"},{"key":"e_1_3_2_30_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICDCS.2019.00220"},{"key":"e_1_3_2_31_2","first-page":"3411","volume-title":"IJCAI","author":"Shi Shaohuai","year":"2019","unstructured":"Shaohuai Shi, Kaiyong Zhao, Qiang Wang, Zhenheng Tang, and Xiaowen Chu. 2019. A convergence analysis of distributed SGD with communication-efficient gradient sparsification. In IJCAI. 3411\u20133417."},{"key":"e_1_3_2_32_2","first-page":"4424","volume-title":"Advances in Neural Information Processing Systems","author":"Smith Virginia","year":"2017","unstructured":"Virginia Smith, Chao-Kai Chiang, Maziar Sanjabi, and Ameet S. Talwalkar. 2017. Federated multi-task learning. 
In Advances in Neural Information Processing Systems. 4424\u20134434."},{"key":"e_1_3_2_33_2","article-title":"Towards federated graph learning for collaborative financial crimes detection","author":"Suzumura Toyotaro","year":"2019","unstructured":"Toyotaro Suzumura, Yi Zhou, Natahalie Barcardo, Guangnan Ye, Keith Houck, Ryo Kawahara, Ali Anwar, Lucia Larise Stavarache, Daniel Klyashtorny, Heiko Ludwig, et\u00a0al. 2019. Towards federated graph learning for collaborative financial crimes detection. arXiv preprint arXiv:1909.12946 (2019).","journal-title":"arXiv preprint arXiv:1909.12946"},{"key":"e_1_3_2_34_2","doi-asserted-by":"publisher","DOI":"10.1145\/1401890.1402008"},{"key":"e_1_3_2_35_2","article-title":"The EU general data protection regulation (GDPR)","author":"Voigt Paul","year":"2017","unstructured":"Paul Voigt and Axel Von dem Bussche. 2017. The EU general data protection regulation (GDPR). A Practical Guide, 1st Ed., Cham: Springer International Publishing (2017).","journal-title":"A Practical Guide, 1st Ed.,"},{"key":"e_1_3_2_36_2","article-title":"GraphFL: A federated learning framework for semi-supervised node classification on graphs","author":"Wang Binghui","year":"2020","unstructured":"Binghui Wang, Ang Li, Hai Li, and Yiran Chen. 2020. GraphFL: A federated learning framework for semi-supervised node classification on graphs. arXiv preprint arXiv:2012.04187 (2020).","journal-title":"arXiv preprint arXiv:2012.04187"},{"key":"e_1_3_2_37_2","first-page":"751","volume-title":"Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, AISTATS 2016, Cadiz, Spain, May 9\u201311, 2016 (JMLR Workshop and Conference Proceedings)","volume":"51","author":"Wang Jialei","year":"2016","unstructured":"Jialei Wang, Mladen Kolar, and Nathan Srebro. 2016. Distributed multi-task learning. 
In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, AISTATS 2016, Cadiz, Spain, May 9\u201311, 2016 (JMLR Workshop and Conference Proceedings), Arthur Gretton and Christian C. Robert (Eds.), Vol. 51. JMLR.org, 751\u2013760. http:\/\/proceedings.mlr.press\/v51\/wang16d.html"},{"key":"e_1_3_2_38_2","article-title":"How powerful are graph neural networks?","author":"Xu Keyulu","year":"2018","unstructured":"Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. 2018. How powerful are graph neural networks? arXiv preprint arXiv:1810.00826 (2018).","journal-title":"arXiv preprint arXiv:1810.00826"},{"key":"e_1_3_2_39_2","article-title":"Federated learning with non-IID data","volume":"1806","author":"Zhao Yue","year":"2018","unstructured":"Yue Zhao, Meng Li, Liangzhen Lai, Naveen Suda, Damon Civin, and Vikas Chandra. 2018. Federated learning with non-IID data. ArXiv abs\/1806.00582 (2018).","journal-title":"ArXiv"}],"container-title":["ACM Transactions on Intelligent Systems and 
Technology"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3527622","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3527622","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T20:18:53Z","timestamp":1750191533000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3527622"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,6,11]]},"references-count":38,"journal-issue":{"issue":"5","published-print":{"date-parts":[[2022,10,31]]}},"alternative-id":["10.1145\/3527622"],"URL":"https:\/\/doi.org\/10.1145\/3527622","relation":{},"ISSN":["2157-6904","2157-6912"],"issn-type":[{"value":"2157-6904","type":"print"},{"value":"2157-6912","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,6,11]]},"assertion":[{"value":"2021-03-01","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2022-03-01","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2022-06-11","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}