{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,7,9]],"date-time":"2025-07-09T22:59:05Z","timestamp":1752101945479,"version":"3.41.0"},"reference-count":45,"publisher":"Association for Computing Machinery (ACM)","issue":"4","license":[{"start":{"date-parts":[[2023,4,6]],"date-time":"2023-04-06T00:00:00Z","timestamp":1680739200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Knowl. Discov. Data"],"published-print":{"date-parts":[[2023,5,31]]},"abstract":"<jats:p>Federated optimization (FedOpt), which aims to collaboratively train a learning model across a large number of distributed clients, is vital for federated learning. The primary concerns in FedOpt are model divergence and communication efficiency, both of which significantly affect performance. In this article, we propose a new method, LoSAC, to learn from heterogeneous distributed data more efficiently. Its key algorithmic insight is to locally update the estimate of the global full gradient after each regular local model update. Thus, LoSAC keeps clients\u2019 information refreshed in a more compact way. In particular, we study the convergence of LoSAC. Moreover, LoSAC can defend against the information leakage exploited by the recent Deep Leakage from Gradients (DLG) technique. Finally, experiments verify the superiority of LoSAC compared with state-of-the-art FedOpt algorithms. 
Specifically, LoSAC significantly improves communication efficiency by more than 100% on average, mitigates the model divergence problem, and provides a defense against DLG.<\/jats:p>","DOI":"10.1145\/3566128","type":"journal-article","created":{"date-parts":[[2023,1,17]],"date-time":"2023-01-17T11:59:29Z","timestamp":1673956769000},"page":"1-28","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":4,"title":["LoSAC: An Efficient Local Stochastic Average Control Method for Federated Optimization"],"prefix":"10.1145","volume":"17","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-4179-3815","authenticated-orcid":false,"given":"Huiming","family":"Chen","sequence":"first","affiliation":[{"name":"Department of Electronic Engineering, Tsinghua University, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6382-0861","authenticated-orcid":false,"given":"Huandong","family":"Wang","sequence":"additional","affiliation":[{"name":"Department of Electronic Engineering, Tsinghua University, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8944-8618","authenticated-orcid":false,"given":"Quanming","family":"Yao","sequence":"additional","affiliation":[{"name":"Department of Electronic Engineering, Tsinghua University, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5617-1659","authenticated-orcid":false,"given":"Yong","family":"Li","sequence":"additional","affiliation":[{"name":"Department of Electronic Engineering, Tsinghua University, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0419-5514","authenticated-orcid":false,"given":"Depeng","family":"Jin","sequence":"additional","affiliation":[{"name":"Department of Electronic Engineering, Tsinghua University, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5059-8360","authenticated-orcid":false,"given":"Qiang","family":"Yang","sequence":"additional","affiliation":[{"name":"Department of Computer Science and 
Engineering, Hong Kong University of Science and Technology, Hong Kong, China"}]}],"member":"320","published-online":{"date-parts":[[2023,4,6]]},"reference":[{"key":"e_1_3_3_2_2","first-page":"5973","volume-title":"Advances in Neural Information Processing Systems, Vol.31","author":"Alistarh Dan","year":"2018","unstructured":"Dan Alistarh, Torsten Hoefler, Mikael Johansson, Nikola Konstantinov, Sarit Khirirat, and Cedric Renggli. 2018. The convergence of sparsified gradient methods. In Advances in Neural Information Processing Systems, Vol.31. 5973\u20135983."},{"doi-asserted-by":"publisher","key":"e_1_3_3_3_2","DOI":"10.1103\/PhysRevE.64.061907"},{"key":"e_1_3_3_4_2","volume-title":"ESANN.","author":"Anguita Davide","year":"2013","unstructured":"Davide Anguita, Alessandro Ghio, Luca Oneto, Xavier Parra, and Jorge L. Reyes-Ortiz. 2013. A public domain dataset for human activity recognition using smartphones. In ESANN."},{"key":"e_1_3_3_5_2","first-page":"177","volume-title":"COMPSTAT","author":"Bottou L\u00e9on","year":"2010","unstructured":"L\u00e9on Bottou. 2010. Large-scale machine learning with stochastic gradient descent. In COMPSTAT. Springer, 177\u2013186."},{"doi-asserted-by":"publisher","key":"e_1_3_3_6_2","DOI":"10.1561\/2200000016"},{"doi-asserted-by":"publisher","key":"e_1_3_3_7_2","DOI":"10.1016\/j.ijmedinf.2018.01.007"},{"key":"e_1_3_3_8_2","first-page":"1646","article-title":"SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives","author":"Defazio A.","year":"2014","unstructured":"A. Defazio, F. Bach, and S. Julien. 2014. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. Adv. Neural Inf. Process. Syst. 27 (2014), 1646\u20131654.","journal-title":"Adv. Neural Inf. Process. Syst. 
27"},{"doi-asserted-by":"publisher","key":"e_1_3_3_9_2","DOI":"10.1109\/JPROC.2020.3028013"},{"key":"e_1_3_3_10_2","first-page":"103","article-title":"Patient clustering improves efficiency of federated machine learning to predict mortality and hospital stay time using distributed electronic medical records","author":"Huang Li","year":"2019","unstructured":"Li Huang, Andrew L. Shea, Huining Qian, Aditya Masurkar, Hao Deng, and Dianbo Liu. 2019. Patient clustering improves efficiency of federated machine learning to predict mortality and hospital stay time using distributed electronic medical records. J. Biomed. Inform. (2019), 103\u2013291.","journal-title":"J. Biomed. Inform."},{"key":"e_1_3_3_11_2","first-page":"315","article-title":"Accelerating stochastic gradient descent using predictive variance reduction","author":"Johnson Rie","year":"2013","unstructured":"Rie Johnson and Tong Zhang. 2013. Accelerating stochastic gradient descent using predictive variance reduction. In NIPS. 315\u2013323.","journal-title":"NIPS"},{"key":"e_1_3_3_12_2","article-title":"Advances and open problems in federated learning","volume":"1712","author":"Kairouz Peter","year":"2019","unstructured":"Peter Kairouz, H. Brendan McMahan, Brendan Avent, et al. 2019. Advances and open problems in federated learning. CoRR abs\/1712.07557 (2019).","journal-title":"CoRR"},{"unstructured":"Sai Praneeth Karimireddy Martin Jaggi Satyen Kale Mehryar Mohri Sashank J. Reddi Sebastian U. Stich and Ananda Theertha Suresh. 2020. Mime: Mimicking Centralized Stochastic Algorithms in Federated Learning. arxiv:2008.03606 [cs.LG]","key":"e_1_3_3_13_2"},{"key":"e_1_3_3_14_2","volume-title":"ICML","author":"Karimireddy Sai Praneeth Reddy","year":"2020","unstructured":"Sai Praneeth Reddy Karimireddy, Satyen Kale, Mehryar Mohri, Sashank Jakkam Reddi, Sebastian Stich, and Ananda Theertha Suresh. 2020. SCAFFOLD: Stochastic controlled averaging for federated learning. 
In ICML."},{"key":"e_1_3_3_15_2","volume-title":"AISTATS","author":"Khaled A.","year":"2020","unstructured":"A. Khaled, K. Mishchenko, and P. Richt\u00e1rik. 2020. Tighter theory for local SGD on identical and heterogeneous data. In AISTATS."},{"key":"e_1_3_3_16_2","volume-title":"ICLR","author":"Kingma Diederik P.","year":"2015","unstructured":"Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR."},{"key":"e_1_3_3_17_2","article-title":"Federated optimization: Distributed machine learning for on-device intelligence","volume":"1610","author":"Konecn\u00fd Jakub","year":"2016","unstructured":"Jakub Konecn\u00fd, H. Brendan McMahan, Daniel Ramage, and Peter Richt\u00e1rik. 2016. Federated optimization: Distributed machine learning for on-device intelligence. CoRR abs\/1610.02527 (2016).","journal-title":"CoRR"},{"key":"e_1_3_3_18_2","article-title":"Federated learning: Strategies for improving communication efficiency","volume":"1610","author":"Konecn\u00fd Jakub","year":"2016","unstructured":"Jakub Konecn\u00fd, H. Brendan McMahan, Felix X. Yu, Peter Richt\u00e1rik, Ananda Theertha Suresh, and Dave Bacon. 2016. Federated learning: Strategies for improving communication efficiency. CoRR abs\/1610.05492 (2016).","journal-title":"CoRR"},{"doi-asserted-by":"publisher","key":"e_1_3_3_19_2","DOI":"10.1109\/5.726791"},{"key":"e_1_3_3_20_2","volume-title":"OSDI","author":"Li Mu","year":"2014","unstructured":"Mu Li, David G. Anderson, Jun Woo Park, Alexander J. Smola, Amr Ahmed, Vanja Josifovski, James Long, Eugene J. Shekita, and Bor-Yiing Su. 2014. Scaling distributed machine learning with the parameter server. In OSDI."},{"key":"e_1_3_3_21_2","article-title":"Federated optimization in heterogeneous networks","author":"Li Tian","year":"2020","unstructured":"Tian Li, Anit Kumar Sahu, Maziar Sanjabi, Manzil Zaheer, Ameet Talwalkar, and Virginia Smith. 2020. Federated optimization in heterogeneous networks. 
In MLSys.","journal-title":"MLSys"},{"key":"e_1_3_3_22_2","volume-title":"ICLR","author":"Li Xiang","year":"2020","unstructured":"Xiang Li, Kaixuan Huang, Wenhao Yang, Shusen Wang, and Zhihua Zhang. 2020. On the convergence of FedAvg on non-IID data. In ICLR."},{"unstructured":"Zhize Li and Peter Richtarik. 2021. A Unified Analysis of Stochastic Gradient Methods for Nonconvex Federated Optimization. arxiv:2102.01375 [cs.LG]","key":"e_1_3_3_23_2"},{"unstructured":"X. Liang S. Shen J. Liu Z. Pan E. Chen and Y. Cheng. 2019. Variance reduced local SGD with lower communication complexity. Retrieved from https:\/\/arxiv.org\/abs\/1912.12844.","key":"e_1_3_3_24_2"},{"doi-asserted-by":"publisher","key":"e_1_3_3_25_2","DOI":"10.1109\/TPDS.2020.2975189"},{"key":"e_1_3_3_26_2","volume-title":"AISTATS","author":"McMahan Brendan","year":"2017","unstructured":"Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. 2017. Communication-efficient learning of deep networks from decentralized data. In AISTATS."},{"key":"e_1_3_3_27_2","first-page":"92","article-title":"Multi-institutional deep learning modeling without sharing patient data: A feasibility study on brain tumor segmentation.","author":"Sheller M. J.","year":"2019","unstructured":"M. J. Sheller, G. A. Reina, B. Edwards, J. Martin, and S. Bakas. 2019. Multi-institutional deep learning modeling without sharing patient data: A feasibility study on brain tumor segmentation. Brainlesion (2019), 92\u2013104.","journal-title":"Brainlesion"},{"key":"e_1_3_3_28_2","volume-title":"ICML","author":"Negahban Sahand","year":"2010","unstructured":"Sahand Negahban and Martin J. Wainwright. 2010. Estimation of (near) low-rank matrices with noise and high-dimensional scaling. In ICML."},{"doi-asserted-by":"publisher","key":"e_1_3_3_29_2","DOI":"10.1561\/2400000003"},{"unstructured":"Sashank Reddi Zachary Charles Manzil Zaheer Zachary Garrett Keith Rush Jakub Kone\u010dn\u00fd Sanjiv Kumar and H. Brendan McMahan. 
2020. Adaptive Federated Optimization. arxiv:2003.00295 [cs.LG]","key":"e_1_3_3_30_2"},{"unstructured":"Amirhossein Reisizadeh Aryan Mokhtari Hamed Hassani Ali Jadbabaie and Ramtin Pedarsani. 2020. FedPAQ: A communication-efficient federated learning method with periodic averaging and quantization. In Proceedings of Machine Learning Research Vol. 108. PMLR 2021\u20132031.","key":"e_1_3_3_31_2"},{"doi-asserted-by":"publisher","key":"e_1_3_3_32_2","DOI":"10.1214\/aoms\/1177729586"},{"issue":"1","key":"e_1_3_3_33_2","doi-asserted-by":"crossref","first-page":"267","DOI":"10.1111\/j.2517-6161.1996.tb02080.x","article-title":"Regression shrinkage and selection via the lasso","volume":"58","author":"Robert Tibshirani","year":"1996","unstructured":"Tibshirani Robert. 1996. Regression shrinkage and selection via the lasso. J. Roy. Statist. Societ. 58, 1 (1996), 267\u2013288.","journal-title":"J. Roy. Statist. Societ."},{"doi-asserted-by":"publisher","key":"e_1_3_3_34_2","DOI":"10.1007\/s10107-016-1030-6"},{"unstructured":"Zhou Shenglong and Li Geoffrey Ye. 2022. Federated Learning via Inexact ADMM. arxiv:2204.10607 [math.OC]","key":"e_1_3_3_35_2"},{"key":"e_1_3_3_36_2","volume-title":"ICLR","author":"Stich Sebastian U.","year":"2019","unstructured":"Sebastian U. Stich. 2019. Local SGD converges fast and communicates little. In ICLR."},{"key":"e_1_3_3_37_2","volume-title":"NIPS","author":"Stich Sebastian U.","year":"2018","unstructured":"Sebastian U. Stich, Jean-Baptiste Cordonnier, and Martin Jaggi. 2018. Sparsified SGD with memory. In NIPS."},{"unstructured":"Leye Wang Han Yu and Xiao Han. 2020. Federated Crowdsensing: Framework and Challenges. arxiv:2011.03208 [cs.CR]","key":"e_1_3_3_38_2"},{"doi-asserted-by":"publisher","key":"e_1_3_3_39_2","DOI":"10.1109\/JSAC.2019.2904348"},{"unstructured":"Jing Xu Sen Wang Liwei Wang and Andrew Chi-Chih Yao. 2021. FedCM: Federated Learning with Client-level Momentum. 
arxiv:2106.10874 [cs.LG]","key":"e_1_3_3_40_2"},{"doi-asserted-by":"publisher","key":"e_1_3_3_41_2","DOI":"10.1145\/3298981"},{"key":"e_1_3_3_42_2","volume-title":"AAAI","author":"Yu H.","year":"2019","unstructured":"H. Yu, S. Shen, S. Yang, and S. Zhu. 2019. Parallel restarted SGD with faster convergence and less communication: Demystifying why model averaging works for deep learning. In AAAI."},{"issue":"68","key":"e_1_3_3_43_2","first-page":"3321","article-title":"Communication-efficient algorithms for statistical optimization","volume":"14","author":"Zhang Yuchen","year":"2013","unstructured":"Yuchen Zhang, John C. Duchi, and Martin J. Wainwright. 2013. Communication-efficient algorithms for statistical optimization. J. Mach. Learn. Res. 14, 68 (2013), 3321\u20133363.","journal-title":"J. Mach. Learn. Res."},{"key":"e_1_3_3_44_2","article-title":"Federated learning with non-IID data","volume":"1806","author":"Zhao Yue","year":"2018","unstructured":"Yue Zhao, Meng Li, Liangzhen Lai, Naveen Suda, Damon Civin, and Vikas Chandra. 2018. Federated learning with non-IID data. CoRR abs\/1806.00582 (2018).","journal-title":"CoRR"},{"unstructured":"Zhaohua Zheng Yize Zhou Yilong Sun Zhang Wang Boyi Liu and Keqiu Li. 2021. Federated Learning in Smart Cities: A Comprehensive Survey. arxiv:2102.01375 [cs.LG]","key":"e_1_3_3_45_2"},{"key":"e_1_3_3_46_2","first-page":"14774","volume-title":"Advances in Neural Information Processing Systems, Vol. 32","author":"Zhu Ligeng","year":"2019","unstructured":"Ligeng Zhu, Zhijian Liu, and Song Han. 2019. Deep leakage from gradients. In Advances in Neural Information Processing Systems, Vol. 32. 
14774\u201314784."}],"container-title":["ACM Transactions on Knowledge Discovery from Data"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3566128","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3566128","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T17:51:31Z","timestamp":1750182691000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3566128"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,4,6]]},"references-count":45,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2023,5,31]]}},"alternative-id":["10.1145\/3566128"],"URL":"https:\/\/doi.org\/10.1145\/3566128","relation":{},"ISSN":["1556-4681","1556-472X"],"issn-type":[{"type":"print","value":"1556-4681"},{"type":"electronic","value":"1556-472X"}],"subject":[],"published":{"date-parts":[[2023,4,6]]},"assertion":[{"value":"2022-01-25","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2022-09-14","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-04-06","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}