{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,25]],"date-time":"2026-03-25T12:04:25Z","timestamp":1774440265205,"version":"3.50.1"},"reference-count":48,"publisher":"Association for Computing Machinery (ACM)","issue":"3","license":[{"start":{"date-parts":[[2025,5,21]],"date-time":"2025-05-21T00:00:00Z","timestamp":1747785600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["62072096, 62372100"],"award-info":[{"award-number":["62072096, 62372100"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/501100013285","name":"Program for Professor of Special Appointment (Eastern Scholar) at Shanghai Institutions of Higher Learning","doi-asserted-by":"crossref","id":[{"id":"10.13039\/501100013285","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/501100003399","name":"Science and Technology Commission of Shanghai Municipality","doi-asserted-by":"crossref","award":["23XD1420100"],"award-info":[{"award-number":["23XD1420100"]}],"id":[{"id":"10.13039\/501100003399","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Sen. Netw."],"published-print":{"date-parts":[[2025,5,31]]},"abstract":"<jats:p>\n            Decentralized federated learning emerged to eliminate the reliance on the central server and address the single point of failure and the network bottleneck in centralized federated learning. However, existing works on decentralized federated learning suffer from the following three challenges. First, the transmission of model parameters between devices results in significant bandwidth consumption and network congestion. 
Second, the decentralized architecture involves numerous devices, which increases the risk of poisoning behavior. Third, the data heterogeneity of devices seriously affects the model accuracy. Unfortunately, there is a lack of research that can effectively address all the challenges above. In this article, we propose a novel scheme of\n            <jats:italic>\n              <jats:underline>D<\/jats:underline>\n            <\/jats:italic>\n            ecentralized federated learning toward\n            <jats:italic>\n              <jats:underline>C<\/jats:underline>\n            <\/jats:italic>\n            ommunication\n            <jats:italic>\n              <jats:underline>E<\/jats:underline>\n            <\/jats:italic>\n            fficiency,\n            <jats:italic>\n              <jats:underline>R<\/jats:underline>\n            <\/jats:italic>\n            obustness, and\n            <jats:italic>\n              <jats:underline>P<\/jats:underline>\n            <\/jats:italic>\n            ersonalization (i.e., D-CERP). We aim to customize personalized models for each client with lower communication and computation overhead, while also defending against Byzantine attacks in the decentralized scenario. Specifically, we employ local sparse training with a personalized mask to better fit the heterogeneous data of each client and to reduce both on-device computation overhead and cross-device communication overhead. In addition, we apply a trusted neighbor selection scheme based on multi-armed bandits, assigning rewards to high-quality submissions in each communication round and thereby improving Byzantine robustness. In our experiments, we utilize two data partitioning methods to simulate the heterogeneity of clients in the decentralized setting and conduct extensive experiments on CIFAR10, CIFAR100, and Tiny-ImageNet. 
Experimental results demonstrate that, compared to several state-of-the-art baselines, D-CERP achieves comparable personalization with lower overhead in non-adversarial settings and additionally provides superior Byzantine robustness in adversarial settings.\n          <\/jats:p>","DOI":"10.1145\/3730587","type":"journal-article","created":{"date-parts":[[2025,4,18]],"date-time":"2025-04-18T11:22:16Z","timestamp":1744975336000},"page":"1-20","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":1,"title":["Decentralized Federated Learning towards Communication Efficiency, Robustness, and Personalization"],"prefix":"10.1145","volume":"21","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-3602-7443","authenticated-orcid":false,"given":"Anqi","family":"Zhang","sequence":"first","affiliation":[{"name":"Donghua University, Shanghai, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0907-9926","authenticated-orcid":false,"given":"Ping","family":"Zhao","sequence":"additional","affiliation":[{"name":"Donghua University, Shanghai, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0998-0829","authenticated-orcid":false,"given":"Wenke","family":"Lu","sequence":"additional","affiliation":[{"name":"Donghua University, Shanghai, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4095-6843","authenticated-orcid":false,"given":"Guanglin","family":"Zhang","sequence":"additional","affiliation":[{"name":"Donghua University, Shanghai, 
China"}]}],"member":"320","published-online":{"date-parts":[[2025,5,21]]},"reference":[{"key":"e_1_3_1_2_2","doi-asserted-by":"publisher","DOI":"10.1109\/TBDATA.2024.3362191"},{"key":"e_1_3_1_3_2","doi-asserted-by":"publisher","DOI":"10.1109\/SP46215.2023.10179291"},{"key":"e_1_3_1_4_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPDS.2022.3230938"},{"key":"e_1_3_1_5_2","doi-asserted-by":"publisher","DOI":"10.1109\/COMST.2023.3315746"},{"key":"e_1_3_1_6_2","first-page":"4587","volume-title":"Proceedings of the 39th International Conference on Machine Learning (Proceedings of Machine Learning Research)","author":"Dai Rong","year":"2022","unstructured":"Rong Dai, Li Shen, Fengxiang He, Xinmei Tian, and Dacheng Tao. 2022. DisPFL: Towards communication-efficient personalized federated learning via decentralized sparse training. In Proceedings of the 39th International Conference on Machine Learning (Proceedings of Machine Learning Research). PMLR, 4587\u20134604. https:\/\/proceedings.mlr.press\/v162\/dai22b.html"},{"key":"e_1_3_1_7_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPDS.2021.3138977"},{"key":"e_1_3_1_8_2","first-page":"6080","volume-title":"Proceedings of the AAAI Conference on Artificial Intelligence","author":"Bibikar Sameer","year":"2022","unstructured":"Sameer Bibikar, Haris Vikalo, Zhangyang Wang, and Xiaohan Chen. 2022. Federated dynamic sparse training: Computing less, communicating less, yet learning better. In Proceedings of the AAAI Conference on Artificial Intelligence. 6080\u20136088."},{"key":"e_1_3_1_9_2","first-page":"10405","volume-title":"Proceedings of the AAAI Conference on Artificial Intelligence","author":"Xian Wenhan","year":"2021","unstructured":"Wenhan Xian, Feihu Huang, and Heng Huang. 2021. Communication-efficient frank-wolfe algorithm for nonconvex decentralized distributed learning. In Proceedings of the AAAI Conference on Artificial Intelligence. 
10405\u201310413."},{"key":"e_1_3_1_10_2","doi-asserted-by":"publisher","DOI":"10.1109\/TII.2023.3342901"},{"key":"e_1_3_1_11_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52688.2022.00954"},{"key":"e_1_3_1_12_2","doi-asserted-by":"publisher","DOI":"10.1109\/TNNLS.2023.3250658"},{"key":"e_1_3_1_13_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIFS.2022.3176191"},{"key":"e_1_3_1_14_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICWS60048.2023.00046"},{"key":"e_1_3_1_15_2","first-page":"11458","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Zheng Cheng","year":"2020","unstructured":"Cheng Zheng, Bo Zong, Wei Cheng, Dongjin Song, Jingchao Ni, Wenchao Yu, Haifeng Chen, and Wei Wang. 2020. Robust graph representation learning via neural sparsification. In Proceedings of the International Conference on Machine Learning. 11458\u201311468."},{"key":"e_1_3_1_16_2","doi-asserted-by":"publisher","DOI":"10.1109\/TITS.2023.3266703"},{"key":"e_1_3_1_17_2","first-page":"39.1\u201339.26","volume-title":"Proceedings of the 25th Annual Conference on Learning Theory (Proceedings of Machine Learning Research)","author":"Agrawal Shipra","year":"2012","unstructured":"Shipra Agrawal and Navin Goyal. 2012. Analysis of thompson sampling for the multi-armed bandit problem. In Proceedings of the 25th Annual Conference on Learning Theory (Proceedings of Machine Learning Research). PMLR, Edinburgh, Scotland, 39.1\u201339.26. 
Retrieved from https:\/\/proceedings.mlr.press\/v23\/agrawal12.html"},{"key":"e_1_3_1_18_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIFS.2024.3360869"},{"key":"e_1_3_1_19_2","doi-asserted-by":"publisher","DOI":"10.1145\/3580305.3599346"},{"key":"e_1_3_1_20_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIFS.2024.3352415"},{"key":"e_1_3_1_21_2","first-page":"2943","volume-title":"Proceedings of the 37th International Conference on Machine Learning (Proceedings of Machine Learning Research)","author":"Evci Utku","year":"2020","unstructured":"Utku Evci, Trevor Gale, Jacob Menick, Pablo Samuel Castro, and Erich Elsen. 2020. Rigging the lottery: Making all tickets winners. In Proceedings of the 37th International Conference on Machine Learning (Proceedings of Machine Learning Research). PMLR, 2943\u20132952. Retrieved from https:\/\/proceedings.mlr.press\/v119\/evci20a.html"},{"key":"e_1_3_1_22_2","volume-title":"Proceedings of the International Conference on Learning Representations","author":"Mhamdi El Mahdi El","year":"2021","unstructured":"El Mahdi El Mhamdi, Rachid Guerraoui, and S\u00e9bastien Rouault. 2021. Distributed momentum for byzantine-resilient stochastic gradient descent. In Proceedings of the International Conference on Learning Representations."},{"key":"e_1_3_1_23_2","doi-asserted-by":"publisher","DOI":"10.24963\/ijcai.2022\/106"},{"key":"e_1_3_1_24_2","first-page":"6357","volume-title":"Proceedings of the 38th International Conference on Machine Learning (Proceedings of Machine Learning Research)","volume":"139","author":"Li Tian","year":"2021","unstructured":"Tian Li, Shengyuan Hu, Ahmad Beirami, and Virginia Smith. 2021. Ditto: Fair and robust federated learning through personalization. In Proceedings of the 38th International Conference on Machine Learning (Proceedings of Machine Learning Research). Marina Meila and Tong Zhang (Eds.), Vol. 139, PMLR, 6357\u20136368. 
Retrieved from https:\/\/proceedings.mlr.press\/v139\/li21h.html"},{"key":"e_1_3_1_25_2","first-page":"1273","volume-title":"Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (Proceedings of Machine Learning Research)","volume":"54","author":"McMahan Brendan","year":"2017","unstructured":"Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. 2017. Communication-efficient learning of deep networks from decentralized data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (Proceedings of Machine Learning Research).Aarti Singh and Jerry Zhu (Eds.), Vol. 54, PMLR, Sydney, Australia, 1273\u20131282. Retrieved from https:\/\/proceedings.mlr.press\/v54\/mcmahan17a.html"},{"key":"e_1_3_1_26_2","doi-asserted-by":"publisher","DOI":"10.5555\/3295222.3295285"},{"key":"e_1_3_1_27_2","doi-asserted-by":"publisher","DOI":"10.1109\/TDSC.2022.3183337"},{"key":"e_1_3_1_28_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52688.2022.00983"},{"key":"e_1_3_1_29_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIFS.2022.3196274"},{"key":"e_1_3_1_30_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v37i7.26083"},{"key":"e_1_3_1_31_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-58951-6_24"},{"key":"e_1_3_1_32_2","volume-title":"Proceedings of the Advances in Neural Information Processing Systems","author":"Baruch Gilad","year":"2019","unstructured":"Gilad Baruch, Moran Baruch, and Yoav Goldberg. 2019. A little is enough: Circumventing defenses for distributed learning. In Proceedings of the Advances in Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA."},{"key":"e_1_3_1_33_2","volume-title":"Proceedings of the ACM SIGKDD Conference on Knowledge Discovery and Data Mining","author":"Xie Yueqi","year":"2023","unstructured":"Yueqi Xie, Weizhong Zhang, Renjie Pi, Fangzhao Wu, Qifeng Chen, Xing Xie, and Sunghun Kim. 2023. 
Optimizing server-side aggregation for robust federated learning via subspace training. In Proceedings of the ACM SIGKDD Conference on Knowledge Discovery and Data Mining."},{"key":"e_1_3_1_34_2","first-page":"118","volume-title":"Proceedings of the International Conference on Neural Information Processing Systems","author":"Blanchard Peva","year":"2017","unstructured":"Peva Blanchard, El Mahdi El Mhamdi, Rachid Guerraoui, and Julien Stainer. 2017. Machine learning with adversaries: Byzantine tolerant gradient descent. In Proceedings of the International Conference on Neural Information Processing Systems. Long Beach, CA, USA, 118\u2013128."},{"key":"e_1_3_1_35_2","first-page":"3521","volume-title":"Proceedings of the 35th International Conference on Machine Learning (Proceedings of Machine Learning Research)","volume":"80","author":"Mhamdi El Mahdi El","year":"2018","unstructured":"El Mahdi El Mhamdi, Rachid Guerraoui, and S\u00e9bastien Rouault. 2018. The hidden vulnerability of distributed learning in Byzantium. In Proceedings of the 35th International Conference on Machine Learning (Proceedings of Machine Learning Research). Jennifer Dy and Andreas Krause (Eds.), Vol. 80, PMLR, Stockholm, SWEDEN, 3521\u20133530. Retrieved from https:\/\/proceedings.mlr.press\/v80\/mhamdi18a.html"},{"key":"e_1_3_1_36_2","first-page":"5636","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Dong Yin","year":"2018","unstructured":"Yin Dong, Yudong Chen, Kannan Ramchandran, and Peter L. Bartlett. 2018. Byzantine-robust distributed learning: Towards optimal statistical rates. In Proceedings of the International Conference on Machine Learning. 5636\u20135645."},{"key":"e_1_3_1_37_2","first-page":"3622","volume-title":"Proceedings of the Advances in Neural Information Processing Systems.","volume":"34","author":"Ozkara Kaan","year":"2021","unstructured":"Kaan Ozkara, Navjot Singh, Deepesh Data, and Suhas Diggavi. 2021. 
QuPeD: Quantized personalization via distillation with applications to federated learning. In Proceedings of the Advances in Neural Information Processing Systems.M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan (Eds.), Vol. 34, Curran Associates, Inc., 3622\u20133634. Retrieved from https:\/\/proceedings.neurips.cc\/paper_files\/paper\/2021\/file\/1dba3025b159cd9354da65e2d0436a31-Paper.pdf"},{"key":"e_1_3_1_38_2","first-page":"2089","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Collins Liam","year":"2021","unstructured":"Liam Collins, Hamed Hassani, Aryan Mokhtari, and Sanjay Shakkottai. 2021. Exploiting shared representations for personalized federated learning. In Proceedings of the International Conference on Machine Learning. PMLR, 2089\u20132099."},{"key":"e_1_3_1_39_2","first-page":"19586","volume-title":"Proceedings of the Advances in Neural Information Processing Systems","author":"Ghosh Avishek","year":"2020","unstructured":"Avishek Ghosh, Jichan Chung, Dong Yin, and Kannan Ramchandran. 2020. An efficient framework for clustered federated learning. In Proceedings of the Advances in Neural Information Processing Systems. 19586\u201319597."},{"key":"e_1_3_1_40_2","volume-title":"Proceedings of the 37th International Conference on Machine Learning.","author":"Karimireddy Sai Praneeth","year":"2020","unstructured":"Sai Praneeth Karimireddy, Satyen Kale, Mehryar Mohri, Sashank J. Reddi, Sebastian U. Stich, and Ananda Theertha Suresh. 2020. SCAFFOLD: Stochastic controlled averaging for federated learning. 
In Proceedings of the 37th International Conference on Machine Learning.JMLR.org, Article 476, 12 pages."},{"key":"e_1_3_1_41_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPDS.2020.3009406"},{"key":"e_1_3_1_42_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPDS.2021.3138848"},{"key":"e_1_3_1_43_2","doi-asserted-by":"publisher","DOI":"10.1109\/TVT.2023.3265366"},{"key":"e_1_3_1_44_2","doi-asserted-by":"publisher","DOI":"10.1109\/BigData.2018.8622598"},{"key":"e_1_3_1_45_2","doi-asserted-by":"publisher","DOI":"10.1109\/TII.2022.3145010"},{"key":"e_1_3_1_46_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPDS.2020.3044223"},{"key":"e_1_3_1_47_2","first-page":"8084","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"Tastan Nurbek","year":"2023","unstructured":"Nurbek Tastan and Karthik Nandakumar. 2023. CaPriDe learning: Confidential and private decentralized learning based on encryption-friendly distillation loss. In Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition. IEEE, Vancouver, BC, Canada, 8084\u20138092."},{"key":"e_1_3_1_48_2","doi-asserted-by":"publisher","DOI":"10.1109\/TMC.2022.3230712"},{"key":"e_1_3_1_49_2","first-page":"7663","volume-title":"Proceedings of the International Conference on Neural Information Processing Systems","author":"Tang Hanlin","year":"2018","unstructured":"Hanlin Tang, Shaoduo Gan, Ce Zhang, Tong Zhang, and Ji Liu. 2018. Communication compression for decentralized training. In Proceedings of the International Conference on Neural Information Processing Systems. Curran Associates, Inc., Montr\u00e9al CANADA, 7663\u20137673. 
Retrieved from https:\/\/proceedings.neurips.cc\/paper_files\/paper\/2018\/file\/44feb0096faa8326192570788b38c1d1-Paper.pdf"}],"container-title":["ACM Transactions on Sensor Networks"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3730587","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3730587","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T01:57:19Z","timestamp":1750298239000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3730587"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,5,21]]},"references-count":48,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2025,5,31]]}},"alternative-id":["10.1145\/3730587"],"URL":"https:\/\/doi.org\/10.1145\/3730587","relation":{},"ISSN":["1550-4859","1550-4867"],"issn-type":[{"value":"1550-4859","type":"print"},{"value":"1550-4867","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,5,21]]},"assertion":[{"value":"2024-05-16","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-04-13","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-05-21","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}