{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,31]],"date-time":"2026-03-31T11:49:35Z","timestamp":1774957775561,"version":"3.50.1"},"reference-count":37,"publisher":"Association for Computing Machinery (ACM)","issue":"7","license":[{"start":{"date-parts":[[2024,6,19]],"date-time":"2024-06-19T00:00:00Z","timestamp":1718755200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["62202170, 62377012"],"award-info":[{"award-number":["62202170, 62377012"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"name":"Open Research Fund of KLATASDS-MOE"},{"DOI":"10.13039\/501100004106","name":"ECNU","doi-asserted-by":"crossref","id":[{"id":"10.13039\/501100004106","id-type":"DOI","asserted-by":"crossref"}]},{"name":"CCF-AFSG Research Fund"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Knowl. Discov. Data"],"published-print":{"date-parts":[[2024,8,31]]},"abstract":"<jats:p>\n            In federated learning (FL), malicious clients could manipulate the predictions of the trained model through backdoor attacks, posing a significant threat to the security of FL systems. Existing research primarily focuses on backdoor attacks and defenses within the generic federated learning scenario, where all clients collaborate to train a single global model. A recent study conducted by Qin et\u00a0al. [\n            <jats:xref ref-type=\"bibr\">24<\/jats:xref>\n            ] marks the initial exploration of backdoor attacks within the personalized federated learning (pFL) scenario, where each client constructs a personalized model based on its local data. 
Notably, the study demonstrates that pFL methods with\n            <jats:italic>parameter decoupling<\/jats:italic>\n            can significantly enhance robustness against backdoor attacks. However, in this article, we reveal that pFL methods with parameter decoupling are still vulnerable to backdoor attacks. The resistance of pFL methods with parameter decoupling is attributed to the heterogeneous classifiers between malicious clients and benign counterparts. We analyze two direct causes of the heterogeneous classifiers: (1) data heterogeneity inherently exists among clients and (2) poisoning by malicious clients further exacerbates the data heterogeneity. To address these issues, we propose a two-pronged attack method, BapFL, which comprises two simple yet effective strategies: (1) poisoning only the feature encoder while keeping the classifier fixed and (2) diversifying the classifier by introducing noise to simulate the classifiers of the benign clients. Extensive experiments on three benchmark datasets under varying conditions demonstrate the effectiveness of our proposed attack. Additionally, we evaluate the effectiveness of six widely used defense methods and find that BapFL still poses a significant threat even in the presence of the best defense, Multi-Krum. We hope to inspire further research on attack and defense strategies in pFL scenarios. 
The code is available at:\n            <jats:ext-link xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" xlink:href=\"https:\/\/github.com\/BapFL\/code\">https:\/\/github.com\/BapFL\/code<\/jats:ext-link>\n          <\/jats:p>","DOI":"10.1145\/3649316","type":"journal-article","created":{"date-parts":[[2024,2,23]],"date-time":"2024-02-23T12:02:54Z","timestamp":1708689774000},"page":"1-17","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":8,"title":["BapFL: You can Backdoor Personalized Federated Learning"],"prefix":"10.1145","volume":"18","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-0169-457X","authenticated-orcid":false,"given":"Tiandi","family":"Ye","sequence":"first","affiliation":[{"name":"School of Data Science and Engineering, East China Normal University, Shanghai, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0325-1705","authenticated-orcid":false,"given":"Cen","family":"Chen","sequence":"additional","affiliation":[{"name":"School of Data Science and Engineering &amp; KLATASDS-MOE, East China Normal University, Shanghai, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6686-6603","authenticated-orcid":false,"given":"Yinggui","family":"Wang","sequence":"additional","affiliation":[{"name":"Ant Group, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0945-145X","authenticated-orcid":false,"given":"Xiang","family":"Li","sequence":"additional","affiliation":[{"name":"School of Data Science and Engineering, East China Normal University, Shanghai, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5603-2680","authenticated-orcid":false,"given":"Ming","family":"Gao","sequence":"additional","affiliation":[{"name":"School of Data Science and Engineering &amp; KLATASDS-MOE, East China Normal University, Shanghai, China"}]}],"member":"320","published-online":{"date-parts":[[2024,6,19]]},"reference":[{"key":"e_1_3_1_2_2","article-title":"Federated learning with personalization 
layers","author":"Arivazhagan Manoj Ghuhan","year":"2019","unstructured":"Manoj Ghuhan Arivazhagan, Vinay Aggarwal, Aaditya Kumar Singh, and Sunav Choudhary. 2019. Federated learning with personalization layers. arXiv preprint arXiv:1912.00818 (2019).","journal-title":"arXiv preprint arXiv:1912.00818"},{"key":"e_1_3_1_3_2","first-page":"2938","volume-title":"International Conference on Artificial Intelligence and Statistics","author":"Bagdasaryan Eugene","year":"2020","unstructured":"Eugene Bagdasaryan, Andreas Veit, Yiqing Hua, Deborah Estrin, and Vitaly Shmatikov. 2020. How to backdoor federated learning. In International Conference on Artificial Intelligence and Statistics. PMLR, 2938\u20132948."},{"key":"e_1_3_1_4_2","article-title":"Machine learning with adversaries: Byzantine tolerant gradient descent","volume":"30","author":"Blanchard Peva","year":"2017","unstructured":"Peva Blanchard, El Mahdi El Mhamdi, Rachid Guerraoui, and Julien Stainer. 2017. Machine learning with adversaries: Byzantine tolerant gradient descent. Adv. Neural Inf. Process. Syst. 30 (2017).","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"e_1_3_1_5_2","doi-asserted-by":"publisher","DOI":"10.1145\/3511808.3557378"},{"key":"e_1_3_1_6_2","volume-title":"34th Conference on Neural Information Processing Systems","author":"Chen C.-L.","year":"2020","unstructured":"C.-L. Chen, Leana Golubchik, and Marco Paolieri. 2020. Backdoor attacks on federated meta-learning. In 34th Conference on Neural Information Processing Systems."},{"key":"e_1_3_1_7_2","article-title":"On bridging generic and personalized federated learning for image classification","author":"Chen Hong-You","year":"2021","unstructured":"Hong-You Chen and Wei-Lun Chao. 2021. On bridging generic and personalized federated learning for image classification. 
arXiv preprint arXiv:2107.00778 (2021).","journal-title":"arXiv preprint arXiv:2107.00778"},{"key":"e_1_3_1_8_2","first-page":"2089","volume-title":"International Conference on Machine Learning","author":"Collins Liam","year":"2021","unstructured":"Liam Collins, Hamed Hassani, Aryan Mokhtari, and Sanjay Shakkottai. 2021. Exploiting shared representations for personalized federated learning. In International Conference on Machine Learning. PMLR, 2089\u20132099."},{"key":"e_1_3_1_9_2","first-page":"19586","article-title":"An efficient framework for clustered federated learning","volume":"33","author":"Ghosh Avishek","year":"2020","unstructured":"Avishek Ghosh, Jichan Chung, Dong Yin, and Kannan Ramchandran. 2020. An efficient framework for clustered federated learning. Adv. Neural Inf. Process. Syst. 33 (2020), 19586\u201319597.","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"e_1_3_1_10_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2019.2909068"},{"key":"e_1_3_1_11_2","first-page":"448","volume-title":"International Conference on Machine Learning","author":"Ioffe Sergey","year":"2015","unstructured":"Sergey Ioffe and Christian Szegedy. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning. PMLR, 448\u2013456."},{"key":"e_1_3_1_12_2","doi-asserted-by":"publisher","unstructured":"Peter Kairouz H. Brendan McMahan Brendan Avent Aur\u00e9lien Bellet Mehdi Bennis Arjun Nitin Bhagoji Kallista Bonawitz Zachary Charles Graham Cormode Rachel Cummings Rafael G. L. D\u2019Oliveira Hubert Eichner Salim El Rouayheb David Evans Josh Gardner Zachary Garrett Adri\u00e0 Gasc\u00f3n Badih Ghazi Phillip B. 
Gibbons Marco Gruteser Zaid Harchaoui Chaoyang He Lie He Zhouyuan Huo Ben Hutchinson Justin Hsu Martin Jaggi Tara Javidi Gauri Joshi Mikhail Khodak Jakub Konecn\u00fd Aleksandra Korolova Farinaz Koushanfar Sanmi Koyejo Tancr\u00e8de Lepoint Yang Liu Prateek Mittal Mehryar Mohri Richard Nock Ayfer \u00d6zg\u00fcr Rasmus Pagh Hang Qi Daniel Ramage Ramesh Raskar Mariana Raykova Dawn Song Weikang Song Sebastian U. Stich Ziteng Sun Ananda Theertha Suresh Florian Tram\u00e8r Praneeth Vepakomma Jianyu Wang Li Xiong Zheng Xu Qiang Yang Felix X. Yu Han Yu and Sen Zhao. 2021. Advances and open problems in federated learning. Foundations and Trends\u00ae in Machine Learning 14 1\u20132 (2021) 1\u2013210. DOI:10.1561\/2200000083","DOI":"10.1561\/2200000083"},{"key":"e_1_3_1_13_2","first-page":"5132","volume-title":"International Conference on Machine Learning","author":"Karimireddy Sai Praneeth","year":"2020","unstructured":"Sai Praneeth Karimireddy, Satyen Kale, Mehryar Mohri, Sashank Reddi, Sebastian Stich, and Ananda Theertha Suresh. 2020. SCAFFOLD: Stochastic controlled averaging for federated learning. In International Conference on Machine Learning. PMLR, 5132\u20135143."},{"key":"e_1_3_1_14_2","article-title":"CIFAR-10 Dataset","author":"Krizhevsky Alex","year":"2009","unstructured":"Alex Krizhevsky. 2009. CIFAR-10 Dataset. Retrieved from http:\/\/www.cs.toronto.edu\/kriz\/cifar.html","journal-title":"R"},{"key":"e_1_3_1_15_2","doi-asserted-by":"publisher","DOI":"10.1109\/5.726791"},{"key":"e_1_3_1_16_2","first-page":"6357","volume-title":"International Conference on Machine Learning","author":"Li Tian","year":"2021","unstructured":"Tian Li, Shengyuan Hu, Ahmad Beirami, and Virginia Smith. 2021. Ditto: Fair and robust federated learning through personalization. In International Conference on Machine Learning. 
PMLR, 6357\u20136368."},{"key":"e_1_3_1_17_2","first-page":"429","article-title":"Federated optimization in heterogeneous networks","volume":"2","author":"Li Tian","year":"2020","unstructured":"Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, and Virginia Smith. 2020. Federated optimization in heterogeneous networks. Proc. Mach. Learn. Syst. 2 (2020), 429\u2013450.","journal-title":"Proc. Mach. Learn. Syst."},{"key":"e_1_3_1_18_2","article-title":"FedBN: Federated learning on non-IID features via local batch normalization","author":"Li Xiaoxiao","year":"2021","unstructured":"Xiaoxiao Li, Meirui Jiang, Xiaofei Zhang, Michael Kamp, and Qi Dou. 2021. FedBN: Federated learning on non-IID features via local batch normalization. arXiv preprint arXiv:2102.07623 (2021).","journal-title":"arXiv preprint arXiv:2102.07623"},{"key":"e_1_3_1_19_2","doi-asserted-by":"publisher","unstructured":"Yiming Li Yong Jiang Zhifeng Li and Shu-Tao Xia. 2024. Backdoor learning: A survey. IEEE Transactions on Neural Networks and Learning Systems 35 1 (2024) 5\u201322. DOI:10.1109\/TNNLS.2022.3182979","DOI":"10.1109\/TNNLS.2022.3182979"},{"key":"e_1_3_1_20_2","article-title":"Threats to federated learning: A survey","author":"Lyu Lingjuan","year":"2020","unstructured":"Lingjuan Lyu, Han Yu, and Qiang Yang. 2020. Threats to federated learning: A survey. arXiv preprint arXiv:2003.02133 (2020).","journal-title":"arXiv preprint arXiv:2003.02133"},{"key":"e_1_3_1_21_2","article-title":"Three approaches for personalization with applications to federated learning","author":"Mansour Yishay","year":"2020","unstructured":"Yishay Mansour, Mehryar Mohri, Jae Ro, and Ananda Theertha Suresh. 2020. Three approaches for personalization with applications to federated learning. 
arXiv preprint arXiv:2002.10619 (2020).","journal-title":"arXiv preprint arXiv:2002.10619"},{"key":"e_1_3_1_22_2","first-page":"1273","volume-title":"Artificial Intelligence and Statistics","author":"McMahan Brendan","year":"2017","unstructured":"Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. 2017. Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics. PMLR, 1273\u20131282."},{"key":"e_1_3_1_23_2","first-page":"38831","article-title":"FedSR: A simple and effective domain generalization method for federated learning","volume":"35","author":"Nguyen A. Tuan","year":"2022","unstructured":"A. Tuan Nguyen, Philip Torr, and Ser Nam Lim. 2022. FedSR: A simple and effective domain generalization method for federated learning. Adv. Neural Inf. Process. Syst. 35 (2022), 38831\u201338843.","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"e_1_3_1_24_2","volume-title":"International Conference on Learning Representations","author":"Oh Jaehoon","year":"2021","unstructured":"Jaehoon Oh, SangMook Kim, and Se-Young Yun. 2021. FedBABU: Toward enhanced representation for federated image classification. In International Conference on Learning Representations."},{"key":"e_1_3_1_25_2","article-title":"Revisiting personalized federated learning: Robustness against backdoor attacks","author":"Qin Zeyu","year":"2023","unstructured":"Zeyu Qin, Liuyi Yao, Daoyuan Chen, Yaliang Li, Bolin Ding, and Minhao Cheng. 2023. Revisiting personalized federated learning: Robustness against backdoor attacks. arXiv preprint arXiv:2302.01677 (2023).","journal-title":"arXiv preprint arXiv:2302.01677"},{"key":"e_1_3_1_26_2","doi-asserted-by":"publisher","DOI":"10.1109\/TNNLS.2020.3015958"},{"key":"e_1_3_1_27_2","article-title":"Very deep convolutional networks for large-scale image recognition","author":"Simonyan Karen","year":"2014","unstructured":"Karen Simonyan and Andrew Zisserman. 2014. 
Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014).","journal-title":"arXiv preprint arXiv:1409.1556"},{"key":"e_1_3_1_28_2","article-title":"Can you really backdoor federated learning?","author":"Sun Ziteng","year":"2019","unstructured":"Ziteng Sun, Peter Kairouz, Ananda Theertha Suresh, and H. Brendan McMahan. 2019. Can you really backdoor federated learning? arXiv preprint arXiv:1911.07963 (2019).","journal-title":"arXiv preprint arXiv:1911.07963"},{"key":"e_1_3_1_29_2","first-page":"21394","article-title":"Personalized federated learning with moreau envelopes","volume":"33","author":"Dinh Canh T.","year":"2020","unstructured":"Canh T. Dinh, Nguyen Tran, and Josh Nguyen. 2020. Personalized federated learning with moreau envelopes. Adv. Neural Inf. Process. Syst. 33 (2020), 21394\u201321405.","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"e_1_3_1_30_2","doi-asserted-by":"publisher","unstructured":"Alysa Ziying Tan Han Yu Lizhen Cui and Qiang Yang. 2023. Towards personalized federated learning. IEEE Transactions on Neural Networks and Learning Systems 34 12 (2023) 9587\u20139603. DOI:10.1109\/TNNLS.2022.3160699","DOI":"10.1109\/TNNLS.2022.3160699"},{"key":"e_1_3_1_31_2","first-page":"16070","article-title":"Attack of the tails: Yes, you really can backdoor federated learning","volume":"33","author":"Wang Hongyi","year":"2020","unstructured":"Hongyi Wang, Kartik Sreenivasan, Shashank Rajput, Harit Vishwakarma, Saurabh Agarwal, Jy-yong Sohn, Kangwook Lee, and Dimitris Papailiopoulos. 2020. Attack of the tails: Yes, you really can backdoor federated learning. Adv. Neural Inf. Process. Syst. 33 (2020), 16070\u201316084.","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"e_1_3_1_32_2","article-title":"Fashion-MNIST: A novel image dataset for benchmarking machine learning algorithms","author":"Xiao Han","year":"2017","unstructured":"Han Xiao, Kashif Rasul, and Roland Vollgraf. 2017. 
Fashion-MNIST: A novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747 (2017).","journal-title":"arXiv preprint arXiv:1708.07747"},{"key":"e_1_3_1_33_2","volume-title":"International Conference on Learning Representations","author":"Xie Chulin","year":"2020","unstructured":"Chulin Xie, Keli Huang, Pin-Yu Chen, and Bo Li. 2020. DBA: Distributed backdoor attacks against federated learning. In International Conference on Learning Representations."},{"key":"e_1_3_1_34_2","first-page":"677","volume-title":"International Conference on Database Systems for Advanced Applications","author":"Ye Tiandi","year":"2023","unstructured":"Tiandi Ye, Senhui Wei, Jamie Cui, Cen Chen, Yingnan Fu, and Ming Gao. 2023. Robust clustered federated learning. In International Conference on Database Systems for Advanced Applications. Springer, 677\u2013692."},{"key":"e_1_3_1_35_2","first-page":"5650","volume-title":"International Conference on Machine Learning","author":"Yin Dong","year":"2018","unstructured":"Dong Yin, Yudong Chen, Ramchandran Kannan, and Peter Bartlett. 2018. Byzantine-robust distributed learning: Towards optimal statistical rates. In International Conference on Machine Learning. PMLR, 5650\u20135659."},{"key":"e_1_3_1_36_2","unstructured":"Kaiyuan Zhang Guanhong Tao Qiuling Xu Siyuan Cheng Shengwei An Yingqi Liu Shiwei Feng Guangyu Shen Pin-Yu Chen Shiqing Ma and Xiangyu Zhang. 2023. FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning."},{"key":"e_1_3_1_37_2","first-page":"26429","volume-title":"International Conference on Machine Learning","author":"Zhang Zhengming","year":"2022","unstructured":"Zhengming Zhang, Ashwinee Panda, Linyue Song, Yaoqing Yang, Michael Mahoney, Prateek Mittal, Ramchandran Kannan, and Joseph Gonzalez. 2022. Neurotoxin: Durable backdoors in federated learning. In International Conference on Machine Learning. 
PMLR, 26429\u201326446."},{"key":"e_1_3_1_38_2","article-title":"Backdoor federated learning by poisoning backdoor-critical layers","author":"Zhuang Haomin","year":"2023","unstructured":"Haomin Zhuang, Mingxian Yu, Hao Wang, Yang Hua, Jian Li, and Xu Yuan. 2023. Backdoor federated learning by poisoning backdoor-critical layers. arXiv preprint arXiv:2308.04466 (2023).","journal-title":"arXiv preprint arXiv:2308.04466"}],"container-title":["ACM Transactions on Knowledge Discovery from Data"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3649316","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3649316","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T01:17:47Z","timestamp":1750295867000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3649316"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,6,19]]},"references-count":37,"journal-issue":{"issue":"7","published-print":{"date-parts":[[2024,8,31]]}},"alternative-id":["10.1145\/3649316"],"URL":"https:\/\/doi.org\/10.1145\/3649316","relation":{},"ISSN":["1556-4681","1556-472X"],"issn-type":[{"value":"1556-4681","type":"print"},{"value":"1556-472X","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,6,19]]},"assertion":[{"value":"2023-09-16","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-01-29","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-06-19","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}