{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,5,13]],"date-time":"2026-05-13T03:37:23Z","timestamp":1778643443403,"version":"3.51.4"},"reference-count":45,"publisher":"MDPI AG","issue":"3","license":[{"start":{"date-parts":[[2021,3,14]],"date-time":"2021-03-14T00:00:00Z","timestamp":1615680000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"Natural Science Foundation of China; Key Research and Development Plan Project of Zhejiang Province","award":["61803135, 61702150; 2017C01065"],"award-info":[{"award-number":["61803135, 61702150; 2017C01065"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Future Internet"],"abstract":"<jats:p>Federated learning is a novel distributed learning framework that enables thousands of participants to collaboratively construct a deep learning model. To protect the confidentiality of the training data, the information shared between the server and the participants is limited to model parameters. However, this setting is vulnerable to model poisoning attacks, since the participants have permission to modify the model parameters. In this paper, we perform a systematic investigation of such threats in federated learning and propose a novel optimization-based model poisoning attack. Unlike existing methods, we primarily focus on the effectiveness, persistence, and stealth of attacks. 
Numerical experiments demonstrate that the proposed method not only achieves a high attack success rate but is also stealthy enough to bypass two existing defense methods.<\/jats:p>","DOI":"10.3390\/fi13030073","type":"journal-article","created":{"date-parts":[[2021,3,14]],"date-time":"2021-03-14T22:13:10Z","timestamp":1615759990000},"page":"73","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":151,"title":["Deep Model Poisoning Attack on Federated Learning"],"prefix":"10.3390","volume":"13","author":[{"given":"Xingchen","family":"Zhou","sequence":"first","affiliation":[{"name":"School of Cyberspace, Hangzhou Dianzi University, Hangzhou 310018, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9332-5258","authenticated-orcid":false,"given":"Ming","family":"Xu","sequence":"additional","affiliation":[{"name":"School of Cyberspace, Hangzhou Dianzi University, Hangzhou 310018, China"}]},{"given":"Yiming","family":"Wu","sequence":"additional","affiliation":[{"name":"School of Cyberspace, Hangzhou Dianzi University, Hangzhou 310018, China"}]},{"given":"Ning","family":"Zheng","sequence":"additional","affiliation":[{"name":"School of Cyberspace, Hangzhou Dianzi University, Hangzhou 310018, China"},{"name":"School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou 310018, China"}]}],"member":"1968","published-online":{"date-parts":[[2021,3,14]]},"reference":[{"key":"ref_1","unstructured":"McMahan, B., Moore, E., Ramage, D., Hampson, S., and Arcas, B.A. (2017, January 20\u201322). Communication-Efficient Learning of Deep Networks from Decentralized Data. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, Fort Lauderdale, FL, USA."},{"key":"ref_2","unstructured":"Konecn\u00fd, J., McMahan, H.B., Yu, F.X., Richt\u00e1rik, P., Suresh, A.T., and Bacon, D. (2016). Federated Learning: Strategies for Improving Communication Efficiency. 
arXiv."},{"key":"ref_3","unstructured":"Ramaswamy, S., Mathews, R., Rao, K., and Beaufays, F. (2019). Federated Learning for Emoji Prediction in a Mobile Keyboard. arXiv."},{"key":"ref_4","unstructured":"Bagdasaryan, E., Veit, A., Hua, Y., Estrin, D., and Shmatikov, V. (2020, January 26\u201328). How To Backdoor Federated Learning. Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics, AISTATS 2020, Palermo, Italy."},{"key":"ref_5","unstructured":"Xie, C., Huang, K., Chen, P., and Li, B. (2020, January 30). DBA: Distributed Backdoor Attacks against Federated Learning. Proceedings of the 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia."},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"47230","DOI":"10.1109\/ACCESS.2019.2909068","article-title":"BadNets: Evaluating Backdooring Attacks on Deep Neural Networks","volume":"7","author":"Gu","year":"2019","journal-title":"IEEE Access"},{"key":"ref_7","unstructured":"Bhagoji, A.N., Chakraborty, S., Mittal, P., and Calo, S. (2018, January 3\u20138). Model poisoning attacks in federated learning. Proceedings of the Workshop on Security in Machine Learning (SecML), Collocated with the 32nd Conference on Neural Information Processing Systems (NeurIPS\u201918), Montr\u00e9al, QC, Canada."},{"key":"ref_8","unstructured":"Bhagoji, A.N., Chakraborty, S., Mittal, P., and Calo, S.B. (2019, January 9\u201315). Analyzing Federated Learning through an Adversarial Lens. Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, USA."},{"key":"ref_9","unstructured":"Fang, M., Cao, X., Jia, J., and Gong, N.Z. (2020, January 12\u201314). Local Model Poisoning Attacks to Byzantine-Robust Federated Learning. 
Proceedings of the 29th USENIX Security Symposium, USENIX Security, Boston, MA, USA."},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Mu\u00f1oz-Gonz\u00e1lez, L., Biggio, B., Demontis, A., Paudice, A., Wongrassamee, V., Lupu, E.C., and Roli, F. (2017, January 3). Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization. Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, Dallas, TX, USA.","DOI":"10.1145\/3128572.3140451"},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Jagielski, M., Oprea, A., Biggio, B., Liu, C., and Nita-Rotaru, C. (2018, January 20\u201324). Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning. Proceedings of the 2018 IEEE Symposium on Security and Privacy, SP 2018, Proceedings, San Francisco, CA, USA.","DOI":"10.1109\/SP.2018.00057"},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Chen, Y., Su, L., and Xu, J. (2018, January 18\u201322). Distributed Statistical Machine Learning in Adversarial Settings: Byzantine Gradient Descent. Proceedings of the Abstracts of the 2018 ACM International Conference on Measurement and Modeling of Computer Systems, Irvine, CA, USA.","DOI":"10.1145\/3219617.3219655"},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"12:1","DOI":"10.1145\/3298981","article-title":"Federated Machine Learning: Concept and Applications","volume":"10","author":"Yang","year":"2019","journal-title":"ACM Trans. Intell. Syst. Technol."},{"key":"ref_14","unstructured":"Kairouz, P., McMahan, H.B., Avent, B., Bellet, A., Bennis, M., Bhagoji, A.N., Bonawitz, K., Charles, Z., Cormode, G., and Cummings, R. (2019). Advances and open problems in federated learning. arXiv."},{"key":"ref_15","first-page":"50","article-title":"Federated learning: Challenges, methods, and future directions","volume":"37","author":"Li","year":"2020","journal-title":"IEEE Signal Process. 
Mag."},{"key":"ref_16","unstructured":"Blanchard, P., Mhamdi, E.M.E., Guerraoui, R., and Stainer, J. (2017, January 4\u20139). Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent. Proceedings of the Annual Conference on Neural Information Processing Systems 2017, Long Beach, CA, USA."},{"key":"ref_17","unstructured":"Yin, D., Chen, Y., Kannan, R., and Bartlett, P. (2018, January 10\u201315). Byzantine-robust distributed learning: Towards optimal statistical rates. Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden."},{"key":"ref_18","unstructured":"Li, L., Xu, W., Chen, T., Giannakis, G.B., and Ling, Q. (February, January 27). RSA: Byzantine-Robust Stochastic Aggregation Methods for Distributed Learning from Heterogeneous Datasets. Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence, Honolulu, HI, USA."},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"4583","DOI":"10.1109\/TSP.2020.3012952","article-title":"Federated variance-reduced stochastic gradient descent with robustness to byzantine attacks","volume":"68","author":"Wu","year":"2020","journal-title":"IEEE Trans. Signal Process."},{"key":"ref_20","unstructured":"Fung, C., Yoon, C.J.M., and Beschastnikh, I. (2018). Mitigating Sybils in Federated Learning Poisoning. arXiv."},{"key":"ref_21","unstructured":"Pillutla, V.K., Kakade, S.M., and Harchaoui, Z. (2019). Robust Aggregation for Federated Learning. arXiv."},{"key":"ref_22","unstructured":"Sun, Z., Kairouz, P., Suresh, A.T., and McMahan, H.B. (2019). Can You Really Backdoor Federated Learning?. arXiv."},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Shokri, R., and Shmatikov, V. (2015, January 12\u201316). Privacy-preserving deep learning. 
Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, Denver, CO, USA.","DOI":"10.1145\/2810103.2813687"},{"key":"ref_24","unstructured":"Hardy, S., Henecka, W., Ivey-Law, H., Nock, R., Patrini, G., Smith, G., and Thorne, B. (2017). Private federated learning on vertically partitioned data via entity resolution and additively homomorphic encryption. arXiv."},{"key":"ref_25","unstructured":"Guyon, I., von Luxburg, U., Bengio, S., Wallach, H.M., Fergus, R., Vishwanathan, S.V.N., and Garnett, R. (2017, January 4\u20139). Federated Multi-Task Learning. Proceedings of the Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, Long Beach, CA, USA."},{"key":"ref_26","unstructured":"Mohri, M., Sivek, G., and Suresh, A.T. (2019, January 9\u201315). Agnostic federated learning. Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, USA."},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Ahn, J.H., Simeone, O., and Kang, J. (2020, January 4\u20138). Cooperative learning via federated distillation over fading channels. Proceedings of the ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain.","DOI":"10.1109\/ICASSP40776.2020.9053448"},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Gu, B., Dang, Z., Li, X., and Huang, H. (2020, January 23\u201327). Federated doubly stochastic kernel learning for vertically partitioned data. Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Virtual Event, CA, USA.","DOI":"10.1145\/3394486.3403298"},{"key":"ref_29","unstructured":"Biggio, B., Nelson, B., and Laskov, P. (July, January 27). Poisoning Attacks against Support Vector Machines. 
Proceedings of the 29th International Conference on Machine Learning, Edinburgh, Scotland, UK."},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Mei, S., and Zhu, X. (2015, January 28\u201330). Using Machine Teaching to Identify Optimal Training-Set Attacks on Machine Learners. Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, Austin, TX, USA.","DOI":"10.1609\/aaai.v29i1.9569"},{"key":"ref_31","unstructured":"Shafahi, A., Huang, W.R., Najibi, M., Suciu, O., Studer, C., and Dumitras, T. (2018, January 3\u20138). Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks. Proceedings of the Annual Conference on Neural Information Processing Systems 2018, Montr\u00e9al, QC, Canada."},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Fang, M., Gong, N.Z., and Liu, J. (2020, January 20\u201324). Influence function based data poisoning attacks to top-n recommender systems. Proceedings of the Web Conference 2020, Taipei, Taiwan.","DOI":"10.1145\/3366423.3380072"},{"key":"ref_33","unstructured":"Fung, C., Yoon, C.J., and Beschastnikh, I. (2020, January 14\u201318). The Limitations of Federated Learning in Sybil Settings. Proceedings of the 23rd International Symposium on Research in Attacks, Intrusions and Defenses (RAID 2020), San Sebastian, Spain."},{"key":"ref_34","unstructured":"Mhamdi, E.M.E., Guerraoui, R., and Rouault, S. (2018, January 10\u201315). The Hidden Vulnerability of Distributed Learning in Byzantium. Proceedings of the 35th International Conference on Machine Learning, Stockholmsm\u00e4ssan, Stockholm, Sweden."},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Shmelkov, K., Schmid, C., and Alahari, K. (2017, January 22\u201329). Incremental learning of object detectors without catastrophic forgetting. 
Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy.","DOI":"10.1109\/ICCV.2017.368"},{"key":"ref_36","unstructured":"Lee, S.W., Kim, J.H., Jun, J., Ha, J.W., and Zhang, B.T. (2017). Overcoming Catastrophic Forgetting by Incremental Moment Matching. arXiv."},{"key":"ref_37","unstructured":"Li, X., Zhou, Y., Wu, T., Socher, R., and Xiong, C. (2019, January 10\u201315). Learn to grow: A continual structure learning framework for overcoming catastrophic forgetting. Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, USA."},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Aljundi, R., Kelchtermans, K., and Tuytelaars, T. (2019, January 15\u201321). Task-free continual learning. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.01151"},{"key":"ref_39","doi-asserted-by":"crossref","first-page":"14","DOI":"10.1016\/1049-9660(92)90003-L","article-title":"A fast algorithm for active contours and curvature estimation","volume":"55","author":"Williams","year":"1992","journal-title":"CVGIP Image Underst."},{"key":"ref_40","doi-asserted-by":"crossref","first-page":"3521","DOI":"10.1073\/pnas.1611835114","article-title":"Overcoming catastrophic forgetting in neural networks","volume":"114","author":"Kirkpatrick","year":"2017","journal-title":"Proc. Natl. Acad. Sci. USA"},{"key":"ref_41","unstructured":"Zenke, F., Poole, B., and Ganguli, S. (2017, January 6\u201311). Continual Learning Through Synaptic Intelligence. Proceedings of the 34th International Conference on Machine Learning, Sydney, NSW, Australia."},{"key":"ref_42","first-page":"2287","article-title":"Stereo matching by training a convolutional neural network to compare image patches","volume":"17","author":"Zbontar","year":"2016","journal-title":"J. Mach. Learn. 
Res."},{"key":"ref_43","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2016, January 27\u201330). Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.90"},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Cao, X., Fang, M., Liu, J., and Gong, N.Z. (2020). FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping. arXiv.","DOI":"10.14722\/ndss.2021.24434"},{"key":"ref_45","doi-asserted-by":"crossref","unstructured":"Cao, X., Jia, J., and Gong, N.Z. (2021). Provably Secure Federated Learning against Malicious Clients. arXiv.","DOI":"10.1609\/aaai.v35i8.16849"}],"container-title":["Future Internet"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1999-5903\/13\/3\/73\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T05:35:37Z","timestamp":1760160937000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1999-5903\/13\/3\/73"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,3,14]]},"references-count":45,"journal-issue":{"issue":"3","published-online":{"date-parts":[[2021,3]]}},"alternative-id":["fi13030073"],"URL":"https:\/\/doi.org\/10.3390\/fi13030073","relation":{},"ISSN":["1999-5903"],"issn-type":[{"value":"1999-5903","type":"electronic"}],"subject":[],"published":{"date-parts":[[2021,3,14]]}}}