{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,5]],"date-time":"2025-12-05T00:05:47Z","timestamp":1764893147862,"version":"3.46.0"},"reference-count":36,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2025,12,5]],"date-time":"2025-12-05T00:00:00Z","timestamp":1764892800000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,12,5]],"date-time":"2025-12-05T00:00:00Z","timestamp":1764892800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"name":"Anhui Provincial Science Research Project","award":["2024AH051527"],"award-info":[{"award-number":["2024AH051527"]}]},{"name":"Talent Research Fund Project of Hefei University","award":["21-22RC19"],"award-info":[{"award-number":["21-22RC19"]}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Cybersecurity"],"abstract":"<jats:title>Abstract<\/jats:title>\n                  <jats:p>Untargeted poisoning attacks pose a serious threat to federated learning. However, existing untargeted poisoning attacks have limitations. Most attacks assume that the adversary can control a large number of real clients, which is difficult to achieve in practice. Although the poisoning attack based on fake clients overcomes dependence on real clients. It causes the model to classify all data into default categories, which limits the effectiveness of the attack. Additionally, the fake local model updates are consistent, making them easily detectable by existing defenses. the attack is less stealth. To address these issues, we propose an Untargeted Poisoning Attack based on Fake Clients called UPA-FC. The attack manipulates key model layers based on their importance to enhance its effectiveness. We also introduce a random flipping strategy to reduce similarity between fake local updates, enhancing the stealth of the attack. To defend against UPA-FC, we propose a clustering-based defense scheme called D-UPA-FC. This scheme analyzes the distance matrix using a clustering algorithm. It determines the optimal clusters by calculating Euclidean distances to aggregate the global model. 
Experimental results show that UPA-FC outperforms existing poisoning attacks in terms of both effectiveness and stealth, while D-UPA-FC effectively defends against UPA-FC.<\/jats:p>","DOI":"10.1186\/s42400-025-00389-y","type":"journal-article","created":{"date-parts":[[2025,12,5]],"date-time":"2025-12-05T00:02:22Z","timestamp":1764892942000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Untargeted poisoning attack based on fake clients and its defense in federated learning"],"prefix":"10.1186","volume":"8","author":[{"ORCID":"https:\/\/orcid.org\/0009-0000-3454-6245","authenticated-orcid":false,"given":"Caimei","family":"Wang","sequence":"first","affiliation":[]},{"given":"Kangjian","family":"Xu","sequence":"additional","affiliation":[]},{"given":"An","family":"He","sequence":"additional","affiliation":[]},{"given":"Zhipeng","family":"Sun","sequence":"additional","affiliation":[]},{"given":"Jianzhong","family":"Pan","sequence":"additional","affiliation":[]},{"given":"Yudong","family":"Ren","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,12,5]]},"reference":[{"key":"389_CR1","doi-asserted-by":"publisher","unstructured":"Abadi M, Agarwal A, Barham P, Brevdo E, Chen Z, Citro C, Corrado GS, Davis A, Dean J, Devin M, et al (2016) Tensorflow: large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467. https:\/\/doi.org\/10.48550\/arXiv.1603.04467","DOI":"10.48550\/arXiv.1603.04467"},{"key":"389_CR2","unstructured":"Bagdasaryan E, Veit A, Hua Y, Estrin D, Shmatikov V (2020) How to backdoor federated learning. In: International conference on artificial intelligence and statistics, pp 2938\u20132948. PMLR"},{"key":"389_CR3","unstructured":"Baruch G, Baruch M, Goldberg Y (2019) A little is enough: circumventing defenses for distributed learning. Adv Neural Inf Process Syst 32"},{"key":"389_CR4","unstructured":"Bhagoji AN, Chakraborty S, Mittal P, Calo S (2019) Analyzing federated learning through an adversarial lens. In: International conference on machine learning, pp 634\u2013643. PMLR"},{"key":"389_CR5","unstructured":"Blanchard P, El\u00a0Mhamdi EM, Guerraoui R, Stainer J (2017) Machine learning with adversaries: Byzantine tolerant gradient descent. Adv Neural Inf Process Syst 30"},{"key":"389_CR6","doi-asserted-by":"publisher","first-page":"104468","DOI":"10.1016\/j.engappai.2021.104468","volume":"106","author":"A Blanco-Justicia","year":"2021","unstructured":"Blanco-Justicia A, Domingo-Ferrer J, Mart\u00ednez S, S\u00e1nchez D, Flanagan A, Tan KE (2021) Achieving security and privacy in federated learning systems: survey, research challenges and future directions. Eng Appl Artif Intell 106:104468. https:\/\/doi.org\/10.1016\/j.engappai.2021.104468","journal-title":"Eng Appl Artif Intell"},{"key":"389_CR7","doi-asserted-by":"crossref","unstructured":"Cao X, Gong NZ (2022) Mpaf: model poisoning attacks to federated learning based on fake clients. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 3396\u20133404","DOI":"10.1109\/CVPRW56347.2022.00383"},{"issue":"2","key":"389_CR8","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3154503","volume":"1","author":"Y Chen","year":"2017","unstructured":"Chen Y, Su L, Xu J (2017) Distributed statistical machine learning in adversarial settings: Byzantine gradient descent. Proc ACM Meas Anal Comput Syst 1(2):1\u201325. 
https:\/\/doi.org\/10.1145\/3154503","journal-title":"Proc ACM Meas Anal Comput Syst"},{"key":"389_CR9","unstructured":"Fang M, Cao X, Jia J, Gong N (2020) Local model poisoning attacks to $$\\{$$Byzantine-Robust$$\\}$$ federated learning. In: 29th USENIX security symposium (USENIX Security 20), pp 1605\u20131622"},{"key":"389_CR10","unstructured":"Fung C, Yoon CJ, Beschastnikh I (2020) The limitations of federated learning in sybil settings. In: 23rd International symposium on research in attacks, intrusions and defenses (RAID 2020), pp 301\u2013316"},{"key":"389_CR11","doi-asserted-by":"publisher","unstructured":"Fung C, Yoon CJ, Beschastnikh I (2018) Mitigating sybils in federated learning poisoning. arXiv preprint arXiv:1808.04866. https:\/\/doi.org\/10.48550\/arXiv.1808.04866","DOI":"10.48550\/arXiv.1808.04866"},{"key":"389_CR12","doi-asserted-by":"crossref","first-page":"110178","DOI":"10.1016\/j.knosys.2022.110178","volume":"260","author":"NM Jebreel","year":"2023","unstructured":"Jebreel NM, Domingo-Ferrer J (2023) Fl-defender: combating targeted attacks in federated learning. Knowl-Based Syst 260:110178","journal-title":"Knowl-Based Syst"},{"key":"389_CR13","unstructured":"Kone\u010dn\u1ef3 J (2016) Federated learning: strategies for improving communication efficiency. arXiv preprint arXiv:1610.05492"},{"key":"389_CR14","unstructured":"Krizhevsky A, Hinton G, et al (2009) Learning multiple layers of features from tiny images"},{"issue":"11","key":"389_CR15","doi-asserted-by":"publisher","first-page":"2278","DOI":"10.1109\/5.726791","volume":"86","author":"Y LeCun","year":"1998","unstructured":"LeCun Y, Bottou L, Bengio Y, Haffner P (1998) Gradient-based learning applied to document recognition. Proc IEEE 86(11):2278\u20132324. https:\/\/doi.org\/10.1109\/5.726791","journal-title":"Proc IEEE"},{"key":"389_CR16","doi-asserted-by":"crossref","unstructured":"Lee Y, Park S, Kang J (2024) Security-preserving federated learning via byzantine-sensitive triplet distance. In: 2024 IEEE international symposium on biomedical imaging (ISBI), pp 1\u20135. IEEE","DOI":"10.1109\/ISBI56570.2024.10635545"},{"key":"389_CR17","doi-asserted-by":"publisher","unstructured":"Li L, Xu W, Chen T, Giannakis GB, Ling Q (2019) RSA: Byzantine-robust stochastic aggregation methods for distributed learning from heterogeneous datasets. In: Proceedings of the AAAI conference on artificial intelligence, vol 33, pp 1544\u20131551. https:\/\/doi.org\/10.1609\/aaai.v33i01.33011544","DOI":"10.1609\/aaai.v33i01.33011544"},{"key":"389_CR18","unstructured":"McMahan B, Moore E, Ramage D, Hampson S, Arcas BA (2017) Communication-efficient learning of deep networks from decentralized data. In: artificial intelligence and statistics, pp 1273\u20131282. PMLR"},{"key":"389_CR19","doi-asserted-by":"crossref","unstructured":"Shejwalkar V, Houmansadr A (2021) Manipulating the Byzantine: optimizing model poisoning attacks and defenses for federated learning. In: NDSS","DOI":"10.14722\/ndss.2021.24498"},{"key":"389_CR20","doi-asserted-by":"publisher","unstructured":"Shejwalkar V, Houmansadr A, Kairouz P, Ramage D (2022) Back to the drawing board: a critical evaluation of poisoning attacks on production federated learning. In: 2022 IEEE symposium on security and privacy (SP), pp 1354\u20131371. IEEE. 
https:\/\/doi.org\/10.1109\/SP46214.2022.9833647","DOI":"10.1109\/SP46214.2022.9833647"},{"key":"389_CR21","doi-asserted-by":"publisher","first-page":"2023006","DOI":"10.1051\/sands\/2023006","volume":"2","author":"L Shi","year":"2023","unstructured":"Shi L, Chen Z, Shi Y, Wei L, Tao Y, He M, Wang Q, Zhou Y, Gao Y (2023) MPHM: model poisoning attacks on federal learning using historical information momentum. Secur Saf 2:2023006. https:\/\/doi.org\/10.1051\/sands\/2023006","journal-title":"Secur Saf"},{"issue":"2","key":"389_CR22","doi-asserted-by":"publisher","first-page":"925","DOI":"10.1109\/TAI.2023.3280155","volume":"5","author":"Y Sun","year":"2023","unstructured":"Sun Y, Ochiai H, Sakuma J (2023) Attacking-distance-aware attack: semi-targeted model poisoning on federated learning. IEEE Trans Artif Intell 5(2):925\u2013939. https:\/\/doi.org\/10.1109\/TAI.2023.3280155","journal-title":"IEEE Trans Artif Intell"},{"key":"389_CR23","doi-asserted-by":"publisher","unstructured":"Sun Z, Kairouz P, Suresh AT, McMahan HB (2019) Can you really backdoor federated learning? arXiv preprint arXiv:1911.07963. https:\/\/doi.org\/10.48550\/arXiv.1911.07963","DOI":"10.48550\/arXiv.1911.07963"},{"key":"389_CR24","doi-asserted-by":"publisher","unstructured":"Tolpegin V, Truex S, Gursoy ME, Liu L (2020) Data poisoning attacks against federated learning systems. In: Computer security\u2013ESORICs 2020: 25th European symposium on research in computer security, ESORICs 2020, Guildford, UK, September 14\u201318, 2020, Proceedings, Part I 25, pp 480\u2013501. Springer. https:\/\/doi.org\/10.1007\/978-3-030-58951-6_24","DOI":"10.1007\/978-3-030-58951-6_24"},{"key":"389_CR25","doi-asserted-by":"publisher","unstructured":"Wan W, Lu J, Hu S, Zhang LY, Pei X (2021) Shielding federated learning: a new attack approach and its defense. In: 2021 IEEE wireless communications and networking conference (WCNC), pp 1\u20137. IEEE. https:\/\/doi.org\/10.1109\/WCNC49053.2021.9417334","DOI":"10.1109\/WCNC49053.2021.9417334"},{"key":"389_CR26","first-page":"16070","volume":"33","author":"H Wang","year":"2020","unstructured":"Wang H, Sreenivasan K, Rajput S, Vishwakarma H, Agarwal S, Sohn J-Y, Lee K, Papailiopoulos D, (2020) Attack of the tails: yes, you really can backdoor federated learning. Adv Neural Inf Process Syst 33:16070\u201316084","journal-title":"Adv Neural Inf Process Syst"},{"issue":"6","key":"389_CR27","first-page":"1302","volume":"46","author":"Y Wang","year":"2023","unstructured":"Wang Y, Zhai D, Xia Y (2023) A robust aggregation algorithm for defending against a large number of backdoor clients in federated learning. J Comput Sci 46(6):1302\u20131314","journal-title":"J Comput Sci"},{"key":"389_CR28","doi-asserted-by":"publisher","unstructured":"Wang Z, Kang Q, Zhang X, Hu Q (2022) Defense strategies toward model poisoning attacks in federated learning: a survey. In: 2022 IEEE wireless communications and networking conference (WCNC), pp 548\u2013553. IEEE. https:\/\/doi.org\/10.1109\/WCNC51071.2022.9771619","DOI":"10.1109\/WCNC51071.2022.9771619"},{"key":"389_CR29","doi-asserted-by":"publisher","first-page":"4583","DOI":"10.1109\/TSP.2020.3012952","volume":"68","author":"Z Wu","year":"2020","unstructured":"Wu Z, Ling Q, Chen T, Giannakis GB (2020) Federated variance-reduced stochastic gradient descent with robustness to Byzantine attacks. IEEE Trans Signal Process 68:4583\u20134596. 
https:\/\/doi.org\/10.1109\/TSP.2020.3012952","journal-title":"IEEE Trans Signal Process"},{"issue":"3","key":"389_CR30","doi-asserted-by":"publisher","first-page":"1177","DOI":"10.1109\/TNNLS.2020.3041202","volume":"33","author":"Z Xiang","year":"2020","unstructured":"Xiang Z, Miller DJ, Kesidis G (2020) Detection of backdoors in trained classifiers without access to the training set. IEEE Trans Neural Netw Learn Syst 33(3):1177\u20131191. https:\/\/doi.org\/10.1109\/TNNLS.2020.3041202","journal-title":"IEEE Trans Neural Netw Learn Syst"},{"key":"389_CR31","doi-asserted-by":"publisher","unstructured":"Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747. https:\/\/doi.org\/10.48550\/arXiv.1708.07747","DOI":"10.48550\/arXiv.1708.07747"},{"key":"389_CR32","doi-asserted-by":"crossref","unstructured":"Xiao H, Xiao H, Eckert C (2012) Adversarial label flips attack on support vector machines. In: Proceedings of the 20th European conference on artificial intelligence, pp 870\u2013875","DOI":"10.3233\/978-1-61499-098-7-870"},{"key":"389_CR33","unstructured":"Xie C, Huang K, Chen P-Y, Li B (2019) Dba: distributed backdoor attacks against federated learning. In: International conference on learning representations"},{"key":"389_CR34","doi-asserted-by":"crossref","unstructured":"Xu W, Evans D, Qi Y (2018) Feature squeezing: detecting adversarial examples in deep neural networks. In: Proceedings 2018 network and distributed system security symposium. Internet Society","DOI":"10.14722\/ndss.2018.23198"},{"key":"389_CR35","unstructured":"Yin D, Chen Y, Kannan R, Bartlett P (2018) Byzantine-robust distributed learning: towards optimal statistical rates. In: International conference on machine learning, pp 5650\u20135659. PMLR"},{"key":"389_CR36","doi-asserted-by":"publisher","unstructured":"Zhao Y, Chen J, Zhang J, Wu D, Teng J, Yu S (2020) PDGAN: a novel poisoning defense method in federated learning using generative adversarial network. In: Algorithms and architectures for parallel processing: 19th international conference, ICA3PP 2019, Melbourne, VIC, Australia, December 9\u201311, 2019, Proceedings, Part I 19, pp 595\u2013609. Springer. 
https:\/\/doi.org\/10.1007\/978-3-030-38991-8_39","DOI":"10.1007\/978-3-030-38991-8_39"}],"container-title":["Cybersecurity"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s42400-025-00389-y.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1186\/s42400-025-00389-y\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s42400-025-00389-y.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,12,5]],"date-time":"2025-12-05T00:02:25Z","timestamp":1764892945000},"score":1,"resource":{"primary":{"URL":"https:\/\/cybersecurity.springeropen.com\/articles\/10.1186\/s42400-025-00389-y"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,12,5]]},"references-count":36,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2025,12]]}},"alternative-id":["389"],"URL":"https:\/\/doi.org\/10.1186\/s42400-025-00389-y","relation":{},"ISSN":["2523-3246"],"issn-type":[{"type":"electronic","value":"2523-3246"}],"subject":[],"published":{"date-parts":[[2025,12,5]]},"assertion":[{"value":"26 November 2024","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"6 March 2025","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"5 December 2025","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no competing interests or financial conflicts to disclose.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"90"}}