{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,9,25]],"date-time":"2025-09-25T16:56:40Z","timestamp":1758819400003,"version":"3.40.4"},"reference-count":34,"publisher":"Springer Science and Business Media LLC","issue":"2","license":[{"start":{"date-parts":[[2025,3,7]],"date-time":"2025-03-07T00:00:00Z","timestamp":1741305600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,3,7]],"date-time":"2025-03-07T00:00:00Z","timestamp":1741305600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Neural Process Lett"],"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>Federated Learning (FL), which allows multiple participants to co-train machine learning models, enhances privacy preservation by avoiding the exposure of local data. In recent years, FL has been considered a promising paradigm. However, during the FL process, individual clients may drop out, or a particular client may engage in dishonest behavior such as uploading malicious data, thereby hindering the training of the global model. Most existing defense methods consider only data filtering or model weighting, and thus suffer from poor robustness and high computational cost. Therefore, we propose a novel secure FL scheme (FedDefense) based on client selection and adaptive rewards to defend against dishonest client attacks. First, to reduce the likelihood of poisoned clients participating in aggregation, we design a randomized subset method that evaluates client contributions via Kullback\u2013Leibler (KL) divergence. 
Second, we reduce the server\u2019s dependence on clients through a dynamic reward strategy to ensure healthy model training. Numerical analysis and performance evaluation show that the proposed technique prevents the threat of dishonest clients during the FL process. Compared with existing methods, our approach has significant advantages in terms of efficiency and performance.<\/jats:p>","DOI":"10.1007\/s11063-025-11724-2","type":"journal-article","created":{"date-parts":[[2025,3,7]],"date-time":"2025-03-07T06:08:45Z","timestamp":1741327725000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":2,"title":["FedDefense: A Defense Mechanism for Dishonest Client Attacks in Federated Learning"],"prefix":"10.1007","volume":"57","author":[{"given":"Gaofeng","family":"Yue","sequence":"first","affiliation":[]},{"given":"Xiaowei","family":"Han","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,3,7]]},"reference":[{"key":"11724_CR1","doi-asserted-by":"crossref","unstructured":"Duan Y, Fu X, Luo B, Wang Z, Shi J, Du X (2015) Detective: automatically identify and analyze malware processes in forensic scenarios via dlls. In: Proc of ICC","DOI":"10.1109\/ICC.2015.7249229"},{"key":"11724_CR2","doi-asserted-by":"crossref","unstructured":"Liu T, Di B, An P, Song L (2021) Privacy-preserving incentive mechanism design for federated cloud-edge learning. IEEE TNSE 8(3)","DOI":"10.1109\/TNSE.2021.3100096"},{"key":"11724_CR3","doi-asserted-by":"crossref","unstructured":"Mills J, Hu J, Min G (2022) Multi-task federated learning for personalised deep neural networks in edge computing. IEEE TPDS 33(3)","DOI":"10.1109\/TPDS.2021.3098467"},{"key":"11724_CR4","unstructured":"McMahan B, Moore E, Ramage D, Hampson S, Arcas BA (2017) Communication-efficient learning of deep networks from decentralized data. 
In: Proc of AISTATS"},{"key":"11724_CR5","doi-asserted-by":"crossref","unstructured":"Liu B, Cai Y, Zhang Z, Li Y, Wang L, Li D, Guo Y, Chen X (2021) Distfl: distribution-aware federated learning for mobile scenarios. ACM IMWUT 5(4)","DOI":"10.1145\/3494966"},{"key":"11724_CR6","doi-asserted-by":"crossref","unstructured":"Wang N, Yang W, Guan Z, Du X, Guizani M (2021) BPFL: a blockchain based privacy-preserving federated learning scheme. In: Proc of GLOBECOM","DOI":"10.1109\/GLOBECOM46510.2021.9685821"},{"key":"11724_CR7","unstructured":"Bagdasaryan E, Veit A, Hua Y, Estrin D, Shmatikov V (2020) How to backdoor federated learning. In: Proc of AISTATS"},{"key":"11724_CR8","doi-asserted-by":"crossref","unstructured":"Hei X, Yin X, Wang Y, Ren J, Zhu L (2020) A trusted feature aggregator federated learning for distributed malicious attack detection. Comput Secur 99","DOI":"10.1016\/j.cose.2020.102033"},{"key":"11724_CR9","doi-asserted-by":"crossref","unstructured":"Gu Z, Yang Y (2021) Detecting malicious model updates from federated learning on conditional variational autoencoder. In: Proc of IPDPS","DOI":"10.1109\/IPDPS49936.2021.00075"},{"key":"11724_CR10","doi-asserted-by":"crossref","unstructured":"Cao X, Jia J, Gong NZ (2021) Provably secure federated learning against malicious clients. In: Proc of AAAI","DOI":"10.1609\/aaai.v35i8.16849"},{"key":"11724_CR11","doi-asserted-by":"publisher","DOI":"10.1016\/j.comnet.2021.108691","volume":"204","author":"Z Abubaker","year":"2022","unstructured":"Abubaker Z, Javaid N, Almogren A, Akbar M, Zuair M, Ben-Othman J (2022) Blockchained service provisioning and malicious node detection via federated learning in scalable internet of sensor things networks. Comput Netw 204:108691","journal-title":"Comput Netw"},{"key":"11724_CR12","doi-asserted-by":"publisher","unstructured":"Yu P, Liu Y (2019) Federated object detection: optimizing object detection model with federated learning. In: Proc of ICVISP, pp 7\u2013176. 
ACM https:\/\/doi.org\/10.1145\/3387168.3387181","DOI":"10.1145\/3387168.3387181"},{"key":"11724_CR13","unstructured":"Claici S, Yurochkin M, Ghosh S, Solomon J (2020) Model fusion with kullback-leibler divergence. In: Proceedings of ICML. Proceedings of machine learning research, vol 119, pp 2038\u20132047. PMLR http:\/\/proceedings.mlr.press\/v119\/claici20a.html"},{"key":"11724_CR14","doi-asserted-by":"crossref","unstructured":"Zhu M, Ning W, Qi Q, Wang J, Zhuang Z, Sun H, Huang J, Liao J (2024) Fluk: protecting federated learning against malicious clients for internet of vehicles. In: Euro-Par 2024: parallel processing, pp 454\u2013469. Springer, Cham","DOI":"10.1007\/978-3-031-69766-1_31"},{"key":"11724_CR15","unstructured":"Yadav C, Bottou L (2019) Cold case: the lost MNIST digits. In: Wallach HM, Larochelle H, Beygelzimer A, d\u2019Alch\u00e9-Buc F, Fox EB, Garnett R (eds) Proc of NeurIPS"},{"key":"11724_CR16","doi-asserted-by":"crossref","unstructured":"Zhong Z, Zhou Y, Wu D, Chen X, Chen M, Li C, Sheng QZ (2021) P-fedavg: parallelizing federated learning with theoretical guarantees. In: Proc of INFOCOM","DOI":"10.1109\/INFOCOM42981.2021.9488877"},{"key":"11724_CR17","doi-asserted-by":"crossref","unstructured":"Liao Y, Xu Y, Xu H, Wang L, Qian C (2023) Adaptive configuration for heterogeneous participants in decentralized federated learning. In: Proc of INFOCOM","DOI":"10.1109\/INFOCOM53939.2023.10228945"},{"key":"11724_CR18","unstructured":"Geiping J, Bauermeister H, Dr\u00f6ge H, Moeller M (2020) Inverting gradients - how easy is it to break privacy in federated learning? In: Proc of NeurIPS"},{"key":"11724_CR19","unstructured":"Fu C, Zhang X, Ji S et al (2022) Label inference attacks against vertical federated learning. In: Proc of USENIX Security"},{"key":"11724_CR20","doi-asserted-by":"crossref","unstructured":"Fu L, Zhang H, Gao G, Zhang M, Liu X (2023) Client selection in federated learning: principles, challenges, and opportunities. 
IEEE IoT-J 10(24)","DOI":"10.1109\/JIOT.2023.3299573"},{"key":"11724_CR21","doi-asserted-by":"crossref","unstructured":"Nguyen HT, Sehwag V, Hosseinalipour S, Brinton CG, Chiang M, Poor HV (2021) Fast-convergent federated learning. IEEE JSAC 39(1)","DOI":"10.1109\/JSAC.2020.3036952"},{"key":"11724_CR22","doi-asserted-by":"crossref","unstructured":"Li C, Zeng X, Zhang M, Cao Z (2022) Pyramidfl: a fine-grained client selection framework for efficient federated learning. In: Proc of ACM MobiCom","DOI":"10.1145\/3495243.3517017"},{"issue":"3\u20134","key":"11724_CR23","first-page":"211","volume":"9","author":"C Dwork","year":"2014","unstructured":"Dwork C, Roth A (2014) The algorithmic foundations of differential privacy. Found Trends Theor Comput Sci 9(3\u20134):211\u2013407","journal-title":"Found Trends Theor Comput Sci"},{"key":"11724_CR24","doi-asserted-by":"crossref","unstructured":"Shi L, Shu J, Zhang W, Liu Y (2021) HFL-DP: hierarchical federated learning with differential privacy. In: Proc of GLOBECOM","DOI":"10.1109\/GLOBECOM46510.2021.9685644"},{"key":"11724_CR25","unstructured":"Hu R, Gong Y, Guo Y (2022) Federated learning with sparsified model perturbation: improving accuracy under client-level differential privacy. CoRR abs\/2202.07178"},{"key":"11724_CR26","unstructured":"Truex S, Liu L, Chow K-H, Gursoy ME, Wei W (2020) Ldp-fed: federated learning with local differential privacy. In: Proc of EdgeSys"},{"key":"11724_CR27","doi-asserted-by":"crossref","unstructured":"Cheng Y, Lu J, Niyato D, Lyu B, Kang J, Zhu S (2022) Federated transfer learning with client selection for intrusion detection in mobile edge computing. IEEE Commun Lett 26(3)","DOI":"10.1109\/LCOMM.2022.3140273"},{"key":"11724_CR28","unstructured":"Kim J, Kim G, Han B (2022) Multi-level branched regularization for federated learning. 
In: Proc of ICML"},{"key":"11724_CR29","unstructured":"Blanchard P, El\u00a0Mhamdi EM, Guerraoui R, Stainer J (2017) Machine learning with adversaries: byzantine tolerant gradient descent. In: Proc of NIPS"},{"key":"11724_CR30","unstructured":"Yin D, Chen Y, Kannan R, Bartlett P (2018) Byzantine-robust distributed learning: towards optimal statistical rates. In: Proc of ICML"},{"key":"11724_CR31","unstructured":"El\u00a0Mhamdi EM, Guerraoui R, Rouault S (2018) The hidden vulnerability of distributed learning in Byzantium. In: Proc of ICML"},{"key":"11724_CR32","doi-asserted-by":"crossref","unstructured":"Hossain MT, Islam S, Badsha S, Shen H (2021) Desmp: Differential privacy-exploited stealthy model poisoning attacks in federated learning. In: Proc of MSN","DOI":"10.1109\/MSN53354.2021.00038"},{"key":"11724_CR33","doi-asserted-by":"crossref","unstructured":"Lu J, Hu S, Wan W, Li M, Zhang LY, Xue L, Jin H (2024) Depriving the survival space of adversaries against poisoned gradients in federated learning. IEEE TIFS 19","DOI":"10.1109\/TIFS.2024.3360869"},{"key":"11724_CR34","unstructured":"PySyft. https:\/\/github.com\/OpenMined\/PySyft. 
Accessed June 2023"}],"container-title":["Neural Processing Letters"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11063-025-11724-2.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s11063-025-11724-2\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11063-025-11724-2.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,4,23]],"date-time":"2025-04-23T16:58:05Z","timestamp":1745427485000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s11063-025-11724-2"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,3,7]]},"references-count":34,"journal-issue":{"issue":"2","published-online":{"date-parts":[[2025,4]]}},"alternative-id":["11724"],"URL":"https:\/\/doi.org\/10.1007\/s11063-025-11724-2","relation":{},"ISSN":["1573-773X"],"issn-type":[{"type":"electronic","value":"1573-773X"}],"subject":[],"published":{"date-parts":[[2025,3,7]]},"assertion":[{"value":"6 January 2025","order":1,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"7 March 2025","order":2,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}],"article-number":"28"}}