{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,18]],"date-time":"2026-03-18T10:04:00Z","timestamp":1773828240844,"version":"3.50.1"},"reference-count":26,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2026,3,18]],"date-time":"2026-03-18T00:00:00Z","timestamp":1773792000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2026,3,18]],"date-time":"2026-03-18T00:00:00Z","timestamp":1773792000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"name":"China NSF","award":["No.62202146"],"award-info":[{"award-number":["No.62202146"]}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Cybersecurity"],"abstract":"<jats:title>Abstract<\/jats:title>\n                  <jats:p>\n                    Federated learning is a machine learning paradigm designed to address the issues of privacy protection, data security, and big data processing. FedAvg, a widely used algorithm in federated learning, is vulnerable to gradient leakage, parameter exposure, and user data compromise. Existing works use differential privacy, homomorphic encryption, and secure multi-party computation to protect the gradients of FedAvg. However, these existing efforts lead to gradient aggregation errors of approximately 10\u201330% on the server (applying differential privacy results in noisy gradients) or incur high computational overhead. In this paper, we design a secure group aggregation approach for gradient protection in federated learning. 
It achieves zero error in gradient aggregation; the computation time is almost the same under different numbers of user dropouts, differing only by a constant; and the time overhead is reduced by about 10\u201375% compared to traditional differential privacy and by about 80% compared to homomorphic encryption methods. First, we use digital signatures and authenticated encryption to guarantee the integrity of the gradients. Second, we use double-masking to handle users that exit, drop out, or reconnect midway; this ensures that the server can recover the correct aggregated gradient, addressing the gradient aggregation error problem. Third, during the encryption phase, our experiments identify a suitable group size for federated learning\u2019s gradient aggregation. Specifically, we find that a group size of 7 is more efficient when the number of users is smaller than\n                    <jats:inline-formula>\n                      <jats:alternatives>\n                        <jats:tex-math>$$2^{10}$$<\/jats:tex-math>\n                        <mml:math xmlns:mml=\"http:\/\/www.w3.org\/1998\/Math\/MathML\">\n                          <mml:msup>\n                            <mml:mn>2<\/mml:mn>\n                            <mml:mn>10<\/mml:mn>\n                          <\/mml:msup>\n                        <\/mml:math>\n                      <\/jats:alternatives>\n                    <\/jats:inline-formula>\n                    , while a group size of 128 or 64 can be adopted when the number of users is larger than\n                    <jats:inline-formula>\n                      <jats:alternatives>\n                        <jats:tex-math>$$2^{10}$$<\/jats:tex-math>\n                        <mml:math xmlns:mml=\"http:\/\/www.w3.org\/1998\/Math\/MathML\">\n                          <mml:msup>\n                            <mml:mn>2<\/mml:mn>\n                            <mml:mn>10<\/mml:mn>\n                    
      <\/mml:msup>\n                        <\/mml:math>\n                      <\/jats:alternatives>\n                    <\/jats:inline-formula>\n                    .\n                  <\/jats:p>","DOI":"10.1186\/s42400-025-00443-9","type":"journal-article","created":{"date-parts":[[2026,3,18]],"date-time":"2026-03-18T07:50:51Z","timestamp":1773820251000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Secure group aggregation for privacy-protection federated learning"],"prefix":"10.1186","volume":"9","author":[{"given":"Yucheng","family":"Yan","sequence":"first","affiliation":[]},{"given":"Jia","family":"Yang","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2026,3,18]]},"reference":[{"key":"443_CR1","first-page":"143","volume-title":"Cryptographers' Track at the RSA Conference","author":"M Abdalla","year":"2002","unstructured":"Abdalla M, Bellare M, Rogaway P (2002) The oracle Diffie-Hellman assumptions and an analysis of DHIES. Cryptographers' Track at the RSA Conference. Springer, Cham, pp 143\u2013158"},{"key":"443_CR2","doi-asserted-by":"crossref","unstructured":"Ben-Or M, Goldwasser S, Wigderson A (1988) Completeness theorems for non-cryptographic fault-tolerant distributed computation. In: Proceedings of the twentieth annual ACM symposium on Theory of computing","DOI":"10.1145\/62212.62213"},{"key":"443_CR3","doi-asserted-by":"crossref","unstructured":"Bonawitz K, Ivanov V, Kreuter B, Marcedone A, McMahan HB, Patel S, Ramage D, Segal A, Seth K (2017) Practical secure aggregation for privacy-preserving machine learning. In: CCS '17, October 30\u2013November 3, 2017, Dallas, TX, USA","DOI":"10.1145\/3133956.3133982"},{"key":"443_CR4","doi-asserted-by":"publisher","first-page":"644","DOI":"10.1109\/TIT.1976.1055638","volume":"22","author":"W Diffie","year":"1976","unstructured":"Diffie W, Hellman M (1976) New directions in cryptography. 
IEEE Trans Inf Theory 22:644\u2013654","journal-title":"IEEE Trans Inf Theory"},{"key":"443_CR5","first-page":"2241","volume":"57","author":"Y Dong","year":"2020","unstructured":"Dong Y, Hou W, Chen X, Zeng S (2020) Efficient and secure federated learning based on secret sharing and gradients selection. J Comput Res Dev 57:2241\u201350","journal-title":"J Comput Res Dev"},{"key":"443_CR6","unstructured":"Du Y, Zhang Z, Wu B, Liu L, Xu T, Chen E (2023) Federated nearest neighbor machine translation. In: International Conference on Learning Representations (ICLR)"},{"key":"443_CR7","doi-asserted-by":"crossref","unstructured":"Gehlhar T, Marx F, Schneider T, Suresh A, Wehrle T, Yalame H (2023) Safefl: Mpc-friendly framework for private and robust federated learning. In: 2023 IEEE Security and Privacy Workshops (SPW)","DOI":"10.1109\/SPW59333.2023.00012"},{"key":"443_CR8","doi-asserted-by":"crossref","unstructured":"Gehlhar T, Marx F, Yalame H, Schneider T (2023) Safefl: Mpc-friendly framework for private and robust federated learning. In: IEEE Security and Privacy Workshops (SPW)","DOI":"10.1109\/SPW59333.2023.00012"},{"issue":"6","key":"443_CR9","first-page":"19","volume":"47","author":"X Jiajun","year":"2021","unstructured":"Jiajun X, Ying LU, Ziyang Z, Yuting Z, Jiachen Z (2021) Research on vertical federated learning based on secret sharing and homomorphic encryption. Inf Commun Technol Policy 47(6):19","journal-title":"Inf Commun Technol Policy"},{"key":"443_CR10","unstructured":"Katz J, Mohassel P, Rindal P (2023) Securefl: Efficient secure multiparty computation framework for federated learning at scale. In: USENIX Security Symposium"},{"key":"443_CR11","unstructured":"Kim M, Kim J, Kim S (2020) Blockfl: Secure blockchain-enabled federated learning. IEEE Netw"},{"key":"443_CR12","unstructured":"Li T, Hu S, Beirami A, Smith V (2021) Ditto: Fair and robust federated learning through personalization. 
NeurIPS"},{"key":"443_CR13","doi-asserted-by":"crossref","unstructured":"Lindell Y, Pinkas B, Smart NP, Yanai A (2015) Efficient constant round multi-party computation combining BMR and SPDZ. In: Annual Cryptology Conference","DOI":"10.1007\/978-3-662-48000-7_16"},{"key":"443_CR14","unstructured":"Li T, Sanjabi M, Smith V (2020) Fair resource allocation in federated learning. In: ICLR 2020"},{"key":"443_CR15","unstructured":"McMahan B, Moore E, Ramage D, Hampson S, y Arcas BA (2017) Communication-efficient learning of deep networks from decentralized data. Proc Mach Learn Res"},{"key":"443_CR16","unstructured":"Qi D, Zhao H, Li S (2023) Better generative replay for continual federated learning. Int Conf Learn Represent (ICLR)"},{"key":"443_CR17","doi-asserted-by":"publisher","first-page":"612","DOI":"10.1145\/359168.359176","volume":"22","author":"A Shamir","year":"1979","unstructured":"Shamir A (1979) How to share a secret. Commun ACM 22:612\u2013613","journal-title":"Commun ACM"},{"issue":"7","key":"443_CR18","doi-asserted-by":"publisher","first-page":"1513","DOI":"10.1109\/TPDS.2020.3044223","volume":"32","author":"M Shayan","year":"2021","unstructured":"Shayan M, Fung C, Yoon CJ, Beschastnikh I (2021) Biscotti: A blockchain system for private and secure federated learning. IEEE Trans Parallel Distrib Syst 32(7):1513\u20131525","journal-title":"IEEE Trans Parallel Distrib Syst"},{"key":"443_CR19","unstructured":"Sheller MJ, Edwards B, Reina GA, Martin J, Pati S, Kotrotsou A, Milchenko M, Xu W, Marcus D, Colen RR, Bakas S (2022) Federated learning in medicine: facilitating multi-institutional collaborations without sharing patient data. NPJ Digit Med"},{"key":"443_CR20","unstructured":"Shi Y, Liang J, Zhang W, Tan VY, Bai S (2022) Towards understanding and mitigating dimensional collapse in heterogeneous federated learning. 
Comput Sci"},{"key":"443_CR21","unstructured":"Sullivan J (2018) Secure analytics: Federated learning and secure aggregation"},{"issue":"8","key":"443_CR22","first-page":"9171","volume":"36","author":"Z Wang","year":"2022","unstructured":"Wang Z, Shen Y, Yang Q (2022) Byzantine-robust federated learning via blockchain. IEEE Trans Inf Forens Sec 36(8):9171\u20139179","journal-title":"IEEE Trans Inf Forens Sec"},{"key":"443_CR23","unstructured":"Wang H, Yurochkin M, Sun Y, Papailiopoulos D, Khazaeni Y (2020) Federated learning with matched averaging. In: International Conference on Learning Representations (ICLR)"},{"key":"443_CR24","unstructured":"Xu J, Tong X, Huang SL (2023) Personalized federated learning with feature alignment and classifier collaboration. Comput Sci. arXiv:2306.11867"},{"key":"443_CR25","doi-asserted-by":"crossref","unstructured":"Zhuang W, Wen Y, Lyu L, Zhang S (2023) Mas: Towards resource-efficient federated multiple-task learning. In: Proceedings of the 2023 IEEE\/CVF International Conference on Computer Vision (ICCV\u201923)","DOI":"10.1109\/ICCV51070.2023.02140"},{"key":"443_CR26","unstructured":"Zhu L, Liu Z, Han S (2019) Deep leakage from gradients. 
Comput Sci Mach Learn 32"}],"container-title":["Cybersecurity"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s42400-025-00443-9.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1186\/s42400-025-00443-9","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s42400-025-00443-9.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,3,18]],"date-time":"2026-03-18T07:51:06Z","timestamp":1773820266000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1186\/s42400-025-00443-9"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2026,3,18]]},"references-count":26,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2026,12]]}},"alternative-id":["443"],"URL":"https:\/\/doi.org\/10.1186\/s42400-025-00443-9","relation":{},"ISSN":["2523-3246"],"issn-type":[{"value":"2523-3246","type":"electronic"}],"subject":[],"published":{"date-parts":[[2026,3,18]]},"assertion":[{"value":"3 January 2025","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"29 June 2025","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"18 March 2026","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no competing interests related to this work. We declare that we do not have any commercial or associative interest that represents a conflict of interest in connection with the submitted work.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"138"}}