{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,6]],"date-time":"2026-03-06T10:49:39Z","timestamp":1772794179547,"version":"3.50.1"},"reference-count":261,"publisher":"Springer Science and Business Media LLC","issue":"7","license":[{"start":{"date-parts":[[2024,6,20]],"date-time":"2024-06-20T00:00:00Z","timestamp":1718841600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2024,6,20]],"date-time":"2024-06-20T00:00:00Z","timestamp":1718841600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"name":"Key Research Project of Zhejiang Lab","award":["2022PG0AC02"],"award-info":[{"award-number":["2022PG0AC02"]}]},{"DOI":"10.13039\/501100002858","name":"China Postdoctoral Science Foundation","doi-asserted-by":"crossref","award":["2021M692956"],"award-info":[{"award-number":["2021M692956"]}],"id":[{"id":"10.13039\/501100002858","id-type":"DOI","asserted-by":"crossref"}]},{"name":"Key R\\&D Program of Zhejiang","award":["2022C04006"],"award-info":[{"award-number":["2022C04006"]}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["U22A6001"],"award-info":[{"award-number":["U22A6001"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["62102052"],"award-info":[{"award-number":["62102052"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["U21A20463"],"award-info":[{"award-number":["U21A20463"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"name":"CCF- AFSG 
Research Fund","award":["20220009"],"award-info":[{"award-number":["20220009"]}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Artif Intell Rev"],"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Increasing numbers of artificial intelligence systems are employing collaborative machine learning techniques, such as federated learning, to build a shared powerful deep model among participants, while keeping their training data locally. However, concerns about integrity and privacy in such systems have significantly hindered the use of collaborative learning systems. Therefore, numerous efforts have been presented to preserve the model\u2019s integrity and reduce the privacy leakage of training data throughout the training phase of various collaborative learning systems. This survey seeks to provide a systematic and comprehensive evaluation of security and privacy studies in collaborative training, in contrast to prior surveys that only focus on one single collaborative learning system. Our survey begins with an overview of collaborative learning systems from various perspectives. Then, we systematically summarize the integrity and privacy risks of collaborative learning systems. In particular, we describe state-of-the-art integrity attacks (e.g., Byzantine, backdoor, and adversarial attacks) and privacy attacks (e.g., membership, property, and sample inference attacks), as well as the associated countermeasures. 
We additionally provide an analysis of open problems to motivate possible future studies.<\/jats:p>","DOI":"10.1007\/s10462-024-10797-0","type":"journal-article","created":{"date-parts":[[2024,6,20]],"date-time":"2024-06-20T10:06:18Z","timestamp":1718877978000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":2,"title":["Robust and privacy-preserving collaborative training: a comprehensive survey"],"prefix":"10.1007","volume":"57","author":[{"given":"Fei","family":"Yang","sequence":"first","affiliation":[]},{"given":"Xu","family":"Zhang","sequence":"additional","affiliation":[]},{"given":"Shangwei","family":"Guo","sequence":"additional","affiliation":[]},{"given":"Daiyuan","family":"Chen","sequence":"additional","affiliation":[]},{"given":"Yan","family":"Gan","sequence":"additional","affiliation":[]},{"given":"Tao","family":"Xiang","sequence":"additional","affiliation":[]},{"given":"Yang","family":"Liu","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,6,20]]},"reference":[{"key":"10797_CR1","doi-asserted-by":"publisher","unstructured":"Abadi M, Chu A, Goodfellow I et\u00a0al (2016) Deep learning with differential privacy. In: ACM SIGSAC conference on computer and communications security. pp 308\u2013318. https:\/\/doi.org\/10.1145\/2976749.2978318","DOI":"10.1145\/2976749.2978318"},{"key":"10797_CR2","unstructured":"Akbiyik ME (2023) Data augmentation in training CNNs: injecting noise to images. arXiv preprint http:\/\/arxiv.org\/abs\/2307.06855"},{"key":"10797_CR3","doi-asserted-by":"publisher","first-page":"140699","DOI":"10.1109\/ACCESS.2020.3013541","volume":"8","author":"M Aledhari","year":"2020","unstructured":"Aledhari M, Razzak R, Parizi RM et al (2020) Federated learning: a survey on enabling technologies, protocols, and applications. IEEE Access 8:140699\u2013140725. 
https:\/\/doi.org\/10.1109\/ACCESS.2020.3013541","journal-title":"IEEE Access"},{"key":"10797_CR4","doi-asserted-by":"publisher","unstructured":"Andreina S, Marson GA, M\u00f6llering H et\u00a0al (2021) Baffle: backdoor detection via feedback-based federated learning. In: International conference on distributed computing systems. pp 852\u2013863. https:\/\/doi.org\/10.1109\/ICDCS51616.2021.00086","DOI":"10.1109\/ICDCS51616.2021.00086"},{"key":"10797_CR5","doi-asserted-by":"publisher","unstructured":"Aono Y, Hayashi T, Trieu\u00a0Phong L et\u00a0al (2016) Scalable and secure logistic regression via homomorphic encryption. In: ACM conference on data and application security and privacy. pp 142\u2013144. https:\/\/doi.org\/10.1145\/2857705.2857731","DOI":"10.1145\/2857705.2857731"},{"key":"10797_CR6","doi-asserted-by":"crossref","unstructured":"Arous A, Guesmi A, Hanif MA et\u00a0al (2023) Exploring machine learning privacy\/utility trade-off from a hyperparameters lens. arXiv preprint http:\/\/arxiv.org\/abs\/2303.01819","DOI":"10.1109\/IJCNN54540.2023.10191743"},{"key":"10797_CR7","unstructured":"Azizi A, Tahmid IA, Waheed A et\u00a0al (2021) T-Miner: a generative approach to defend against Trojan attacks on DNN-based text classification. In: USENIX security symposium. https:\/\/www.usenix.org\/conference\/usenixsecurity21\/presentation\/azizi"},{"key":"10797_CR8","unstructured":"Bagdasaryan E, Veit A, Hua Y et\u00a0al (2020) How to backdoor federated learning. In: International conference on artificial intelligence and statistics. pp 2938\u20132948. http:\/\/proceedings.mlr.press\/v108\/bagdasaryan20a.html"},{"key":"10797_CR9","unstructured":"Baluja S (2017) Hiding images in plain sight: deep steganography. In: Advances in neural information processing systems, vol 30. pp 2069\u20132079. 
https:\/\/proceedings.neurips.cc\/paper\/2017\/hash\/838e8afb1ca34354ac209f53d90c3a43-Abstract.html"},{"key":"10797_CR10","unstructured":"Balunovi\u0107 M, Dimitrov DI, Staab R et\u00a0al (2021) Bayesian framework for gradient leakage. arXiv preprint http:\/\/arxiv.org\/abs\/2111.04706"},{"key":"10797_CR11","unstructured":"Baruch M, Baruch G, Goldberg Y (2019) A little is enough: circumventing defenses for distributed learning. arXiv preprint http:\/\/arxiv.org\/abs\/1902.06156"},{"issue":"2","key":"10797_CR12","doi-asserted-by":"publisher","first-page":"141","DOI":"10.1162\/neco.1992.4.2.141","volume":"4","author":"R Battiti","year":"1992","unstructured":"Battiti R (1992) First- and second-order methods for learning: between steepest descent and Newton\u2019s method. Neural Comput 4(2):141\u2013166. https:\/\/doi.org\/10.1162\/neco.1992.4.2.141","journal-title":"Neural Comput"},{"key":"10797_CR13","doi-asserted-by":"publisher","unstructured":"Bell JH, Bonawitz KA, Gasc\u00f3n A et\u00a0al (2020) Secure single-server aggregation with (poly) logarithmic overhead. In: ACM SIGSAC conference on computer and communications security. pp 1253\u20131269. https:\/\/doi.org\/10.1145\/3372297.3417885","DOI":"10.1145\/3372297.3417885"},{"key":"10797_CR14","unstructured":"Bhagoji AN, Chakraborty S, Mittal P et\u00a0al (2019) Analyzing federated learning through an adversarial lens. In: International conference on machine learning. pp 634\u2013643. http:\/\/proceedings.mlr.press\/v97\/bhagoji19a.html"},{"key":"10797_CR15","unstructured":"Bhowmick A, Duchi J, Freudiger J et\u00a0al (2018) Protection against reconstruction and its applications in private federated learning. arXiv preprint http:\/\/arxiv.org\/abs\/1812.00984"},{"key":"10797_CR16","unstructured":"Blanchard P, El\u00a0Mhamdi EM, Guerraoui R et\u00a0al (2017) Machine learning with adversaries: Byzantine tolerant gradient descent. In: Advances in neural information processing systems. 
https:\/\/proceedings.neurips.cc\/paper\/2017\/hash\/f4b9ec30ad9f68f89b29639786cb62ef-Abstract.html"},{"key":"10797_CR17","doi-asserted-by":"publisher","unstructured":"Bonawitz K, Ivanov V, Kreuter B et\u00a0al (2017) Practical secure aggregation for privacy-preserving machine learning. In: ACM SIGSAC conference on computer and communications security. pp 1175\u20131191. https:\/\/doi.org\/10.1145\/3133956.3133982","DOI":"10.1145\/3133956.3133982"},{"key":"10797_CR18","unstructured":"Brown T, Mann B, Ryder N et\u00a0al (2020) Language models are few-shot learners. In: Advances in neural information processing systems. https:\/\/proceedings.neurips.cc\/paper\/2020\/hash\/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html"},{"key":"10797_CR19","unstructured":"Bui AT, Le T, Tran QH et\u00a0al (2022) A unified Wasserstein distributional robustness framework for adversarial training. In: ICLR. https:\/\/openreview.net\/forum?id=Dzpe9C1mpiv"},{"issue":"22","key":"10797_CR20","doi-asserted-by":"publisher","first-page":"5850","DOI":"10.1109\/TSP.2019.2946020","volume":"67","author":"X Cao","year":"2019","unstructured":"Cao X, Lai L (2019) Distributed gradient descent algorithm robust to an arbitrary number of Byzantine attackers. IEEE Trans Signal Process 67(22):5850\u20135864. https:\/\/doi.org\/10.1109\/TSP.2019.2946020","journal-title":"IEEE Trans Signal Process"},{"key":"10797_CR21","doi-asserted-by":"crossref","unstructured":"Cao X, Fang M, Liu J et\u00a0al (2021) FLTrust: Byzantine-robust federated learning via trust bootstrapping. In: ISOC network and distributed system security symposium. https:\/\/www.ndss-symposium.org\/ndss-paper\/fltrust-byzantine-robust-federated-learning-via-trust-bootstrapping\/","DOI":"10.14722\/ndss.2021.24434"},{"key":"10797_CR22","doi-asserted-by":"publisher","unstructured":"Carlini N, Wagner D (2018) Audio adversarial examples: targeted attacks on speech-to-text. In: IEEE security and privacy workshops (SPW). pp 1\u20137. 
https:\/\/doi.org\/10.1109\/SPW.2018.00009","DOI":"10.1109\/SPW.2018.00009"},{"key":"10797_CR23","unstructured":"Chan A, Ong YS (2019) Poison as a cure: detecting & neutralizing variable-sized backdoor attacks in deep neural networks. arXiv preprint http:\/\/arxiv.org\/abs\/1911.08040"},{"key":"10797_CR24","unstructured":"Chang H, Shejwalkar V, Shokri R et\u00a0al (2019) Cronus: robust and heterogeneous collaborative learning with black-box knowledge transfer. arXiv preprint arXiv:1912.11279"},{"key":"10797_CR25","unstructured":"Chaudhuri K, Monteleoni C, Sarwate AD (2011) Differentially private empirical risk minimization. J Mach Learn Res 12(3). https:\/\/www.jmlr.org\/papers\/volume12\/chaudhuri11a\/chaudhuri11a.pdf"},{"key":"10797_CR26","doi-asserted-by":"publisher","unstructured":"Chen J, Gu Q (2020) Rays: a ray searching method for hard-label adversarial attack. In: ACM SIGKDD international conference on knowledge discovery & data mining. pp 1739\u20131747. https:\/\/doi.org\/10.1145\/3394486.3403225","DOI":"10.1145\/3394486.3403225"},{"issue":"2","key":"10797_CR27","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3154503","volume":"1","author":"Y Chen","year":"2017","unstructured":"Chen Y, Su L, Xu J (2017) Distributed statistical machine learning in adversarial settings: Byzantine gradient descent. Proc ACM Meas Anal Comput Syst 1(2):1\u201325. https:\/\/doi.org\/10.1145\/3154503","journal-title":"Proc ACM Meas Anal Comput Syst"},{"key":"10797_CR28","unstructured":"Chen B, Carvalho W, Baracaldo N et\u00a0al (2018) Detecting backdoor attacks on deep neural networks by activation clustering. arXiv preprint http:\/\/arxiv.org\/abs\/1811.03728"},{"key":"10797_CR29","doi-asserted-by":"publisher","unstructured":"Chen H, Fu C, Zhao J et\u00a0al (2019) DeepInspect: a black-box Trojan detection and mitigation framework for deep neural networks. In: IJCAI. pp 4658\u20134664. 
https:\/\/doi.org\/10.24963\/ijcai.2019\/647","DOI":"10.24963\/ijcai.2019\/647"},{"key":"10797_CR30","doi-asserted-by":"publisher","unstructured":"Chen C, Kailkhura B, Goldhahn R et\u00a0al (2021a) Certifiably-robust federated adversarial learning via randomized smoothing. In: IEEE international conference on mobile ad hoc and smart systems. pp 173\u2013179. https:\/\/doi.org\/10.1109\/MASS52906.2021.00032","DOI":"10.1109\/MASS52906.2021.00032"},{"key":"10797_CR31","doi-asserted-by":"publisher","unstructured":"Chen S, Kahla M, Jia R et\u00a0al (2021b) Knowledge-enriched distributional model inversion attacks. In: IEEE\/CVF international conference on computer vision. pp 16178\u201316187. https:\/\/doi.org\/10.1109\/ICCV48922.2021.01587","DOI":"10.1109\/ICCV48922.2021.01587"},{"key":"10797_CR32","unstructured":"Chou E, Tram\u00e8r F, Pellegrino G et\u00a0al (2018) SentiNet: detecting physical attacks against deep learning systems. arXiv preprint http:\/\/arxiv.org\/abs\/1812.00292"},{"key":"10797_CR33","unstructured":"Dang T, Thakkar O, Ramaswamy S et\u00a0al (2021) Revealing and protecting labels in distributed training. In: Advances in neural information processing systems, vol 34. https:\/\/proceedings.neurips.cc\/paper\/2021\/hash\/0d924f0e6b3fd0d91074c22727a53966-Abstract.html"},{"key":"10797_CR34","unstructured":"Dean J, Corrado G, Monga R et\u00a0al (2012) Large scale distributed deep networks. In: Advances in neural information processing systems. pp 1232\u20131240. https:\/\/papers.nips.cc\/paper\/2012\/hash\/6aca97005c68f1206823815f66102863-Abstract.html"},{"key":"10797_CR35","doi-asserted-by":"publisher","unstructured":"Deng Y, Lyu F, Ren J et\u00a0al (2021) Fair: quality-aware federated learning with precise user incentive and model aggregation. In: IEEE conference on computer communications. pp 1\u201310. 
https:\/\/doi.org\/10.1109\/INFOCOM42981.2021.9488743","DOI":"10.1109\/INFOCOM42981.2021.9488743"},{"key":"10797_CR36","unstructured":"Devlin J, Chang MW, Lee K et\u00a0al (2018) BERT: pre-training of deep bidirectional transformers for language understanding. arXiv preprint http:\/\/arxiv.org\/abs\/1810.04805"},{"key":"10797_CR37","doi-asserted-by":"publisher","DOI":"10.1109\/JIOT.2021.3102155","author":"J Domingo-Ferrer","year":"2021","unstructured":"Domingo-Ferrer J, Blanco-Justicia A, Manj\u00f3n J et al (2021) Secure and privacy-preserving federated learning via co-utility. IEEE Internet Things J. https:\/\/doi.org\/10.1109\/JIOT.2021.3102155","journal-title":"IEEE Internet Things J"},{"key":"10797_CR38","unstructured":"Dong Y, Deng Z, Pang T et\u00a0al (2020) Adversarial distributional training for robust deep learning. In: Advances in neural information processing systems, vol 33. pp 8270\u20138283. https:\/\/proceedings.neurips.cc\/paper\/2020\/hash\/5de8a36008b04a6167761fa19b61aa6c-Abstract.html"},{"key":"10797_CR39","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-88418-5_24","author":"Y Dong","year":"2021","unstructured":"Dong Y, Chen X, Li K et al (2021) FLOD: oblivious defender for private Byzantine-robust federated learning with dishonest-majority. Cryptol ePrint Archiv. https:\/\/doi.org\/10.1007\/978-3-030-88418-5_24","journal-title":"Cryptol ePrint Archiv"},{"key":"10797_CR40","unstructured":"Dong T, Zhao B, Lyu L (2022) Privacy for free: how does dataset condensation help privacy? In: International conference on machine learning. https:\/\/proceedings.mlr.press\/v162\/dong22c.html"},{"key":"10797_CR41","unstructured":"Dwork C, Rothblum GN (2016) Concentrated differential privacy. arXiv preprint http:\/\/arxiv.org\/abs\/1603.01887"},{"key":"10797_CR42","doi-asserted-by":"crossref","unstructured":"Dwork C, Kenthapadi K, McSherry F et\u00a0al (2006) Our data, ourselves: privacy via distributed noise generation. 
In: Annual international conference on the theory and applications of cryptographic techniques. pp 486\u2013503. https:\/\/www.iacr.org\/archive\/eurocrypt2006\/40040493\/40040493.pdf","DOI":"10.1007\/11761679_29"},{"key":"10797_CR43","doi-asserted-by":"publisher","unstructured":"Dwork C, Rothblum GN, Vadhan S (2010) Boosting and differential privacy. In: IEEE annual symposium on foundations of computer science. pp 51\u201360. https:\/\/doi.org\/10.1109\/FOCS.2010.12","DOI":"10.1109\/FOCS.2010.12"},{"key":"10797_CR44","unstructured":"El-Mhamdi EM, Farhadkhani S, Guerraoui R et\u00a0al (2021) Collaborative learning in the jungle (decentralized, Byzantine, heterogeneous, asynchronous and nonconvex learning). In: Advances in neural information processing systems, vol 34. https:\/\/proceedings.neurips.cc\/paper\/2021\/hash\/d2cd33e9c0236a8c2d8bd3fa91ad3acf-Abstract.html"},{"key":"10797_CR45","unstructured":"El\u00a0Mhamdi EM, Guerraoui R, Rouault SLA (2021) Distributed momentum for Byzantine-resilient stochastic gradient descent. In: International conference on learning representations. https:\/\/openreview.net\/forum?id=H8UHdhWG6A3"},{"key":"10797_CR46","doi-asserted-by":"crossref","unstructured":"Enthoven D, Al-Ars Z (2020) An overview of federated deep learning privacy attacks and defensive strategies. arXiv preprint http:\/\/arxiv.org\/abs\/2004.04676","DOI":"10.1007\/978-3-030-70604-3_8"},{"key":"10797_CR47","doi-asserted-by":"crossref","unstructured":"Fan L, Ng KW, Ju C et\u00a0al (2020) Rethinking privacy preserving deep learning: how to evaluate and thwart privacy attacks. arXiv preprint http:\/\/arxiv.org\/abs\/2006.11601","DOI":"10.1007\/978-3-030-63076-8_3"},{"key":"10797_CR48","unstructured":"Fan X, Ma Y, Dai Z et\u00a0al (2021) Fault-tolerant federated reinforcement learning with theoretical guarantee. In: Advances in neural information processing systems, vol 34. 
https:\/\/proceedings.neurips.cc\/paper\/2021\/hash\/080acdcce72c06873a773c4311c2e464-Abstract.html"},{"key":"10797_CR49","doi-asserted-by":"crossref","unstructured":"Fang P, Chen J (2023) On the vulnerability of backdoor defenses for federated learning. arXiv preprint http:\/\/arxiv.org\/abs\/2301.08170","DOI":"10.1609\/aaai.v37i10.26393"},{"key":"10797_CR50","unstructured":"Fang M, Cao X, Jia J et\u00a0al (2020) Local model poisoning attacks to Byzantine-robust federated learning. In: USENIX security symposium. https:\/\/www.usenix.org\/conference\/usenixsecurity20\/presentation\/fang"},{"key":"10797_CR51","unstructured":"Feng J, Xu H, Mannor S (2014) Distributed robust learning. arXiv preprint http:\/\/arxiv.org\/abs\/1409.5937"},{"key":"10797_CR52","doi-asserted-by":"publisher","unstructured":"Feng Y, Wu B, Fan Y et\u00a0al (2022) Boosting black-box attack with partially transferred conditional adversarial distribution. In: IEEE\/CVF conference on computer vision and pattern recognition. https:\/\/doi.org\/10.1109\/CVPR52688.2022.01467","DOI":"10.1109\/CVPR52688.2022.01467"},{"key":"10797_CR53","unstructured":"Fowl L, Geiping J, Czaja W et\u00a0al (2021) Robbing the fed: directly obtaining private data in federated learning with modified models. arXiv preprint http:\/\/arxiv.org\/abs\/2110.13057"},{"key":"10797_CR54","doi-asserted-by":"publisher","unstructured":"Fredrikson M, Jha S, Ristenpart T (2015) Model inversion attacks that exploit confidence information and basic countermeasures. In: ACM SIGSAC conference. https:\/\/doi.org\/10.1145\/2810103.2813677","DOI":"10.1145\/2810103.2813677"},{"key":"10797_CR55","doi-asserted-by":"publisher","first-page":"323","DOI":"10.2478\/popets-2021-0030","volume":"2","author":"D Froelicher","year":"2021","unstructured":"Froelicher D, Troncoso-Pastoriza JR, Pyrgelis A et al (2021) Scalable privacy-preserving distributed learning. Proc Priv Enhanc Technol 2:323\u2013347. 
https:\/\/doi.org\/10.2478\/popets-2021-0030","journal-title":"Proc Priv Enhanc Technol"},{"key":"10797_CR56","unstructured":"Fu C, Zhang X, Ji S et\u00a0al (2022) Label inference attacks against vertical federated learning. In: USENIX security symposium. https:\/\/www.usenix.org\/conference\/usenixsecurity22\/presentation\/fu-chong"},{"key":"10797_CR57","doi-asserted-by":"publisher","unstructured":"Gao Y, Xu C, Wang D et\u00a0al (2019) Strip: a defence against Trojan attacks on deep neural networks. In: Computer security applications conference. pp 113\u2013125. https:\/\/doi.org\/10.1145\/3359789.3359790","DOI":"10.1145\/3359789.3359790"},{"key":"10797_CR58","unstructured":"Gao Y, Doan BG, Zhang Z et\u00a0al (2020) Backdoor attacks and countermeasures on deep learning: a comprehensive review. arXiv preprint http:\/\/arxiv.org\/abs\/2007.10760"},{"key":"10797_CR59","doi-asserted-by":"publisher","unstructured":"Gao W, Guo S, Zhang T et\u00a0al (2021) Privacy-preserving collaborative learning with automatic transformation search. In: IEEE\/CVF conference on computer vision and pattern recognition. pp 114\u2013123. https:\/\/doi.org\/10.1109\/CVPR46437.2021.00018","DOI":"10.1109\/CVPR46437.2021.00018"},{"key":"10797_CR60","doi-asserted-by":"publisher","unstructured":"Gawali M, Arvind C, Suryavanshi S et\u00a0al (2021) Comparison of privacy-preserving distributed deep learning methods in healthcare. In: Annual conference on medical image understanding and analysis. pp 457\u2013471. https:\/\/doi.org\/10.48550\/arXiv.2012.12591","DOI":"10.48550\/arXiv.2012.12591"},{"key":"10797_CR61","unstructured":"Geiping J, Bauermeister H, Dr\u00f6ge H et\u00a0al (2020) Inverting gradients\u2014how easy is it to break privacy in federated learning? arXiv preprint http:\/\/arxiv.org\/abs\/2003.14053"},{"key":"10797_CR62","unstructured":"Goodfellow IJ, Shlens J, Szegedy C (2014) Explaining and harnessing adversarial examples. 
arXiv preprint http:\/\/arxiv.org\/abs\/1412.6572"},{"key":"10797_CR63","unstructured":"Goyal P, Doll\u00e1r P, Girshick R et\u00a0al (2017) Accurate, large minibatch SGD: training ImageNet in 1 hour. arXiv preprint http:\/\/arxiv.org\/abs\/1706.02677"},{"key":"10797_CR64","unstructured":"Grama M, Musat M, Mu\u00f1oz-Gonz\u00e1lez L et\u00a0al (2020) Robust aggregation for adaptive privacy preserving federated learning in healthcare. arXiv preprint http:\/\/arxiv.org\/abs\/2009.08294"},{"key":"10797_CR65","unstructured":"Gu T, Dolan-Gavitt B, Garg S (2017) BadNets: identifying vulnerabilities in the machine learning model supply chain. arXiv preprint http:\/\/arxiv.org\/abs\/1708.06733"},{"key":"10797_CR66","unstructured":"Guerraoui R, Rouault S et\u00a0al (2018) The hidden vulnerability of distributed learning in Byzantium. In: International conference on machine learning. pp 3521\u20133530. http:\/\/proceedings.mlr.press\/v80\/mhamdi18a.html"},{"key":"10797_CR67","doi-asserted-by":"publisher","DOI":"10.1109\/TCSVT.2021.3105723","author":"S Guo","year":"2021","unstructured":"Guo S, Zhang T, Xu G et al (2021a) Topology-aware differential privacy for decentralized image classification. IEEE Trans Circuits Syst Video Technol. https:\/\/doi.org\/10.1109\/TCSVT.2021.3105723","journal-title":"IEEE Trans Circuits Syst Video Technol"},{"key":"10797_CR68","doi-asserted-by":"publisher","DOI":"10.1109\/TCSVT.2021.3116976","author":"S Guo","year":"2021","unstructured":"Guo S, Zhang T, Yu H et al (2021b) Byzantine-resilient decentralized stochastic gradient descent. IEEE Trans Circuits Syst Video Technol. https:\/\/doi.org\/10.1109\/TCSVT.2021.3116976","journal-title":"IEEE Trans Circuits Syst Video Technol"},{"key":"10797_CR69","doi-asserted-by":"crossref","unstructured":"Guo W, Tondi B, Barni M (2022) An overview of backdoor attacks against deep neural networks and possible defences. IEEE Open J Signal Process. 
http:\/\/arxiv.org\/abs\/2111.08429","DOI":"10.1109\/OJSP.2022.3190213"},{"issue":"9","key":"10797_CR70","doi-asserted-by":"publisher","first-page":"2231","DOI":"10.1109\/TPDS.2021.3064345","volume":"32","author":"R Han","year":"2021","unstructured":"Han R, Li D, Ouyang J et al (2021) Accurate differentially private deep learning on the edge. IEEE Trans Parallel Distrib Syst 32(9):2231\u20132247. https:\/\/doi.org\/10.1109\/TPDS.2021.3064345","journal-title":"IEEE Trans Parallel Distrib Syst"},{"key":"10797_CR71","unstructured":"Hard A, Rao K, Mathews R et\u00a0al (2018) Federated learning for mobile keyboard prediction. arXiv preprint http:\/\/arxiv.org\/abs\/1811.03604"},{"key":"10797_CR72","doi-asserted-by":"publisher","unstructured":"Hartmann V, Meynent L, Peyrard M et\u00a0al (2023) Distribution inference risks: identifying and mitigating sources of leakage. In: IEEE conference on secure and trustworthy machine learning. https:\/\/doi.org\/10.1109\/SaTML54575.2023.00018","DOI":"10.1109\/SaTML54575.2023.00018"},{"key":"10797_CR73","doi-asserted-by":"publisher","unstructured":"Hatamizadeh A, Yin H, Roth HR et\u00a0al (2022) GradViT: gradient inversion of vision transformers. In: IEEE\/CVF conference on computer vision and pattern recognition. pp 10021\u201310030. https:\/\/doi.org\/10.1109\/CVPR52688.2022.00978","DOI":"10.1109\/CVPR52688.2022.00978"},{"issue":"1","key":"10797_CR74","doi-asserted-by":"publisher","first-page":"133","DOI":"10.2478\/popets-2019-0008","volume":"2019","author":"J Hayes","year":"2019","unstructured":"Hayes J, Melis L, Danezis G et al (2019) LOGAN: membership inference attacks against generative models. Proc Priv Enhanc Technol 2019(1):133\u2013152. https:\/\/doi.org\/10.2478\/popets-2019-0008","journal-title":"Proc Priv Enhanc Technol"},{"key":"10797_CR75","doi-asserted-by":"publisher","unstructured":"He K, Zhang X, Ren S et\u00a0al (2016) Deep residual learning for image recognition. 
In: IEEE conference on computer vision and pattern recognition. https:\/\/doi.org\/10.1109\/CVPR.2016.90","DOI":"10.1109\/CVPR.2016.90"},{"key":"10797_CR76","doi-asserted-by":"publisher","unstructured":"He Z, Zhang T, Lee RB (2019) Model inversion attacks against collaborative inference. In: Computer security applications conference. pp 148\u2013162. https:\/\/doi.org\/10.1145\/3359789.3359824","DOI":"10.1145\/3359789.3359824"},{"key":"10797_CR77","unstructured":"He C, Annavaram M, Avestimehr S (2020) Group knowledge transfer: federated learning of large CNNs at the edge. arXiv preprint http:\/\/arxiv.org\/abs\/2007.14513"},{"key":"10797_CR78","doi-asserted-by":"publisher","unstructured":"Hitaj B, Ateniese G, Perez-Cruz F (2017) Deep models under the GAN: information leakage from collaborative deep learning. In: ACM SIGSAC conference on computer and communications security. pp 603\u2013618. https:\/\/doi.org\/10.1145\/3133956.3134012","DOI":"10.1145\/3133956.3134012"},{"key":"10797_CR79","unstructured":"Hong J, Wang H, Wang Z et\u00a0al (2021) Federated robustness propagation: sharing adversarial robustness in federated learning. arXiv preprint http:\/\/arxiv.org\/abs\/2106.10196"},{"key":"10797_CR80","doi-asserted-by":"publisher","unstructured":"Hu S, Liu X, Zhang Y et\u00a0al (2022) Protecting facial privacy: generating adversarial identity masks via style-robust makeup transfer. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition. pp 15014\u201315023. https:\/\/doi.org\/10.1109\/CVPR52688.2022.01459","DOI":"10.1109\/CVPR52688.2022.01459"},{"key":"10797_CR81","unstructured":"Huang R, Xu B, Schuurmans D et\u00a0al (2015) Learning with a strong adversary. arXiv preprint http:\/\/arxiv.org\/abs\/1511.03034"},{"key":"10797_CR82","unstructured":"Huang X, Alzantot M, Srivastava M (2019) NeuronInspect: detecting backdoors in neural networks via output explanations. 
arXiv preprint http:\/\/arxiv.org\/abs\/1911.07399"},{"key":"10797_CR83","unstructured":"Huang W, Li T, Wang D et\u00a0al (2020a) Fairness and accuracy in federated learning. arXiv preprint http:\/\/arxiv.org\/abs\/2012.10069"},{"key":"10797_CR84","unstructured":"Huang WR, Geiping J, Fowl L et\u00a0al (2020b) MetaPoison: practical general-purpose clean-label data poisoning. arXiv preprint http:\/\/arxiv.org\/abs\/2004.00225"},{"key":"10797_CR85","unstructured":"Huang Y, Song Z, Li K et\u00a0al (2020c) InstaHide: instance-hiding schemes for private distributed learning. In: International conference on machine learning. pp 4507\u20134518. http:\/\/proceedings.mlr.press\/v119\/huang20i.html"},{"key":"10797_CR86","unstructured":"Huang Y, Gupta S, Song Z et\u00a0al (2021) Evaluating gradient inversion attacks and defenses in federated learning. In: Advances in neural information processing systems, vol 34. https:\/\/proceedings.neurips.cc\/paper\/2021\/hash\/3b3fff6463464959dcd1b68d0320f781-Abstract.html"},{"key":"10797_CR87","unstructured":"Hynes N, Cheng R, Song D (2018) Efficient deep learning on multi-source private data. arXiv preprint http:\/\/arxiv.org\/abs\/1807.06689"},{"key":"10797_CR88","unstructured":"Jagannatha A, Rawat BPS, Yu H (2021) Membership inference attack susceptibility of clinical language models. arXiv preprint http:\/\/arxiv.org\/abs\/2104.08305"},{"key":"10797_CR89","doi-asserted-by":"publisher","unstructured":"Jagielski M, Oprea A, Biggio B et\u00a0al (2018) Manipulating machine learning: poisoning attacks and countermeasures for regression learning. In: IEEE symposium on security and privacy. pp 19\u201335. https:\/\/doi.org\/10.1109\/SP.2018.00057","DOI":"10.1109\/SP.2018.00057"},{"key":"10797_CR90","unstructured":"Jayaraman B, Evans D (2019) Evaluating differentially private machine learning in practice. In: USENIX security symposium. pp 1895\u20131912. 
https:\/\/www.usenix.org\/system\/files\/sec19-jayaraman.pdf"},{"key":"10797_CR91","unstructured":"Jayaraman B, Wang L, Evans D et\u00a0al (2018) Distributed learning without distress: privacy-preserving empirical risk minimization. In: Advances in neural information processing systems. pp 6343\u20136354. https:\/\/proceedings.neurips.cc\/paper\/2018\/file\/7221e5c8ec6b08ef6d3f9ff3ce6eb1d1-Paper.pdf"},{"key":"10797_CR92","unstructured":"Jeon J, Kim J, Lee K et\u00a0al (2021) Gradient inversion with generative image prior. In: Neural information processing systems. https:\/\/proceedings.neurips.cc\/paper\/2021\/hash\/fa84632d742f2729dc32ce8cb5d49733-Abstract.html"},{"key":"10797_CR93","unstructured":"Jeong E, Oh S, Kim H et\u00a0al (2018) Communication-efficient on-device machine learning: federated distillation and augmentation under non-IID private data. arXiv preprint http:\/\/arxiv.org\/abs\/1811.11479"},{"key":"10797_CR94","doi-asserted-by":"publisher","unstructured":"Ji Y, Zhang X, Wang T (2017) Backdoor attacks against learning systems. In: IEEE conference on communications and network security. pp 1\u20139. https:\/\/doi.org\/10.1109\/CNS.2017.8228656","DOI":"10.1109\/CNS.2017.8228656"},{"key":"10797_CR95","unstructured":"Jin X, Chen PY, Hsu CY et\u00a0al (2021) CAFE: catastrophic data leakage in vertical federated learning. arXiv preprint http:\/\/arxiv.org\/abs\/2110.15122"},{"key":"10797_CR96","doi-asserted-by":"publisher","unstructured":"Jin G, Yi X, Huang W et al (2022) Enhancing adversarial training with second-order statistics of weights. In: IEEE\/CVF conference on computer vision and pattern recognition. https:\/\/doi.org\/10.1109\/CVPR52688.2022.01484","DOI":"10.1109\/CVPR52688.2022.01484"},{"key":"10797_CR97","unstructured":"Kairouz P, McMahan HB, Avent B et\u00a0al (2019) Advances and open problems in federated learning. 
arXiv preprint http:\/\/arxiv.org\/abs\/1912.04977"},{"key":"10797_CR98","unstructured":"Kang Y, Liu Y, Wang W (2019) Weighted distributed differential privacy ERM: convex and non-convex. arXiv preprint http:\/\/arxiv.org\/abs\/1910.10308"},{"key":"10797_CR99","unstructured":"Karimireddy SP, He L, Jaggi M (2021) Learning from history for Byzantine robust optimization. In: International conference on machine learning. pp 5311\u20135319. http:\/\/proceedings.mlr.press\/v139\/karimireddy21a.html"},{"key":"10797_CR100","doi-asserted-by":"publisher","unstructured":"Kim KI (2022) Robust combination of distributed gradients under adversarial perturbations. In: IEEE\/CVF conference on computer vision and pattern recognition. https:\/\/doi.org\/10.1109\/CVPR52688.2022.00035","DOI":"10.1109\/CVPR52688.2022.00035"},{"issue":"2","key":"10797_CR101","doi-asserted-by":"publisher","first-page":"e8805","DOI":"10.2196\/medinform.8805","volume":"6","author":"M Kim","year":"2018","unstructured":"Kim M, Song Y, Wang S et al (2018) Secure logistic regression based on homomorphic encryption: design and evaluation. JMIR Med Inform 6(2):e8805. https:\/\/doi.org\/10.2196\/medinform.8805","journal-title":"JMIR Med Inform"},{"key":"10797_CR102","unstructured":"Kone\u010dn\u00fd J, McMahan HB, Yu FX et\u00a0al (2016) Federated learning: strategies for improving communication efficiency. arXiv preprint http:\/\/arxiv.org\/abs\/1610.05492"},{"key":"10797_CR103","unstructured":"Krizhevsky A, Sutskever I, Hinton GE (2012a) ImageNet classification with deep convolutional neural networks. In: Advances in neural information processing systems, vol 25. pp 1097\u20131105. https:\/\/proceedings.neurips.cc\/paper\/2012\/hash\/c399862d3b9d6b76c8436e924a68c45b-Abstract.html"},{"key":"10797_CR104","unstructured":"Krizhevsky A, Sutskever I, Hinton GE (2012b) ImageNet classification with deep convolutional neural networks. In: Advances in neural information processing systems. pp 1106\u20131114. 
https:\/\/papers.nips.cc\/paper\/2012\/hash\/c399862d3b9d6b76c8436e924a68c45b-Abstract.html"},{"key":"10797_CR105","doi-asserted-by":"publisher","first-page":"526","DOI":"10.1109\/TIFS.2019.2925452","volume":"15","author":"H Kwon","year":"2019","unstructured":"Kwon H, Kim Y, Yoon H et al (2019) Selective audio adversarial example in evasion attack on speech recognition system. IEEE Trans Inf Forensics Secur 15:526\u2013538. https:\/\/doi.org\/10.1109\/TIFS.2019.2925452","journal-title":"IEEE Trans Inf Forensics Secur"},{"key":"10797_CR106","unstructured":"Lam M, Wei GY, Brooks D et\u00a0al (2021) Gradient disaggregation: breaking privacy in federated learning by reconstructing the user participant matrix. arXiv preprint http:\/\/arxiv.org\/abs\/2106.06089"},{"key":"10797_CR107","unstructured":"Le TP, Aono Y, Hayashi T et\u00a0al (2017) Privacy-preserving deep learning via additively homomorphic encryption. IEEE Trans Inf Forensics Secur (99):1\u20131. http:\/\/eprint.iacr.org\/2017\/715"},{"key":"10797_CR108","doi-asserted-by":"publisher","unstructured":"Leroy D, Coucke A, Lavril T et\u00a0al (2019) Federated learning for keyword spotting. In: IEEE international conference on acoustics, speech and signal processing. pp 6341\u20136345. https:\/\/doi.org\/10.1109\/ICASSP.2019.8683546","DOI":"10.1109\/ICASSP.2019.8683546"},{"key":"10797_CR109","unstructured":"Li M, Andersen DG, Park JW et\u00a0al (2014) Scaling distributed machine learning with the parameter server. In: USENIX symposium on operating systems design and implementation. pp 583\u2013598. https:\/\/www.usenix.org\/conference\/osdi14\/technical-sessions\/presentation\/li_mu"},{"issue":"8","key":"10797_CR110","doi-asserted-by":"publisher","first-page":"1440","DOI":"10.1109\/TKDE.2018.2794384","volume":"30","author":"C Li","year":"2018","unstructured":"Li C, Zhou P, Xiong L et al (2018) Differentially private distributed online learning. IEEE Trans Knowl Data Eng 30(8):1440\u20131453. 
https:\/\/doi.org\/10.1109\/TKDE.2018.2794384","journal-title":"IEEE Trans Knowl Data Eng"},{"key":"10797_CR111","unstructured":"Li S, Ma S, Xue M et\u00a0al (2020a) Deep learning backdoors. arXiv preprint http:\/\/arxiv.org\/abs\/2007.08273"},{"issue":"12","key":"10797_CR112","doi-asserted-by":"publisher","first-page":"11460","DOI":"10.1109\/JIOT.2020.3012480","volume":"7","author":"Y Li","year":"2020","unstructured":"Li Y, Li H, Xu G et al (2020b) Toward secure and privacy-preserving distributed deep learning in fog-cloud computing. IEEE Internet Things J 7(12):11460\u201311472. https:\/\/doi.org\/10.1109\/JIOT.2020.3012480","journal-title":"IEEE Internet Things J"},{"key":"10797_CR113","doi-asserted-by":"publisher","DOI":"10.1109\/JIOT.2020.3016145","author":"Y Li","year":"2020","unstructured":"Li Y, Xu X, Xiao J et al (2020c) Adaptive square attack: fooling autonomous cars with adversarial traffic signs. IEEE Internet Things J. https:\/\/doi.org\/10.1109\/JIOT.2020.3016145","journal-title":"IEEE Internet Things J"},{"issue":"8","key":"10797_CR114","doi-asserted-by":"publisher","first-page":"6178","DOI":"10.1109\/JIOT.2020.3022911","volume":"8","author":"Y Li","year":"2020","unstructured":"Li Y, Zhou Y, Jolfaei A et al (2020d) Privacy-preserving federated learning framework based on chained secure multiparty computing. IEEE Internet Things J 8(8):6178\u20136186. https:\/\/doi.org\/10.1109\/JIOT.2020.3022911","journal-title":"IEEE Internet Things J"},{"key":"10797_CR115","unstructured":"Li Q, He B, Song D (2021a) Adversarial collaborative learning on non-IID features. arXiv preprint https:\/\/openreview.net\/forum?id=EgkZwzEwciE"},{"key":"10797_CR116","doi-asserted-by":"publisher","DOI":"10.1109\/TKDE.2021.3124599","author":"Q Li","year":"2021","unstructured":"Li Q, Wen Z, Wu Z et al (2021b) A survey on federated learning systems: vision, hype and reality for data privacy and protection. IEEE Trans Knowl Data Eng. 
https:\/\/doi.org\/10.1109\/TKDE.2021.3124599","journal-title":"IEEE Trans Knowl Data Eng"},{"key":"10797_CR117","unstructured":"Li T, Hu S, Beirami A et\u00a0al (2021c) Ditto: fair and robust federated learning through personalization. In: International conference on machine learning. pp 6357\u20136368. http:\/\/proceedings.mlr.press\/v139\/li21h.html"},{"key":"10797_CR118","doi-asserted-by":"publisher","unstructured":"Li Y, Li Y, Wu B et\u00a0al (2021d) Invisible backdoor attack with sample-specific triggers. In: IEEE\/CVF international conference on computer vision. pp 16463\u201316472. https:\/\/doi.org\/10.1109\/ICCV48922.2021.01615","DOI":"10.1109\/ICCV48922.2021.01615"},{"key":"10797_CR119","doi-asserted-by":"crossref","unstructured":"Li Q, Diao Y, Chen Q et\u00a0al (2022a) Federated learning on non-IID data silos: an experimental study. In: International conference on data engineering, http:\/\/arxiv.org\/abs\/2102.02079","DOI":"10.1109\/ICDE53745.2022.00077"},{"key":"10797_CR120","doi-asserted-by":"crossref","unstructured":"Li Z, Zhang J, Liu L et\u00a0al (2022b) Auditing privacy defenses in federated learning via generative gradient leakage. In: IEEE\/CVF conference on computer vision and pattern recognition. pp 10132\u201310142","DOI":"10.1109\/CVPR52688.2022.00989"},{"key":"10797_CR121","unstructured":"Lian X, Zhang C, Zhang H et\u00a0al (2017) Can decentralized algorithms outperform centralized algorithms? A case study for decentralized parallel stochastic gradient descent. In: Advances in neural information processing systems. pp 5330\u20135340. https:\/\/proceedings.neurips.cc\/paper\/2017\/file\/f75526659f31040afeb61cb7133e4e6d-Paper.pdf"},{"key":"10797_CR122","unstructured":"Liang Z, Wang B, Gu Q et\u00a0al (2020) Differentially private federated learning with Laplacian smoothing. 
arXiv preprint http:\/\/arxiv.org\/abs\/2005.00218"},{"issue":"3","key":"10797_CR123","doi-asserted-by":"publisher","first-page":"2031","DOI":"10.1109\/COMST.2020.2986024","volume":"22","author":"WYB Lim","year":"2020","unstructured":"Lim WYB, Luong NC, Hoang DT et al (2020) Federated learning in mobile edge networks: a comprehensive survey. IEEE Commun Surv Tutor 22(3):2031\u20132063. https:\/\/doi.org\/10.1109\/COMST.2020.2986024","journal-title":"IEEE Commun Surv Tutor"},{"key":"10797_CR124","unstructured":"Lin T, Kong L, Stich SU et\u00a0al (2020) Ensemble distillation for robust model fusion in federated learning. In: Advances in neural information processing systems, vol 33. pp 2351\u20132363. https:\/\/proceedings.neurips.cc\/paper\/2020\/hash\/18df51b97ccd68128e994804f3eccc87-Abstract.html"},{"key":"10797_CR125","doi-asserted-by":"publisher","first-page":"60","DOI":"10.1016\/j.media.2017.07.005","volume":"42","author":"G Litjens","year":"2017","unstructured":"Litjens G, Kooi T, Bejnordi BE et al (2017) A survey on deep learning in medical image analysis. Med Image Anal 42:60\u201388. https:\/\/doi.org\/10.1016\/j.media.2017.07.005","journal-title":"Med Image Anal"},{"key":"10797_CR126","doi-asserted-by":"crossref","unstructured":"Liu Y, Ma S, Aafer Y et\u00a0al (2018) Trojaning attack on neural networks. In: Annual network and distributed system security symposium. http:\/\/wp.internetsociety.org\/ndss\/wp-content\/uploads\/sites\/25\/2018\/02\/ndss2018_03A-5_Liu_paper.pdf","DOI":"10.14722\/ndss.2018.23291"},{"issue":"3","key":"10797_CR127","doi-asserted-by":"publisher","first-page":"4946","DOI":"10.1109\/JIOT.2019.2897619","volume":"6","author":"D Liu","year":"2019","unstructured":"Liu D, Yan Z, Ding W et al (2019a) A survey on secure data analytics in edge computing. IEEE Internet Things J 6(3):4946\u20134967. 
https:\/\/doi.org\/10.1109\/JIOT.2019.2897619","journal-title":"IEEE Internet Things J"},{"key":"10797_CR128","unstructured":"Liu M, Zhang W, Mroueh Y et\u00a0al (2019b) A decentralized parallel algorithm for training generative adversarial nets. arXiv preprint http:\/\/arxiv.org\/abs\/1910.12999"},{"key":"10797_CR129","doi-asserted-by":"publisher","unstructured":"Liu Y, Lee WC, Tao G et\u00a0al (2019c) ABS: scanning neural networks for back-doors by artificial brain stimulation. In: ACM SIGSAC conference on computer and communications security. pp 1265\u20131282. https:\/\/doi.org\/10.1145\/3319535.3363216","DOI":"10.1145\/3319535.3363216"},{"key":"10797_CR130","unstructured":"Liu Y, Yi Z, Chen T (2020) Backdoor attacks and defenses in feature-partitioned collaborative learning. arXiv preprint http:\/\/arxiv.org\/abs\/2007.03608"},{"key":"10797_CR131","doi-asserted-by":"publisher","first-page":"4574","DOI":"10.1109\/TIFS.2021.3108434","volume":"16","author":"X Liu","year":"2021","unstructured":"Liu X, Li H, Xu G et al (2021) Privacy-enhanced federated learning against poisoning adversaries. IEEE Trans Inf Forensics Secur 16:4574\u20134588. https:\/\/doi.org\/10.1109\/TIFS.2021.3108434","journal-title":"IEEE Trans Inf Forensics Secur"},{"key":"10797_CR132","doi-asserted-by":"publisher","unstructured":"Liu Y, Shen G, Tao G et\u00a0al (2022a) Complex backdoor detection by symmetric feature differencing. In: IEEE\/CVF conference on computer vision and pattern recognition. pp 15003\u201315013. https:\/\/doi.org\/10.1109\/CVPR52688.2022.01458","DOI":"10.1109\/CVPR52688.2022.01458"},{"key":"10797_CR133","doi-asserted-by":"publisher","unstructured":"Liu Z, Guo J, Lam KY et\u00a0al (2022b) Efficient dropout-resilient aggregation for privacy-preserving machine learning. IEEE Trans Inf Forensics Secur. 
https:\/\/doi.org\/10.48550\/arXiv.2203.17044","DOI":"10.48550\/arXiv.2203.17044"},{"key":"10797_CR134","unstructured":"Liu X, Kuang H, Lin X et\u00a0al (2023) CAT: collaborative adversarial training. arXiv preprint http:\/\/arxiv.org\/abs\/2303.14922"},{"key":"10797_CR135","unstructured":"Long Y, Bindschaedler V, Wang L et\u00a0al (2018) Understanding membership inferences on well-generalized learning models. arXiv preprint http:\/\/arxiv.org\/abs\/1802.04889"},{"key":"10797_CR136","unstructured":"Lu Y, De\u00a0Sa C (2021) Optimal complexity in decentralized training. In: International conference on machine learning. pp 7111\u20137123. https:\/\/proceedings.mlr.press\/v139\/lu21a\/lu21a.pdf"},{"key":"10797_CR137","unstructured":"Luo S, Zhu D, Li Z et\u00a0al (2021) Ensemble federated adversarial training with non-IID data. arXiv preprint http:\/\/arxiv.org\/abs\/2110.14814"},{"key":"10797_CR138","doi-asserted-by":"publisher","unstructured":"Lyu L (2021) DP-SIGNSGD: when efficiency meets privacy and robustness. In: International conference on acoustics, speech and signal processing. pp 3070\u20133074. https:\/\/doi.org\/10.1109\/ICASSP39728.2021.9414538","DOI":"10.1109\/ICASSP39728.2021.9414538"},{"key":"10797_CR139","unstructured":"Lyu L, Yu H, Ma X et\u00a0al (2020a) Privacy and robustness in federated learning: attacks and defenses. arXiv preprint http:\/\/arxiv.org\/abs\/2012.06337"},{"key":"10797_CR140","unstructured":"Lyu L, Yu H, Yang Q (2020b) Threats to federated learning: a survey. arXiv preprint http:\/\/arxiv.org\/abs\/2003.02133"},{"key":"10797_CR141","doi-asserted-by":"crossref","unstructured":"Ma S, Liu Y (2019) NIC: detecting adversarial samples with neural network invariant checking. In: Network and distributed system security symposium. 
https:\/\/www.ndss-symposium.org\/ndss-paper\/nic-detecting-adversarial-samples-with-neural-network-invariant-checking\/","DOI":"10.14722\/ndss.2019.23415"},{"key":"10797_CR142","doi-asserted-by":"publisher","DOI":"10.1109\/JIOT.2021.3079472","author":"C Ma","year":"2021","unstructured":"Ma C, Li J, Ding M et al (2021) Federated learning with unreliable clients: performance analysis and mechanism design. IEEE Internet Things J. https:\/\/doi.org\/10.1109\/JIOT.2021.3079472","journal-title":"IEEE Internet Things J"},{"key":"10797_CR143","doi-asserted-by":"publisher","DOI":"10.1109\/LCOMM.2022.3180113","author":"X Ma","year":"2022","unstructured":"Ma X, Sun X, Wu Y et al (2022a) Differentially private Byzantine-robust federated learning. IEEE Trans Parallel Distrib Syst. https:\/\/doi.org\/10.1109\/LCOMM.2022.3180113","journal-title":"IEEE Trans Parallel Distrib Syst"},{"issue":"103","key":"10797_CR144","doi-asserted-by":"publisher","first-page":"561","DOI":"10.1016\/j.csi.2021.103561","volume":"80","author":"X Ma","year":"2022","unstructured":"Ma X, Zhou Y, Wang L et al (2022b) Privacy-preserving Byzantine-robust federated learning. Comput Stand Interfaces 80(103):561. https:\/\/doi.org\/10.1016\/j.csi.2021.103561","journal-title":"Comput Stand Interfaces"},{"key":"10797_CR145","doi-asserted-by":"publisher","first-page":"1639","DOI":"10.1109\/TIFS.2022.3169918","volume":"17","author":"Z Ma","year":"2022","unstructured":"Ma Z, Ma J, Miao Y et al (2022c) ShieldFL: mitigating model poisoning attacks in privacy-preserving federated learning. IEEE Trans Inf Forensics Secur 17:1639\u20131654. https:\/\/doi.org\/10.1109\/TIFS.2022.3169918","journal-title":"IEEE Trans Inf Forensics Secur"},{"key":"10797_CR146","unstructured":"Madry A, Makelov A, Schmidt L et\u00a0al (2017) Towards deep learning models resistant to adversarial attacks. 
arXiv preprint http:\/\/arxiv.org\/abs\/1706.06083"},{"key":"10797_CR147","doi-asserted-by":"publisher","unstructured":"Mahloujifar S, Ghosh E, Chase M (2022) Property inference from poisoning. In: IEEE symposium on security and privacy. pp 1569\u20131569. https:\/\/doi.org\/10.1109\/SP46214.2022.9833623","DOI":"10.1109\/SP46214.2022.9833623"},{"key":"10797_CR148","doi-asserted-by":"publisher","unstructured":"Mao Y, Yuan X, Zhao X et\u00a0al (2021) ROMOA: robust model aggregation for the resistance of federated learning to model poisoning attacks. In: European symposium on research in computer security. pp 476\u2013496. https:\/\/doi.org\/10.1007\/978-3-030-88418-5_23","DOI":"10.1007\/978-3-030-88418-5_23"},{"key":"10797_CR149","unstructured":"McMahan B, Moore E, Ramage D et\u00a0al (2017) Communication-efficient learning of deep networks from decentralized data. In: Artificial intelligence and statistics. pp 1273\u20131282. http:\/\/proceedings.mlr.press\/v54\/mcmahan17a.html"},{"key":"10797_CR150","doi-asserted-by":"publisher","unstructured":"Melis L, Song C, De\u00a0Cristofaro E et\u00a0al (2019) Exploiting unintended feature leakage in collaborative learning. In: IEEE symposium on security and privacy. pp 691\u2013706. https:\/\/doi.org\/10.1109\/SP.2019.00029","DOI":"10.1109\/SP.2019.00029"},{"key":"10797_CR151","doi-asserted-by":"publisher","unstructured":"Mironov I (2017) R\u00e9nyi differential privacy. In: IEEE computer security foundations symposium. pp 263\u2013275. https:\/\/doi.org\/10.1109\/CSF.2017.11","DOI":"10.1109\/CSF.2017.11"},{"key":"10797_CR152","unstructured":"Moritz P, Nishihara R, Stoica I et\u00a0al (2015) SparkNet: training deep networks in spark. arXiv preprint http:\/\/arxiv.org\/abs\/1511.06051"},{"key":"10797_CR153","unstructured":"Moritz P, Nishihara R, Stoica I et\u00a0al (2016) SparkNet: training deep networks in spark. In: International conference on learning representations. 
http:\/\/learningsys.org\/papers\/LearningSys_2015_paper_18.pdf"},{"key":"10797_CR154","doi-asserted-by":"publisher","first-page":"619","DOI":"10.1016\/j.future.2020.10.007","volume":"115","author":"V Mothukuri","year":"2021","unstructured":"Mothukuri V, Parizi RM, Pouriyeh S et al (2021) A survey on security and privacy of federated learning. Futur Gener Comput Syst 115:619\u2013640. https:\/\/doi.org\/10.1016\/j.future.2020.10.007","journal-title":"Futur Gener Comput Syst"},{"key":"10797_CR155","doi-asserted-by":"publisher","unstructured":"Mu\u00f1oz-Gonz\u00e1lez L, Biggio B, Demontis A et\u00a0al (2017) Towards poisoning of deep learning algorithms with back-gradient optimization. In: ACM workshop on artificial intelligence and security. pp 27\u201338. https:\/\/doi.org\/10.1145\/3128572.3140451","DOI":"10.1145\/3128572.3140451"},{"key":"10797_CR156","unstructured":"Mu\u00f1oz-Gonz\u00e1lez L, Co KT, Lupu EC (2019) Byzantine-robust federated machine learning through adaptive model averaging. arXiv preprint http:\/\/arxiv.org\/abs\/1909.05125"},{"key":"10797_CR157","doi-asserted-by":"crossref","unstructured":"Narayanan D, Harlap A, Phanishayee A et\u00a0al (2019) PipeDream: generalized pipeline parallelism for DNN training. In: ACM symposium on operating systems principles. pp 1\u201315","DOI":"10.1145\/3341301.3359646"},{"key":"10797_CR158","unstructured":"Naseri M, Hayes J, De\u00a0Cristofaro E (2020) Toward robustness and privacy in federated learning: experimenting with local and central differential privacy. arXiv preprint http:\/\/arxiv.org\/abs\/2009.03561"},{"key":"10797_CR159","doi-asserted-by":"publisher","unstructured":"Nasr M, Shokri R, Houmansadr A (2019) Comprehensive privacy analysis of deep learning: passive and active white-box inference attacks against centralized and federated learning. In: IEEE symposium on security and privacy. pp 739\u2013753. 
https:\/\/doi.org\/10.1109\/SP.2019.00065","DOI":"10.1109\/SP.2019.00065"},{"key":"10797_CR160","doi-asserted-by":"publisher","unstructured":"Naveed M, Kamara S, Wright CV (2015) Inference attacks on property-preserving encrypted databases. In: ACM SIGSAC conference on computer and communications security. https:\/\/doi.org\/10.1145\/2810103.2813651","DOI":"10.1145\/2810103.2813651"},{"key":"10797_CR161","doi-asserted-by":"publisher","unstructured":"Nguyen TD, Rieger P, Miettinen M et\u00a0al (2020) Poisoning attacks on federated learning-based IoT intrusion detection system. In: Proceedings Workshop on Decentralized IoT System and Security. pp 1\u20137. https:\/\/doi.org\/10.14722\/diss.2020.23003","DOI":"10.14722\/diss.2020.23003"},{"key":"10797_CR162","unstructured":"Ouyang L, Wu J, Jiang X et\u00a0al (2022) Training language models to follow instructions with human feedback. In: Advances in neural information processing systems. https:\/\/proceedings.neurips.cc\/paper_files\/paper\/2022\/file\/b1efde53be364a73914f58805a001731-Paper-Conference.pdf"},{"key":"10797_CR163","unstructured":"Ozdayi MS, Kantarcioglu M, Gel YR (2020) Defending against backdoors in federated learning with robust learning rate. arXiv preprint http:\/\/arxiv.org\/abs\/2007.03767"},{"key":"10797_CR164","unstructured":"Pan X, Zhang M, Wu D et\u00a0al (2020a) Justinian\u2019s GAAvernor: robust distributed learning with gradient aggregation agent. In: USENIX security symposium. pp 1641\u20131658. https:\/\/www.usenix.org\/conference\/usenixsecurity20\/presentation\/pan"},{"key":"10797_CR165","unstructured":"Pan X, Zhang M, Yan Y et\u00a0al (2020b) Exploring the security boundary of data reconstruction via neuron exclusivity analysis. arXiv e-prints"},{"key":"10797_CR166","unstructured":"Park M, Foulds J, Choudhary K et\u00a0al (2017) DP-EM: differentially private expectation maximization. In: Artificial intelligence and statistics. pp 896\u2013904. 
http:\/\/proceedings.mlr.press\/v54\/park17c\/park17c.pdf"},{"key":"10797_CR167","doi-asserted-by":"publisher","unstructured":"Pedarla LP, Zhang X, Zhao L et\u00a0al (2023) Evaluation of query-based membership inference attack on the medical data. In: ACM southeast conference. https:\/\/doi.org\/10.1145\/3564746.3587027","DOI":"10.1145\/3564746.3587027"},{"issue":"1","key":"10797_CR168","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1007\/s13748-012-0035-5","volume":"2","author":"D Peteiro-Barral","year":"2013","unstructured":"Peteiro-Barral D, Guijarro-Berdi\u00f1as B (2013) A survey of methods for distributed machine learning. Prog Artif Intell 2(1):1\u201311. https:\/\/doi.org\/10.1007\/s13748-012-0035-5","journal-title":"Prog Artif Intell"},{"key":"10797_CR169","doi-asserted-by":"publisher","unstructured":"Phan N, Wang Y, Wu X et\u00a0al (2016) Differential privacy preservation for deep auto-encoders: an application of human behavior prediction. In: AAAI conference on artificial intelligence. pp 1309\u20131316. https:\/\/doi.org\/10.1609\/aaai.v30i1.10165","DOI":"10.1609\/aaai.v30i1.10165"},{"key":"10797_CR170","doi-asserted-by":"publisher","first-page":"328","DOI":"10.1016\/j.future.2020.12.003","volume":"117","author":"Y Qi","year":"2021","unstructured":"Qi Y, Hossain MS, Nie J et al (2021) Privacy-preserving blockchain-based federated learning for traffic flow prediction. Futur Gener Comput Syst 117:328\u2013337. https:\/\/doi.org\/10.1016\/j.future.2020.12.003","journal-title":"Futur Gener Comput Syst"},{"key":"10797_CR171","unstructured":"Qin C, Martens J, Gowal S et\u00a0al (2019) Adversarial robustness through local linearization. In: Advances in neural information processing systems, vol 32. 
https:\/\/proceedings.neurips.cc\/paper\/2019\/hash\/0defd533d51ed0a10c5c9dbf93ee78a5-Abstract.html"},{"key":"10797_CR172","doi-asserted-by":"publisher","unstructured":"Qiu H, Xiao C, Yang L et\u00a0al (2020) SemanticAdv: generating adversarial examples via attribute-conditioned image editing. In: European conference on computer vision. pp 19\u201337. https:\/\/doi.org\/10.1007\/978-3-030-58568-6_2","DOI":"10.1007\/978-3-030-58568-6_2"},{"key":"10797_CR173","doi-asserted-by":"publisher","unstructured":"Qiu H, Zeng Y, Guo S et\u00a0al (2021) DeepSweep: an evaluation framework for mitigating DNN backdoor attacks using data augmentation. In: ACM Asia conference on computer and communications security. pp 363\u2013377. https:\/\/doi.org\/10.1145\/3433210.3453108","DOI":"10.1145\/3433210.3453108"},{"key":"10797_CR174","unstructured":"Reddi S, Charles Z, Zaheer M et\u00a0al (2020) Adaptive federated optimization. arXiv preprint http:\/\/arxiv.org\/abs\/2003.00295"},{"key":"10797_CR175","unstructured":"Sahu AK, Li T, Sanjabi M et\u00a0al (2018) On the convergence of federated optimization in heterogeneous networks. arXiv preprint 3:3. http:\/\/arxiv.org\/abs\/1812.06127"},{"key":"10797_CR176","doi-asserted-by":"crossref","unstructured":"Salem A, Zhang Y, Humbert M et\u00a0al (2018) ML-Leaks: model and data independent membership inference attacks and defenses on machine learning models. arXiv preprint http:\/\/arxiv.org\/abs\/1806.01246","DOI":"10.14722\/ndss.2019.23119"},{"key":"10797_CR177","doi-asserted-by":"publisher","unstructured":"Scheliga D, M\u00e4der P, Seeland M (2022) PRECODE\u2014a generic model extension to prevent deep gradient leakage. In: IEEE\/CVF winter conference on applications of computer vision. pp 1849\u20131858. https:\/\/doi.org\/10.1109\/WACV51458.2022.00366","DOI":"10.1109\/WACV51458.2022.00366"},{"key":"10797_CR178","unstructured":"Shafahi A, Huang WR, Najibi M et\u00a0al (2018) Poison frogs! 
Targeted clean-label poisoning attacks on neural networks. In: Advances in neural information processing systems. pp 6103\u20136113. https:\/\/proceedings.neurips.cc\/paper\/2018\/hash\/22722a343513ed45f14905eb07621686-Abstract.html"},{"key":"10797_CR179","unstructured":"Shafahi A, Najibi M, Ghiasi A et\u00a0al (2019) Adversarial training for free! arXiv preprint http:\/\/arxiv.org\/abs\/1904.12843"},{"key":"10797_CR180","unstructured":"Shah D, Dube P, Chakraborty S et\u00a0al (2021) Adversarial training in communication constrained federated learning. arXiv preprint http:\/\/arxiv.org\/abs\/2103.01319"},{"key":"10797_CR181","doi-asserted-by":"publisher","first-page":"195","DOI":"10.1016\/j.neucom.2018.04.027","volume":"307","author":"U Shaham","year":"2018","unstructured":"Shaham U, Yamada Y, Negahban S (2018) Understanding adversarial training: increasing local stability of supervised models through robust optimization. Neurocomputing 307:195\u2013204. https:\/\/doi.org\/10.1016\/j.neucom.2018.04.027","journal-title":"Neurocomputing"},{"key":"10797_CR182","doi-asserted-by":"crossref","unstructured":"Shejwalkar V, Houmansadr A (2021) Manipulating the Byzantine: optimizing model poisoning attacks and defenses for federated learning. Internet Society, p\u00a018. https:\/\/people.cs.umass.edu\/~amir\/papers\/NDSS21-model-poisoning.pdf","DOI":"10.14722\/ndss.2021.24498"},{"key":"10797_CR183","doi-asserted-by":"publisher","DOI":"10.1109\/JIOT.2016.2579198","author":"W Shi","year":"2016","unstructured":"Shi W, Cao J, Zhang Q et al (2016) Edge computing: vision and challenges. IEEE Internet Things J. https:\/\/doi.org\/10.1109\/JIOT.2016.2579198","journal-title":"IEEE Internet Things J"},{"key":"10797_CR184","doi-asserted-by":"crossref","unstructured":"Shi J, Wan W, Hu S et\u00a0al (2022) Challenges and approaches for mitigating Byzantine attacks in federated learning. In: IEEE international conference on trust, security and privacy in computing and communications. 
http:\/\/arxiv.org\/abs\/2112.14468","DOI":"10.1109\/TrustCom56396.2022.00030"},{"key":"10797_CR185","unstructured":"Shoeybi M, Patwary M, Puri R et\u00a0al (2019) Megatron-LM: training multi-billion parameter language models using model parallelism. arXiv preprint http:\/\/arxiv.org\/abs\/1909.08053"},{"key":"10797_CR186","doi-asserted-by":"publisher","unstructured":"Shokri R, Shmatikov V (2015) Privacy-preserving deep learning. In: ACM SIGSAC conference on computer and communications security. pp 1310\u20131321. https:\/\/doi.org\/10.1145\/2810103.2813687","DOI":"10.1145\/2810103.2813687"},{"key":"10797_CR187","doi-asserted-by":"publisher","unstructured":"Shokri R, Stronati M, Song C et\u00a0al (2017) Membership inference attacks against machine learning models. In: IEEE symposium on security and privacy. pp 3\u201318. https:\/\/doi.org\/10.1109\/SP.2017.41","DOI":"10.1109\/SP.2017.41"},{"issue":"10","key":"10797_CR188","doi-asserted-by":"publisher","first-page":"2430","DOI":"10.1109\/JSAC.2020.3000372","volume":"38","author":"M Song","year":"2020","unstructured":"Song M, Wang Z, Zhang Z et al (2020) Analyzing user-level privacy attack against federated learning. IEEE J Sel Areas Commun 38(10):2430\u20132444. https:\/\/doi.org\/10.1109\/JSAC.2020.3000372","journal-title":"IEEE J Sel Areas Commun"},{"key":"10797_CR189","doi-asserted-by":"crossref","unstructured":"Stripelis D, Saleem H, Ghai T et\u00a0al (2021) Secure neuroimaging analysis using federated learning with homomorphic encryption. arXiv preprint http:\/\/arxiv.org\/abs\/2108.03437","DOI":"10.1117\/12.2606256"},{"key":"10797_CR190","unstructured":"Sun Z, Kairouz P, Suresh AT et\u00a0al (2019) Can you really backdoor federated learning? arXiv preprint http:\/\/arxiv.org\/abs\/1911.07963"},{"key":"10797_CR191","unstructured":"Sun G, Cong Y, Dong J et\u00a0al (2020) Data poisoning attacks on federated machine learning. 
arXiv preprint http:\/\/arxiv.org\/abs\/2004.10020"},{"key":"10797_CR192","unstructured":"Sun J, Li A, DiValentin L et\u00a0al (2021a) FL-WBC: enhancing robustness against model poisoning attacks in federated learning from a client perspective. In: Advances in neural information processing systems, vol 34. https:\/\/proceedings.neurips.cc\/paper\/2021\/hash\/692baebec3bb4b53d7ebc3b9fabac31b-Abstract.html"},{"key":"10797_CR193","doi-asserted-by":"publisher","unstructured":"Sun J, Li A, Wang B et\u00a0al (2021b) Soteria: provable defense against privacy leakage in federated learning from representation perspective. In: IEEE\/CVF conference on computer vision and pattern recognition. pp 9311\u20139319. https:\/\/doi.org\/10.1109\/CVPR46437.2021.00919","DOI":"10.1109\/CVPR46437.2021.00919"},{"key":"10797_CR194","doi-asserted-by":"publisher","DOI":"10.1109\/JSAC.2021.3118354","author":"P Sun","year":"2021","unstructured":"Sun P, Che H, Wang Z et al (2021c) Pain-FL: personalized privacy-preserving incentive for federated learning. IEEE J Sel Areas Commun. https:\/\/doi.org\/10.1109\/JSAC.2021.3118354","journal-title":"IEEE J Sel Areas Commun"},{"key":"10797_CR195","unstructured":"Sun T, Li D, Wang B (2021d) Stability and generalization of the decentralized stochastic gradient descent. arXiv preprint http:\/\/arxiv.org\/abs\/2102.01302"},{"key":"10797_CR196","doi-asserted-by":"publisher","DOI":"10.1142\/S0218488502001648","author":"L Sweeney","year":"2002","unstructured":"Sweeney L (2002) k-anonymity: a model for protecting privacy. Int J Uncertain Fuzziness Knowl Based Syst. https:\/\/doi.org\/10.1142\/S0218488502001648","journal-title":"Int J Uncertain Fuzziness Knowl Based Syst"},{"key":"10797_CR197","unstructured":"Szegedy C, Zaremba W, Sutskever I et\u00a0al (2013) Intriguing properties of neural networks. 
arXiv preprint http:\/\/arxiv.org\/abs\/1312.6199"},{"key":"10797_CR198","doi-asserted-by":"publisher","unstructured":"Szegedy C, Liu W, Jia Y et\u00a0al (2015) Going deeper with convolutions. In: IEEE\/CVF conference on computer vision and pattern recognition. pp 1\u20139. https:\/\/doi.org\/10.1109\/CVPR.2015.7298594","DOI":"10.1109\/CVPR.2015.7298594"},{"key":"10797_CR199","doi-asserted-by":"publisher","unstructured":"Tancik M, Mildenhall B, Ng R (2020) StegaStamp: invisible hyperlinks in physical photographs. In: IEEE\/CVF conference on computer vision and pattern recognition. pp 2117\u20132126. https:\/\/doi.org\/10.1109\/CVPR42600.2020.00219","DOI":"10.1109\/CVPR42600.2020.00219"},{"key":"10797_CR200","doi-asserted-by":"publisher","unstructured":"Tao G, Shen G, Liu Y et\u00a0al (2022) Better trigger inversion optimization in backdoor scanning. In: IEEE\/CVF conference on computer vision and pattern recognition. pp 13368\u201313378. https:\/\/doi.org\/10.1109\/CVPR52688.2022.01301","DOI":"10.1109\/CVPR52688.2022.01301"},{"key":"10797_CR201","doi-asserted-by":"publisher","DOI":"10.1186\/s40537-020-00320-x","author":"S Thudumu","year":"2020","unstructured":"Thudumu S, Branch P, Jin J et al (2020) A comprehensive survey of anomaly detection techniques for high dimensional big data. J Big Data. https:\/\/doi.org\/10.1186\/s40537-020-00320-x","journal-title":"J Big Data"},{"key":"10797_CR202","doi-asserted-by":"crossref","unstructured":"Tolpegin V, Truex S, Gursoy ME et\u00a0al (2020) Data poisoning attacks against federated learning systems. In: European symposium on research in computer security. pp 480\u2013501. https:\/\/doi.org\/10.1007\/978-3-030-58951-6_24","DOI":"10.1007\/978-3-030-58951-6_24"},{"key":"10797_CR203","unstructured":"Tran B, Li J, Madry A (2018) Spectral signatures in backdoor attacks. In: Advances in neural information processing systems. pp 8000\u20138010. 
https:\/\/proceedings.neurips.cc\/paper\/2018\/hash\/280cf18baf4311c92aa5a042336587d3-Abstract.html"},{"key":"10797_CR204","doi-asserted-by":"publisher","unstructured":"Truong L, Jones C, Hutchinson B et\u00a0al (2020) Systematic evaluation of backdoor data poisoning attacks on image classifiers. In: IEEE\/CVF conference on computer vision and pattern recognition workshops. pp 788\u2013789. https:\/\/doi.org\/10.1109\/CVPRW50498.2020.00402","DOI":"10.1109\/CVPRW50498.2020.00402"},{"key":"10797_CR205","doi-asserted-by":"publisher","unstructured":"Tsaknakis I, Hong M, Liu S (2020) Decentralized min-max optimization: formulations, algorithms and applications in network poisoning attack. In: IEEE international conference on acoustics, speech and signal processing. pp 5755\u20135759. https:\/\/doi.org\/10.1109\/ICASSP40776.2020.9054056","DOI":"10.1109\/ICASSP40776.2020.9054056"},{"issue":"84","key":"10797_CR206","first-page":"1","volume":"22","author":"J Tu","year":"2021","unstructured":"Tu J, Liu W, Mao X et al (2021) Variance reduced median-of-means estimator for Byzantine-robust distributed inference. J Mach Learn Res 22(84):1\u201367","journal-title":"J Mach Learn Res"},{"key":"10797_CR207","unstructured":"Turner A, Tsipras D, Madry A (2018) Clean-label backdoor attacks. arXiv preprint https:\/\/people.csail.mit.edu\/madry\/lab\/cleanlabel.pdf"},{"key":"10797_CR208","unstructured":"Vepakomma P, Swedish T, Raskar R et\u00a0al (2018) No peek: a survey of private distributed deep learning. arXiv preprint http:\/\/arxiv.org\/abs\/1812.03288"},{"key":"10797_CR209","unstructured":"Vinaroz M, Park MJ (2023) Differentially private kernel inducing points (DP-KIP) for privacy-preserving data distillation. arXiv preprint http:\/\/arxiv.org\/abs\/2301.13389"},{"key":"10797_CR210","doi-asserted-by":"publisher","unstructured":"Wang B, Yao Y, Shan S et\u00a0al (2019a) Neural cleanse: identifying and mitigating backdoor attacks in neural networks. 
In: IEEE symposium on security and privacy. pp 707\u2013723. https:\/\/doi.org\/10.1109\/SP.2019.00031","DOI":"10.1109\/SP.2019.00031"},{"key":"10797_CR211","doi-asserted-by":"publisher","unstructured":"Wang Z, Song M, Zhang Z et\u00a0al (2019b) Beyond inferring class representatives: user-level privacy leakage from federated learning. In: IEEE conference on computer communications. pp 2512\u20132520. https:\/\/doi.org\/10.1109\/INFOCOM.2019.8737416","DOI":"10.1109\/INFOCOM.2019.8737416"},{"key":"10797_CR212","unstructured":"Wang B, Cao X, Gong NZ et\u00a0al (2020a) On certifying robustness against backdoor attacks via randomized smoothing. arXiv preprint http:\/\/arxiv.org\/abs\/2002.11750"},{"key":"10797_CR213","unstructured":"Wang H, Sreenivasan K, Rajput S et\u00a0al (2020b) Attack of the tails: yes, you really can backdoor federated learning. arXiv preprint http:\/\/arxiv.org\/abs\/2007.05084"},{"key":"10797_CR214","unstructured":"Wang J, Liu Q, Liang H et\u00a0al (2020c) Tackling the objective inconsistency problem in heterogeneous federated optimization. arXiv preprint http:\/\/arxiv.org\/abs\/2007.07481"},{"key":"10797_CR215","unstructured":"Weber M, Xu X, Karla\u0161 B et\u00a0al (2020) RAB: provable robustness against backdoor attacks. arXiv preprint http:\/\/arxiv.org\/abs\/2003.08904"},{"key":"10797_CR216","doi-asserted-by":"publisher","DOI":"10.1109\/TIFS.2020.2988575","author":"K Wei","year":"2020","unstructured":"Wei K, Li J, Ding M et al (2020) Federated learning with differential privacy: algorithms and performance analysis. IEEE Trans Inf Forensics Secur. https:\/\/doi.org\/10.1109\/TIFS.2020.2988575","journal-title":"IEEE Trans Inf Forensics Secur"},{"key":"10797_CR217","doi-asserted-by":"publisher","DOI":"10.1109\/TMC.2021.3056991","author":"K Wei","year":"2021","unstructured":"Wei K, Li J, Ding M et al (2021a) User-level privacy-preserving federated learning: analysis and performance optimization. IEEE Trans Mob Comput. 
https:\/\/doi.org\/10.1109\/TMC.2021.3056991","journal-title":"IEEE Trans Mob Comput"},{"key":"10797_CR218","doi-asserted-by":"publisher","unstructured":"Wei W, Liu L, Wut Y et\u00a0al (2021b) Gradient-leakage resilient federated learning. In: International conference on distributed computing systems. pp 797\u2013807. https:\/\/doi.org\/10.1109\/ICDCS51616.2021.00081","DOI":"10.1109\/ICDCS51616.2021.00081"},{"key":"10797_CR219","unstructured":"Wu Y, Schuster M, Chen Z et\u00a0al (2016) Google\u2019s neural machine translation system: bridging the gap between human and machine translation. arXiv preprint http:\/\/arxiv.org\/abs\/1609.08144"},{"key":"10797_CR220","unstructured":"Wu C, Yang X, Zhu S et\u00a0al (2020) Mitigating backdoor attacks in federated learning. arXiv preprint http:\/\/arxiv.org\/abs\/2011.01767"},{"key":"10797_CR221","doi-asserted-by":"publisher","unstructured":"Wu Y, Chen H, Wang X et\u00a0al (2021) Tolerating adversarial attacks and Byzantine faults in distributed machine learning. In: International conference on big data. pp 3380\u20133389. https:\/\/doi.org\/10.1109\/BigData52589.2021.9671583","DOI":"10.1109\/BigData52589.2021.9671583"},{"key":"10797_CR222","unstructured":"Xie C, Koyejo O, Gupta I (2018) Generalized Byzantine-tolerant SGD. arXiv preprint http:\/\/arxiv.org\/abs\/1802.10116"},{"key":"10797_CR223","unstructured":"Xie C, Huang K, Chen PY et\u00a0al (2019a) DBA: distributed backdoor attacks against federated learning. In: International conference on learning representations. https:\/\/research.ibm.com\/publications\/dba-distributed-backdoor-attacks-against-federated-learning"},{"key":"10797_CR224","unstructured":"Xie C, Koyejo S, Gupta I (2019b) Zeno: distributed stochastic gradient descent with suspicion-based fault-tolerance. In: International conference on machine learning. pp 6893\u20136901. 
http:\/\/proceedings.mlr.press\/v97\/xie19b.html"},{"key":"10797_CR225","unstructured":"Xie C, Koyejo S, Gupta I (2020) Zeno++: robust fully asynchronous SGD. In: International conference on machine learning. pp 10495\u201310503. http:\/\/proceedings.mlr.press\/v119\/xie20c.html"},{"key":"10797_CR226","unstructured":"Xie C, Chen M, Chen PY et\u00a0al (2021) CRFL: certifiably robust federated learning against backdoor attacks. arXiv preprint http:\/\/arxiv.org\/abs\/2106.08283"},{"key":"10797_CR227","doi-asserted-by":"publisher","DOI":"10.1109\/TII.2021.3073925","author":"Z Xiong","year":"2021","unstructured":"Xiong Z, Cai Z, Takabi D et al (2021) Privacy threat and defense for federated learning with non-IID data in AIoT. IEEE Trans Ind Inf. https:\/\/doi.org\/10.1109\/TII.2021.3073925","journal-title":"IEEE Trans Ind Inf"},{"key":"10797_CR228","doi-asserted-by":"publisher","DOI":"10.1109\/TDSC.2020.3005909","author":"G Xu","year":"2020","unstructured":"Xu G, Li H, Zhang Y et al (2020) Privacy-preserving federated deep learning with irregular users. IEEE Trans Depend Secur Comput. https:\/\/doi.org\/10.1109\/TDSC.2020.3005909","journal-title":"IEEE Trans Depend Secur Comput"},{"key":"10797_CR229","unstructured":"Yang YR, Li WJ (2021) BASGD: buffered asynchronous SGD for Byzantine learning. In: International conference on machine learning. pp 11751\u201311761. http:\/\/proceedings.mlr.press\/v139\/yang21e.html"},{"issue":"3","key":"10797_CR230","doi-asserted-by":"publisher","first-page":"146","DOI":"10.1109\/MSP.2020.2973345","volume":"37","author":"Z Yang","year":"2020","unstructured":"Yang Z, Gang A, Bajwa WU (2020) Adversary-resilient distributed and decentralized statistical inference and machine learning: an overview of recent advances under the Byzantine threat model. IEEE Signal Process Mag 37(3):146\u2013159. 
https:\/\/doi.org\/10.1109\/MSP.2020.2973345","journal-title":"IEEE Signal Process Mag"},{"key":"10797_CR231","unstructured":"Yin D, Chen Y, Kannan R et\u00a0al (2018) Byzantine-robust distributed learning: towards optimal statistical rates. In: International conference on machine learning. pp 5650\u20135659. http:\/\/proceedings.mlr.press\/v54\/mcmahan17a.html"},{"key":"10797_CR232","unstructured":"Yin H, Mallya A, Vahdat A et\u00a0al (2021) See through gradients: image batch recovery via GradInversion. In: IEEE\/CVF conference on computer vision and pattern recognition. pp 16337\u201316346. https:\/\/proceedings.neurips.cc\/paper\/2021\/hash\/fa84632d742f2729dc32ce8cb5d49733-Abstract.html"},{"key":"10797_CR233","doi-asserted-by":"publisher","unstructured":"Yin M, Li S, Song C et\u00a0al (2022) ADC: adversarial attacks against object detection that evade context consistency checks. In: Proceedings of the IEEE\/CVF winter conference on applications of computer vision. pp 3278\u20133287. https:\/\/doi.org\/10.1109\/WACV51458.2022.00289","DOI":"10.1109\/WACV51458.2022.00289"},{"key":"10797_CR234","doi-asserted-by":"publisher","unstructured":"Yu L, Liu L, Pu C et\u00a0al (2019a) Differentially private model publishing for deep learning. In: IEEE symposium on security and privacy. pp 332\u2013349. https:\/\/doi.org\/10.1109\/SP.2019.00019","DOI":"10.1109\/SP.2019.00019"},{"key":"10797_CR235","unstructured":"Yu Y, Wu J, Huang L (2019b) Double quantization for communication-efficient distributed optimization. In: Advances in neural information processing systems, vol 32. https:\/\/proceedings.neurips.cc\/paper\/2019\/hash\/ea4eb49329550caaa1d2044105223721-Abstract.html"},{"key":"10797_CR236","doi-asserted-by":"publisher","DOI":"10.1109\/JIOT.2021.3089713","author":"X Yuan","year":"2021","unstructured":"Yuan X, Ma X, Zhang L et al (2021) Beyond class-level privacy leakage: breaking record-level privacy in federated learning. IEEE Internet Things J. 
https:\/\/doi.org\/10.1109\/JIOT.2021.3089713","journal-title":"IEEE Internet Things J"},{"key":"10797_CR237","unstructured":"Zelenkova R, Swallow J, Chamikara M et\u00a0al (2022) Resurrecting trust in facial recognition: mitigating backdoor attacks in face recognition to prevent potential privacy breaches. arXiv preprint http:\/\/arxiv.org\/abs\/2202.10320"},{"key":"10797_CR238","unstructured":"Zhang H, Cisse M, Dauphin YN et\u00a0al (2017) mixup: beyond empirical risk minimization. arXiv preprint http:\/\/arxiv.org\/abs\/1710.09412"},{"key":"10797_CR239","doi-asserted-by":"publisher","unstructured":"Zhang D, Chen X, Wang D et\u00a0al (2018a) A survey on collaborative deep learning and privacy-preserving. In: IEEE international conference on data science in cyberspace. pp 652\u2013658. https:\/\/doi.org\/10.1109\/DSC.2018.00104","DOI":"10.1109\/DSC.2018.00104"},{"key":"10797_CR240","unstructured":"Zhang X, Khalili MM, Liu M (2018b) Improving the privacy and accuracy of ADMM-based distributed algorithms. In: International conference on machine learning. http:\/\/proceedings.mlr.press\/v80\/zhang18f.html"},{"key":"10797_CR241","unstructured":"Zhang D, Zhang T, Lu Y et\u00a0al (2019a) You only propagate once: accelerating adversarial training via maximal principle. In: Advances in neural information processing systems, vol 32. https:\/\/proceedings.neurips.cc\/paper\/2019\/hash\/812b4ba287f5ee0bc9d43bbf5bbe87fb-Abstract.html"},{"key":"10797_CR242","unstructured":"Zhang H, Yu Y, Jiao J et\u00a0al (2019b) Theoretically principled trade-off between robustness and accuracy. In: International conference on machine learning. pp 7472\u20137482. http:\/\/proceedings.mlr.press\/v97\/zhang19p.html"},{"key":"10797_CR243","unstructured":"Zhang C, Li S, Xia J et\u00a0al (2020a) BatchCrypt: efficient homomorphic encryption for cross-silo federated learning. In: USENIX annual technical conference. pp 493\u2013506. 
https:\/\/www.usenix.org\/conference\/atc20\/presentation\/zhang-chengliang"},{"key":"10797_CR244","doi-asserted-by":"publisher","unstructured":"Zhang J, Zhang J, Chen J et\u00a0al (2020b) GAN enhanced membership inference: a passive local attack in federated learning. In: IEEE international conference on communications. https:\/\/doi.org\/10.1109\/ICC40277.2020.9148790","DOI":"10.1109\/ICC40277.2020.9148790"},{"key":"10797_CR245","doi-asserted-by":"crossref","unstructured":"Zhang Q, Xin C, Wu H (2021a) GALA: greedy computation for linear algebra in privacy-preserved neural networks. In: Network and distributed system security symposium. https:\/\/www.ndss-symposium.org\/ndss-paper\/gala-greedy-computation-for-linear-algebra-in-privacy-preserved-neural-networks\/","DOI":"10.14722\/ndss.2021.24351"},{"key":"10797_CR246","unstructured":"Zhang W, Tople S, Ohrimenko O (2021b) Leakage of dataset properties in multi-party machine learning. In: USENIX security symposium. pp 2687\u20132704. https:\/\/www.usenix.org\/conference\/usenixsecurity21\/presentation\/zhang-wanrong"},{"key":"10797_CR247","unstructured":"Zhang G, Lu S, Zhang Y et\u00a0al (2022) Distributed adversarial training to robustify deep neural networks at scale. In: Uncertainty in artificial intelligence. pp 2353\u20132363. https:\/\/proceedings.mlr.press\/v180\/zhang22a.html"},{"key":"10797_CR248","unstructured":"Zhao Y, Li M, Lai L et\u00a0al (2018) Federated learning with non-IID data. arXiv preprint http:\/\/arxiv.org\/abs\/1806.00582"},{"key":"10797_CR249","unstructured":"Zhao B, Mopuri KR, Bilen H (2020a) IDLG: improved deep leakage from gradients. arXiv preprint http:\/\/arxiv.org\/abs\/2001.02610"},{"key":"10797_CR250","doi-asserted-by":"publisher","DOI":"10.1109\/TDSC.2020.2986205","author":"L Zhao","year":"2020","unstructured":"Zhao L, Hu S, Wang Q et al (2020b) Shielding collaborative learning: mitigating poisoning attacks through client-side detection. IEEE Trans Depend Secur Comput. 
https:\/\/doi.org\/10.1109\/TDSC.2020.2986205","journal-title":"IEEE Trans Depend Secur Comput"},{"key":"10797_CR251","doi-asserted-by":"publisher","DOI":"10.1002\/int.22241","author":"Q Zhao","year":"2020","unstructured":"Zhao Q, Zhao C, Cui S et al (2020c) PrivateDL: privacy-preserving collaborative deep learning against leakage from gradient sharing. Int J Intell Syst. https:\/\/doi.org\/10.1002\/int.22241","journal-title":"Int J Intell Syst"},{"key":"10797_CR252","doi-asserted-by":"publisher","unstructured":"Zhao S, Ma X, Zheng X et\u00a0al (2020d) Clean-label backdoor attacks on video recognition models. In: IEEE\/CVF conference on computer vision and pattern recognition. pp 14443\u201314452. https:\/\/doi.org\/10.1109\/CVPR42600.2020.01445","DOI":"10.1109\/CVPR42600.2020.01445"},{"key":"10797_CR253","doi-asserted-by":"publisher","DOI":"10.3390\/a15080283","author":"W Zhao","year":"2022","unstructured":"Zhao W, Alwidian S, Mahmoud QH (2022a) Adversarial training methods for deep learning: a systematic review. Algorithms. https:\/\/doi.org\/10.3390\/a15080283","journal-title":"Algorithms"},{"key":"10797_CR254","doi-asserted-by":"publisher","unstructured":"Zhao Z, Chen X, Xuan Y et\u00a0al (2022b) Defeat: deep hidden feature backdoor attacks by imperceptible perturbation and latent representation constraints. In: IEEE\/CVF conference on computer vision and pattern recognition. pp 15213\u201315222. https:\/\/doi.org\/10.1109\/CVPR52688.2022.01478","DOI":"10.1109\/CVPR52688.2022.01478"},{"key":"10797_CR255","doi-asserted-by":"publisher","unstructured":"Zheng W, Yan L, Gou C et\u00a0al (2020) Federated meta-learning for fraudulent credit card detection. In: IJCAI. pp 4654\u20134660. https:\/\/doi.org\/10.24963\/ijcai.2020\/642","DOI":"10.24963\/ijcai.2020\/642"},{"key":"10797_CR256","unstructured":"Zhou Y, Wu J, He J (2020) Adversarially robust federated learning for neural networks. 
arXiv preprint https:\/\/openreview.net\/forum?id=5xaInvrGWp"},{"key":"10797_CR257","unstructured":"Zhu J, Blaschko M (2020) R-GAP: recursive gradient attack on privacy. arXiv preprint http:\/\/arxiv.org\/abs\/2010.07733"},{"key":"10797_CR258","doi-asserted-by":"publisher","unstructured":"Zhu J, Kaplan R, Johnson J et\u00a0al (2018) Hidden: hiding data with deep networks. In: Proceedings of the European conference on computer vision. pp 657\u2013672. https:\/\/doi.org\/10.1007\/978-3-030-01267-0_40","DOI":"10.1007\/978-3-030-01267-0_40"},{"key":"10797_CR259","unstructured":"Zhu C, Huang WR, Shafahi A et\u00a0al (2019a) Transferable clean-label poisoning attacks on deep neural nets. arXiv preprint http:\/\/arxiv.org\/abs\/1905.05897"},{"key":"10797_CR260","doi-asserted-by":"crossref","unstructured":"Zhu L, Liu Z, Han S (2019b) Deep leakage from gradients. In: Advances in neural information processing systems. pp 14747\u201314756. https:\/\/proceedings.neurips.cc\/paper\/2019\/hash\/60a6c4002cc7b29142def8871531281a-Abstract.html","DOI":"10.1109\/ACCESS.2019.2892118"},{"key":"10797_CR261","unstructured":"Zhu J, Yao J, Liu T et\u00a0al (2021) $$\\alpha$$-weighted federated adversarial training. 
arXiv preprint https:\/\/openreview.net\/pdf?id=vxlAHR9AyZ6"}],"container-title":["Artificial Intelligence Review"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10462-024-10797-0.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10462-024-10797-0\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10462-024-10797-0.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,7,15]],"date-time":"2024-07-15T10:22:29Z","timestamp":1721038949000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10462-024-10797-0"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,6,20]]},"references-count":261,"journal-issue":{"issue":"7","published-online":{"date-parts":[[2024,7]]}},"alternative-id":["10797"],"URL":"https:\/\/doi.org\/10.1007\/s10462-024-10797-0","relation":{},"ISSN":["1573-7462"],"issn-type":[{"value":"1573-7462","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,6,20]]},"assertion":[{"value":"6 May 2024","order":1,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"20 June 2024","order":2,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}],"article-number":"180"}}