{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,3]],"date-time":"2025-12-03T18:02:56Z","timestamp":1764784976920,"version":"3.41.0"},"reference-count":64,"publisher":"Association for Computing Machinery (ACM)","issue":"5","license":[{"start":{"date-parts":[[2022,9,23]],"date-time":"2022-09-23T00:00:00Z","timestamp":1663891200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/100000001","name":"National Science Foundation NSF","doi-asserted-by":"crossref","award":["CNS-1816887 NSF, CCF-1763747 NSF"],"award-info":[{"award-number":["CNS-1816887 NSF, CCF-1763747 NSF"]}],"id":[{"id":"10.13039\/100000001","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Intell. Syst. Technol."],"published-print":{"date-parts":[[2022,10,31]]},"abstract":"<jats:p>\n            Federated learning allows multiple users to collaboratively train a shared classification model while preserving data privacy. This approach, where model updates are aggregated by a central server, was shown to be vulnerable to\n            <jats:italic>poisoning backdoor attacks<\/jats:italic>\n            : a malicious user can alter the shared model to arbitrarily classify specific inputs from a given class. In this article, we analyze the effects of backdoor attacks on federated\n            <jats:italic>meta-learning<\/jats:italic>\n            , where users train a model that can be adapted to different sets of output classes using only a few examples. While the ability to adapt could, in principle, make federated learning frameworks more robust to backdoor attacks (when new training examples are benign), we find that even one-shot\u00a0attacks can be very successful and persist after additional training. To address these vulnerabilities, we propose a defense mechanism inspired by\n            <jats:italic>matching networks<\/jats:italic>\n            , where the class of an input is predicted from the similarity of its features with a\n            <jats:italic>support set<\/jats:italic>\n            of labeled examples. By removing the decision logic from the model shared with the federation, the success and persistence of backdoor attacks are greatly reduced.\n          <\/jats:p>","DOI":"10.1145\/3523062","type":"journal-article","created":{"date-parts":[[2022,9,23]],"date-time":"2022-09-23T11:56:47Z","timestamp":1663934207000},"page":"1-25","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":10,"title":["Defending against Poisoning Backdoor Attacks on Federated Meta-learning"],"prefix":"10.1145","volume":"13","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-8904-4760","authenticated-orcid":false,"given":"Chien-Lun","family":"Chen","sequence":"first","affiliation":[{"name":"University of Southern California, Los Angeles, California, USA"}]},{"given":"Sara","family":"Babakniya","sequence":"additional","affiliation":[{"name":"University of Southern California, Los Angeles, California, USA"}]},{"given":"Marco","family":"Paolieri","sequence":"additional","affiliation":[{"name":"University of Southern California, Los Angeles, California, USA"}]},{"given":"Leana","family":"Golubchik","sequence":"additional","affiliation":[{"name":"University of Southern California, Los Angeles, California, USA"}]}],"member":"320","published-online":{"date-parts":[[2022,9,23]]},"reference":[{"key":"e_1_3_2_2_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICDCS51616.2021.00086"},{"key":"e_1_3_2_3_2","first-page":"2938","volume-title":"The 23rd International Conference on Artificial Intelligence and Statistics (AISTATS\u201920)","volume":"108","author":"Bagdasaryan Eugene","year":"2020","unstructured":"Eugene Bagdasaryan, Andreas Veit, Yiqing Hua, Deborah Estrin, and Vitaly Shmatikov. 2020. How to backdoor federated learning. In The 23rd International Conference on Artificial Intelligence and Statistics (AISTATS\u201920), Vol. 108. PMLR, 2938\u20132948."},{"key":"e_1_3_2_4_2","volume-title":"3rd International Conference on Learning Representations (ICLR\u201915), Conference Track Proceedings","author":"Bahdanau Dzmitry","year":"2015","unstructured":"Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations (ICLR\u201915), Conference Track Proceedings."},{"key":"e_1_3_2_5_2","first-page":"8632","volume-title":"Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019 (NeurIPS\u201919)","author":"Baruch Gilad","year":"2019","unstructured":"Gilad Baruch, Moran Baruch, and Yoav Goldberg. 2019. A little is enough: Circumventing defenses for distributed learning. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019 (NeurIPS\u201919). 8632\u20138642."},{"key":"e_1_3_2_6_2","first-page":"634","volume-title":"Proceedings of the 36th International Conference on Machine Learning (ICML\u201919)","volume":"97","author":"Bhagoji Arjun Nitin","year":"2019","unstructured":"Arjun Nitin Bhagoji, Supriyo Chakraborty, Prateek Mittal, and Seraphin B. Calo. 2019. Analyzing federated learning through an adversarial lens. In Proceedings of the 36th International Conference on Machine Learning (ICML\u201919), Vol. 97. PMLR, 634\u2013643."},{"key":"e_1_3_2_7_2","volume-title":"Proceedings of the 29th International Conference on Machine Learning (ICML\u201912)","author":"Biggio Battista","year":"2012","unstructured":"Battista Biggio, Blaine Nelson, and Pavel Laskov. 2012. Poisoning attacks against support vector machines. In Proceedings of the 29th International Conference on Machine Learning (ICML\u201912). Omnipress."},{"key":"e_1_3_2_8_2","first-page":"119","volume-title":"Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017","author":"Blanchard Peva","year":"2017","unstructured":"Peva Blanchard, El Mahdi El Mhamdi, Rachid Guerraoui, and Julien Stainer. 2017. Machine learning with adversaries: Byzantine tolerant gradient descent. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017. 119\u2013129."},{"key":"e_1_3_2_9_2","article-title":"Backdoor attacks on federated meta-learning","volume":"2006","author":"Chen Chien-Lun","year":"2020","unstructured":"Chien-Lun Chen, Leana Golubchik, and Marco Paolieri. 2020. Backdoor attacks on federated meta-learning. CoRR abs\/2006.07026 (2020).","journal-title":"CoRR"},{"key":"e_1_3_2_10_2","doi-asserted-by":"crossref","first-page":"41","DOI":"10.1109\/SPW.2016.27","volume-title":"2016 IEEE Security and Privacy Workshops (SP Workshops\u201916)","author":"Chen Chien-Lun","year":"2016","unstructured":"Chien-Lun Chen, Ranjan Pal, and Leana Golubchik. 2016. Oblivious mechanisms in differential privacy: Experiments, conjectures, and open questions. In 2016 IEEE Security and Privacy Workshops (SP Workshops\u201916). IEEE Computer Society, 41\u201348."},{"key":"e_1_3_2_11_2","article-title":"Federated meta-learning for recommendation","volume":"1802","author":"Chen Fei","year":"2018","unstructured":"Fei Chen, Zhenhua Dong, Zhenguo Li, and Xiuqiang He. 2018. Federated meta-learning for recommendation. CoRR abs\/1802.07876 (2018).","journal-title":"CoRR"},{"key":"e_1_3_2_12_2","article-title":"Targeted backdoor attacks on deep learning systems using data poisoning","volume":"1712","author":"Chen Xinyun","year":"2017","unstructured":"Xinyun Chen, Chang Liu, Bo Li, Kimberly Lu, and Dawn Song. 2017. Targeted backdoor attacks on deep learning systems using data poisoning. CoRR abs\/1712.05526 (2017).","journal-title":"CoRR"},{"key":"e_1_3_2_13_2","doi-asserted-by":"publisher","DOI":"10.1145\/3219617.3219655"},{"key":"e_1_3_2_14_2","unstructured":"Fran\u00e7ois Chollet et\u00a0al. 2015. Keras. https:\/\/keras.io\/getting_started\/faq\/#how-should-i-cite-keras."},{"key":"e_1_3_2_15_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-01424-7_10"},{"key":"e_1_3_2_16_2","doi-asserted-by":"publisher","DOI":"10.1007\/11681878_14"},{"key":"e_1_3_2_17_2","first-page":"5998","volume-title":"Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017","author":"al. Ashish Vaswani et","year":"2017","unstructured":"Ashish Vaswani et al.2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017. 5998\u20136008."},{"key":"e_1_3_2_18_2","volume-title":"Workshop on Artificial Intelligence Safety 2019","author":"al. Bryant Chen et","year":"2019","unstructured":"Bryant Chen et al.2019. Detecting backdoor attacks on deep neural networks by activation clustering. In Workshop on Artificial Intelligence Safety 2019, Vol. 2301. CEUR-WS.org."},{"key":"e_1_3_2_19_2","volume-title":"Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020 (NeurIPS\u201920)","author":"al. Hongyi Wang et","year":"2020","unstructured":"Hongyi Wang et al.2020. Attack of the tails: Yes, you really can backdoor federated learning. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020 (NeurIPS\u201920)."},{"key":"e_1_3_2_20_2","first-page":"1175","volume-title":"Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security (CCS\u201917)","author":"al. Kallista A. Bonawitz et","year":"2017","unstructured":"Kallista A. Bonawitz et al.2017. Practical secure aggregation for privacy-preserving machine learning. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security (CCS\u201917). ACM, 1175\u20131191."},{"key":"e_1_3_2_21_2","doi-asserted-by":"crossref","first-page":"27","DOI":"10.1145\/3128572.3140451","volume-title":"Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security (AISec@CCS\u201917)","author":"al. Luis Mu\u00f1oz-Gonz\u00e1lez et","year":"2017","unstructured":"Luis Mu\u00f1oz-Gonz\u00e1lez et al.2017. Towards poisoning of deep learning algorithms with back-gradient optimization. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security (AISec@CCS\u201917). 27\u201338."},{"key":"e_1_3_2_22_2","article-title":"TensorFlow: Large-scale machine learning on heterogeneous systems","volume":"1603","author":"al. Mart\u00edn Abadi et","year":"2016","unstructured":"Mart\u00edn Abadi et al.2016. TensorFlow: Large-scale machine learning on heterogeneous systems. CoRR abs\/1603.04467 (2016). https:\/\/www.tensorflow.org\/.","journal-title":"CoRR"},{"key":"e_1_3_2_23_2","article-title":"Dataset security for machine learning: Data poisoning, backdoor attacks, and defenses","volume":"2012","author":"al. Micah Goldblum et","year":"2020","unstructured":"Micah Goldblum et al.2020. Dataset security for machine learning: Data poisoning, backdoor attacks, and defenses. CoRR abs\/2012.10544 (2020).","journal-title":"CoRR"},{"key":"e_1_3_2_24_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.future.2020.10.007"},{"key":"e_1_3_2_25_2","volume-title":"7th International Conference on Learning Representations (ICLR\u201919)","author":"al. Wei-Yu Chen et","year":"2019","unstructured":"Wei-Yu Chen et al.2019. A closer look at few-shot classification. In 7th International Conference on Learning Representations (ICLR\u201919). OpenReview.net."},{"key":"e_1_3_2_26_2","first-page":"113","volume-title":"Proceedings of the 35th Annual Computer Security Applications Conference (ACSAC\u201919)","author":"al. Yansong Gao et","year":"2019","unstructured":"Yansong Gao et al.2019. STRIP: A defence against trojan attacks on deep neural networks. In Proceedings of the 35th Annual Computer Security Applications Conference (ACSAC\u201919). ACM, 113\u2013125."},{"key":"e_1_3_2_27_2","volume-title":"25th Annual Network and Distributed System Security Symposium (NDSS\u201918)","author":"al. Yingqi Liu et","year":"2018","unstructured":"Yingqi Liu et al.2018. Trojaning attack on neural networks. In 25th Annual Network and Distributed System Security Symposium (NDSS\u201918). Internet Society."},{"key":"e_1_3_2_28_2","first-page":"1605","volume-title":"29th USENIX Security Symposium (USENIX Security\u201920)","author":"Fang Minghong","year":"2020","unstructured":"Minghong Fang, Xiaoyu Cao, Jinyuan Jia, and Neil Zhenqiang Gong. 2020. Local model poisoning attacks to Byzantine-robust federated learning. In 29th USENIX Security Symposium (USENIX Security\u201920). USENIX Association, 1605\u20131622."},{"key":"e_1_3_2_29_2","first-page":"1126","volume-title":"Proceedings of the 34th International Conference on Machine Learning (ICML\u201917)","volume":"70","author":"Finn Chelsea","year":"2017","unstructured":"Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning (ICML\u201917), Vol. 70. PMLR, 1126\u20131135."},{"key":"e_1_3_2_30_2","article-title":"Mitigating Sybils in federated learning poisoning","volume":"1808","author":"Fung Clement","year":"2018","unstructured":"Clement Fung, Chris J. M. Yoon, and Ivan Beschastnikh. 2018. Mitigating Sybils in federated learning poisoning. CoRR abs\/1808.04866 (2018).","journal-title":"CoRR"},{"key":"e_1_3_2_31_2","volume-title":"Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020 (NeurIPS\u201920)","author":"Geiping Jonas","year":"2020","unstructured":"Jonas Geiping, Hartmut Bauermeister, Hannah Dr\u00f6ge, and Michael Moeller. 2020. Inverting gradients - How easy is it to break privacy in federated learning?. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020 (NeurIPS\u201920)."},{"key":"e_1_3_2_32_2","article-title":"BadNets: Identifying vulnerabilities in the machine learning model supply chain","volume":"1708","author":"Gu Tianyu","year":"2017","unstructured":"Tianyu Gu, Brendan Dolan-Gavitt, and Siddharth Garg. 2017. BadNets: Identifying vulnerabilities in the machine learning model supply chain. CoRR abs\/1708.06733 (2017).","journal-title":"CoRR"},{"key":"e_1_3_2_33_2","article-title":"On the effectiveness of mitigating data poisoning attacks with gradient shaping","volume":"2002","author":"Hong Sanghyun","year":"2020","unstructured":"Sanghyun Hong, Varun Chandrasekaran, Yigitcan Kaya, Tudor Dumitras, and Nicolas Papernot. 2020. On the effectiveness of mitigating data poisoning attacks with gradient shaping. CoRR abs\/2002.11497 (2020).","journal-title":"CoRR"},{"key":"e_1_3_2_34_2","article-title":"Meta-learning in neural networks: A survey","volume":"2004","author":"Hospedales Timothy M.","year":"2020","unstructured":"Timothy M. Hospedales, Antreas Antoniou, Paul Micaelli, and Amos J. Storkey. 2020. Meta-learning in neural networks: A survey. CoRR abs\/2004.05439 (2020).","journal-title":"CoRR"},{"key":"e_1_3_2_35_2","first-page":"1885","volume-title":"Proceedings of the 34th International Conference on Machine Learning (ICML\u201917)","volume":"70","author":"Koh Pang Wei","year":"2017","unstructured":"Pang Wei Koh and Percy Liang. 2017. Understanding black-box predictions via influence functions. In Proceedings of the 34th International Conference on Machine Learning (ICML\u201917), Vol. 70. PMLR, 1885\u20131894."},{"key":"e_1_3_2_36_2","doi-asserted-by":"publisher","DOI":"10.1126\/science.aab3050"},{"key":"e_1_3_2_37_2","volume-title":"8th International Conference on Learning Representations (ICLR\u201920)","author":"Li Jeffrey","year":"2020","unstructured":"Jeffrey Li, Mikhail Khodak, Sebastian Caldas, and Ameet Talwalkar. 2020. Differentially private meta-learning. In 8th International Conference on Learning Representations (ICLR\u201920). OpenReview.net."},{"key":"e_1_3_2_38_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v33i01.33011544"},{"key":"e_1_3_2_39_2","article-title":"Learning to detect malicious clients for robust federated learning","volume":"2002","author":"Li Suyi","year":"2020","unstructured":"Suyi Li, Yong Cheng, Wei Wang, Yang Liu, and Tianjian Chen. 2020. Learning to detect malicious clients for robust federated learning. CoRR abs\/2002.00211 (2020).","journal-title":"CoRR"},{"key":"e_1_3_2_40_2","doi-asserted-by":"publisher","DOI":"10.1109\/MSP.2020.2975749"},{"key":"e_1_3_2_41_2","article-title":"Backdoor learning: A survey","volume":"2007","author":"Li Yiming","year":"2020","unstructured":"Yiming Li, Baoyuan Wu, Yong Jiang, Zhifeng Li, and Shu-Tao Xia. 2020. Backdoor learning: A survey. CoRR abs\/2007.08745 (2020).","journal-title":"CoRR"},{"key":"e_1_3_2_42_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2015.425"},{"key":"e_1_3_2_43_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/D15-1166"},{"key":"e_1_3_2_44_2","first-page":"1273","volume-title":"Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS\u201917)","volume":"54","author":"McMahan Brendan","year":"2017","unstructured":"Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Ag\u00fcera y Arcas. 2017. Communication-efficient learning of deep networks from decentralized data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS\u201917), Vol. 54. PMLR, 1273\u20131282."},{"key":"e_1_3_2_45_2","first-page":"3518","volume-title":"Proceedings of the 35th International Conference on Machine Learning (ICML\u201918)","volume":"80","author":"Mhamdi El Mahdi El","year":"2018","unstructured":"El Mahdi El Mhamdi, Rachid Guerraoui, and S\u00e9bastien Rouault. 2018. The hidden vulnerability of distributed learning in byzantium. In Proceedings of the 35th International Conference on Machine Learning (ICML\u201918), Vol. 80. PMLR, 3518\u20133527."},{"key":"e_1_3_2_46_2","article-title":"On first-order meta-learning algorithms","volume":"1803","author":"Nichol Alex","year":"2018","unstructured":"Alex Nichol, Joshua Achiam, and John Schulman. 2018. On first-order meta-learning algorithms. CoRR abs\/1803.02999 (2018).","journal-title":"CoRR"},{"key":"e_1_3_2_47_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v35i10.17118"},{"key":"e_1_3_2_48_2","article-title":"Detection of adversarial training examples in poisoning attacks through anomaly detection","volume":"1802","author":"Paudice Andrea","year":"2018","unstructured":"Andrea Paudice, Luis Mu\u00f1oz-Gonz\u00e1lez, Andr\u00e1s Gy\u00f6rgy, and Emil C. Lupu. 2018. Detection of adversarial training examples in poisoning attacks through anomaly detection. CoRR abs\/1802.03041 (2018).","journal-title":"CoRR"},{"key":"e_1_3_2_49_2","article-title":"Byzantine-robust variance-reduced federated learning over distributed non-i.i.d. data","volume":"2009","author":"Peng Jie","year":"2020","unstructured":"Jie Peng, Zhaoxian Wu, and Qing Ling. 2020. Byzantine-robust variance-reduced federated learning over distributed non-i.i.d. data. CoRR abs\/2009.08161 (2020).","journal-title":"CoRR"},{"key":"e_1_3_2_50_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIFS.2017.2787987"},{"key":"e_1_3_2_51_2","first-page":"47:1\u201347:20","volume-title":"9th Innovations in Theoretical Computer Science Conference (ITCS\u201918)","volume":"94","author":"Qiao Mingda","year":"2018","unstructured":"Mingda Qiao and Gregory Valiant. 2018. Learning discrete distributions from untrusted batches. In 9th Innovations in Theoretical Computer Science Conference (ITCS\u201918), Vol. 94. Schloss Dagstuhl - Leibniz-Zentrum f\u00fcr Informatik, 47:1\u201347:20."},{"key":"e_1_3_2_52_2","first-page":"1842","volume-title":"Proceedings of the 33nd International Conference on Machine Learning (ICML\u201916)","volume":"48","author":"Santoro Adam","year":"2016","unstructured":"Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy P. Lillicrap. 2016. Meta-learning with memory-augmented neural networks. In Proceedings of the 33nd International Conference on Machine Learning (ICML\u201916), Vol. 48. JMLR.org, 1842\u20131850."},{"key":"e_1_3_2_53_2","doi-asserted-by":"publisher","DOI":"10.1145\/2991079.2991125"},{"key":"e_1_3_2_54_2","first-page":"4077","volume-title":"Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017","author":"Snell Jake","year":"2017","unstructured":"Jake Snell, Kevin Swersky, and Richard S. Zemel. 2017. Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017. 4077\u20134087."},{"key":"e_1_3_2_55_2","first-page":"3517","volume-title":"Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017","author":"Steinhardt Jacob","year":"2017","unstructured":"Jacob Steinhardt, Pang Wei Koh, and Percy Liang. 2017. Certified defenses for data poisoning attacks. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017. 3517\u20133529."},{"key":"e_1_3_2_56_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00049"},{"key":"e_1_3_2_57_2","article-title":"Can you really backdoor federated learning?","volume":"1911","author":"Sun Ziteng","year":"2019","unstructured":"Ziteng Sun, Peter Kairouz, Ananda Theertha Suresh, and H. Brendan McMahan. 2019. Can you really backdoor federated learning? CoRR abs\/1911.07963 (2019).","journal-title":"CoRR"},{"key":"e_1_3_2_58_2","first-page":"3630","volume-title":"Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016","author":"Vinyals Oriol","year":"2016","unstructured":"Oriol Vinyals, Charles Blundell, Tim Lillicrap, Koray Kavukcuoglu, and Daan Wierstra. 2016. Matching networks for one shot learning. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016. 3630\u20133638."},{"key":"e_1_3_2_59_2","article-title":"Mitigating backdoor attacks in federated learning","volume":"2011","author":"Wu Chen","year":"2020","unstructured":"Chen Wu, Xian Yang, Sencun Zhu, and Prasenjit Mitra. 2020. Mitigating backdoor attacks in federated learning. CoRR abs\/2011.01767 (2020).","journal-title":"CoRR"},{"key":"e_1_3_2_60_2","doi-asserted-by":"publisher","DOI":"10.1109\/TSP.2020.3012952"},{"key":"e_1_3_2_61_2","article-title":"Generalized byzantine-tolerant SGD","volume":"1802","author":"Xie Cong","year":"2018","unstructured":"Cong Xie, Oluwasanmi Koyejo, and Indranil Gupta. 2018. Generalized byzantine-tolerant SGD. CoRR abs\/1802.10116 (2018).","journal-title":"CoRR"},{"key":"e_1_3_2_62_2","doi-asserted-by":"publisher","DOI":"10.1145\/3298981"},{"key":"e_1_3_2_63_2","first-page":"5636","volume-title":"Proceedings of the 35th International Conference on Machine Learning (ICML\u201918)","volume":"80","author":"Yin Dong","year":"2018","unstructured":"Dong Yin, Yudong Chen, Kannan Ramchandran, and Peter L. Bartlett. 2018. Byzantine-robust distributed learning: Towards optimal statistical rates. In Proceedings of the 35th International Conference on Machine Learning (ICML\u201918), Vol. 80. PMLR, 5636\u20135645."},{"key":"e_1_3_2_64_2","article-title":"iDLG: Improved deep leakage from gradients","volume":"2001","author":"Zhao Bo","year":"2020","unstructured":"Bo Zhao, Konda Reddy Mopuri, and Hakan Bilen. 2020. iDLG: Improved deep leakage from gradients. CoRR abs\/2001.02610 (2020).","journal-title":"CoRR"},{"key":"e_1_3_2_65_2","first-page":"14747","volume-title":"Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019 (NeurIPS\u201919)","author":"Zhu Ligeng","year":"2019","unstructured":"Ligeng Zhu, Zhijian Liu, and Song Han. 2019. Deep leakage from gradients. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019 (NeurIPS\u201919). 14747\u201314756."}],"container-title":["ACM Transactions on Intelligent Systems and Technology"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3523062","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3523062","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T19:30:17Z","timestamp":1750188617000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3523062"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,9,23]]},"references-count":64,"journal-issue":{"issue":"5","published-print":{"date-parts":[[2022,10,31]]}},"alternative-id":["10.1145\/3523062"],"URL":"https:\/\/doi.org\/10.1145\/3523062","relation":{},"ISSN":["2157-6904","2157-6912"],"issn-type":[{"type":"print","value":"2157-6904"},{"type":"electronic","value":"2157-6912"}],"subject":[],"published":{"date-parts":[[2022,9,23]]},"assertion":[{"value":"2021-04-30","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2022-02-28","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2022-09-23","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}