{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,5,10]],"date-time":"2026-05-10T06:09:09Z","timestamp":1778393349919,"version":"3.51.4"},"reference-count":230,"publisher":"Association for Computing Machinery (ACM)","issue":"2","license":[{"start":{"date-parts":[[2023,9,15]],"date-time":"2023-09-15T00:00:00Z","timestamp":1694736000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"Australian Research Council Discovery","award":["DP200100946 and DP230100246"],"award-info":[{"award-number":["DP200100946 and DP230100246"]}]},{"name":"NSF","award":["III-1763325, III-1909323, III-2106758, and SaTC-1930941"],"award-info":[{"award-number":["III-1763325, III-1909323, III-2106758, and SaTC-1930941"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Comput. Surv."],"published-print":{"date-parts":[[2024,2,29]]},"abstract":"<jats:p>Federated learning (FL) has been a hot topic in recent years. Ever since it was introduced, researchers have endeavored to devise FL systems that protect privacy or ensure fair results, with most research focusing on one or the other. As two crucial ethical notions, the interactions between privacy and fairness are comparatively less studied. However, since privacy and fairness compete, considering each in isolation will inevitably come at the cost of the other. To provide a broad view of these two critical topics, we presented a detailed literature review of privacy and fairness issues, highlighting unique challenges posed by FL and solutions in federated settings. 
We further systematically surveyed different interactions between privacy and fairness, trying to reveal how privacy and fairness could affect each other and point out new research directions in fair and private FL.<\/jats:p>","DOI":"10.1145\/3606017","type":"journal-article","created":{"date-parts":[[2023,6,26]],"date-time":"2023-06-26T12:06:29Z","timestamp":1687781189000},"page":"1-37","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":73,"title":["Privacy and Fairness in Federated Learning: On the Perspective of Tradeoff"],"prefix":"10.1145","volume":"56","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-4811-6742","authenticated-orcid":false,"given":"Huiqiang","family":"Chen","sequence":"first","affiliation":[{"name":"University of Technology Sydney, Australia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3411-7947","authenticated-orcid":false,"given":"Tianqing","family":"Zhu","sequence":"additional","affiliation":[{"name":"University of Technology Sydney, Australia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4696-641X","authenticated-orcid":false,"given":"Tao","family":"Zhang","sequence":"additional","affiliation":[{"name":"University of Technology Sydney, Australia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1680-2521","authenticated-orcid":false,"given":"Wanlei","family":"Zhou","sequence":"additional","affiliation":[{"name":"City University of Macau, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3491-5968","authenticated-orcid":false,"given":"Philip S.","family":"Yu","sequence":"additional","affiliation":[{"name":"University of Illinois at Chicago, US"}]}],"member":"320","published-online":{"date-parts":[[2023,9,15]]},"reference":[{"key":"e_1_3_1_2_2","article-title":"Mitigating bias in federated learning","author":"Abay Annie","year":"2020","unstructured":"Annie Abay, Yi Zhou, Nathalie Baracaldo, Shashank Rajamoni, Ebube Chuba, and Heiko Ludwig. 2020. Mitigating bias in federated learning. 
arXiv preprint arXiv:2012.02447 (2020).","journal-title":"arXiv preprint arXiv:2012.02447"},{"key":"e_1_3_1_3_2","first-page":"60","volume-title":"International Conference on Machine Learning","author":"Agarwal Alekh","year":"2018","unstructured":"Alekh Agarwal, Alina Beygelzimer, Miroslav Dud\u00edk, John Langford, and Hanna Wallach. 2018. A reductions approach to fair classification. In International Conference on Machine Learning. PMLR, 60\u201369."},{"key":"e_1_3_1_4_2","article-title":"Federated residual learning","author":"Agarwal Alekh","year":"2020","unstructured":"Alekh Agarwal, John Langford, and Chen-Yu Wei. 2020. Federated residual learning. arXiv preprint arXiv:2003.12880 (2020).","journal-title":"arXiv preprint arXiv:2003.12880"},{"key":"e_1_3_1_5_2","unstructured":"Moustafa Alzantot and Mani Srivastava. Differential privacy synthetic data generation using WGANs 2019. Retrieved from https:\/\/github.com\/nesl\/nist_differential_privacy_synthetic_data_challenge."},{"key":"e_1_3_1_6_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-981-10-5421-1_9"},{"key":"e_1_3_1_7_2","first-page":"214","volume-title":"International Conference on Machine Learning","author":"Arjovsky Martin","year":"2017","unstructured":"Martin Arjovsky, Soumith Chintala, and L\u00e9on Bottou. 2017. Wasserstein generative adversarial networks. In International Conference on Machine Learning. PMLR, 214\u2013223."},{"key":"e_1_3_1_8_2","doi-asserted-by":"publisher","unstructured":"Giuseppe Ateniese Giovanni Felici Luigi V. Mancini Angelo Spognardi Antonio Villani and Domenico Vitali. 2013. Hacking Smart Machines with Smarter Ones: How to Extract Meaningful Data from Machine Learning Classifiers. 
DOI:10.48550\/ARXIV.1306.4447","DOI":"10.48550\/ARXIV.1306.4447"},{"key":"e_1_3_1_9_2","first-page":"1770","volume-title":"International Conference on Artificial Intelligence and Statistics","author":"Awasthi Pranjal","year":"2020","unstructured":"Pranjal Awasthi, Matth\u00e4us Kleindessner, and Jamie Morgenstern. 2020. Equalized odds postprocessing under imperfect group information. In International Conference on Artificial Intelligence and Statistics. PMLR, 1770\u20131780."},{"key":"e_1_3_1_10_2","first-page":"15479","article-title":"Differential privacy has disparate impact on model accuracy","volume":"32","author":"Bagdasaryan Eugene","year":"2019","unstructured":"Eugene Bagdasaryan, Omid Poursaeed, and Vitaly Shmatikov. 2019. Differential privacy has disparate impact on model accuracy. Adv. Neural Inf. Process. Syst. 32 (2019), 15479\u201315488.","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"e_1_3_1_11_2","article-title":"Bayesian framework for gradient leakage","author":"Balunovi\u0107 Mislav","year":"2021","unstructured":"Mislav Balunovi\u0107, Dimitar I. Dimitrov, Robin Staab, and Martin Vechev. 2021. Bayesian framework for gradient leakage. arXiv preprint arXiv:2111.04706 (2021).","journal-title":"arXiv preprint arXiv:2111.04706"},{"key":"e_1_3_1_12_2","article-title":"A convex framework for fair regression","author":"Berk Richard","year":"2017","unstructured":"Richard Berk, Hoda Heidari, Shahin Jabbari, Matthew Joseph, Michael Kearns, Jamie Morgenstern, Seth Neel, and Aaron Roth. 2017. A convex framework for fair regression. arXiv preprint arXiv:1706.02409 (2017).","journal-title":"arXiv preprint arXiv:1706.02409"},{"key":"e_1_3_1_13_2","doi-asserted-by":"publisher","DOI":"10.1177\/0049124118782533"},{"key":"e_1_3_1_14_2","first-page":"4349","article-title":"Man is to computer programmer as woman is to homemaker? 
Debiasing word embeddings","volume":"29","author":"Bolukbasi Tolga","year":"2016","unstructured":"Tolga Bolukbasi, Kai-Wei Chang, James Y. Zou, Venkatesh Saligrama, and Adam T. Kalai. 2016. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. Adv. Neural Inf. Process. Syst. 29 (2016), 4349\u20134357.","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"e_1_3_1_15_2","doi-asserted-by":"publisher","DOI":"10.1145\/3133956.3133982"},{"key":"e_1_3_1_16_2","doi-asserted-by":"publisher","DOI":"10.1109\/IEEECONF44664.2019.9049066"},{"key":"e_1_3_1_17_2","article-title":"Identifying and reducing gender bias in word-level language models","author":"Bordia Shikha","year":"2019","unstructured":"Shikha Bordia and Samuel R. Bowman. 2019. Identifying and reducing gender bias in word-level language models. arXiv preprint arXiv:1904.03035 (2019).","journal-title":"arXiv preprint arXiv:1904.03035"},{"key":"e_1_3_1_18_2","first-page":"1","article-title":"Federated learning with hierarchical clustering of local updates to improve training on non-IID data","author":"Briggs Christopher","year":"2020","unstructured":"Christopher Briggs, Zhong Fan, and P\u00e9ter Andr\u00e1s. 2020. Federated learning with hierarchical clustering of local updates to improve training on non-IID data. In International Joint Conference on Neural Networks (IJCNN\u201920).1\u20139.","journal-title":"International Joint Conference on Neural Networks (IJCNN\u201920)."},{"key":"e_1_3_1_19_2","first-page":"803","volume-title":"International Conference on Machine Learning","author":"Brunet Marc-Etienne","year":"2019","unstructured":"Marc-Etienne Brunet, Colleen Alkalay-Houlihan, Ashton Anderson, and Richard Zemel. 2019. Understanding the origins of bias in word embeddings. In International Conference on Machine Learning. 
PMLR, 803\u2013811."},{"key":"e_1_3_1_20_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10618-010-0190-x"},{"key":"e_1_3_1_21_2","doi-asserted-by":"crossref","unstructured":"Ran Canetti Aloni Cohen Nishanth Dikkala Govind Ramnarayan Sarah Scheffler and Adam Smith. 2019. From Soft classifiers to hard decisions: How fair can we be? arXiv:cs.LG\/1810.02003.","DOI":"10.1145\/3287560.3287561"},{"key":"e_1_3_1_22_2","first-page":"292","volume-title":"IEEE European Symposium on Security and Privacy (EuroS&P\u201921)","author":"Chang Hongyan","year":"2021","unstructured":"Hongyan Chang and Reza Shokri. 2021. On the privacy risks of algorithmic fairness. In IEEE European Symposium on Security and Privacy (EuroS&P\u201921). IEEE, 292\u2013303."},{"issue":"3","key":"e_1_3_1_23_2","article-title":"Differentially private empirical risk minimization.","volume":"12","author":"Chaudhuri Kamalika","year":"2011","unstructured":"Kamalika Chaudhuri, Claire Monteleoni, and Anand D. Sarwate. 2011. Differentially private empirical risk minimization. J. Mach. Learn. Res. 12, 3 (2011).","journal-title":"J. Mach. Learn. Res."},{"key":"e_1_3_1_24_2","article-title":"pFL-Bench: A comprehensive benchmark for personalized federated learning","author":"Chen Daoyuan","year":"2022","unstructured":"Daoyuan Chen, Dawei Gao, Weirui Kuang, Yaliang Li, and Bolin Ding. 2022. pFL-Bench: A comprehensive benchmark for personalized federated learning. arXiv preprint arXiv:2206.03655 (2022).","journal-title":"arXiv preprint arXiv:2206.03655"},{"key":"e_1_3_1_25_2","article-title":"On bridging generic and personalized federated learning","author":"Chen Hong-You","year":"2021","unstructured":"Hong-You Chen and Wei-Lun Chao. 2021. On bridging generic and personalized federated learning. 
arXiv preprint arXiv:2107.00778 (2021).","journal-title":"arXiv preprint arXiv:2107.00778"},{"key":"e_1_3_1_26_2","first-page":"26","volume-title":"Pacific Symposium (BIOCOMPUTING\u201921)","author":"Chen Junjie","year":"2020","unstructured":"Junjie Chen, Wendy Hui Wang, and Xinghua Shi. 2020. Differential privacy protection against membership inference attack on machine learning for genomic data. In Pacific Symposium (BIOCOMPUTING\u201921). World Scientific, 26\u201337."},{"key":"e_1_3_1_27_2","article-title":"Improved techniques for model inversion attacks","author":"Chen Si","year":"2020","unstructured":"Si Chen, Ruoxi Jia, and Guo-Jun Qi. 2020. Improved techniques for model inversion attacks. arXiv preprint arXiv:2010.04092 (2020).","journal-title":"arXiv preprint arXiv:2010.04092"},{"key":"e_1_3_1_28_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.ins.2020.02.037"},{"key":"e_1_3_1_29_2","article-title":"Client selection in federated learning: Convergence analysis and power-of-choice selection strategies","author":"Cho Yae Jee","year":"2020","unstructured":"Yae Jee Cho, Jianyu Wang, and Gauri Joshi. 2020. Client selection in federated learning: Convergence analysis and power-of-choice selection strategies. arXiv preprint arXiv:2010.01243 (2020).","journal-title":"arXiv preprint arXiv:2010.01243"},{"key":"e_1_3_1_30_2","doi-asserted-by":"publisher","DOI":"10.1089\/big.2016.0047"},{"key":"e_1_3_1_31_2","article-title":"The frontiers of fairness in machine learning","author":"Chouldechova Alexandra","year":"2018","unstructured":"Alexandra Chouldechova and Aaron Roth. 2018. The frontiers of fairness in machine learning. arXiv preprint arXiv:1810.08810 (2018).","journal-title":"arXiv preprint arXiv:1810.08810"},{"key":"e_1_3_1_32_2","article-title":"FedFair: Training fair models in cross-silo federated learning","volume":"2109","author":"Chu Lingyang","year":"2021","unstructured":"Lingyang Chu, Lanjun Wang, Yanjie Dong, Jian Pei, Zirui Zhou, and Yong Zhang. 2021. 
FedFair: Training fair models in cross-silo federated learning. ArXiv abs\/2109.05662 (2021).","journal-title":"ArXiv"},{"key":"e_1_3_1_33_2","article-title":"Addressing algorithmic disparity and performance inconsistency in federated learning","volume":"34","author":"Cui Sen","year":"2021","unstructured":"Sen Cui, Weishen Pan, Jian Liang, Changshui Zhang, and Fei Wang. 2021. Addressing algorithmic disparity and performance inconsistency in federated learning. Adv. Neural Inf. Process. Syst. 34 (2021).","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"e_1_3_1_34_2","doi-asserted-by":"publisher","DOI":"10.1145\/3314183.3323847"},{"key":"e_1_3_1_35_2","doi-asserted-by":"publisher","DOI":"10.1089\/big.2016.0048"},{"key":"e_1_3_1_36_2","article-title":"Adaptive personalized federated learning","author":"Deng Yuyang","year":"2020","unstructured":"Yuyang Deng, Mohammad Mahdi Kamani, and Mehrdad Mahdavi. 2020. Adaptive personalized federated learning. arXiv preprint arXiv:2003.13461 (2020).","journal-title":"arXiv preprint arXiv:2003.13461"},{"key":"e_1_3_1_37_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v34i01.5402"},{"key":"e_1_3_1_38_2","article-title":"FedU: A unified framework for federated multi-task learning with Laplacian regularization","author":"Dinh Canh T.","year":"2021","unstructured":"Canh T. Dinh, Tung T. Vu, Nguyen H. Tran, Minh N. Dao, and Hongyu Zhang. 2021. FedU: A unified framework for federated multi-task learning with Laplacian regularization. arXiv preprint arXiv:2102.07148 (2021).","journal-title":"arXiv preprint arXiv:2102.07148"},{"key":"e_1_3_1_39_2","first-page":"202","volume-title":"22nd ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems","author":"Dinur Irit","year":"2003","unstructured":"Irit Dinur and Kobbi Nissim. 2003. Revealing information while preserving privacy. In 22nd ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems. 
202\u2013210."},{"key":"e_1_3_1_40_2","doi-asserted-by":"publisher","DOI":"10.1137\/1.9781611976700.21"},{"key":"e_1_3_1_41_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPDS.2020.3009406"},{"key":"e_1_3_1_42_2","doi-asserted-by":"publisher","DOI":"10.5555\/1791834.1791836"},{"key":"e_1_3_1_43_2","doi-asserted-by":"publisher","DOI":"10.1145\/2090236.2090255"},{"key":"e_1_3_1_44_2","doi-asserted-by":"publisher","DOI":"10.1007\/11681878_14"},{"key":"e_1_3_1_45_2","article-title":"Disparate impact in differential privacy from gradient misalignment","author":"Esipova Maria S.","year":"2022","unstructured":"Maria S. Esipova, Atiyeh Ashari Ghomi, Yaqiao Luo, and Jesse C. Cresswell. 2022. Disparate impact in differential privacy from gradient misalignment. arXiv preprint arXiv:2206.07737 (2022).","journal-title":"arXiv preprint arXiv:2206.07737"},{"key":"e_1_3_1_46_2","article-title":"FairFed: Enabling group fairness in federated learning","author":"Ezzeldin Yahya H.","year":"2021","unstructured":"Yahya H. Ezzeldin, Shen Yan, Chaoyang He, Emilio Ferrara, and Salman Avestimehr. 2021. FairFed: Enabling group fairness in federated learning. arXiv preprint arXiv:2110.00857 (2021).","journal-title":"arXiv preprint arXiv:2110.00857"},{"key":"e_1_3_1_47_2","first-page":"3557","volume-title":"Advances in Neural Information Processing Systems","author":"Fallah Alireza","year":"2020","unstructured":"Alireza Fallah, Aryan Mokhtari, and Asuman Ozdaglar. 2020. Personalized federated learning with theoretical guarantees: A model-agnostic meta-learning approach. In Advances in Neural Information Processing Systems, Vol. 33. 3557\u20133568."},{"key":"e_1_3_1_48_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-63076-8_3"},{"key":"e_1_3_1_49_2","first-page":"15","volume-title":"Workshop on Privacy-preserving Machine Learning in Practice","author":"Farrand Tom","year":"2020","unstructured":"Tom Farrand, Fatemehsadat Mireshghallah, Sahib Singh, and Andrew Trask. 2020. 
Neither private nor fair: Impact of data imbalance on utility and fairness in differential privacy. In Workshop on Privacy-preserving Machine Learning in Practice. 15\u201319."},{"key":"e_1_3_1_50_2","doi-asserted-by":"publisher","DOI":"10.1145\/2783258.2783311"},{"key":"e_1_3_1_51_2","first-page":"1126","volume-title":"International Conference on Machine Learning","author":"Finn Chelsea","year":"2017","unstructured":"Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. In International Conference on Machine Learning. PMLR, 1126\u20131135."},{"key":"e_1_3_1_52_2","doi-asserted-by":"crossref","first-page":"147","DOI":"10.18653\/v1\/W19-3821","volume-title":"1st Workshop on Gender Bias in Natural Language Processing","author":"Font Joel Escud\u00e9","year":"2019","unstructured":"Joel Escud\u00e9 Font and Marta R. Costa-Juss\u00e0. 2019. Equalizing gender bias in neural machine translation with word embeddings techniques. In 1st Workshop on Gender Bias in Natural Language Processing. 147\u2013154."},{"key":"e_1_3_1_53_2","article-title":"Robbing the fed: Directly obtaining private data in federated learning with modified models","author":"Fowl Liam","year":"2021","unstructured":"Liam Fowl, Jonas Geiping, Wojtek Czaja, Micah Goldblum, and Tom Goldstein. 2021. Robbing the fed: Directly obtaining private data in federated learning with modified models. arXiv preprint arXiv:2110.13057 (2021).","journal-title":"arXiv preprint arXiv:2110.13057"},{"key":"e_1_3_1_54_2","doi-asserted-by":"publisher","DOI":"10.1145\/2810103.2813677"},{"key":"e_1_3_1_55_2","first-page":"17","volume-title":"23rd USENIX Security Symposium (USENIX Security\u201914)","author":"Fredrikson Matthew","year":"2014","unstructured":"Matthew Fredrikson, Eric Lantz, Somesh Jha, Simon Lin, David Page, and Thomas Ristenpart. 2014. Privacy in pharmacogenetics: An end-to-end case study of personalized warfarin dosing. 
In 23rd USENIX Security Symposium (USENIX Security\u201914). USENIX Association, 17\u201332. Retrieved from https:\/\/www.usenix.org\/conference\/usenixsecurity14\/technical-sessions\/presentation\/fredrikson_matthew."},{"key":"e_1_3_1_56_2","article-title":"Enforcing fairness in private federated learning via the modified method of differential multipliers","volume":"2109","author":"G\u00e1lvez Borja Rodr\u00edguez","year":"2021","unstructured":"Borja Rodr\u00edguez G\u00e1lvez, Filip Granqvist, Rogier C. van Dalen, and Matthew Stephen Seigel. 2021. Enforcing fairness in private federated learning via the modified method of differential multipliers. ArXiv abs\/2109.08604 (2021).","journal-title":"ArXiv"},{"key":"e_1_3_1_57_2","first-page":"6944","volume-title":"International Conference on Machine Learning","author":"Ganev Georgi","year":"2022","unstructured":"Georgi Ganev, Bristena Oprisanu, and Emiliano De Cristofaro. 2022. Robin Hood and Matthew effects: Differential privacy has disparate impact on synthetic data. In International Conference on Machine Learning. PMLR, 6944\u20136959."},{"key":"e_1_3_1_58_2","doi-asserted-by":"publisher","DOI":"10.1145\/3243734.3243834"},{"key":"e_1_3_1_59_2","article-title":"Inverting gradients\u2014How easy is it to break privacy in federated learning?","author":"Geiping Jonas","year":"2020","unstructured":"Jonas Geiping, Hartmut Bauermeister, Hannah Dr\u00f6ge, and Michael Moeller. 2020. Inverting gradients\u2014How easy is it to break privacy in federated learning? arXiv preprint arXiv:2003.14053 (2020).","journal-title":"arXiv preprint arXiv:2003.14053"},{"key":"e_1_3_1_60_2","article-title":"Differentially private federated learning: A client level perspective","author":"Geyer Robin C.","year":"2017","unstructured":"Robin C. Geyer, Tassilo Klein, and Moin Nabi. 2017. Differentially private federated learning: A client level perspective. 
arXiv preprint arXiv:1712.07557 (2017).","journal-title":"arXiv preprint arXiv:1712.07557"},{"key":"e_1_3_1_61_2","article-title":"An efficient framework for clustered federated learning","volume":"33","author":"Ghosh Avishek","year":"2020","unstructured":"Avishek Ghosh, Jichan Chung, Dong Yin, and Kannan Ramchandran. 2020. An efficient framework for clustered federated learning. Adv. Neural Inf. Process. Syst. 33 (2020).","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"e_1_3_1_62_2","first-page":"2415","volume-title":"Conference on Advances in Neural Information Processing Systems","author":"Goh Gabriel","year":"2016","unstructured":"Gabriel Goh, Andrew Cotter, Maya Gupta, and Michael P. Friedlander. 2016. Satisfying real-world goals with dataset constraints. In Conference on Advances in Neural Information Processing Systems. 2415\u20132423."},{"key":"e_1_3_1_63_2","doi-asserted-by":"publisher","DOI":"10.56021\/9781421407944"},{"key":"e_1_3_1_64_2","volume-title":"29th Conference on Neural Information Processing Systems (NIPS\u201916). NIPS Foundation","author":"Goodman Bryce W.","year":"2016","unstructured":"Bryce W. Goodman. 2016. A step towards accountable algorithms? Algorithmic discrimination and the European Union general data protection. In 29th Conference on Neural Information Processing Systems (NIPS\u201916). NIPS Foundation."},{"key":"e_1_3_1_65_2","unstructured":"Zhongshu Gu Heqing Huang Jialong Zhang Dong Su Hani Jamjoom Ankita Lamba Dimitrios Pendarakis and Ian Molloy. 2019. YerbaBuena: Securing deep learning inference data via enclave-based ternary model partitioning. (2019). arXiv preprint arXiv:1807.00969."},{"key":"e_1_3_1_66_2","unstructured":"Maya Gupta Andrew Cotter Mahdi Milani Fard and Serena Wang. 2018. Proxy fairness. 
arXiv:cs.LG\/1806.11212."},{"key":"e_1_3_1_67_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.jnca.2018.05.003"},{"key":"e_1_3_1_68_2","article-title":"Recovering private text in federated learning of language models","author":"Gupta Samyak","year":"2022","unstructured":"Samyak Gupta, Yangsibo Huang, Zexuan Zhong, Tianyu Gao, Kai Li, and Danqi Chen. 2022. Recovering private text in federated learning of language models. arXiv preprint arXiv:2205.08514 (2022).","journal-title":"arXiv preprint arXiv:2205.08514"},{"key":"e_1_3_1_69_2","article-title":"FedSketch: Communication-efficient and private federated learning via sketching","author":"Haddadpour Farzin","year":"2020","unstructured":"Farzin Haddadpour, Belhal Karimi, Ping Li, and Xiaoyun Li. 2020. FedSketch: Communication-efficient and private federated learning via sketching. arXiv preprint arXiv:2008.04975 (2020).","journal-title":"arXiv preprint arXiv:2008.04975"},{"key":"e_1_3_1_70_2","unstructured":"Filip Hanzely and Peter Richt\u00e1rik. 2021. Federated learning of a mixture of global and local models. arXiv: cs.LG\/2002.05516."},{"key":"e_1_3_1_71_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPRW53098.2021.00369"},{"key":"e_1_3_1_72_2","article-title":"Federated learning for mobile keyboard prediction","author":"Hard Andrew","year":"2018","unstructured":"Andrew Hard, Kanishka Rao, Rajiv Mathews, Swaroop Ramaswamy, Fran\u00e7oise Beaufays, Sean Augenstein, Hubert Eichner, Chlo\u00e9 Kiddon, and Daniel Ramage. 2018. Federated learning for mobile keyboard prediction. arXiv preprint arXiv:1811.03604 (2018).","journal-title":"arXiv preprint arXiv:1811.03604"},{"key":"e_1_3_1_73_2","first-page":"3315","article-title":"Equality of opportunity in supervised learning","volume":"29","author":"Hardt Moritz","year":"2016","unstructured":"Moritz Hardt, Eric Price, and Nati Srebro. 2016. Equality of opportunity in supervised learning. Adv. Neural Inf. Process. Syst. 
29 (2016), 3315\u20133323.","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"e_1_3_1_74_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52688.2022.00978"},{"key":"e_1_3_1_75_2","doi-asserted-by":"publisher","DOI":"10.1145\/3133956.3134012"},{"key":"e_1_3_1_76_2","unstructured":"Zeou Hu Kiarash Shaloudegi Guojun Zhang and Yaoliang Yu. 2020. FedMGDA+: Federated learning meets multi-objective optimization. arXiv:cs.LG\/2006.11489."},{"key":"e_1_3_1_77_2","unstructured":"Wei Huang Tianrui Li Dexian Wang Shengdong Du and Junbo Zhang. 2020. Fairness and accuracy in federated learning. arXiv:cs.LG\/2012.10069."},{"key":"e_1_3_1_78_2","first-page":"7232","article-title":"Evaluating gradient inversion attacks and defenses in federated learning","volume":"34","author":"Huang Yangsibo","year":"2021","unstructured":"Yangsibo Huang, Samyak Gupta, Zhao Song, Kai Li, and Sanjeev Arora. 2021. Evaluating gradient inversion attacks and defenses in federated learning. Adv. Neural Inf. Process. Syst. 34 (2021), 7232\u20137241.","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"e_1_3_1_79_2","first-page":"4507","volume-title":"International Conference on Machine Learning","author":"Huang Yangsibo","year":"2020","unstructured":"Yangsibo Huang, Zhao Song, Kai Li, and Sanjeev Arora. 2020. InstaHide: Instance-hiding schemes for private distributed learning. In International Conference on Machine Learning. PMLR, 4507\u20134518."},{"key":"e_1_3_1_80_2","article-title":"Efficient deep learning on multi-source private data","author":"Hynes Nick","year":"2018","unstructured":"Nick Hynes, Raymond Cheng, and Dawn Song. 2018. Efficient deep learning on multi-source private data. 
arXiv preprint arXiv:1807.06689 (2018).","journal-title":"arXiv preprint arXiv:1807.06689"},{"key":"e_1_3_1_81_2","first-page":"3000","volume-title":"International Conference on Machine Learning","author":"Jagielski Matthew","year":"2019","unstructured":"Matthew Jagielski, Michael Kearns, Jieming Mao, Alina Oprea, Aaron Roth, Saeed Sharifi-Malvajerdi, and Jonathan Ullman. 2019. Differentially private fair learning. In International Conference on Machine Learning. PMLR, 3000\u20133008."},{"key":"e_1_3_1_82_2","article-title":"Gradient inversion with generative image prior","volume":"34","author":"Jeon Jinwoo","year":"2021","unstructured":"Jinwoo Jeon, jaechang Kim, Kangwook Lee, Sewoong Oh, and Jungseul Ok. 2021. Gradient inversion with generative image prior. Adv. Neural Inf. Process. Syst. 34 (2021).","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"e_1_3_1_83_2","unstructured":"Eunjeong Jeong Seungeun Oh Hyesung Kim Jihong Park Mehdi Bennis and Seong-Lyun Kim. 2018. Communication-efficient on-device machine learning: Federated distillation and augmentation under non-IID private data. arXiv:cs.LG\/1811.11479."},{"key":"e_1_3_1_84_2","unstructured":"Yihan Jiang Jakub Kone\u010dn\u00fd Keith Rush and Sreeram Kannan. 2019. Improving federated learning personalization via model agnostic meta learning. arXiv:cs.LG\/1909.12488."},{"key":"e_1_3_1_85_2","volume-title":"International Conference on Learning Representations","author":"Jiang Zhimeng","year":"2022","unstructured":"Zhimeng Jiang, Xiaotian Han, Chao Fan, Fan Yang, Ali Mostafavi, and Xia Hu. 2022. Generalized demographic parity for group fairness. In International Conference on Learning Representations."},{"key":"e_1_3_1_86_2","article-title":"FLASHE: Additively symmetric homomorphic encryption for cross-silo federated learning","author":"Jiang Zhifeng","year":"2021","unstructured":"Zhifeng Jiang, Wei Wang, and Yang Liu. 2021. 
FLASHE: Additively symmetric homomorphic encryption for cross-silo federated learning. arXiv preprint arXiv:2109.00675 (2021).","journal-title":"arXiv preprint arXiv:2109.00675"},{"key":"e_1_3_1_87_2","article-title":"Catastrophic data leakage in vertical federated learning","volume":"34","author":"Jin Xiao","year":"2021","unstructured":"Xiao Jin, Pin-Yu Chen, Chia-Yi Hsu, Chia-Mu Yu, and Tianyi Chen. 2021. Catastrophic data leakage in vertical federated learning. Adv. Neural Inf. Process. Syst. 34 (2021).","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"e_1_3_1_88_2","volume-title":"International Conference on Learning Representations","author":"Jordon James","year":"2019","unstructured":"James Jordon, Jinsung Yoon, and Mihaela Van Der Schaar. 2019. PATE-GAN: Generating synthetic data with differential privacy guarantees. In International Conference on Learning Representations."},{"key":"e_1_3_1_89_2","article-title":"Advances and open problems in federated learning","author":"Kairouz Peter","year":"2019","unstructured":"Peter Kairouz, H. Brendan McMahan, Brendan Avent, Aur\u00e9lien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista Bonawitz, Zachary Charles, Graham Cormode, Graham Cormode, Rachel Cummings, and others. 2019. Advances and open problems in federated learning. arXiv preprint arXiv:1912.04977 (2019).","journal-title":"arXiv preprint arXiv:1912.04977"},{"key":"e_1_3_1_90_2","doi-asserted-by":"crossref","unstructured":"Nathan Kallus Xiaojie Mao and Angela Zhou. 2020. Assessing algorithmic fairness with unobserved protected class using data combination. arXiv:stat.ML\/1906.00285.","DOI":"10.1145\/3351095.3373154"},{"key":"e_1_3_1_91_2","first-page":"1","volume-title":"19th Machine Learning Conference.","author":"Kamiran Faisal","year":"2010","unstructured":"Faisal Kamiran and Toon Calders. 2010. Classification with no discrimination by preferential sampling. In 19th Machine Learning Conference. 
Citeseer, 1\u20136."},{"key":"e_1_3_1_92_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10115-011-0463-8"},{"key":"e_1_3_1_93_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-642-33486-3_3"},{"key":"e_1_3_1_94_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICDMW.2011.83"},{"key":"e_1_3_1_95_2","doi-asserted-by":"publisher","DOI":"10.1145\/3493700.3493750"},{"key":"e_1_3_1_96_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIT.1983.1056621"},{"key":"e_1_3_1_97_2","article-title":"OLIVE: Oblivious and differentially private federated learning on trusted execution environment","author":"Kato Fumiyuki","year":"2022","unstructured":"Fumiyuki Kato, Yang Cao, and Masatoshi Yoshikawa. 2022. OLIVE: Oblivious and differentially private federated learning on trusted execution environment. arXiv preprint arXiv:2202.07165 (2022).","journal-title":"arXiv preprint arXiv:2202.07165"},{"key":"e_1_3_1_98_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v35i9.16986"},{"key":"e_1_3_1_99_2","article-title":"CertiFair: A framework for certified global fairness of neural networks","author":"Khedr Haitham","year":"2022","unstructured":"Haitham Khedr and Yasser Shoukry. 2022. CertiFair: A framework for certified global fairness of neural networks. arXiv preprint arXiv:2205.09927 (2022).","journal-title":"arXiv preprint arXiv:2205.09927"},{"key":"e_1_3_1_100_2","first-page":"5917","article-title":"Adaptive gradient-based meta-learning methods","volume":"32","author":"Khodak Mikhail","year":"2019","unstructured":"Mikhail Khodak, Maria-Florina F. Balcan, and Ameet S. Talwalkar. 2019. Adaptive gradient-based meta-learning methods. Adv. Neural Inf. Process. Syst. 32 (2019), 5917\u20135928.","journal-title":"Adv. Neural Inf. Process. 
Syst."},{"key":"e_1_3_1_101_2","first-page":"2630","volume-title":"International Conference on Machine Learning","author":"Kilbertus Niki","year":"2018","unstructured":"Niki Kilbertus, Adri\u00e0 Gasc\u00f3n, Matt Kusner, Michael Veale, Krishna Gummadi, and Adrian Weller. 2018. Blind justice: Fairness with encrypted sensitive attributes. In International Conference on Machine Learning. PMLR, 2630\u20132639."},{"key":"e_1_3_1_102_2","doi-asserted-by":"publisher","DOI":"10.1073\/pnas.1611835114"},{"key":"e_1_3_1_103_2","volume-title":"8th Innovations in Theoretical Computer Science Conference (ITCS\u201917)","author":"Kleinberg Jon","year":"2017","unstructured":"Jon Kleinberg, Sendhil Mullainathan, and Manish Raghavan. 2017. Inherent trade-offs in the fair determination of risk scores. In 8th Innovations in Theoretical Computer Science Conference (ITCS\u201917). Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik."},{"key":"e_1_3_1_104_2","doi-asserted-by":"publisher","DOI":"10.1109\/WorldS450073.2020.9210355"},{"key":"e_1_3_1_105_2","article-title":"Fair decision making using privacy-protected data","author":"Kuppam Satya","year":"2020","unstructured":"Satya Kuppam, Ryan McKenna, David Pujol, Michael Hay, Ashwin Machanavajjhala, and Gerome Miklau. 2020. Fair decision making using privacy-protected data. In Conference on Fairness, Accountability, and Transparency.","journal-title":"Conference on Fairness, Accountability, and Transparency"},{"key":"e_1_3_1_106_2","unstructured":"Matt J. Kusner Joshua R. Loftus Chris Russell and Ricardo Silva. 2018. Counterfactual fairness. arXiv: stat.ML\/1703.06856."},{"key":"e_1_3_1_107_2","article-title":"Noise-tolerant fair classification","volume":"32","author":"Lamy Alex","year":"2019","unstructured":"Alex Lamy, Ziyuan Zhong, Aditya K. Menon, and Nakul Verma. 2019. Noise-tolerant fair classification. Adv. Neural Inf. Process. Syst. 32 (2019).","journal-title":"Adv. Neural Inf. Process. 
Syst."},{"key":"e_1_3_1_108_2","doi-asserted-by":"publisher","DOI":"10.1109\/INFCOM.2010.5461911"},{"key":"e_1_3_1_109_2","first-page":"1605","volume-title":"29th USENIX Security Symposium (USENIX Security\u201920)","author":"Leino Klas","year":"2020","unstructured":"Klas Leino and Matt Fredrikson. 2020. Stolen memories: Leveraging model memorization for calibrated White-Box membership inference. In 29th USENIX Security Symposium (USENIX Security\u201920). 1605\u20131622."},{"key":"e_1_3_1_110_2","article-title":"FedMD: Heterogenous federated learning via model distillation","author":"Li Daliang","year":"2019","unstructured":"Daliang Li and Junpu Wang. 2019. FedMD: Heterogenous federated learning via model distillation. arXiv preprint arXiv:1910.03581 (2019).","journal-title":"arXiv preprint arXiv:1910.03581"},{"key":"e_1_3_1_111_2","first-page":"6357","volume-title":"Proceedings of the 38th International Conference on Machine Learning (Proceedings of Machine Learning Research)","volume":"139","author":"Li Tian","year":"2021","unstructured":"Tian Li, Shengyuan Hu, Ahmad Beirami, and Virginia Smith. 2021. Ditto: Fair and robust federated learning through personalization. In Proceedings of the 38th International Conference on Machine Learning (Proceedings of Machine Learning Research), Marina Meila and Tong Zhang (Eds.), Vol. 139. PMLR, 6357\u20136368. Retrieved from https:\/\/proceedings.mlr.press\/v139\/li21h.html."},{"key":"e_1_3_1_112_2","article-title":"Privacy for free: Communication-efficient learning with differential privacy using sketches","author":"Li Tian","year":"2019","unstructured":"Tian Li, Zaoxing Liu, Vyas Sekar, and Virginia Smith. 2019. Privacy for free: Communication-efficient learning with differential privacy using sketches. 
arXiv preprint arXiv:1911.00972 (2019).","journal-title":"arXiv preprint arXiv:1911.00972"},{"key":"e_1_3_1_113_2","first-page":"429","article-title":"Federated optimization in heterogeneous networks","volume":"2","author":"Li Tian","year":"2020","unstructured":"Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, and Virginia Smith. 2020. Federated optimization in heterogeneous networks. Proc. Mach. Learn. Syst. 2 (2020), 429\u2013450.","journal-title":"Proc. Mach. Learn. Syst."},{"key":"e_1_3_1_114_2","article-title":"Fair resource allocation in federated learning","author":"Li Tian","year":"2019","unstructured":"Tian Li, Maziar Sanjabi, Ahmad Beirami, and Virginia Smith. 2019. Fair resource allocation in federated learning. arXiv preprint arXiv:1905.10497 (2019).","journal-title":"arXiv preprint arXiv:1905.10497"},{"key":"e_1_3_1_115_2","article-title":"SoteriaFL: A unified framework for private federated learning with communication compression","author":"Li Zhize","year":"2022","unstructured":"Zhize Li, Haoyu Zhao, Boyue Li, and Yuejie Chi. 2022. SoteriaFL: A unified framework for private federated learning with communication compression. arXiv preprint arXiv:2206.09888 (2022).","journal-title":"arXiv preprint arXiv:2206.09888"},{"key":"e_1_3_1_116_2","unstructured":"Paul Pu Liang Terrance Liu Liu Ziyin Nicholas B. Allen Randy P. Auerbach David Brent Ruslan Salakhutdinov and Louis-Philippe Morency. 2020. Think locally act globally: Federated learning with local and global representations. arXiv:cs.LG\/2001.01523."},{"key":"e_1_3_1_117_2","unstructured":"Shiyun Lin Yuze Han Xiang Li and Zhihua Zhang. 2020. Personalized federated learning towards communication efficiency robustness and fairness. Adv. Neural Inf. Process. Syst. 
35 (2020)."},{"issue":"2","key":"e_1_3_1_118_2","first-page":"346","article-title":"Survey on privacy-preserving machine learning","volume":"57","author":"Liu Junxu","year":"2020","unstructured":"Junxu Liu and Xiaofeng Meng. 2020. Survey on privacy-preserving machine learning. J. Comput. Res. Devel. 57, 2 (2020), 346.","journal-title":"J. Comput. Res. Devel."},{"key":"e_1_3_1_119_2","doi-asserted-by":"publisher","DOI":"10.1109\/MIS.2020.2988525"},{"key":"e_1_3_1_120_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP.2019.8682620"},{"key":"e_1_3_1_121_2","volume-title":"International Conference on Learning Representations","author":"Louizos Christos","year":"2016","unstructured":"Christos Louizos, Kevin Swersky, Yujia Li, Max Welling, and Richard S. Zemel. 2016. The variational fair autoencoder. In International Conference on Learning Representations."},{"key":"e_1_3_1_122_2","article-title":"Stochastic differentially private and fair learning","author":"Lowy Andrew","year":"2023","unstructured":"Andrew Lowy, Devansh Gupta, and Meisam Razaviyayn. 2023. Stochastic differentially private and fair learning. In International Conference on Learning Representations.","journal-title":"International Conference on Learning Representations"},{"key":"e_1_3_1_123_2","unstructured":"Mi Luo Fei Chen Dapeng Hu Yifan Zhang Jian Liang and Jiashi Feng. 2021. No fear of heterogeneity: Classifier calibration for federated learning with non-iid data. Adv. Neural Inf. Process. Syst. 34 (2021) 5972\u20135984."},{"key":"e_1_3_1_124_2","article-title":"Threats to federated learning: A survey","author":"Lyu Lingjuan","year":"2020","unstructured":"Lingjuan Lyu, Han Yu, and Qiang Yang. 2020. Threats to federated learning: A survey. 
arXiv preprint arXiv:2003.02133 (2020).","journal-title":"arXiv preprint arXiv:2003.02133"},{"key":"e_1_3_1_125_2","doi-asserted-by":"publisher","DOI":"10.1109\/JIOT.2021.3112737"},{"key":"e_1_3_1_126_2","article-title":"Three approaches for personalization with applications to federated learning","author":"Mansour Yishay","year":"2020","unstructured":"Yishay Mansour, Mehryar Mohri, Jae Ro, and Ananda Theertha Suresh. 2020. Three approaches for personalization with applications to federated learning. arXiv preprint arXiv:2002.10619 (2020).","journal-title":"arXiv preprint arXiv:2002.10619"},{"key":"e_1_3_1_127_2","first-page":"6755","volume-title":"Proceedings of the 37th International Conference on Machine Learning (Proceedings of Machine Learning Research)","volume":"119","author":"Martinez Natalia","year":"2020","unstructured":"Natalia Martinez, Martin Bertran, and Guillermo Sapiro. 2020. Minimax Pareto fairness: A multi objective perspective. In Proceedings of the 37th International Conference on Machine Learning (Proceedings of Machine Learning Research), Hal Daum\u00e9 III and Aarti Singh (Eds.), Vol. 119. PMLR, 6755\u20136764."},{"key":"e_1_3_1_128_2","article-title":"On measuring social biases in sentence encoders","author":"May Chandler","year":"2019","unstructured":"Chandler May, Alex Wang, Shikha Bordia, Samuel R. Bowman, and Rachel Rudinger. 2019. On measuring social biases in sentence encoders. arXiv preprint arXiv:1903.10561 (2019).","journal-title":"arXiv preprint arXiv:1903.10561"},{"key":"e_1_3_1_129_2","first-page":"1273","volume-title":"Artificial Intelligence and Statistics","author":"McMahan Brendan","year":"2017","unstructured":"Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. 2017. Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics. 
PMLR, 1273\u20131282."},{"key":"e_1_3_1_130_2","doi-asserted-by":"publisher","DOI":"10.1109\/FOCS.2007.66"},{"key":"e_1_3_1_131_2","doi-asserted-by":"publisher","DOI":"10.1145\/3457607"},{"key":"e_1_3_1_132_2","first-page":"691","volume-title":"IEEE Symposium on Security and Privacy (SP\u201919)","author":"Melis Luca","year":"2019","unstructured":"Luca Melis, Congzheng Song, Emiliano De Cristofaro, and Vitaly Shmatikov. 2019. Exploiting unintended feature leakage in collaborative learning. In IEEE Symposium on Security and Privacy (SP\u201919). IEEE, 691\u2013706."},{"key":"e_1_3_1_133_2","first-page":"263","volume-title":"IEEE 30th Computer Security Foundations Symposium (CSF\u201917)","author":"Mironov Ilya","year":"2017","unstructured":"Ilya Mironov. 2017. R\u00e9nyi differential privacy. In IEEE 30th Computer Security Foundations Symposium (CSF\u201917). IEEE, 263\u2013275."},{"key":"e_1_3_1_134_2","doi-asserted-by":"publisher","DOI":"10.1145\/3386901.3388946"},{"key":"e_1_3_1_135_2","doi-asserted-by":"publisher","DOI":"10.1109\/90.879343"},{"key":"e_1_3_1_136_2","first-page":"4615","volume-title":"International Conference on Machine Learning","author":"Mohri Mehryar","year":"2019","unstructured":"Mehryar Mohri, Gary Sivek, and Ananda Theertha Suresh. 2019. Agnostic federated learning. In International Conference on Machine Learning. PMLR, 4615\u20134625."},{"key":"e_1_3_1_137_2","article-title":"SCOTCH: An efficient secure computation framework for secure aggregation","author":"Mondal Arup","year":"2022","unstructured":"Arup Mondal, Yash More, Prashanthi Ramachandran, Priyam Panda, Harpreet Virk, and Debayan Gupta. 2022. SCOTCH: An efficient secure computation framework for secure aggregation. 
arXiv preprint arXiv:2201.07730 (2022).","journal-title":"arXiv preprint arXiv:2201.07730"},{"key":"e_1_3_1_138_2","first-page":"7066","volume-title":"International Conference on Machine Learning","author":"Mozannar Hussein","year":"2020","unstructured":"Hussein Mozannar, Mesrob Ohannessian, and Nathan Srebro. 2020. Fair learning with private demographic data. In International Conference on Machine Learning. PMLR, 7066\u20137075."},{"issue":"10","key":"e_1_3_1_139_2","first-page":"9046","article-title":"Game of gradients: Mitigating irrelevant clients in federated learning","volume":"35","author":"Nagalapatti Lokesh","year":"2021","unstructured":"Lokesh Nagalapatti and Ramasuri Narayanam. 2021. Game of gradients: Mitigating irrelevant clients in federated learning. Proc. AAAI Conf. Artif. Intell. 35, 10 (2021), 9046\u20139054. Retrieved from https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/17093.","journal-title":"Proc. AAAI Conf. Artif. Intell."},{"key":"e_1_3_1_140_2","doi-asserted-by":"publisher","DOI":"10.1109\/SP.2019.00065"},{"key":"e_1_3_1_141_2","unstructured":"Alex Nichol Joshua Achiam and John Schulman. 2018. On first-order meta-learning algorithms. arXiv: cs.LG\/1803.02999."},{"key":"e_1_3_1_142_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICC.2019.8761315"},{"key":"e_1_3_1_143_2","doi-asserted-by":"publisher","DOI":"10.3389\/fdata.2019.00013"},{"key":"e_1_3_1_144_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-92310-5_80"},{"key":"e_1_3_1_145_2","article-title":"Exploring the security boundary of data reconstruction via neuron exclusivity analysis","author":"Pan Xudong","year":"2020","unstructured":"Xudong Pan, Mi Zhang, Yifan Yan, Jiaming Zhu, and Min Yang. 2020. Exploring the security boundary of data reconstruction via neuron exclusivity analysis. 
arXiv preprint arXiv:2010.13356 (2020).","journal-title":"arXiv preprint arXiv:2010.13356"},{"key":"e_1_3_1_146_2","unstructured":"Afroditi Papadaki Natalia Martinez Martin Bertran Guillermo Sapiro and Miguel Rodrigues. 2021. Federating for learning group fair models. arXiv:cs.LG\/2110.01999."},{"key":"e_1_3_1_147_2","article-title":"Semi-supervised knowledge transfer for deep learning from private training data","author":"Papernot Nicolas","year":"2016","unstructured":"Nicolas Papernot, Mart\u00edn Abadi, Ulfar Erlingsson, Ian Goodfellow, and Kunal Talwar. 2016. Semi-supervised knowledge transfer for deep learning from private training data. arXiv preprint arXiv:1610.05755 (2016).","journal-title":"arXiv preprint arXiv:1610.05755"},{"key":"e_1_3_1_148_2","doi-asserted-by":"publisher","DOI":"10.3390\/app12020734"},{"key":"e_1_3_1_149_2","doi-asserted-by":"publisher","DOI":"10.5555\/3196160.3196245"},{"key":"e_1_3_1_150_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIFS.2019.2911169"},{"key":"e_1_3_1_151_2","volume-title":"Advances in Neural Information Processing Systems","author":"Pleiss Geoff","year":"2017","unstructured":"Geoff Pleiss, Manish Raghavan, Felix Wu, Jon Kleinberg, and Kilian Q. Weinberger. 2017. On fairness and calibration. In Advances in Neural Information Processing Systems, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.), Vol. 30. Curran Associates, Inc. Retrieved from https:\/\/proceedings.neurips.cc\/paper\/2017\/file\/b8b9c74ac526fffbeb2d39ab038d1cd7-Paper.pdf."},{"key":"e_1_3_1_152_2","unstructured":"Jia Qian and Lars Kai Hansen. 2020. What can we learn from gradients? 
arXiv preprint arXiv:2010.15718."},{"key":"e_1_3_1_153_2","doi-asserted-by":"publisher","DOI":"10.1145\/3510032"},{"key":"e_1_3_1_154_2","first-page":"1291","volume-title":"29th USENIX Security Symposium (USENIX Security\u201920)","author":"Salem Ahmed","year":"2020","unstructured":"Ahmed Salem, Apratim Bhattacharya, Michael Backes, Mario Fritz, and Yang Zhang. 2020. Updates-Leak: Data set inference and reconstruction attacks in online learning. In 29th USENIX Security Symposium (USENIX Security\u201920). 1291\u20131308."},{"key":"e_1_3_1_155_2","article-title":"FairCal: Fairness calibration for face verification","author":"Salvador Tiago","year":"2021","unstructured":"Tiago Salvador, Stephanie Cairns, Vikram Voleti, Noah Marshall, and Adam Oberman. 2021. FairCal: Fairness calibration for face verification. arXiv preprint arXiv:2106.03761 (2021).","journal-title":"arXiv preprint arXiv:2106.03761"},{"key":"e_1_3_1_156_2","first-page":"1738","volume-title":"Uncertainty in Artificial Intelligence","author":"Sanyal Amartya","year":"2022","unstructured":"Amartya Sanyal, Yaxi Hu, and Fanny Yang. 2022. How unfair is private learning? In Uncertainty in Artificial Intelligence. PMLR, 1738\u20131748."},{"key":"e_1_3_1_157_2","first-page":"1668","volume-title":"57th Annual Meeting of the Association for Computational Linguistics","author":"Sap Maarten","year":"2019","unstructured":"Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A. Smith. 2019. The risk of racial bias in hate speech detection. In 57th Annual Meeting of the Association for Computational Linguistics. 1668\u20131678."},{"key":"e_1_3_1_158_2","article-title":"Clustered federated learning: Model-agnostic distributed multitask optimization under privacy constraints","author":"Sattler Felix","year":"2020","unstructured":"Felix Sattler, Klaus-Robert M\u00fcller, and Wojciech Samek. 2020. Clustered federated learning: Model-agnostic distributed multitask optimization under privacy constraints. 
IEEE Trans. Neural Netw. Learn. Syst. 32, 8 (2020), 3710\u20133722.","journal-title":"IEEE Trans. Neural Netw. Learn. Syst."},{"key":"e_1_3_1_159_2","doi-asserted-by":"publisher","DOI":"10.1109\/WACV51458.2022.00366"},{"key":"e_1_3_1_160_2","doi-asserted-by":"publisher","DOI":"10.1145\/359168.359176"},{"key":"e_1_3_1_161_2","article-title":"DReS-FL: Dropout-resilient secure federated learning for non-iid clients via secret data sharing","author":"Shao Jiawei","year":"2022","unstructured":"Jiawei Shao, Yuchang Sun, Songze Li, and Jun Zhang. 2022. DReS-FL: Dropout-resilient secure federated learning for non-iid clients via secret data sharing. arXiv preprint arXiv:2210.02680 (2022).","journal-title":"arXiv preprint arXiv:2210.02680"},{"key":"e_1_3_1_162_2","article-title":"A survey of fairness-aware federated learning","author":"Shi Yuxin","year":"2021","unstructured":"Yuxin Shi, Han Yu, and Cyril Leung. 2021. A survey of fairness-aware federated learning. arXiv preprint arXiv:2111.01872 (2021).","journal-title":"arXiv preprint arXiv:2111.01872"},{"key":"e_1_3_1_163_2","doi-asserted-by":"publisher","DOI":"10.3390\/s21237806"},{"key":"e_1_3_1_164_2","article-title":"Overcoming forgetting in federated learning on non-iid data","author":"Shoham Neta","year":"2019","unstructured":"Neta Shoham, Tomer Avidor, Aviv Keren, Nadav Israel, Daniel Benditkis, Liron Mor-Yosef, and Itai Zeitak. 2019. Overcoming forgetting in federated learning on non-iid data. arXiv preprint arXiv:1910.07796 (2019).","journal-title":"arXiv preprint arXiv:1910.07796"},{"key":"e_1_3_1_165_2","doi-asserted-by":"publisher","DOI":"10.1145\/2810103.2813687"},{"key":"e_1_3_1_166_2","first-page":"3","volume-title":"IEEE Symposium on Security and Privacy (SP\u201917)","author":"Shokri Reza","year":"2017","unstructured":"Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. 2017. Membership inference attacks against machine learning models. In IEEE Symposium on Security and Privacy (SP\u201917). 
IEEE, 3\u201318."},{"key":"e_1_3_1_167_2","unstructured":"Karan Singhal Hakim Sidahmed Zachary Garrett Shanshan Wu Keith Rush and Sushant Prakash. 2021. Federated reconstruction: Partially local federated learning. arXiv:cs.LG\/2102.03448."},{"key":"e_1_3_1_168_2","volume-title":"Conference on Neural Information Processing Systems (NIPS\u201917)","author":"Smith Virginia","year":"2017","unstructured":"Virginia Smith, Chao-Kai Chiang, Maziar Sanjabi, and Ameet S. Talwalkar. 2017. Federated multi-task learning. In Conference on Neural Information Processing Systems (NIPS\u201917)."},{"key":"e_1_3_1_169_2","doi-asserted-by":"publisher","DOI":"10.1109\/JSAIT.2021.3054610"},{"key":"e_1_3_1_170_2","doi-asserted-by":"publisher","DOI":"10.1145\/3133956.3134077"},{"key":"e_1_3_1_171_2","doi-asserted-by":"publisher","DOI":"10.1109\/JSAC.2020.3000372"},{"key":"e_1_3_1_172_2","doi-asserted-by":"publisher","DOI":"10.1109\/GlobalSIP.2013.6736861"},{"key":"e_1_3_1_173_2","doi-asserted-by":"publisher","unstructured":"Ruoyu Sun Tiantian Fang and Alex Schwing. 2020. Towards a better global loss landscape of GANs. DOI:10.48550\/ARXIV.2011.04926","DOI":"10.48550\/ARXIV.2011.04926"},{"key":"e_1_3_1_174_2","article-title":"Towards personalized federated learning","author":"Tan Alysa Ziying","year":"2021","unstructured":"Alysa Ziying Tan, Han Yu, Lizhen Cui, and Qiang Yang. 2021. Towards personalized federated learning. arXiv preprint arXiv:2103.00710 (2021).","journal-title":"arXiv preprint arXiv:2103.00710"},{"key":"e_1_3_1_175_2","article-title":"Slalom: Fast, verifiable and private execution of neural networks in trusted hardware","author":"Tramer Florian","year":"2018","unstructured":"Florian Tramer and Dan Boneh. 2018. Slalom: Fast, verifiable and private execution of neural networks in trusted hardware. 
arXiv preprint arXiv:1806.03287 (2018).","journal-title":"arXiv preprint arXiv:1806.03287"},{"key":"e_1_3_1_176_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v35i11.17193"},{"key":"e_1_3_1_177_2","doi-asserted-by":"publisher","DOI":"10.24963\/ijcai.2021\/78"},{"key":"e_1_3_1_178_2","first-page":"1","volume-title":"12th ACM Workshop on Artificial Intelligence and Security","author":"Truex Stacey","year":"2019","unstructured":"Stacey Truex, Nathalie Baracaldo, Ali Anwar, Thomas Steinke, Heiko Ludwig, Rui Zhang, and Yi Zhou. 2019. A hybrid approach to privacy-preserving federated learning. In 12th ACM Workshop on Artificial Intelligence and Security. 1\u201311."},{"key":"e_1_3_1_179_2","doi-asserted-by":"publisher","DOI":"10.1109\/TSC.2019.2897554"},{"key":"e_1_3_1_180_2","article-title":"DP-SGD vs PATE: Which has less disparate impact on model accuracy?","author":"Uniyal Archit","year":"2021","unstructured":"Archit Uniyal, Rakshit Naidu, Sasikanth Kotti, Sahib Singh, Patrik Joslin Kenfack, Fatemehsadat Mireshghallah, and Andrew Trask. 2021. DP-SGD vs PATE: Which has less disparate impact on model accuracy? arXiv preprint arXiv:2106.12576 (2021).","journal-title":"arXiv preprint arXiv:2106.12576"},{"key":"e_1_3_1_181_2","doi-asserted-by":"publisher","DOI":"10.1177\/2053951717743530"},{"key":"e_1_3_1_182_2","doi-asserted-by":"publisher","DOI":"10.1145\/3194770.3194776"},{"key":"e_1_3_1_183_2","first-page":"3152676","article-title":"The EU General Data Protection Regulation (GDPR)","volume":"10","author":"Voigt Paul","year":"2017","unstructured":"Paul Voigt and Axel Von dem Bussche. 2017. The EU General Data Protection Regulation (GDPR). 
A Practical Guide, 1st Ed., Cham: Springer International Publishing 10 (2017), 3152676.","journal-title":"A Practical Guide, 1st Ed., Cham: Springer International Publishing"},{"key":"e_1_3_1_184_2","doi-asserted-by":"publisher","DOI":"10.1109\/INFOCOM41043.2020.9155494"},{"key":"e_1_3_1_185_2","doi-asserted-by":"publisher","unstructured":"Lixu Wang Shichao Xu Xiao Wang and Qi Zhu. 2019. Eavesdrop the composition proportion of training labels in federated learning. DOI:10.48550\/ARXIV.1910.06044","DOI":"10.48550\/ARXIV.1910.06044"},{"key":"e_1_3_1_186_2","first-page":"5190","article-title":"Robust optimization for fairness with noisy protected groups","volume":"33","author":"Wang Serena","year":"2020","unstructured":"Serena Wang, Wenshuo Guo, Harikrishna Narasimhan, Andrew Cotter, Maya Gupta, and Michael Jordan. 2020. Robust optimization for fairness with noisy protected groups. Adv. Neural Inf. Process. Syst. 33 (2020), 5190\u20135203.","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"e_1_3_1_187_2","article-title":"SAPAG: A self-adaptive privacy attack from gradients","author":"Wang Yijue","year":"2020","unstructured":"Yijue Wang, Jieren Deng, Dan Guo, Chenghong Wang, Xianrui Meng, Hang Liu, Caiwen Ding, and Sanguthevar Rajasekaran. 2020. SAPAG: A self-adaptive privacy attack from gradients. arXiv preprint arXiv:2009.06228 (2020).","journal-title":"arXiv preprint arXiv:2009.06228"},{"key":"e_1_3_1_188_2","article-title":"Poisoning-assisted property inference attack against federated learning","author":"Wang Zhibo","year":"2022","unstructured":"Zhibo Wang, Yuting Huang, Mengkai Song, Libing Wu, Feng Xue, and Kui Ren. 2022. Poisoning-assisted property inference attack against federated learning. IEEE Trans. Depend. Secure Comput. 1 (2022), 1\u20131.","journal-title":"IEEE Trans. Depend. 
Secure Comput."},{"key":"e_1_3_1_189_2","doi-asserted-by":"publisher","DOI":"10.1109\/INFOCOM.2019.8737416"},{"key":"e_1_3_1_190_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIFS.2020.2988575"},{"key":"e_1_3_1_191_2","article-title":"Gradient leakage attack resilient deep learning","author":"Wei Wenqi","year":"2021","unstructured":"Wenqi Wei and Ling Liu. 2021. Gradient leakage attack resilient deep learning. IEEE Trans. Inf. Forens. Secur. (2021).","journal-title":"IEEE Trans. Inf. Forens. Secur."},{"key":"e_1_3_1_192_2","article-title":"A framework for evaluating gradient leakage attacks in federated learning","author":"Wei Wenqi","year":"2020","unstructured":"Wenqi Wei, Ling Liu, Margaret Loper, Ka-Ho Chow, Mehmet Emre Gursoy, Stacey Truex, and Yanzhao Wu. 2020. A framework for evaluating gradient leakage attacks in federated learning. arXiv preprint arXiv:2004.10397 (2020).","journal-title":"arXiv preprint arXiv:2004.10397"},{"key":"e_1_3_1_193_2","article-title":"FedCG: Leverage conditional GAN for protecting privacy and maintaining competitive performance in federated learning","author":"Wu Yuezhou","year":"2021","unstructured":"Yuezhou Wu, Yan Kang, Jiahuan Luo, Yuanqin He, and Qiang Yang. 2021. FedCG: Leverage conditional GAN for protecting privacy and maintaining competitive performance in federated learning. 
arXiv preprint arXiv:2111.08211 (2021).","journal-title":"arXiv preprint arXiv:2111.08211"},{"key":"e_1_3_1_194_2","doi-asserted-by":"publisher","DOI":"10.1145\/3447548.3467268"},{"key":"e_1_3_1_195_2","doi-asserted-by":"publisher","DOI":"10.1145\/3308560.3317584"},{"key":"e_1_3_1_196_2","doi-asserted-by":"publisher","DOI":"10.1145\/3308560.3317584"},{"key":"e_1_3_1_197_2","doi-asserted-by":"publisher","DOI":"10.1109\/BigData.2018.8622525"},{"key":"e_1_3_1_198_2","article-title":"Privacy-preserving machine learning: Methods, challenges and directions","author":"Xu Runhua","year":"2021","unstructured":"Runhua Xu, Nathalie Baracaldo, and James Joshi. 2021. Privacy-preserving machine learning: Methods, challenges and directions. arXiv preprint arXiv:2108.04417 (2021).","journal-title":"arXiv preprint arXiv:2108.04417"},{"key":"e_1_3_1_199_2","article-title":"Federated learning with class imbalance reduction","author":"Yang Miao","year":"2020","unstructured":"Miao Yang, Akitanoshou Wong, Hongbin Zhu, Haifeng Wang, and Hua Qian. 2020. Federated learning with class imbalance reduction. arXiv preprint arXiv:2011.11266 (2020).","journal-title":"arXiv preprint arXiv:2011.11266"},{"key":"e_1_3_1_200_2","doi-asserted-by":"publisher","DOI":"10.1145\/3298981"},{"key":"e_1_3_1_201_2","article-title":"Gain without pain: Offsetting DP-injected noises stealthily in cross-device federated learning","author":"Yang Wenzhuo","year":"2021","unstructured":"Wenzhuo Yang, Yipeng Zhou, Miao Hu, Di Wu, Xi Zheng, Jessie Hui Wang, Song Guo, and Chao Li. 2021. Gain without pain: Offsetting DP-injected noises stealthily in cross-device federated learning. IEEE Internet Things J. 
9, 22 (2021), 22147\u201322157.","journal-title":"IEEE Internet Things J."},{"key":"e_1_3_1_202_2","article-title":"An accuracy-lossless perturbation method for defending privacy attacks in federated learning","author":"Yang Xue","year":"2020","unstructured":"Xue Yang, Yan Feng, Weijun Fang, Jun Shao, Xiaohu Tang, Shu-Tao Xia, and Rongxing Lu. 2020. An accuracy-lossless perturbation method for defending privacy attacks in federated learning. arXiv preprint arXiv:2002.09843 (2020).","journal-title":"arXiv preprint arXiv:2002.09843"},{"key":"e_1_3_1_203_2","article-title":"Adversarial neural network inversion via auxiliary knowledge alignment","author":"Yang Ziqi","year":"2019","unstructured":"Ziqi Yang, Ee-Chien Chang, and Zhenkai Liang. 2019. Adversarial neural network inversion via auxiliary knowledge alignment. arXiv preprint arXiv:1902.08552 (2019).","journal-title":"arXiv preprint arXiv:1902.08552"},{"key":"e_1_3_1_204_2","first-page":"160","volume-title":"23rd Annual Symposium on Foundations of Computer Science (SFCS\u201982)","author":"Yao Andrew C.","year":"1982","unstructured":"Andrew C. Yao. 1982. Protocols for secure computations. In 23rd Annual Symposium on Foundations of Computer Science (SFCS\u201982). IEEE, 160\u2013164."},{"key":"e_1_3_1_205_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICIP40778.2020.9190968"},{"key":"e_1_3_1_206_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-12229-8_2"},{"key":"e_1_3_1_207_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR46437.2021.01607"},{"key":"e_1_3_1_208_2","doi-asserted-by":"publisher","DOI":"10.1145\/3460427"},{"key":"e_1_3_1_209_2","doi-asserted-by":"publisher","DOI":"10.1145\/3375627.3375840"},{"key":"e_1_3_1_210_2","unstructured":"Xubo Yue Maher Nouiehed and Raed Al Kontar. 2021. GIFAIR-FL: An approach for group and individual fairness in federated learning. 
arXiv:cs.LG\/2108.02741."},{"key":"e_1_3_1_211_2","first-page":"962","volume-title":"Artificial Intelligence and Statistics","author":"Zafar Muhammad Bilal","year":"2017","unstructured":"Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rogriguez, and Krishna P. Gummadi. 2017. Fairness constraints: Mechanisms for fair classification. In Artificial Intelligence and Statistics. PMLR, 962\u2013970."},{"key":"e_1_3_1_212_2","first-page":"325","volume-title":"International Conference on Machine Learning","author":"Zemel Rich","year":"2013","unstructured":"Rich Zemel, Yu Wu, Kevin Swersky, Toni Pitassi, and Cynthia Dwork. 2013. Learning fair representations. In International Conference on Machine Learning. PMLR, 325\u2013333."},{"key":"e_1_3_1_213_2","doi-asserted-by":"publisher","DOI":"10.1145\/3278721.3278779"},{"key":"e_1_3_1_214_2","first-page":"493","volume-title":"USENIX Annual Technical Conference (USENIX ATC\u201920)","author":"Zhang Chengliang","year":"2020","unstructured":"Chengliang Zhang, Suyi Li, Junzhe Xia, Wei Wang, Feng Yan, and Yang Liu. 2020. BatchCrypt: Efficient homomorphic encryption for Cross-Silo federated learning. In USENIX Annual Technical Conference (USENIX ATC\u201920). 493\u2013506."},{"key":"e_1_3_1_215_2","doi-asserted-by":"publisher","DOI":"10.1109\/BigData50022.2020.9378043"},{"key":"e_1_3_1_216_2","doi-asserted-by":"publisher","DOI":"10.1109\/GLOBECOM38437.2019.9014272"},{"key":"e_1_3_1_217_2","doi-asserted-by":"publisher","DOI":"10.1145\/3134428"},{"key":"e_1_3_1_218_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICC40277.2020.9148790"},{"key":"e_1_3_1_219_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICC40277.2020.9148790"},{"key":"e_1_3_1_220_2","volume-title":"AAAI Conference on Artificial Intelligence","author":"Zhang L.","year":"2019","unstructured":"L. Zhang, Y. Wu, and X. Wu. 2019. Fairness-aware classification: Criterion convexity and bounds. 
In AAAI Conference on Artificial Intelligence."},{"key":"e_1_3_1_221_2","volume-title":"International Conference on Learning Representations","author":"Zhang Michael","year":"2020","unstructured":"Michael Zhang, Karan Sapra, Sanja Fidler, Serena Yeung, and Jose M. Alvarez. 2020. Personalized federated learning with first order model optimization. In International Conference on Learning Representations."},{"key":"e_1_3_1_222_2","first-page":"12589","volume-title":"International Conference on Machine Learning","author":"Zhang Mengjiao","year":"2021","unstructured":"Mengjiao Zhang and Shusen Wang. 2021. Matrix sketching for secure collaborative machine learning. In International Conference on Machine Learning. PMLR, 12589\u201312599."},{"key":"e_1_3_1_223_2","article-title":"Balancing learning model privacy, fairness, and accuracy with early stopping criteria","author":"Zhang Tao","year":"2021","unstructured":"Tao Zhang, Tianqing Zhu, Kun Gao, Wanlei Zhou, and S. Yu Philip. 2021. Balancing learning model privacy, fairness, and accuracy with early stopping criteria. IEEE Trans. Neural Netw. Learn. Syst. (2021).","journal-title":"IEEE Trans. Neural Netw. Learn. Syst."},{"key":"e_1_3_1_224_2","doi-asserted-by":"crossref","unstructured":"Yuheng Zhang Ruoxi Jia Hengzhi Pei Wenxiao Wang Bo Li and Dawn Song. 2020. The secret revealer: Generative model-inversion attacks against deep neural networks. arXiv:cs.LG\/1911.07135.","DOI":"10.1109\/CVPR42600.2020.00033"},{"key":"e_1_3_1_225_2","article-title":"IDLG: Improved deep leakage from gradients","author":"Zhao Bo","year":"2020","unstructured":"Bo Zhao, Konda Reddy Mopuri, and Hakan Bilen. 2020. IDLG: Improved deep leakage from gradients. 
arXiv preprint arXiv:2001.02610 (2020).","journal-title":"arXiv preprint arXiv:2001.02610"},{"key":"e_1_3_1_226_2","article-title":"Gender bias in coreference resolution: Evaluation and debiasing methods","author":"Zhao Jieyu","year":"2018","unstructured":"Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018. Gender bias in coreference resolution: Evaluation and debiasing methods. arXiv preprint arXiv:1804.06876 (2018).","journal-title":"arXiv preprint arXiv:1804.06876"},{"key":"e_1_3_1_227_2","article-title":"Federated learning with non-iid data","author":"Zhao Yue","year":"2018","unstructured":"Yue Zhao, Meng Li, Liangzhen Lai, Naveen Suda, Damon Civin, and Vikas Chandra. 2018. Federated learning with non-iid data. arXiv preprint arXiv:1806.00582 (2018).","journal-title":"arXiv preprint arXiv:1806.00582"},{"key":"e_1_3_1_228_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2021.07.098"},{"key":"e_1_3_1_229_2","article-title":"R-GAP: Recursive gradient attack on privacy","author":"Zhu Junyi","year":"2020","unstructured":"Junyi Zhu and Matthew Blaschko. 2020. R-GAP: Recursive gradient attack on privacy. arXiv preprint arXiv:2010.07733 (2020).","journal-title":"arXiv preprint arXiv:2010.07733"},{"key":"e_1_3_1_230_2","article-title":"Deep leakage from gradients","volume":"32","author":"Zhu Ligeng","year":"2019","unstructured":"Ligeng Zhu, Zhijian Liu, and Song Han. 2019. Deep leakage from gradients. Adv. Neural Inf. Process. Syst. 32 (2019).","journal-title":"Adv. Neural Inf. Process. 
Syst."},{"key":"e_1_3_1_231_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10506-016-9182-5"}],"container-title":["ACM Computing Surveys"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3606017","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3606017","content-type":"application\/pdf","content-version":"vor","intended-application":"syndication"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3606017","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T16:36:19Z","timestamp":1750178179000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3606017"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,9,15]]},"references-count":230,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2024,2,29]]}},"alternative-id":["10.1145\/3606017"],"URL":"https:\/\/doi.org\/10.1145\/3606017","relation":{},"ISSN":["0360-0300","1557-7341"],"issn-type":[{"value":"0360-0300","type":"print"},{"value":"1557-7341","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,9,15]]},"assertion":[{"value":"2022-04-27","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-06-22","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-09-15","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}