{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,25]],"date-time":"2026-03-25T14:25:39Z","timestamp":1774448739401,"version":"3.50.1"},"reference-count":47,"publisher":"Association for Computing Machinery (ACM)","issue":"4","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Proc. VLDB Endow."],"published-print":{"date-parts":[[2021,12]]},"abstract":"<jats:p>Federated Learning (FL) is a promising framework for multiple clients to learn a joint model without directly sharing the data. In addition to high utility of the joint model, rigorous privacy protection of the data and communication efficiency are important design goals. Many existing efforts achieve rigorous privacy by ensuring differential privacy for intermediate model parameters, however, they assume a uniform privacy parameter for all the clients. In practice, different clients may have different privacy requirements due to varying policies or preferences.<\/jats:p>\n          <jats:p>\n            In this paper, we focus on explicitly modeling and leveraging the heterogeneous privacy requirements of different clients and study how to optimize utility for the joint model while minimizing communication cost. As differentially private perturbations affect the model utility, a natural idea is to make better use of information submitted by the clients with higher privacy budgets (referred to as \"public\" clients, and the opposite as \"private\" clients). The challenge is how to use such information without biasing the joint model. We propose\n            <jats:bold>&lt;u&gt;P&lt;\/u&gt;<\/jats:bold>\n            rojected\n            <jats:bold>&lt;u&gt;F&lt;\/u&gt;<\/jats:bold>\n            ederated\n            <jats:bold>&lt;u&gt;A&lt;\/u&gt;<\/jats:bold>\n            veraging (PFA), which extracts the top singular subspace of the model updates submitted by \"public\" clients and utilizes them to project the model updates of \"private\" clients before aggregating them. We then propose communication-efficient PFA+, which allows \"private\" clients to upload projected model updates instead of original ones. 
Our experiments verify the utility boost of both algorithms compared to the baseline methods, whereby PFA+ achieves over 99% uplink communication reduction for \"private\" clients.\n          <\/jats:p>","DOI":"10.14778\/3503585.3503592","type":"journal-article","created":{"date-parts":[[2022,4,14]],"date-time":"2022-04-14T22:18:07Z","timestamp":1649974687000},"page":"828-840","source":"Crossref","is-referenced-by-count":60,"title":["Projected federated averaging with heterogeneous differential privacy"],"prefix":"10.14778","volume":"15","author":[{"given":"Junxu","family":"Liu","sequence":"first","affiliation":[{"name":"Renmin University of China"}]},{"given":"Jian","family":"Lou","sequence":"additional","affiliation":[{"name":"Xidian University"}]},{"given":"Li","family":"Xiong","sequence":"additional","affiliation":[{"name":"Emory University"}]},{"given":"Jinfei","family":"Liu","sequence":"additional","affiliation":[{"name":"Zhejiang University"}]},{"given":"Xiaofeng","family":"Meng","sequence":"additional","affiliation":[{"name":"Renmin University of China"}]}],"member":"320","published-online":{"date-parts":[[2022,4,14]]},"reference":[{"key":"e_1_2_1_1_1","doi-asserted-by":"publisher","DOI":"10.1145\/2976749.2978318"},{"key":"e_1_2_1_2_1","volume-title":"Federated learning via posterior averaging: A new perspective and practical algorithms. arXiv preprint arXiv:2010.05273","author":"Al-Shedivat M.","year":"2020","unstructured":"M. Al-Shedivat, J. Gillenwater, E. Xing, and A. Rostamizadeh. Federated learning via posterior averaging: A new perspective and practical algorithms. arXiv preprint arXiv:2010.05273, 2020."},{"issue":"2","key":"e_1_2_1_3_1","article-title":"Heterogeneous differential privacy","volume":"7","author":"Alaggan M.","year":"2016","unstructured":"M. Alaggan, S. Gambs, and A. Kermarrec. Heterogeneous differential privacy. J. Priv. Confidentiality, 7(2), 2016.","journal-title":"J. Priv. Confidentiality"},{"key":"e_1_2_1_4_1","first-page":"473","volume-title":"International Conference on Artificial Intelligence and Statistics","author":"Bellet A.","year":"2018","unstructured":"A. Bellet, R. Guerraoui, M. Taziki, and M. Tommasi. Personalized and private peer-to-peer machine learning. In International Conference on Artificial Intelligence and Statistics, pages 473--481. PMLR, 2018."},{"key":"e_1_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1109\/FOCS.2012.67"},{"key":"e_1_2_1_6_1","volume-title":"Towards federated learning at scale: System design. arXiv preprint arXiv:1902.01046","author":"Bonawitz K.","year":"2019","unstructured":"K. Bonawitz, H. Eichner, W. Grieskamp, D. Huba, A. Ingerman, V. Ivanov, C. Kiddon, J. Konecn\u00fd, S. Mazzocchi, H. B. McMahan, et al. Towards federated learning at scale: System design. 
arXiv preprint arXiv:1902.01046, 2019."},{"key":"e_1_2_1_7_1","volume-title":"Deep learning with gaussian differential privacy. arXiv preprint arXiv:1911.11607","author":"Bu Z.","year":"2019","unstructured":"Z. Bu, J. Dong, Q. Long, and W. J. Su. Deep learning with gaussian differential privacy. arXiv preprint arXiv:1911.11607, 2019."},{"key":"e_1_2_1_8_1","volume-title":"Federated intrusion detection for iot with heterogeneous cohort privacy. arXiv preprint arXiv:2101.09878","author":"Chathoth A. K.","year":"2021","unstructured":"A. K. Chathoth, A. Jagannatha, and S. Lee. Federated intrusion detection for iot with heterogeneous cohort privacy. arXiv preprint arXiv:2101.09878, 2021."},{"key":"e_1_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICDE.2016.7498248"},{"key":"e_1_2_1_10_1","doi-asserted-by":"publisher","DOI":"10.1145\/1536414.1536466"},{"key":"e_1_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.1007\/11681878_14"},{"key":"e_1_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1561\/0400000042"},{"key":"e_1_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.1109\/FOCS.2010.12"},{"key":"e_1_2_1_14_1","volume-title":"Differentially private federated learning: A client level perspective. arXiv preprint arXiv:1712.07557","author":"Geyer R. C.","year":"2017","unstructured":"R. C. Geyer, T. Klein, and M. Nabi. Differentially private federated learning: A client level perspective. arXiv preprint arXiv:1712.07557, 2017."},{"key":"e_1_2_1_15_1","volume-title":"Gradient descent happens in a tiny subspace. arXiv preprint arXiv:1812.04754","author":"Gur-Ari G.","year":"2018","unstructured":"G. Gur-Ari, D. A. Roberts, and E. Dyer. Gradient descent happens in a tiny subspace. arXiv preprint arXiv:1812.04754, 2018."},{"key":"e_1_2_1_16_1","volume-title":"On the convergence of local descent methods in federated learning. arXiv preprint arXiv:1910.14425","author":"Haddadpour F.","year":"2019","unstructured":"F. Haddadpour and M. Mahdavi. On the convergence of local descent methods in federated learning. arXiv preprint arXiv:1910.14425, 2019."},{"key":"e_1_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1145\/3133956.3134012"},{"key":"e_1_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICDE.2015.7113353"},{"key":"e_1_2_1_19_1","volume-title":"Advances and open problems in federated learning. arXiv preprint arXiv:1912.04977","author":"Kairouz P.","year":"2019","unstructured":"P. Kairouz, H. B. McMahan, B. Avent, A. Bellet, M. Bennis, A. N. Bhagoji, K. Bonawitz, Z. Charles, G. Cormode, R. Cummings, et al. Advances and open problems in federated learning. 
arXiv preprint arXiv:1912.04977, 2019."},{"key":"e_1_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.1073\/pnas.1611835114"},{"key":"e_1_2_1_21_1","volume-title":"Federated learning: Strategies for improving communication efficiency. arXiv preprint arXiv:1610.05492","author":"Konecn\u00fd J.","year":"2016","unstructured":"J. Konecn\u00fd, H. B. McMahan, F. X. Yu, P. Richt\u00e1rik, A. T. Suresh, and D. Bacon. Federated learning: Strategies for improving communication efficiency. arXiv preprint arXiv:1610.05492, 2016."},{"key":"e_1_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICDE48307.2020.00049"},{"key":"e_1_2_1_23_1","volume-title":"Learning multiple layers of features from tiny images","author":"Krizhevsky A.","year":"2009","unstructured":"A. Krizhevsky, G. Hinton, et al. Learning multiple layers of features from tiny images. 2009."},{"key":"e_1_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1109\/5.726791"},{"key":"e_1_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.1137\/1.9780898719628"},{"key":"e_1_2_1_26_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-57454-7_48"},{"key":"e_1_2_1_27_1","doi-asserted-by":"publisher","DOI":"10.1109\/MSP.2020.2975749"},{"key":"e_1_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.1137\/1.9781611976236.22"},{"key":"e_1_2_1_29_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-55753-3_34"},{"key":"e_1_2_1_30_1","volume-title":"Ensemble distillation for robust model fusion in federated learning. arXiv preprint arXiv:2006.07242","author":"Lin T.","year":"2020","unstructured":"T. Lin, L. Kong, S. U. Stich, and M. Jaggi. Ensemble distillation for robust model fusion in federated learning. arXiv preprint arXiv:2006.07242, 2020."},{"key":"e_1_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.5555\/2002472.2002491"},{"key":"e_1_2_1_32_1","first-page":"1273","volume-title":"Artificial Intelligence and Statistics","author":"McMahan B.","year":"2017","unstructured":"B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas. Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics, pages 1273--1282, 2017."},{"key":"e_1_2_1_33_1","volume-title":"Learning differentially private recurrent language models. arXiv preprint arXiv:1710.06963","author":"McMahan H. B.","year":"2017","unstructured":"H. B. McMahan, D. Ramage, K. Talwar, and L. Zhang. Learning differentially private recurrent language models. arXiv preprint arXiv:1710.06963, 2017."},{"key":"e_1_2_1_34_1","doi-asserted-by":"publisher","DOI":"10.1109\/WCNC45663.2020.9120532"},{"key":"e_1_2_1_35_1","volume-title":"arXiv preprint arXiv:1901.08244","author":"Papyan V.","year":"2019","unstructured":"V. 
Papyan. Measurements of three-level hierarchical structure in the outliers in the spectrum of deepnet hessians. arXiv preprint arXiv:1901.08244, 2019."},{"key":"e_1_2_1_36_1","volume-title":"A survey of privacy attacks in machine learning. arXiv preprint arXiv:2007.07646","author":"Rigaki M.","year":"2020","unstructured":"M. Rigaki and S. Garcia. A survey of privacy attacks in machine learning. arXiv preprint arXiv:2007.07646, 2020."},{"key":"e_1_2_1_37_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-58951-6_24"},{"key":"e_1_2_1_38_1","volume-title":"ICML","author":"Wang D.","year":"2019","unstructured":"D. Wang, C. Chen, and J. Xu. Differentially private empirical risk minimization with non-convex loss functions. In ICML, 2019."},{"key":"e_1_2_1_39_1","volume-title":"AISTATS. PMLR","author":"Wang Y.-X.","year":"2019","unstructured":"Y.-X. Wang, B. Balle, and S. P. Kasiviswanathan. Subsampled r\u00e9nyi differential privacy and analytical moments accountant. In AISTATS. PMLR, 2019."},{"key":"e_1_2_1_40_1","doi-asserted-by":"publisher","DOI":"10.1109\/INFOCOM.2019.8737416"},{"key":"e_1_2_1_41_1","doi-asserted-by":"publisher","DOI":"10.1109\/TIFS.2020.2988575"},{"key":"e_1_2_1_42_1","volume-title":"Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms","author":"Xiao H.","year":"2017","unstructured":"H. Xiao, K. Rasul, and R. Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms, 2017."},{"key":"e_1_2_1_43_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-57186-7_28"},{"key":"e_1_2_1_44_1","doi-asserted-by":"publisher","DOI":"10.1109\/SP.2019.00019"},{"key":"e_1_2_1_45_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.knosys.2019.07.035"},{"key":"e_1_2_1_46_1","volume-title":"9th International Conference on Learning Representations, ICLR 2021","author":"Zhou Y.","year":"2021","unstructured":"Y. Zhou, S. Wu, and A. Banerjee. Bypassing the ambient dimension: Private SGD with gradient subspace identification. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021, 2021."},{"key":"e_1_2_1_47_1","first-page":"14774","volume-title":"Advances in Neural Information Processing Systems","author":"Zhu L.","year":"2019","unstructured":"L. Zhu, Z. Liu, and S. Han. Deep leakage from gradients. 
In Advances in Neural Information Processing Systems, pages 14774--14784, 2019."}],"container-title":["Proceedings of the VLDB Endowment"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.14778\/3503585.3503592","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2022,12,28]],"date-time":"2022-12-28T10:27:20Z","timestamp":1672223240000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.14778\/3503585.3503592"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,12]]},"references-count":47,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2021,12]]}},"alternative-id":["10.14778\/3503585.3503592"],"URL":"https:\/\/doi.org\/10.14778\/3503585.3503592","relation":{},"ISSN":["2150-8097"],"issn-type":[{"value":"2150-8097","type":"print"}],"subject":[],"published":{"date-parts":[[2021,12]]}}}
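Note appended to this record: the abstract describes PFA as extracting the top singular subspace of the model updates submitted by "public" clients and projecting the "private" clients' updates onto that subspace before aggregation. The following is a minimal, illustrative NumPy sketch of that projection-and-average step, not the authors' implementation; the function name pfa_aggregate, the plain unweighted averaging, and the subspace dimension k are assumptions made here for illustration only.

import numpy as np

def pfa_aggregate(public_updates, private_updates, k=1):
    # Illustrative sketch (not the paper's code) of the projection idea in the
    # abstract: flattened model updates from "public" clients define a top-k
    # singular subspace, and "private" clients' updates are projected onto it
    # before everything is averaged.
    #   public_updates, private_updates: lists of 1-D arrays of equal length d
    #   k: number of singular directions to keep (assumed parameter)
    P = np.stack(public_updates)                    # (m, d) matrix of public updates
    _, _, vt = np.linalg.svd(P, full_matrices=False)
    V_k = vt[:k]                                    # top-k right singular vectors, (k, d)

    # Project each private update x onto span(V_k): x -> V_k^T (V_k x).
    projected = [V_k.T @ (V_k @ x) for x in private_updates]

    # Plain average over public and projected private updates
    # (a stand-in for the weighted federated averaging used in practice).
    all_updates = list(public_updates) + projected
    return np.mean(np.stack(all_updates), axis=0)

# Toy usage with random vectors standing in for DP-perturbed model updates.
rng = np.random.default_rng(0)
pub = [rng.normal(size=10) for _ in range(3)]
priv = [rng.normal(size=10) for _ in range(5)]
agg = pfa_aggregate(pub, priv, k=2)
print(agg.shape)  # (10,)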