{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,9]],"date-time":"2026-01-09T14:38:07Z","timestamp":1767969487431,"version":"3.49.0"},"reference-count":0,"publisher":"Privacy Enhancing Technologies Symposium Advisory Board","issue":"1","license":[{"start":{"date-parts":[[2025,1,1]],"date-time":"2025-01-01T00:00:00Z","timestamp":1735689600000},"content-version":"unspecified","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["PoPETs"],"abstract":"<jats:p>Federated Learning (FL) enables the distributed training of a model across multiple data owners under the orchestration of a central server responsible for aggregating the models generated by the different clients. However, the original approach of FL has significant shortcomings related to privacy and fairness requirements. Specifically, the observation of the model updates may lead to privacy issues, such as membership inference attacks, while the use of imbalanced local datasets can introduce or amplify classification biases, especially for minority groups. In this work, we show that these biases can be exploited to increase the likelihood of privacy attacks against these groups. To do so, we propose a novel inference attack exploiting the knowledge of group fairness metrics during the training of the global model. Then, to thwart this attack, we define a fairness-aware encrypted-domain aggregation algorithm that is differentially-private by design thanks to the approximate precision loss of the threshold multi-key CKKS homomorphic encryption scheme. Finally, we demonstrate the good performance of our proposal both in terms of fairness and privacy through experiments conducted over three real datasets.<\/jats:p>",
"DOI":"10.56553\/popets-2025-0044","type":"journal-article","created":{"date-parts":[[2024,11,10]],"date-time":"2024-11-10T19:21:16Z","timestamp":1731266476000},"page":"845-865","source":"Crossref","is-referenced-by-count":5,"title":["Towards Privacy-preserving and Fairness-aware Federated Learning Framework"],"prefix":"10.56553","volume":"2025","author":[{"given":"Adda-Akram","family":"Bendoukha","sequence":"first","affiliation":[{"name":"Samovar, T\u00e9l\u00e9com SudParis, Institut Polytechnique de Paris, France"}]},{"given":"Didem","family":"Demirag","sequence":"additional","affiliation":[{"name":"Universit\u00e9 du Qu\u00e9bec \u00e0 Montr\u00e9al (UQAM), Canada"}]},{"given":"Nesrine","family":"Kaaniche","sequence":"additional","affiliation":[{"name":"Samovar, T\u00e9l\u00e9com SudParis, Institut Polytechnique de Paris, France"}]},{"given":"Aymen","family":"Boudguiga","sequence":"additional","affiliation":[{"name":"CEA List, Universit\u00e9 Paris-Saclay, France"}]},{"given":"Renaud","family":"Sirdey","sequence":"additional","affiliation":[{"name":"CEA List, Universit\u00e9 Paris-Saclay, France"}]},{"given":"S\u00e9bastien","family":"Gambs","sequence":"additional","affiliation":[{"name":"Samovar, T\u00e9l\u00e9com SudParis, Institut Polytechnique de Paris, France"}]}],"member":"35752","published-online":{"date-parts":[[2025,1]]},"container-title":["Proceedings on Privacy Enhancing Technologies"],
"original-title":[],"deposited":{"date-parts":[[2024,11,13]],"date-time":"2024-11-13T19:21:07Z","timestamp":1731525667000},"score":1,"resource":{"primary":{"URL":"https:\/\/petsymposium.org\/popets\/2025\/popets-2025-0044.php"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,1]]},"references-count":0,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2025,1]]}},"alternative-id":["10.56553\/popets-2025-0044"],"URL":"https:\/\/doi.org\/10.56553\/popets-2025-0044","relation":{},"ISSN":["2299-0984"],"issn-type":[{"value":"2299-0984","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,1]]}}}