{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,10]],"date-time":"2026-01-10T19:23:22Z","timestamp":1768073002116,"version":"3.49.0"},"reference-count":34,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2022,5,14]],"date-time":"2022-05-14T00:00:00Z","timestamp":1652486400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2022,5,14]],"date-time":"2022-05-14T00:00:00Z","timestamp":1652486400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"name":"IITP","award":["2021-0-01547-00"],"award-info":[{"award-number":["2021-0-01547-00"]}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Cluster Comput"],"published-print":{"date-parts":[[2023,2]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Federated Learning (FL) is a technology that facilitates a sophisticated way to train distributed data. As the FL does not expose sensitive data in the training process, it was considered privacy-safe deep learning. However, a few recent studies proved that it is possible to expose the hidden data by exploiting the shared models only. One common solution for the data exposure is differential privacy that adds noise to hinder such an attack, however, it inevitably involves a trade-off between privacy and utility. This paper demonstrates the effectiveness of image augmentation as an alternative defense strategy that has less impact of the trade-off. We conduct comprehensive experiments on the CIFAR-10 and CIFAR-100 datasets with 14 augmentations and 9 magnitudes. As a result, the best combination of augmentation and magnitude for each image class in the datasets was discovered. 
Also, our results show that a well-fitted augmentation strategy can outperform differential privacy.<\/jats:p>","DOI":"10.1007\/s10586-022-03596-1","type":"journal-article","created":{"date-parts":[[2022,5,14]],"date-time":"2022-05-14T15:03:02Z","timestamp":1652540582000},"page":"349-366","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":7,"title":["An empirical analysis of image augmentation against model inversion attack in federated learning"],"prefix":"10.1007","volume":"26","author":[{"given":"Seunghyeon","family":"Shin","sequence":"first","affiliation":[]},{"given":"Mallika","family":"Boyapati","sequence":"additional","affiliation":[]},{"given":"Kun","family":"Suo","sequence":"additional","affiliation":[]},{"given":"Kyungtae","family":"Kang","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6206-083X","authenticated-orcid":false,"given":"Junggab","family":"Son","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2022,5,14]]},"reference":[{"key":"3596_CR1","unstructured":"McMahan, B., Moore, E., Ramage, D., Hampson, S., y Arcas, B.A.: Communication-efficient learning of deep networks from decentralized data. Artif. Intell. Statist. pp. 1273\u20131282 (2017)"},{"key":"3596_CR2","doi-asserted-by":"crossref","unstructured":"Shokri, R., Stronati, M., Song, C., Shmatikov, V.: Membership inference attacks against machine learning models. In: 2017 IEEE Symposium on Security and Privacy (SP), pp. 3\u201318. IEEE (2017)","DOI":"10.1109\/SP.2017.41"},{"key":"3596_CR3","unstructured":"Fredrikson, M., Lantz, E., Jha, S., Lin, S., Page, D., Ristenpart, T.: Privacy in pharmacogenetics: An end-to-end case study of personalized warfarin dosing. In: 23rd USENIX Security Symposium (USENIX Security 14), pp. 
17\u201332 (2014)"},{"key":"3596_CR4","doi-asserted-by":"crossref","unstructured":"Fredrikson, M., Jha, S., Ristenpart, T.: Model inversion attacks that exploit confidence information and basic countermeasures. In: Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, pp. 1322\u20131333 (2015)","DOI":"10.1145\/2810103.2813677"},{"key":"3596_CR5","doi-asserted-by":"crossref","unstructured":"Hidano, S., Murakami, T., Katsumata, S., Kiyomoto, S., Hanaoka, G.: Model inversion attacks for prediction systems: without knowledge of non-sensitive attributes. In: 2017 15th Annual Conference on Privacy, Security and Trust (PST), pp. 115\u201311509. IEEE (2017)","DOI":"10.1109\/PST.2017.00023"},{"key":"3596_CR6","doi-asserted-by":"crossref","unstructured":"Zhu, L., Han, S.: Deep leakage from gradients. In: Federated Learning, pp. 17\u201331. Springer (2020)","DOI":"10.1007\/978-3-030-63076-8_2"},{"key":"3596_CR7","unstructured":"Zhao, B., Mopuri, K.R., Bilen, H.: idlg: Improved deep leakage from gradients. arXiv preprint arXiv:2001.02610 (2020)"},{"key":"3596_CR8","unstructured":"Geiping, J., Bauermeister, H., Dr\u00f6ge, H., Moeller, M.: Inverting gradients\u2013how easy is it to break privacy in federated learning? arXiv preprint arXiv:2003.14053 (2020)"},{"key":"3596_CR9","doi-asserted-by":"crossref","unstructured":"Zhang, Y., Jia, R., Pei, H., Wang, W., Li, B., Song, D.: The secret revealer: generative model-inversion attacks against deep neural networks. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 253\u2013261 (2020)","DOI":"10.1109\/CVPR42600.2020.00033"},{"key":"3596_CR10","doi-asserted-by":"crossref","unstructured":"He, Z., Zhang, T., Lee, R.B.: Model inversion attacks against collaborative inference. In: Proceedings of the 35th Annual Computer Security Applications Conference, pp. 
148\u2013162 (2019)","DOI":"10.1145\/3359789.3359824"},{"key":"3596_CR11","doi-asserted-by":"crossref","unstructured":"Dwork, C., McSherry, F., Nissim, K., Smith, A.: Calibrating noise to sensitivity in private data analysis. In: Theory of Cryptography Conference, pp. 265\u2013284. Springer (2006)","DOI":"10.1007\/11681878_14"},{"key":"3596_CR12","doi-asserted-by":"crossref","unstructured":"McSherry, F., Talwar, K.: Mechanism design via differential privacy. In: 48th Annual IEEE Symposium on Foundations of Computer Science (FOCS\u201907), pp. 94\u2013103. IEEE (2007)","DOI":"10.1109\/FOCS.2007.66"},{"key":"3596_CR13","doi-asserted-by":"crossref","unstructured":"Shokri, R., Shmatikov, V.: Privacy-preserving deep learning. In: Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, pp. 1310\u20131321 (2015)","DOI":"10.1145\/2810103.2813687"},{"key":"3596_CR14","doi-asserted-by":"crossref","unstructured":"Abadi, M., Chu, A., Goodfellow, I., McMahan, H.B., Mironov, I., Talwar, K., Zhang, L.: Deep learning with differential privacy. In: Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pp. 308\u2013318 (2016)","DOI":"10.1145\/2976749.2978318"},{"key":"3596_CR15","doi-asserted-by":"crossref","unstructured":"Phan, N., Wu, X., Hu, H., Dou, D.: Adaptive laplace mechanism: Differential privacy preservation in deep learning. In: 2017 IEEE International Conference on Data Mining (ICDM), pp. 385\u2013394. IEEE (2017)","DOI":"10.1109\/ICDM.2017.48"},{"key":"3596_CR16","doi-asserted-by":"crossref","unstructured":"Mironov, I.: R\u00e9nyi differential privacy. In: 2017 IEEE 30th Computer Security Foundations Symposium (CSF), pp. 263\u2013275 (2017). IEEE","DOI":"10.1109\/CSF.2017.11"},{"key":"3596_CR17","doi-asserted-by":"crossref","unstructured":"Truex, S., Liu, L., Chow, K.-H., Gursoy, M.E., Wei, W.: Ldp-fed: Federated learning with local differential privacy. 
In: Proceedings of the Third ACM International Workshop on Edge Systems, Analytics and Networking, pp. 61\u201366 (2020)","DOI":"10.1145\/3378679.3394533"},{"key":"3596_CR18","unstructured":"Girgis, A., Data, D., Diggavi, S., Kairouz, P., Suresh, A.T.: Shuffled model of differential privacy in federated learning. In: International Conference on Artificial Intelligence and Statistics, pp. 2521\u20132529. PMLR (2021)"},{"key":"3596_CR19","unstructured":"DeVries, T., Taylor, G.W.: Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552 (2017)"},{"key":"3596_CR20","doi-asserted-by":"crossref","unstructured":"Zhong, Z., Zheng, L., Kang, G., Li, S., Yang, Y.: Random erasing data augmentation. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 13001\u201313008 (2020)","DOI":"10.1609\/aaai.v34i07.7000"},{"key":"3596_CR21","unstructured":"Wu, R., Yan, S., Shan, Y., Dang, Q., Sun, G.: Deep image: Scaling up image recognition. 7(8) (2015). arXiv preprint arXiv:1501.02876"},{"key":"3596_CR22","doi-asserted-by":"crossref","unstructured":"Zheng, Z., Zheng, L., Yang, Y.: Unlabeled samples generated by gan improve the person re-identification baseline in vitro. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 3754\u20133762 (2017)","DOI":"10.1109\/ICCV.2017.405"},{"key":"3596_CR23","unstructured":"Song, Y., Sohl-Dickstein, J., Kingma, D.P., Kumar, A., Ermon, S., Poole, B.: Score-based generative modeling through stochastic differential equations. (2020) arXiv preprint arXiv:2011.13456"},{"key":"3596_CR24","doi-asserted-by":"publisher","first-page":"132","DOI":"10.1016\/j.neunet.2020.09.001","volume":"133","author":"J Lin","year":"2021","unstructured":"Lin, J., Li, Y., Yang, G.: Fpgan: face de-identification method with generative adversarial networks for social robots. Neural Netw. 
133, 132\u2013147 (2021)","journal-title":"Neural Netw."},{"key":"3596_CR25","doi-asserted-by":"crossref","unstructured":"Wu, H., Zheng, S., Zhang, J., Huang, K.: Gp-gan: Towards realistic high-resolution image blending. In: Proceedings of the 27th ACM International Conference on Multimedia, pp. 2487\u20132495 (2019)","DOI":"10.1145\/3343031.3350944"},{"key":"3596_CR26","doi-asserted-by":"crossref","unstructured":"Choi, Y., Uh, Y., Yoo, J., Ha, J.-W.: Stargan v2: Diverse image synthesis for multiple domains. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 8188\u20138197 (2020)","DOI":"10.1109\/CVPR42600.2020.00821"},{"key":"3596_CR27","doi-asserted-by":"crossref","unstructured":"Cubuk, E.D., Zoph, B., Mane, D., Vasudevan, V., Le, Q.V.: Autoaugment: learning augmentation policies from data. (2018) arXiv preprint arXiv:1805.09501","DOI":"10.1109\/CVPR.2019.00020"},{"key":"3596_CR28","unstructured":"Lim, S., Kim, I., Kim, T., Kim, C., Kim, S.: Fast autoaugment. arXiv preprint arXiv:1905.00397 (2019)"},{"key":"3596_CR29","doi-asserted-by":"crossref","unstructured":"Hataya, R., Zdenek, J., Yoshizoe, K., Nakayama, H.: Faster autoaugment: learning augmentation strategies using backpropagation. In: European Conference on Computer Vision, pp. 1\u201316. Springer (2020)","DOI":"10.1007\/978-3-030-58595-2_1"},{"key":"3596_CR30","doi-asserted-by":"crossref","unstructured":"Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: practical automated data augmentation with a reduced search space. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702\u2013703 (2020)","DOI":"10.1109\/CVPRW50498.2020.00359"},{"issue":"3\u20134","key":"3596_CR31","first-page":"211","volume":"9","author":"C Dwork","year":"2014","unstructured":"Dwork, C., Roth, A., et al.: The algorithmic foundations of differential privacy. Found. Trends Theor. Compt. Sci. 
9(3\u20134), 211\u2013407 (2014)","journal-title":"Found. Trends Theor. Compt. Sci."},{"issue":"1","key":"3596_CR32","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1186\/s40537-019-0197-0","volume":"6","author":"C Shorten","year":"2019","unstructured":"Shorten, C., Khoshgoftaar, T.M.: A survey on image data augmentation for deep learning. J. Big Data 6(1), 1\u201348 (2019)","journal-title":"J. Big Data"},{"key":"3596_CR33","unstructured":"Waites, C.: Pyvacy: towards practical differential privacy for deep learning (2019)"},{"key":"3596_CR34","unstructured":"Opacus PyTorch library. https:\/\/opacus.ai"}],"container-title":["Cluster Computing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10586-022-03596-1.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10586-022-03596-1\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10586-022-03596-1.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,2,27]],"date-time":"2023-02-27T14:17:57Z","timestamp":1677507477000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10586-022-03596-1"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,5,14]]},"references-count":34,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2023,2]]}},"alternative-id":["3596"],"URL":"https:\/\/doi.org\/10.1007\/s10586-022-03596-1","relation":{},"ISSN":["1386-7857","1573-7543"],"issn-type":[{"value":"1386-7857","type":"print"},{"value":"1573-7543","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,5,14]]},"assertion":[{"value":"6 November 
2021","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"5 February 2022","order":2,"name":"revised","label":"Revised","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"4 April 2022","order":3,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"14 May 2022","order":4,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}},{"value":"This paper has never been submitted or introduced in any form to any journals or conferences before.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethical approval"}}]}}