{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,2,21]],"date-time":"2025-02-21T03:26:49Z","timestamp":1740108409965,"version":"3.37.3"},"reference-count":40,"publisher":"Springer Science and Business Media LLC","issue":"3","license":[{"start":{"date-parts":[[2022,3,10]],"date-time":"2022-03-10T00:00:00Z","timestamp":1646870400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2022,3,10]],"date-time":"2022-03-10T00:00:00Z","timestamp":1646870400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100005711","name":"Universit\u00e4t Hamburg","doi-asserted-by":"crossref","id":[{"id":"10.13039\/501100005711","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Informatik Spektrum"],"published-print":{"date-parts":[[2022,6]]},"abstract":"<jats:title>Zusammenfassung<\/jats:title><jats:p>Verfahren des maschinellen Lernens (ML) beruhen auf dem Prinzip, dass ein Algorithmus Muster und statistische Zusammenh\u00e4nge in Datens\u00e4tzen erkennt, diese in einem Modell abbildet und das Modell anschlie\u00dfend auf andere Datens\u00e4tze anwenden kann. Neben den gro\u00dfen Chancen, die maschinelle Lernverfahren mit sich bringen, birgt diese Technologie allerdings auch Risiken f\u00fcr die Privatsph\u00e4re, die in diesem Artikel in Form von Privatsph\u00e4reangriffen beleuchtet werden.<\/jats:p><jats:p>Angriffe wie <jats:italic>Model Inversion<\/jats:italic> zielen auf oftmals sensible Informationen ab, die sich w\u00e4hrend der Trainingsphase eines ML-Algorithmus ungewollt in einem Modell etabliert haben. Wenn Trainingsdaten Personenbezug aufweisen, insbesondere wenn es sich etwa um vertrauliche medizinische Daten handelt, kann dies problematisch f\u00fcr Betroffene sein.<\/jats:p><jats:p>Demgegen\u00fcber stehen Techniken des privatsph\u00e4refreundlichen maschinellen Lernens wie <jats:italic>Federated Learning<\/jats:italic>, die eine Risikominimierung f\u00fcr ein breites Spektrum an Privatsph\u00e4reverletzungen erm\u00f6glichen. Ausgew\u00e4hlte Techniken aus diesem Bereich werden in diesem Artikel ausf\u00fchrlich dargestellt.<\/jats:p><jats:p>Dies ist der zweite Teil einer zweiteiligen Artikelserie, deren Auftakt unter dem Titel <jats:italic>Grundlagen und Verfahren<\/jats:italic> bereits in der letzten Ausgabe des Informatik Spektrums erschienen ist.<\/jats:p>","DOI":"10.1007\/s00287-022-01440-9","type":"journal-article","created":{"date-parts":[[2022,3,10]],"date-time":"2022-03-10T14:03:07Z","timestamp":1646920987000},"page":"137-145","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":4,"title":["Privatsph\u00e4refreundliches maschinelles Lernen"],"prefix":"10.1007","volume":"45","author":[{"given":"Joshua","family":"Stock","sequence":"first","affiliation":[]},{"given":"Tom","family":"Petersen","sequence":"additional","affiliation":[]},{"given":"Christian-Alexander","family":"Behrendt","sequence":"additional","affiliation":[]},{"given":"Hannes","family":"Federrath","sequence":"additional","affiliation":[]},{"given":"Thea","family":"Kreutzburg","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2022,3,10]]},"reference":[{"key":"1440_CR1","volume-title":"Proceedings of the ACM SIGSAC conference on computer and communications security","author":"M Abadi","year":"2016","unstructured":"Abadi M, Chu A, Goodfellow I, McMahan HB, Mironov I, Talwar K, Zhang L (2016) Deep learning with differential privacy. In: Proceedings of the ACM SIGSAC conference on computer and communications security"},{"issue":"3","key":"1440_CR2","doi-asserted-by":"publisher","first-page":"137","DOI":"10.1504\/IJSN.2015.071829","volume":"10","author":"G Ateniese","year":"2015","unstructured":"Ateniese\u00a0G, Mancini\u00a0LV, Spognardi\u00a0A, Villani\u00a0A, Vitali\u00a0D, Felici\u00a0G (2015) Hacking smart machines with smarter ones: How to extract meaningful data from machine learning classifiers. Int J Secur Networks 10(3):137\u2013150","journal-title":"Int J Secur Networks"},{"key":"1440_CR3","volume-title":"Workshop on privacy-preserving machine learning in practice (PPMLP)","author":"F Boemer","year":"2020","unstructured":"Boemer F, Cammarota R, Demmler D, Schneider T, Yalame H (2020) MP2ML: A mixed-protocol machine learning framework for private inference. In: Workshop on privacy-preserving machine learning in practice (PPMLP)"},{"key":"1440_CR4","volume-title":"Proceedings of the ACM SIGSAC conference on computer and communications security","author":"K Bonawitz","year":"2017","unstructured":"Bonawitz K, Ivanov V, Kreuter B, Marcedone A, McMahan HB, Patel S, Ramage D, Segal A, Seth K (2017) Practical secure aggregation for privacy-preserving machine learning. In: Proceedings of the ACM SIGSAC conference on computer and communications security"},{"key":"1440_CR5","series-title":"IACR cryptology ePrint archive, report","volume-title":"Motion\u2011a framework for mixed-protocol multi-party computation","author":"L Braun","year":"2020","unstructured":"Braun L, Demmler D, Schneider T, Tkachenko O (2020) MOTION \u2013 a framework for mixed-protocol multi-party computation. IACR cryptology ePrint archive, report, Bd. 2020\/1137"},{"issue":"4","key":"1440_CR6","first-page":"139","volume":"2021","author":"J Cabrero-Holgueras","year":"2021","unstructured":"Cabrero-Holgueras\u00a0J, Pastrana\u00a0S (2021) SoK: Privacy-preserving computation techniques for deep learning. Proc Priv Enhanc Technol 2021(4):139\u2013162","journal-title":"Proc Priv Enhanc Technol"},{"key":"1440_CR7","volume-title":"International conference on machine learning","author":"CA Choquette-Choo","year":"2021","unstructured":"Choquette-Choo CA, Tramer F, Carlini N, Papernot N (2021) Label-only membership inference attacks. In: International conference on machine learning"},{"key":"1440_CR8","doi-asserted-by":"publisher","DOI":"10.1017\/CBO9781107337756","volume-title":"Secure multiparty computation","author":"R Cramer","year":"2015","unstructured":"Cramer R, Damg\u00e5rd IB et al (2015) Secure multiparty computation. Cambridge University Press, Cambridge"},{"key":"1440_CR9","volume-title":"Secure encrypted virtualization is unsecure","author":"ZH Du","year":"2017","unstructured":"Du ZH, Ying Z, Ma Z, Mai Y, Wang P, Liu J, Fang J (2017) Secure encrypted virtualization is unsecure (arXiv\u00a01712.05090)"},{"key":"1440_CR10","volume-title":"Theory of cryptography conference","author":"C Dwork","year":"2006","unstructured":"Dwork C, McSherry F, Nissim K, Smith A (2006) Calibrating noise to sensitivity in private data analysis. In: Theory of cryptography conference"},{"key":"1440_CR11","volume-title":"Proceedings of the ACM SIGSAC conference on computer and communications security","author":"M Fredrikson","year":"2015","unstructured":"Fredrikson M, Jha S, Ristenpart T (2015) Model inversion attacks that exploit confidence information and basic countermeasures. In: Proceedings of the ACM SIGSAC conference on computer and communications security"},{"key":"1440_CR12","volume-title":"Proceedings of the ACM SIGSAC conference on computer and communications security","author":"K Ganju","year":"2018","unstructured":"Ganju K, Wang Q, Yang W, Gunter CA, Borisov N (2018) Property inference attacks on fully connected neural networks using permutation invariant representations. In: Proceedings of the ACM SIGSAC conference on computer and communications security"},{"key":"1440_CR13","volume-title":"Proceedings of the forty-first annual ACM symposium on theory of computing","author":"C Gentry","year":"2009","unstructured":"Gentry C (2009) Fully homomorphic encryption using ideal lattices. In: Proceedings of the forty-first annual ACM symposium on theory of computing"},{"key":"1440_CR14","volume-title":"Explaining and harnessing adversarial examples","author":"IJ Goodfellow","year":"2015","unstructured":"Goodfellow IJ, Shlens J, Szegedy C (2015) Explaining and harnessing adversarial examples (arXiv\u00a01412.6572)"},{"key":"1440_CR15","volume-title":"International conference on information security and cryptology","author":"T Graepel","year":"2012","unstructured":"Graepel T, Lauter K, Naehrig M (2012) ML confidential: Machine learning on encrypted data. In: International conference on information security and cryptology"},{"key":"1440_CR16","volume-title":"Federated learning for mobile keyboard prediction","author":"A Hard","year":"2019","unstructured":"Hard A, Rao K, Mathews R, Ramaswamy S, Beaufays F, Augenstein S, Eichner H, Kiddon C, Ramage D (2019) Federated learning for mobile keyboard prediction (arXiv\u00a01811.03604)"},{"key":"1440_CR17","volume-title":"Annual Computer Security Applications Conference (ACSAC)","author":"E Hesamifard","year":"2016","unstructured":"Hesamifard E, Takabi H, Ghasemi M (2016) CryptoDL: towards deep learning over encrypted data. In: Annual Computer Security Applications Conference (ACSAC)"},{"key":"1440_CR18","volume-title":"Proceedings of the ACM SIGSAC Conference on Computer and Communications Security","author":"B Hitaj","year":"2017","unstructured":"Hitaj B, Ateniese G, Perez-Cruz F (2017) Deep models under the GAN: information leakage from collaborative deep learning. In: Proceedings of the ACM SIGSAC Conference on Computer and Communications Security"},{"key":"1440_CR19","unstructured":"Hussain SS, Wang P (2022) PrivacyRaven \u2013 privacy testing for deep learning. https:\/\/github.com\/trailofbits\/PrivacyRaven. Zugegriffen: 27. Jan. 2022"},{"issue":"6","key":"1440_CR20","doi-asserted-by":"publisher","first-page":"305","DOI":"10.1038\/s42256-020-0186-1","volume":"2","author":"GA Kaissis","year":"2020","unstructured":"Kaissis\u00a0GA, Makowski\u00a0MR, R\u00fcckert\u00a0D, Braren\u00a0RF (2020) Secure, privacy-preserving and federated machine learning in medical imaging. Nat Mach Intell 2(6):305\u2013311","journal-title":"Nat Mach Intell"},{"key":"1440_CR21","volume-title":"Proceedings of the ACM SIGSAC Conference on Computer and Communications Security","author":"M Keller","year":"2020","unstructured":"Keller M (2020) MP-SPDZ: A versatile framework for multi-party computation. In: Proceedings of the ACM SIGSAC Conference on Computer and Communications Security"},{"key":"1440_CR22","first-page":"1605","volume-title":"USENIX Security Symposium","author":"K Leino","year":"2020","unstructured":"Leino K, Fredrikson M (2020) Stolen memories: Leveraging model memorization for calibrated white-box membership inference. In: USENIX Security Symposium, S. 1605\u20131622"},{"issue":"3","key":"1440_CR23","doi-asserted-by":"publisher","first-page":"50","DOI":"10.1109\/MSP.2020.2975749","volume":"37","author":"T Li","year":"2020","unstructured":"Li\u00a0T, Sahu\u00a0AK, Talwalkar\u00a0A, Smith\u00a0V (2020) Federated learning: Challenges, methods, and future directions. IEEE Signal Process Mag 37(3):50\u201360","journal-title":"IEEE Signal Process Mag"},{"key":"1440_CR24","volume-title":"Proceedings of the ACM SIGSAC conference on computer and communications security","author":"J Liu","year":"2017","unstructured":"Liu J, Juuti M, Lu Y, Asokan N (2017) Oblivious neural network predictions via minionn transformations. In: Proceedings of the ACM SIGSAC conference on computer and communications security"},{"key":"1440_CR25","volume-title":"Artificial intelligence and statistics","author":"B McMahan","year":"2017","unstructured":"McMahan B, Moore E, Ramage D, Hampson S, y Arcas BA (2017) Communication-efficient learning of deep networks from decentralized data. In: Artificial intelligence and statistics"},{"key":"1440_CR26","volume-title":"IEEE symposium on security and privacy (SP)","author":"L Melis","year":"2019","unstructured":"Melis L, Song C, De Cristofaro E, Shmatikov V (2019) Exploiting unintended feature leakage in collaborative learning. In: IEEE symposium on security and privacy (SP)"},{"key":"1440_CR27","volume-title":"International conference on cryptographic hardware and embedded systems","author":"A Moghimi","year":"2017","unstructured":"Moghimi A, Irazoqui G, Eisenbarth T (2017) Cachezoom: How sgx amplifies the power of cache attacks. In: International conference on cryptographic hardware and embedded systems"},{"key":"1440_CR28","volume-title":"IEEE symposium on security and privacy (SP)","author":"P Mohassel","year":"2017","unstructured":"Mohassel P, Zhang Y (2017) SecureML: A system for scalable privacy-preserving machine learning. In: IEEE symposium on security and privacy (SP)"},{"key":"1440_CR29","volume-title":"Ml privacy meter: Aiding regulatory compliance by quantifying the privacy risks of machine learning","author":"SK Murakonda","year":"2020","unstructured":"Murakonda SK, Shokri R (2020) ML privacy meter: Aiding regulatory compliance by quantifying the privacy risks of machine learning (arXiv\u00a02007.09339)"},{"key":"1440_CR30","volume-title":"25th USENIX Security Symposium","author":"O Ohrimenko","year":"2016","unstructured":"Ohrimenko O, Schuster F, Fournet C, Mehta A, Nowozin S, Vaswani K, Costa M (2016) Oblivious multi-party machine learning on trusted processors. In: 25th USENIX Security Symposium"},{"key":"1440_CR31","volume-title":"ACM Asia conference on computer and communications security","author":"N Papernot","year":"2017","unstructured":"Papernot N, McDaniel P, Goodfellow I, Jha S, Celik ZB, Swami A (2017) Practical black-box attacks against machine learning. In: ACM Asia conference on computer and communications security"},{"key":"1440_CR32","volume-title":"Differentially-private \u201cdraw and discard\u201d machine learning","author":"V Pihur","year":"2018","unstructured":"Pihur V, Korolova A, Liu F, Sankuratripati S, Yung M, Huang D, Zeng R (2018) Differentially-private \u201cdraw and discard\u201d machine learning (arXiv\u00a01807.04369)"},{"key":"1440_CR33","volume-title":"A survey of privacy attacks in machine learning","author":"M Rigaki","year":"2020","unstructured":"Rigaki M, Garcia S (2020) A survey of privacy attacks in machine learning (arXiv\u00a02007.07646)"},{"issue":"11","key":"1440_CR34","first-page":"169","volume":"4","author":"RL Rivest","year":"1978","unstructured":"Rivest\u00a0RL, Adleman\u00a0L, Dertouzos\u00a0ML et\u00a0al (1978) On data banks and privacy homomorphisms. Found Secur Comput 4(11):169\u2013180","journal-title":"Found Secur Comput"},{"key":"1440_CR35","volume-title":"IEEE Trustcom\/BigDataSE\/ISPA","author":"M Sabt","year":"2015","unstructured":"Sabt M, Achemlal M, Bouabdallah A (2015) Trusted execution environment: what it is, and what it is not. In: IEEE Trustcom\/BigDataSE\/ISPA"},{"key":"1440_CR36","volume-title":"IEEE symposium on security and privacy (SP)","author":"R Shokri","year":"2017","unstructured":"Shokri R, Stronati M, Song C, Shmatikov V (2017) Membership inference attacks against machine learning models. In: IEEE symposium on security and privacy (SP)"},{"key":"1440_CR37","first-page":"355","volume-title":"Computer security foundations symposium (CSF)","author":"X Wu","year":"2016","unstructured":"Wu X, Fredrikson M, Jha S, Naughton JF (2016) A methodology for formalizing model-inversion attacks. In: Computer security foundations symposium (CSF), S 355\u2013370"},{"key":"1440_CR38","volume-title":"IEEE symposium on foundations of computer science","author":"AC Yao","year":"1986","unstructured":"Yao AC (1986) How to generate and exchange secrets. In: IEEE symposium on foundations of computer science"},{"key":"1440_CR39","volume-title":"IEEE international conference on image processing (ICIP)","author":"Z You","year":"2019","unstructured":"You Z, Ye J, Li K, Xu Z, Wang P (2019) Adversarial noise layer: Regularize neural network by adding noise. In: IEEE international conference on image processing (ICIP)"},{"key":"1440_CR40","volume-title":"IEEE\/CVF conference on computer vision and pattern recognition","author":"Y Zhang","year":"2020","unstructured":"Zhang Y, Jia R, Pei H, Wang W, Li B, Song D (2020) The secret revealer: Generative model-inversion attacks against deep neural networks. In: IEEE\/CVF conference on computer vision and pattern recognition"}],"container-title":["Informatik Spektrum"],"original-title":[],"language":"de","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s00287-022-01440-9.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s00287-022-01440-9\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s00287-022-01440-9.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2022,6,13]],"date-time":"2022-06-13T08:03:54Z","timestamp":1655107434000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s00287-022-01440-9"}},"subtitle":["Teil 2: Privatsph\u00e4reangriffe und Privacy-Preserving Machine Learning"],"short-title":[],"issued":{"date-parts":[[2022,3,10]]},"references-count":40,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2022,6]]}},"alternative-id":["1440"],"URL":"https:\/\/doi.org\/10.1007\/s00287-022-01440-9","relation":{},"ISSN":["0170-6012","1432-122X"],"issn-type":[{"type":"print","value":"0170-6012"},{"type":"electronic","value":"1432-122X"}],"subject":[],"published":{"date-parts":[[2022,3,10]]},"assertion":[{"value":"8 February 2022","order":1,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"10 March 2022","order":2,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}}]}}