{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,17]],"date-time":"2026-04-17T22:16:45Z","timestamp":1776464205647,"version":"3.51.2"},"publisher-location":"New York, NY, USA","reference-count":78,"publisher":"ACM","license":[{"start":{"date-parts":[[2022,11,7]],"date-time":"2022-11-07T00:00:00Z","timestamp":1667779200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2022,11,7]]},"DOI":"10.1145\/3548606.3560554","type":"proceedings-article","created":{"date-parts":[[2022,11,7]],"date-time":"2022-11-07T11:41:28Z","timestamp":1667821288000},"page":"2779-2792","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":55,"title":["Truth Serum"],"prefix":"10.1145","author":[{"given":"Florian","family":"Tram\u00e8r","sequence":"first","affiliation":[{"name":"ETH Z\u00fcrich, Z\u00fcrich , Switzerland"}]},{"given":"Reza","family":"Shokri","sequence":"additional","affiliation":[{"name":"National University of Singapore, Singapore, Singapore"}]},{"given":"Ayrton","family":"San Joaquin","sequence":"additional","affiliation":[{"name":"Yale-NUS College, Singapore, Singapore"}]},{"given":"Hoang","family":"Le","sequence":"additional","affiliation":[{"name":"Oregon State University, Corvallis, OR, USA"}]},{"given":"Matthew","family":"Jagielski","sequence":"additional","affiliation":[{"name":"Google, Cambridge, MA, USA"}]},{"given":"Sanghyun","family":"Hong","sequence":"additional","affiliation":[{"name":"Oregon State University, Corvallis, OR, USA"}]},{"given":"Nicholas","family":"Carlini","sequence":"additional","affiliation":[{"name":"Google, Mountain View, CA, USA"}]}],"member":"320","published-online":{"date-parts":[[2022,11,7]]},"reference":[{"key":"e_1_3_2_1_1_1","doi-asserted-by":"publisher","DOI":"10.1145\/2976749.2978318"},{"issue":"5","key":"e_1_3_2_1_2_1","first-page":"1333","article-title":"Privacy-preserving deep learning via additively homomorphic encryption","volume":"13","author":"Aono Yoshinori","year":"2017","unstructured":"Yoshinori Aono , Takuya Hayashi , Lihua Wang , and Shiho Moriai . Privacy-preserving deep learning via additively homomorphic encryption . IEEE Transactions on Information Forensics and Security , 13 ( 5 ): 1333 -- 1345 , 2017 . Yoshinori Aono, Takuya Hayashi, Lihua Wang, and Shiho Moriai. Privacy-preserving deep learning via additively homomorphic encryption. IEEE Transactions on Information Forensics and Security, 13(5):1333--1345, 2017.","journal-title":"IEEE Transactions on Information Forensics and Security"},{"key":"e_1_3_2_1_3_1","first-page":"1505","volume-title":"USENIX Security Symposium","author":"Bagdasaryan Eugene","year":"2021","unstructured":"Eugene Bagdasaryan and Vitaly Shmatikov . Blind backdoors in deep learning models . In USENIX Security Symposium , pages 1505 -- 1521 , 2021 . Eugene Bagdasaryan and Vitaly Shmatikov. Blind backdoors in deep learning models. In USENIX Security Symposium, pages 1505--1521, 2021."},{"key":"e_1_3_2_1_4_1","first-page":"2938","volume-title":"International Conference on Artificial Intelligence and Statistics","author":"Bagdasaryan Eugene","year":"2020","unstructured":"Eugene Bagdasaryan , Andreas Veit , Yiqing Hua , Deborah Estrin , and Vitaly Shmatikov . How to backdoor federated learning . In International Conference on Artificial Intelligence and Statistics , pages 2938 -- 2948 . PMLR, 2020 . Eugene Bagdasaryan, Andreas Veit, Yiqing Hua, Deborah Estrin, and Vitaly Shmatikov. How to backdoor federated learning. In International Conference on Artificial Intelligence and Statistics, pages 2938--2948. PMLR, 2020."},{"key":"e_1_3_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1145\/3442188.3445922"},{"key":"e_1_3_2_1_6_1","first-page":"634","volume-title":"International Conference on Machine Learning","author":"Bhagoji Arjun Nitin","year":"2019","unstructured":"Arjun Nitin Bhagoji , Supriyo Chakraborty , Prateek Mittal , and Seraphin Calo . Analyzing federated learning through an adversarial lens . In International Conference on Machine Learning , pages 634 -- 643 . PMLR, 2019 . Arjun Nitin Bhagoji, Supriyo Chakraborty, Prateek Mittal, and Seraphin Calo. Analyzing federated learning through an adversarial lens. In International Conference on Machine Learning, pages 634--643. PMLR, 2019."},{"key":"e_1_3_2_1_7_1","volume-title":"International Conference on Machine Learning","author":"Biggio Battista","year":"2012","unstructured":"Battista Biggio , Blaine Nelson , and Pavel Laskov . Poisoning attacks against support vector machines . In International Conference on Machine Learning , 2012 . Battista Biggio, Blaine Nelson, and Pavel Laskov. Poisoning attacks against support vector machines. In International Conference on Machine Learning, 2012."},{"key":"e_1_3_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.1007\/BFb0055716"},{"key":"e_1_3_2_1_9_1","volume-title":"Ilia Shumailov, and Nicolas Papernot. When the curious abandon honesty: Federated learning is not private. arXiv preprint arXiv:2112.02918","author":"Boenisch Franziska","year":"2021","unstructured":"Franziska Boenisch , Adam Dziedzic , Roei Schuster , Ali Shahin Shamsabadi , Ilia Shumailov, and Nicolas Papernot. When the curious abandon honesty: Federated learning is not private. arXiv preprint arXiv:2112.02918 , 2021 . Franziska Boenisch, Adam Dziedzic, Roei Schuster, Ali Shahin Shamsabadi, Ilia Shumailov, and Nicolas Papernot. When the curious abandon honesty: Federated learning is not private. arXiv preprint arXiv:2112.02918, 2021."},{"key":"e_1_3_2_1_10_1","unstructured":"Dan Boneh and Victor Shoup. A Graduate Course in Applied Cryptography. http:\/\/toc.cryptobook.us\/ 2020.  Dan Boneh and Victor Shoup. A Graduate Course in Applied Cryptography. http:\/\/toc.cryptobook.us\/ 2020."},{"key":"e_1_3_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.1109\/SP46214.2022.9833649"},{"key":"e_1_3_2_1_12_1","volume-title":"Quantifying memorization across neural language models. arXiv preprint arXiv:2202.07646","author":"Carlini Nicholas","year":"2022","unstructured":"Nicholas Carlini , Daphne Ippolito , Matthew Jagielski , Katherine Lee , Florian Tramer , and Chiyuan Zhang . Quantifying memorization across neural language models. arXiv preprint arXiv:2202.07646 , 2022 . Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramer, and Chiyuan Zhang. Quantifying memorization across neural language models. arXiv preprint arXiv:2202.07646, 2022."},{"key":"e_1_3_2_1_13_1","first-page":"267","volume-title":"USENIX Security Symposium","author":"Carlini Nicholas","year":"2019","unstructured":"Nicholas Carlini , Chang Liu , \u00dalfar Erlingsson , Jernej Kos , and Dawn Song . The se- cret sharer: Evaluating and testing unintended memorization in neural networks . In USENIX Security Symposium , pages 267 -- 284 , 2019 . Nicholas Carlini, Chang Liu, \u00dalfar Erlingsson, Jernej Kos, and Dawn Song. The se- cret sharer: Evaluating and testing unintended memorization in neural networks. In USENIX Security Symposium, pages 267--284, 2019."},{"key":"e_1_3_2_1_14_1","volume-title":"USENIX Security Symposium","author":"Carlini Nicholas","year":"2021","unstructured":"Nicholas Carlini , Florian Tramer , Eric Wallace , Matthew Jagielski , Ariel Herbert- Voss , Katherine Lee , Adam Roberts , Tom Brown , Dawn Song , Ulfar Erlingsson , Extracting training data from large language models . In USENIX Security Symposium , 2021 . Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert- Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, et al. Extracting training data from large language models. In USENIX Security Symposium, 2021."},{"key":"e_1_3_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.1145\/3055399.3055491"},{"key":"e_1_3_2_1_16_1","first-page":"1596","volume-title":"International Conference on Machine Learning","author":"Diakonikolas Ilias","year":"2019","unstructured":"Ilias Diakonikolas , Gautam Kamath , Daniel Kane , Jerry Li , Jacob Steinhardt , and Alistair Stewart . Sever : A robust meta-algorithm for stochastic optimization . In International Conference on Machine Learning , pages 1596 -- 1606 . PMLR, 2019 . Ilias Diakonikolas, Gautam Kamath, Daniel Kane, Jerry Li, Jacob Steinhardt, and Alistair Stewart. Sever: A robust meta-algorithm for stochastic optimization. In International Conference on Machine Learning, pages 1596--1606. PMLR, 2019."},{"key":"e_1_3_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1007\/11681878_14"},{"key":"e_1_3_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.1145\/3357713.3384290"},{"key":"e_1_3_2_1_19_1","volume-title":"Decepticons: Corrupted transformers breach privacy in federated learning for language models. arXiv preprint arXiv:2201.12675","author":"Fowl Liam","year":"2022","unstructured":"Liam Fowl , Jonas Geiping , Steven Reich , Yuxin Wen , Wojtek Czaja , Micah Goldblum , and Tom Goldstein . Decepticons: Corrupted transformers breach privacy in federated learning for language models. arXiv preprint arXiv:2201.12675 , 2022 . Liam Fowl, Jonas Geiping, Steven Reich, Yuxin Wen, Wojtek Czaja, Micah Goldblum, and Tom Goldstein. Decepticons: Corrupted transformers breach privacy in federated learning for language models. arXiv preprint arXiv:2201.12675, 2022."},{"key":"e_1_3_2_1_20_1","first-page":"34","article-title":"Adversarial examples make strong poisons","author":"Fowl Liam","year":"2021","unstructured":"Liam Fowl , Micah Goldblum , Ping-yeh Chiang, Jonas Geiping , Wojciech Czaja , and Tom Goldstein . Adversarial examples make strong poisons . Advances in Neural Information Processing Systems , 34 , 2021 . Liam Fowl, Micah Goldblum, Ping-yeh Chiang, Jonas Geiping, Wojciech Czaja, and Tom Goldstein. Adversarial examples make strong poisons. Advances in Neural Information Processing Systems, 34, 2021.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.1145\/2810103.2813677"},{"key":"e_1_3_2_1_22_1","volume-title":"USENIX Security Symposium","author":"Fredrikson Matthew","year":"2014","unstructured":"Matthew Fredrikson , Eric Lantz , Somesh Jha , Simon Lin , David Page , and Thomas Ristenpart . Privacy in pharmacogenetics: An end-to-end case study of personalized warfarin dosing . In USENIX Security Symposium , 2014 . Matthew Fredrikson, Eric Lantz, Somesh Jha, Simon Lin, David Page, and Thomas Ristenpart. Privacy in pharmacogenetics: An end-to-end case study of personalized warfarin dosing. In USENIX Security Symposium, 2014."},{"key":"e_1_3_2_1_23_1","volume-title":"International Conference on Learning Representations","author":"Geiping Jonas","year":"2021","unstructured":"Jonas Geiping , Liam Fowl , Ronny Huang , Wojciech Czaja , Gavin Taylor , Michael Moeller , and Tom Goldstein . Witches' brew : Industrial scale data poisoning via gradient matching . In International Conference on Learning Representations , 2021 . Jonas Geiping, Liam Fowl, Ronny Huang, Wojciech Czaja, Gavin Taylor, Michael Moeller, and Tom Goldstein. Witches' brew: Industrial scale data poisoning via gradient matching. In International Conference on Learning Representations, 2021."},{"key":"e_1_3_2_1_24_1","unstructured":"Yoel Gluck Neal Harris and Angelo Prado. Breach: reviving the crime attack. http:\/\/breachattack.com 2013.  Yoel Gluck Neal Harris and Angelo Prado. Breach: reviving the crime attack. http:\/\/breachattack.com 2013."},{"key":"e_1_3_2_1_25_1","volume-title":"ACM SIGACT Symposium on Theory of Computing","author":"Goldreich Oded","year":"1987","unstructured":"Oded Goldreich , Silvio Micali , and Avi Wigderson . How to play any mental game, or a completeness theorem for protocols with honest majority . In ACM SIGACT Symposium on Theory of Computing , 1987 . Oded Goldreich, Silvio Micali, and Avi Wigderson. How to play any mental game, or a completeness theorem for protocols with honest majority. In ACM SIGACT Symposium on Theory of Computing, 1987."},{"key":"e_1_3_2_1_26_1","first-page":"7","article-title":"Evaluating backdooring attacks on deep neural networks","author":"Gu Tianyu","year":"2019","unstructured":"Tianyu Gu , Kang Liu , Brendan Dolan-Gavitt , and Siddharth Garg . BadNets : Evaluating backdooring attacks on deep neural networks . IEEE Access , 7 , 2019 . Tianyu Gu, Kang Liu, Brendan Dolan-Gavitt, and Siddharth Garg. BadNets: Evaluating backdooring attacks on deep neural networks. IEEE Access, 7, 2019.","journal-title":"IEEE Access"},{"key":"e_1_3_2_1_27_1","volume-title":"Strong baseline defenses against clean-label poisoning attacks. https:\/\/openreview.net\/forum?id=B1xgv0NtwH","author":"Gupta Neal","year":"2019","unstructured":"Neal Gupta , W Ronny Huang , Liam Fowl , Chen Zhu , Soheil Feizi , Tom Goldstein , and John Dickerson . Strong baseline defenses against clean-label poisoning attacks. https:\/\/openreview.net\/forum?id=B1xgv0NtwH , 2019 . Neal Gupta, W Ronny Huang, Liam Fowl, Chen Zhu, Soheil Feizi, Tom Goldstein, and John Dickerson. Strong baseline defenses against clean-label poisoning attacks. https:\/\/openreview.net\/forum?id=B1xgv0NtwH, 2019."},{"key":"e_1_3_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.1145\/3133956.3134012"},{"key":"e_1_3_2_1_29_1","volume-title":"Resolving individuals contributing trace amounts of DNA to highly complex mixtures using high-density SNP genotyping microarrays. PLoS genetics","author":"Homer Nils","year":"2008","unstructured":"Nils Homer , Szabolcs Szelinger , Margot Redman , David Duggan , Waibhav Tembe , Jill Muehling , John Pearson , Dietrich Stephan , Stanley Nelson , and David Craig . Resolving individuals contributing trace amounts of DNA to highly complex mixtures using high-density SNP genotyping microarrays. PLoS genetics , 2008 . Nils Homer, Szabolcs Szelinger, Margot Redman, David Duggan, Waibhav Tembe, Jill Muehling, John Pearson, Dietrich Stephan, Stanley Nelson, and David Craig. Resolving individuals contributing trace amounts of DNA to highly complex mixtures using high-density SNP genotyping microarrays. PLoS genetics, 2008."},{"key":"e_1_3_2_1_30_1","doi-asserted-by":"publisher","DOI":"10.1145\/1866307.1866376"},{"key":"e_1_3_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.1109\/SP.2018.00057"},{"key":"e_1_3_2_1_32_1","first-page":"22205","article-title":"Auditing differentially private machine learning: How private is private SGD?","volume":"33","author":"Jagielski Matthew","year":"2020","unstructured":"Matthew Jagielski , Jonathan Ullman , and Alina Oprea . Auditing differentially private machine learning: How private is private SGD? Advances in Neural Information Processing Systems , 33 : 22205 - 22216 , 2020 . Matthew Jagielski, Jonathan Ullman, and Alina Oprea. Auditing differentially private machine learning: How private is private SGD? Advances in Neural Information Processing Systems, 33:22205-22216, 2020.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_1_33_1","doi-asserted-by":"publisher","DOI":"10.2478\/popets-2021-0031"},{"key":"e_1_3_2_1_34_1","first-page":"259","volume-title":"ACM SIGSAC Conference on Computer and Communications Security","author":"Jia Jinyuan","year":"2019","unstructured":"Jinyuan Jia , Ahmed Salem , Michael Backes , Yang Zhang , and Neil Zhenqiang Gong . MemGuard : Defending against black-box membership inference attacks via adversarial examples . In ACM SIGSAC Conference on Computer and Communications Security , pages 259 -- 274 , 2019 . Jinyuan Jia, Ahmed Salem, Michael Backes, Yang Zhang, and Neil Zhenqiang Gong. MemGuard: Defending against black-box membership inference attacks via adversarial examples. In ACM SIGSAC Conference on Computer and Communications Security, pages 259--274, 2019."},{"key":"e_1_3_2_1_35_1","doi-asserted-by":"publisher","DOI":"10.1561\/2200000083"},{"key":"e_1_3_2_1_36_1","doi-asserted-by":"publisher","DOI":"10.1007\/3-540-45661-9_21"},{"key":"e_1_3_2_1_37_1","volume-title":"UCI machine learning repository: Adult data set. https:\/\/archive.ics.uci.edu\/ml\/machine-learning-databases\/adult","author":"Kohavi Ronny","year":"1996","unstructured":"Ronny Kohavi and Barry Becker . UCI machine learning repository: Adult data set. https:\/\/archive.ics.uci.edu\/ml\/machine-learning-databases\/adult , 1996 . Ronny Kohavi and Barry Becker. UCI machine learning repository: Adult data set. https:\/\/archive.ics.uci.edu\/ml\/machine-learning-databases\/adult, 1996."},{"key":"e_1_3_2_1_38_1","volume-title":"Learning multiple layers of features from tiny images","author":"Krizhevsky Alex","year":"2009","unstructured":"Alex Krizhevsky and Geoffrey Hinton . Learning multiple layers of features from tiny images , 2009 . Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images, 2009."},{"key":"e_1_3_2_1_39_1","doi-asserted-by":"publisher","DOI":"10.14722\/ndss.2018.23291"},{"key":"e_1_3_2_1_40_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCD.2017.16"},{"key":"e_1_3_2_1_41_1","doi-asserted-by":"publisher","DOI":"10.1109\/EuroSP48549.2020.00040"},{"key":"e_1_3_2_1_42_1","volume-title":"International Conference on Learning Representations","author":"Madry Aleksander","year":"2018","unstructured":"Aleksander Madry , Aleksandar Makelov , Ludwig Schmidt , Dimitris Tsipras , and Adrian Vladu . Towards deep learning models resistant to adversarial attacks . In International Conference on Learning Representations , 2018 . Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations, 2018."},{"key":"e_1_3_2_1_43_1","doi-asserted-by":"publisher","DOI":"10.1109\/SP46214.2022.9833623"},{"key":"e_1_3_2_1_44_1","first-page":"34","article-title":"Antipodes of label differential privacy","author":"Esmaeili Mani Malek","year":"2021","unstructured":"Mani Malek Esmaeili , Ilya Mironov , Karthik Prasad , Igor Shilov , and Florian Tramer . Antipodes of label differential privacy : PATE and ALIBI. Advances in Neural Information Processing Systems , 34 , 2021 . Mani Malek Esmaeili, Ilya Mironov, Karthik Prasad, Igor Shilov, and Florian Tramer. Antipodes of label differential privacy: PATE and ALIBI. Advances in Neural Information Processing Systems, 34, 2021.","journal-title":"PATE and ALIBI. Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_1_45_1","volume-title":"USENIX Security Symposium","author":"Mehnaz Shagufta","year":"2022","unstructured":"Shagufta Mehnaz , Sayanton V Dibbo , Ehsanul Kabir , Ninghui Li , and Elisa Bertino . Are your sensitive attributes private? Novel model inversion attribute inference attacks on classification models . In USENIX Security Symposium , 2022 . Shagufta Mehnaz, Sayanton V Dibbo, Ehsanul Kabir, Ninghui Li, and Elisa Bertino. Are your sensitive attributes private? Novel model inversion attribute inference attacks on classification models. In USENIX Security Symposium, 2022."},{"key":"e_1_3_2_1_46_1","first-page":"691","volume-title":"IEEE Symposium on Security and Privacy","author":"Melis Luca","year":"2019","unstructured":"Luca Melis , Congzheng Song , Emiliano De Cristofaro , and Vitaly Shmatikov . Ex- ploiting unintended feature leakage in collaborative learning . In IEEE Symposium on Security and Privacy , pages 691 -- 706 . IEEE, 2019 . Luca Melis, Congzheng Song, Emiliano De Cristofaro, and Vitaly Shmatikov. Ex- ploiting unintended feature leakage in collaborative learning. In IEEE Symposium on Security and Privacy, pages 691--706. IEEE, 2019."},{"key":"e_1_3_2_1_47_1","volume-title":"International Conference on Learning Representations","author":"Merity Stephen","year":"2017","unstructured":"Stephen Merity , Caiming Xiong , James Bradbury , and Richard Socher . Pointer sentinel mixture models . In International Conference on Learning Representations , 2017 . Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. In International Conference on Learning Representations, 2017."},{"key":"e_1_3_2_1_48_1","volume-title":"Quantifying privacy risks of masked language models using membership inference attacks. arXiv preprint arXiv:2203.03929","author":"Mireshghallah Fatemehsadat","year":"2022","unstructured":"Fatemehsadat Mireshghallah , Kartik Goyal , Archit Uniyal , Taylor Berg-Kirkpatrick , and Reza Shokri . Quantifying privacy risks of masked language models using membership inference attacks. arXiv preprint arXiv:2203.03929 , 2022 . Fatemehsadat Mireshghallah, Kartik Goyal, Archit Uniyal, Taylor Berg-Kirkpatrick, and Reza Shokri. Quantifying privacy risks of masked language models using membership inference attacks. arXiv preprint arXiv:2203.03929, 2022."},{"key":"e_1_3_2_1_49_1","first-page":"35","volume-title":"ACM SIGSAC Conference on Computer and Communications Security","author":"Mohassel Payman","year":"2018","unstructured":"Payman Mohassel and Peter Rindal . ABY3 : A mixed protocol framework for machine learning . In ACM SIGSAC Conference on Computer and Communications Security , pages 35 -- 52 , 2018 . Payman Mohassel and Peter Rindal. ABY3: A mixed protocol framework for machine learning. In ACM SIGSAC Conference on Computer and Communications Security, pages 35--52, 2018."},{"key":"e_1_3_2_1_50_1","doi-asserted-by":"publisher","DOI":"10.1109\/SP.2017.12"},{"key":"e_1_3_2_1_51_1","doi-asserted-by":"publisher","DOI":"10.1145\/3128572.3140451"},{"key":"e_1_3_2_1_52_1","doi-asserted-by":"publisher","DOI":"10.1145\/3243734.3243855"},{"key":"e_1_3_2_1_53_1","doi-asserted-by":"publisher","DOI":"10.1109\/SP.2019.00065"},{"key":"e_1_3_2_1_54_1","first-page":"866","volume-title":"IEEE Symposium on Security and Privacy","author":"Nasr Milad","year":"2021","unstructured":"Milad Nasr , Shuang Songi , Abhradeep Thakurta , Nicolas Papemoti , and Nicholas Carlin . Adversary instantiation : Lower bounds for differentially private machine learning . In IEEE Symposium on Security and Privacy , pages 866 -- 882 . IEEE, 2021 . Milad Nasr, Shuang Songi, Abhradeep Thakurta, Nicolas Papemoti, and Nicholas Carlin. Adversary instantiation: Lower bounds for differentially private machine learning. In IEEE Symposium on Security and Privacy, pages 866--882. IEEE, 2021."},{"key":"e_1_3_2_1_55_1","first-page":"8748","volume-title":"International Conference on Machine Learning","author":"Radford Alec","year":"2021","unstructured":"Alec Radford , Jong Wook Kim , Chris Hallacy , Aditya Ramesh , Gabriel Goh , Sandhini Agarwal , Girish Sastry , Amanda Askell , Pamela Mishkin , Jack Clark , Learning transferable visual models from natural language supervision . In International Conference on Machine Learning , pages 8748 -- 8763 . PMLR, 2021 . Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748--8763. PMLR, 2021."},{"key":"e_1_3_2_1_56_1","volume-title":"Language models are unsupervised multitask learners. OpenAI blog","author":"Radford Alec","year":"2019","unstructured":"Alec Radford , Jeffrey Wu , Rewon Child , David Luan , Dario Amodei , Ilya Sutskever , Language models are unsupervised multitask learners. OpenAI blog , 2019 . Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 2019."},{"key":"e_1_3_2_1_57_1","volume-title":"Training production language models without memorizing user data. arXiv preprint arXiv:2009.10031","author":"Ramaswamy Swaroop","year":"2020","unstructured":"Swaroop Ramaswamy , Om Thakkar , Rajiv Mathews , Galen Andrew , H Brendan McMahan , and Fran\u00e7oise Beaufays . Training production language models without memorizing user data. arXiv preprint arXiv:2009.10031 , 2020 . Swaroop Ramaswamy, Om Thakkar, Rajiv Mathews, Galen Andrew, H Brendan McMahan, and Fran\u00e7oise Beaufays. Training production language models without memorizing user data. arXiv preprint arXiv:2009.10031, 2020."},{"key":"e_1_3_2_1_58_1","volume-title":"International Conference on Machine Learning","author":"Sablayrolles Alexandre","year":"2019","unstructured":"Alexandre Sablayrolles , Matthijs Douze , Cordelia Schmid , Yann Ollivier , and Herv\u00e9 J\u00e9gou . White-box vs black-box: Bayes optimal strategies for membership inference . In International Conference on Machine Learning , 2019 . Alexandre Sablayrolles, Matthijs Douze, Cordelia Schmid, Yann Ollivier, and Herv\u00e9 J\u00e9gou. White-box vs black-box: Bayes optimal strategies for membership inference. In International Conference on Machine Learning, 2019."},{"key":"e_1_3_2_1_59_1","first-page":"34","article-title":"Designing objects for robust vision","author":"Salman Hadi","year":"2021","unstructured":"Hadi Salman , Andrew Ilyas , Logan Engstrom , Sai Vemprala , Aleksander Madry , and Ashish Kapoor . Unadversarial examples : Designing objects for robust vision . Advances in Neural Information Processing Systems , 34 , 2021 . Hadi Salman, Andrew Ilyas, Logan Engstrom, Sai Vemprala, Aleksander Madry, and Ashish Kapoor. Unadversarial examples: Designing objects for robust vision. Advances in Neural Information Processing Systems, 34, 2021.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_1_60_1","first-page":"1559","volume-title":"USENIX Security Symposium","author":"Schuster Roei","year":"2021","unstructured":"Roei Schuster , Congzheng Song , Eran Tromer , and Vitaly Shmatikov . You auto- complete me: Poisoning vulnerabilities in neural code completion . In USENIX Security Symposium , pages 1559 -- 1575 , 2021 . Roei Schuster, Congzheng Song, Eran Tromer, and Vitaly Shmatikov. You auto- complete me: Poisoning vulnerabilities in neural code completion. In USENIX Security Symposium, pages 1559--1575, 2021."},{"key":"e_1_3_2_1_61_1","volume-title":"Advances in Neural Information Processing Systems","author":"Shafahi Ali","year":"2018","unstructured":"Ali Shafahi , Ronny Huang , Mahyar Najibi , Octavian Suciu , Christoph Studer , Tudor Dumitras , and Tom Goldstein . Poison frogs! Targeted clean-label poisoning attacks on neural networks . Advances in Neural Information Processing Systems , 2018 . Ali Shafahi, Ronny Huang, Mahyar Najibi, Octavian Suciu, Christoph Studer, Tudor Dumitras, and Tom Goldstein. Poison frogs! Targeted clean-label poisoning attacks on neural networks. Advances in Neural Information Processing Systems, 2018."},{"key":"e_1_3_2_1_62_1","doi-asserted-by":"publisher","DOI":"10.1109\/SP.2017.41"},{"key":"e_1_3_2_1_63_1","doi-asserted-by":"publisher","DOI":"10.1145\/3133956.3134077"},{"key":"e_1_3_2_1_64_1","volume-title":"USENIX Security Symposium","author":"Suciu Octavian","year":"2018","unstructured":"Octavian Suciu , Radu Marginean , Yigitcan Kaya , Hal Daume III, and Tudor Dumitras . When does machine learning FAIL? Generalized transferability for evasion and poisoning attacks . In USENIX Security Symposium , 2018 . Octavian Suciu, Radu Marginean, Yigitcan Kaya, Hal Daume III, and Tudor Dumitras. When does machine learning FAIL? Generalized transferability for evasion and poisoning attacks. In USENIX Security Symposium, 2018."},{"key":"e_1_3_2_1_65_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.privatenlp-1.1"},{"key":"e_1_3_2_1_66_1","first-page":"31","article-title":"Spectral signatures in backdoor attacks","author":"Tran Brandon","year":"2018","unstructured":"Brandon Tran , Jerry Li , and Aleksander Madry . Spectral signatures in backdoor attacks . Advances in Neural Information Processing Systems , 31 , 2018 . Brandon Tran, Jerry Li, and Aleksander Madry. Spectral signatures in backdoor attacks. Advances in Neural Information Processing Systems, 31, 2018.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_1_67_1","volume-title":"Label-consistent backdoor attacks. arXiv preprint arXiv:1912.02771","author":"Turner Alexander","year":"2019","unstructured":"Alexander Turner , Dimitris Tsipras , and Aleksander Madry . Label-consistent backdoor attacks. arXiv preprint arXiv:1912.02771 , 2019 . Alexander Turner, Dimitris Tsipras, and Aleksander Madry. Label-consistent backdoor attacks. arXiv preprint arXiv:1912.02771, 2019."},{"key":"e_1_3_2_1_68_1","doi-asserted-by":"publisher","DOI":"10.1007\/3-540-46035-7_35"},{"key":"e_1_3_2_1_69_1","doi-asserted-by":"publisher","DOI":"10.2478\/popets-2019-0035"},{"key":"e_1_3_2_1_70_1","volume-title":"International Conference on Learning Representations","author":"Watson Lauren","year":"2022","unstructured":"Lauren Watson , Chuan Guo , Graham Cormode , and Alexandre Sablayrolles . On the importance of difficulty calibration in membership inference attacks . In International Conference on Learning Representations , 2022 . Lauren Watson, Chuan Guo, Graham Cormode, and Alexandre Sablayrolles. On the importance of difficulty calibration in membership inference attacks. In International Conference on Learning Representations, 2022."},{"key":"e_1_3_2_1_71_1","first-page":"23668","volume-title":"International Conference on Machine Learning","author":"Wen Yuxin","year":"2022","unstructured":"Yuxin Wen , Jonas A. Geiping , Liam Fowl , Micah Goldblum , and Tom Goldstein . Fishing for user data in large-batch federated learning via gradient magnification . In International Conference on Machine Learning , pages 23668 -- 23684 . PMLR, 2022 . Yuxin Wen, Jonas A. Geiping, Liam Fowl, Micah Goldblum, and Tom Goldstein. Fishing for user data in large-batch federated learning via gradient magnification. In International Conference on Machine Learning, pages 23668--23684. PMLR, 2022."},{"key":"e_1_3_2_1_72_1","volume-title":"Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144","author":"Wu Yonghui","year":"2016","unstructured":"Yonghui Wu , Mike Schuster , Zhifeng Chen , Quoc V. Le , Mohammad Norouzi , Wolfgang Macherey , Maxim Krikun , Yuan Cao , Qin Gao , Klaus Macherey , Jeff Klingner , Apurva Shah , Melvin Johnson , Xiaobing Liu , Lukasz Kaiser , Stephan Gouws , Yoshikiyo Kato , Taku Kudo , Hideto Kazawa , Keith Stevens , George Kurian , Nishant Patil , Wei Wang , Cliff Young , Jason Smith , Jason Riesa , Alex Rudnick , Oriol Vinyals , Greg Corrado , Macduff Hughes , and Jeffrey Dean . Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144 , 2016 . Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016."},{"key":"e_1_3_2_1_73_1","doi-asserted-by":"publisher","DOI":"10.5555\/1382436.1382751"},{"key":"e_1_3_2_1_74_1","doi-asserted-by":"publisher","DOI":"10.1145\/3548606.3560675"},{"key":"e_1_3_2_1_75_1","doi-asserted-by":"publisher","DOI":"10.1109\/CSF.2018.00027"},{"key":"e_1_3_2_1_76_1","doi-asserted-by":"publisher","DOI":"10.5244\/C.30.87"},{"key":"e_1_3_2_1_77_1","doi-asserted-by":"publisher","DOI":"10.1145\/3372297.3417880"},{"key":"e_1_3_2_1_78_1","first-page":"7614","volume-title":"International Conference on Machine Learning","author":"Zhu Chen","year":"2019","unstructured":"Chen Zhu , W Ronny Huang , Hengduo Li , Gavin Taylor , Christoph Studer , and Tom Goldstein . Transferable clean-label poisoning attacks on deep neural nets . In International Conference on Machine Learning , pages 7614 -- 7623 . PMLR, 2019 . Chen Zhu, W Ronny Huang, Hengduo Li, Gavin Taylor, Christoph Studer, and Tom Goldstein. Transferable clean-label poisoning attacks on deep neural nets. In International Conference on Machine Learning, pages 7614--7623. PMLR, 2019."}],"event":{"name":"CCS '22: 2022 ACM SIGSAC Conference on Computer and Communications Security","location":"Los Angeles CA USA","acronym":"CCS '22","sponsor":["SIGSAC ACM Special Interest Group on Security, Audit, and Control"]},"container-title":["Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3548606.3560554","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3548606.3560554","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T17:50:57Z","timestamp":1750182657000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3548606.3560554"}},"subtitle":["Poisoning Machine Learning Models to Reveal Their Secrets"],"short-title":[],"issued":{"date-parts":[[2022,11,7]]},"references-count":78,"alternative-id":["10.1145\/3548606.3560554","10.1145\/3548606"],"URL":"https:\/\/doi.org\/10.1145\/3548606.3560554","relation":{},"subject":[],"published":{"date-parts":[[2022,11,7]]},"assertion":[{"value":"2022-11-07","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}