{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,12]],"date-time":"2026-03-12T15:30:10Z","timestamp":1773329410442,"version":"3.50.1"},"publisher-location":"New York, NY, USA","reference-count":87,"publisher":"ACM","license":[{"start":{"date-parts":[[2022,11,7]],"date-time":"2022-11-07T00:00:00Z","timestamp":1667779200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/100000001","name":"NSF (National Science Foundation)","doi-asserted-by":"publisher","award":["CNS1949650"],"award-info":[{"award-number":["CNS1949650"]}],"id":[{"id":"10.13039\/100000001","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100000185","name":"Defense Advanced Research Projects Agency","doi-asserted-by":"publisher","award":["GARD"],"award-info":[{"award-number":["GARD"]}],"id":[{"id":"10.13039\/100000185","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2022,11,7]]},"DOI":"10.1145\/3548606.3560561","type":"proceedings-article","created":{"date-parts":[[2022,11,7]],"date-time":"2022-11-07T11:41:28Z","timestamp":1667821288000},"page":"2611-2625","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":10,"title":["Post-breach Recovery"],"prefix":"10.1145","author":[{"given":"Shawn","family":"Shan","sequence":"first","affiliation":[{"name":"University of Chicago, Chicago, IL, USA"}]},{"given":"Wenxin","family":"Ding","sequence":"additional","affiliation":[{"name":"University of Chicago, Chicago, IL, USA"}]},{"given":"Emily","family":"Wenger","sequence":"additional","affiliation":[{"name":"University of Chicago, Chicago, IL, USA"}]},{"given":"Haitao","family":"Zheng","sequence":"additional","affiliation":[{"name":"University of Chicago, 
Chicago, IL, USA"}]},{"given":"Ben Y.","family":"Zhao","sequence":"additional","affiliation":[{"name":"University of Chicago, Chicago, IL, USA"}]}],"member":"320","published-online":{"date-parts":[[2022,11,7]]},"reference":[{"key":"e_1_3_2_1_1_1","doi-asserted-by":"publisher","DOI":"10.1145\/3474370.3485659"},{"key":"e_1_3_2_1_2_1","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2021.3127960"},{"key":"e_1_3_2_1_3_1","volume-title":"Proc. of ICML. PMLR, 274--283","author":"Athalye Anish","year":"2018","unstructured":"Anish Athalye , Nicholas Carlini , and David Wagner . 2018 . Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples . In Proc. of ICML. PMLR, 274--283 . Anish Athalye, Nicholas Carlini, and David Wagner. 2018. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In Proc. of ICML. PMLR, 274--283."},{"key":"e_1_3_2_1_4_1","unstructured":"Tara Bernard Tiffany Hsu Nicole Perlroth and Ron Lieber. 2017. Equifax Says Cyberattack May Have Affected 143 Million in the U.S. https:\/\/www.nytimes.com\/2017\/09\/07\/business\/equifaxcyberattack.html..  Tara Bernard Tiffany Hsu Nicole Perlroth and Ron Lieber. 2017. Equifax Says Cyberattack May Have Affected 143 Million in the U.S. https:\/\/www.nytimes.com\/2017\/09\/07\/business\/equifaxcyberattack.html.."},{"key":"e_1_3_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1109\/SP40001.2021.00019"},{"key":"e_1_3_2_1_6_1","unstructured":"broadcom.com. 2022. Stop Threats in Their Tracks Wherever They Attack. https:\/\/www.broadcom.com\/products\/cyber-security\/endpoint.  broadcom.com. 2022. Stop Threats in Their Tracks Wherever They Attack. https:\/\/www.broadcom.com\/products\/cyber-security\/endpoint."},{"key":"e_1_3_2_1_7_1","volume-title":"Evading adversarial example detection defenses with orthogonal projected gradient descent. 
arXiv preprint arXiv:2106.15023","author":"Bryniarski Oliver","year":"2021","unstructured":"Oliver Bryniarski , Nabeel Hingun , Pedro Pachuca , Vincent Wang , and Nicholas Carlini . 2021. Evading adversarial example detection defenses with orthogonal projected gradient descent. arXiv preprint arXiv:2106.15023 ( 2021 ). Oliver Bryniarski, Nabeel Hingun, Pedro Pachuca, Vincent Wang, and Nicholas Carlini. 2021. Evading adversarial example detection defenses with orthogonal projected gradient descent. arXiv preprint arXiv:2106.15023 (2021)."},{"key":"e_1_3_2_1_8_1","volume-title":"Improving the Transferability of Targeted Adversarial Examples through Object-Based Diverse Input. arXiv preprint arXiv:2203.09123","author":"Byun Junyoung","year":"2022","unstructured":"Junyoung Byun , Seungju Cho , Myung-Joon Kwon , Hee-Seon Kim , and Changick Kim . 2022. Improving the Transferability of Targeted Adversarial Examples through Object-Based Diverse Input. arXiv preprint arXiv:2203.09123 ( 2022 ). Junyoung Byun, Seungju Cho, Myung-Joon Kwon, Hee-Seon Kim, and Changick Kim. 2022. Improving the Transferability of Targeted Adversarial Examples through Object-Based Diverse Input. arXiv preprint arXiv:2203.09123 (2022)."},{"key":"e_1_3_2_1_9_1","volume-title":"A partial break of the honeypots defense to catch adversarial attacks. arXiv preprint arXiv:2009.10975","author":"Carlini Nicholas","year":"2020","unstructured":"Nicholas Carlini . 2020. A partial break of the honeypots defense to catch adversarial attacks. arXiv preprint arXiv:2009.10975 ( 2020 ). Nicholas Carlini. 2020. A partial break of the honeypots defense to catch adversarial attacks. arXiv preprint arXiv:2009.10975 (2020)."},{"key":"e_1_3_2_1_10_1","volume-title":"Defensive distillation is not robust to adversarial examples. arXiv preprint arXiv:1607.04311","author":"Carlini Nicholas","year":"2016","unstructured":"Nicholas Carlini and David Wagner . 2016. Defensive distillation is not robust to adversarial examples. 
arXiv preprint arXiv:1607.04311 ( 2016 ). Nicholas Carlini and David Wagner. 2016. Defensive distillation is not robust to adversarial examples. arXiv preprint arXiv:1607.04311 (2016)."},{"key":"e_1_3_2_1_11_1","volume-title":"Magnet and efficient defenses against adversarial attacks are not robust to adversarial examples. arXiv preprint arXiv:1711.08478","author":"Carlini Nicholas","year":"2017","unstructured":"Nicholas Carlini and David Wagner . 2017a. Magnet and efficient defenses against adversarial attacks are not robust to adversarial examples. arXiv preprint arXiv:1711.08478 ( 2017 ). Nicholas Carlini and David Wagner. 2017a. Magnet and efficient defenses against adversarial attacks are not robust to adversarial examples. arXiv preprint arXiv:1711.08478 (2017)."},{"key":"e_1_3_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1109\/SP.2017.49"},{"key":"e_1_3_2_1_13_1","volume-title":"Adversarial attacks and defences: A survey. arXiv preprint arXiv:1810.00069","author":"Chakraborty Anirban","year":"2018","unstructured":"Anirban Chakraborty , Manaar Alam , Vishal Dey , Anupam Chattopadhyay , and Debdeep Mukhopadhyay . 2018. Adversarial attacks and defences: A survey. arXiv preprint arXiv:1810.00069 ( 2018 ). Anirban Chakraborty, Manaar Alam, Vishal Dey, Anupam Chattopadhyay, and Debdeep Mukhopadhyay. 2018. Adversarial attacks and defences: A survey. arXiv preprint arXiv:1810.00069 (2018)."},{"key":"e_1_3_2_1_14_1","doi-asserted-by":"publisher","DOI":"10.1109\/SP40000.2020.00045"},{"key":"e_1_3_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v32i1.11302"},{"key":"e_1_3_2_1_16_1","volume-title":"Proc. of ICML.","author":"Cohen Jeremy","year":"2019","unstructured":"Jeremy Cohen , Elan Rosenfeld , and Zico Kolter . 2019 . Certified Adversarial Robustness via Randomized Smoothing . In Proc. of ICML. Jeremy Cohen, Elan Rosenfeld, and Zico Kolter. 2019. Certified Adversarial Robustness via Randomized Smoothing. In Proc. 
of ICML."},{"key":"e_1_3_2_1_17_1","volume-title":"A video game for cyber security training and awareness. computers & security","author":"Cone Benjamin D","year":"2007","unstructured":"Benjamin D Cone , Cynthia E Irvine , Michael F Thompson , and Thuy D Nguyen . 2007. A video game for cyber security training and awareness. computers & security , Vol. 26 , 1 ( 2007 ), 63--72. Benjamin D Cone, Cynthia E Irvine, Michael F Thompson, and Thuy D Nguyen. 2007. A video game for cyber security training and awareness. computers & security, Vol. 26, 1 (2007), 63--72."},{"key":"e_1_3_2_1_18_1","volume-title":"Proc. of USENIX Security. 321--338","author":"Demontis Ambra","year":"2019","unstructured":"Ambra Demontis , Marco Melis , Maura Pintor , Matthew Jagielski , Battista Biggio , Alina Oprea , Cristina Nita-Rotaru , and Fabio Roli . 2019 . Why do adversarial attacks transfer? explaining transferability of evasion and poisoning attacks . In Proc. of USENIX Security. 321--338 . Ambra Demontis, Marco Melis, Maura Pintor, Matthew Jagielski, Battista Biggio, Alina Oprea, Cristina Nita-Rotaru, and Fabio Roli. 2019. Why do adversarial attacks transfer? explaining transferability of evasion and poisoning attacks. In Proc. of USENIX Security. 321--338."},{"key":"e_1_3_2_1_19_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2009.5206848"},{"key":"e_1_3_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2021.3136889"},{"key":"e_1_3_2_1_21_1","volume-title":"Proc. of ICML. PMLR","author":"Frosst Nicholas","year":"2019","unstructured":"Nicholas Frosst , Nicolas Papernot , and Geoffrey Hinton . 2019 . Analyzing and improving representations with the soft nearest neighbor loss . In Proc. of ICML. PMLR , 2012--2020. Nicholas Frosst, Nicolas Papernot, and Geoffrey Hinton. 2019. Analyzing and improving representations with the soft nearest neighbor loss. In Proc. of ICML. 
PMLR, 2012--2020."},{"key":"e_1_3_2_1_22_1","volume-title":"Journal of Physics","author":"Gao Chenxiang","year":"2022","unstructured":"Chenxiang Gao and Wei Wu. 2022. Boosting the Transferability of Adversarial Examples with More Efficient Data Augmentation . In Journal of Physics , Vol. 2189 . IOP Publishing , 012025 . Chenxiang Gao and Wei Wu. 2022. Boosting the Transferability of Adversarial Examples with More Efficient Data Augmentation. In Journal of Physics, Vol. 2189. IOP Publishing, 012025."},{"key":"e_1_3_2_1_23_1","volume-title":"Proc. of NeurIPS","author":"Goodfellow Ian","year":"2014","unstructured":"Ian Goodfellow , Jean Pouget-Abadie , Mehdi Mirza , Bing Xu , David Warde-Farley , Sherjil Ozair , Aaron Courville , and Yoshua Bengio . 2014 . Generative adversarial nets . Proc. of NeurIPS (2014). Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. Proc. of NeurIPS (2014)."},{"key":"e_1_3_2_1_24_1","volume-title":"Uncovering the limits of adversarial training against norm-bounded adversarial examples. arXiv preprint arXiv:2010.03593","author":"Gowal Sven","year":"2020","unstructured":"Sven Gowal , Chongli Qin , Jonathan Uesato , Timothy Mann , and Pushmeet Kohli . 2020. Uncovering the limits of adversarial training against norm-bounded adversarial examples. arXiv preprint arXiv:2010.03593 ( 2020 ). Sven Gowal, Chongli Qin, Jonathan Uesato, Timothy Mann, and Pushmeet Kohli. 2020. Uncovering the limits of adversarial training against norm-bounded adversarial examples. arXiv preprint arXiv:2010.03593 (2020)."},{"key":"e_1_3_2_1_25_1","volume-title":"Dan Andrei Calian, and Timothy A Mann","author":"Gowal Sven","year":"2021","unstructured":"Sven Gowal , Sylvestre-Alvise Rebuffi , Olivia Wiles , Florian Stimberg , Dan Andrei Calian, and Timothy A Mann . 2021 . Improving robustness using generated data.
Sven Gowal, Sylvestre-Alvise Rebuffi, Olivia Wiles, Florian Stimberg, Dan Andrei Calian, and Timothy A Mann. 2021. Improving robustness using generated data."},{"key":"e_1_3_2_1_26_1","volume-title":"Certified data removal from machine learning models. arXiv preprint arXiv:1911.03030","author":"Guo Chuan","year":"2019","unstructured":"Chuan Guo , Tom Goldstein , Awni Hannun , and Laurens Van Der Maaten . 2019. Certified data removal from machine learning models. arXiv preprint arXiv:1911.03030 ( 2019 ). Chuan Guo, Tom Goldstein, Awni Hannun, and Laurens Van Der Maaten. 2019. Certified data removal from machine learning models. arXiv preprint arXiv:1911.03030 (2019)."},{"key":"e_1_3_2_1_27_1","doi-asserted-by":"publisher","DOI":"10.1145\/3460120.3485378"},{"key":"e_1_3_2_1_28_1","unstructured":"Mohammad Sazzadul Hoque Md Mukit Md Bikas Abu Naser etal 2012. An implementation of intrusion detection system using genetic algorithm. arXiv preprint arXiv:1204.1336 (2012).  Mohammad Sazzadul Hoque Md Mukit Md Bikas Abu Naser et al. 2012. An implementation of intrusion detection system using genetic algorithm. arXiv preprint arXiv:1204.1336 (2012)."},{"key":"e_1_3_2_1_29_1","unstructured":"Xing Hu Ling Liang Lei Deng Yu Ji Yufei Ding Zidong Du Qi Guo Timothy Sherwood Yuan Xie etal 2021. A systematic view of leakage risks in deep neural network systems. (2021).  Xing Hu Ling Liang Lei Deng Yu Ji Yufei Ding Zidong Du Qi Guo Timothy Sherwood Yuan Xie et al. 2021. A systematic view of leakage risks in deep neural network systems. (2021)."},{"key":"e_1_3_2_1_30_1","volume-title":"Proc. of DAC. IEEE, 1--6.","author":"Hua Weizhe","year":"2018","unstructured":"Weizhe Hua , Zhiru Zhang , and G Edward Suh . 2018 . Reverse engineering convolutional neural networks through side-channel information leaks . In Proc. of DAC. IEEE, 1--6. Weizhe Hua, Zhiru Zhang, and G Edward Suh. 2018. Reverse engineering convolutional neural networks through side-channel information leaks. In Proc. 
of DAC. IEEE, 1--6."},{"key":"e_1_3_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.243"},{"key":"e_1_3_2_1_32_1","doi-asserted-by":"publisher","DOI":"10.1007\/11507840_9"},{"key":"e_1_3_2_1_33_1","unstructured":"I Jibilian and K Canales. 2021. The US is readying sanctions against Russia over the solarwinds cyber attack. Here's a simple explanation of how the massive hack happened and why it's such a big deal.  I Jibilian and K Canales. 2021. The US is readying sanctions against Russia over the solarwinds cyber attack. Here's a simple explanation of how the massive hack happened and why it's such a big deal."},{"key":"e_1_3_2_1_34_1","volume-title":"Improving adversarial robustness of ensembles with diversity training. arXiv preprint arXiv:1901.09981","author":"Kariyappa Sanjay","year":"2019","unstructured":"Sanjay Kariyappa and Moinuddin K Qureshi . 2019. Improving adversarial robustness of ensembles with diversity training. arXiv preprint arXiv:1901.09981 ( 2019 ). Sanjay Kariyappa and Moinuddin K Qureshi. 2019. Improving adversarial robustness of ensembles with diversity training. arXiv preprint arXiv:1901.09981 (2019)."},{"key":"e_1_3_2_1_35_1","volume-title":"Proc. of ICLR","author":"Karras Tero","year":"2017","unstructured":"Tero Karras , Timo Aila , Samuli Laine , and Jaakko Lehtinen . 2017 . Progressive growing of gans for improved quality, stability, and variation . Proc. of ICLR (2017). Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. 2017. Progressive growing of gans for improved quality, stability, and variation. Proc. of ICLR (2017)."},{"key":"e_1_3_2_1_36_1","unstructured":"Jeff King. 2020. How We Review Content.  Jeff King. 2020. How We Review Content."},{"key":"e_1_3_2_1_37_1","volume-title":"Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980","author":"Kingma Diederik P","year":"2014","unstructured":"Diederik P Kingma and Jimmy Ba . 2014 . Adam: A method for stochastic optimization. 
arXiv preprint arXiv:1412.6980 (2014). Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)."},{"key":"e_1_3_2_1_38_1","volume-title":"Proc. of NeurIPS.","author":"Zico Kolter J.","year":"2017","unstructured":"J. Zico Kolter and Eric Wong . 2017 . Provable defenses against adversarial examples via the convex outer adversarial polytope . In Proc. of NeurIPS. J. Zico Kolter and Eric Wong. 2017. Provable defenses against adversarial examples via the convex outer adversarial polytope. In Proc. of NeurIPS."},{"key":"e_1_3_2_1_40_1","volume-title":"Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533","author":"Kurakin Alexey","year":"2016","unstructured":"Alexey Kurakin , Ian Goodfellow , and Samy Bengio . 2016. Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533 ( 2016 ). Alexey Kurakin, Ian Goodfellow, and Samy Bengio. 2016. Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533 (2016)."},{"key":"e_1_3_2_1_41_1","volume-title":"Proc. of USENIX Security","author":"Li Huiying","year":"2022","unstructured":"Huiying Li , Shawn Shan , Emily Wenger , Jiayun Zhang , Haitao Zheng , and Ben Y Zhao . 2022 . Blacklight: Scalable Defense for Neural Networks against Query-Based Black-Box Attacks . In Proc. of USENIX Security . Boston, MA. Huiying Li, Shawn Shan, Emily Wenger, Jiayun Zhang, Haitao Zheng, and Ben Y Zhao. 2022. Blacklight: Scalable Defense for Neural Networks against Query-Based Black-Box Attacks. In Proc. of USENIX Security. Boston, MA."},{"key":"e_1_3_2_1_42_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.jnca.2012.09.004"},{"key":"e_1_3_2_1_43_1","volume-title":"Delving into transferable adversarial examples and black-box attacks. arXiv preprint arXiv:1611.02770","author":"Liu Yanpei","year":"2016","unstructured":"Yanpei Liu , Xinyun Chen , Chang Liu , and Dawn Song . 2016. 
Delving into transferable adversarial examples and black-box attacks. arXiv preprint arXiv:1611.02770 ( 2016 ). Yanpei Liu, Xinyun Chen, Chang Liu, and Dawn Song. 2016. Delving into transferable adversarial examples and black-box attacks. arXiv preprint arXiv:1611.02770 (2016)."},{"key":"e_1_3_2_1_44_1","volume-title":"Proc. of ICLR.","author":"Madry Aleksander","year":"2018","unstructured":"Aleksander Madry , Aleksandar Makelov , Ludwig Schmidt , Dimitris Tsipras , and Adrian Vladu . 2018 . Towards deep learning models resistant to adversarial attacks . In Proc. of ICLR. Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2018. Towards deep learning models resistant to adversarial attacks. In Proc. of ICLR."},{"key":"e_1_3_2_1_45_1","doi-asserted-by":"publisher","DOI":"10.1145\/3133956.3134057"},{"key":"e_1_3_2_1_46_1","volume-title":"Proc. of USENIX Security.","author":"Mink Jaron","year":"2022","unstructured":"Jaron Mink , Licheng Luo , Nat\u00e3 M Barbosa , Olivia Figueira , Yang Wang , and Gang Wang . 2022 . DeepPhish: Understanding User Trust Towards Artificially Generated Profiles in Online Social Networks . In Proc. of USENIX Security. Jaron Mink, Licheng Luo, Nat\u00e3 M Barbosa, Olivia Figueira, Yang Wang, and Gang Wang. 2022. DeepPhish: Understanding User Trust Towards Artificially Generated Profiles in Online Social Networks. In Proc. of USENIX Security."},{"key":"e_1_3_2_1_47_1","unstructured":"mitre.org. 2022. MITRE Matrix. https:\/\/attack.mitre.org\/matrices\/enterprise\/. .  mitre.org. 2022. MITRE Matrix. https:\/\/attack.mitre.org\/matrices\/enterprise\/. ."},{"key":"e_1_3_2_1_48_1","volume-title":"Proc. of ICML. PMLR, 4636--4645","author":"Moon Seungyong","year":"2019","unstructured":"Seungyong Moon , Gaon An , and Hyun Oh Song . 2019 . Parsimonious black-box adversarial attacks via efficient combinatorial optimization . In Proc. of ICML. PMLR, 4636--4645 . Seungyong Moon, Gaon An, and Hyun Oh Song. 2019.
Parsimonious black-box adversarial attacks via efficient combinatorial optimization. In Proc. of ICML. PMLR, 4636--4645."},{"key":"e_1_3_2_1_49_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.282"},{"key":"e_1_3_2_1_50_1","doi-asserted-by":"publisher","DOI":"10.1145\/3052973.3053009"},{"key":"e_1_3_2_1_51_1","volume-title":"Proc. of IEEE S&P.","author":"Papernot N.","unstructured":"N. Papernot , P. McDaniel , X. Wu , S. Jha , and A. Swami . 2016. Distillation as a defense to adversarial perturbations against deep neural networks . In Proc. of IEEE S&P. N. Papernot, P. McDaniel, X. Wu, S. Jha, and A. Swami. 2016. Distillation as a defense to adversarial perturbations against deep neural networks. In Proc. of IEEE S&P."},{"key":"e_1_3_2_1_52_1","volume-title":"Adversarial Attack across Datasets. arXiv preprint arXiv:2110.07718","author":"Qin Yunxiao","year":"2021","unstructured":"Yunxiao Qin , Yuanhao Xiong , Jinfeng Yi , and Cho-Jui Hsieh . 2021. Adversarial Attack across Datasets. arXiv preprint arXiv:2110.07718 ( 2021 ). Yunxiao Qin, Yuanhao Xiong, Jinfeng Yi, and Cho-Jui Hsieh. 2021. Adversarial Attack across Datasets. arXiv preprint arXiv:2110.07718 (2021)."},{"key":"e_1_3_2_1_53_1","volume-title":"Fan Yao, and Deliang Fan.","author":"Rakin Adnan Siraj","year":"2021","unstructured":"Adnan Siraj Rakin , Md Hafizul Islam Chowdhuryy , Fan Yao, and Deliang Fan. 2021 . Deepsteal : Advanced model extractions leveraging efficient weight stealing in memories. arXiv preprint arXiv:2111.04625 (2021). Adnan Siraj Rakin, Md Hafizul Islam Chowdhuryy, Fan Yao, and Deliang Fan. 2021. Deepsteal: Advanced model extractions leveraging efficient weight stealing in memories. arXiv preprint arXiv:2111.04625 (2021)."},{"key":"e_1_3_2_1_54_1","volume-title":"Fixing data augmentation to improve adversarial robustness. 
arXiv preprint arXiv:2103.01946","author":"Rebuffi Sylvestre-Alvise","year":"2021","unstructured":"Sylvestre-Alvise Rebuffi , Sven Gowal , Dan A Calian , Florian Stimberg , Olivia Wiles , and Timothy Mann . 2021. Fixing data augmentation to improve adversarial robustness. arXiv preprint arXiv:2103.01946 ( 2021 ). Sylvestre-Alvise Rebuffi, Sven Gowal, Dan A Calian, Florian Stimberg, Olivia Wiles, and Timothy Mann. 2021. Fixing data augmentation to improve adversarial robustness. arXiv preprint arXiv:2103.01946 (2021)."},{"key":"e_1_3_2_1_55_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICMLA.2015.152"},{"key":"e_1_3_2_1_56_1","volume-title":"Bloomberg Businessweek","volume":"13","author":"Riley Michael","year":"2014","unstructured":"Michael Riley , Ben Elgin , Dune Lawrence , and Carol Matlack . 2014 . Missed alarms and 40 million stolen credit card numbers: How target blew it . Bloomberg Businessweek , Vol. 13 (2014). Michael Riley, Ben Elgin, Dune Lawrence, and Carol Matlack. 2014. Missed alarms and 40 million stolen credit card numbers: How target blew it. Bloomberg Businessweek, Vol. 13 (2014)."},{"key":"e_1_3_2_1_57_1","volume-title":"Adversarial manipulation of deep representations. arXiv preprint arXiv:1511.05122","author":"Sabour Sara","year":"2015","unstructured":"Sara Sabour , Yanshuai Cao , Fartash Faghri , and David J Fleet . 2015. Adversarial manipulation of deep representations. arXiv preprint arXiv:1511.05122 ( 2015 ). Sara Sabour, Yanshuai Cao, Fartash Faghri, and David J Fleet. 2015. Adversarial manipulation of deep representations. 
arXiv preprint arXiv:1511.05122 (2015)."},{"key":"e_1_3_2_1_58_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00474"},{"key":"e_1_3_2_1_59_1","volume-title":"Zheng Xu, John Dickerson, Christoph Studer, Larry S Davis, Gavin Taylor, and Tom Goldstein.","author":"Shafahi Ali","year":"2019","unstructured":"Ali Shafahi , Mahyar Najibi , Mohammad Amin Ghiasi , Zheng Xu, John Dickerson, Christoph Studer, Larry S Davis, Gavin Taylor, and Tom Goldstein. 2019 . Adversarial training for free! Ali Shafahi, Mahyar Najibi, Mohammad Amin Ghiasi, Zheng Xu, John Dickerson, Christoph Studer, Larry S Davis, Gavin Taylor, and Tom Goldstein. 2019. Adversarial training for free!"},{"key":"e_1_3_2_1_60_1","doi-asserted-by":"publisher","DOI":"10.1145\/3474369.3486875"},{"key":"e_1_3_2_1_61_1","volume-title":"Proc. of USENIX Security.","author":"Shan Shawn","year":"2022","unstructured":"Shawn Shan , Arjun Nitin Bhagoji , Haitao Zheng , and Ben Y Zhao . 2022 . Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks . Proc. of USENIX Security. Shawn Shan, Arjun Nitin Bhagoji, Haitao Zheng, and Ben Y Zhao. 2022. Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks. Proc. of USENIX Security."},{"key":"e_1_3_2_1_62_1","doi-asserted-by":"publisher","DOI":"10.1145\/3372297.3417231"},{"key":"e_1_3_2_1_63_1","volume-title":"Proc. of USENIX Security. 1589--1604","author":"Shan Shawn","year":"2020","unstructured":"Shawn Shan , Emily Wenger , Jiayun Zhang , Huiying Li , Haitao Zheng , and Ben Y Zhao . 2020 b. Fawkes: Protecting privacy against unauthorized deep learning models . In Proc. of USENIX Security. 1589--1604 . Shawn Shan, Emily Wenger, Jiayun Zhang, Huiying Li, Haitao Zheng, and Ben Y Zhao. 2020b. Fawkes: Protecting privacy against unauthorized deep learning models. In Proc. of USENIX Security. 1589--1604."},{"key":"e_1_3_2_1_64_1","volume-title":"Dropout: a simple way to prevent neural networks from overfitting. 
The journal of machine learning research","author":"Srivastava Nitish","year":"2014","unstructured":"Nitish Srivastava , Geoffrey Hinton , Alex Krizhevsky , Ilya Sutskever , and Ruslan Salakhutdinov . 2014. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research , Vol. 15 , 1 ( 2014 ), 1929--1958. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, Vol. 15, 1 (2014), 1929--1958."},{"key":"e_1_3_2_1_65_1","volume-title":"Somesh Jha, and Long Lu.","author":"Sun Zhichuang","year":"2020","unstructured":"Zhichuang Sun , Ruimin Sun , Changming Liu , Amrita Roy Chowdhury , Somesh Jha, and Long Lu. 2020 . ShadowNet : A secure and efficient system for on-device model inference. arXiv preprint arXiv:2011.05905 (2020). Zhichuang Sun, Ruimin Sun, Changming Liu, Amrita Roy Chowdhury, Somesh Jha, and Long Lu. 2020. ShadowNet: A secure and efficient system for on-device model inference. arXiv preprint arXiv:2011.05905 (2020)."},{"key":"e_1_3_2_1_66_1","volume-title":"Proc. of ICML. PMLR, 6105--6114","author":"Tan Mingxing","year":"2019","unstructured":"Mingxing Tan and Quoc Le . 2019 . Efficientnet: Rethinking model scaling for convolutional neural networks . In Proc. of ICML. PMLR, 6105--6114 . Mingxing Tan and Quoc Le. 2019. Efficientnet: Rethinking model scaling for convolutional neural networks. In Proc. of ICML. PMLR, 6105--6114."},{"key":"e_1_3_2_1_67_1","first-page":"1633","article-title":"On adaptive attacks to adversarial example defenses","volume":"33","author":"Tramer Florian","year":"2020","unstructured":"Florian Tramer , Nicholas Carlini , Wieland Brendel , and Aleksander Madry . 2020 . On adaptive attacks to adversarial example defenses . Proc. of NeurIPS , Vol. 33 (2020), 1633 -- 1645 . Florian Tramer, Nicholas Carlini, Wieland Brendel, and Aleksander Madry. 
2020. On adaptive attacks to adversarial example defenses. Proc. of NeurIPS, Vol. 33 (2020), 1633--1645.","journal-title":"Proc. of NeurIPS"},{"key":"e_1_3_2_1_68_1","volume-title":"Ensemble adversarial training: Attacks and defenses. arXiv preprint arXiv:1705.07204","author":"Tram\u00e8r Florian","year":"2017","unstructured":"Florian Tram\u00e8r , Alexey Kurakin , Nicolas Papernot , Ian Goodfellow , Dan Boneh , and Patrick McDaniel . 2017. Ensemble adversarial training: Attacks and defenses. arXiv preprint arXiv:1705.07204 ( 2017 ). Florian Tram\u00e8r, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. 2017. Ensemble adversarial training: Attacks and defenses. arXiv preprint arXiv:1705.07204 (2017)."},{"key":"e_1_3_2_1_69_1","unstructured":"trustwave.com. 2020. Trustwave Global Security Report. https:\/\/www.trustwave.com\/en-us\/resources\/library\/documents\/2020-trustwave-global-security-report\/..  trustwave.com. 2020. Trustwave Global Security Report. https:\/\/www.trustwave.com\/en-us\/resources\/library\/documents\/2020-trustwave-global-security-report\/.."},{"key":"e_1_3_2_1_70_1","volume-title":"The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Scientific data","author":"Tschandl Philipp","year":"2018","unstructured":"Philipp Tschandl , Cliff Rosendahl , and Harald Kittler . 2018. The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Scientific data , Vol. 5 , 1 ( 2018 ), 1--9. Philipp Tschandl, Cliff Rosendahl, and Harald Kittler. 2018. The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Scientific data, Vol. 5, 1 (2018), 1--9."},{"key":"e_1_3_2_1_71_1","volume-title":"Adversarial risk and the dangers of evaluating against weak attacks. 
arXiv preprint arXiv:1802.05666","author":"Uesato Jonathan","year":"2018","unstructured":"Jonathan Uesato, Brendan O'Donoghue, Aaron van den Oord, and Pushmeet Kohli. 2018. Adversarial risk and the dangers of evaluating against weak attacks. arXiv preprint arXiv:1802.05666 (2018)."},
{"key":"e_1_3_2_1_72_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-01237-3_34"},
{"key":"e_1_3_2_1_73_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR46437.2021.00196"},
{"key":"e_1_3_2_1_74_1","volume-title":"Fast is better than free: Revisiting adversarial training. arXiv preprint arXiv:2001.03994","author":"Wong Eric","year":"2020","unstructured":"Eric Wong, Leslie Rice, and J Zico Kolter. 2020. Fast is better than free: Revisiting adversarial training. arXiv preprint arXiv:2001.03994 (2020)."},
{"key":"e_1_3_2_1_75_1","unstructured":"Lei Wu, Zhanxing Zhu, Cheng Tai, and Weinan E. 2018. Understanding and enhancing the transferability of adversarial examples. arXiv preprint arXiv:1802.09707 (2018)."},
{"key":"e_1_3_2_1_76_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00284"},
{"key":"e_1_3_2_1_77_1","volume-title":"Sara Khodeir, Yingyezhe Jin, Frank Li, Shawn Shan, Sagar Patel","author":"Xu Teng","year":"2021","unstructured":"Teng Xu, Gerard Goossen, Huseyin Kerem Cevahir, Sara Khodeir, Yingyezhe Jin, Frank Li, Shawn Shan, Sagar Patel, David Freeman, and Paul Pearce. 2021. Deep entity classification: Abusive account detection for online social networks. In Proc. of USENIX Security."},
{"key":"e_1_3_2_1_78_1","first-page":"5505","article-title":"DVERGE: diversifying vulnerabilities for enhanced robust generation of ensembles","volume":"33","author":"Yang Huanrui","year":"2020","unstructured":"Huanrui Yang, Jingyang Zhang, Hongliang Dong, Nathan Inkawhich, Andrew Gardner, Andrew Touchet, Wesley Wilkes, Heath Berry, and Hai Li. 2020. DVERGE: diversifying vulnerabilities for enhanced robust generation of ensembles. Proc. of NeurIPS, Vol. 33 (2020), 5505--5515.","journal-title":"Proc. of NeurIPS"},
{"key":"e_1_3_2_1_79_1","volume-title":"Proc. of NeurIPS","author":"Yang Zhuolin","year":"2021","unstructured":"Zhuolin Yang, Linyi Li, Xiaojun Xu, Shiliang Zuo, Qian Chen, Pan Zhou, Benjamin Rubinstein, Ce Zhang, and Bo Li. 2021. TRS: Transferability Reduced Ensemble via Promoting Gradient Diversity and Model Smoothness. Proc. of NeurIPS (2021)."},
{"key":"e_1_3_2_1_80_1","volume-title":"Proc. of IMC","author":"Yao Yuanshun","unstructured":"Yuanshun Yao, Zhujun Xiao, Bolun Wang, Bimal Viswanath, Haitao Zheng, and Ben Y. Zhao. 2017. Complexity vs. Performance: Empirical Analysis of Machine Learning as a Service. In Proc. of IMC. London, UK."},
{"key":"e_1_3_2_1_81_1","unstructured":"YouTube 2011. https:\/\/www.cs.tau.ac.il\/~wolf\/ytfaces\/. YouTube Faces DB."},
{"key":"e_1_3_2_1_82_1","doi-asserted-by":"publisher","DOI":"10.14722\/ndss.2020.24178"},
{"key":"e_1_3_2_1_83_1","volume-title":"ES attack: Model stealing against deep neural networks without data hurdles","author":"Yuan Xiaoyong","year":"2022","unstructured":"Xiaoyong Yuan, Leah Ding, Lan Zhang, Xiaolin Li, and Dapeng Oliver Wu. 2022. ES attack: Model stealing against deep neural networks without data hurdles. IEEE Trans. on ETCI (2022)."},
{"key":"e_1_3_2_1_84_1","doi-asserted-by":"publisher","DOI":"10.1145\/3128572.3140449"},
{"key":"e_1_3_2_1_85_1","volume-title":"Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701","author":"Zeiler Matthew D","year":"2012","unstructured":"Matthew D Zeiler. 2012. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701 (2012)."},
{"key":"e_1_3_2_1_86_1","volume-title":"Proc. of ICML. PMLR, 7472--7482","author":"Zhang Hongyang","year":"2019","unstructured":"Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric Xing, Laurent El Ghaoui, and Michael Jordan. 2019. Theoretically principled trade-off between robustness and accuracy. In Proc. of ICML. PMLR, 7472--7482."},
{"key":"e_1_3_2_1_87_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.485"},
{"key":"e_1_3_2_1_88_1","volume-title":"Exploring the Effect of Randomness on Transferability of Adversarial Samples against Deep Neural Networks","author":"Zhou Yan","year":"2021","unstructured":"Yan Zhou, Murat Kantarcioglu, and Bowei Xi. 2021. Exploring the Effect of Randomness on Transferability of Adversarial Samples against Deep Neural Networks. IEEE Transactions on Dependable and Secure Computing (2021)."}],
"event":{"name":"CCS '22: 2022 ACM SIGSAC Conference on Computer and Communications Security","location":"Los Angeles CA USA","acronym":"CCS '22","sponsor":["SIGSAC ACM Special Interest Group on Security, Audit, and Control"]},"container-title":["Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3548606.3560561","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3548606.3560561","content-type":"application\/pdf","content-version":"vor","intended-application":"syndication"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3548606.3560561","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T17:50:57Z","timestamp":1750182657000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3548606.3560561"}},"subtitle":["Protection against White-box Adversarial Examples for Leaked DNN Models"],"short-title":[],"issued":{"date-parts":[[2022,11,7]]},"references-count":87,"alternative-id":["10.1145\/3548606.3560561","10.1145\/3548606"],"URL":"https:\/\/doi.org\/10.1145\/3548606.3560561","relation":{},"subject":[],"published":{"date-parts":[[2022,11,7]]},"assertion":[{"value":"2022-11-07","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}