{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,6]],"date-time":"2026-03-06T22:41:51Z","timestamp":1772836911387,"version":"3.50.1"},"publisher-location":"Cham","reference-count":91,"publisher":"Springer International Publishing","isbn-type":[{"value":"9783030334314","type":"print"},{"value":"9783030334321","type":"electronic"}],"license":[{"start":{"date-parts":[[2020,1,1]],"date-time":"2020-01-01T00:00:00Z","timestamp":1577836800000},"content-version":"tdm","delay-in-days":0,"URL":"http:\/\/www.springer.com\/tdm"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2020]]},"DOI":"10.1007\/978-3-030-33432-1_2","type":"book-chapter","created":{"date-parts":[[2020,2,4]],"date-time":"2020-02-04T19:02:44Z","timestamp":1580842964000},"page":"23-40","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":11,"title":["Defending Against Machine Learning Based Inference Attacks via Adversarial Examples: Opportunities and Challenges"],"prefix":"10.1007","author":[{"given":"Jinyuan","family":"Jia","sequence":"first","affiliation":[]},{"given":"Neil Zhenqiang","family":"Gong","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2020,2,5]]},"reference":[{"key":"2_CR1","doi-asserted-by":"crossref","unstructured":"Jahna Otterbacher. Inferring gender of movie reviewers: exploiting writing style, content and metadata. In CIKM, 2010.","DOI":"10.1145\/1871437.1871487"},{"key":"2_CR2","doi-asserted-by":"crossref","unstructured":"Udi Weinsberg, Smriti Bhagat, Stratis Ioannidis, and Nina Taft. Blurme: Inferring and obfuscating user gender based on ratings. In RecSys, 2012.","DOI":"10.1145\/2365952.2365989"},{"key":"2_CR3","doi-asserted-by":"crossref","unstructured":"E. Zheleva and L. Getoor. 
To join or not to join: The illusion of privacy in social networks with mixed public and private user profiles. In WWW, 2009.","DOI":"10.1145\/1526709.1526781"},{"key":"2_CR4","unstructured":"Abdelberi Chaabane, Gergely Acs, and Mohamed Ali Kaafar. You are what you like! information leakage through users\u2019 interests. In NDSS, 2012."},{"key":"2_CR5","doi-asserted-by":"crossref","unstructured":"Michal Kosinski, David Stillwell, and Thore Graepel. Private traits and attributes are predictable from digital records of human behavior. PNAS, 2013.","DOI":"10.1073\/pnas.1218772110"},{"issue":"2","key":"2_CR6","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/2594455","volume":"5","author":"Neil Zhenqiang Gong","year":"2014","unstructured":"Neil Zhenqiang Gong, Ameet Talwalkar, Lester Mackey, Ling Huang, Eui Chul Richard Shin, Emil Stefanov, Elaine(Runting) Shi, and Dawn Song. Joint link prediction and attribute inference using a social-attribute network. ACM TIST, 5(2), 2014.","journal-title":"ACM Transactions on Intelligent Systems and Technology"},{"key":"2_CR7","unstructured":"Neil Zhenqiang Gong and Bin Liu. You are who you know and how you behave: Attribute inference attacks via users\u2019 social friends and behaviors. In USENIX Security Symposium, 2016."},{"key":"2_CR8","unstructured":"Jinyuan Jia, Binghui Wang, Le Zhang, and Neil Zhenqiang Gong. AttriInfer: Inferring user attributes in online social networks using markov random fields. In WWW, 2017."},{"key":"2_CR9","doi-asserted-by":"crossref","unstructured":"Neil Zhenqiang Gong and Bin Liu. Attribute inference attacks in online social networks. ACM TOPS, 21(1), 2018.","DOI":"10.1145\/3154793"},{"key":"2_CR10","doi-asserted-by":"crossref","unstructured":"Yang Zhang, Mathias Humbert, Tahleen Rahman, Cheng-Te Li, Jun Pang, and Michael Backes. Tagvisor: A privacy advisor for sharing hashtags. 
In WWW, 2018.","DOI":"10.1145\/3178876.3186095"},{"key":"2_CR11","doi-asserted-by":"crossref","unstructured":"Arvind Narayanan, Hristo Paskov, Neil Zhenqiang Gong, John Bethencourt, Emil Stefanov, Eui Chul Richard Shin, and Dawn Song. On the feasibility of internet-scale author identification. In IEEE S&P, 2012.","DOI":"10.1109\/SP.2012.46"},{"issue":"1","key":"2_CR12","doi-asserted-by":"publisher","first-page":"200","DOI":"10.1109\/TIFS.2014.2368355","volume":"10","author":"Mathias Payer","year":"2015","unstructured":"Mathias Payer, Ling Huang, Neil Zhenqiang Gong, Kevin Borgolte, and Mario Frank. What you submit is who you are: A multi-modal approach for deanonymizing scientific publications. IEEE Transactions on Information Forensics and Security, 10(1), 2015.","journal-title":"IEEE Transactions on Information Forensics and Security"},{"key":"2_CR13","unstructured":"Aylin Caliskan-Islam, Richard Harang, Andrew Liu, Arvind Narayanan, Clare Voss, Fabian Yamaguchi, and Rachel Greenstadt. De-anonymizing programmers via code stylometry. In USENIX Security Symposium, 2015."},{"key":"2_CR14","doi-asserted-by":"crossref","unstructured":"Aylin Caliskan, Fabian Yamaguchi, Edwin Tauber, Richard Harang, Konrad Rieck, Rachel Greenstadt, and Arvind Narayanan. When coding style survives compilation: De-anonymizing programmers from executable binaries. In NDSS, 2018.","DOI":"10.14722\/ndss.2018.23304"},{"key":"2_CR15","unstructured":"Rakshith Shetty, Bernt Schiele, and Mario Fritz. A4nt: Author attribute anonymity by adversarial training of neural machine translation. In USENIX Security Symposium, 2018."},{"key":"2_CR16","doi-asserted-by":"crossref","unstructured":"Mohammed Abuhamad, Tamer AbuHmed, Aziz Mohaisen, and DaeHun Nyang. Large-scale and language-oblivious code authorship identification. In CCS, 2018.","DOI":"10.1145\/3243734.3243738"},{"key":"2_CR17","doi-asserted-by":"crossref","unstructured":"Dominik Herrmann, Rolf Wendolsky, and Hannes Federrath. 
Website fingerprinting: attacking popular privacy enhancing technologies with the multinomial na\u00efve-bayes classifier. In ACM Workshop on Cloud Computing Security, 2009.","DOI":"10.1145\/1655008.1655013"},{"key":"2_CR18","doi-asserted-by":"crossref","unstructured":"Andriy Panchenko, Lukas Niessen, Andreas Zinnen, and Thomas Engel. Website fingerprinting in onion routing based anonymization networks. In ACM workshop on Privacy in the Electronic Society, 2011.","DOI":"10.1145\/2046556.2046570"},{"key":"2_CR19","doi-asserted-by":"crossref","unstructured":"Xiang Cai, Xin Cheng Zhang, Brijesh Joshi, and Rob Johnson. Touching from a distance: Website fingerprinting attacks and defenses. In CCS, 2012.","DOI":"10.1145\/2382196.2382260"},{"key":"2_CR20","doi-asserted-by":"crossref","unstructured":"Marc Juarez, Sadia Afroz, Gunes Acar, Claudia Diaz, and Rachel Greenstadt. A critical evaluation of website fingerprinting attacks. In CCS, 2014.","DOI":"10.1145\/2660267.2660368"},{"key":"2_CR21","unstructured":"Tao Wang, Xiang Cai, Rishab Nithyanand, Rob Johnson, and Ian Goldberg. Effective attacks and provable defenses for website fingerprinting. In USENIX Security Symposium, 2014."},{"key":"2_CR22","unstructured":"Liran Lerman, Gianluca Bontempi, and Olivier Markowitch. Side channel attack: an approach based on machine learning. In COSADE, 2011."},{"key":"2_CR23","doi-asserted-by":"crossref","unstructured":"Yinqian Zhang, Ari Juels, Michael K. Reiter, and Thomas Ristenpart. Cross-vm side channels and their use to extract private keys. In CCS, 2012.","DOI":"10.1145\/2382196.2382230"},{"key":"2_CR24","doi-asserted-by":"crossref","unstructured":"Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. Membership Inference Attacks Against Machine Learning Models. In IEEE S&P, 2017.","DOI":"10.1109\/SP.2017.41"},{"key":"2_CR25","doi-asserted-by":"crossref","unstructured":"Milad Nasr, Reza Shokri, and Amir Houmansadr. 
Machine Learning with Membership Privacy using Adversarial Regularization. In CCS, 2018.","DOI":"10.1145\/3243734.3243855"},{"key":"2_CR26","doi-asserted-by":"crossref","unstructured":"Ahmed Salem, Yang Zhang, Mathias Humbert, Pascal Berrang, Mario Fritz, and Michael Backes. ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models. In NDSS, 2019.","DOI":"10.14722\/ndss.2019.23119"},{"key":"2_CR27","unstructured":"Y. Michalevsky, G. Nakibly, A. Schulman, and D. Boneh. Powerspy: Location tracking using mobile device power analysis. In USENIX Security Symposium, 2015."},{"key":"2_CR28","doi-asserted-by":"crossref","unstructured":"Sashank Narain, Triet D. Vo-Huu, Kenneth Block, and Guevara Noubir. Inferring user routes and locations using zero-permission mobile sensors. In IEEE S & P, 2016.","DOI":"10.1109\/SP.2016.31"},{"key":"2_CR29","unstructured":"Matthew Fredrikson, Eric Lantz, Somesh Jha, Simon Lin, David Page, and Thomas Ristenpart. Privacy in pharmacogenetics: An end-to-end case study of personalized warfarin dosing. In USENIX Security Symposium, 2014."},{"key":"2_CR30","doi-asserted-by":"crossref","unstructured":"S. Yeom, I. Giacomelli, M. Fredrikson, and S. Jha. Privacy risk in machine learning: Analyzing the connection to overfitting. In CSF, 2018.","DOI":"10.1109\/CSF.2018.00027"},{"key":"2_CR31","unstructured":"Guixin Ye, Zhanyong Tang, Dingyi Fang, Zhanxing Zhu, Yansong Feng, Pengfei Xu, Xiaojiang Chen, and Zheng Wang. Yet another text captcha solver: A generative adversarial network based approach. In CCS, 2018."},{"key":"2_CR32","doi-asserted-by":"crossref","unstructured":"Elie Bursztein, Romain Beauxis, Hristo Paskov, Daniele Perito, Celine Fabry, and John Mitchell. The failure of noise-based non-continuous audio captchas. In IEEE S & P, 2011.","DOI":"10.1109\/SP.2011.14"},{"key":"2_CR33","doi-asserted-by":"crossref","unstructured":"Elie Bursztein, Matthieu Martin, and John C. Mitchell. 
Text-based captcha strengths and weaknesses. In CCS, 2011.","DOI":"10.1145\/2046707.2046724"},{"key":"2_CR34","unstructured":"Cambridge Analytica. https:\/\/goo.gl\/PqRjjX , May 2018."},{"key":"2_CR35","doi-asserted-by":"crossref","unstructured":"Reza Shokri, George Theodorakopoulos, and Carmela Troncoso. Protecting location privacy: Optimal strategy against localization attacks. In CCS, 2012.","DOI":"10.1145\/2382196.2382261"},{"key":"2_CR36","doi-asserted-by":"crossref","unstructured":"Reza Shokri. Privacy games: Optimal user-centric data obfuscation. In PETS, 2015.","DOI":"10.1515\/popets-2015-0024"},{"key":"2_CR37","doi-asserted-by":"crossref","unstructured":"Reza Shokri, George Theodorakopoulos, and Carmela Troncoso. Privacy games along location traces: A game-theoretic framework for optimizing location privacy. ACM TOPS, 19(4), 2016.","DOI":"10.1145\/3009908"},{"key":"2_CR38","doi-asserted-by":"crossref","unstructured":"Fl\u00e1vio du Pin Calmon and Nadia Fawaz. Privacy against statistical inference. In Allerton, 2012.","DOI":"10.1109\/Allerton.2012.6483382"},{"key":"2_CR39","unstructured":"Jinyuan Jia and Neil Zhenqiang Gong. Attriguard: A practical defense against attribute inference attacks via adversarial machine learning. In USENIX Security Symposium, 2018."},{"key":"2_CR40","doi-asserted-by":"crossref","unstructured":"Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. In TCC, 2006.","DOI":"10.1007\/11681878_14"},{"issue":"309","key":"2_CR41","doi-asserted-by":"publisher","first-page":"63","DOI":"10.1080\/01621459.1965.10480775","volume":"60","author":"Stanley L. Warner","year":"1965","unstructured":"S. Warner. Randomized response: a survey technique for eliminating evasive answer bias. Journal of the American Statistical Association, 60(309), 1965.","journal-title":"Journal of the American Statistical Association"},{"key":"2_CR42","doi-asserted-by":"crossref","unstructured":"J. C. 
Duchi, M. I. Jordan, and M. J. Wainwright. Local privacy and statistical minimax rates. In FOCS, 2013.","DOI":"10.1109\/FOCS.2013.53"},{"key":"2_CR43","unstructured":"\u00dalfar Erlingsson, Vasyl Pihur, and Aleksandra Korolova. Rappor: Randomized aggregatable privacy-preserving ordinal response. In CCS, 2014."},{"key":"2_CR44","doi-asserted-by":"crossref","unstructured":"R. Bassily and A. D. Smith. Local, private, efficient protocols for succinct histograms. In STOC, 2015.","DOI":"10.1145\/2746539.2746632"},{"key":"2_CR45","unstructured":"Tianhao Wang, Jeremiah Blocki, Ninghui Li, and Somesh Jha. Locally differentially private protocols for frequency estimation. In USENIX Security Symposium, 2017."},{"key":"2_CR46","unstructured":"Jinyuan Jia and Neil Zhenqiang Gong. Calibrate: Frequency estimation and heavy hitter identification with local differential privacy via incorporating prior knowledge. In INFOCOM, 2019."},{"key":"2_CR47","doi-asserted-by":"crossref","unstructured":"Salman Salamatian, Amy Zhang, Flavio du Pin Calmon, Sandilya Bhamidipati, Nadia Fawaz, Branislav Kveton, Pedro Oliveira, and Nina Taft. Managing your private and public data: Bringing down inference attacks against your privacy. In IEEE Journal of Selected Topics in Signal Processing, 2015.","DOI":"10.1109\/JSTSP.2015.2442227"},{"key":"2_CR48","doi-asserted-by":"crossref","unstructured":"Marco Barreno, Blaine Nelson, Russell Sears, Anthony D Joseph, and J Doug Tygar. Can machine learning be secure? In ACM ASIACCS, 2006.","DOI":"10.1145\/1128817.1128824"},{"key":"2_CR49","doi-asserted-by":"crossref","unstructured":"Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim \u015arndi\u0107, Pavel Laskov, Giorgio Giacinto, and Fabio Roli. Evasion attacks against machine learning at test time. In ECML-PKDD, 2013.","DOI":"10.1007\/978-3-642-40994-3_25"},{"key":"2_CR50","unstructured":"Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. 
In ICLR, 2014."},{"key":"2_CR51","doi-asserted-by":"crossref","unstructured":"Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z Berkay Celik, and Ananthram Swami. Practical black-box attacks against machine learning. In AsiaCCS, 2017.","DOI":"10.1145\/3052973.3053009"},{"key":"2_CR52","unstructured":"Yanpei Liu, Xinyun Chen, Chang Liu, and Dawn Song. Delving into transferable adversarial examples and black-box attacks. In ICLR, 2017."},{"key":"2_CR53","doi-asserted-by":"crossref","unstructured":"Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In IEEE S & P, 2017.","DOI":"10.1109\/SP.2017.49"},{"key":"2_CR54","doi-asserted-by":"crossref","unstructured":"Nicolas Papernot, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z. Berkay Celik, and Ananthram Swami. The limitations of deep learning in adversarial settings. In EuroS&P, 2016.","DOI":"10.1109\/EuroSP.2016.36"},{"key":"2_CR55","doi-asserted-by":"crossref","unstructured":"Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, and Michael K. Reiter. Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition. In CCS, 2016.","DOI":"10.1145\/2976749.2978392"},{"key":"2_CR56","doi-asserted-by":"crossref","unstructured":"Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004.","DOI":"10.1017\/S275390670000070X"},{"key":"2_CR57","unstructured":"Neil Zhenqiang Gong, Wenchang Xu, Ling Huang, Prateek Mittal, Emil Stefanov, Vyas Sekar, and Dawn Song. Evolution of social-attribute networks: Measurements, modeling, and implications using google+. In IMC, 2012."},{"key":"2_CR58","doi-asserted-by":"crossref","unstructured":"Chong Huang, Peter Kairouz, Xiao Chen, Lalitha Sankar, and Ram Rajagopal. Generative adversarial privacy. In Privacy in Machine Learning and Artificial Intelligence Workshop, 2018.","DOI":"10.3390\/e19120656"},{"key":"2_CR59","doi-asserted-by":"crossref","unstructured":"Terence Chen, Roksana Boreli, Mohamed-Ali Kaafar, and Arik Friedman. 
On the effectiveness of obfuscation techniques in online social networks. In PETS, 2014.","DOI":"10.1007\/978-3-319-08506-7_3"},{"key":"2_CR60","unstructured":"cvxpy. https:\/\/www.cvxpy.org\/ , June 2019."},{"key":"2_CR61","unstructured":"Mehmet Sinan Inci, Thomas Eisenbarth, and Berk Sunar. Deepcloak: Adversarial crafting as a defensive measure to cloak processes. In arxiv, 2018."},{"key":"2_CR62","unstructured":"Mohsen Imani, Mohammad Saidur Rahman, Nate Mathews, and Matthew Wright. Mockingbird: Defending against deep-learning-based website fingerprinting attacks with adversarial traces. In arxiv, 2019."},{"key":"2_CR63","unstructured":"Xiaozhu Meng, Barton P Miller, and Somesh Jha. Adversarial binaries for authorship identification. In arxiv, 2018."},{"key":"2_CR64","unstructured":"Erwin Quiring, Alwin Maier, and Konrad Rieck. Misleading authorship attribution of source code using adversarial learning. In USENIX Security Symposium, 2019."},{"key":"2_CR65","unstructured":"Battista Biggio, Blaine Nelson, and Pavel Laskov. Poisoning attacks against support vector machines. In ICML, 2012."},{"key":"2_CR66","doi-asserted-by":"crossref","unstructured":"Matthew Jagielski, Alina Oprea, Battista Biggio, Chang Liu, Cristina Nita-Rotaru, and Bo Li. Manipulating machine learning: Poisoning attacks and countermeasures for regression learning. In IEEE S & P, 2018.","DOI":"10.1109\/SP.2018.00057"},{"key":"2_CR67","unstructured":"Bo Li, Yining Wang, Aarti Singh, and Yevgeniy Vorobeychik. Data poisoning attacks on factorization-based collaborative filtering. In NIPS, 2016."},{"key":"2_CR68","doi-asserted-by":"crossref","unstructured":"Guolei Yang, Neil Zhenqiang Gong, and Ying Cai. Fake co-visitation injection attacks to recommender systems. In NDSS, 2017.","DOI":"10.14722\/ndss.2017.23020"},{"key":"2_CR69","doi-asserted-by":"crossref","unstructured":"Luis Mu\u00f1oz-Gonz\u00e1lez, Battista Biggio, Ambra Demontis, Andrea Paudice, Vasin Wongrassamee, Emil C Lupu, and Fabio Roli. 
Towards poisoning of deep learning algorithms with back-gradient optimization. In AISec, 2017.","DOI":"10.1145\/3128572.3140451"},{"key":"2_CR70","unstructured":"Ali Shafahi, W Ronny Huang, Mahyar Najibi, Octavian Suciu, Christoph Studer, Tudor Dumitras, and Tom Goldstein. Poison frogs! targeted clean-label poisoning attacks on neural networks. In NeurIPS, 2018."},{"key":"2_CR71","unstructured":"Octavian Suciu, Radu Marginean, Yigitcan Kaya, Hal Daume III, and Tudor Dumitras. When does machine learning fail? generalized transferability for evasion and poisoning attacks. In Usenix Security Symposium, 2018."},{"key":"2_CR72","doi-asserted-by":"crossref","unstructured":"Minghong Fang, Guolei Yang, Neil Zhenqiang Gong, and Jia Liu. Poisoning attacks to graph-based recommender systems. In ACSAC, 2018.","DOI":"10.1145\/3274694.3274706"},{"key":"2_CR73","doi-asserted-by":"crossref","unstructured":"H. Yu, M. Kaminsky, P. B. Gibbons, and A. Flaxman. SybilGuard: Defending against Sybil attacks via social networks. In SIGCOMM, 2006.","DOI":"10.1145\/1159913.1159945"},{"key":"2_CR74","unstructured":"Qiang Cao, Michael Sirivianos, Xiaowei Yang, and Tiago Pregueiro. Aiding the detection of fake accounts in large scale social online services. In NSDI, 2012."},{"key":"2_CR75","unstructured":"Gang Wang, Tristan Konolige, Christo Wilson, and Xiao Wang. You are how you click: Clickstream analysis for sybil detection. In Usenix Security Symposium, 2013."},{"issue":"6","key":"2_CR76","doi-asserted-by":"publisher","first-page":"976","DOI":"10.1109\/TIFS.2014.2316975","volume":"9","author":"Neil Zhenqiang Gong","year":"2014","unstructured":"Neil Zhenqiang Gong, Mario Frank, and Prateek Mittal. Sybilbelief: A semi-supervised learning approach for structure-based sybil detection. 
IEEE Transactions on Information Forensics and Security, 9(6):976\u2013987, 2014.","journal-title":"IEEE Transactions on Information Forensics and Security"},{"key":"2_CR77","doi-asserted-by":"crossref","unstructured":"Binghui Wang, Le Zhang, and Neil Zhenqiang Gong. Sybilscar: Sybil detection in online social networks via local rule based propagation. In INFOCOM, 2017.","DOI":"10.1109\/INFOCOM.2017.8057066"},{"key":"2_CR78","doi-asserted-by":"crossref","unstructured":"Binghui Wang, Neil Zhenqiang Gong, and Hao Fu. Gang: Detecting fraudulent users in online social networks via guilt-by-association on directed graphs. In ICDM, 2017.","DOI":"10.1109\/ICDM.2017.56"},{"key":"2_CR79","doi-asserted-by":"crossref","unstructured":"Peng Gao, Binghui Wang, Neil Zhenqiang Gong, Sanjeev R. Kulkarni, Kurt Thomas, and Prateek Mittal. Sybilfuse: Combining local attributes with global structure to perform robust sybil detection. In CNS, 2018.","DOI":"10.1109\/CNS.2018.8433147"},{"key":"2_CR80","doi-asserted-by":"crossref","unstructured":"Binghui Wang, Le Zhang, and Neil Zhenqiang Gong. Sybilblind: Detecting fake users in online social networks without manual labels. In RAID, 2018.","DOI":"10.1007\/978-3-030-00470-5_11"},{"key":"2_CR81","doi-asserted-by":"crossref","unstructured":"Binghui Wang, Jinyuan Jia, and Neil Zhenqiang Gong. Graph-based security and privacy analytics via collective classification with joint weight learning and propagation. In NDSS, 2019.","DOI":"10.14722\/ndss.2019.23226"},{"key":"2_CR82","unstructured":"Zenghua Xia, Chang Liu, Neil Zhenqiang Gong, Qi Li, Yong Cui, and Dawn Song. Characterizing and detecting malicious accounts in privacy-centric mobile social networks: A case study. In KDD, 2019."},{"key":"2_CR83","unstructured":"Jan Hendrik Metzen, Tim Genewein, Volker Fischer, and Bastian Bischof. On detecting adversarial perturbations. In ICLR, 2017."},{"key":"2_CR84","unstructured":"Weilin Xu, David Evans, and Yanjun Qi. 
Feature squeezing: Detecting adversarial examples in deep neural networks. In NDSS, 2018."},{"key":"2_CR85","doi-asserted-by":"crossref","unstructured":"Dongyu Meng and Hao Chen. Magnet: a two-pronged defense against adversarial examples. In CCS, 2017.","DOI":"10.1145\/3133956.3134057"},{"key":"2_CR86","unstructured":"Warren He, Bo Li, and Dawn Song. Decision boundary analysis of adversarial examples. In ICLR, 2018."},{"key":"2_CR87","doi-asserted-by":"crossref","unstructured":"Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, and Ananthram Swami. Distillation as a defense to adversarial perturbations against deep neural networks. In IEEE S & P, 2016.","DOI":"10.1109\/SP.2016.41"},{"key":"2_CR88","unstructured":"Xiaoyu Cao and Neil Zhenqiang Gong. Mitigating evasion attacks to deep neural networks via region-based classification. In ACSAC, 2017."},{"key":"2_CR89","doi-asserted-by":"crossref","unstructured":"Mathias Lecuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel Hsu, and Suman Jana. Certified robustness to adversarial examples with differential privacy. In IEEE S & P, 2019.","DOI":"10.1109\/SP.2019.00044"},{"key":"2_CR90","unstructured":"Jeremy M Cohen, Elan Rosenfeld, and J. Zico Kolter. Certified adversarial robustness via randomized smoothing. In ICML, 2019."},{"key":"2_CR91","unstructured":"Shiqi Wang, Yizheng Chen, Ahmed Abdou, and Suman Jana. Mixtrain: Scalable training of verifiably robust neural networks. 
In arxiv, 2018."}],"container-title":["Adaptive Autonomous Secure Cyber Systems"],"original-title":[],"language":"en","link":[{"URL":"http:\/\/link.springer.com\/content\/pdf\/10.1007\/978-3-030-33432-1_2","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2022,10,14]],"date-time":"2022-10-14T11:11:48Z","timestamp":1665745908000},"score":1,"resource":{"primary":{"URL":"http:\/\/link.springer.com\/10.1007\/978-3-030-33432-1_2"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020]]},"ISBN":["9783030334314","9783030334321"],"references-count":91,"URL":"https:\/\/doi.org\/10.1007\/978-3-030-33432-1_2","relation":{},"subject":[],"published":{"date-parts":[[2020]]},"assertion":[{"value":"5 February 2020","order":1,"name":"first_online","label":"First Online","group":{"name":"ChapterHistory","label":"Chapter History"}}]}}