{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,13]],"date-time":"2026-04-13T12:57:35Z","timestamp":1776085055075,"version":"3.50.1"},"reference-count":125,"publisher":"Association for Computing Machinery (ACM)","issue":"5","license":[{"start":{"date-parts":[[2021,5,25]],"date-time":"2021-05-25T00:00:00Z","timestamp":1621900800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Comput. Surv."],"published-print":{"date-parts":[[2022,6,30]]},"abstract":"<jats:p>In recent years, machine learning algorithms, and more specifically deep learning algorithms, have been widely used in many fields, including cyber security. However, machine learning systems are vulnerable to adversarial attacks, and this limits the application of machine learning, especially in non-stationary, adversarial environments, such as the cyber security domain, where actual adversaries (e.g., malware developers) exist. This article comprehensively summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques and illuminates the risks they pose. First, the adversarial attack methods are characterized based on their stage of occurrence, and the attacker\u2019 s goals and capabilities. Then, we categorize the applications of adversarial attack and defense methods in the cyber security domain. Finally, we highlight some characteristics identified in recent research and discuss the impact of recent advancements in other adversarial learning domains on future research directions in the cyber security domain. To the best of our knowledge, this work is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain, map them in a unified taxonomy, and use the taxonomy to highlight future research directions.<\/jats:p>","DOI":"10.1145\/3453158","type":"journal-article","created":{"date-parts":[[2021,5,25]],"date-time":"2021-05-25T13:02:30Z","timestamp":1621947750000},"page":"1-36","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":222,"title":["Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain"],"prefix":"10.1145","volume":"54","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-3509-4329","authenticated-orcid":false,"given":"Ishai","family":"Rosenberg","sequence":"first","affiliation":[{"name":"Ben-Gurion University of the Negev"}]},{"given":"Asaf","family":"Shabtai","sequence":"additional","affiliation":[{"name":"Ben-Gurion University of the Negev"}]},{"given":"Yuval","family":"Elovici","sequence":"additional","affiliation":[{"name":"Ben-Gurion University of the Negev"}]},{"given":"Lior","family":"Rokach","sequence":"additional","affiliation":[{"name":"Ben-Gurion University of the Negev"}]}],"member":"320","published-online":{"date-parts":[[2021,5,25]]},"reference":[{"key":"e_1_2_2_1_1","unstructured":"Skylight. 2019. Cylance I Kill You! Retrieved August 24 2019 from https:\/\/skylightcyber.com\/2019\/07\/18\/cylance-i-kill-you.  Skylight. 2019. Cylance I Kill You! 
Retrieved August 24 2019 from https:\/\/skylightcyber.com\/2019\/07\/18\/cylance-i-kill-you."},{"key":"e_1_2_2_2_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICDCS.2019.00130"},{"key":"e_1_2_2_3_1","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2018.2807385"},{"key":"e_1_2_2_4_1","doi-asserted-by":"publisher","DOI":"10.1145\/3375708.3380315"},{"key":"e_1_2_2_5_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/D18-1316"},{"key":"e_1_2_2_6_1","volume-title":"Proc. of Big Data. 1168--1177","author":"Anand A.","unstructured":"A. Anand , K. Gorde , J. R. Antony Moniz , N. Park , T. Chakraborty , and B. Chu . 2018. Phishing URL detection with oversampling based on text generative adversarial networks . In Proc. of Big Data. 1168--1177 . A. Anand, K. Gorde, J. R. Antony Moniz, N. Park, T. Chakraborty, and B. Chu. 2018. Phishing URL detection with oversampling based on text generative adversarial networks. In Proc. of Big Data. 1168--1177."},{"key":"e_1_2_2_7_1","unstructured":"Hyrum S. Anderson Anant Kharkar Bobby Filar David Evans and Phil Roth. 2018. Learning to evade static PE machine learning malware models via reinforcement learning. arXiv:1801.08917  Hyrum S. Anderson Anant Kharkar Bobby Filar David Evans and Phil Roth. 2018. Learning to evade static PE machine learning malware models via reinforcement learning. arXiv:1801.08917"},{"key":"e_1_2_2_8_1","volume-title":"Proc. of Black Hat USA.","author":"Anderson Hyrum S.","year":"2017","unstructured":"Hyrum S. Anderson , Anant Kharkar , Bobby Filar , and Phil Roth . 2017 . Evading machine learning malware detection . In Proc. of Black Hat USA. Hyrum S. Anderson, Anant Kharkar, Bobby Filar, and Phil Roth. 2017. Evading machine learning malware detection. In Proc. of Black Hat USA."},{"key":"e_1_2_2_9_1","doi-asserted-by":"publisher","DOI":"10.1145\/2996758.2996767"},{"key":"e_1_2_2_10_1","volume-title":"Proc. of USENIX Security. 491--506","author":"Antonakakis Manos","year":"2012","unstructured":"Manos Antonakakis , Roberto Perdisci , Yacin Nadji , Nikolaos Vasiloglou , Saeed Abu-Nimeh , Wenke Lee , and David Dagon . 2012 . From throw-away traffic to bots: Detecting the rise of DGA-based malware . In Proc. of USENIX Security. 491--506 . Manos Antonakakis, Roberto Perdisci, Yacin Nadji, Nikolaos Vasiloglou, Saeed Abu-Nimeh, Wenke Lee, and David Dagon. 2012. From throw-away traffic to bots: Detecting the rise of DGA-based malware. In Proc. of USENIX Security. 491--506."},{"key":"e_1_2_2_11_1","doi-asserted-by":"publisher","DOI":"10.14722\/ndss.2014.23247"},{"key":"e_1_2_2_12_1","volume-title":"Wagner","author":"Athalye Anish","year":"2018","unstructured":"Anish Athalye , Nicholas Carlini , and David A . Wagner . 2018 . Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In Proc. of ICML. 274--283. Anish Athalye, Nicholas Carlini, and David A. Wagner. 2018. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In Proc. of ICML. 274--283."},{"key":"e_1_2_2_13_1","volume-title":"Proc. of eCrime. 1--8.","author":"Bahnsen A. C.","unstructured":"A. C. Bahnsen , E. C. Bohorquez , S. Villegas , J. Vargas , and F. A. Gonz\u00e1lez . 2017. Classifying phishing URLs using recurrent neural networks . In Proc. of eCrime. 1--8. A. C. Bahnsen, E. C. Bohorquez, S. Villegas, J. Vargas, and F. A. Gonz\u00e1lez. 2017. Classifying phishing URLs using recurrent neural networks. In Proc. of eCrime. 
1--8."},{"key":"e_1_2_2_14_1","volume-title":"Retrieved","author":"Bahnsen Alejandro Correa","year":"2018","unstructured":"Alejandro Correa Bahnsen , Ivan Torroledo , Luis David Camacho , and Sergio Villegas . 2018 . DeepPhish : Simulating Malicious AI . Retrieved March 29, 2021 from https:\/\/albahnsen.com\/2018\/06\/03\/deepphish-simulating-malicious-ai\/. Alejandro Correa Bahnsen, Ivan Torroledo, Luis David Camacho, and Sergio Villegas. 2018. DeepPhish : Simulating Malicious AI. Retrieved March 29, 2021 from https:\/\/albahnsen.com\/2018\/06\/03\/deepphish-simulating-malicious-ai\/."},{"key":"e_1_2_2_15_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10994-010-5188-5"},{"key":"e_1_2_2_16_1","volume-title":"Proc. of USENIX Security. 515--532","author":"Batina Lejla","year":"2019","unstructured":"Lejla Batina , Shivam Bhasin , Dirmanto Jap , and Stjepan Picek . 2019 . CSI NN: Reverse engineering of neural network architectures through electromagnetic side channel . In Proc. of USENIX Security. 515--532 . Lejla Batina, Shivam Bhasin, Dirmanto Jap, and Stjepan Picek. 2019. CSI NN: Reverse engineering of neural network architectures through electromagnetic side channel. In Proc. of USENIX Security. 515--532."},{"key":"e_1_2_2_17_1","volume-title":"A survey of deep learning methods for cyber security. Information 10 (April","author":"Berman Daniel","year":"2019","unstructured":"Daniel Berman , Anna Buczak , Jeffrey Chavis , and Cherita Corbett . 2019. A survey of deep learning methods for cyber security. Information 10 (April 2019 ), 122. Daniel Berman, Anna Buczak, Jeffrey Chavis, and Cherita Corbett. 2019. A survey of deep learning methods for cyber security. Information 10 (April 2019), 122."},{"key":"e_1_2_2_18_1","doi-asserted-by":"publisher","DOI":"10.1109\/TKDE.2013.57"},{"key":"e_1_2_2_19_1","doi-asserted-by":"publisher","DOI":"10.1145\/3243734.3264418"},{"key":"e_1_2_2_20_1","first-page":"1","article-title":"Static prediction games for adversarial learning problems","volume":"13","author":"Br\u00fcckner Michael","year":"2012","unstructured":"Michael Br\u00fcckner , Christian Kanzow , and Tobias Scheffer . 2012 . Static prediction games for adversarial learning problems . Journal of Machine Learning Research 13 , 1 (Sept. 2012), 2617--2654. Michael Br\u00fcckner, Christian Kanzow, and Tobias Scheffer. 2012. Static prediction games for adversarial learning problems. Journal of Machine Learning Research 13, 1 (Sept. 2012), 2617--2654.","journal-title":"Journal of Machine Learning Research"},{"key":"e_1_2_2_21_1","unstructured":"Wilson Cai Anish Doshi and Rafael Valle. 2018. Attacking speaker recognition with deep generative models. arXiv:1801.02384  Wilson Cai Anish Doshi and Rafael Valle. 2018. Attacking speaker recognition with deep generative models. arXiv:1801.02384"},{"key":"e_1_2_2_22_1","doi-asserted-by":"publisher","DOI":"10.1145\/3128572.3140444"},{"key":"e_1_2_2_23_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.cose.2017.11.007"},{"key":"e_1_2_2_24_1","unstructured":"Xinyun Chen Chang Liu Bo Li Kimberly Lu and Dawn Song. 2017. Targeted backdoor attacks on deep learning systems using data poisoning. arXiv:1712.05526  Xinyun Chen Chang Liu Bo Li Kimberly Lu and Dawn Song. 2017. Targeted backdoor attacks on deep learning systems using data poisoning. arXiv:1712.05526"},{"key":"e_1_2_2_25_1","volume-title":"Mok","author":"Chung Simon P.","year":"2006","unstructured":"Simon P. Chung and Aloysius K . Mok . 2006 . Allergy attack against automatic signature generation. 
In Recent Advances in Intrusion Detection. Lecture Notes in Computer Science, Vol. 4219 . Springer . 61--80. Simon P. Chung and Aloysius K. Mok. 2006. Allergy attack against automatic signature generation. In Recent Advances in Intrusion Detection. Lecture Notes in Computer Science, Vol. 4219. Springer. 61--80."},{"key":"e_1_2_2_26_1","doi-asserted-by":"publisher","DOI":"10.1109\/TrustCom\/BigDataSE.2018.00079"},{"key":"e_1_2_2_27_1","unstructured":"Joseph Clements Yuzhe Yang Ankur A. Sharma Hongxin Hu and Yingjie Lao. 2019. Rallying adversarial techniques against deep learning for network security. arXiv:1903.11688  Joseph Clements Yuzhe Yang Ankur A. Sharma Hongxin Hu and Yingjie Lao. 2019. Rallying adversarial techniques against deep learning for network security. arXiv:1903.11688"},{"key":"e_1_2_2_28_1","doi-asserted-by":"publisher","DOI":"10.1145\/3133956.3133978"},{"key":"e_1_2_2_29_1","volume-title":"Beyah","author":"Du Tianyu","year":"2019","unstructured":"Tianyu Du , Shouling Ji , Jinfeng Li , Qinchen Gu , Ting Wang , and Raheem A . Beyah . 2019 . SirenAttack : Generating adversarial audio for end-to-end acoustic systems. arXiv:1901.07846 Tianyu Du, Shouling Ji, Jinfeng Li, Qinchen Gu, Ting Wang, and Raheem A. Beyah. 2019. SirenAttack: Generating adversarial audio for end-to-end acoustic systems. arXiv:1901.07846"},{"key":"e_1_2_2_30_1","volume-title":"Balas","author":"Duddu Vasisht","year":"2018","unstructured":"Vasisht Duddu , Debasis Samanta , D. Vijay Rao , and Valentina E . Balas . 2018 . Stealing neural networks via timing side channels. arXiv:1812.11720 Vasisht Duddu, Debasis Samanta, D. Vijay Rao, and Valentina E. Balas. 2018. Stealing neural networks via timing side channels. arXiv:1812.11720"},{"key":"e_1_2_2_31_1","unstructured":"Alessandro Erba Riccardo Taormina Stefano Galelli Marcello Pogliani Michele Carminati Stefano Zanero and Nils Ole Tippenhauer. 2019. Real-time evasion attacks with physical constraints on deep learning-based anomaly detectors in industrial control systems. arXiv:1907.07487  Alessandro Erba Riccardo Taormina Stefano Galelli Marcello Pogliani Michele Carminati Stefano Zanero and Nils Ole Tippenhauer. 2019. Real-time evasion attacks with physical constraints on deep learning-based anomaly detectors in industrial control systems. arXiv:1907.07487"},{"key":"e_1_2_2_32_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00175"},{"key":"e_1_2_2_33_1","unstructured":"Cheng Feng Tingting Li Zhanxing Zhu and Deeph Chana. 2017. A deep learning-based framework for conducting stealthy attacks in industrial control systems. arXiv:1709.06397  Cheng Feng Tingting Li Zhanxing Zhu and Deeph Chana. 2017. A deep learning-based framework for conducting stealthy attacks in industrial control systems. arXiv:1709.06397"},{"key":"e_1_2_2_34_1","volume-title":"Proc. of SPW.","author":"Gao J.","unstructured":"J. Gao , J. Lanchantin , M. L. Soffa , and Y. Qi . 2018. Black-box generation of adversarial text sequences to evade deep learning classifiers . In Proc. of SPW. J. Gao, J. Lanchantin, M. L. Soffa, and Y. Qi. 2018. Black-box generation of adversarial text sequences to evade deep learning classifiers. In Proc. of SPW."},{"key":"e_1_2_2_35_1","doi-asserted-by":"publisher","DOI":"10.1145\/3359789.3359790"},{"key":"e_1_2_2_36_1","doi-asserted-by":"publisher","DOI":"10.1145\/3092566"},{"key":"e_1_2_2_37_1","volume-title":"Koutsoukos","author":"Ghafouri Amin","year":"2018","unstructured":"Amin Ghafouri , Yevgeniy Vorobeychik , and Xenofon D . Koutsoukos . 2018 . 
Adversarial regression for detecting attacks in cyber-physical systems. In Proc. of IJCAI. 3769--3775. Amin Ghafouri, Yevgeniy Vorobeychik, and Xenofon D. Koutsoukos. 2018. Adversarial regression for detecting attacks in cyber-physical systems. In Proc. of IJCAI. 3769--3775."},{"key":"e_1_2_2_38_1","unstructured":"Yuan Gong and Christian Poellabauer. 2017. Crafting adversarial examples for speech paralinguistics applications. arXiv:1711.03280  Yuan Gong and Christian Poellabauer. 2017. Crafting adversarial examples for speech paralinguistics applications. arXiv:1711.03280"},{"key":"e_1_2_2_39_1","volume-title":"Proc. of ICLR.","author":"Goodfellow I. J.","unstructured":"I. J. Goodfellow , J. Shlens , and C. Szegedy . 2015. Explaining and harnessing adversarial examples . In Proc. of ICLR. I. J. Goodfellow, J. Shlens, and C. Szegedy. 2015. Explaining and harnessing adversarial examples. In Proc. of ICLR."},{"key":"e_1_2_2_40_1","doi-asserted-by":"crossref","unstructured":"K. Grosse N. Papernot P. Manoharan M. Backes and P. McDaniel. 2016. Adversarial perturbations against deep neural networks for malware classification. arXiv:1606.04435  K. Grosse N. Papernot P. Manoharan M. Backes and P. McDaniel. 2016. Adversarial perturbations against deep neural networks for malware classification. arXiv:1606.04435","DOI":"10.1109\/SP.2016.41"},{"key":"e_1_2_2_41_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-66399-9_4"},{"key":"e_1_2_2_42_1","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2019.2909068"},{"key":"e_1_2_2_43_1","doi-asserted-by":"publisher","DOI":"10.1109\/TIFS.2019.2894031"},{"key":"e_1_2_2_44_1","doi-asserted-by":"publisher","DOI":"10.1145\/3270101.3270111"},{"key":"e_1_2_2_45_1","volume-title":"Proc. of WOOT.","author":"He Warren","year":"2017","unstructured":"Warren He , James Wei , Xinyun Chen , Nicholas Carlini , and Dawn Song . 2017 . Adversarial example defense: Ensembles of weak defenses are not strong . In Proc. of WOOT. Warren He, James Wei, Xinyun Chen, Nicholas Carlini, and Dawn Song. 2017. Adversarial example defense: Ensembles of weak defenses are not strong. In Proc. of WOOT."},{"key":"e_1_2_2_46_1","unstructured":"Weiwei Hu and Ying Tan. 2017. Black-box attacks against RNN based malware detection algorithms. arXiv:1705.08131  Weiwei Hu and Ying Tan. 2017. Black-box attacks against RNN based malware detection algorithms. arXiv:1705.08131"},{"key":"e_1_2_2_47_1","unstructured":"Weiwei Hu and Ying Tan. 2017. Generating adversarial malware examples for black-box attacks based on GAN. arXiv:1702.05983  Weiwei Hu and Ying Tan. 2017. Generating adversarial malware examples for black-box attacks based on GAN. arXiv:1702.05983"},{"key":"e_1_2_2_48_1","volume-title":"Proc. ofDAC. Article 4, 6 pages.","author":"Hua Weizhe","unstructured":"Weizhe Hua , Zhiru Zhang , and G. Edward Suh . 2018. Reverse engineering convolutional neural networks through side-channel information leaks . In Proc. ofDAC. Article 4, 6 pages. Weizhe Hua, Zhiru Zhang, and G. Edward Suh. 2018. Reverse engineering convolutional neural networks through side-channel information leaks. In Proc. ofDAC. Article 4, 6 pages."},{"key":"e_1_2_2_49_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-981-13-1059-1_17"},{"key":"e_1_2_2_50_1","volume-title":"Proc. of AISec. 43--58","author":"Huang Ling","unstructured":"Ling Huang , Anthony D. Joseph , Blaine Nelson , Benjamin I. P. Rubinstein , and J. D. Tygar . 2011. Adversarial machine learning . In Proc. of AISec. 43--58 . Ling Huang, Anthony D. 
Joseph, Blaine Nelson, Benjamin I. P. Rubinstein, and J. D. Tygar. 2011. Adversarial machine learning. In Proc. of AISec. 43--58."},{"key":"e_1_2_2_51_1","doi-asserted-by":"crossref","unstructured":"Olakunle Ibitoye M. Omair Shafiq and Ashraf Matrawy. 2019. Analyzing adversarial attacks against deep learning for intrusion detection in IoT networks. arXiv:1905.05137  Olakunle Ibitoye M. Omair Shafiq and Ashraf Matrawy. 2019. Analyzing adversarial attacks against deep learning for intrusion detection in IoT networks. arXiv:1905.05137","DOI":"10.1109\/GLOBECOM38437.2019.9014337"},{"key":"e_1_2_2_52_1","volume-title":"Dimakis","author":"Ilyas Andrew","year":"2017","unstructured":"Andrew Ilyas , Ajil Jalal , Eirini Asteri , Constantinos Daskalakis , and Alexandros G . Dimakis . 2017 . The robust manifold defense: Adversarial training using generative models. arXiv:1712.09196 Andrew Ilyas, Ajil Jalal, Eirini Asteri, Constantinos Daskalakis, and Alexandros G. Dimakis. 2017. The robust manifold defense: Adversarial training using generative models. arXiv:1712.09196"},{"key":"e_1_2_2_53_1","volume-title":"Kochenderfer","author":"Katz Guy","year":"2017","unstructured":"Guy Katz , Clark Barrett , David L. Dill , Kyle Julian , and Mykel J . Kochenderfer . 2017 . Reluplex : An efficient SMT solver for verifying deep neural networks. In Proc. of CAV. 97--117. Guy Katz, Clark Barrett, David L. Dill, Kyle Julian, and Mykel J. Kochenderfer. 2017. Reluplex: An efficient SMT solver for verifying deep neural networks. In Proc. of CAV. 97--117."},{"key":"e_1_2_2_54_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.eswa.2017.09.053"},{"key":"e_1_2_2_55_1","volume-title":"Proc. of NIPS. 971--980","author":"Klambauer G\u00fcnter","year":"2017","unstructured":"G\u00fcnter Klambauer , Thomas Unterthiner , Andreas Mayr , and Sepp Hochreiter . 2017 . Self-normalizing neural networks . In Proc. of NIPS. 971--980 . G\u00fcnter Klambauer, Thomas Unterthiner, Andreas Mayr, and Sepp Hochreiter. 2017. Self-normalizing neural networks. In Proc. of NIPS. 971--980."},{"key":"e_1_2_2_56_1","volume-title":"Proc. of USENIX Security. 351--366","author":"Kolbitsch Clemens","year":"2009","unstructured":"Clemens Kolbitsch , Paolo Milani Comparetti , Christopher Kruegel , Engin Kirda , Xiaoyong Zhou , and XiaoFeng Wang . 2009 . Effective and efficient malware detection at the end host . In Proc. of USENIX Security. 351--366 . Clemens Kolbitsch, Paolo Milani Comparetti, Christopher Kruegel, Engin Kirda, Xiaoyong Zhou, and XiaoFeng Wang. 2009. Effective and efficient malware detection at the end host. In Proc. of USENIX Security. 351--366."},{"key":"e_1_2_2_57_1","doi-asserted-by":"publisher","DOI":"10.23919\/EUSIPCO.2018.8553214"},{"key":"e_1_2_2_58_1","doi-asserted-by":"crossref","unstructured":"Moshe Kravchik and Asaf Shabtai. 2019. Efficient cyber attacks detection in industrial control systems using lightweight neural networks. arXiv:1907.01216  Moshe Kravchik and Asaf Shabtai. 2019. Efficient cyber attacks detection in industrial control systems using lightweight neural networks. arXiv:1907.01216","DOI":"10.1145\/3264888.3264896"},{"key":"e_1_2_2_59_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP.2018.8462693"},{"key":"e_1_2_2_60_1","unstructured":"Felix Kreuk Assi Barak Shir Aviv-Reuven Moran Baruch Benny Pinkas and Joseph Keshet. 2018. Adversarial examples on discrete sequences for beating whole-binary malware detection. 
arXiv:1802.04528  Felix Kreuk Assi Barak Shir Aviv-Reuven Moran Baruch Benny Pinkas and Joseph Keshet. 2018. Adversarial examples on discrete sequences for beating whole-binary malware detection. arXiv:1802.04528"},{"key":"e_1_2_2_61_1","unstructured":"Volodymyr Kuleshov Shantanu Thakoor Tingfung Lau and Stefano Ermon. 2018. Adversarial examples for natural language classification problems. Unpublished Manuscript.  Volodymyr Kuleshov Shantanu Thakoor Tingfung Lau and Stefano Ermon. 2018. Adversarial examples for natural language classification problems. Unpublished Manuscript."},{"key":"e_1_2_2_62_1","first-page":"1","article-title":"Black box attacks on deep anomaly detectors","volume":"21","author":"Kuppa Aditya","year":"2019","unstructured":"Aditya Kuppa , Slawomir Grzonkowski , Muhammad Rizwan Asghar , and Nhien-An Le-Khac . 2019 . Black box attacks on deep anomaly detectors . In Proc. ofARES. 21 : 1 -- 21 :10. Aditya Kuppa, Slawomir Grzonkowski, Muhammad Rizwan Asghar, and Nhien-An Le-Khac. 2019. Black box attacks on deep anomaly detectors. In Proc. ofARES. 21:1--21:10.","journal-title":"Proc. ofARES."},{"key":"e_1_2_2_63_1","volume-title":"Proc. of IJCNN. 1--8.","author":"Kuppa A.","unstructured":"A. Kuppa and N. A. Le-Khac . 2020. Black box attacks on explainable artificial intelligence (XAI) methods in cyber security . In Proc. of IJCNN. 1--8. A. Kuppa and N. A. Le-Khac. 2020. Black box attacks on explainable artificial intelligence (XAI) methods in cyber security. In Proc. of IJCNN. 1--8."},{"key":"e_1_2_2_64_1","unstructured":"Qi Lei Lingfei Wu Pin-Yu Chen Alexandros G. Dimakis Inderjit S. Dhillon and Michael Witbrock. 2018. Discrete attacks and submodular optimization with applications to text classification. arXiv:1812.00151  Qi Lei Lingfei Wu Pin-Yu Chen Alexandros G. Dimakis Inderjit S. Dhillon and Michael Witbrock. 2018. Discrete attacks and submodular optimization with applications to text classification. arXiv:1812.00151"},{"key":"e_1_2_2_65_1","volume-title":"Yingyuan Yang, Jinyuan Stella Sun, and Kevin Tomsovic.","author":"Li Jiangnan","year":"2020","unstructured":"Jiangnan Li , Jin Young Lee , Yingyuan Yang, Jinyuan Stella Sun, and Kevin Tomsovic. 2020 . ConAML : Constrained adversarial machine learning for cyber-physical systems. arXiv:3004.05631 Jiangnan Li, Jin Young Lee, Yingyuan Yang, Jinyuan Stella Sun, and Kevin Tomsovic. 2020. ConAML: Constrained adversarial machine learning for cyber-physical systems. arXiv:3004.05631"},{"key":"e_1_2_2_66_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.ins.2020.02.075"},{"key":"e_1_2_2_67_1","volume-title":"IDSGAN: Generative adversarial networks for attack generation against intrusion detection. arXiv:1809.02077","author":"Lin Zilong","year":"2018","unstructured":"Zilong Lin , Yong Shi , and Zhi Xue . 2018 . IDSGAN: Generative adversarial networks for attack generation against intrusion detection. arXiv:1809.02077 Zilong Lin, Yong Shi, and Zhi Xue. 2018. IDSGAN: Generative adversarial networks for attack generation against intrusion detection. arXiv:1809.02077"},{"key":"e_1_2_2_68_1","doi-asserted-by":"publisher","DOI":"10.3390\/s19040974"},{"key":"e_1_2_2_69_1","doi-asserted-by":"publisher","DOI":"10.14722\/ndss.2018.23291"},{"key":"e_1_2_2_70_1","volume-title":"Torr","author":"Liu Zhengzhe","year":"2020","unstructured":"Zhengzhe Liu , Xiaojuan Qi , and Philip H. S . Torr . 2020 . Global texture enhancement for fake face detection in the wild. In Proc. of CVPR. Zhengzhe Liu, Xiaojuan Qi, and Philip H. S. Torr. 2020. 
Global texture enhancement for fake face detection in the wild. In Proc. of CVPR."},{"key":"e_1_2_2_71_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-28166-7_24"},{"key":"e_1_2_2_72_1","doi-asserted-by":"publisher","DOI":"10.14722\/ndss.2018.23204"},{"key":"e_1_2_2_73_1","volume-title":"Proc. ofAISec. 27--38","author":"Gonz\u00e1lez Luis Mu\u00f1oz","year":"2017","unstructured":"Luis Mu\u00f1oz Gonz\u00e1lez , Battista Biggio , Ambra Demontis , Andrea Paudice , Vasin Wongrassamee , Emil C. Lupu , and Fabio Roli . 2017 . Towards poisoning of deep learning algorithms with back-gradient optimization . In Proc. ofAISec. 27--38 . Luis Mu\u00f1oz Gonz\u00e1lez, Battista Biggio, Ambra Demontis, Andrea Paudice, Vasin Wongrassamee, Emil C. Lupu, and Fabio Roli. 2017. Towards poisoning of deep learning algorithms with back-gradient optimization. In Proc. ofAISec. 27--38."},{"key":"e_1_2_2_74_1","volume-title":"Proc. of LEET.","author":"Nelson Blaine","year":"2008","unstructured":"Blaine Nelson , Marco Barreno , Fuching Jack Chi , Anthony D. Joseph , Benjamin I. P. Rubinstein , Udam Saini , Charles A. Sutton , J. Doug Tygar , and Kai Xia . 2008 . Exploiting machine learning to subvert your spam filter . In Proc. of LEET. Blaine Nelson, Marco Barreno, Fuching Jack Chi, Anthony D. Joseph, Benjamin I. P. Rubinstein, Udam Saini, Charles A. Sutton, J. Doug Tygar, and Kai Xia. 2008. Exploiting machine learning to subvert your spam filter. In Proc. of LEET."},{"key":"e_1_2_2_75_1","doi-asserted-by":"publisher","DOI":"10.1109\/LSP.2019.2906826"},{"key":"e_1_2_2_76_1","volume-title":"Proc. of BMVC.","author":"Parkhi O. M.","unstructured":"O. M. Parkhi , A. Vedaldi , and A. Zisserman . 2015. Deep face recognition . In Proc. of BMVC. O. M. Parkhi, A. Vedaldi, and A. Zisserman. 2015. Deep face recognition. In Proc. of BMVC."},{"key":"e_1_2_2_77_1","volume-title":"Proc. ofNIPS.","author":"Peck Jonathan","year":"2017","unstructured":"Jonathan Peck , Joris Roels , Bart Goossens , and Yvan Saeys . 2017 . Lower bounds on the robustness to adversarial perturbations . In Proc. ofNIPS. Jonathan Peck, Joris Roels, Bart Goossens, and Yvan Saeys. 2017. Lower bounds on the robustness to adversarial perturbations. In Proc. ofNIPS."},{"key":"e_1_2_2_78_1","volume-title":"Proc. of CV-COPS.","author":"Prabhu Vinay Uday","year":"2017","unstructured":"Vinay Uday Prabhu and John Whaley . 2017 . Vulnerability of deep learning-based gait biometric recognition to adversarial perturbations . In Proc. of CV-COPS. Vinay Uday Prabhu and John Whaley. 2017. Vulnerability of deep learning-based gait biometric recognition to adversarial perturbations. In Proc. of CV-COPS."},{"key":"e_1_2_2_79_1","volume-title":"Review of artificial intelligence adversarial attack and defense technologies. Applied Sciences 9 (March","author":"Qiu Shilin","year":"2019","unstructured":"Shilin Qiu , Qihe Liu , Shijie Zhou , and Chunjiang Wu. 2019. Review of artificial intelligence adversarial attack and defense technologies. Applied Sciences 9 (March 2019 ), 909. Shilin Qiu, Qihe Liu, Shijie Zhou, and Chunjiang Wu. 2019. Review of artificial intelligence adversarial attack and defense technologies. Applied Sciences 9 (March 2019), 909."},{"key":"e_1_2_2_80_1","volume-title":"Proc. of AAAI Workshops. 268--276","author":"Raff Edward","unstructured":"Edward Raff , Jon Barker , Jared Sylvester , Robert Brandon , Bryan Catanzaro , and Charles K. Nicholas . 2018. Malware detection by eating a whole EXE . In Proc. of AAAI Workshops. 268--276 . 
Edward Raff, Jon Barker, Jared Sylvester, Robert Brandon, Bryan Catanzaro, and Charles K. Nicholas. 2018. Malware detection by eating a whole EXE. In Proc. of AAAI Workshops. 268--276."},{"key":"e_1_2_2_81_1","doi-asserted-by":"publisher","DOI":"10.5555\/2011216.2011217"},{"key":"e_1_2_2_82_1","volume-title":"Proc. of NATO IST-152","author":"Rigaki Maria","year":"2017","unstructured":"Maria Rigaki and Ahmed Elragal . 2017 . Adversarial deep learning against intrusion detection classifiers . In Proc. of NATO IST-152 . Maria Rigaki and Ahmed Elragal. 2017. Adversarial deep learning against intrusion detection classifiers. In Proc. of NATO IST-152."},{"key":"e_1_2_2_83_1","first-page":"107","article-title":"A statistical approach to the spam problem","volume":"2003","author":"Robinson Gary","year":"2003","unstructured":"Gary Robinson . 2003 . A statistical approach to the spam problem . Linux Journal 2003 , 107 (Jan. 2003), 3. Gary Robinson. 2003. A statistical approach to the spam problem. Linux Journal 2003, 107 (Jan. 2003), 3.","journal-title":"Linux Journal"},{"key":"e_1_2_2_84_1","first-page":"16","article-title":"Bypassing system calls-based intrusion detection systems","volume":"29","author":"Rosenberg Ishai","year":"2016","unstructured":"Ishai Rosenberg and Ehud Gudes . 2016 . Bypassing system calls-based intrusion detection systems . Concurrency and Computation: Practice and Experience 29 , 16 (Nov. 2016), e4023. Ishai Rosenberg and Ehud Gudes. 2016. Bypassing system calls-based intrusion detection systems. Concurrency and Computation: Practice and Experience 29, 16 (Nov. 2016), e4023.","journal-title":"Concurrency and Computation: Practice and Experience"},{"key":"e_1_2_2_85_1","volume-title":"Proc. of Black Hat Europe.","author":"Rosenberg Ishai","year":"2020","unstructured":"Ishai Rosenberg and Shai Meir . 2020 . Bypassing NGAV for fun and profit . In Proc. of Black Hat Europe. Ishai Rosenberg and Shai Meir. 2020. Bypassing NGAV for fun and profit. In Proc. of Black Hat Europe."},{"key":"e_1_2_2_86_1","volume-title":"Proc. ofIJCNN. 1--10","author":"Rosenberg I.","unstructured":"I. Rosenberg , S. Meir , J. Berrebi , I. Gordon , G. Sicard , and E. Omid David . 2020. Generating end-to-end adversarial examples for malware classifiers using explainability . In Proc. ofIJCNN. 1--10 . I. Rosenberg, S. Meir, J. Berrebi, I. Gordon, G. Sicard, and E. Omid David. 2020. Generating end-to-end adversarial examples for malware classifiers using explainability. In Proc. ofIJCNN. 1--10."},{"key":"e_1_2_2_87_1","unstructured":"Ishai Rosenberg Asaf Shabtai Yuval Elovici and Lior Rokach. 2018. Query-efficient black-box attack against sequence-based malware classifiers. arXiv:1804.08778  Ishai Rosenberg Asaf Shabtai Yuval Elovici and Lior Rokach. 2018. Query-efficient black-box attack against sequence-based malware classifiers. arXiv:1804.08778"},{"key":"e_1_2_2_88_1","unstructured":"Ishai Rosenberg Asaf Shabtai Yuval Elovici and Lior Rokach. 2019. Defense methods against adversarial examples for recurrent neural networks. arXiv:1901.09963  Ishai Rosenberg Asaf Shabtai Yuval Elovici and Lior Rokach. 2019. Defense methods against adversarial examples for recurrent neural networks. arXiv:1901.09963"},{"key":"e_1_2_2_89_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-00470-5_23"},{"key":"e_1_2_2_90_1","volume-title":"Proc. of AAAI.","author":"Ross Andrew","year":"2018","unstructured":"Andrew Ross and Finale Doshi-Velez . 2018 . 
Improving the adversarial robustness and interpretability of deep neural networks by regularizing their input gradients . In Proc. of AAAI. Andrew Ross and Finale Doshi-Velez. 2018. Improving the adversarial robustness and interpretability of deep neural networks by regularizing their input gradients. In Proc. of AAAI."},{"key":"e_1_2_2_91_1","doi-asserted-by":"publisher","DOI":"10.1109\/MALWARE.2015.7413680"},{"key":"e_1_2_2_92_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-08509-8_11"},{"key":"e_1_2_2_93_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2018.02.007"},{"key":"e_1_2_2_94_1","volume-title":"Reiter","author":"Sharif Mahmood","year":"2016","unstructured":"Mahmood Sharif , Sruti Bhagavatula , Lujo Bauer , and Michael K . Reiter . 2016 . Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition. In Proc . ofCCS. 1528--1540. Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, and Michael K. Reiter. 2016. Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition. In Proc. ofCCS. 1528--1540."},{"key":"e_1_2_2_95_1","series-title":"Lecture Notes in Computer Science","volume-title":"Data and Applications Security and Privacy XXXIII","author":"Shirazi Hossein","unstructured":"Hossein Shirazi , Bruhadeshwar Bezawada , Indrakshi Ray , and Charles Anderson . 2019. Adversarial sampling attacks against phishing detection . In Data and Applications Security and Privacy XXXIII . Lecture Notes in Computer Science , Vol. 11559 . Springer , 83--101. Hossein Shirazi, Bruhadeshwar Bezawada, Indrakshi Ray, and Charles Anderson. 2019. Adversarial sampling attacks against phishing detection. In Data and Applications Security and Privacy XXXIII. Lecture Notes in Computer Science, Vol. 11559. Springer, 83--101."},{"key":"e_1_2_2_96_1","doi-asserted-by":"crossref","unstructured":"Ilia Shumailov Yiren Zhao Daniel Bates Nicolas Papernot Robert Mullins and Ross Anderson. 2020. Sponge examples: Energy-latency attacks on neural networks. arXiv:2006.03463  Ilia Shumailov Yiren Zhao Daniel Bates Nicolas Papernot Robert Mullins and Ross Anderson. 2020. Sponge examples: Energy-latency attacks on neural networks. arXiv:2006.03463","DOI":"10.1109\/EuroSP51992.2021.00024"},{"key":"e_1_2_2_97_1","doi-asserted-by":"crossref","unstructured":"Lior Sidi Asaf Nadler and Asaf Shabtai. 2019. MaskDGA: A black-box evasion technique against DGA classifiers and adversarial defenses. arXiv:1902.08909  Lior Sidi Asaf Nadler and Asaf Shabtai. 2019. MaskDGA: A black-box evasion technique against DGA classifiers and adversarial defenses. arXiv:1902.08909","DOI":"10.1109\/ACCESS.2020.3020964"},{"key":"e_1_2_2_98_1","doi-asserted-by":"crossref","unstructured":"Sobhan Soleymani Ali Dabouei J. Dawson and N.M. Nasrabadi. 2019. Defending against adversarial iris examples using wavelet decomposition. arXiv:1908.03176  Sobhan Soleymani Ali Dabouei J. Dawson and N.M. Nasrabadi. 2019. Defending against adversarial iris examples using wavelet decomposition. arXiv:1908.03176","DOI":"10.1109\/BTAS46853.2019.9186006"},{"key":"e_1_2_2_99_1","volume-title":"Nasrabadi","author":"Soleymani Sobhan","year":"2019","unstructured":"Sobhan Soleymani , Ali Dabouei , Jeremy Dawson , and Nasser M . Nasrabadi . 2019 . Adversarial examples to fool iris recognition systems. arXiv:1906.09300 Sobhan Soleymani, Ali Dabouei, Jeremy Dawson, and Nasser M. Nasrabadi. 2019. Adversarial examples to fool iris recognition systems. 
arXiv:1906.09300"},{"key":"e_1_2_2_100_1","doi-asserted-by":"publisher","DOI":"10.1109\/INDIN.2018.8472060"},{"key":"e_1_2_2_101_1","volume-title":"Proc. of SP. 197--211","author":"Srndic Nedim","year":"2014","unstructured":"Nedim Srndic and Pavel Laskov . 2014 . Practical evasion of a learning-based classifier: A case study . In Proc. of SP. 197--211 . Nedim Srndic and Pavel Laskov. 2014. Practical evasion of a learning-based classifier: A case study. In Proc. of SP. 197--211."},{"key":"e_1_2_2_102_1","unstructured":"Rock Stevens Octavian Suciu Andrew Ruef Sanghyun Hong Michael W. Hicks and Tudor Dumitras. 2017. Summoning demons: The pursuit of exploitable bugs in machine learning. arXiv:1701.04739  Rock Stevens Octavian Suciu Andrew Ruef Sanghyun Hong Michael W. Hicks and Tudor Dumitras. 2017. Summoning demons: The pursuit of exploitable bugs in machine learning. arXiv:1701.04739"},{"key":"e_1_2_2_103_1","doi-asserted-by":"crossref","unstructured":"Jack W. Stokes De Wang Mady Marinescu Marc Marino and Brian Bussone. 2017. Attack and defense of dynamic analysis-based adversarial neural malware classification models. arXiv:1712.05919  Jack W. Stokes De Wang Mady Marinescu Marc Marino and Brian Bussone. 2017. Attack and defense of dynamic analysis-based adversarial neural malware classification models. arXiv:1712.05919","DOI":"10.1109\/MILCOM.2018.8599855"},{"key":"e_1_2_2_104_1","volume-title":"Proc. of ALEC. 11--16","author":"Suciu Octavian","year":"2018","unstructured":"Octavian Suciu , Scott E. Coull , and Jeffrey Johns . 2018 . Exploring adversarial examples in malware detection . In Proc. of ALEC. 11--16 . Octavian Suciu, Scott E. Coull, and Jeffrey Johns. 2018. Exploring adversarial examples in malware detection. In Proc. of ALEC. 11--16."},{"key":"e_1_2_2_105_1","volume-title":"Proc. of USENIX Security. 1299--1316","author":"Suciu Octavian","year":"2018","unstructured":"Octavian Suciu , Radu Marginean , Yigitcan Kaya , Hal Daume III, and Tudor Dumitras . 2018 . When does machine learning FAIL? Generalized transferability for evasion and poisoning attacks . In Proc. of USENIX Security. 1299--1316 . Octavian Suciu, Radu Marginean, Yigitcan Kaya, Hal Daume III, and Tudor Dumitras. 2018. When does machine learning FAIL? Generalized transferability for evasion and poisoning attacks. In Proc. of USENIX Security. 1299--1316."},{"key":"e_1_2_2_106_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2014.244"},{"key":"e_1_2_2_107_1","volume-title":"RazorNet: Adversarial training and noise training on a deep neural network fooled by a shallow neural network. Big Data and Cognitive Computing 3 (July","author":"Taheri Shayan","year":"2019","unstructured":"Shayan Taheri , Milad Salem , and Jiann-Shiun Yuan . 2019. RazorNet: Adversarial training and noise training on a deep neural network fooled by a shallow neural network. Big Data and Cognitive Computing 3 (July 2019 ), 43. Shayan Taheri, Milad Salem, and Jiann-Shiun Yuan. 2019. RazorNet: Adversarial training and noise training on a deep neural network fooled by a shallow neural network. Big Data and Cognitive Computing 3 (July 2019), 43."},{"key":"e_1_2_2_108_1","unstructured":"Florian Tramer Nicholas Carlini Wieland Brendel and Aleksander Madry. 2020. On adaptive attacks to adversarial example defenses. arxiv:2002.08347  Florian Tramer Nicholas Carlini Wieland Brendel and Aleksander Madry. 2020. On adaptive attacks to adversarial example defenses. arxiv:2002.08347"},{"key":"e_1_2_2_109_1","volume-title":"Proc. of USENIX Security. 
601--618","author":"Tram\u00e8r Florian","year":"2016","unstructured":"Florian Tram\u00e8r , Fan Zhang , Ari Juels , Michael K. Reiter , and Thomas Ristenpart . 2016 . Stealing machine learning models via prediction APIs . In Proc. of USENIX Security. 601--618 . Florian Tram\u00e8r, Fan Zhang, Ari Juels, Michael K. Reiter, and Thomas Ristenpart. 2016. Stealing machine learning models via prediction APIs. In Proc. of USENIX Security. 601--618."},{"key":"e_1_2_2_110_1","doi-asserted-by":"publisher","DOI":"10.1145\/3308897.3308959"},{"key":"e_1_2_2_111_1","doi-asserted-by":"publisher","DOI":"10.1145\/2699026.2699115"},{"key":"e_1_2_2_112_1","volume-title":"Proc. of SP. 36--52","author":"Wang B.","unstructured":"B. Wang and N. Z. Gong . 2018. Stealing hyperparameters in machine learning . In Proc. of SP. 36--52 . B. Wang and N. Z. Gong. 2018. Stealing hyperparameters in machine learning. In Proc. of SP. 36--52."},{"key":"e_1_2_2_113_1","volume-title":"Zhao","author":"Wang Bolun","year":"2019","unstructured":"Bolun Wang , Yuanshun Yao , Shawn Shan , Huiying Li , Bimal Viswanath , Haitao Zheng , and Ben Y . Zhao . 2019 . Neural cleanse: Identifying and mitigating backdoor attacks in neural networks. In Proc . ofSP. Bolun Wang, Yuanshun Yao, Shawn Shan, Huiying Li, Bimal Viswanath, Haitao Zheng, and Ben Y. Zhao. 2019. Neural cleanse: Identifying and mitigating backdoor attacks in neural networks. In Proc. ofSP."},{"key":"e_1_2_2_114_1","volume-title":"Zhao","author":"Wang Bolun","year":"2018","unstructured":"Bolun Wang , Yuanshun Yao , Bimal Viswanath , Haitao Zheng , and Ben Y . Zhao . 2018 . With great training comes great vulnerability: Practical attacks against transfer learning. In Proc. of USENIX Security . 1281--1297. Bolun Wang, Yuanshun Yao, Bimal Viswanath, Haitao Zheng, and Ben Y. Zhao. 2018. With great training comes great vulnerability: Practical attacks against transfer learning. In Proc. of USENIX Security. 1281--1297."},{"key":"e_1_2_2_115_1","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2018.2854599"},{"key":"e_1_2_2_116_1","doi-asserted-by":"publisher","DOI":"10.1109\/INISTA.2018.8466271"},{"key":"e_1_2_2_117_1","volume-title":"Proc. of ICLR.","author":"Weng Tsui-Wei","year":"2018","unstructured":"Tsui-Wei Weng , Huan Zhang , Pin-Yu Chen , Jinfeng Yi , Dong Su , Yupeng Gao , Cho-Jui Hsieh , and Luca Daniel . 2018 . Evaluating the robustness of neural networks: An extreme value theory approach . In Proc. of ICLR. Tsui-Wei Weng, Huan Zhang, Pin-Yu Chen, Jinfeng Yi, Dong Su, Yupeng Gao, Cho-Jui Hsieh, and Luca Daniel. 2018. Evaluating the robustness of neural networks: An extreme value theory approach. In Proc. of ICLR."},{"key":"e_1_2_2_118_1","volume-title":"Proc. of SPW. 123--128","author":"Xiao Q.","unstructured":"Q. Xiao , K. Li , D. Zhang , and W. Xu . 2018. Security risks in deep learning implementations . In Proc. of SPW. 123--128 . Q. Xiao, K. Li, D. Zhang, and W. Xu. 2018. Security risks in deep learning implementations. In Proc. of SPW. 123--128."},{"key":"e_1_2_2_119_1","volume-title":"Proc. of ACMAC.","author":"Xu Peng","year":"2020","unstructured":"Peng Xu , Bojan Kolosnjaji , Claudia Eckert , and Apostolis Zarras . 2020 . MANIS: Evading malware detection system on graph structure . In Proc. of ACMAC. Peng Xu, Bojan Kolosnjaji, Claudia Eckert, and Apostolis Zarras. 2020. MANIS: Evading malware detection system on graph structure. In Proc. 
of ACMAC."},{"key":"e_1_2_2_120_1","doi-asserted-by":"publisher","DOI":"10.14722\/ndss.2018.23198"},{"key":"e_1_2_2_121_1","doi-asserted-by":"publisher","DOI":"10.14722\/ndss.2016.23115"},{"key":"e_1_2_2_122_1","doi-asserted-by":"publisher","DOI":"10.5555\/2428696.2428722"},{"key":"e_1_2_2_123_1","doi-asserted-by":"publisher","DOI":"10.1145\/1879141.1879148"},{"key":"e_1_2_2_124_1","doi-asserted-by":"publisher","DOI":"10.1145\/3302504.3311814"},{"key":"e_1_2_2_125_1","volume-title":"Proc. of MILCOM. 559--564","author":"Yang K.","unstructured":"K. Yang , J. Liu , C. Zhang , and Y. Fang . 2018. Adversarial examples against the deep learning based network intrusion detection systems . In Proc. of MILCOM. 559--564 . K. Yang, J. Liu, C. Zhang, and Y. Fang. 2018. Adversarial examples against the deep learning based network intrusion detection systems. In Proc. of MILCOM. 559--564."}],"container-title":["ACM Computing Surveys"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3453158","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3453158","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T21:28:39Z","timestamp":1750195719000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3453158"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,5,25]]},"references-count":125,"journal-issue":{"issue":"5","published-print":{"date-parts":[[2022,6,30]]}},"alternative-id":["10.1145\/3453158"],"URL":"https:\/\/doi.org\/10.1145\/3453158","relation":{},"ISSN":["0360-0300","1557-7341"],"issn-type":[{"value":"0360-0300","type":"print"},{"value":"1557-7341","type":"electronic"}],"subject":[],"published":{"date-parts":[[2021,5,25]]},"assertion":[{"value":"2020-07-01","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2021-02-01","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2021-05-25","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}