{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,20]],"date-time":"2026-02-20T07:03:33Z","timestamp":1771571013899,"version":"3.50.1"},"reference-count":86,"publisher":"Association for Computing Machinery (ACM)","issue":"4","funder":[{"name":"UK EPSRC Grant","award":["EP\/X015971\/2"],"award-info":[{"award-number":["EP\/X015971\/2"]}]},{"name":"Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany\u2019s Excellence Strategy","award":["EXC 2092 CASA \u2013 390781972"],"award-info":[{"award-number":["EXC 2092 CASA \u2013 390781972"]}]},{"name":"IFI program of the German Academic Exchange Service"},{"name":"Vienna Science and Technology Fund (WWTF) through the BREADS project","award":["10.47379\/VRG23011"],"award-info":[{"award-number":["10.47379\/VRG23011"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Priv. Secur."],"published-print":{"date-parts":[[2025,11,30]]},"abstract":"<jats:p>Recent research efforts on adversarial machine learning (ML) have investigated problem-space attacks, focusing on the generation of real evasive objects in domains where, unlike images, there is no clear inverse mapping to the feature space (e.g., software). However, the design, comparison, and real-world implications of problem-space attacks remain underexplored.<\/jats:p>\n          <jats:p>\n            This article makes three major contributions. Firstly, we propose a general formalization for adversarial ML evasion attacks in the problem-space, which includes the definition of a comprehensive set of constraints on available transformations, preserved semantics, absent artifacts, and plausibility. 
We shed light on the relationship between feature space and problem space, and we introduce the concept of\n            <jats:italic toggle=\"yes\">side-effect features<\/jats:italic>\n            as the by-product of the inverse feature-mapping problem. This enables us to define and prove necessary and sufficient conditions for the existence of problem-space attacks. Secondly, building on our general formalization, we propose a novel problem-space attack on Android malware that overcomes past limitations in terms of semantics and artifacts. We have tested our approach on a dataset of 150K Android apps from 2016 and 2018, which shows the practical feasibility of evading a state-of-the-art malware classifier as well as its hardened version. Thirdly, we explore adversarial training as a possible approach to enforce robustness against adversarial samples, evaluating its effectiveness on the considered machine learning models under different scenarios.\n          <\/jats:p>\n          <jats:p>Our results demonstrate that \u201cadversarial-malware as a service\u201d is a realistic threat, as we automatically generate thousands of realistic and inconspicuous adversarial applications at scale, where on average it takes only a few minutes to generate an adversarial instance.<\/jats:p>","DOI":"10.1145\/3742895","type":"journal-article","created":{"date-parts":[[2025,6,21]],"date-time":"2025-06-21T05:37:39Z","timestamp":1750484259000},"page":"1-37","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":2,"title":["Intriguing Properties of Adversarial ML Attacks in the Problem Space [Extended Version]"],"prefix":"10.1145","volume":"28","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-1421-2058","authenticated-orcid":false,"given":"Jacopo","family":"Cortellazzi","sequence":"first","affiliation":[{"name":"King's College London","place":["London, United Kingdom of Great Britain and Northern 
Ireland"]}]},{"ORCID":"https:\/\/orcid.org\/0009-0004-7170-1274","authenticated-orcid":false,"given":"Erwin","family":"Quiring","sequence":"additional","affiliation":[{"name":"Ruhr-Universit\u00e4t Bochum","place":["Bochum, Germany"]},{"name":"International Computer Science Institute (ICSI)","place":["Bochum, Germany"]}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3628-794X","authenticated-orcid":false,"given":"Daniel","family":"Arp","sequence":"additional","affiliation":[{"name":"Technische Universit\u00e4t Wien","place":["Vienna, Austria"]}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1140-322X","authenticated-orcid":false,"given":"Feargus","family":"Pendlebury","sequence":"additional","affiliation":[{"name":"University College London","place":["London, United Kingdom of Great Britain and Northern Ireland"]}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1254-1758","authenticated-orcid":false,"given":"Fabio","family":"Pierazzi","sequence":"additional","affiliation":[{"name":"University College London","place":["London, United Kingdom of Great Britain and Northern Ireland"]}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3878-2680","authenticated-orcid":false,"given":"Lorenzo","family":"Cavallaro","sequence":"additional","affiliation":[{"name":"University College London","place":["London, United Kingdom of Great Britain and Northern Ireland"]}]}],"member":"320","published-online":{"date-parts":[[2025,9,11]]},"reference":[{"key":"e_1_3_3_2_2","volume-title":"Compilers, Principles, Techniques, and Tools (2nd Edition)","author":"Aho Alfred V.","year":"2007","unstructured":"Alfred V. Aho, Ravi Sethi, and Jeffrey D. Ullman. 2007. Compilers, Principles, Techniques, and Tools (2nd Edition). Addison-Wesley."},{"key":"e_1_3_3_3_2","volume-title":"Proceedings of the ACM Mining Software Repositories (MSR\u201916)","author":"Allix Kevin","year":"2016","unstructured":"Kevin Allix, Tegawend\u00e9 F. Bissyand\u00e9, Jacques Klein, and Yves Le Traon. 2016. 
Androzoo: Collecting millions of android apps for the research community. In Proceedings of the ACM Mining Software Repositories (MSR\u201916)."},{"key":"e_1_3_3_4_2","volume-title":"Proceedings of the Empirical Methods in Natural Language Processing (EMNLP\u201918)","author":"Alzantot Moustafa","year":"2018","unstructured":"Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang. 2018. Generating natural language adversarial examples. In Proceedings of the Empirical Methods in Natural Language Processing (EMNLP\u201918)."},{"key":"e_1_3_3_5_2","doi-asserted-by":"publisher","unstructured":"A. Moser, C. Kruegel, and E. Kirda. 2007. Limits of static analysis for malware detection. Twenty-Third Annual Computer Security Applications Conference (ACSAC 2007). Miami Beach, FL, 421\u2013430. DOI:10.1109\/ACSAC.2007.21","DOI":"10.1109\/ACSAC.2007.21"},{"key":"e_1_3_3_6_2","unstructured":"Android. 2020. Permissions Overview - Dangerous Permissions. (2020). Retrieved from https:\/\/developer.android.com\/guide\/topics\/permissions\/overview#dangerous_permissions"},{"key":"e_1_3_3_7_2","volume-title":"Proceedings of the IEEE NCA","author":"Apruzzese Giovanni","year":"2018","unstructured":"Giovanni Apruzzese and Michele Colajanni. 2018. Evading botnet detectors based on flows and random forest with adversarial samples. In Proceedings of the IEEE NCA."},{"key":"e_1_3_3_8_2","volume-title":"Proceedings of the IEEE NCA","author":"Apruzzese Giovanni","year":"2019","unstructured":"Giovanni Apruzzese, Michele Colajanni, and Mirco Marchetti. 2019. Evaluating the effectiveness of adversarial attacks against botnet detectors. In Proceedings of the IEEE NCA."},{"key":"e_1_3_3_9_2","volume-title":"Proceedings of the NDSS","author":"Arp Daniel","year":"2014","unstructured":"Daniel Arp, Michael Spreitzenbarth, Malte Hubner, Hugo Gascon, and Konrad Rieck. 2014. DREBIN: Effective and explainable detection of android malware in your pocket. 
In Proceedings of the NDSS."},{"key":"e_1_3_3_10_2","volume-title":"Proceedings of the PLDI","author":"Arzt Steven","year":"2014","unstructured":"Steven Arzt, Siegfried Rasthofer, Christian Fritz, Eric Bodden, Alexandre Bartel, Jacques Klein, Yves Le Traon, Damien Octeau, and Patrick D. McDaniel. 2014. FlowDroid: Precise context, flow, field, object-sensitive and lifecycle-aware taint analysis for Android apps. In Proceedings of the PLDI. ACM."},{"key":"e_1_3_3_11_2","volume-title":"Proceedings of the ISSTA","author":"Barr Earl T.","year":"2015","unstructured":"Earl T. Barr, Mark Harman, Yue Jia, Alexandru Marginean, and Justyna Petke. 2015. Automated software transplantation. In Proceedings of the ISSTA. ACM."},{"key":"e_1_3_3_12_2","volume-title":"Proceedings of the ECML-PKDD","author":"Biggio Battista","year":"2013","unstructured":"Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim \u0160rndi\u0107, Pavel Laskov, Giorgio Giacinto, and Fabio Roli. 2013. Evasion attacks against machine learning at test time. In Proceedings of the ECML-PKDD. Springer."},{"key":"e_1_3_3_13_2","doi-asserted-by":"crossref","unstructured":"Battista Biggio, Giorgio Fumera, and Fabio Roli. 2013. Security evaluation of pattern classifiers under attack. IEEE Transactions on Knowledge and Data Engineering 26, 11 (2013), 984\u2013996.","DOI":"10.1109\/TKDE.2013.57"},{"key":"e_1_3_3_14_2","article-title":"Wild patterns: Ten years after the rise of adversarial machine learning","author":"Biggio Battista","year":"2018","unstructured":"Battista Biggio and Fabio Roli. 2018. Wild patterns: Ten years after the rise of adversarial machine learning. Pattern Recognition 84, 9 (2018), 317\u2013331.","journal-title":"Pattern Recognition"},{"key":"e_1_3_3_15_2","volume-title":"Pattern Recognition and Machine Learning","author":"Bishop Christopher M.","year":"2006","unstructured":"Christopher M. Bishop. 2006. 
Pattern Recognition and Machine Learning."},{"key":"e_1_3_3_16_2","doi-asserted-by":"crossref","first-page":"103676","DOI":"10.1016\/j.cose.2023.103676","article-title":"Evadedroid: A practical evasion attack on machine learning for black-box android malware detection","volume":"139","author":"Bostani Hamid","year":"2024","unstructured":"Hamid Bostani and Veelasha Moonsamy. 2024. Evadedroid: A practical evasion attack on machine learning for black-box android malware detection. Computers and Security 139, C (2024), 103676.","journal-title":"Computers and Security"},{"key":"e_1_3_3_17_2","unstructured":"Nicholas Carlini, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian Goodfellow, and Aleksander Madry. 2019. On evaluating adversarial robustness. https:\/\/dblp.org\/rec\/journals\/corr\/abs-1902-06705"},{"key":"e_1_3_3_18_2","volume-title":"Proceedings of the IEEE Symp. S&P","author":"Carlini Nicholas","year":"2017","unstructured":"Nicholas Carlini and David Wagner. 2017. Towards evaluating the robustness of neural networks. In Proceedings of the IEEE Symp. S&P."},{"key":"e_1_3_3_19_2","volume-title":"Proceedings of the Deep Learning for Security (DLS) Workshop","author":"Carlini Nicholas","year":"2018","unstructured":"Nicholas Carlini and David Wagner. 2018. Audio adversarial examples: Targeted attacks on speech-to-text. In Proceedings of the Deep Learning for Security (DLS) Workshop. IEEE."},{"key":"e_1_3_3_20_2","first-page":"3","volume-title":"Proceedings of the AISec@CCS","author":"Carlini Nicholas","year":"2017","unstructured":"Nicholas Carlini and David A. Wagner. 2017. Adversarial examples are not easily detected: Bypassing ten detection methods. In Proceedings of the AISec@CCS. ACM, 3\u201314."},{"key":"e_1_3_3_21_2","first-page":"109","volume-title":"Proceedings of the 10th ACM Conference on Data and Application Security and Privacy","author":"Chen Jiyu","year":"2020","unstructured":"Jiyu Chen, David Wang, and Hao Chen. 2020. 
Explore the transformation space for adversarial images. In Proceedings of the 10th ACM Conference on Data and Application Security and Privacy. 109\u2013120."},{"key":"e_1_3_3_22_2","article-title":"Adversarial attacks against intrusion detection systems: Taxonomy, solutions and open issues","author":"Corona Igino","year":"2013","unstructured":"Igino Corona, Giorgio Giacinto, and Fabio Roli. 2013. Adversarial attacks against intrusion detection systems: Taxonomy, solutions and open issues. Information Sciences 239, 20 (2013), 201\u2013225.","journal-title":"Information Sciences"},{"key":"e_1_3_3_23_2","volume-title":"Proceedings of the KDD","year":"2004","unstructured":"Nilesh Dalvi, Pedro Domingos, Mausam, Sumit Sanghai, and Deepak Verma. 2004. Adversarial classification. In Proceedings of the KDD. ACM."},{"key":"e_1_3_3_24_2","first-page":"119","volume-title":"Proceedings of the ACM Conference on Computer and Communications Security","author":"Dang Hung","year":"2017","unstructured":"Hung Dang, Yue Huang, and Ee-Chien Chang. 2017. Evading classifiers by morphing in the dark. In Proceedings of the ACM Conference on Computer and Communications Security. ACM, 119\u2013133."},{"key":"e_1_3_3_25_2","article-title":"Yes, machine learning can be more secure! a case study on android malware detection","author":"Demontis Ambra","year":"2017","unstructured":"Ambra Demontis, Marco Melis, Battista Biggio, Davide Maiorca, Daniel Arp, Konrad Rieck, Igino Corona, Giorgio Giacinto, and Fabio Roli. 2017. Yes, machine learning can be more secure! a case study on android malware detection. 
IEEE Transactions on Dependable and Secure Computing 16, 6 (2017), 711\u2013724.","journal-title":"IEEE Transactions on Dependable and Secure Computing"},{"key":"e_1_3_3_26_2","doi-asserted-by":"crossref","first-page":"1384","DOI":"10.1109\/SP46215.2023.10179316","volume-title":"Proceedings of the 2023 IEEE Symposium on Security and Privacy (SP\u201923)","author":"Dyrmishi Salijona","year":"2023","unstructured":"Salijona Dyrmishi, Salah Ghamizi, Thibault Simonetto, Yves Le Traon, and Maxime Cordy. 2023. On the empirical effectiveness of unrealistic adversarial hardening against realistic adversarial attacks. In Proceedings of the 2023 IEEE Symposium on Security and Privacy (SP\u201923). IEEE, 1384\u20131400."},{"key":"e_1_3_3_27_2","doi-asserted-by":"crossref","DOI":"10.2197\/ipsjjip.25.866","article-title":"The evolution of process hiding techniques in malware-current threats and possible countermeasures","author":"Eresheim Sebastian","year":"2017","unstructured":"Sebastian Eresheim, Robert Luh, and Sebastian Schrittwieser. 2017. The evolution of process hiding techniques in malware-current threats and possible countermeasures. Journal of Information Processing 25, 0 (2017), 866\u2013874.","journal-title":"Journal of Information Processing"},{"key":"e_1_3_3_28_2","volume-title":"Proceedings of the ACM CCS","author":"Fass Aurore","year":"2019","unstructured":"Aurore Fass, Michael Backes, and Ben Stock. 2019. HideNoSeek: Camouflaging malicious JavaScript in benign ASTs. In Proceedings of the ACM CCS."},{"issue":"6433","key":"e_1_3_3_29_2","doi-asserted-by":"crossref","first-page":"1287","DOI":"10.1126\/science.aaw4399","article-title":"Adversarial attacks on medical machine learning","volume":"363","author":"Finlayson Samuel G.","year":"2019","unstructured":"Samuel G. Finlayson, John D. Bowers, Joichi Ito, Jonathan L. Zittrain, Andrew L. Beam, and Isaac S. Kohane. 2019. Adversarial attacks on medical machine learning. 
Science 363, 6433 (2019), 1287\u20131289.","journal-title":"Science"},{"key":"e_1_3_3_30_2","first-page":"59","volume-title":"Proceedings of the ACM Conference on Computer and Communications Security","author":"Fogla Prahlad","year":"2006","unstructured":"Prahlad Fogla and Wenke Lee. 2006. Evading network anomaly detection systems: Formal reasoning and practical techniques. In Proceedings of the ACM Conference on Computer and Communications Security. ACM, 59\u201368."},{"key":"e_1_3_3_31_2","volume-title":"Deep Learning","author":"Goodfellow Ian","year":"2016","unstructured":"Ian Goodfellow, Yoshua Bengio, and Aaron Courville. 2016. Deep Learning. MIT Press."},{"key":"e_1_3_3_32_2","unstructured":"Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and harnessing adversarial examples. https:\/\/dblp.org\/rec\/journals\/corr\/GoodfellowSS14"},{"key":"e_1_3_3_33_2","volume-title":"Proceedings of the ICLR (Poster)","author":"Goodfellow Ian J.","year":"2015","unstructured":"Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversarial examples. In Proceedings of the ICLR (Poster)."},{"key":"e_1_3_3_34_2","volume-title":"Proceedings of the ESORICS","author":"Grosse Kathrin","year":"2017","unstructured":"Kathrin Grosse, Nicolas Papernot, Praveen Manoharan, Michael Backes, and Patrick McDaniel. 2017. Adversarial examples for malware detection. In Proceedings of the ESORICS. Springer."},{"key":"e_1_3_3_35_2","first-page":"90","volume-title":"Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security","author":"He Ping","year":"2023","unstructured":"Ping He, Yifan Xia, Xuhong Zhang, and Shouling Ji. 2023. Efficient query-based attack against ML-based android malware detection under zero knowledge setting. In Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security. 
90\u2013104."},{"key":"e_1_3_3_36_2","volume-title":"Proceedings of the AISec","author":"Huang Ling","year":"2011","unstructured":"Ling Huang, Anthony D. Joseph, Blaine Nelson, Benjamin I. P. Rubinstein, and J. D. Tygar. 2011. Adversarial machine learning. In Proceedings of the AISec. ACM."},{"key":"e_1_3_3_37_2","doi-asserted-by":"crossref","first-page":"43","DOI":"10.1145\/2046684.2046692","volume-title":"Proceedings of the 4th ACM Workshop on Security and Artificial Intelligence","author":"Huang Ling","year":"2011","unstructured":"Ling Huang, Anthony D. Joseph, Blaine Nelson, Benjamin I. P. Rubinstein, and J. D. Tygar. 2011. Adversarial machine learning. In Proceedings of the 4th ACM Workshop on Security and Artificial Intelligence. ACM, 43\u201358."},{"key":"e_1_3_3_38_2","volume-title":"Proceedings of the International Workshop on Security and Privacy Analytics","author":"Incer Inigo","year":"2018","unstructured":"Inigo Incer, Michael Theodorides, Sadia Afroz, and David Wagner. 2018. Adversarially robust malware detection using monotonic classification. In Proceedings of the International Workshop on Security and Privacy Analytics. ACM."},{"key":"e_1_3_3_39_2","first-page":"S48\u2013S59","article-title":"MalDozer: Automatic framework for android malware detection using deep learning","volume":"24","author":"Karbab ElMouatez Billah","year":"2018","unstructured":"ElMouatez Billah Karbab, Mourad Debbabi, Abdelouahid Derhab, and Djedjiga Mouheb. 2018. MalDozer: Automatic framework for android malware detection using deep learning. Digital investigation 24, Supplement S (2018), S48\u2013S59.","journal-title":"Digital investigation"},{"key":"e_1_3_3_40_2","volume-title":"Proceedings of the Journal des Sciences Militaires","author":"Kerckhoffs Auguste","year":"1883","unstructured":"Auguste Kerckhoffs. 1883. La cryptographie militaire. 
In Proceedings of the Journal des Sciences Militaires."},{"key":"e_1_3_3_41_2","volume-title":"Proceedings of the EUSIPCO","author":"Kolosnjaji Bojan","year":"2018","unstructured":"Bojan Kolosnjaji, Ambra Demontis, Battista Biggio, Davide Maiorca, Giorgio Giacinto, Claudia Eckert, and Fabio Roli. 2018. Adversarial malware binaries: Evading deep learning for malware detection in executables. In Proceedings of the EUSIPCO. IEEE."},{"key":"e_1_3_3_42_2","unstructured":"Bogdan Kulynych, Jamie Hayes, Nikita Samarin, and Carmela Troncoso. 2018. Evading classifiers in discrete domains with provable optimality guarantees. https:\/\/dblp.org\/rec\/journals\/corr\/abs-1810-10939"},{"key":"e_1_3_3_43_2","first-page":"197","volume-title":"Proceedings of the IEEE Symposium on Security and Privacy (S&P\u201914)","year":"2014","unstructured":"Nedim \u0160rndi\u0107 and Pavel Laskov. 2014. Practical evasion of a learning-based classifier: A case study. In Proceedings of the IEEE Symposium on Security and Privacy (S&P\u201914). IEEE, 197\u2013211."},{"key":"e_1_3_3_44_2","volume-title":"Proceedings of the ACSAC","author":"Laskov Pavel","year":"2011","unstructured":"Pavel Laskov and Nedim \u0160rndi\u0107. 2011. Static detection of malicious JavaScript-Bearing PDF documents. In Proceedings of the ACSAC. ACM."},{"key":"e_1_3_3_45_2","volume-title":"Proceedings of the MALWARE","author":"Leslous Mourad","year":"2017","unstructured":"Mourad Leslous, Val\u00e9rie Viet Triem Tong, Jean-Fran\u00e7ois Lalande, and Thomas Genet. 2017. GPFinder: Tracking the invisible in android malware. In Proceedings of the MALWARE. IEEE."},{"key":"e_1_3_3_46_2","volume-title":"Proceedings of the NDSS","author":"Li Jinfeng","year":"2019","unstructured":"Jinfeng Li, Shouling Ji, Tianyu Du, Bo Li, and Ting Wang. 2019. TextBugger: Generating adversarial text against real-world applications. 
In Proceedings of the NDSS."},{"key":"e_1_3_3_47_2","volume-title":"Proceedings of the CEAS","author":"Lowd Daniel","year":"2005","unstructured":"Daniel Lowd and Christopher Meek. 2005. Good word attacks on statistical spam filters. In Proceedings of the CEAS."},{"key":"e_1_3_3_48_2","first-page":"1163","volume-title":"Proceedings of the 32nd USENIX Security Symposium (USENIX Security\u201923)","author":"Lucas Keane","year":"2023","unstructured":"Keane Lucas, Samruddhi Pai, Weiran Lin, Lujo Bauer, Michael K. Reiter, and Mahmood Sharif. 2023. Adversarial training for Raw-Binary malware classifiers. In Proceedings of the 32nd USENIX Security Symposium (USENIX Security\u201923). 1163\u20131180."},{"key":"e_1_3_3_49_2","volume-title":"Proceedings of the ACM Asia Conference on Computer and Communications Security (ASIA CCS\u201921)","author":"Lucas Keane","year":"2021","unstructured":"Keane Lucas, Mahmood Sharif, Lujo Bauer, Michael K. Reiter, and Saurabh Shintre. 2021. Malware makeover: Breaking ML-based static analysis by modifying executable bytes. In Proceedings of the ACM Asia Conference on Computer and Communications Security (ASIA CCS\u201921)."},{"key":"e_1_3_3_50_2","unstructured":"Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2017. Towards deep learning models resistant to adversarial attacks. https:\/\/dblp.org\/rec\/conf\/iclr\/MadryMSTV18"},{"key":"e_1_3_3_51_2","unstructured":"Davide Maiorca, Battista Biggio, and Giorgio Giacinto. 2019. Towards robust detection of adversarial infection vectors: Lessons learned in PDF malware. https:\/\/dblp.org\/rec\/journals\/corr\/abs-1811-00830"},{"key":"e_1_3_3_52_2","volume-title":"Proceedings of the ASIACCS","author":"Maiorca Davide","year":"2013","unstructured":"Davide Maiorca, Igino Corona, and Giorgio Giacinto. 2013. Looking at the bag is not enough to find the bomb: An evasion of structural methods for malicious pdf files detection. 
In Proceedings of the ASIACCS. ACM."},{"key":"e_1_3_3_53_2","volume-title":"Proceedings of the International Workshop on Machine Learning and Data Mining in Pattern Recognition","author":"Maiorca Davide","year":"2012","unstructured":"Davide Maiorca, Giorgio Giacinto, and Igino Corona. 2012. A pattern recognition system for malicious PDF files detection. In Proceedings of the International Workshop on Machine Learning and Data Mining in Pattern Recognition. Springer."},{"key":"e_1_3_3_54_2","volume-title":"Proceedings of the ACM Conference on Data and Applications Security and Privacy (CODASPY\u201919)","author":"Matyukhina Alina","year":"2019","unstructured":"Alina Matyukhina, Natalia Stakhanova, Mila Dalla Preda, and Celine Perley. 2019. Adversarial authorship attribution in open-source projects. In Proceedings of the ACM Conference on Data and Applications Security and Privacy (CODASPY\u201919)."},{"key":"e_1_3_3_55_2","first-page":"301","volume-title":"Proceedings of the 7th ACM on Conference on Data and Application Security and Privacy","author":"McLaughlin Niall","year":"2017","unstructured":"Niall McLaughlin, Jesus Martinez del Rincon, BooJoong Kang, Suleiman Yerima, Paul Miller, Sakir Sezer, Yeganeh Safaei, Erik Trickel, Ziming Zhao, Adam Doup\u00e9, et\u00a0al. 2017. Deep android malware detection. In Proceedings of the 7th ACM on Conference on Data and Application Security and Privacy. 301\u2013308."},{"key":"e_1_3_3_56_2","volume-title":"Proceedings of the EUSIPCO","author":"Melis Marco","year":"2018","unstructured":"Marco Melis, Davide Maiorca, Battista Biggio, Giorgio Giacinto, and Fabio Roli. 2018. Explaining black-box android malware detection. In Proceedings of the EUSIPCO. 
IEEE."},{"key":"e_1_3_3_57_2","volume-title":"Proceedings of the DIMVA","author":"Miller Brad","year":"2016","unstructured":"Brad Miller, Alex Kantchelian, Michael Carl Tschantz, Sadia Afroz, Rekha Bachwani, Riyaz Faizullabhoy, Ling Huang, Vaishaal Shankar, Tony Wu, George Yiu, et\u00a0al. 2016. Reviewer integration and performance measurement for malware detection. In Proceedings of the DIMVA. Springer."},{"key":"e_1_3_3_58_2","volume-title":"Proceedings of the ACSAC","author":"Moser Andreas","year":"2007","unstructured":"Andreas Moser, Christopher Kruegel, and Engin Kirda. 2007. Limits of static analysis for malware detection. In Proceedings of the ACSAC."},{"key":"e_1_3_3_59_2","first-page":"37","volume-title":"Proceedings of the 12th ACM Workshop on Artificial Intelligence and Security","author":"Oak Rajvardhan","year":"2019","unstructured":"Rajvardhan Oak, Min Du, David Yan, Harshvardhan Takawale, and Idan Amit. 2019. Malware detection on highly imbalanced data through sequence modeling. In Proceedings of the 12th ACM Workshop on Artificial Intelligence and Security. 37\u201348."},{"key":"e_1_3_3_60_2","doi-asserted-by":"crossref","first-page":"372","DOI":"10.1109\/EuroSP.2016.36","volume-title":"Proceedings of the 2016 IEEE European Symposium on Security and Privacy (EuroS&P\u201916)","author":"Papernot Nicolas","year":"2016","unstructured":"Nicolas Papernot, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z. Berkay Celik, and Ananthram Swami. 2016. The limitations of deep learning in adversarial settings. In Proceedings of the 2016 IEEE European Symposium on Security and Privacy (EuroS&P\u201916). IEEE, 372\u2013387."},{"key":"e_1_3_3_61_2","volume-title":"Proceedings of the NIPS Autodiff Workshop","author":"Paszke Adam","year":"2017","unstructured":"Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in PyTorch. 
In Proceedings of the NIPS Autodiff Workshop."},{"key":"e_1_3_3_62_2","volume-title":"Proceedings of the 28th USENIX Security Symposium","author":"Pendlebury Feargus","year":"2019","unstructured":"Feargus Pendlebury, Fabio Pierazzi, Roberto Jordaney, Johannes Kinder, and Lorenzo Cavallaro. 2019. TESSERACT: Eliminating experimental bias in malware classification across space and time. In Proceedings of the 28th USENIX Security Symposium. USENIX Association, Santa Clara, CA."},{"key":"e_1_3_3_63_2","volume-title":"Types and Programming Languages","author":"Pierce Benjamin C.","year":"2002","unstructured":"Benjamin C. Pierce. 2002. Types and Programming Languages. MIT Press."},{"key":"e_1_3_3_64_2","volume-title":"Proceedings of the USENIX Security Symposium","author":"Quiring Erwin","year":"2019","unstructured":"Erwin Quiring, Alwin Maier, and Konrad Rieck. 2019. Misleading authorship attribution of source code using adversarial learning. In Proceedings of the USENIX Security Symposium."},{"key":"e_1_3_3_65_2","volume-title":"Proceedings of the AAAI Workshops","author":"Raff Edward","year":"2018","unstructured":"Edward Raff, Jon Barker, Jared Sylvester, Robert Brandon, Bryan Catanzaro, and Charles K. Nicholas. 2018. Malware detection by eating a whole exe. In Proceedings of the AAAI Workshops."},{"key":"e_1_3_3_66_2","unstructured":"Marco Rando, Luca Demetrio, Lorenzo Rosasco, and Fabio Roli. 2024. A new formulation for zeroth-order optimization of adversarial EXEmples in malware detection. https:\/\/dblp.org\/rec\/journals\/corr\/abs-2405-14519"},{"key":"e_1_3_3_67_2","volume-title":"Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL\u201919)","author":"Ren Shuhuai","year":"2019","unstructured":"Shuhuai Ren, Yihe Deng, Kun He, and Wanxiang Che. 2019. Generating natural language adversarial examples through probability weighted word saliency. 
In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL\u201919)."},{"key":"e_1_3_3_68_2","volume-title":"Proceedings of the RAID","author":"Rosenberg Ishai","year":"2018","unstructured":"Ishai Rosenberg, Asaf Shabtai, Lior Rokach, and Yuval Elovici. 2018. Generic black-box end-to-end attack against state of the art API call based malware classifiers. In Proceedings of the RAID. Springer."},{"key":"e_1_3_3_69_2","doi-asserted-by":"publisher","DOI":"10.1016\/0004-3702(95)00045-3"},{"key":"e_1_3_3_70_2","volume-title":"Proceedings of the ACM CCS","author":"Sharif Mahmood","year":"2016","unstructured":"Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, and Michael K. Reiter. 2016. Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition. In Proceedings of the ACM CCS. ACM."},{"key":"e_1_3_3_71_2","doi-asserted-by":"crossref","first-page":"990","DOI":"10.1145\/3488932.3497768","volume-title":"Proceedings of the 2022 ACM on Asia Conference on Computer and Communications Security","author":"Song Wei","year":"2022","unstructured":"Wei Song, Xuezixiang Li, Sadia Afroz, Deepali Garg, Dmitry Kuznetsov, and Heng Yin. 2022. Mab-malware: A reinforcement learning framework for blackbox generation of adversarial malware. In Proceedings of the 2022 ACM on Asia Conference on Computer and Communications Security. 990\u20131003."},{"key":"e_1_3_3_72_2","doi-asserted-by":"crossref","first-page":"8","DOI":"10.1109\/SPW.2019.00015","volume-title":"Proceedings of the 2019 IEEE Security and Privacy Workshops (SPW\u201919)","author":"Suciu Octavian","year":"2019","unstructured":"Octavian Suciu, Scott E. Coull, and Jeffrey Johns. 2019. Exploring adversarial examples in malware detection. In Proceedings of the 2019 IEEE Security and Privacy Workshops (SPW\u201919). IEEE, 8\u201314."},{"key":"e_1_3_3_73_2","article-title":"When does machine learning FAIL? 
Generalized transferability for evasion and poisoning attacks","author":"Suciu Octavian","year":"2018","unstructured":"Octavian Suciu, Radu M\u0103rginean, Yi\u011fitcan Kaya, Hal Daum\u00e9 III, and Tudor Dumitra\u015f. 2018. When does machine learning FAIL? Generalized transferability for evasion and poisoning attacks. USENIX Security Symposium (2018), 1299\u20131316.","journal-title":"USENIX Security Symposium"},{"key":"e_1_3_3_74_2","article-title":"Intriguing properties of neural networks","author":"Szegedy Christian","year":"2014","unstructured":"Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2014. Intriguing properties of neural networks. ICLR (2014), 1\u201310.","journal-title":"ICLR"},{"key":"e_1_3_3_75_2","first-page":"285","volume-title":"Proceedings of the 28th USENIX Security Symposium (USENIX Security\u201919)","author":"Tong Liang","year":"2019","unstructured":"Liang Tong, Bo Li, Chen Hajaj, Chaowei Xiao, Ning Zhang, and Yevgeniy Vorobeychik. 2019. Improving robustness of ML classifiers against realizable evasion attacks using conserved features. In Proceedings of the 28th USENIX Security Symposium (USENIX Security\u201919). 285\u2013302."},{"key":"e_1_3_3_76_2","volume-title":"Proceedings of the IEEE Symposium on Security and Privacy","author":"Ugarte-Pedrero Xabier","year":"2015","unstructured":"Xabier Ugarte-Pedrero, Davide Balzarotti, Igor Santos, and Pablo G. Bringas. 2015. SoK: Deep packer inspection: A longitudinal study of the complexity of run-time packers. In Proceedings of the IEEE Symposium on Security and Privacy."},{"key":"e_1_3_3_77_2","volume-title":"Proceedings of the CASCON 1st Decade High Impact Papers","author":"Vall\u00e9e-Rai Raja","year":"2010","unstructured":"Raja Vall\u00e9e-Rai, Phong Co, Etienne Gagnon, Laurie Hendren, Patrick Lam, and Vijay Sundaresan. 2010. Soot: A Java bytecode optimization framework. 
In Proceedings of the CASCON 1st Decade High Impact Papers. IBM Corp."},{"key":"e_1_3_3_78_2","volume-title":"Proceedings of the USENIX ENIGMA","author":"Vigna Giovanni","year":"2018","unstructured":"Giovanni Vigna and Davide Balzarotti. 2018. When Malware is Packin\u2019 Heat. In Proceedings of the USENIX ENIGMA."},{"key":"e_1_3_3_79_2","doi-asserted-by":"crossref","first-page":"3035","DOI":"10.1007\/s12652-018-0803-6","article-title":"Effective android malware detection with a hybrid model based on deep autoencoder and convolutional neural network","volume":"10","author":"Wang Wei","year":"2019","unstructured":"Wei Wang, Mengxue Zhao, and Jigang Wang. 2019. Effective android malware detection with a hybrid model based on deep autoencoder and convolutional neural network. Journal of Ambient Intelligence and Humanized Computing 10, 8 (2019), 3035\u20133043.","journal-title":"Journal of Ambient Intelligence and Humanized Computing"},{"key":"e_1_3_3_80_2","first-page":"439","volume-title":"Proceedings of the 5th International Conference on Software Engineering (ICSE\u201981)","author":"Weiser Mark","year":"1981","unstructured":"Mark Weiser. 1981. Program slicing. In Proceedings of the 5th International Conference on Software Engineering (ICSE\u201981). IEEE, 439\u2013449. Retrieved from http:\/\/dl.acm.org\/citation.cfm?id=800078.802557"},{"key":"e_1_3_3_81_2","volume-title":"Fundamentals of Model Theory","author":"Weiss William","year":"2015","unstructured":"William Weiss and Cherie D\u2019Mello. 2015. Fundamentals of Model Theory. University of Toronto."},{"key":"e_1_3_3_82_2","first-page":"443","volume-title":"Proceedings of the USENIX Security Symposium","author":"Xiao Qixue","year":"2019","unstructured":"Qixue Xiao, Yufei Chen, Chao Shen, Yu Chen, and Kang Li. 2019. Seeing is not believing: Camouflage attacks on image scaling algorithms. In Proceedings of the USENIX Security Symposium. 
USENIX Association, 443\u2013460."},{"key":"e_1_3_3_83_2","first-page":"473","volume-title":"Proceedings of the 2018 IEEE European Symposium on Security and Privacy (EuroS&P\u201918)","author":"Xu Ke","year":"2018","unstructured":"Ke Xu, Yingjiu Li, Robert H. Deng, and Kai Chen. 2018. Deeprefiner: Multi-layer android malware detection system applying deep neural networks. In Proceedings of the 2018 IEEE European Symposium on Security and Privacy (EuroS&P\u201918). IEEE, 473\u2013487."},{"key":"e_1_3_3_84_2","volume-title":"Proceedings of the NDSS","author":"Xu Weilin","year":"2016","unstructured":"Weilin Xu, Yanjun Qi, and David Evans. 2016. Automatically evading classifiers. In Proceedings of the NDSS."},{"key":"e_1_3_3_85_2","volume-title":"Proceedings of the ACSAC","author":"Yang Wei","year":"2017","unstructured":"Wei Yang, Deguang Kong, Tao Xie, and Carl A. Gunter. 2017. Malware detection in adversarial settings: Exploiting feature evolutions and confusions in android apps. In Proceedings of the ACSAC. ACM."},{"key":"e_1_3_3_86_2","first-page":"7472","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Zhang Hongyang","year":"2019","unstructured":"Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric Xing, Laurent El Ghaoui, and Michael Jordan. 2019. Theoretically principled tradeoff between robustness and accuracy. In Proceedings of the International Conference on Machine Learning. PMLR, 7472\u20137482."},{"key":"e_1_3_3_87_2","volume-title":"Proceedings of the ACM DAC","author":"Zizzo Giulio","year":"2019","unstructured":"Giulio Zizzo, Chris Hankin, Sergio Maffeis, and Kevin Jones. 2019. Adversarial machine learning beyond the image domain. 
In Proceedings of the ACM DAC."}],"container-title":["ACM Transactions on Privacy and Security"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3742895","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,9,11]],"date-time":"2025-09-11T12:33:45Z","timestamp":1757594025000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3742895"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,9,11]]},"references-count":86,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2025,11,30]]}},"alternative-id":["10.1145\/3742895"],"URL":"https:\/\/doi.org\/10.1145\/3742895","relation":{},"ISSN":["2471-2566","2471-2574"],"issn-type":[{"value":"2471-2566","type":"print"},{"value":"2471-2574","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,9,11]]},"assertion":[{"value":"2024-04-08","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-03-26","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-09-11","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}