{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,26]],"date-time":"2025-12-26T11:24:09Z","timestamp":1766748249765,"version":"3.41.0"},"reference-count":81,"publisher":"Association for Computing Machinery (ACM)","issue":"2","license":[{"start":{"date-parts":[[2024,3,14]],"date-time":"2024-03-14T00:00:00Z","timestamp":1710374400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Priv. Secur."],"published-print":{"date-parts":[[2024,5,31]]},"abstract":"<jats:p>\n            In recent years, the widespread adoption of Machine Learning (ML) at the core of complex IT systems has driven researchers to investigate the security and reliability of ML techniques. A very specific kind of threats concerns the\n            <jats:italic>adversary<\/jats:italic>\n            mechanisms through which an attacker could induce a classification algorithm to provide the desired output. Such strategies, known as Adversarial Machine Learning (AML), have a twofold purpose: to calculate a perturbation to be applied to the classifier\u2019s input such that the outcome is subverted, while maintaining the underlying intent of the original data. Although any manipulation that accomplishes these goals is theoretically acceptable, in real scenarios perturbations must correspond to a set of permissible manipulations of the input, which is rarely considered in the literature. In this article, we present\n            <jats:italic>AdverSPAM<\/jats:italic>\n            , an AML technique designed to fool the spam account detection system of an Online Social Network (OSN). 
The proposed black-box evasion attack is formulated as an optimization problem that computes the adversarial sample while maintaining two important properties of the feature space, namely\n            <jats:italic>statistical correlation<\/jats:italic>\n            and\n            <jats:italic>semantic dependency<\/jats:italic>\n            . Although demonstrated in an OSN security scenario, such an approach might be applied in other contexts where the aim is to perturb data described by mutually related features. Experiments conducted on a public dataset show the effectiveness of\n            <jats:italic>AdverSPAM<\/jats:italic>\n            compared to five state-of-the-art competitors, even in the presence of adversarial defense mechanisms.\n          <\/jats:p>","DOI":"10.1145\/3643563","type":"journal-article","created":{"date-parts":[[2024,1,26]],"date-time":"2024-01-26T10:34:30Z","timestamp":1706265270000},"page":"1-31","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":7,"title":["AdverSPAM: Adversarial SPam Account Manipulation in Online Social Networks"],"prefix":"10.1145","volume":"27","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-7638-3624","authenticated-orcid":false,"given":"Federico","family":"Concone","sequence":"first","affiliation":[{"name":"University of Palermo, Palermo, Italy"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5480-2100","authenticated-orcid":false,"given":"Salvatore","family":"Gaglio","sequence":"additional","affiliation":[{"name":"University of Palermo, Palermo, Italy"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2050-8923","authenticated-orcid":false,"given":"Andrea","family":"Giammanco","sequence":"additional","affiliation":[{"name":"University of Palermo, Palermo, Italy"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8217-2230","authenticated-orcid":false,"given":"Giuseppe Lo","family":"Re","sequence":"additional","affiliation":[{"name":"University of Palermo, 
Palermo, Italy"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5963-6236","authenticated-orcid":false,"given":"Marco","family":"Morana","sequence":"additional","affiliation":[{"name":"University of Palermo, Palermo, Italy"}]}],"member":"320","published-online":{"date-parts":[[2024,3,14]]},"reference":[{"key":"e_1_3_2_2_2","doi-asserted-by":"publisher","DOI":"10.1007\/3-540-44503-X_27"},{"key":"e_1_3_2_3_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.eswa.2021.115782"},{"key":"e_1_3_2_4_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.jisa.2020.102717"},{"key":"e_1_3_2_5_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-642-00296-0_5"},{"key":"e_1_3_2_6_2","doi-asserted-by":"publisher","DOI":"10.1145\/3397332"},{"key":"e_1_3_2_7_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2016.2558154"},{"key":"e_1_3_2_8_2","doi-asserted-by":"publisher","DOI":"10.1007\/s13042-010-0007-7"},{"key":"e_1_3_2_9_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.patcog.2018.07.023"},{"key":"e_1_3_2_10_2","volume-title":"Proceedings of the 12th Annual Malware Technical Exchange Meeting","author":"Boutsikas John","year":"2021","unstructured":"John Boutsikas, Maksim Ekin Eren, Charles K. Varga, Edward Raff, Cynthia Matuszek, and Charles Nicholas. 2021. Evading malware classifiers via Monte Carlo mutant feature discovery. In Proceedings of the 12th Annual Malware Technical Exchange Meeting."},{"key":"e_1_3_2_11_2","doi-asserted-by":"publisher","DOI":"10.1038\/s42256-020-00266-y"},{"key":"e_1_3_2_12_2","doi-asserted-by":"crossref","first-page":"39","DOI":"10.1109\/SP.2017.49","volume-title":"Proceedings of the 2017 IEEE Symposium on Security and Privacy (SP)","author":"Carlini N.","year":"2017","unstructured":"N. Carlini and D. Wagner. 2017. Towards evaluating the robustness of neural networks. In Proceedings of the 2017 IEEE Symposium on Security and Privacy (SP). IEEE Computer Society, Los Alamitos, CA, USA, 39\u201357. 
DOI:10.1109\/SP.2017.49"},{"key":"e_1_3_2_13_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2023.01.071"},{"key":"e_1_3_2_14_2","doi-asserted-by":"publisher","DOI":"10.1613\/jair.953"},{"key":"e_1_3_2_15_2","doi-asserted-by":"publisher","DOI":"10.1109\/TCSS.2020.3004059"},{"key":"e_1_3_2_16_2","volume-title":"Proceedings of the 7th International Conference on Learning Representations, ICLR 2019","author":"Cheng Minhao","year":"2019","unstructured":"Minhao Cheng, Thong Le, Pin-Yu Chen, Huan Zhang, Jinfeng Yi, and Cho-Jui Hsieh. 2019. Query-efficient hard-label black-box attack: An optimization-based approach. In Proceedings of the 7th International Conference on Learning Representations, ICLR 2019. OpenReview.net."},{"key":"e_1_3_2_17_2","doi-asserted-by":"publisher","DOI":"10.1109\/TMM.2021.3108009"},{"key":"e_1_3_2_18_2","doi-asserted-by":"publisher","DOI":"10.1145\/3544746"},{"key":"e_1_3_2_19_2","doi-asserted-by":"publisher","DOI":"10.1109\/TDSC.2022.3198830"},{"key":"e_1_3_2_20_2","doi-asserted-by":"crossref","first-page":"359","DOI":"10.1109\/SMARTCOMP.2019.00073","volume-title":"Proceedings of the 2019 IEEE International Conference on Smart Computing (SMARTCOMP)","author":"Concone Federico","year":"2019","unstructured":"Federico Concone, Giuseppe Lo Re, Marco Morana, and Claudio Ruocco. 2019. Assisted labeling for spam account detection on Twitter. In Proceedings of the 2019 IEEE International Conference on Smart Computing (SMARTCOMP). 359\u2013366. DOI:10.1109\/SMARTCOMP.2019.00073"},{"key":"e_1_3_2_21_2","volume-title":"Proceedings of the 3rd Italian Conference on Cyber Security","volume":"2315","author":"Concone Federico","year":"2019","unstructured":"Federico Concone, Giuseppe Lo Re, Marco Morana, and Claudio Ruocco. 2019. Twitter spam account detection by effective labeling. In Proceedings of the 3rd Italian Conference on Cyber Security . Pierpaolo Degano and Roberto Zunino (Eds.), CEUR Workshop Proceedings, , Vol. 
2315, CEUR-WS.org."},{"key":"e_1_3_2_22_2","doi-asserted-by":"publisher","DOI":"10.1109\/MIC.2021.3130380"},{"key":"e_1_3_2_23_2","doi-asserted-by":"publisher","DOI":"10.1109\/TDSC.2017.2681672"},{"key":"e_1_3_2_24_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.heliyon.2019.e01802"},{"key":"e_1_3_2_25_2","doi-asserted-by":"publisher","DOI":"10.1007\/s11831-019-09344-w"},{"key":"e_1_3_2_26_2","doi-asserted-by":"publisher","DOI":"10.1007\/0-387-28356-0_10"},{"key":"e_1_3_2_27_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIFS.2021.3082330"},{"key":"e_1_3_2_28_2","doi-asserted-by":"publisher","DOI":"10.1145\/3473039"},{"key":"e_1_3_2_29_2","doi-asserted-by":"publisher","DOI":"10.1109\/TDSC.2017.2700270"},{"key":"e_1_3_2_30_2","first-page":"321","volume-title":"Proceedings of the 28th USENIX Conference on Security Symposium (SEC\u201919)","author":"Demontis Ambra","year":"2019","unstructured":"Ambra Demontis, Marco Melis, Maura Pintor, Matthew Jagielski, Battista Biggio, Alina Oprea, Cristina Nita-Rotaru, and Fabio Roli. 2019. Why do adversarial attacks transfer? Explaining transferability of evasion and poisoning attacks. In Proceedings of the 28th USENIX Conference on Security Symposium (SEC\u201919). USENIX Association, USA, 321\u2013338. DOI:10.5555\/3361338.3361361"},{"key":"e_1_3_2_31_2","doi-asserted-by":"crossref","first-page":"318","DOI":"10.1109\/CVPR42600.2020.00040","volume-title":"Proceedings of the 2020 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR)","author":"Dong Yinpeng","year":"2020","unstructured":"Yinpeng Dong, Qi-An Fu, Xiao Yang, Tianyu Pang, Hang Su, Zihao Xiao, and Jun Zhu. 2020. Benchmarking adversarial robustness on image classification. In Proceedings of the 2020 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 318\u2013328. 
DOI:10.1109\/CVPR42600.2020.00040"},{"key":"e_1_3_2_32_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIFS.2018.2825958"},{"key":"e_1_3_2_33_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-031-08421-8_34"},{"key":"e_1_3_2_34_2","unstructured":"I. Goodfellow J. Shlens and C. Szegedy. 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572."},{"key":"e_1_3_2_35_2","first-page":"2137","volume-title":"Proceedings of the 35th International Conference on Machine Learning","author":"Ilyas Andrew","year":"2018","unstructured":"Andrew Ilyas, Logan Engstrom, Anish Athalye, and Jessy Lin. 2018. Black-box adversarial attacks with limited queries and information. In Proceedings of the 35th International Conference on Machine Learning . Jennifer Dy and Andreas Krause (Eds.), Proceedings of Machine Learning Research, Vol. 80, PMLR, 2137\u20132146."},{"key":"e_1_3_2_36_2","doi-asserted-by":"publisher","DOI":"10.1007\/s12525-021-00475-2"},{"key":"e_1_3_2_37_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.csl.2021.101199"},{"key":"e_1_3_2_38_2","volume-title":"Proceedings of the 15th International Conference on Availability, Reliability and Security (ARES\u201920)","author":"Kuchipudi Bhargav","year":"2020","unstructured":"Bhargav Kuchipudi, Ravi Teja Nannapaneni, and Qi Liao. 2020. Adversarial machine learning for spam filters. In Proceedings of the 15th International Conference on Availability, Reliability and Security (ARES\u201920). ACM, New York, NY, USA, Article 38, 6 pages. 
DOI:10.1145\/3407023.3407079"},{"key":"e_1_3_2_39_2","doi-asserted-by":"publisher","DOI":"10.1007\/s11042-022-12500-3"},{"key":"e_1_3_2_40_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.comcom.2021.04.007"},{"key":"e_1_3_2_41_2","doi-asserted-by":"publisher","DOI":"10.1109\/TCSII.2022.3160266"},{"key":"e_1_3_2_42_2","volume-title":"Proceedings of the 5th International Conference on Learning Representations, ICLR 2017","author":"Liu Yanpei","year":"2017","unstructured":"Yanpei Liu, Xinyun Chen, Chang Liu, and Dawn Song. 2017. Delving into transferable adversarial examples and black-box attacks. In Proceedings of the 5th International Conference on Learning Representations, ICLR 2017. OpenReview.net."},{"key":"e_1_3_2_43_2","volume-title":"Proceedings of the International Conference on Learning Representations","author":"Madry Aleksander","year":"2018","unstructured":"Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2018. Towards deep learning models resistant to adversarial attacks. In Proceedings of the International Conference on Learning Representations."},{"key":"e_1_3_2_44_2","doi-asserted-by":"publisher","DOI":"10.1145\/3332184"},{"key":"e_1_3_2_45_2","first-page":"119","volume-title":"Proceedings of the 8th ACM SIGSAC Symposium on Information, Computer and Communications Security (ASIA CCS\u201913)","author":"Maiorca Davide","year":"2013","unstructured":"Davide Maiorca, Igino Corona, and Giorgio Giacinto. 2013. Looking at the bag is not enough to find the bomb: An evasion of structural methods for malicious PDF files detection. In Proceedings of the 8th ACM SIGSAC Symposium on Information, Computer and Communications Security (ASIA CCS\u201913). ACM, New York, NY, USA, 119\u2013130. 
DOI:10.1145\/2484313.2484327"},{"key":"e_1_3_2_46_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.cose.2020.101901"},{"key":"e_1_3_2_47_2","doi-asserted-by":"crossref","first-page":"135","DOI":"10.1145\/3133956.3134057","volume-title":"Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security (CCS\u201917)","author":"Meng Dongyu","year":"2017","unstructured":"Dongyu Meng and Hao Chen. 2017. MagNet: A two-pronged defense against adversarial examples. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security (CCS\u201917). ACM, New York, NY, USA, 135\u2013147. DOI:10.1145\/3133956.3134057"},{"key":"e_1_3_2_48_2","doi-asserted-by":"crossref","first-page":"2574","DOI":"10.1109\/CVPR.2016.282","volume-title":"Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)","author":"Moosavi-Dezfooli S.","year":"2016","unstructured":"S. Moosavi-Dezfooli, A. Fawzi, and P. Frossard. 2016. DeepFool: A simple and accurate method to fool deep neural networks. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE Computer Society, Los Alamitos, CA, USA, 2574\u20132582. DOI:10.1109\/CVPR.2016.282"},{"key":"e_1_3_2_49_2","doi-asserted-by":"crossref","first-page":"86","DOI":"10.1109\/CVPR.2017.17","volume-title":"Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)","author":"Moosavi-Dezfooli Seyed-Mohsen","year":"2017","unstructured":"Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. 2017. Universal adversarial perturbations. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 86\u201394. 
DOI:10.1109\/CVPR.2017.17"},{"key":"e_1_3_2_50_2","first-page":"1","volume-title":"Proceedings of the 2020 IEEE Global Communications Conference","author":"Newaz Akm Iqtidar","year":"2020","unstructured":"Akm Iqtidar Newaz, Nur Imtiazul Haque, Amit Kumar Sikder, Mohammad Ashiqur Rahman, and A. Selcuk Uluagac. 2020. Adversarial attacks to machine learning-based smart healthcare systems. In Proceedings of the 2020 IEEE Global Communications Conference. 1\u20136. DOI:10.1109\/GLOBECOM42002.2020.9322472"},{"key":"e_1_3_2_51_2","first-page":"506","volume-title":"Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security (ASIA CCS\u201917)","author":"Papernot Nicolas","year":"2017","unstructured":"Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z. Berkay Celik, and Ananthram Swami. 2017. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security (ASIA CCS\u201917). ACM, New York, NY, USA, 506\u2013519. DOI:10.1145\/3052973.3053009"},{"key":"e_1_3_2_52_2","unstructured":"N. Papernot P. McDaniel and I. Goodfellow. 2016. Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. arXiv preprint arXiv:1605.07277."},{"key":"e_1_3_2_53_2","doi-asserted-by":"publisher","DOI":"10.1109\/TGRS.2022.3213305"},{"key":"e_1_3_2_54_2","first-page":"1288","volume-title":"Proceedings of the 2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)","author":"Peng Xiao","year":"2019","unstructured":"Xiao Peng, Weiqing Huang, and Zhixin Shi. 2019. Adversarial attack against DoS intrusion detection: An improved boundary-based method. In Proceedings of the 2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI). 1288\u20131295. 
DOI:10.1109\/ICTAI.2019.00179"},{"key":"e_1_3_2_55_2","doi-asserted-by":"crossref","first-page":"1332","DOI":"10.1109\/SP40000.2020.00073","volume-title":"Proceedings of the 2020 IEEE Symposium on Security and Privacy (SP)","author":"Pierazzi Fabio","year":"2020","unstructured":"Fabio Pierazzi, Feargus Pendlebury, Jacopo Cortellazzi, and Lorenzo Cavallaro. 2020. Intriguing properties of adversarial ML attacks in the problem space. In Proceedings of the 2020 IEEE Symposium on Security and Privacy (SP). 1332\u20131349. DOI:10.1109\/SP40000.2020.00073"},{"key":"e_1_3_2_56_2","first-page":"1308","volume-title":"Proceedings of the 2020 IEEE Symposium on Security and Privacy (SP)","author":"Pierazzi Fabio","year":"2020","unstructured":"Fabio Pierazzi, Feargus Pendlebury, Jacopo Cortellazzi, and Lorenzo Cavallaro. 2020. Intriguing properties of adversarial ML attacks in the problem space. In Proceedings of the 2020 IEEE Symposium on Security and Privacy (SP). IEEE Computer Society, 1308\u20131325. DOI:10.1109\/SP40000.2020.00073"},{"key":"e_1_3_2_57_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-94-015-8330-5_4"},{"key":"e_1_3_2_58_2","first-page":"5231","volume-title":"Proceedings of the 36th International Conference on Machine Learning","author":"Qin Yao","year":"2019","unstructured":"Yao Qin, Nicholas Carlini, Garrison Cottrell, Ian Goodfellow, and Colin Raffel. 2019. Imperceptible, robust, and targeted adversarial examples for automatic speech recognition. In Proceedings of the 36th International Conference on Machine Learning. Kamalika Chaudhuri and Ruslan Salakhutdinov (Eds.), Proceedings of Machine Learning Research, Vol. 
97, PMLR, 5231\u20135240."},{"key":"e_1_3_2_59_2","doi-asserted-by":"publisher","DOI":"10.21105\/joss.02607"},{"key":"e_1_3_2_60_2","doi-asserted-by":"crossref","first-page":"59","DOI":"10.1145\/2996758.2996771","volume-title":"Proceedings of the 2016 ACM Workshop on Artificial Intelligence and Security (AISec\u201916)","author":"Russu Paolo","year":"2016","unstructured":"Paolo Russu, Ambra Demontis, Battista Biggio, Giorgio Fumera, and Fabio Roli. 2016. Secure kernel machines against evasion attacks. In Proceedings of the 2016 ACM Workshop on Artificial Intelligence and Security (AISec\u201916). ACM, New York, NY, USA, 59\u201369. DOI:10.1145\/2996758.2996771"},{"key":"e_1_3_2_61_2","doi-asserted-by":"publisher","DOI":"10.1109\/TCCN.2020.3010330"},{"key":"e_1_3_2_62_2","doi-asserted-by":"publisher","DOI":"10.1145\/2377677.2377781"},{"key":"e_1_3_2_63_2","doi-asserted-by":"crossref","first-page":"163","DOI":"10.1145\/2504730.2504731","volume-title":"Proceedings of the 2013 Conference on Internet Measurement Conference (IMC\u201913)","author":"Stringhini Gianluca","year":"2013","unstructured":"Gianluca Stringhini, Gang Wang, Manuel Egele, Christopher Kruegel, Giovanni Vigna, Haitao Zheng, and Ben Y. Zhao. 2013. Follow the green: Growth and dynamics in Twitter follower markets. In Proceedings of the 2013 Conference on Internet Measurement Conference (IMC\u201913). ACM, New York, NY, USA, 163\u2013176. DOI:10.1145\/2504730.2504731"},{"key":"e_1_3_2_64_2","first-page":"1299","volume-title":"Proceedings of the 27th USENIX Conference on Security Symposium (SEC\u201918)","author":"Suciu Octavian","year":"2018","unstructured":"Octavian Suciu, Radu M\u0103rginean, Yi\u011fitcan Kaya, Hal Daum\u00e9, and Tudor Dumitra\u015f. 2018. When does machine learning FAIL? Generalized transferability for evasion and poisoning attacks. In Proceedings of the 27th USENIX Conference on Security Symposium (SEC\u201918). 
USENIX Association, USA, 1299\u20131316."},{"key":"e_1_3_2_65_2","doi-asserted-by":"crossref","first-page":"318","DOI":"10.1145\/3383313.3412243","volume-title":"Proceedings of the 14th ACM Conference on Recommender Systems (RecSys\u201920)","author":"Tang Jiaxi","year":"2020","unstructured":"Jiaxi Tang, Hongyi Wen, and Ke Wang. 2020. Revisiting adversarially learned injection attacks against recommender systems. In Proceedings of the 14th ACM Conference on Recommender Systems (RecSys\u201920). ACM, New York, NY, USA, 318\u2013327. DOI:10.1145\/3383313.3412243"},{"key":"e_1_3_2_66_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2019.2936378"},{"key":"e_1_3_2_67_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.cose.2021.102554"},{"key":"e_1_3_2_68_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIFS.2020.3026543"},{"key":"e_1_3_2_69_2","doi-asserted-by":"publisher","DOI":"10.1016\/0169-7439(87)80084-9"},{"key":"e_1_3_2_70_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.energy.2021.122960"},{"key":"e_1_3_2_71_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.cose.2017.11.013"},{"key":"e_1_3_2_72_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.jisa.2020.102694"},{"key":"e_1_3_2_73_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIFS.2013.2267732"},{"key":"e_1_3_2_74_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.ress.2023.109299"},{"key":"e_1_3_2_75_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.cose.2023.103103"},{"key":"e_1_3_2_76_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.cose.2022.102770"},{"key":"e_1_3_2_77_2","doi-asserted-by":"publisher","DOI":"10.1109\/TCYB.2015.2415032"},{"key":"e_1_3_2_78_2","doi-asserted-by":"publisher","DOI":"10.1109\/JIOT.2020.3023126"},{"key":"e_1_3_2_79_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2022.3220639"},{"key":"e_1_3_2_80_2","doi-asserted-by":"publisher","DOI":"10.1109\/TCSS.2019.2960824"},{"key":"e_1_3_2_81_2","doi-asserted-by":"publisher","DOI":"10.1145\/3394520"},{"key":"e_1_3_2_82
_2","doi-asserted-by":"crossref","first-page":"197","DOI":"10.1109\/SP.2014.20","volume-title":"Proceedings of the 2014 IEEE Symposium on Security and Privacy","author":"\u0160rndi\u0107 Nedim","year":"2014","unstructured":"Nedim \u0160rndi\u0107 and Pavel Laskov. 2014. Practical evasion of a learning-based classifier: A case study. In Proceedings of the 2014 IEEE Symposium on Security and Privacy. 197\u2013211. DOI:10.1109\/SP.2014.20"}],"container-title":["ACM Transactions on Privacy and Security"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3643563","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3643563","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T22:50:27Z","timestamp":1750287027000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3643563"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,3,14]]},"references-count":81,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2024,5,31]]}},"alternative-id":["10.1145\/3643563"],"URL":"https:\/\/doi.org\/10.1145\/3643563","relation":{},"ISSN":["2471-2566","2471-2574"],"issn-type":[{"type":"print","value":"2471-2566"},{"type":"electronic","value":"2471-2574"}],"subject":[],"published":{"date-parts":[[2024,3,14]]},"assertion":[{"value":"2023-02-24","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-01-17","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-03-14","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}