{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,13]],"date-time":"2026-02-13T23:19:16Z","timestamp":1771024756025,"version":"3.50.1"},"publisher-location":"New York, NY, USA","reference-count":44,"publisher":"ACM","license":[{"start":{"date-parts":[[2022,4,14]],"date-time":"2022-04-14T00:00:00Z","timestamp":1649894400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2022,4,14]]},"DOI":"10.1145\/3508398.3511510","type":"proceedings-article","created":{"date-parts":[[2022,4,16]],"date-time":"2022-04-16T04:13:31Z","timestamp":1650082411000},"page":"16-28","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":8,"title":["EG-Booster: Explanation-Guided Booster of ML Evasion Attacks"],"prefix":"10.1145","author":[{"given":"Abderrahmen","family":"Amich","sequence":"first","affiliation":[{"name":"University of Michigan, Dearborn, MI, USA"}]},{"given":"Birhanu","family":"Eshete","sequence":"additional","affiliation":[{"name":"University of Michigan, Dearborn, MI, USA"}]}],"member":"320","published-online":{"date-parts":[[2022,4,15]]},"reference":[{"key":"e_1_3_2_2_1_1","volume-title":"CNN-CIFAR10 model. https:\/\/github.com\/jamespengcheng\/PyTorch-CNNon-CIFAR10","unstructured":"2020. CNN-CIFAR10 model. https:\/\/github.com\/jamespengcheng\/PyTorch-CNNon-CIFAR10 . 2020. CNN-CIFAR10 model. https:\/\/github.com\/jamespengcheng\/PyTorch-CNNon-CIFAR10."},{"key":"e_1_3_2_2_2_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-90019-9_11"},{"key":"e_1_3_2_2_3_1","volume-title":"Globally Normalized Transition-Based Neural Networks. 
In ACL","author":"Andor Daniel","year":"2016","unstructured":"Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. 2016. Globally Normalized Transition-Based Neural Networks. In ACL 2016. The Association for Computer Linguistics."},{"key":"e_1_3_2_2_4_1","unstructured":"Ulrich A\u00efvodji, Alexandre Bolot, and S\u00e9bastien Gambs. 2020. Model extraction from counterfactual explanations. arXiv:2009.01884 [cs.LG]"},{"key":"e_1_3_2_2_5_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-642-40994-3_25"},{"key":"e_1_3_2_2_6_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.patcog.2018.07.023"},{"key":"e_1_3_2_2_7_1","volume-title":"Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019","author":"Brendel Wieland","year":"2019","unstructured":"Wieland Brendel, Jonas Rauber, Matthias K\u00fcmmerer, Ivan Ustyuzhaninov, and Matthias Bethge. 2019. Accurate, reliable and fast robustness evaluation. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8--14, 2019, Vancouver, BC, Canada. 
12841--12851."},{"key":"e_1_3_2_2_8_1","unstructured":"Nicholas Carlini. 2020. A Complete List of All (arXiv) Adversarial Example Papers. https:\/\/nicholas.carlini.com\/writing\/2019\/all-adversarial-example-papers.html."},{"key":"e_1_3_2_2_9_1","volume-title":"Towards Evaluating the Robustness of Neural Networks. In 2017 IEEE Symposium on Security and Privacy, SP 2017","author":"Carlini Nicholas","year":"2017","unstructured":"Nicholas Carlini and David A. Wagner. 2017. Towards Evaluating the Robustness of Neural Networks. In 2017 IEEE Symposium on Security and Privacy, SP 2017, San Jose, CA, USA, May 22--26, 2017. 39--57."},{"key":"e_1_3_2_2_10_1","volume-title":"Wainwright","author":"Chen Jianbo","year":"2020","unstructured":"Jianbo Chen, Michael I. Jordan, and Martin J. Wainwright. 2020. HopSkipJumpAttack: A Query-Efficient Decision-Based Attack. arXiv:1904.02144 [cs.LG]"},{"key":"e_1_3_2_2_11_1","volume-title":"Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13--18","volume":"2216","author":"Croce Francesco","year":"2020","unstructured":"Francesco Croce and Matthias Hein. 2020. 
Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13--18 July 2020, Virtual Event (Proceedings of Machine Learning Research, Vol. 119). PMLR, 2206--2216."},{"key":"e_1_3_2_2_12_1","doi-asserted-by":"publisher","DOI":"10.1109\/TASL.2011.2134090"},{"key":"e_1_3_2_2_13_1","volume-title":"Boosting Adversarial Attacks With Momentum. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018","author":"Dong Yinpeng","year":"2018","unstructured":"Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, and Jianguo Li. 2018. Boosting Adversarial Attacks With Momentum. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18--22, 2018. IEEE Computer Society, 9185--9193."},{"key":"e_1_3_2_2_14_1","unstructured":"Logan Engstrom, Andrew Ilyas, Hadi Salman, Shibani Santurkar, and Dimitris Tsipras. 2019. Robustness (Python Library). https:\/\/github.com\/MadryLab\/robustness"},{"key":"e_1_3_2_2_15_1","doi-asserted-by":"publisher","DOI":"10.1109\/TIFS.2020.3021924"},{"key":"e_1_3_2_2_16_1","volume-title":"Proceedings of the AAAI Conference on Artificial Intelligence 33 (10","author":"Ghorbani Amirata","year":"2017","unstructured":"Amirata Ghorbani, Abubakar Abid, and James Zou. 2017. Interpretation of Neural Networks Is Fragile. Proceedings of the AAAI Conference on Artificial Intelligence 33 (10 2017). 
"},{"key":"e_1_3_2_2_17_1","volume-title":"Explaining and Harnessing Adversarial Examples. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7--9, 2015, Conference Track Proceedings.","author":"Goodfellow Ian J.","year":"2015","unstructured":"Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and Harnessing Adversarial Examples. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7--9, 2015, Conference Track Proceedings."},{"key":"e_1_3_2_2_18_1","doi-asserted-by":"publisher","DOI":"10.1145\/3243734.3243792"},{"key":"e_1_3_2_2_19_1","volume-title":"Fooling Neural Network Interpretations via Adversarial Model Manipulation. (02","author":"Heo Juyeon","year":"2019","unstructured":"Juyeon Heo, Sunghwan Joo, and Taesup Moon. 2019. Fooling Neural Network Interpretations via Adversarial Model Manipulation. (02 2019)."},{"key":"e_1_3_2_2_20_1","volume-title":"Adversarial Malware Binaries: Evading Deep Learning for Malware Detection in Executables. In 26th European Signal Processing Conference, EUSIPCO 2018","author":"Kolosnjaji Bojan","year":"2018","unstructured":"Bojan Kolosnjaji, Ambra Demontis, Battista Biggio, Davide Maiorca, Giorgio Giacinto, Claudia Eckert, and Fabio Roli. 2018. Adversarial Malware Binaries: Evading Deep Learning for Malware Detection in Executables. 
In 26th European Signal Processing Conference, EUSIPCO 2018, Roma, Italy, September 3--7, 2018. 533--537."},{"key":"e_1_3_2_2_21_1","unstructured":"Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. [n.d.]. CIFAR-10 (Canadian Institute for Advanced Research). ([n. d.]). http:\/\/www.cs.toronto.edu\/~kriz\/cifar.html"},{"key":"e_1_3_2_2_22_1","volume-title":"Adversarial Machine Learning at Scale. CoRR abs\/1611.01236","author":"Kurakin Alexey","year":"2016","unstructured":"Alexey Kurakin, Ian J. Goodfellow, and Samy Bengio. 2016. Adversarial Machine Learning at Scale. CoRR abs\/1611.01236 (2016). arXiv:1611.01236"},{"key":"e_1_3_2_2_23_1","volume-title":"Burges","author":"LeCun Yan","year":"2020","unstructured":"Yan LeCun, Corinna Cortes, and Christopher J.C. Burges. 2020. The MNIST Database of Handwritten Digits. http:\/\/yann.lecun.com\/exdb\/mnist\/."},{"key":"e_1_3_2_2_24_1","volume-title":"Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017","author":"Scott","year":"2017","unstructured":"Scott M. Lundberg and Su-In Lee. 2017. A Unified Approach to Interpreting Model Predictions. 
In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4--9 December 2017, Long Beach, CA, USA. 4765--4774."},{"key":"e_1_3_2_2_25_1","volume-title":"Towards Deep Learning Models Resistant to Adversarial Attacks. CoRR abs\/1706.06083","author":"Madry Aleksander","year":"2017","unstructured":"Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2017. Towards Deep Learning Models Resistant to Adversarial Attacks. CoRR abs\/1706.06083 (2017). arXiv:1706.06083"},{"key":"e_1_3_2_2_26_1","volume-title":"secml: A Python Library for Secure and Explainable Machine Learning. arXiv preprint arXiv:1912.10013","author":"Melis Marco","year":"2019","unstructured":"Marco Melis, Ambra Demontis, Maura Pintor, Angelo Sotgiu, and Battista Biggio. 2019. secml: A Python Library for Secure and Explainable Machine Learning. 
arXiv preprint arXiv:1912.10013 (2019)."},{"key":"e_1_3_2_2_27_1","doi-asserted-by":"publisher","DOI":"10.1145\/3287560.3287562"},{"key":"e_1_3_2_2_28_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00930"},{"key":"e_1_3_2_2_29_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.282"},{"key":"e_1_3_2_2_30_1","doi-asserted-by":"crossref","unstructured":"Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, and Ananthram Swami. 2016. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks. 582--597.","DOI":"10.1109\/SP.2016.41"},{"key":"e_1_3_2_2_31_1","volume-title":"Fast Minimum-norm Adversarial Attacks through Adaptive Norm Constraints. CoRR abs\/2102.12827","author":"Pintor Maura","year":"2021","unstructured":"Maura Pintor, Fabio Roli, Wieland Brendel, and Battista Biggio. 2021. Fast Minimum-norm Adversarial Attacks through Adaptive Norm Constraints. CoRR abs\/2102.12827 (2021)."},{"key":"e_1_3_2_2_32_1","doi-asserted-by":"publisher","DOI":"10.1016\/0041-5553(64)90137-5"},{"key":"e_1_3_2_2_33_1","doi-asserted-by":"publisher","DOI":"10.1145\/2939672.2939778"},{"key":"e_1_3_2_2_34_1","volume-title":"Generating End-to-End Adversarial Examples for Malware Classifiers Using Explainability. In 2020 International Joint Conference on Neural Networks, IJCNN 2020","author":"Rosenberg Ishai","year":"2020","unstructured":"Ishai Rosenberg, Shai Meir, Jonathan Berrebi, Ilay Gordon, Guillaume Sicard, and Eli (Omid) David. 2020. Generating End-to-End Adversarial Examples for Malware Classifiers Using Explainability. 
In 2020 International Joint Conference on Neural Networks, IJCNN 2020, Glasgow, United Kingdom, July 19--24, 2020. IEEE, 1--10."},{"key":"e_1_3_2_2_35_1","volume-title":"Deep Reinforcement Learning framework for Autonomous Driving. CoRR abs\/1704.02532","author":"Sallab Ahmad El","year":"2017","unstructured":"Ahmad El Sallab, Mohammed Abdou, Etienne Perot, and Senthil Kumar Yogamani. 2017. Deep Reinforcement Learning framework for Autonomous Driving. CoRR abs\/1704.02532 (2017)."},{"key":"e_1_3_2_2_36_1","volume-title":"Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers. In 30th USENIX Security Symposium, USENIX Security","author":"Severi Giorgio","year":"2021","unstructured":"Giorgio Severi, Jim Meyer, Scott Coull, and Alina Oprea. 2021. Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers. In 30th USENIX Security Symposium, USENIX Security 2021. USENIX Association."},{"key":"e_1_3_2_2_37_1","doi-asserted-by":"crossref","unstructured":"L. Shapley. 1953. A value for n-person games.","DOI":"10.1515\/9781400881970-018"},{"key":"e_1_3_2_2_38_1","doi-asserted-by":"crossref","unstructured":"Reza Shokri, Martin Strobel, and Yair Zick. 2021. 
On the Privacy Risks of Model Explanations. arXiv:1907.00164 [cs.LG]","DOI":"10.1145\/3461702.3462533"},{"key":"e_1_3_2_2_39_1","volume-title":"Proceedings of the 34th International Conference on Machine Learning, ICML 2017","author":"Shrikumar Avanti","year":"2017","unstructured":"Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. 2017. Learning Important Features Through Propagating Activation Differences. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6--11 August 2017. 3145--3153."},{"key":"e_1_3_2_2_40_1","volume-title":"Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps. In 2nd International Conference on Learning Representations, ICLR Workshop Track Proceedings.","author":"Simonyan Karen","year":"2014","unstructured":"Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. 2014. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps. In 2nd International Conference on Learning Representations, ICLR Workshop Track Proceedings."},{"key":"e_1_3_2_2_41_1","volume-title":"Striving for Simplicity: The All Convolutional Net. 
In 3rd International Conference on Learning Representations, ICLR Workshop Track Proceedings.","author":"Springenberg Jost Tobias","unstructured":"Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin A. Riedmiller. 2015. Striving for Simplicity: The All Convolutional Net. In 3rd International Conference on Learning Representations, ICLR Workshop Track Proceedings."},{"key":"e_1_3_2_2_42_1","unstructured":"Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2014. Intriguing properties of neural networks. arXiv:1312.6199 [cs.CV]"},{"key":"e_1_3_2_2_43_1","unstructured":"Jonathan Uesato, Brendan O'Donoghue, Aaron van den Oord, and Pushmeet Kohli. 2018. Adversarial Risk and the Dangers of Evaluating Against Weak Attacks. arXiv:1802.05666 [cs.LG]"},{"key":"e_1_3_2_2_44_1","volume-title":"Evaluating Explanation Methods for Deep Learning in Security. In 2020 IEEE European Symposium on Security and Privacy (EuroS P). 158--174","author":"Warnecke A.","unstructured":"A. Warnecke, D. Arp, C. Wressnegger, and K. Rieck. 2020. Evaluating Explanation Methods for Deep Learning in Security. 
In 2020 IEEE European Symposium on Security and Privacy (EuroS P). 158--174."}],"event":{"name":"CODASPY '22: Twelfth ACM Conference on Data and Application Security and Privacy","location":"Baltimore MD USA","acronym":"CODASPY '22","sponsor":["SIGSAC ACM Special Interest Group on Security, Audit, and Control"]},"container-title":["Proceedings of the Twelfth ACM Conference on Data and Application Security and Privacy"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3508398.3511510","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3508398.3511510","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T19:30:39Z","timestamp":1750188639000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3508398.3511510"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,4,14]]},"references-count":44,"alternative-id":["10.1145\/3508398.3511510","10.1145\/3508398"],"URL":"https:\/\/doi.org\/10.1145\/3508398.3511510","relation":{},"subject":[],"published":{"date-parts":[[2022,4,14]]},"assertion":[{"value":"2022-04-15","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}