{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T04:21:01Z","timestamp":1750220461554,"version":"3.41.0"},"publisher-location":"New York, NY, USA","reference-count":31,"publisher":"ACM","license":[{"start":{"date-parts":[[2021,5,24]],"date-time":"2021-05-24T00:00:00Z","timestamp":1621814400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2021,5,24]]},"DOI":"10.1145\/3433210.3453114","type":"proceedings-article","created":{"date-parts":[[2021,6,4]],"date-time":"2021-06-04T15:26:39Z","timestamp":1622820399000},"page":"292-306","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":1,"title":["AMEBA: An Adaptive Approach to the Black-Box Evasion of Machine Learning Models"],"prefix":"10.1145","author":[{"given":"Stefano","family":"Calzavara","sequence":"first","affiliation":[{"name":"Universit\u00e0 Ca' Foscari Venezia, Venezia, Italy"}]},{"given":"Lorenzo","family":"Cazzaro","sequence":"additional","affiliation":[{"name":"Universit\u00e0 Ca' Foscari Venezia, Venezia, Italy"}]},{"given":"Claudio","family":"Lucchese","sequence":"additional","affiliation":[{"name":"Universit\u00e0 Ca' Foscari Venezia, Venezia, Italy"}]}],"member":"320","published-online":{"date-parts":[[2021,6,4]]},"reference":[{"key":"e_1_3_2_2_1_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-01258-8_10"},{"key":"e_1_3_2_2_2_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-642-40994-3_25"},{"key":"e_1_3_2_2_3_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.patcog.2018.07.023"},{"key":"e_1_3_2_2_4_1","unstructured":"Wieland Brendel Jonas Rauber and Matthias Bethge. 2018. 
Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models. In ICLR. OpenReview.net."},{"key":"e_1_3_2_2_5_1","volume-title":"ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models. In AISec@CCS. ACM, 15--26.","author":"Chen Pin-Yu","year":"2017","unstructured":"Pin-Yu Chen, Huan Zhang, Yash Sharma, Jinfeng Yi, and Cho-Jui Hsieh. 2017. ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models. In AISec@CCS. ACM, 15--26."},{"key":"e_1_3_2_2_6_1","unstructured":"Minhao Cheng Thong Le Pin-Yu Chen Huan Zhang Jinfeng Yi and Cho-Jui Hsieh. 2019b. Query-Efficient Hard-label Black-box Attack: An Optimization-based Approach. In ICLR. OpenReview.net."},{"key":"e_1_3_2_2_7_1","unstructured":"Shuyu Cheng Yinpeng Dong Tianyu Pang Hang Su and Jun Zhu. 2019a. Improving Black-box Adversarial Attacks with a Transfer-based Prior. In NeurIPS. 10932--10942."},{"volume-title":"USENIX Security","author":"Demontis Ambra","key":"e_1_3_2_2_8_1","unstructured":"Ambra Demontis, Marco Melis, Maura Pintor, Matthew Jagielski, Battista Biggio, Alina Oprea, Cristina Nita-Rotaru, and Fabio Roli. 2019. 
Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks. In USENIX Security. USENIX Association, 321--338."},{"volume-title":"Boosting Adversarial Attacks With Momentum","author":"Dong Yinpeng","key":"e_1_3_2_2_9_1","unstructured":"Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, and Jianguo Li. 2018. Boosting Adversarial Attacks With Momentum. In CVPR. IEEE Computer Society, 9185--9193."},{"volume-title":"Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks","author":"Dong Yinpeng","key":"e_1_3_2_2_10_1","unstructured":"Yinpeng Dong, Tianyu Pang, Hang Su, and Jun Zhu. 2019. Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks. In CVPR. Computer Vision Foundation \/ IEEE, 4312--4321."},{"key":"e_1_3_2_2_11_1","unstructured":"Ian J. Goodfellow Jonathon Shlens and Christian Szegedy. 2015. Explaining and Harnessing Adversarial Examples. In ICLR. OpenReview.net."},{"key":"e_1_3_2_2_12_1","unstructured":"Andrew Ilyas Logan Engstrom Anish Athalye and Jessy Lin. 2018. Black-box Adversarial Attacks with Limited Queries and Information. 
In ICML. PMLR 2142--2151."},{"key":"e_1_3_2_2_13_1","volume-title":"Prior Convictions: Black-box Adversarial Attacks with Bandits and Priors. In ICLR. OpenReview.net.","author":"Ilyas Andrew","year":"2019","unstructured":"Andrew Ilyas, Logan Engstrom, and Aleksander Madry. 2019. Prior Convictions: Black-box Adversarial Attacks with Bandits and Priors. In ICLR. OpenReview.net."},{"key":"e_1_3_2_2_14_1","volume-title":"Buse Gul Atli, and N. Asokan","author":"Juuti Mika","year":"2019","unstructured":"Mika Juuti, Buse Gul Atli, and N. Asokan. 2019. Making Targeted Black-box Evasion Attacks Effective and Efficient. In AISec@CCS 2019. ACM, 83--94."},{"key":"e_1_3_2_2_15_1","volume-title":"Gradient-Based Learning Applied to Document Recognition. Proc","author":"Lecun Yann","year":"1998","unstructured":"Yann Lecun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. 1998. Gradient-Based Learning Applied to Document Recognition. Proc. IEEE, Vol. 86 (12 1998), 2278--2324."},{"key":"e_1_3_2_2_16_1","unstructured":"Yanpei Liu Xinyun Chen Chang Liu and Dawn Song. 2017. Delving into Transferable Adversarial Examples and Black-box Attacks. In ICLR. 
OpenReview.net."},{"volume-title":"CEAS","author":"Lowd Daniel","key":"e_1_3_2_2_17_1","unstructured":"Daniel Lowd and Christopher Meek. 2005. Good Word Attacks on Statistical Spam Filters. In CEAS. http:\/\/www.ceas.cc\/papers-2005\/125.pdf"},{"key":"e_1_3_2_2_18_1","volume-title":"Fahad Shahbaz Khan, and Fatih Porikli.","author":"Naseer Muzammal","year":"2019","unstructured":"Muzammal Naseer, Salman H. Khan, Muhammad Haris Khan, Fahad Shahbaz Khan, and Fatih Porikli. 2019. Cross-Domain Transferability of Adversarial Perturbations. In NeurIPS. 12885--12895."},{"key":"e_1_3_2_2_19_1","volume-title":"Goodfellow","author":"Papernot Nicolas","year":"2016","unstructured":"Nicolas Papernot, Patrick D. McDaniel, and Ian J. Goodfellow. 2016. Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples. CoRR, Vol. abs\/1605.07277 (2016). arxiv: 1605.07277 http:\/\/arxiv.org\/abs\/1605.07277"},{"key":"e_1_3_2_2_20_1","doi-asserted-by":"crossref","unstructured":"Nicolas Papernot Patrick D. McDaniel Ian J. Goodfellow Somesh Jha Z. Berkay Celik and Ananthram Swami. 2017. Practical Black-Box Attacks against Machine Learning. In AsiaCCS. 
ACM 506--519.","DOI":"10.1145\/3052973.3053009"},{"key":"e_1_3_2_2_21_1","doi-asserted-by":"publisher","DOI":"10.5555\/1953048.2078195"},{"volume-title":"Adversarial Diversity and Hard Positive Generation. In CVPR Workshops. IEEE Computer Society, 410--417","author":"Rozsa Andras","key":"e_1_3_2_2_22_1","unstructured":"Andras Rozsa, Ethan M. Rudd, and Terrance E. Boult. 2016. Adversarial Diversity and Hard Positive Generation. In CVPR Workshops. IEEE Computer Society, 410--417."},{"key":"e_1_3_2_2_23_1","doi-asserted-by":"publisher","DOI":"10.1561\/2200000070"},{"key":"e_1_3_2_2_24_1","doi-asserted-by":"publisher","DOI":"10.1561\/2200000068"},{"key":"e_1_3_2_2_25_1","volume-title":"Hal Daum\u00e9 III, and Tudor Dumitras","author":"Suciu Octavian","year":"2018","unstructured":"Octavian Suciu, Radu Marginean, Yigitcan Kaya, Hal Daum\u00e9 III, and Tudor Dumitras. 2018. When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks. In USENIX Security. USENIX Association, 1299--1316."},{"volume-title":"Reinforcement learning: An introduction","author":"Sutton Richard S","key":"e_1_3_2_2_26_1","unstructured":"Richard S Sutton and Andrew G Barto. 2018. Reinforcement learning: An introduction. MIT press."},{"volume-title":"Hybrid Batch Attacks: Finding Black-box Adversarial Examples with Limited Queries","author":"Suya Fnu","key":"e_1_3_2_2_27_1","unstructured":"Fnu Suya, Jianfeng Chi, David Evans, and Yuan Tian. 2020. 
Hybrid Batch Attacks: Finding Black-box Adversarial Examples with Limited Queries. In USENIX. USENIX Association, 1327--1344."},{"key":"e_1_3_2_2_28_1","unstructured":"Christian Szegedy Wojciech Zaremba Ilya Sutskever Joan Bruna Dumitru Erhan Ian J. Goodfellow and Rob Fergus. 2014. Intriguing properties of neural networks. In ICLR. OpenReview.net."},{"key":"e_1_3_2_2_29_1","volume-title":"McDaniel","author":"Florian Tram\u00e8","year":"2018","unstructured":"Florian Tram\u00e8r, Alexey Kurakin, Nicolas Papernot, Ian J. Goodfellow, Dan Boneh, and Patrick D. McDaniel. 2018. Ensemble Adversarial Training: Attacks and Defenses. In ICLR. OpenReview.net."},{"key":"e_1_3_2_2_30_1","unstructured":"Dongxian Wu Yisen Wang Shu-Tao Xia James Bailey and Xingjun Ma. 2020. Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets. In ICLR. OpenReview.net."},{"key":"e_1_3_2_2_31_1","volume-title":"Yuille","author":"Xie Cihang","year":"2019","unstructured":"Cihang Xie, Zhishuai Zhang, Yuyin Zhou, Song Bai, Jianyu Wang, Zhou Ren, and Alan L. Yuille. 2019. Improving Transferability of Adversarial Examples With Input Diversity. In CVPR. Computer Vision Foundation \/ IEEE, 2730--2739. 
"}],"event":{"name":"ASIA CCS '21: ACM Asia Conference on Computer and Communications Security","sponsor":["SIGSAC ACM Special Interest Group on Security, Audit, and Control"],"location":"Virtual Event Hong Kong","acronym":"ASIA CCS '21"},"container-title":["Proceedings of the 2021 ACM Asia Conference on Computer and Communications Security"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3433210.3453114","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3433210.3453114","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T20:48:12Z","timestamp":1750193292000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3433210.3453114"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,5,24]]},"references-count":31,"alternative-id":["10.1145\/3433210.3453114","10.1145\/3433210"],"URL":"https:\/\/doi.org\/10.1145\/3433210.3453114","relation":{},"subject":[],"published":{"date-parts":[[2021,5,24]]},"assertion":[{"value":"2021-06-04","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}