{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,9,4]],"date-time":"2025-09-04T13:28:57Z","timestamp":1756992537531,"version":"3.41.0"},"publisher-location":"New York, NY, USA","reference-count":36,"publisher":"ACM","license":[{"start":{"date-parts":[[2020,8,20]],"date-time":"2020-08-20T00:00:00Z","timestamp":1597881600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/100000001","name":"National Science Foundation","doi-asserted-by":"publisher","award":["1910546,1953813,1846151"],"award-info":[{"award-number":["1910546,1953813,1846151"]}],"id":[{"id":"10.13039\/100000001","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2020,8,23]]},"DOI":"10.1145\/3394486.3403241","type":"proceedings-article","created":{"date-parts":[[2020,8,20]],"date-time":"2020-08-20T23:03:57Z","timestamp":1597964637000},"page":"1899-1907","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":19,"title":["AdvMind: Inferring Adversary Intent of Black-Box Attacks"],"prefix":"10.1145","author":[{"given":"Ren","family":"Pang","sequence":"first","affiliation":[{"name":"Pennsylvania State University, State College, PA, USA"}]},{"given":"Xinyang","family":"Zhang","sequence":"additional","affiliation":[{"name":"Pennsylvania State University, State College, PA, USA"}]},{"given":"Shouling","family":"Ji","sequence":"additional","affiliation":[{"name":"Zhejiang University, Hangzhou, China"}]},{"given":"Xiapu","family":"Luo","sequence":"additional","affiliation":[{"name":"Hong Kong Polytechnic University, Hong Kong, Hong Kong"}]},{"given":"Ting","family":"Wang","sequence":"additional","affiliation":[{"name":"Pennsylvania State University, State College, PA, 
USA"}]}],"member":"320","published-online":{"date-parts":[[2020,8,20]]},"reference":[
{"key":"e_1_3_2_1_1_1","volume-title":"Proceedings of IEEE Conference on Machine Learning (ICML).","author":"Athalye Anish","year":"2018","unstructured":"Anish Athalye, Nicholas Carlini, and David Wagner. 2018. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples. In Proceedings of IEEE Conference on Machine Learning (ICML)."},
{"key":"e_1_3_2_1_2_1","volume-title":"Proceedings of IEEE Conference on Machine Learning (ICML).","author":"Bernstein Jeremy","year":"2018","unstructured":"Jeremy Bernstein, Yu-Xiang Wang, Kamyar Azizzadenesheli, and Animashree Anandkumar. 2018. signSGD: Compressed Optimisation for Non-Convex Problems. In Proceedings of IEEE Conference on Machine Learning (ICML)."},
{"key":"e_1_3_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-642-34166-3_46"},
{"key":"e_1_3_2_1_4_1","doi-asserted-by":"publisher","DOI":"10.1109\/FG.2018.00020"},
{"volume-title":"Proceedings of IEEE Symposium on Security and Privacy (S&P).","author":"Carlini Nicholas","key":"e_1_3_2_1_5_1","unstructured":"Nicholas Carlini and David A. Wagner. 2017. Towards Evaluating the Robustness of Neural Networks. In Proceedings of IEEE Symposium on Security and Privacy (S&P)."},
{"key":"e_1_3_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.1145\/3128572.3140448"},
{"key":"e_1_3_2_1_7_1","volume-title":"Stateful Detection of Black-Box Adversarial Attacks. ArXiv e-prints","author":"Chen Steven","year":"2019","unstructured":"Steven Chen, Nicholas Carlini, and David Wagner. 2019. Stateful Detection of Black-Box Adversarial Attacks. ArXiv e-prints (2019)."},
{"key":"e_1_3_2_1_8_1","volume-title":"Adversarial Classification. In Proceedings of ACM International Conference on Knowledge Discovery and Data Mining (KDD).","author":"Dalvi Nilesh","year":"2004","unstructured":"Nilesh Dalvi, Pedro Domingos, Mausam, Sumit Sanghai, and Deepak Verma. 2004. Adversarial Classification. In Proceedings of ACM International Conference on Knowledge Discovery and Data Mining (KDD)."},
{"key":"e_1_3_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2009.5206848"},
{"key":"e_1_3_2_1_10_1","volume-title":"Nature","volume":"542","author":"Esteva Andre","year":"2017","unstructured":"Andre Esteva, Brett Kuprel, Roberto A. Novoa, Justin Ko, Susan M. Swetter, Helen M. Blau, and Sebastian Thrun. 2017. Dermatologist-level classification of skin cancer with deep neural networks. Nature, Vol. 542, 7639 (2017), 115--118."},
{"key":"e_1_3_2_1_11_1","volume-title":"Proceedings of International Conference on Learning Representations (ICLR).","author":"Goodfellow Ian","year":"2015","unstructured":"Ian Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and Harnessing Adversarial Examples. In Proceedings of International Conference on Learning Representations (ICLR)."},
{"key":"e_1_3_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.90"},
{"key":"e_1_3_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.1214\/aoms\/1177703732"},
{"key":"e_1_3_2_1_14_1","volume-title":"Proceedings of IEEE Conference on Machine Learning (ICML).","author":"Ilyas Andrew","year":"2018","unstructured":"Andrew Ilyas, Logan Engstrom, Anish Athalye, and Jessy Lin. 2018. Black-box Adversarial Attacks with Limited Queries and Information. In Proceedings of IEEE Conference on Machine Learning (ICML)."},
{"volume-title":"Learning Multiple Layers of Features from Tiny Images. Technical report","author":"Krizhevsky Alex","key":"e_1_3_2_1_15_1","unstructured":"Alex Krizhevsky and Geoffrey Hinton. 2009. Learning Multiple Layers of Features from Tiny Images. Technical report, University of Toronto (2009)."},
{"key":"e_1_3_2_1_16_1","volume-title":"Nature","volume":"521","author":"Lecun Yann","year":"2015","unstructured":"Yann Lecun, Yoshua Bengio, and Geoffrey Hinton. 2015. Deep learning. Nature, Vol. 521, 7553 (2015), 436--444."},
{"key":"e_1_3_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1145\/2872427.2883060"},
{"volume-title":"Proceedings of IEEE Symposium on Security and Privacy (S&P).","author":"Ling X.","key":"e_1_3_2_1_18_1","unstructured":"X. Ling, S. Ji, J. Zou, J. Wang, C. Wu, B. Li, and T. Wang. 2019. DEEPSEC: A Uniform Platform for Security Analysis of Deep Learning Model. In Proceedings of IEEE Symposium on Security and Privacy (S&P)."},
{"key":"e_1_3_2_1_19_1","volume-title":"Proceedings of International Conference on Learning Representations (ICLR).","author":"Liu Sijia","year":"2019","unstructured":"Sijia Liu, Pin-Yu Chen, Xiangyi Chen, and Mingyi Hong. 2019. signSGD via Zeroth-Order Oracle. In Proceedings of International Conference on Learning Representations (ICLR)."},
{"key":"e_1_3_2_1_20_1","volume-title":"Delving into Transferable Adversarial Examples and Black-Box Attacks. ArXiv e-prints","author":"Liu Yanpei","year":"2016","unstructured":"Yanpei Liu, Xinyun Chen, Chang Liu, and Dawn Song. 2016. Delving into Transferable Adversarial Examples and Black-Box Attacks. ArXiv e-prints (2016)."},
{"key":"e_1_3_2_1_21_1","volume-title":"Proceedings of International Conference on Learning Representations (ICLR).","author":"Ma Xingjun","year":"2018","unstructured":"Xingjun Ma, Bo Li, Yisen Wang, Sarah M. Erfani, Sudanthi Wijewickrema, Grant Schoenebeck, Dawn Song, Michael E. Houle, and James Bailey. 2018. Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality. In Proceedings of International Conference on Learning Representations (ICLR)."},
{"key":"e_1_3_2_1_22_1","volume-title":"Proceedings of International Conference on Learning Representations (ICLR).","author":"Madry Aleksander","year":"2018","unstructured":"Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2018. Towards Deep Learning Models Resistant to Adversarial Attacks. In Proceedings of International Conference on Learning Representations (ICLR)."},
{"volume-title":"Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR).","author":"Moosavi-Dezfooli S.","key":"e_1_3_2_1_23_1","unstructured":"S. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard. 2017. Universal Adversarial Perturbations. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)."},
{"key":"e_1_3_2_1_24_1","volume-title":"Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR).","author":"Narodytska Nina","year":"2017","unstructured":"Nina Narodytska and Shiva Prasad Kasiviswanathan. 2017. Simple Black-Box Adversarial Perturbations for Deep Networks. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)."},
{"key":"e_1_3_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.1145\/3052973.3053009"},
{"key":"e_1_3_2_1_26_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/D16-1264"},
{"key":"e_1_3_2_1_27_1","volume-title":"AdvMind: Inferring Adversary Intent of Black-Box Attacks. ArXiv e-prints","author":"Ren Pang","year":"2020","unstructured":"Ren Pang, Xinyang Zhang, Shouling Ji, Xiapu Luo, and Ting Wang. 2020. AdvMind: Inferring Adversary Intent of Black-Box Attacks. ArXiv e-prints (2020)."},
{"key":"e_1_3_2_1_28_1","volume-title":"Healthcare Fraud Detection Market to grow at 24.59% CAGR by","author":"Research Orbis","year":"2024","unstructured":"Orbis Research. 2019. Healthcare Fraud Detection Market to grow at 24.59% CAGR by 2024. https:\/\/www.globenewswire.com\/."},
{"key":"e_1_3_2_1_29_1","doi-asserted-by":"publisher","DOI":"10.1016\/S0167-9473(02)00078-6"},
{"key":"e_1_3_2_1_30_1","doi-asserted-by":"publisher","DOI":"10.1038\/nature16961"},
{"key":"e_1_3_2_1_31_1","volume-title":"Proceedings of International Conference on Learning Representations (ICLR).","author":"Simonyan Karen","year":"2014","unstructured":"Karen Simonyan and Andrew Zisserman. 2014. Very Deep Convolutional Networks for Large-Scale Image Recognition. In Proceedings of International Conference on Learning Representations (ICLR)."},
{"key":"e_1_3_2_1_32_1","volume-title":"Proceedings of International Conference on Learning Representations (ICLR).","author":"Szegedy Christian","year":"2014","unstructured":"Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2014. Intriguing Properties of Neural Networks. In Proceedings of International Conference on Learning Representations (ICLR)."},
{"volume-title":"Proceedings of International Conference on Learning Representations (ICLR).","author":"Tram\u00e8r F.","key":"e_1_3_2_1_33_1","unstructured":"F. Tram\u00e8r, A. Kurakin, N. Papernot, I. Goodfellow, D. Boneh, and P. McDaniel. 2018. Ensemble Adversarial Training: Attacks and Defenses. In Proceedings of International Conference on Learning Representations (ICLR)."},
{"key":"e_1_3_2_1_34_1","volume-title":"Proceedings of USENIX Security Symposium (SEC).","author":"Tram\u00e8r Florian","year":"2016","unstructured":"Florian Tram\u00e8r, Fan Zhang, Ari Juels, Michael K. Reiter, and Thomas Ristenpart. 2016. Stealing Machine Learning Models via Prediction APIs. In Proceedings of USENIX Security Symposium (SEC)."},
{"key":"e_1_3_2_1_35_1","doi-asserted-by":"publisher","DOI":"10.5555\/2627435.2638566"},
{"key":"e_1_3_2_1_36_1","volume-title":"Chris Junchi Li, and Tong Zhang","author":"Ye Haishan","year":"2018","unstructured":"Haishan Ye, Zhichao Huang, Cong Fang, Chris Junchi Li, and Tong Zhang. 2018. Hessian-Aware Zeroth-Order Optimization for Black-Box Adversarial Attack. ArXiv e-prints (2018)."}],
"event":{"name":"KDD '20: The 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining","sponsor":["SIGMOD ACM Special Interest Group on Management of Data","SIGKDD ACM Special Interest Group on Knowledge Discovery in Data"],"location":"Virtual Event CA USA","acronym":"KDD '20"},"container-title":["Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery &amp; Data Mining"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3394486.3403241","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3394486.3403241","content-type":"application\/pdf","content-version":"vor","intended-application":"syndication"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3394486.3403241","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T22:01:47Z","timestamp":1750197707000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3394486.3403241"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,8,20]]},"references-count":36,"alternative-id":["10.1145\/3394486.3403241","10.1145\/3394486"],"URL":"https:\/\/doi.org\/10.1145\/3394486.3403241","relation":{},"subject":[],"published":{"date-parts":[[2020,8,20]]},"assertion":[{"value":"2020-08-20","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}