{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T04:25:32Z","timestamp":1750220732736,"version":"3.41.0"},
"publisher-location":"New York, NY, USA","reference-count":21,"publisher":"ACM",
"license":[{"start":{"date-parts":[[2020,9,7]],"date-time":"2020-09-07T00:00:00Z","timestamp":1599436800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],
"funder":[{"DOI":"10.13039\/501100002920","name":"Research Grants Council, University Grants Committee","doi-asserted-by":"publisher","award":["14205018"],"award-info":[{"award-number":["14205018"]}],"id":[{"id":"10.13039\/501100002920","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100012659","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["61532017"],"award-info":[{"award-number":["61532017"]}],"id":[{"id":"10.13039\/501100012659","id-type":"DOI","asserted-by":"publisher"}]}],
"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2020,9,7]]},
"DOI":"10.1145\/3386263.3407601","type":"proceedings-article","created":{"date-parts":[[2020,9,4]],"date-time":"2020-09-04T21:34:20Z","timestamp":1599255260000},"page":"543-548","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":0,
"title":["On Configurable Defense against Adversarial Example Attacks"],"prefix":"10.1145",
"author":[{"given":"Bo","family":"Luo","sequence":"first","affiliation":[{"name":"The Chinese University of Hong Kong, Hong Kong, Hong Kong"}]},{"given":"Min","family":"Li","sequence":"additional","affiliation":[{"name":"The Chinese University of Hong Kong, Hong Kong, Hong Kong"}]},{"given":"Yu","family":"Li","sequence":"additional","affiliation":[{"name":"The Chinese University of Hong Kong, Hong Kong, Hong Kong"}]},{"given":"Qiang","family":"Xu","sequence":"additional","affiliation":[{"name":"The Chinese University of Hong Kong, Hong Kong, Hong Kong"}]}],
"member":"320","published-online":{"date-parts":[[2020,9,7]]},
"reference":[{"key":"e_1_3_2_2_1_1","doi-asserted-by":"publisher","DOI":"10.1145\/3128572.3140444"},
{"key":"e_1_3_2_2_2_1","volume-title":"Towards evaluating the robustness of neural networks. Security and Privacy (S&P)","author":"Carlini Nicholas","year":"2017","unstructured":"Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. Security and Privacy (S&P), 2017."},
{"key":"e_1_3_2_2_3_1","volume-title":"AAAI","author":"Chen Pin-Yu","year":"2018","unstructured":"Pin-Yu Chen, Yash Sharma, Huan Zhang, Jinfeng Yi, and Cho-Jui Hsieh. Ead: elastic-net attacks to deep neural networks via adversarial examples. AAAI, 2018."},
{"key":"e_1_3_2_2_4_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2019.00482"},
{"key":"e_1_3_2_2_5_1","volume-title":"ICLR","author":"Goodfellow Ian J","year":"2015","unstructured":"Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. ICLR, 2015."},
{"key":"e_1_3_2_2_6_1","volume-title":"ICLR","author":"Gu Shixiang","year":"2015","unstructured":"Shixiang Gu and Luca Rigazio. Towards deep neural network architectures robust to adversarial examples. ICLR, 2015."},
{"key":"e_1_3_2_2_7_1","volume-title":"ICLR","author":"He Warren","year":"2018","unstructured":"Warren He, Bo Li, and Dawn Song. Decision boundary analysis of adversarial examples. ICLR, 2018."},
{"key":"e_1_3_2_2_8_1","volume-title":"ICLR","author":"Kurakin Alexey","year":"2017","unstructured":"Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial machine learning at scale. ICLR, 2017."},
{"key":"e_1_3_2_2_9_1","volume-title":"Deepsec: A uniform platform for security analysis of deep learning model. Security and Privacy (S&P)","author":"Ling Xiang","year":"2019","unstructured":"Xiang Ling, Shouling Ji, Jiaxu Zou, Jiannan Wang, Chunming Wu, Bo Li, and Ting Wang. Deepsec: A uniform platform for security analysis of deep learning model. Security and Privacy (S&P), 2019."},
{"key":"e_1_3_2_2_10_1","volume-title":"ICLR","author":"Ma Xingjun","year":"2018","unstructured":"Xingjun Ma, Bo Li, Yisen Wang, Sarah M Erfani, Sudanthi Wijewickrema, Michael E Houle, Grant Schoenebeck, Dawn Song, and James Bailey. Characterizing adversarial subspaces using local intrinsic dimensionality. ICLR, 2018."},
{"key":"e_1_3_2_2_11_1","volume-title":"ICLR","author":"Madry Aleksander","year":"2018","unstructured":"Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. ICLR, 2018."},
{"key":"e_1_3_2_2_12_1","volume-title":"CCS","author":"Meng Dongyu","year":"2017","unstructured":"Dongyu Meng and Hao Chen. Magnet: a two-pronged defense against adversarial examples. CCS, 2017."},
{"key":"e_1_3_2_2_13_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.17"},
{"key":"e_1_3_2_2_14_1","volume-title":"The limitations of deep learning in adversarial settings. Security and Privacy (S&P)","author":"Papernot Nicolas","year":"2016","unstructured":"Nicolas Papernot, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z Berkay Celik, and Ananthram Swami. The limitations of deep learning in adversarial settings. Security and Privacy (S&P), 2016."},
{"key":"e_1_3_2_2_15_1","volume-title":"Distillation as a defense to adversarial perturbations against deep neural networks. Security and Privacy (S&P)","author":"Papernot Nicolas","year":"2016","unstructured":"Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, and Ananthram Swami. Distillation as a defense to adversarial perturbations against deep neural networks. Security and Privacy (S&P), 2016."},
{"key":"e_1_3_2_2_16_1","volume-title":"AAAI","author":"Ross Andrew Slavin","year":"2018","unstructured":"Andrew Slavin Ross and Finale Doshi-Velez. Improving the adversarial robustness and interpretability of deep neural networks by regularizing their input gradients. AAAI, 2018."},
{"key":"e_1_3_2_2_17_1","volume-title":"ICLR","author":"Tram\u00e8r Florian","year":"2018","unstructured":"Florian Tram\u00e8r, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. Ensemble adversarial training: Attacks and defenses. ICLR, 2018."},
{"key":"e_1_3_2_2_18_1","volume-title":"NDSS","author":"Xu Weilin","year":"2018","unstructured":"Weilin Xu, David Evans, and Yanjun Qi. Feature squeezing: Detecting adversarial examples in deep neural networks. NDSS, 2018."},
{"key":"e_1_3_2_2_19_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICPR.2018.8546189"},
{"key":"e_1_3_2_2_20_1","volume-title":"Adversarial examples: Attacks and defenses for deep learning","author":"Yuan Xiaoyong","year":"2019","unstructured":"Xiaoyong Yuan, Pan He, Qile Zhu, and Xiaolin Li. Adversarial examples: Attacks and defenses for deep learning. IEEE transactions on neural networks and learning systems, 2019."},
{"key":"e_1_3_2_2_21_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00907"}],
"event":{"name":"GLSVLSI '20: Great Lakes Symposium on VLSI 2020","acronym":"GLSVLSI '20","location":"Virtual Event China"},
"container-title":["Proceedings of the 2020 on Great Lakes Symposium on VLSI"],"original-title":[],
"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3386263.3407601","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3386263.3407601","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],
"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T22:38:25Z","timestamp":1750199905000},"score":1,
"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3386263.3407601"}},"subtitle":[],"short-title":[],
"issued":{"date-parts":[[2020,9,7]]},"references-count":21,"alternative-id":["10.1145\/3386263.3407601","10.1145\/3386263"],
"URL":"https:\/\/doi.org\/10.1145\/3386263.3407601","relation":{},"subject":[],"published":{"date-parts":[[2020,9,7]]},
"assertion":[{"value":"2020-09-07","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}