{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,26]],"date-time":"2026-03-26T15:41:29Z","timestamp":1774539689205,"version":"3.50.1"},"reference-count":59,"publisher":"Association for Computing Machinery (ACM)","issue":"3","funder":[{"DOI":"10.13039\/501100018925","name":"111 Center","doi-asserted-by":"crossref","award":["B16037"],"award-info":[{"award-number":["B16037"]}],"id":[{"id":"10.13039\/501100018925","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Priv. Secur."],"published-print":{"date-parts":[[2025,8,31]]},"abstract":"<jats:p>\n            Network intrusion detection systems based on deep learning are gaining significant traction in cyber security due to their high prediction accuracy and strong adaptability to evolving cyber threats. However, a serious drawback is their vulnerability to evasion attacks that rely on adversarial examples. To provide robustness guarantees for deep neural networks against any possible perturbations, certified defenses against perturbations within a\n            <jats:italic toggle=\"yes\">\n              l\n              <jats:sub>p<\/jats:sub>\n            <\/jats:italic>\n            -bounded region around the input are being increasingly explored. Unfortunately, unlike existing image domain approaches that concentrate on homogeneous input feature spaces, the progress on certified defense for the network traffic domain, which is characterized by heterogeneous features, has been very limited. To address such a gap, we present the design and practicality of a novel framework, Multi-order Adaptive Randomized Smoothing (MARS), for certifying the robustness of network intrusion detectors based on deep neural networks. 
Experiments on various network intrusion detection systems show that MARS significantly improves the tightness of robustness certification (12.23% increase in l\n            <jats:sub>2<\/jats:sub>\n            certified radius), detection accuracy on evasion attack (7.17% improvement on\n            <jats:inline-formula content-type=\"math\/tex\">\n              <jats:tex-math notation=\"LaTeX\" version=\"MathJax\">\\(l_{\\infty }\\)<\/jats:tex-math>\n            <\/jats:inline-formula>\n            -PGD, 10.11% improvement on l\n            <jats:sub>1<\/jats:sub>\n            -EAD), and prediction accuracy on natural corruption (16.65% enhancement on latency, 18.23% enhancement on packet loss) compared to the SOTA method. We have also conducted an extensive analysis of the dimension-wise certified robustness of the network intrusion detector. The results indicate that the dimensional certified radii obtained using MARS reveal the robustness differences across feature dimensions, aligning with the empirical evaluation findings.\n          <\/jats:p>","DOI":"10.1145\/3715121","type":"journal-article","created":{"date-parts":[[2025,4,16]],"date-time":"2025-04-16T07:00:16Z","timestamp":1744786816000},"page":"1-33","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":3,"title":["Dimensional Robustness Certification for Deep Neural Networks in Network Intrusion Detection Systems"],"prefix":"10.1145","volume":"28","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-3705-7345","authenticated-orcid":false,"given":"Mengdie","family":"Huang","sequence":"first","affiliation":[{"name":"School of Cyber Engineering, Xidian University","place":["Xi'an, China"]},{"name":"Department of Computer Science, Purdue University","place":["West Lafayette, United States"]}]},
{"ORCID":"https:\/\/orcid.org\/0009-0004-5749-2058","authenticated-orcid":false,"given":"Yingjun","family":"Lin","sequence":"additional","affiliation":[{"name":"Department of Computer Science, Purdue University","place":["West Lafayette, United States"]}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5858-5070","authenticated-orcid":false,"given":"Xiaofeng","family":"Chen","sequence":"additional","affiliation":[{"name":"School of Cyber Engineering, Xidian University","place":["Xi'an, China"]}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4029-7051","authenticated-orcid":false,"given":"Elisa","family":"Bertino","sequence":"additional","affiliation":[{"name":"Department of Computer Sciences, Purdue University","place":["West Lafayette, United States"]}]}],"member":"320","published-online":{"date-parts":[[2025,8,23]]},"reference":[{"key":"e_1_3_1_2_2","first-page":"16085","volume-title":"Proceedings of the Annual Conference on Neural Information Processing Systems","author":"Bitterwolf Julian","year":"2020","unstructured":"Julian Bitterwolf, Alexander Meinke, and Matthias Hein. 2020. Certifiably adversarially robust detection of out-of-distribution data. In Proceedings of the Annual Conference on Neural Information Processing Systems. 16085\u201316095."},{"key":"e_1_3_1_3_2","first-page":"1","volume-title":"Proceedings of the Network and Distributed System Security Symposium","author":"Chang Jung-Woo","year":"2023","unstructured":"Jung-Woo Chang, Mojan Javaheripi, Seira Hidano, and Farinaz Koushanfar. 2023. RoVISQ: Reduction of video service quality via adversarial attacks on deep learning-based video compression. In Proceedings of the Network and Distributed System Security Symposium. 
1\u201318."},{"key":"e_1_3_1_4_2","first-page":"10","volume-title":"Proceedings of the AAAI Conference on Artificial Intelligence","author":"Chen Pin-Yu","year":"2018","unstructured":"Pin-Yu Chen, Yash Sharma, Huan Zhang, Jinfeng Yi, and Cho-Jui Hsieh. 2018. EAD: Elastic-net attacks to deep neural networks via adversarial examples. In Proceedings of the AAAI Conference on Artificial Intelligence. 10\u201317."},{"key":"e_1_3_1_5_2","doi-asserted-by":"crossref","first-page":"251","DOI":"10.1007\/978-3-319-68167-2_18","volume-title":"Proceedings of the International Symposium on Automated Technology for Verification and Analysis","author":"Cheng Chih-Hong","year":"2017","unstructured":"Chih-Hong Cheng, Georg N\u00fchrenberg, and Harald Ruess. 2017. Maximum resilience of artificial neural networks. In Proceedings of the International Symposium on Automated Technology for Verification and Analysis. 251\u2013268."},{"key":"e_1_3_1_6_2","first-page":"1310","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Cohen Jeremy","year":"2019","unstructured":"Jeremy Cohen, Elan Rosenfeld, and Zico Kolter. 2019. Certified adversarial robustness via randomized smoothing. In Proceedings of the International Conference on Machine Learning. 1310\u20131320."},{"key":"e_1_3_1_7_2","first-page":"1","volume-title":"Proceedings of the IEEE Conference on Computer Communications","author":"Diallo Alec F.","year":"2021","unstructured":"Alec F. Diallo and Paul Patras. 2021. Adaptive clustering-based malicious traffic classification at the network edge. In Proceedings of the IEEE Conference on Computer Communications. 1\u201310."},{"key":"e_1_3_1_8_2","first-page":"11427","volume-title":"Proceedings of the Annual Conference on Neural Information Processing Systems","author":"Fazlyab Mahyar","year":"2019","unstructured":"Mahyar Fazlyab, Alexander Robey, Hamed Hassani, Manfred Morari, and George Pappas. 2019. 
Efficient and accurate estimation of Lipschitz constants for deep neural networks. In Proceedings of the Annual Conference on Neural Information Processing Systems. 11427\u201311438."},{"key":"e_1_3_1_9_2","first-page":"3340","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Fischer Marc","year":"2021","unstructured":"Marc Fischer, Maximilian Baader, and Martin Vechev. 2021. Scalable certified segmentation via randomized smoothing. In Proceedings of the International Conference on Machine Learning. 3340\u20133351."},{"key":"e_1_3_1_10_2","first-page":"1","volume-title":"Proceedings of the International Conference on Learning Representations","author":"Fromherz Aymeric","year":"2020","unstructured":"Aymeric Fromherz, Klas Leino, Matt Fredrikson, Bryan Parno, and Corina Pasareanu. 2020. Fast geometric projections for local robustness certification. In Proceedings of the International Conference on Learning Representations. 1\u201315."},{"key":"e_1_3_1_11_2","first-page":"1","volume-title":"Proceedings of the International Conference on Learning Representations","author":"Goodfellow Ian J.","year":"2015","unstructured":"Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversarial examples. In Proceedings of the International Conference on Learning Representations. 1\u201311."},{"key":"e_1_3_1_12_2","unstructured":"Sven Gowal Krishnamurthy Dvijotham Robert Stanforth Rudy Bunel Chongli Qin Jonathan Uesato Relja Arandjelovic Timothy Mann and Pushmeet Kohli. 2018. On the effectiveness of interval bound propagation for training verifiably robust models. arXiv:1810.12715. Retrieved from https:\/\/arxiv.org\/abs\/1810.12715"},{"key":"e_1_3_1_13_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2019.00494"},{"key":"e_1_3_1_14_2","doi-asserted-by":"crossref","unstructured":"Dongqi Han Zhiliang Wang Ying Zhong Wenqi Chen Jiahai Yang Shuqiang Lu Xingang Shi and Xia Yin. 2021. 
Evaluating and improving adversarial robustness of machine learning-based network intrusion detectors. IEEE Journal on Selected Areas in Communications 39 8 (2021) 2632\u20132647.","DOI":"10.1109\/JSAC.2021.3087242"},{"key":"e_1_3_1_15_2","first-page":"8465","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Hao Zhongkai","year":"2022","unstructured":"Zhongkai Hao, Chengyang Ying, Yinpeng Dong, Hang Su, Jian Song, and Jun Zhu. 2022. Gsmooth: Certified robustness against semantic transformations via generalized randomized smoothing. In Proceedings of the International Conference on Machine Learning. 8465\u20138483."},{"key":"e_1_3_1_16_2","doi-asserted-by":"crossref","unstructured":"Mengdie Huang Hyunwoo Lee Ashish Kundu Xiaofeng Chen Anand Mudgerikar Ninghui Li and Elisa Bertino. 2024. ARIoTEDef: Adversarially robust IoT early defense system based on self-evolution against multi-step attacks. ACM Transactions on Internet of Things 5 3 (2024) 1\u201334.","DOI":"10.1145\/3660646"},{"key":"e_1_3_1_17_2","first-page":"767","volume-title":"Proceedings of the IEEE International Conference on Trust, Security and Privacy in Computing and Communications","author":"Huang Mengdie","year":"2024","unstructured":"Mengdie Huang, Yingjun Lin, Xiaofeng Chen, and Elisa Bertino. 2024. MARS: Robustness certification for deep network intrusion detectors via multi-order adaptive randomized smoothing. In Proceedings of the IEEE International Conference on Trust, Security and Privacy in Computing and Communications. 767\u2013774."},{"key":"e_1_3_1_18_2","doi-asserted-by":"crossref","first-page":"716","DOI":"10.1145\/3579856.3595786","volume-title":"Proceedings of the ACM Asia Conference on Computer and Communications Security","author":"Huang Mengdie","year":"2023","unstructured":"Mengdie Huang, Yi Xie, Xiaofeng Chen, Jin Li, Changyu Dong, Zheli Liu, and Willy Susilo. 2023. 
Boost off\/on-manifold adversarial robustness for deep learning with latent representation mixup. In Proceedings of the ACM Asia Conference on Computer and Communications Security. 716\u2013730."},{"key":"e_1_3_1_19_2","first-page":"1","volume-title":"Proceedings of the Network and Distributed System Security Symposium","author":"Jia Wei","year":"2022","unstructured":"Wei Jia, Zhaojun Lu, Haichun Zhang, Zhenglin Liu, Jie Wang, and Gang Qu. 2022. Fooling the eyes of autonomous vehicles: Robust physical adversarial examples against traffic sign recognition systems. In Proceedings of the Network and Distributed System Security Symposium. 1\u201317."},{"key":"e_1_3_1_20_2","volume-title":"Proceedings of the Annual Conference on Neural Information Processing Systems. 1\u201311","author":"Jordan Matt","year":"2019","unstructured":"Matt Jordan, Justin Lewis, and Alexandros G. Dimakis. 2019. Provable certificates for adversarial examples: Fitting a ball in the union of polytopes. In Proceedings of the Annual Conference on Neural Information Processing Systems. 1\u201311."},{"key":"e_1_3_1_21_2","first-page":"656","volume-title":"Proceedings of the IEEE Symposium on Security and Privacy","author":"Lecuyer Mathias","year":"2019","unstructured":"Mathias Lecuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel Hsu, and Suman Jana. 2019. Certified robustness to adversarial examples with differential privacy. In Proceedings of the IEEE Symposium on Security and Privacy. 656\u2013672."},{"key":"e_1_3_1_22_2","volume-title":"Proceedings of the Annual Conference on Neural Information Processing Systems. 4910\u20134921","author":"Lee Guang-He","year":"2019","unstructured":"Guang-He Lee, Yang Yuan, Shiyu Chang, and Tommi Jaakkola. 2019. Tight certificates of adversarial robustness for randomly smoothed classifiers. In Proceedings of the Annual Conference on Neural Information Processing Systems. 
4910\u20134921."},{"key":"e_1_3_1_23_2","first-page":"16891","volume-title":"Proceedings of the Annual Conference on Neural Information Processing Systems","author":"Lee Sungyoon","year":"2020","unstructured":"Sungyoon Lee, Jaewook Lee, and Saerom Park. 2020. Lipschitz-certifiable training with a tight outer bound. In Proceedings of the Annual Conference on Neural Information Processing Systems. 16891\u201316902."},{"key":"e_1_3_1_24_2","first-page":"1","volume-title":"Proceedings of the Network and Distributed System Security Symposium","author":"Li Jinfeng","year":"2019","unstructured":"Jinfeng Li, Shouling Ji, Tianyu Du, Bo Li, and Ting Wang. 2019. TEXTBUGGER: Generating adversarial text against real-world applications. In Proceedings of the Network and Distributed System Security Symposium. 1\u201315."},{"key":"e_1_3_1_25_2","first-page":"94","volume-title":"Proceedings of the IEEE Symposium on Security and Privacy","author":"Li Linyi","year":"2022","unstructured":"Linyi Li, Tao Xie, and Bo Li. 2022. SoK: Certified robustness for deep neural networks. In Proceedings of the IEEE Symposium on Security and Privacy. 94\u2013115."},{"key":"e_1_3_1_26_2","first-page":"1","volume-title":"Proceedings of the Network and Distributed System Security Symposium","author":"Li Shasha","year":"2019","unstructured":"Shasha Li, Ajaya Neupane, Sujoy Paul, Chengyu Song, Srikanth V. Krishnamurthy, Amit K. Roy Chowdhury, and Ananthram Swami. 2019. Stealthy adversarial perturbations against real-time video classification systems. In Proceedings of the Network and Distributed System Security Symposium. 1\u201315."},{"key":"e_1_3_1_27_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00191"},{"key":"e_1_3_1_28_2","first-page":"6072","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Lim Cong Han","year":"2020","unstructured":"Cong Han Lim, Raquel Urtasun, and Ersin Yumer. 2020. Hierarchical verification for adversarial robustness. 
In Proceedings of the International Conference on Machine Learning. 6072\u20136082."},{"key":"e_1_3_1_29_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-01234-2_23"},{"key":"e_1_3_1_30_2","first-page":"4308","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"Lyu Zhaoyang","year":"2021","unstructured":"Zhaoyang Lyu, Minghao Guo, Tong Wu, Guodong Xu, Kehuan Zhang, and Dahua Lin. 2021. Towards evaluating and training verifiably robust neural networks. In Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition. 4308\u20134317."},{"key":"e_1_3_1_31_2","first-page":"1","volume-title":"Proceedings of the International Conference on Learning Representations","author":"Madry Aleksander","year":"2018","unstructured":"Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2018. Towards deep learning models resistant to adversarial attacks. In Proceedings of the International Conference on Learning Representations. 1\u201327."},{"key":"e_1_3_1_32_2","volume-title":"Proceedings of the Annual Conference on Neural Information Processing Systems. 4501\u20134511","author":"Mohapatra Jeet","year":"2020","unstructured":"Jeet Mohapatra, Ching-Yun Ko, Tsui-Wei Weng, Pin-Yu Chen, Sijia Liu, and Luca Daniel. 2020. Higher-order certification for randomized smoothing. In Proceedings of the Annual Conference on Neural Information Processing Systems. 4501\u20134511."},{"key":"e_1_3_1_33_2","first-page":"5145","volume-title":"Proceedings of the Annual Meeting of the Association for Computational Linguistics","author":"Moon Han Cheol","year":"2023","unstructured":"Han Cheol Moon, Shafiq Joty, Ruochen Zhao, Megh Thakkar, and Chi Xu. 2023. Randomized smoothing with masked inference for adversarially robust text classifications. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. 
5145\u20135165."},{"key":"e_1_3_1_34_2","first-page":"2705","volume-title":"Proceedings of the USENIX Security Symposium","author":"Nasr Milad","year":"2021","unstructured":"Milad Nasr, Alireza Bahramali, and Amir Houmansadr. 2021. Defeating DNN-based traffic analysis systems in real-time with blind adversarial perturbations. In Proceedings of the USENIX Security Symposium. 2705\u20132722."},{"key":"e_1_3_1_35_2","first-page":"4970","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Pang Tianyu","year":"2019","unstructured":"Tianyu Pang, Kun Xu, Chao Du, Ning Chen, and Jun Zhu. 2019. Improving adversarial robustness via promoting ensemble diversity. In Proceedings of the International Conference on Machine Learning. 4970\u20134979."},{"key":"e_1_3_1_36_2","first-page":"7683","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Phan Hai","year":"2020","unstructured":"Hai Phan, My T. Thai, Han Hu, Ruoming Jin, Tong Sun, and Dejing Dou. 2020. Scalable differential privacy with certified robustness in adversarial learning. In Proceedings of the International Conference on Machine Learning. 7683\u20137694."},{"key":"e_1_3_1_37_2","first-page":"1","volume-title":"Proceedings of the International Conference on Learning Representations","author":"Raghunathan Aditi","year":"2018","unstructured":"Aditi Raghunathan, Jacob Steinhardt, and Percy Liang. 2018. Certified defenses against adversarial examples. In Proceedings of the International Conference on Learning Representations. 1\u201315."},{"key":"e_1_3_1_38_2","first-page":"10900","volume-title":"Proceedings of the Annual Conference on Neural Information Processing Systems","author":"Raghunathan Aditi","year":"2018","unstructured":"Aditi Raghunathan, Jacob Steinhardt, and Percy S. Liang. 2018. Semidefinite relaxations for certifying robustness to adversarial examples. 
In Proceedings of the Annual Conference on Neural Information Processing Systems. 10900\u201310910."},{"key":"e_1_3_1_39_2","first-page":"9835","volume-title":"Proceedings of the Annual Conference on Neural Information Processing Systems","author":"Salman Hadi","year":"2019","unstructured":"Hadi Salman, Greg Yang, Huan Zhang, Cho-Jui Hsieh, and Pengchuan Zhang. 2019. A convex relaxation barrier to tight robustness verification of neural networks. In Proceedings of the Annual Conference on Neural Information Processing Systems. 9835\u20139846."},{"key":"e_1_3_1_40_2","first-page":"1","volume-title":"Proceedings of the Network and Distributed System Security Symposium","author":"Schonherr Lea","year":"2019","unstructured":"Lea Schonherr, Katharina Kohls, Steffen Zeiler, Thorsten Holz, and Dorothea Kolossa. 2019. Adversarial attacks against automatic speech recognition systems via psychoacoustic hiding. In Proceedings of the Network and Distributed System Security Symposium. 1\u201315."},{"key":"e_1_3_1_41_2","first-page":"18335","volume-title":"Proceedings of the Annual Conference on Neural Information Processing Systems","author":"Shi Zhouxing","year":"2021","unstructured":"Zhouxing Shi, Yihan Wang, Huan Zhang, Jinfeng Yi, and Cho-Jui Hsieh. 2021. Fast certified robust training with short warmup. In Proceedings of the Annual Conference on Neural Information Processing Systems. 18335\u201318349."},{"key":"e_1_3_1_42_2","first-page":"1","volume-title":"Proceedings of the International Conference on Learning Representations","author":"Tjeng Vincent","year":"2019","unstructured":"Vincent Tjeng, Kai Y. Xiao, and Russ Tedrake. 2019. Evaluating robustness of neural networks with mixed integer programming. In Proceedings of the International Conference on Learning Representations. 
1\u201321."},{"key":"e_1_3_1_43_2","first-page":"1633","volume-title":"Proceedings of the Annual Conference on Neural Information Processing Systems","author":"Tramer Florian","year":"2020","unstructured":"Florian Tramer, Nicholas Carlini, Wieland Brendel, and Aleksander Madry. 2020. On adaptive attacks to adversarial example defenses. In Proceedings of the Annual Conference on Neural Information Processing Systems. 1633\u20131645."},{"key":"e_1_3_1_44_2","unstructured":"CIC UNB. 2018. A Realistic Cyber Defense Dataset (CSE-CIC-IDS2018). Retrieved January 31 2018 from https:\/\/www.unb.ca\/cic\/datasets\/ids-2018"},{"key":"e_1_3_1_45_2","doi-asserted-by":"crossref","unstructured":"Pauli Virtanen Ralf Gommers Travis E. Oliphant Matt Haberland Tyler Reddy David Cournapeau Evgeni Burovski Pearu Peterson Warren Weckesser Jonathan Bright et\u00a0al. 2020. SciPy 1.0: Fundamental algorithms for scientific computing in Python. Nature Methods 17 3 (2020) 261\u2013272.","DOI":"10.1038\/s41592-020-0772-5"},{"key":"e_1_3_1_46_2","first-page":"22361","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Vor\u00e1\u010dek V\u00e1clav","year":"2022","unstructured":"V\u00e1clav Vor\u00e1\u010dek and Matthias Hein. 2022. Provably adversarially robust nearest prototype classifiers. In Proceedings of the International Conference on Machine Learning. 22361\u201322383."},{"key":"e_1_3_1_47_2","doi-asserted-by":"crossref","first-page":"1645","DOI":"10.1145\/3447548.3467295","volume-title":"Proceedings of the ACM SIGKDD Conference on Knowledge Discovery and Data Mining","author":"Wang Binghui","year":"2021","unstructured":"Binghui Wang, Jinyuan Jia, Xiaoyu Cao, and Neil Zhenqiang Gong. 2021. Certified robustness of graph neural networks against adversarial structural perturbation. In Proceedings of the ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 
1645\u20131653."},{"key":"e_1_3_1_48_2","first-page":"1","volume-title":"Proceedings of the Network and Distributed System Security Symposium","author":"Wang Kai","year":"2023","unstructured":"Kai Wang, Zhiliang Wang, Dongqi Han, Wenqi Chen, Jiahai Yang, Xingang Shi, and Xia Yin. 2023. BARS: Local robustness certification for deep learning based traffic analysis systems. In Proceedings of the Network and Distributed System Security Symposium. 1\u201318."},{"key":"e_1_3_1_49_2","volume-title":"Proceedings of the IEEE Conference on Computer Communications. 1\u201310","author":"Wang Ning","year":"2021","unstructured":"Ning Wang, Yimin Chen, Yang Hu, Wenjing Lou, and Y. Thomas Hou. 2021. MANDA: On adversarial example detection for network intrusion detection system. In Proceedings of the IEEE Conference on Computer Communications. 1\u201310."},{"key":"e_1_3_1_50_2","first-page":"29909","volume-title":"Proceedings of the Annual Conference on Neural Information Processing Systems","author":"Wang Shiqi","year":"2021","unstructured":"Shiqi Wang, Huan Zhang, Kaidi Xu, Xue Lin, Suman Jana, Cho-Jui Hsieh, and J. Zico Kolter. 2021. Beta-crown: Efficient bound propagation with per-neuron split constraints for neural network robustness verification. In Proceedings of the Annual Conference on Neural Information Processing Systems. 29909\u201329921."},{"key":"e_1_3_1_51_2","first-page":"5286","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Wong Eric","year":"2018","unstructured":"Eric Wong and Zico Kolter. 2018. Provable defenses against adversarial examples via the convex outer adversarial polytope. In Proceedings of the International Conference on Machine Learning. 5286\u20135295."},{"key":"e_1_3_1_52_2","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition. 501\u2013509","author":"Xie Cihang","year":"2019","unstructured":"Cihang Xie, Yuxin Wu, Laurens van der Maaten, Alan L. 
Yuille, and Kaiming He. 2019. Feature denoising for improving adversarial robustness. In Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition. 501\u2013509."},{"key":"e_1_3_1_53_2","first-page":"1129","volume-title":"Proceedings of the Annual Conference on Neural Information Processing Systems","author":"Xu Kaidi","year":"2020","unstructured":"Kaidi Xu, Zhouxing Shi, Huan Zhang, Yihan Wang, Kai-Wei Chang, Minlie Huang, Bhavya Kailkhura, Xue Lin, and Cho-Jui Hsieh. 2020. Automatic perturbation analysis for scalable certified robustness and beyond. In Proceedings of the Annual Conference on Neural Information Processing Systems. 1129\u20131141."},{"key":"e_1_3_1_54_2","unstructured":"Kaidi Xu Huan Zhang Shiqi Wang Yihan Wang Suman Jana Xue Lin and Cho-Jui Hsieh. 2021. Fast and complete: Enabling complete neural network verification with rapid and massively parallel incomplete verifiers. In Proceedings of the International Conference on Learning Representations. 1\u201315."},{"key":"e_1_3_1_55_2","first-page":"2327","volume-title":"Proceedings of the USENIX Security Symposium","author":"Yang Limin","year":"2021","unstructured":"Limin Yang, Wenbo Guo, Qingying Hao, Arridhana Ciptadi, Ali Ahmadzadeh, Xinyu Xing, and Gang Wang. 2021. CADE: Detecting and explaining concept drift samples for security applications. In Proceedings of the USENIX Security Symposium. 2327\u20132344."},{"key":"e_1_3_1_56_2","first-page":"1","volume-title":"Proceedings of the Network and Distributed System Security Symposium","author":"Yang Yijun","year":"2022","unstructured":"Yijun Yang, Ruiyuan Gao, Yu Li, Qiuxia Lai, and Qiang Xu. 2022. What you see is not what the network infers: Detecting adversarial examples based on semantic contradiction. In Proceedings of the Network and Distributed System Security Symposium. 1\u201318."},{"key":"e_1_3_1_57_2","doi-asserted-by":"crossref","unstructured":"Chaoyun Zhang Xavier Costa-Perez and Paul Patras. 2022. 
Adversarial attacks against deep learning-based network intrusion detection systems and defense mechanisms. IEEE\/ACM Transactions on Networking 30 3 (2022) 1294\u20131311.","DOI":"10.1109\/TNET.2021.3137084"},{"key":"e_1_3_1_58_2","first-page":"1","volume-title":"Proceedings of the International Conference on Learning Representations","author":"Zhang Huan","year":"2019","unstructured":"Huan Zhang, Hongge Chen, Chaowei Xiao, Sven Gowal, Robert Stanforth, Bo Li, Duane Boning, and Cho-Jui Hsieh. 2019. Towards stable and efficient training of verifiably robust neural networks. In Proceedings of the International Conference on Learning Representations. 1\u201325."},{"key":"e_1_3_1_59_2","first-page":"1656","volume-title":"Proceedings of the Annual Conference on Neural Information Processing Systems","author":"Zhang Huan","year":"2022","unstructured":"Huan Zhang, Shiqi Wang, Kaidi Xu, Linyi Li, Bo Li, Suman Jana, Cho-Jui Hsieh, and J. Zico Kolter. 2022. General cutting planes for bound-propagation-based neural network verification. In Proceedings of the Annual Conference on Neural Information Processing Systems. 1656\u20131670."},{"key":"e_1_3_1_60_2","first-page":"4944","volume-title":"Proceedings of the Annual Conference on Neural Information Processing Systems","author":"Zhang Huan","year":"2018","unstructured":"Huan Zhang, Tsui-Wei Weng, Pin-Yu Chen, Cho-Jui Hsieh, and Luca Daniel. 2018. Efficient neural network robustness certification with general activation functions. In Proceedings of the Annual Conference on Neural Information Processing Systems. 
4944\u20134953."}],"container-title":["ACM Transactions on Privacy and Security"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3715121","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,8,23]],"date-time":"2025-08-23T14:47:34Z","timestamp":1755960454000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3715121"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,8,23]]},"references-count":59,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2025,8,31]]}},"alternative-id":["10.1145\/3715121"],"URL":"https:\/\/doi.org\/10.1145\/3715121","relation":{},"ISSN":["2471-2566","2471-2574"],"issn-type":[{"value":"2471-2566","type":"print"},{"value":"2471-2574","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,8,23]]},"assertion":[{"value":"2024-12-07","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-01-02","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-08-23","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}