{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,11,28]],"date-time":"2025-11-28T12:33:08Z","timestamp":1764333188075,"version":"3.41.0"},"reference-count":26,"publisher":"Association for Computing Machinery (ACM)","issue":"3","license":[{"start":{"date-parts":[[2022,7,31]],"date-time":"2022-07-31T00:00:00Z","timestamp":1659225600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100001809","name":"NSFC","doi-asserted-by":"crossref","award":["62074100"],"award-info":[{"award-number":["62074100"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["J. Emerg. Technol. Comput. Syst."],"published-print":{"date-parts":[[2022,7,31]]},"abstract":"<jats:p>Deep Neural Networks (DNNs) have been widely used in variety of fields with great success. However, recent research indicates that DNNs are susceptible to adversarial attacks, which can easily fool the well-trained DNN-based classifiers without being detected by human eyes. In this article, we propose to integrate the target DNN model with our robust bit-plane classifiers to defend against adversarial attacks. The bit-plane classifiers take bit-planes of input images for convolution, which is motivated by our observation that successful attacks aim to generate imperceptible perturbations, and they mainly affect the low-order bits of pixels in clean images when adding the perturbations. We also propose two metrics, bit-plane perturbation rate and channel modification rate, to further explain the robustness of bit-plane classifiers. We discuss potential adaptive attack and find that our defense can be effective as long as the adversarial examples are qualified. We conduct experiments on dataset CIFAR-10 and GTSRB under white-box attack and black-box attack. The results show that our defense method can effectively increase the average model accuracy from 16.23% to 83.53% under white-box attack and from 40.65% to 88.14% under black-box attack on CIFAR-10 without sacrificing the accuracy of clean images.<\/jats:p>","DOI":"10.1145\/3510855","type":"journal-article","created":{"date-parts":[[2022,7,21]],"date-time":"2022-07-21T12:17:11Z","timestamp":1658405831000},"page":"1-17","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":2,"title":["Defending against Adversarial Attacks in Deep Learning with Robust Auxiliary Classifiers Utilizing Bit-plane Slicing"],"prefix":"10.1145","volume":"18","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-9572-4738","authenticated-orcid":false,"given":"Yuan","family":"Liu","sequence":"first","affiliation":[{"name":"ShanghaiTech University, Shanghai, China"}]},{"given":"Jinxin","family":"Dong","sequence":"additional","affiliation":[{"name":"ShanghaiTech University, Shanghai, China"}]},{"given":"Pingqiang","family":"Zhou","sequence":"additional","affiliation":[{"name":"ShanghaiTech University, Shanghai, China"}]}],"member":"320","published-online":{"date-parts":[[2022,8,2]]},"reference":[{"key":"e_1_3_1_2_2","article-title":"Dimensionality reduction as a defense against evasion attacks on machine learning classifiers","volume":"1704","author":"Bhagoji Arjun Nitin","year":"2017","unstructured":"Arjun Nitin Bhagoji, Daniel Cullina, and Prateek Mittal. 2017. 
Dimensionality reduction as a defense against evasion attacks on machine learning classifiers. CoRR abs\/1704.02654 (2017).","journal-title":"CoRR"},{"key":"e_1_3_1_3_2","article-title":"Decision-based adversarial attacks: Reliable attacks against black-box machine learning models","volume":"1712","author":"Brendel Wieland","year":"2017","unstructured":"Wieland Brendel, Jonas Rauber, and Matthias Bethge. 2017. Decision-based adversarial attacks: Reliable attacks against black-box machine learning models. CoRR abs\/1712.04248 (2017).","journal-title":"CoRR"},{"key":"e_1_3_1_4_2","doi-asserted-by":"publisher","DOI":"10.1109\/SP.2017.49"},{"key":"e_1_3_1_5_2","doi-asserted-by":"publisher","DOI":"10.1145\/3128572.3140448"},{"key":"e_1_3_1_6_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2019.00059"},{"key":"e_1_3_1_7_2","unstructured":"R. C. Gonzalez and R. E. Woods. 2017. Digital Image Processing 4th Edition . Pearson."},{"key":"e_1_3_1_8_2","volume-title":"Proceedings of the 3rd International Conference on Learning Representations","author":"Goodfellow Ian J.","year":"2015","unstructured":"Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversarial examples. In Proceedings of the 3rd International Conference on Learning Representations."},{"key":"e_1_3_1_9_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP.2013.6638947"},{"key":"e_1_3_1_10_2","first-page":"2484","volume-title":"Proceedings of the 36th International Conference on Machine Learning","author":"Guo Chuan","year":"2019","unstructured":"Chuan Guo, Jacob R. Gardner, Yurong You, Andrew Gordon Wilson, and Kilian Q. Weinberger. 2019. Simple black-box adversarial attacks. In Proceedings of the 36th International Conference on Machine Learning. 2484\u20132493."},{"key":"e_1_3_1_11_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00065"},{"key":"e_1_3_1_12_2","volume-title":"Proceedings of the 3rd International Conference on Learning Representations","author":"Kingma Diederik P.","year":"2015","unstructured":"Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations."},{"key":"e_1_3_1_14_2","volume-title":"Proceedings of the 5th International Conference on Learning Representations","author":"Kurakin Alexey","year":"2017","unstructured":"Alexey Kurakin, Ian J. Goodfellow, and Samy Bengio. 2017. Adversarial machine learning at scale. In Proceedings of the 5th International Conference on Learning Representations."},{"key":"e_1_3_1_15_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.patcog.2014.03.021"},{"key":"e_1_3_1_16_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00095"},{"key":"e_1_3_1_17_2","volume-title":"Proceedings of the 6th International Conference on Learning Representations","author":"Madry Aleksander","year":"2018","unstructured":"Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2018. Towards deep learning models resistant to adversarial attacks. 
In Proceedings of the 6th International Conference on Learning Representations."},{"key":"e_1_3_1_18_2","doi-asserted-by":"publisher","DOI":"10.1145\/3133956.3134057"},{"key":"e_1_3_1_19_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.282"},{"key":"e_1_3_1_20_2","doi-asserted-by":"publisher","DOI":"10.1145\/3052973.3053009"},{"key":"e_1_3_1_21_2","doi-asserted-by":"publisher","DOI":"10.1109\/SP.2016.41"},{"key":"e_1_3_1_22_2","article-title":"Transferability in machine learning: From phenomena to black-box attacks using adversarial samples","author":"Papernot Nicolas","year":"2016","unstructured":"Nicolas Papernot, Patrick D. McDaniel, and Ian J. Goodfellow. 2016. Transferability in machine learning: From phenomena to black-box attacks using adversarial samples. CoRR (2016). http:\/\/arxiv.org\/abs\/1605.07277.","journal-title":"CoRR"},{"key":"e_1_3_1_23_2","doi-asserted-by":"publisher","DOI":"10.1109\/ASP-DAC47756.2020.9045107"},{"key":"e_1_3_1_24_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.neunet.2012.02.016"},{"key":"e_1_3_1_25_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2015.7298594"},{"key":"e_1_3_1_26_2","volume-title":"Proceedings of the 2nd International Conference on Learning Representations","author":"Szegedy Christian","year":"2014","unstructured":"Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. 2014. Intriguing properties of neural networks. In Proceedings of the 2nd International Conference on Learning Representations."},{"key":"e_1_3_1_27_2","doi-asserted-by":"publisher","DOI":"10.14722\/ndss.2018.23198"},{"key":"e_1_3_1_28_2","doi-asserted-by":"publisher","DOI":"10.1145\/3287624.3288750"}],"container-title":["ACM Journal on Emerging Technologies in Computing Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3510855","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3510855","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T19:02:12Z","timestamp":1750186932000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3510855"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,7,31]]},"references-count":26,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2022,7,31]]}},"alternative-id":["10.1145\/3510855"],"URL":"https:\/\/doi.org\/10.1145\/3510855","relation":{},"ISSN":["1550-4832","1550-4840"],"issn-type":[{"type":"print","value":"1550-4832"},{"type":"electronic","value":"1550-4840"}],"subject":[],"published":{"date-parts":[[2022,7,31]]},"assertion":[{"value":"2021-05-01","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2022-01-01","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2022-08-02","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}
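The abstract's key observation, that imperceptible (small L-infinity) adversarial perturbations mostly flip the low-order bit-planes of pixels and rarely the high-order ones, can be illustrated with a minimal NumPy sketch. The image size and the ±7 perturbation bound below are illustrative assumptions, not the paper's experimental setup or the authors' code:

```python
import numpy as np

def bit_planes(img):
    """Slice an 8-bit image into its 8 bit-planes (index 0 = LSB, index 7 = MSB)."""
    return [(img >> b) & 1 for b in range(8)]

rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)  # stand-in "clean image"
delta = rng.integers(-7, 8, size=(32, 32))                   # small L-infinity perturbation
adv = np.clip(clean.astype(int) + delta, 0, 255).astype(np.uint8)

# Measure how often each bit-plane differs between the clean and perturbed image:
# low-order planes flip frequently; high-order planes flip only via rare carries.
for b, (pc, pa) in enumerate(zip(bit_planes(clean), bit_planes(adv))):
    print(f"bit-plane {b}: {np.mean(pc != pa):.1%} of bits flipped")
```

A defense along the lines sketched in the abstract would then feed the comparatively stable high-order planes to auxiliary classifiers; the actual architecture and metrics (bit-plane perturbation rate, channel modification rate) are defined in the paper itself.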