{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,11,6]],"date-time":"2025-11-06T12:31:37Z","timestamp":1762432297885,"version":"3.41.0"},"publisher-location":"New York, NY, USA","reference-count":115,"publisher":"ACM","license":[{"start":{"date-parts":[[2022,11,7]],"date-time":"2022-11-07T00:00:00Z","timestamp":1667779200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"National Science Foundation","award":["IIS-2014552 DGE-1565570 DGE-1922649 CNS-1747751"],"award-info":[{"award-number":["IIS-2014552 DGE-1565570 DGE-1922649 CNS-1747751"]}]},{"name":"Ripple University Blockchain Research Initiative"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2022,11,7]]},"DOI":"10.1145\/3548606.3560678","type":"proceedings-article","created":{"date-parts":[[2022,11,7]],"date-time":"2022-11-07T11:41:28Z","timestamp":1667821288000},"page":"2129-2143","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":12,"title":["LoneNeuron: A Highly-Effective Feature-Domain Neural Trojan Using Invisible and Polymorphic Watermarks"],"prefix":"10.1145","author":[{"given":"Zeyan","family":"Liu","sequence":"first","affiliation":[{"name":"The University of Kansas, Lawrence, KS, USA"}]},{"given":"Fengjun","family":"Li","sequence":"additional","affiliation":[{"name":"The University of Kansas, Lawrence, KS, USA"}]},{"given":"Zhu","family":"Li","sequence":"additional","affiliation":[{"name":"University of Missouri-Kansas City, Kansas City, MO, USA"}]},{"given":"Bo","family":"Luo","sequence":"additional","affiliation":[{"name":"The University of Kansas, Lawrence, KS, USA"}]}],"member":"320","published-online":{"date-parts":[[2022,11,7]]},"reference":[{"volume-title":"Acoustical and environmental robustness in automatic speech recognition","author":"Acero Alex","key":"e_1_3_2_1_1_1","unstructured":"Alex Acero . 1992. Acoustical and environmental robustness in automatic speech recognition . Springer . Alex Acero. 1992. Acoustical and environmental robustness in automatic speech recognition. Springer."},{"key":"e_1_3_2_1_2_1","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2018.2807385"},{"key":"e_1_3_2_1_3_1","unstructured":"Dario Amodei Sundaram Ananthanarayanan etal 2016. Deep speech 2: End-to-end speech recognition in english and mandarin. In ICML.  Dario Amodei Sundaram Ananthanarayanan et al. 2016. Deep speech 2: End-to-end speech recognition in english and mandarin. In ICML."},{"key":"e_1_3_2_1_4_1","volume-title":"Blind backdoors in deep learning models. USENIX Security","author":"Bagdasaryan Eugene","year":"2021","unstructured":"Eugene Bagdasaryan and Vitaly Shmatikov . 2021. Blind backdoors in deep learning models. USENIX Security ( 2021 ). Eugene Bagdasaryan and Vitaly Shmatikov. 2021. Blind backdoors in deep learning models. USENIX Security (2021)."},{"volume-title":"A new backdoor attack in CNNs by training set corruption without label poisoning","author":"Barni Mauro","key":"e_1_3_2_1_5_1","unstructured":"Mauro Barni , Kassem Kallas , and Benedetta Tondi . 2019. A new backdoor attack in CNNs by training set corruption without label poisoning . In IEEE ICIP. Mauro Barni, Kassem Kallas, and Benedetta Tondi. 2019. A new backdoor attack in CNNs by training set corruption without label poisoning. 
In IEEE ICIP."},{"key":"e_1_3_2_1_6_1","doi-asserted-by":"crossref","unstructured":"Marco Barreno Blaine Nelson Russell Sears Anthony D Joseph and J Doug Tygar. 2006. Can machine learning be secure?. In AsiaCCS.  Marco Barreno Blaine Nelson Russell Sears Anthony D Joseph and J Doug Tygar. 2006. Can machine learning be secure?. In AsiaCCS.","DOI":"10.1145\/1128817.1128824"},{"key":"e_1_3_2_1_7_1","doi-asserted-by":"crossref","unstructured":"Lawrence E Bassham III Andrew L Rukhin Juan Soto James R Nechvatal Miles E Smid Elaine B Barker Stefan D Leigh Mark Levenson Mark Vangel David L Banks etal 2010. A statistical test suite for random and pseudorandom number generators for cryptographic applications Sp 800--22 rev. 1a. National Institute of Standards & Technology.  Lawrence E Bassham III Andrew L Rukhin Juan Soto James R Nechvatal Miles E Smid Elaine B Barker Stefan D Leigh Mark Levenson Mark Vangel David L Banks et al. 2010. A statistical test suite for random and pseudorandom number generators for cryptographic applications Sp 800--22 rev. 1a. National Institute of Standards & Technology.","DOI":"10.6028\/NIST.SP.800-22r1a"},{"key":"e_1_3_2_1_8_1","volume-title":"Pavel Laskov, Giorgio Giacinto, and Fabio Roli.","author":"Biggio Battista","year":"2013","unstructured":"Battista Biggio , Igino Corona , Davide Maiorca , Blaine Nelson , Nedim vS rndi\u0107 , Pavel Laskov, Giorgio Giacinto, and Fabio Roli. 2013 . Evasion attacks against machine learning at test time. In ECML\/PKDD. Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim vS rndi\u0107, Pavel Laskov, Giorgio Giacinto, and Fabio Roli. 2013. Evasion attacks against machine learning at test time. In ECML\/PKDD."},{"key":"e_1_3_2_1_9_1","unstructured":"Battista Biggio B Nelson and P Laskov. 2012. Poisoning attacks against support vector machines. In ICML.  Battista Biggio B Nelson and P Laskov. 2012. Poisoning attacks against support vector machines. In ICML."},{"key":"e_1_3_2_1_10_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.patcog.2018.07.023"},{"key":"e_1_3_2_1_11_1","volume-title":"Jonathan Frankle, and John Guttag.","author":"Blalock Davis","year":"2020","unstructured":"Davis Blalock , Jose Javier Gonzalez Ortiz , Jonathan Frankle, and John Guttag. 2020 . What is the state of neural network pruning? arXiv:2003.03033 (2020). Davis Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, and John Guttag. 2020. What is the state of neural network pruning? arXiv:2003.03033 (2020)."},{"key":"e_1_3_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-58452-8_13"},{"key":"e_1_3_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.1145\/3128572.3140444"},{"volume-title":"Towards evaluating the robustness of neural networks","author":"Carlini Nicholas","key":"e_1_3_2_1_14_1","unstructured":"Nicholas Carlini and David Wagner . 2017b. Towards evaluating the robustness of neural networks . In IEEE S &P. Nicholas Carlini and David Wagner. 2017b. Towards evaluating the robustness of neural networks. In IEEE S&P."},{"key":"e_1_3_2_1_15_1","volume-title":"Adversarial attacks and defences: A survey. arXiv:1810.00069","author":"Chakraborty Anirban","year":"2018","unstructured":"Anirban Chakraborty , Manaar Alam , Vishal Dey , Anupam Chattopadhyay , and Debdeep Mukhopadhyay . 2018. Adversarial attacks and defences: A survey. arXiv:1810.00069 ( 2018 ). Anirban Chakraborty, Manaar Alam, Vishal Dey, Anupam Chattopadhyay, and Debdeep Mukhopadhyay. 2018. Adversarial attacks and defences: A survey. 
arXiv:1810.00069 (2018)."},{"key":"e_1_3_2_1_16_1","volume-title":"AISafe Workshop.","author":"Chen Bryant","year":"2019","unstructured":"Bryant Chen , Wilka Carvalho , Nathalie Baracaldo , Heiko Ludwig , Benjamin Edwards , Taesung Lee , Ian Molloy , and Biplav Srivastava . 2019 a. Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering . In AISafe Workshop. Bryant Chen, Wilka Carvalho, Nathalie Baracaldo, Heiko Ludwig, Benjamin Edwards, Taesung Lee, Ian Molloy, and Biplav Srivastava. 2019a. Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering. In AISafe Workshop."},{"key":"e_1_3_2_1_17_1","doi-asserted-by":"crossref","unstructured":"Huili Chen Cheng Fu Jishen Zhao and Farinaz Koushanfar. 2019b. DeepInspect: A Black-box Trojan Detection and Mitigation Framework for Deep Neural Networks. In IJCAI.  Huili Chen Cheng Fu Jishen Zhao and Farinaz Koushanfar. 2019b. DeepInspect: A Black-box Trojan Detection and Mitigation Framework for Deep Neural Networks. In IJCAI.","DOI":"10.24963\/ijcai.2019\/647"},{"key":"e_1_3_2_1_18_1","volume-title":"Targeted backdoor attacks on deep learning systems using data poisoning. arXiv:1712.05526","author":"Chen Xinyun","year":"2017","unstructured":"Xinyun Chen , Chang Liu , Bo Li , Kimberly Lu , and Dawn Song . 2017. Targeted backdoor attacks on deep learning systems using data poisoning. arXiv:1712.05526 ( 2017 ). Xinyun Chen, Chang Liu, Bo Li, Kimberly Lu, and Dawn Song. 2017. Targeted backdoor attacks on deep learning systems using data poisoning. arXiv:1712.05526 (2017)."},{"volume-title":"Backdoor attacks on neural network operations","author":"Clements Joseph","key":"e_1_3_2_1_19_1","unstructured":"Joseph Clements and Yingjie Lao . 2018. Backdoor attacks on neural network operations . In IEEE GlobalSIP. Joseph Clements and Yingjie Lao. 2018. Backdoor attacks on neural network operations. In IEEE GlobalSIP."},{"key":"e_1_3_2_1_20_1","unstructured":"Yimian Dai Fabian Gieseke Stefan Oehmcke Yiquan Wu and Kobus Barnard. 2021a. Attentional feature fusion. In CVPR. 3560--3569.  Yimian Dai Fabian Gieseke Stefan Oehmcke Yiquan Wu and Kobus Barnard. 2021a. Attentional feature fusion. In CVPR. 3560--3569."},{"key":"e_1_3_2_1_21_1","volume-title":"Coatnet: Marrying convolution and attention for all data sizes. NeurIPS","author":"Dai Zihang","year":"2021","unstructured":"Zihang Dai , Hanxiao Liu , Quoc V Le , and Mingxing Tan . 2021 b. Coatnet: Marrying convolution and attention for all data sizes. NeurIPS (2021). Zihang Dai, Hanxiao Liu, Quoc V Le, and Mingxing Tan. 2021b. Coatnet: Marrying convolution and attention for all data sizes. NeurIPS (2021)."},{"key":"e_1_3_2_1_22_1","volume-title":"Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805","author":"Devlin Jacob","year":"2018","unstructured":"Jacob Devlin , Ming-Wei Chang , Kenton Lee , and Kristina Toutanova . 2018 . Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018). Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)."},{"key":"e_1_3_2_1_23_1","volume-title":"Februus: Input purification defense against trojan attacks on deep neural network systems. In ACSAC. 897--912.","author":"Doan Bao Gia","year":"2020","unstructured":"Bao Gia Doan , Ehsan Abbasnejad , and Damith C Ranasinghe . 2020 . 
Februus: Input purification defense against trojan attacks on deep neural network systems. In ACSAC. 897--912. Bao Gia Doan, Ehsan Abbasnejad, and Damith C Ranasinghe. 2020. Februus: Input purification defense against trojan attacks on deep neural network systems. In ACSAC. 897--912."},{"key":"e_1_3_2_1_24_1","unstructured":"Alexey Dosovitskiy Lucas Beyer Alexander Kolesnikov Dirk Weissenborn Xiaohua Zhai Thomas Unterthiner Mostafa Dehghani Matthias Minderer Georg Heigold Sylvain Gelly etal 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020).  Alexey Dosovitskiy Lucas Beyer Alexander Kolesnikov Dirk Weissenborn Xiaohua Zhai Thomas Unterthiner Mostafa Dehghani Matthias Minderer Georg Heigold Sylvain Gelly et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020)."},{"key":"e_1_3_2_1_25_1","volume-title":"Scheirer","author":"Dumford Jacob","year":"2020","unstructured":"Jacob Dumford and Walter J . Scheirer . 2020 . Backdooring Convolutional Neural Networks via Targeted Weight Perturbations. In IAPR\/IEEE IJCB. Jacob Dumford and Walter J. Scheirer. 2020. Backdooring Convolutional Neural Networks via Targeted Weight Perturbations. In IAPR\/IEEE IJCB."},{"key":"e_1_3_2_1_26_1","doi-asserted-by":"publisher","DOI":"10.1002\/widm.1257"},{"key":"e_1_3_2_1_27_1","doi-asserted-by":"crossref","unstructured":"Matt Fredrikson Somesh Jha and Thomas Ristenpart. 2015. Model inversion attacks that exploit confidence information and basic countermeasures. In ACM CCS.  Matt Fredrikson Somesh Jha and Thomas Ristenpart. 2015. Model inversion attacks that exploit confidence information and basic countermeasures. In ACM CCS.","DOI":"10.1145\/2810103.2813677"},{"key":"e_1_3_2_1_28_1","volume-title":"Strip: A defence against trojan attacks on deep neural networks. In ACSAC.","author":"Gao Yansong","year":"2019","unstructured":"Yansong Gao , Change Xu , Derui Wang , Shiping Chen , Damith C Ranasinghe , and Surya Nepal . 2019 . Strip: A defence against trojan attacks on deep neural networks. In ACSAC. Yansong Gao, Change Xu, Derui Wang, Shiping Chen, Damith C Ranasinghe, and Surya Nepal. 2019. Strip: A defence against trojan attacks on deep neural networks. In ACSAC."},{"key":"e_1_3_2_1_29_1","unstructured":"Ian Goodfellow Jonathon Shlens and Christian Szegedy. 2015. Explaining and Harnessing Adversarial Examples. In ICLR.  Ian Goodfellow Jonathon Shlens and Christian Szegedy. 2015. Explaining and Harnessing Adversarial Examples. In ICLR."},{"key":"e_1_3_2_1_30_1","volume-title":"NIPS MLSec Workshop.","author":"Gu Tianyu","year":"2017","unstructured":"Tianyu Gu , Brendan Dolan-Gavitt , and Siddharth Garg . 2017 . Badnets: Identifying vulnerabilities in the machine learning model supply chain . In NIPS MLSec Workshop. Tianyu Gu, Brendan Dolan-Gavitt, and Siddharth Garg. 2017. Badnets: Identifying vulnerabilities in the machine learning model supply chain. In NIPS MLSec Workshop."},{"key":"e_1_3_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2019.2909068"},{"key":"e_1_3_2_1_32_1","volume-title":"On Hiding Neural Networks Inside Neural Networks. arXiv:2002.10078","author":"Guo Chuan","year":"2020","unstructured":"Chuan Guo , Ruihan Wu , and Kilian Q Weinberger . 2020b. On Hiding Neural Networks Inside Neural Networks. arXiv:2002.10078 ( 2020 ). Chuan Guo, Ruihan Wu, and Kilian Q Weinberger. 2020b. On Hiding Neural Networks Inside Neural Networks. 
arXiv:2002.10078 (2020)."},{"key":"e_1_3_2_1_33_1","volume-title":"Tabor: A highly accurate approach to inspecting and restoring trojan backdoors in ai systems. In ICDM.","author":"Guo Wenbo","year":"2020","unstructured":"Wenbo Guo , Lun Wang , Xinyu Xing , Min Du , and Dawn Song . 2020 a. Tabor: A highly accurate approach to inspecting and restoring trojan backdoors in ai systems. In ICDM. Wenbo Guo, Lun Wang, Xinyu Xing, Min Du, and Dawn Song. 2020a. Tabor: A highly accurate approach to inspecting and restoring trojan backdoors in ai systems. In ICDM."},{"key":"e_1_3_2_1_34_1","unstructured":"Kaiming He Xiangyu Zhang Shaoqing Ren and Jian Sun. 2016. Deep residual learning for image recognition. In CVPR.  Kaiming He Xiangyu Zhang Shaoqing Ren and Jian Sun. 2016. Deep residual learning for image recognition. In CVPR."},{"key":"e_1_3_2_1_35_1","volume-title":"Handcrafted backdoors in deep neural networks. arXiv preprint arXiv:2106.04690","author":"Hong Sanghyun","year":"2021","unstructured":"Sanghyun Hong , Nicholas Carlini , and Alexey Kurakin . 2021. Handcrafted backdoors in deep neural networks. arXiv preprint arXiv:2106.04690 ( 2021 ). Sanghyun Hong, Nicholas Carlini, and Alexey Kurakin. 2021. Handcrafted backdoors in deep neural networks. arXiv preprint arXiv:2106.04690 (2021)."},{"key":"e_1_3_2_1_36_1","doi-asserted-by":"crossref","unstructured":"Sebastian Houben Johannes Stallkamp Jan Salmen Marc Schlipsing and Christian Igel. 2013. Detection of Traffic Signs in Real-World Images: The German Traffic Sign Detection Benchmark. In IJCNN.  Sebastian Houben Johannes Stallkamp Jan Salmen Marc Schlipsing and Christian Igel. 2013. Detection of Traffic Signs in Real-World Images: The German Traffic Sign Detection Benchmark. In IJCNN.","DOI":"10.1109\/IJCNN.2013.6706807"},{"key":"e_1_3_2_1_37_1","doi-asserted-by":"publisher","DOI":"10.1109\/LSP.2021.3112099"},{"key":"e_1_3_2_1_38_1","doi-asserted-by":"publisher","DOI":"10.1145\/3366423.3380243"},{"key":"e_1_3_2_1_39_1","doi-asserted-by":"publisher","DOI":"10.1145\/2046684.2046692"},{"key":"e_1_3_2_1_40_1","volume-title":"NeuronInspect: Detecting Backdoors in Neural Networks via Output Explanations. arXiv:1911.07399","author":"Huang Xijie","year":"2019","unstructured":"Xijie Huang , Moustafa Alzantot , and Mani Srivastava . 2019. NeuronInspect: Detecting Backdoors in Neural Networks via Output Explanations. arXiv:1911.07399 ( 2019 ). Xijie Huang, Moustafa Alzantot, and Mani Srivastava. 2019. NeuronInspect: Detecting Backdoors in Neural Networks via Output Explanations. arXiv:1911.07399 (2019)."},{"key":"e_1_3_2_1_41_1","doi-asserted-by":"crossref","unstructured":"Yujie Ji Xinyang Zhang Shouling Ji Xiapu Luo and Ting Wang. 2018. Model-reuse attacks on deep learning systems. In ACM CCS.  Yujie Ji Xinyang Zhang Shouling Ji Xiapu Luo and Ting Wang. 2018. Model-reuse attacks on deep learning systems. In ACM CCS.","DOI":"10.1145\/3243734.3243757"},{"volume-title":"Backdoor attacks against learning systems","author":"Ji Yujie","key":"e_1_3_2_1_42_1","unstructured":"Yujie Ji , Xinyang Zhang , and Ting Wang . 2017. Backdoor attacks against learning systems . In IEEE CNS. Yujie Ji, Xinyang Zhang, and Ting Wang. 2017. Backdoor attacks against learning systems. In IEEE CNS."},{"key":"e_1_3_2_1_43_1","volume-title":"Fahad Shahbaz Khan, and Mubarak Shah.","author":"Khan Salman","year":"2021","unstructured":"Salman Khan , Muzammal Naseer , Munawar Hayat , Syed Waqas Zamir , Fahad Shahbaz Khan, and Mubarak Shah. 2021 . Transformers in vision: A survey. 
ACM Computing Surveys (CSUR) ( 2021). Salman Khan, Muzammal Naseer, Munawar Hayat, Syed Waqas Zamir, Fahad Shahbaz Khan, and Mubarak Shah. 2021. Transformers in vision: A survey. ACM Computing Surveys (CSUR) (2021)."},{"key":"e_1_3_2_1_44_1","doi-asserted-by":"crossref","unstructured":"Soheil Kolouri Aniruddha Saha Hamed Pirsiavash and Heiko Hoffmann. 2020. Universal litmus patterns: Revealing backdoor attacks in cnns. In CVPR.  Soheil Kolouri Aniruddha Saha Hamed Pirsiavash and Heiko Hoffmann. 2020. Universal litmus patterns: Revealing backdoor attacks in cnns. In CVPR.","DOI":"10.1109\/CVPR42600.2020.00038"},{"key":"e_1_3_2_1_45_1","unstructured":"Alex Krizhevsky Geoffrey Hinton etal 2009. Learning multiple layers of features from tiny images. (2009).  Alex Krizhevsky Geoffrey Hinton et al. 2009. Learning multiple layers of features from tiny images. (2009)."},{"key":"e_1_3_2_1_46_1","doi-asserted-by":"publisher","DOI":"10.1145\/3065386"},{"key":"e_1_3_2_1_47_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.acl-main.249"},{"key":"e_1_3_2_1_49_1","volume-title":"Rebooting research on detecting repackaged android apps: Literature review and benchmark","author":"Li Li","year":"2019","unstructured":"Li Li , Tegawend\u00e9 F Bissyand\u00e9 , and Jacques Klein . 2019a. Rebooting research on detecting repackaged android apps: Literature review and benchmark . IEEE TSE ( 2019 ). Li Li, Tegawend\u00e9 F Bissyand\u00e9, and Jacques Klein. 2019a. Rebooting research on detecting repackaged android apps: Literature review and benchmark. IEEE TSE (2019)."},{"key":"e_1_3_2_1_50_1","doi-asserted-by":"publisher","DOI":"10.1109\/TDSC.2020.3021407"},{"key":"e_1_3_2_1_51_1","doi-asserted-by":"publisher","DOI":"10.1109\/ISVLSI.2018.00093"},{"key":"e_1_3_2_1_52_1","unstructured":"Xiang Li Wenhai Wang Xiaolin Hu and Jian Yang. 2019b. Selective kernel networks. In CVPR. 510--519.  Xiang Li Wenhai Wang Xiaolin Hu and Jian Yang. 2019b. Selective kernel networks. In CVPR. 510--519."},{"volume-title":"DeepPayload: Black-box Backdoor Attack on Deep Learning Models through Neural Payload Injection","author":"Li Yuanchun","key":"e_1_3_2_1_53_1","unstructured":"Yuanchun Li , Jiayi Hua , Haoyu Wang , Chunyang Chen , and Yunxin Liu . 2021. DeepPayload: Black-box Backdoor Attack on Deep Learning Models through Neural Payload Injection . In IEEE\/ACM ICSE. Yuanchun Li, Jiayi Hua, Haoyu Wang, Chunyang Chen, and Yunxin Liu. 2021. DeepPayload: Black-box Backdoor Attack on Deep Learning Models through Neural Payload Injection. In IEEE\/ACM ICSE."},{"key":"e_1_3_2_1_54_1","volume-title":"Rethinking the Trigger of Backdoor Attack. arXiv:2004.04692","author":"Li Yiming","year":"2020","unstructured":"Yiming Li , Tongqing Zhai , Baoyuan Wu , Yong Jiang , Zhifeng Li , and Shutao Xia . 2020b. Rethinking the Trigger of Backdoor Attack. arXiv:2004.04692 ( 2020 ). Yiming Li, Tongqing Zhai, Baoyuan Wu, Yong Jiang, Zhifeng Li, and Shutao Xia. 2020b. Rethinking the Trigger of Backdoor Attack. arXiv:2004.04692 (2020)."},{"key":"e_1_3_2_1_55_1","volume-title":"Fine-pruning: Defending against backdooring attacks on deep neural networks. In RAID.","author":"Liu Kang","year":"2018","unstructured":"Kang Liu , Brendan Dolan-Gavitt , and Siddharth Garg . 2018 a. Fine-pruning: Defending against backdooring attacks on deep neural networks. In RAID. Kang Liu, Brendan Dolan-Gavitt, and Siddharth Garg. 2018a. Fine-pruning: Defending against backdooring attacks on deep neural networks. 
In RAID."},{"key":"e_1_3_2_1_56_1","volume-title":"ABS: Scanning neural networks for back-doors by artificial brain stimulation. In ACM CCS.","author":"Liu Yingqi","year":"2019","unstructured":"Yingqi Liu , Wen-Chuan Lee , Guanhong Tao , Shiqing Ma , Yousra Aafer , and Xiangyu Zhang . 2019 . ABS: Scanning neural networks for back-doors by artificial brain stimulation. In ACM CCS. Yingqi Liu, Wen-Chuan Lee, Guanhong Tao, Shiqing Ma, Yousra Aafer, and Xiangyu Zhang. 2019. ABS: Scanning neural networks for back-doors by artificial brain stimulation. In ACM CCS."},{"key":"e_1_3_2_1_57_1","unstructured":"Yingqi Liu Shiqing Ma Yousra Aafer W. Lee Juan Zhai Weihang Wang and X. Zhang. 2018b. Trojaning Attack on Neural Networks. In NDSS.  Yingqi Liu Shiqing Ma Yousra Aafer W. Lee Juan Zhai Weihang Wang and X. Zhang. 2018b. Trojaning Attack on Neural Networks. In NDSS."},{"key":"e_1_3_2_1_58_1","unstructured":"Yunfei Liu Xingjun Ma James Bailey and Feng Lu. 2020. Reflection backdoor: A natural backdoor attack on deep neural networks. In ECCV.  Yunfei Liu Xingjun Ma James Bailey and Feng Lu. 2020. Reflection backdoor: A natural backdoor attack on deep neural networks. In ECCV."},{"key":"e_1_3_2_1_59_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-031-17143-7_17"},{"key":"e_1_3_2_1_60_1","volume-title":"A convnet for the","author":"Liu Zhuang","year":"2020","unstructured":"Zhuang Liu , Hanzi Mao , Chao-Yuan Wu , Christoph Feichtenhofer , Trevor Darrell , and Saining Xie . 2022b. A convnet for the 2020 s. In CVPR. 11976--11986. Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, and Saining Xie. 2022b. A convnet for the 2020s. In CVPR. 11976--11986."},{"key":"e_1_3_2_1_61_1","doi-asserted-by":"crossref","unstructured":"Shike Mei and Xiaojin Zhu. 2015. Using Machine Teaching to Identify Optimal Training-Set Attacks on Machine Learners.. In AAAI.  Shike Mei and Xiaojin Zhu. 2015. Using Machine Teaching to Identify Optimal Training-Set Attacks on Machine Learners.. In AAAI.","DOI":"10.1609\/aaai.v29i1.9569"},{"key":"e_1_3_2_1_62_1","volume-title":"ACM AISec Workshop.","author":"Gonz\u00e1lez Luis Mu","year":"2017","unstructured":"Luis Mu noz- Gonz\u00e1lez , Battista Biggio , Ambra Demontis , Andrea Paudice , Vasin Wongrassamee , Emil C Lupu , and Fabio Roli . 2017 . Towards poisoning of deep learning algorithms with back-gradient optimization . In ACM AISec Workshop. Luis Mu noz-Gonz\u00e1lez, Battista Biggio, Ambra Demontis, Andrea Paudice, Vasin Wongrassamee, Emil C Lupu, and Fabio Roli. 2017. Towards poisoning of deep learning algorithms with back-gradient optimization. In ACM AISec Workshop."},{"key":"e_1_3_2_1_63_1","volume-title":"WaNet-Imperceptible Warping-based Backdoor Attack. In International Conference on Learning Representations.","author":"Nguyen Tuan Anh","year":"2020","unstructured":"Tuan Anh Nguyen and Anh Tuan Tran . 2020 . WaNet-Imperceptible Warping-based Backdoor Attack. In International Conference on Learning Representations. Tuan Anh Nguyen and Anh Tuan Tran. 2020. WaNet-Imperceptible Warping-based Backdoor Attack. In International Conference on Learning Representations."},{"key":"e_1_3_2_1_64_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP.2015.7178964"},{"key":"e_1_3_2_1_65_1","doi-asserted-by":"crossref","unstructured":"Ren Pang Hua Shen Xinyang Zhang Shouling Ji Yevgeniy Vorobeychik Xiapu Luo Alex Liu and Ting Wang. 2020a. A Tale of Evil Twins: Adversarial Inputs versus Poisoned Models. In ACM CCS.  
Ren Pang Hua Shen Xinyang Zhang Shouling Ji Yevgeniy Vorobeychik Xiapu Luo Alex Liu and Ting Wang. 2020a. A Tale of Evil Twins: Adversarial Inputs versus Poisoned Models. In ACM CCS.","DOI":"10.1145\/3372297.3417253"},{"key":"e_1_3_2_1_66_1","volume-title":"TROJANZOO: Everything you ever wanted to know about neural backdoors (but were afraid to ask). arXiv:2012.09302","author":"Pang Ren","year":"2020","unstructured":"Ren Pang , Zheng Zhang , Xiangshan Gao , Zhaohan Xi , Shouling Ji , Peng Cheng , and Ting Wang . 2020 b. TROJANZOO: Everything you ever wanted to know about neural backdoors (but were afraid to ask). arXiv:2012.09302 (2020). Ren Pang, Zheng Zhang, Xiangshan Gao, Zhaohan Xi, Shouling Ji, Peng Cheng, and Ting Wang. 2020b. TROJANZOO: Everything you ever wanted to know about neural backdoors (but were afraid to ask). arXiv:2012.09302 (2020)."},{"key":"e_1_3_2_1_67_1","doi-asserted-by":"crossref","unstructured":"Nicolas Papernot Patrick McDaniel Somesh Jha Matt Fredrikson Z Berkay Celik and Ananthram Swami. 2016. The limitations of deep learning in adversarial settings. In Euro S&P.  Nicolas Papernot Patrick McDaniel Somesh Jha Matt Fredrikson Z Berkay Celik and Ananthram Swami. 2016. The limitations of deep learning in adversarial settings. In Euro S&P.","DOI":"10.1109\/EuroSP.2016.36"},{"volume-title":"PyTorch Models and pre-trained weights: ConvNeXt. Available at: https:\/\/pytorch.org\/vision\/main\/models\/convnext.html, accessed","year":"2022","key":"e_1_3_2_1_68_1","unstructured":"PyTorch. 2022a. PyTorch Models and pre-trained weights: ConvNeXt. Available at: https:\/\/pytorch.org\/vision\/main\/models\/convnext.html, accessed : July 2022 . PyTorch. 2022a. PyTorch Models and pre-trained weights: ConvNeXt. Available at: https:\/\/pytorch.org\/vision\/main\/models\/convnext.html, accessed: July 2022."},{"volume-title":"PyTorch Package Reference: Models and pre-trained weights: VisionTransformer. https:\/\/pytorch.org\/vision\/stable\/models\/vision_transformer.html, accessed","year":"2022","key":"e_1_3_2_1_69_1","unstructured":"PyTorch. 2022b. PyTorch Package Reference: Models and pre-trained weights: VisionTransformer. https:\/\/pytorch.org\/vision\/stable\/models\/vision_transformer.html, accessed : July 2022 . PyTorch. 2022b. PyTorch Package Reference: Models and pre-trained weights: VisionTransformer. https:\/\/pytorch.org\/vision\/stable\/models\/vision_transformer.html, accessed: July 2022."},{"key":"e_1_3_2_1_70_1","volume-title":"ICLR Workshop.","author":"Qi Xiangyu","year":"2021","unstructured":"Xiangyu Qi , Jifeng Zhu , Chulin Xie , and Yong Yang . 2021 . Subnet Replacement: Deployment-stage backdoor attack against deep neural networks in gray-box setting . In ICLR Workshop. Xiangyu Qi, Jifeng Zhu, Chulin Xie, and Yong Yang. 2021. Subnet Replacement: Deployment-stage backdoor attack against deep neural networks in gray-box setting. In ICLR Workshop."},{"key":"e_1_3_2_1_71_1","volume-title":"TBT: Targeted Neural Network Attack with Bit Trojan. In CVPR.","author":"Rakin Adnan Siraj","year":"2020","unstructured":"Adnan Siraj Rakin , Zhezhi He , and Deliang Fan . 2020 . TBT: Targeted Neural Network Attack with Bit Trojan. In CVPR. Adnan Siraj Rakin, Zhezhi He, and Deliang Fan. 2020. TBT: Targeted Neural Network Attack with Bit Trojan. In CVPR."},{"key":"e_1_3_2_1_72_1","unstructured":"Shahbaz Rezaei and Xin Liu. 2019. A Target-Agnostic Attack on Deep Models: Exploiting Security Vulnerabilities of Transfer Learning. In ICLR.  Shahbaz Rezaei and Xin Liu. 2019. 
A Target-Agnostic Attack on Deep Models: Exploiting Security Vulnerabilities of Transfer Learning. In ICLR."},{"key":"e_1_3_2_1_73_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPRW.2016.58"},{"key":"e_1_3_2_1_74_1","doi-asserted-by":"publisher","DOI":"10.1007\/s11263-015-0816-y"},{"key":"e_1_3_2_1_75_1","unstructured":"Aniruddha Saha Akshayvarun Subramanya and Hamed Pirsiavash. 2020. Hidden trigger backdoor attacks. In AAAI.  Aniruddha Saha Akshayvarun Subramanya and Hamed Pirsiavash. 2020. Hidden trigger backdoor attacks. In AAAI."},{"key":"e_1_3_2_1_76_1","volume-title":"NeurIPS","volume":"28","author":"Sculley David","year":"2015","unstructured":"David Sculley , Gary Holt , Daniel Golovin , Eugene Davydov , Todd Phillips , Dietmar Ebner , Vinay Chaudhary , Michael Young , Jean-Francois Crespo , and Dan Dennison . 2015 . Hidden technical debt in machine learning systems . NeurIPS , Vol. 28 (2015). David Sculley, Gary Holt, Daniel Golovin, Eugene Davydov, Todd Phillips, Dietmar Ebner, Vinay Chaudhary, Michael Young, Jean-Francois Crespo, and Dan Dennison. 2015. Hidden technical debt in machine learning systems. NeurIPS , Vol. 28 (2015)."},{"key":"e_1_3_2_1_77_1","unstructured":"Ali Shafahi W Ronny Huang Mahyar Najibi Octavian Suciu Christoph Studer Tudor Dumitras and Tom Goldstein. 2018. Poison frogs! targeted clean-label poisoning attacks on neural networks. In NeurIPS.  Ali Shafahi W Ronny Huang Mahyar Najibi Octavian Suciu Christoph Studer Tudor Dumitras and Tom Goldstein. 2018. Poison frogs! targeted clean-label poisoning attacks on neural networks. In NeurIPS."},{"key":"e_1_3_2_1_78_1","unstructured":"Karen Simonyan and Andrew Zisserman. 2015. Very Deep Convolutional Networks for Large-Scale Image Recognition. In ICLR.  Karen Simonyan and Andrew Zisserman. 2015. Very Deep Convolutional Networks for Large-Scale Image Recognition. In ICLR."},{"key":"e_1_3_2_1_79_1","unstructured":"Manjeet Singh. 2019. How to overcome the AI\/ML Adoption Gap in the enterprise? Available at: https:\/\/coachmanjeet.medium.com\/how-to-overcome-the-ai-ml-adoption-gap-in-the-enterprise-56c152a7006f (Accessed: 04\/2021).  Manjeet Singh. 2019. How to overcome the AI\/ML Adoption Gap in the enterprise? Available at: https:\/\/coachmanjeet.medium.com\/how-to-overcome-the-ai-ml-adoption-gap-in-the-enterprise-56c152a7006f (Accessed: 04\/2021)."},{"key":"e_1_3_2_1_80_1","volume-title":"Data-free parameter pruning for deep neural networks. arXiv preprint arXiv:1507.06149","author":"Srinivas Suraj","year":"2015","unstructured":"Suraj Srinivas and R Venkatesh Babu . 2015. Data-free parameter pruning for deep neural networks. arXiv preprint arXiv:1507.06149 ( 2015 ). Suraj Srinivas and R Venkatesh Babu. 2015. Data-free parameter pruning for deep neural networks. arXiv preprint arXiv:1507.06149 (2015)."},{"key":"e_1_3_2_1_81_1","unstructured":"Andreas Steiner et al. [n. d.]. Vision Transformer and MLP-Mixer Architectures. Available at: https:\/\/github.com\/google-research\/vision_transformer.  Andreas Steiner et al. [n. d.]. Vision Transformer and MLP-Mixer Architectures. Available at: https:\/\/github.com\/google-research\/vision_transformer."},{"key":"e_1_3_2_1_82_1","volume-title":"Pang Wei W Koh, and Percy S Liang","author":"Steinhardt Jacob","year":"2017","unstructured":"Jacob Steinhardt , Pang Wei W Koh, and Percy S Liang . 2017 . Certified defenses for data poisoning attacks. In NIPS. Jacob Steinhardt, Pang Wei W Koh, and Percy S Liang. 2017. Certified defenses for data poisoning attacks. 
In NIPS."},{"key":"e_1_3_2_1_83_1","volume-title":"Hal Daume III, and Tudor Dumitras","author":"Suciu Octavian","year":"2018","unstructured":"Octavian Suciu , Radu Marginean , Yigitcan Kaya , Hal Daume III, and Tudor Dumitras . 2018 . When does machine learning $$FAIL$$? generalized transferability for evasion and poisoning attacks. In $$USENIX$$ Security . Octavian Suciu, Radu Marginean, Yigitcan Kaya, Hal Daume III, and Tudor Dumitras. 2018. When does machine learning $$FAIL$$? generalized transferability for evasion and poisoning attacks. In $$USENIX$$ Security."},{"key":"e_1_3_2_1_84_1","doi-asserted-by":"crossref","unstructured":"Christian Szegedy Sergey Ioffe Vincent Vanhoucke and Alexander A Alemi. 2017. Inception-v4 inception-ResNet and the impact of residual connections on learning. In AAAI.  Christian Szegedy Sergey Ioffe Vincent Vanhoucke and Alexander A Alemi. 2017. Inception-v4 inception-ResNet and the impact of residual connections on learning. In AAAI.","DOI":"10.1609\/aaai.v31i1.11231"},{"key":"e_1_3_2_1_85_1","volume-title":"Dumitru Erhan, Ian Goodfellow, and Robert Fergus.","author":"Szegedy Christian","year":"2014","unstructured":"Christian Szegedy , Wojciech Zaremba , Ilya Sutskever , Joan Bruna Estrach , Dumitru Erhan, Ian Goodfellow, and Robert Fergus. 2014 . Intriguing properties of neural networks. In ICLR. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna Estrach, Dumitru Erhan, Ian Goodfellow, and Robert Fergus. 2014. Intriguing properties of neural networks. In ICLR."},{"key":"e_1_3_2_1_86_1","volume-title":"Efficientnet: Rethinking model scaling for convolutional neural networks. In ICML.","author":"Tan Mingxing","year":"2019","unstructured":"Mingxing Tan and Quoc Le . 2019 . Efficientnet: Rethinking model scaling for convolutional neural networks. In ICML. Mingxing Tan and Quoc Le. 2019. Efficientnet: Rethinking model scaling for convolutional neural networks. In ICML."},{"key":"e_1_3_2_1_87_1","volume-title":"Demon in the variant: Statistical analysis of dnns for robust backdoor contamination detection. USENIX Security","author":"Tang Di","year":"2020","unstructured":"Di Tang , XiaoFeng Wang , Haixu Tang , and Kehuan Zhang . 2020b. Demon in the variant: Statistical analysis of dnns for robust backdoor contamination detection. USENIX Security ( 2020 ). Di Tang, XiaoFeng Wang, Haixu Tang, and Kehuan Zhang. 2020b. Demon in the variant: Statistical analysis of dnns for robust backdoor contamination detection. USENIX Security (2020)."},{"key":"e_1_3_2_1_88_1","doi-asserted-by":"crossref","unstructured":"Ruixiang Tang Mengnan Du Ninghao Liu Fan Yang and Xia Hu. 2020a. An embarrassingly simple approach for Trojan attack in deep neural networks. In ACM KDD.  Ruixiang Tang Mengnan Du Ninghao Liu Fan Yang and Xia Hu. 2020a. An embarrassingly simple approach for Trojan attack in deep neural networks. In ACM KDD.","DOI":"10.1145\/3394486.3403064"},{"key":"e_1_3_2_1_89_1","unstructured":"TensorFlow. 2022. Transfer learning and fine-tuning. TensorFlow Tutorials available at: https:\/\/www.tensorflow.org\/tutorials\/images\/transfer_learning.  TensorFlow. 2022. Transfer learning and fine-tuning. TensorFlow Tutorials available at: https:\/\/www.tensorflow.org\/tutorials\/images\/transfer_learning."},{"key":"e_1_3_2_1_90_1","volume-title":"Mehmet Emre Gursoy, and Ling Liu","author":"Tolpegin Vale","year":"2020","unstructured":"Vale Tolpegin , Stacey Truex , Mehmet Emre Gursoy, and Ling Liu . 2020 . Data poisoning attacks against federated learning systems. 
In Euro S &P. Vale Tolpegin, Stacey Truex, Mehmet Emre Gursoy, and Ling Liu. 2020. Data poisoning attacks against federated learning systems. In Euro S&P."},{"key":"e_1_3_2_1_91_1","unstructured":"Florian Tram\u00e8r Fan Zhang Ari Juels Michael K Reiter and Thomas Ristenpart. 2016. Stealing machine learning models via prediction apis. In $$USENIX$$ Security.  Florian Tram\u00e8r Fan Zhang Ari Juels Michael K Reiter and Thomas Ristenpart. 2016. Stealing machine learning models via prediction apis. In $$USENIX$$ Security."},{"key":"e_1_3_2_1_92_1","volume-title":"Label-consistent backdoor attacks. arXiv:1912.02771","author":"Turner Alexander","year":"2019","unstructured":"Alexander Turner , Dimitris Tsipras , and Aleksander Madry . 2019. Label-consistent backdoor attacks. arXiv:1912.02771 ( 2019 ). Alexander Turner, Dimitris Tsipras, and Aleksander Madry. 2019. Label-consistent backdoor attacks. arXiv:1912.02771 (2019)."},{"key":"e_1_3_2_1_93_1","volume-title":"Attention is all you need. Advances in neural information processing systems","author":"Vaswani Ashish","year":"2017","unstructured":"Ashish Vaswani , Noam Shazeer , Niki Parmar , Jakob Uszkoreit , Llion Jones , Aidan N Gomez , \u0141ukasz Kaiser , and Illia Polosukhin . 2017. Attention is all you need. Advances in neural information processing systems , Vol. 30 ( 2017 ). Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems , Vol. 30 (2017)."},{"volume-title":"Neural cleanse: Identifying and mitigating backdoor attacks in neural networks","author":"Wang Bolun","key":"e_1_3_2_1_94_1","unstructured":"Bolun Wang , Yuanshun Yao , Shawn Shan , Huiying Li , Bimal Viswanath , Haitao Zheng , and Ben Y Zhao . 2019. Neural cleanse: Identifying and mitigating backdoor attacks in neural networks . In IEEE S &P. Bolun Wang, Yuanshun Yao, Shawn Shan, Huiying Li, Bimal Viswanath, Haitao Zheng, and Ben Y Zhao. 2019. Neural cleanse: Identifying and mitigating backdoor attacks in neural networks. In IEEE S&P."},{"key":"e_1_3_2_1_95_1","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2003.819861"},{"key":"e_1_3_2_1_96_1","doi-asserted-by":"crossref","unstructured":"Zhenting Wang Juan Zhai and Shiqing Ma. 2022. BppAttack: Stealthy and Efficient Trojan Attacks against Deep Neural Networks via Image Quantization and Contrastive Adversarial Learning. In CVPR. 15074--15084.  Zhenting Wang Juan Zhai and Shiqing Ma. 2022. BppAttack: Stealthy and Efficient Trojan Attacks against Deep Neural Networks via Image Quantization and Contrastive Adversarial Learning. In CVPR. 15074--15084.","DOI":"10.1109\/CVPR52688.2022.01465"},{"key":"e_1_3_2_1_97_1","doi-asserted-by":"crossref","unstructured":"Emily Wenger Josephine Passananti A. Bhagoji Yuanshun Yao Hai-Tao Zheng and B. Zhao. 2020. Backdoor Attacks Against Deep Learning Systems in the Physical World. arXiv: Computer Vision and Pattern Recognition (2020).  Emily Wenger Josephine Passananti A. Bhagoji Yuanshun Yao Hai-Tao Zheng and B. Zhao. 2020. Backdoor Attacks Against Deep Learning Systems in the Physical World. arXiv: Computer Vision and Pattern Recognition (2020).","DOI":"10.1109\/CVPR46437.2021.00614"},{"key":"e_1_3_2_1_98_1","unstructured":"Dongxian Wu and Yisen Wang. 2021. Adversarial Neuron Pruning Purifies Backdoored Deep Models. In NeurIPS.  Dongxian Wu and Yisen Wang. 2021. Adversarial Neuron Pruning Purifies Backdoored Deep Models. 
In NeurIPS."},{"key":"e_1_3_2_1_99_1","volume-title":"Towards Understanding and Improving the Transferability of Adversarial Examples in Deep Neural Networks. In Asian Conf. on Machine Learning.","author":"Wu Lei","year":"2020","unstructured":"Lei Wu and Zhanxing Zhu . 2020 . Towards Understanding and Improving the Transferability of Adversarial Examples in Deep Neural Networks. In Asian Conf. on Machine Learning. Lei Wu and Zhanxing Zhu. 2020. Towards Understanding and Improving the Transferability of Adversarial Examples in Deep Neural Networks. In Asian Conf. on Machine Learning."},{"key":"e_1_3_2_1_100_1","volume-title":"Warren He, Mingyan Liu, and Dawn Song.","author":"Xiao Chaowei","year":"2018","unstructured":"Chaowei Xiao , Bo Li , Jun Yan Zhu , Warren He, Mingyan Liu, and Dawn Song. 2018 . Generating adversarial examples with adversarial networks. In IJCAI. Chaowei Xiao, Bo Li, Jun Yan Zhu, Warren He, Mingyan Liu, and Dawn Song. 2018. Generating adversarial examples with adversarial networks. In IJCAI."},{"key":"e_1_3_2_1_101_1","first-page":"30392","article-title":"Early convolutions help transformers see better","volume":"34","author":"Xiao Tete","year":"2021","unstructured":"Tete Xiao , Mannat Singh , Eric Mintun , Trevor Darrell , Piotr Doll\u00e1r , and Ross Girshick . 2021 . Early convolutions help transformers see better . Advances in Neural Information Processing Systems , Vol. 34 (2021), 30392 -- 30400 . Tete Xiao, Mannat Singh, Eric Mintun, Trevor Darrell, Piotr Doll\u00e1r, and Ross Girshick. 2021. Early convolutions help transformers see better. Advances in Neural Information Processing Systems , Vol. 34 (2021), 30392--30400.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_1_102_1","doi-asserted-by":"publisher","DOI":"10.1109\/SP40001.2021.00034"},{"key":"e_1_3_2_1_103_1","volume-title":"Generative poisoning attack method against neural networks. arXiv:1703.01340","author":"Yang Chaofei","year":"2017","unstructured":"Chaofei Yang , Qing Wu , Hai Li , and Yiran Chen . 2017. Generative poisoning attack method against neural networks. arXiv:1703.01340 ( 2017 ). Chaofei Yang, Qing Wu, Hai Li, and Yiran Chen. 2017. Generative poisoning attack method against neural networks. arXiv:1703.01340 (2017)."},{"key":"e_1_3_2_1_104_1","unstructured":"Yuanshun Yao Huiying Li Haitao Zheng and Ben Y Zhao. 2019. Latent backdoor attacks on deep neural networks. In ACM CCS.  Yuanshun Yao Huiying Li Haitao Zheng and Ben Y Zhao. 2019. Latent backdoor attacks on deep neural networks. In ACM CCS."},{"volume-title":"Hardware trojan in fpga cnn accelerator","author":"Ye Jing","key":"e_1_3_2_1_105_1","unstructured":"Jing Ye , Yu Hu , and Xiaowei Li. 2018. Hardware trojan in fpga cnn accelerator . In IEEE ATS. Jing Ye, Yu Hu, and Xiaowei Li. 2018. Hardware trojan in fpga cnn accelerator. In IEEE ATS."},{"key":"e_1_3_2_1_106_1","volume-title":"Summaries of the Third Annual JPL Airborne Geoscience Workshop.","author":"Yuhas Roberta H","year":"1992","unstructured":"Roberta H Yuhas , Alexander FH Goetz , and Joe W Boardman . 1992 . Discrimination among semi-arid landscape endmembers using the spectral angle mapper (SAM) algorithm. In JPL , Summaries of the Third Annual JPL Airborne Geoscience Workshop. Volume 1: AVIRIS Workshop. Roberta H Yuhas, Alexander FH Goetz, and Joe W Boardman. 1992. Discrimination among semi-arid landscape endmembers using the spectral angle mapper (SAM) algorithm. In JPL, Summaries of the Third Annual JPL Airborne Geoscience Workshop. 
Volume 1: AVIRIS Workshop."},{"key":"e_1_3_2_1_107_1","doi-asserted-by":"crossref","unstructured":"Yi Zeng Won Park Z Morley Mao and Ruoxi Jia. 2021. Rethinking the backdoor attacks' triggers: A frequency perspective. In CVPR. 16473--16481.  Yi Zeng Won Park Z Morley Mao and Ruoxi Jia. 2021. Rethinking the backdoor attacks' triggers: A frequency perspective. In CVPR. 16473--16481.","DOI":"10.1109\/ICCV48922.2021.01616"},{"key":"e_1_3_2_1_108_1","volume-title":"Resnest: Split-attention networks. In CVPR. 2736--2746.","author":"Zhang Hang","year":"2022","unstructured":"Hang Zhang , Chongruo Wu , Zhongyue Zhang , Yi Zhu , Haibin Lin , Zhi Zhang , Yue Sun , Tong He , Jonas Mueller , R Manmatha , 2022 . Resnest: Split-attention networks. In CVPR. 2736--2746. Hang Zhang, Chongruo Wu, Zhongyue Zhang, Yi Zhu, Haibin Lin, Zhi Zhang, Yue Sun, Tong He, Jonas Mueller, R Manmatha, et al. 2022. Resnest: Split-attention networks. In CVPR. 2736--2746."},{"key":"e_1_3_2_1_109_1","doi-asserted-by":"publisher","DOI":"10.1145\/3460319.3464809"},{"key":"e_1_3_2_1_110_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00068"},{"key":"e_1_3_2_1_111_1","doi-asserted-by":"crossref","unstructured":"Mengchen Zhao Bo An Yaodong Yu Sulin Liu and Sinno Jialin Pan. 2018. Data Poisoning Attacks on Multi-Task Relationship Learning.. In AAAI.  Mengchen Zhao Bo An Yaodong Yu Sulin Liu and Sinno Jialin Pan. 2018. Data Poisoning Attacks on Multi-Task Relationship Learning.. In AAAI.","DOI":"10.1609\/aaai.v32i1.11838"},{"key":"e_1_3_2_1_112_1","doi-asserted-by":"crossref","unstructured":"Yang Zhao Xing Hu Shuangchen Li Jing Ye Lei Deng Yu Ji Jianyu Xu Dong Wu and Yuan Xie. 2019. Memory trojan attack on neural network accelerators. In DATE.  Yang Zhao Xing Hu Shuangchen Li Jing Ye Lei Deng Yu Ji Jianyu Xu Dong Wu and Yuan Xie. 2019. Memory trojan attack on neural network accelerators. In DATE.","DOI":"10.23919\/DATE.2019.8715027"},{"key":"e_1_3_2_1_113_1","volume-title":"Sencun Zhu, and David Miller.","author":"Zhong Haoti","year":"2020","unstructured":"Haoti Zhong , Cong Liao , Anna Cinzia Squicciarini , Sencun Zhu, and David Miller. 2020 . Backdoor Embedding in Convolutional Neural Network Models via Invisible Perturbation. In ACM CODASPY. Haoti Zhong, Cong Liao, Anna Cinzia Squicciarini, Sencun Zhu, and David Miller. 2020. Backdoor Embedding in Convolutional Neural Network Models via Invisible Perturbation. In ACM CODASPY."},{"key":"e_1_3_2_1_114_1","unstructured":"Chen Zhu W Ronny Huang Hengduo Li Gavin Taylor Christoph Studer and Tom Goldstein. 2019. Transferable Clean-Label Poisoning Attacks on Deep Neural Nets. In ICML.  Chen Zhu W Ronny Huang Hengduo Li Gavin Taylor Christoph Studer and Tom Goldstein. 2019. Transferable Clean-Label Poisoning Attacks on Deep Neural Nets. In ICML."},{"key":"e_1_3_2_1_115_1","unstructured":"Jun-Yan Zhu Taesung Park Phillip Isola and Alexei A Efros. 2017. Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks. In ICCV.  Jun-Yan Zhu Taesung Park Phillip Isola and Alexei A Efros. 2017. Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks. In ICCV."},{"key":"e_1_3_2_1_116_1","volume-title":"PoTrojan: powerful neural-level trojan designs in deep learning models. arXiv:1802.03043","author":"Zou Minhui","year":"2018","unstructured":"Minhui Zou , Yang Shi , Chengliang Wang , Fangyu Li , WenZhan Song , and Yu Wang . 2018. PoTrojan: powerful neural-level trojan designs in deep learning models. arXiv:1802.03043 ( 2018 ). 
io Minhui Zou, Yang Shi, Chengliang Wang, Fangyu Li, WenZhan Song, and Yu Wang. 2018. PoTrojan: powerful neural-level trojan designs in deep learning models. arXiv:1802.03043 (2018). io"}],"event":{"name":"CCS '22: 2022 ACM SIGSAC Conference on Computer and Communications Security","sponsor":["SIGSAC ACM Special Interest Group on Security, Audit, and Control"],"location":"Los Angeles CA USA","acronym":"CCS '22"},"container-title":["Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3548606.3560678","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3548606.3560678","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T17:48:59Z","timestamp":1750182539000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3548606.3560678"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,11,7]]},"references-count":115,"alternative-id":["10.1145\/3548606.3560678","10.1145\/3548606"],"URL":"https:\/\/doi.org\/10.1145\/3548606.3560678","relation":{},"subject":[],"published":{"date-parts":[[2022,11,7]]},"assertion":[{"value":"2022-11-07","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}