{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,2]],"date-time":"2026-04-02T09:43:35Z","timestamp":1775123015881,"version":"3.50.1"},"reference-count":82,"publisher":"Association for Computing Machinery (ACM)","issue":"5","license":[{"start":{"date-parts":[[2023,7,24]],"date-time":"2023-07-24T00:00:00Z","timestamp":1690156800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["61932021"],"award-info":[{"award-number":["61932021"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"name":"Hong Kong RGC\/GRF","award":["16207120"],"award-info":[{"award-number":["16207120"]}]},{"name":"Hong Kong RGC\/RIF","award":["R5034-18"],"award-info":[{"award-number":["R5034-18"]}]},{"name":"Hong Kong ITF","award":["MHP\/055\/19"],"award-info":[{"award-number":["MHP\/055\/19"]}]},{"name":"Hong Kong PhD Fellowship Scheme, HKUST RedBird Academic Excellence Award, and the MSRA Collaborative Research Grant"},{"name":"Cisco Research Gift, Natural Sciences and Engineering Research Council of Canada (NSERC) through the Discovery Grant, and CFI-JELF Project","award":["#40736"],"award-info":[{"award-number":["#40736"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Softw. Eng. Methodol."],"published-print":{"date-parts":[[2023,9,30]]},"abstract":"<jats:p>Model compression can significantly reduce the sizes of deep neural network (DNN) models and thus facilitate the dissemination of sophisticated, sizable DNN models, especially for deployment on mobile or embedded devices. However, the prediction results of compressed models may deviate from those of their original models. 
To help developers thoroughly understand the impact of model compression, it is essential to test these models to find those <jats:italic>deviated behaviors<\/jats:italic> before dissemination. However, this is a non-trivial task, because the architectures and gradients of compressed models are usually not available.<\/jats:p><jats:p>To this end, we propose <jats:sc><jats:sans-serif>Dflare<\/jats:sans-serif><\/jats:sc>, a novel, search-based, black-box testing technique to automatically find triggering inputs that result in deviated behaviors in image classification tasks. <jats:sc><jats:sans-serif>Dflare<\/jats:sans-serif><\/jats:sc> iteratively applies a series of mutation operations to a given seed image until a triggering input is found. For better efficacy and efficiency, <jats:sc><jats:sans-serif>Dflare<\/jats:sans-serif><\/jats:sc> models the search problem as a Markov chain and leverages the Metropolis-Hastings algorithm to guide the selection of mutation operators in each iteration. Further, <jats:sc><jats:sans-serif>Dflare<\/jats:sans-serif><\/jats:sc> utilizes a novel fitness function to prioritize the mutated inputs that either cause large differences between two models\u2019 outputs or trigger previously unobserved models\u2019 probability vectors. We evaluated <jats:sc><jats:sans-serif>Dflare<\/jats:sans-serif><\/jats:sc> on 21 compressed models for image classification tasks with three datasets. The results show that <jats:sc><jats:sans-serif>Dflare<\/jats:sans-serif><\/jats:sc> not only consistently outperforms the baseline in terms of efficacy but also significantly improves the efficiency: <jats:sc><jats:sans-serif>Dflare<\/jats:sans-serif><\/jats:sc> is 17.84\u00d7\u2013446.06\u00d7 as fast as the baseline in terms of time; the number of queries required by <jats:sc><jats:sans-serif>Dflare<\/jats:sans-serif><\/jats:sc> to find one triggering input is only 0.186\u20131.937% of those issued by the baseline. 
We also demonstrated that the triggering inputs found by <jats:sc><jats:sans-serif>Dflare<\/jats:sans-serif><\/jats:sc> can be used to repair up to 48.48% of the deviated behaviors in image classification tasks and further decrease the effectiveness of <jats:sc><jats:sans-serif>Dflare<\/jats:sans-serif><\/jats:sc> on the repaired models.<\/jats:p>","DOI":"10.1145\/3583564","type":"journal-article","created":{"date-parts":[[2023,2,8]],"date-time":"2023-02-08T13:28:49Z","timestamp":1675862929000},"page":"1-32","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":10,"title":["Finding Deviated Behaviors of the Compressed DNN Models for Image Classifications"],"prefix":"10.1145","volume":"32","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-1644-2965","authenticated-orcid":false,"given":"Yongqiang","family":"Tian","sequence":"first","affiliation":[{"name":"University of Waterloo, Canada and The Hong Kong University of Science and Technology, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8039-0528","authenticated-orcid":false,"given":"Wuqi","family":"Zhang","sequence":"additional","affiliation":[{"name":"The Hong Kong University of Science and Technology, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5588-9618","authenticated-orcid":false,"given":"Ming","family":"Wen","sequence":"additional","affiliation":[{"name":"Huazhong University of Science and Technology, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3508-7172","authenticated-orcid":false,"given":"Shing-Chi","family":"Cheung","sequence":"additional","affiliation":[{"name":"The Hong Kong University of Science and Technology, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0862-2491","authenticated-orcid":false,"given":"Chengnian","family":"Sun","sequence":"additional","affiliation":[{"name":"University of Waterloo, 
Canada"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1551-8948","authenticated-orcid":false,"given":"Shiqing","family":"Ma","sequence":"additional","affiliation":[{"name":"Rutgers University, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0955-503X","authenticated-orcid":false,"given":"Yu","family":"Jiang","sequence":"additional","affiliation":[{"name":"Tsinghua University, China"}]}],"member":"320","published-online":{"date-parts":[[2023,7,24]]},"reference":[{"key":"e_1_3_2_2_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2018.2807385"},{"key":"e_1_3_2_3_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-642-00296-0_5"},{"key":"e_1_3_2_4_2","first-page":"158","volume-title":"ECCV\u201918","author":"Bhagoji Arjun Nitin","year":"2018","unstructured":"Arjun Nitin Bhagoji, Warren He, Bo Li, and Dawn Song. 2018. Practical black-box attacks on deep neural networks using efficient query mechanisms. In ECCV\u201918, Vol. 11216. Springer, 158\u2013174."},{"key":"e_1_3_2_5_2","doi-asserted-by":"publisher","DOI":"10.1016\/B978-0-12-374457-9.00007-X"},{"key":"e_1_3_2_6_2","doi-asserted-by":"publisher","DOI":"10.1145\/1150402.1150464"},{"key":"e_1_3_2_7_2","volume-title":"ICLR\u201920","author":"Cai Han","year":"2020","unstructured":"Han Cai, Chuang Gan, Tianzhe Wang, Zhekai Zhang, and Song Han. 2020. Once-for-all: Train one network and specialize it for efficient deployment. In ICLR\u201920. OpenReview.net."},{"key":"e_1_3_2_8_2","first-page":"39","volume-title":"SP\u201917","author":"Carlini Nicholas","year":"2017","unstructured":"Nicholas Carlini and David A. Wagner. 2017. Towards evaluating the robustness of neural networks. In SP\u201917. 
IEEE Computer Society, 39\u201357."},{"key":"e_1_3_2_9_2","doi-asserted-by":"publisher","DOI":"10.1109\/JIOT.2022.3182313"},{"key":"e_1_3_2_10_2","volume-title":"Metamorphic Testing: A New Approach for Generating Next Test Cases","author":"Chen Tsong Yueh","year":"1998","unstructured":"Tsong Yueh Chen, Shing-Chi Cheung, and Siu Ming Yiu. 1998. Metamorphic Testing: A New Approach for Generating Next Test Cases. Technical Report HKUST-CS98-01. Department of Computer Science, HKUST, Hong Kong."},{"key":"e_1_3_2_11_2","first-page":"1257","volume-title":"ICSE\u201919","author":"Chen Yuting","year":"2019","unstructured":"Yuting Chen, Ting Su, and Zhendong Su. 2019. Deep differential testing of JVM implementations. In ICSE\u201919. IEEE \/ ACM, 1257\u20131268."},{"key":"e_1_3_2_12_2","doi-asserted-by":"publisher","DOI":"10.1145\/2908080.2908095"},{"key":"e_1_3_2_13_2","volume-title":"ICML 2021 Workshop on Adversarial Machine Learning","author":"Chen Zuohui","year":"2021","unstructured":"Zuohui Chen, RenXuan Wang, Yao Lu, jingyang Xiang, and Qi Xuan. 2021. Adversarial sample detection via channel pruning. In ICML 2021 Workshop on Adversarial Machine Learning."},{"key":"e_1_3_2_14_2","volume-title":"ICLR19","author":"Cheng Minhao","year":"2019","unstructured":"Minhao Cheng, Thong Le, Pin-Yu Chen, Huan Zhang, Jinfeng Yi, and Cho-Jui Hsieh. 2019. Query-efficient hard-label black-box attack: An optimization-based approach. In ICLR19."},{"key":"e_1_3_2_15_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2019.00489"},{"key":"e_1_3_2_16_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10462-020-09816-7"},{"key":"e_1_3_2_17_2","volume-title":"CVPR\u201909","author":"Deng J.","year":"2009","unstructured":"J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. 2009. ImageNet: A large-scale hierarchical image database. In CVPR\u201909."},{"key":"e_1_3_2_18_2","unstructured":"Jacob Devlin Ming-Wei Chang Kenton Lee and Kristina Toutanova. 2018. 
BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT\u201919) . Minneapolis MN."},{"key":"e_1_3_2_19_2","volume-title":"Digital Image Processing","author":"Gonzalez Rafael C.","year":"2008","unstructured":"Rafael C. Gonzalez and Richard E. Woods. 2008. Digital Image Processing. Prentice Hall, Upper Saddle River, NJ."},{"key":"e_1_3_2_20_2","volume-title":"ICLR\u201915","author":"Goodfellow Ian J.","year":"2015","unstructured":"Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversarial examples. In ICLR\u201915."},{"key":"e_1_3_2_21_2","doi-asserted-by":"publisher","DOI":"10.1007\/s11263-021-01453-z"},{"key":"e_1_3_2_22_2","first-page":"2484","volume-title":"ICML\u201919","author":"Guo Chuan","year":"2019","unstructured":"Chuan Guo, Jacob R. Gardner, Yurong You, Andrew Gordon Wilson, and Kilian Q. Weinberger. 2019. Simple black-box adversarial attacks. In ICML\u201919, Vol. 97. PMLR, 2484\u20132493."},{"key":"e_1_3_2_23_2","doi-asserted-by":"publisher","DOI":"10.1109\/ASE.2019.00080"},{"key":"e_1_3_2_24_2","volume-title":"ICLR\u201916","author":"Han Song","year":"2016","unstructured":"Song Han, Huizi Mao, and William J. Dally. 2016. Deep compression: Compressing deep neural network with pruning, trained quantization and Huffman coding. In ICLR\u201916, Yoshua Bengio and Yann LeCun (Eds.)."},{"key":"e_1_3_2_25_2","first-page":"1135","volume-title":"NIPS\u201915","author":"Han Song","year":"2015","unstructured":"Song Han, Jeff Pool, John Tran, and William J. Dally. 2015. Learning both weights and connections for efficient neural networks. In NIPS\u201915. MIT Press, Cambridge, MA, 1135\u20131143."},{"key":"e_1_3_2_26_2","unstructured":"Awni Y. 
Hannun Carl Case Jared Casper Bryan Catanzaro Greg Diamos Erich Elsen Ryan Prenger Sanjeev Satheesh Shubho Sengupta Adam Coates and Andrew Y. Ng. 2014. Deep speech: Scaling up end-to-end speech recognition. arXiv:1412.5567. Retrieved from http:\/\/arxiv.org\/abs\/1412.5567."},{"key":"e_1_3_2_27_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2015.2465908"},{"key":"e_1_3_2_28_2","unstructured":"Qiang Hu Yuejun Guo Maxime Cordy Xiaofei Xie Wei Ma Mike Papadakis and Yves Le Traon. 2022. Characterizing and understanding the behavior of quantized models for reliable deployment. DOI:arXiv.2204.04220. Retrieved from arXiv:2204.04220."},{"key":"e_1_3_2_29_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v33i01.33013854"},{"issue":"2","key":"e_1_3_2_30_2","doi-asserted-by":"crossref","first-page":"93","DOI":"10.1080\/00031305.1998.10480547","article-title":"Markov chain monte carlo in practice: A roundtable discussion","volume":"52","author":"Kass Robert E.","year":"1998","unstructured":"Robert E. Kass, Bradley P. Carlin, Andrew Gelman, and Radford M. Neal. 1998. Markov chain monte carlo in practice: A roundtable discussion. Am. Stat. 52, 2 (May1998), 93\u2013100.","journal-title":"Am. Stat."},{"key":"e_1_3_2_31_2","doi-asserted-by":"publisher","DOI":"10.1109\/IOLTS.2019.8854377"},{"key":"e_1_3_2_32_2","first-page":"1039","volume-title":"ICSE\u201919","author":"Kim Jinhan","year":"2019","unstructured":"Jinhan Kim, Robert Feldt, and Shin Yoo. 2019. Guiding deep learning system testing using surprise adequacy. In ICSE\u201919. IEEE Press, 1039\u20131049."},{"key":"e_1_3_2_33_2","unstructured":"Alex Krizhevsky Vinod Nair and Geoffrey Hinton. 2009. The CIFAR-10 Dataset. Retrieved fromhttp:\/\/www.cs.toronto.edu\/kriz\/cifar.html."},{"key":"e_1_3_2_34_2","doi-asserted-by":"publisher","DOI":"10.1145\/2594291.2594334"},{"key":"e_1_3_2_35_2","doi-asserted-by":"crossref","unstructured":"Vu Le Chengnian Sun and Zhendong Su. 2015. 
Finding deep compiler bugs via guided stochastic program mutation. In OOPSLA\u201915. ACM New York NY 386\u2013399.","DOI":"10.1145\/2858965.2814319"},{"key":"e_1_3_2_36_2","doi-asserted-by":"publisher","DOI":"10.1109\/5.726791"},{"key":"e_1_3_2_37_2","unstructured":"Yann LeCun and Corinna Cortes. 2010. MNIST Handwritten Digit Database. Retrieved from http:\/\/yann.lecun.com\/exdb\/mnist\/."},{"key":"e_1_3_2_38_2","volume-title":"ICLR\u201917","author":"Li Hao","year":"2017","unstructured":"Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf. 2017. Pruning filters for efficient ConvNets. In ICLR\u201917."},{"key":"e_1_3_2_39_2","volume-title":"ICLR\u201920","author":"Lin Tao","year":"2020","unstructured":"Tao Lin, Sebastian U. Stich, Luis Barba, Daniil Dmitriev, and Martin Jaggi. 2020. Dynamic model pruning with feedback. In ICLR\u201920. OpenReview.net."},{"key":"e_1_3_2_40_2","doi-asserted-by":"publisher","DOI":"10.5555\/2636992"},{"key":"e_1_3_2_41_2","unstructured":"TensorFlow Lite. 2022. TensorFlow Lite. Retrieved May 20 2022 from https:\/\/www.tensorflow.org\/lite."},{"key":"e_1_3_2_42_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2021.3066410"},{"key":"e_1_3_2_43_2","first-page":"721","volume-title":"ASPDAC\u201918","author":"Liu Qi","year":"2018","unstructured":"Qi Liu, Tao Liu, Zihao Liu, Yanzhi Wang, Yier Jin, and Wujie Wen. 2018. Security analysis and enhancement of model compressed deep learning systems under adversarial attacks. In ASPDAC\u201918. IEEE Press, 721\u2013726."},{"key":"e_1_3_2_44_2","doi-asserted-by":"publisher","DOI":"10.1145\/3238147.3238202"},{"key":"e_1_3_2_45_2","first-page":"100","volume-title":"ISSRE\u201918","author":"Ma Lei","year":"2018","unstructured":"Lei Ma, Fuyuan Zhang, Jiyuan Sun, Minhui Xue, Bo Li, Felix Juefei-Xu, Chao Xie, Li Li, Yang Liu, Jianjun Zhao, and Yadong Wang. 2018. DeepMutation: Mutation testing of deep learning systems. 
In ISSRE\u201918, Sudipto Ghosh, Roberto Natella, Bojan Cukic, Robin Poston, and Nuno Laranjeiro (Eds.). IEEE Computer Society, 100\u2013111."},{"issue":"1","key":"e_1_3_2_46_2","first-page":"100","article-title":"Differential Testing for Software","volume":"10","author":"McKeeman William M.","year":"1998","unstructured":"William M. McKeeman. 1998. Differential Testing for Software. Digit. Techn. J. 10, 1 (1998), 100\u2013107.","journal-title":"Digit. Techn. J."},{"key":"e_1_3_2_47_2","doi-asserted-by":"publisher","DOI":"10.1017\/CBO9780511626630"},{"key":"e_1_3_2_48_2","volume-title":"ICLR\u201918","author":"Mishra Asit K.","year":"2018","unstructured":"Asit K. Mishra and Debbie Marr. 2018. Apprentice: Using knowledge distillation techniques to improve low-precision network accuracy. In ICLR\u201918."},{"key":"e_1_3_2_49_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2014.2321376"},{"key":"e_1_3_2_50_2","first-page":"4901","volume-title":"ICML\u201919","author":"Odena Augustus","year":"2019","unstructured":"Augustus Odena, Catherine Olsson, David Andersen, and Ian J. Goodfellow. 2019. TensorFuzz: Debugging neural networks with coverage-guided fuzzing. In ICML\u201919. 4901\u20134911."},{"key":"e_1_3_2_51_2","unstructured":"ONNX. 2022. ONNX Inference. Retrieved May 20 2022 from https:\/\/onnxruntime.ai\/."},{"key":"e_1_3_2_52_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP.2015.7178964"},{"key":"e_1_3_2_53_2","doi-asserted-by":"publisher","DOI":"10.1145\/3377811.3380337"},{"key":"e_1_3_2_54_2","doi-asserted-by":"publisher","DOI":"10.1145\/3324884.3416560"},{"key":"e_1_3_2_55_2","first-page":"1","volume-title":"SOSP\u201917","author":"Pei Kexin","year":"2017","unstructured":"Kexin Pei, Yinzhi Cao, Junfeng Yang, and Suman Jana. 2017. DeepXplore: Automated whitebox testing of deep learning systems. In SOSP\u201917. 
ACM, New York, NY, 1\u201318."},{"key":"e_1_3_2_56_2","doi-asserted-by":"publisher","DOI":"10.1145\/3324884.3416545"},{"key":"e_1_3_2_57_2","volume-title":"ICLR\u201918","author":"Polino Antonio","year":"2018","unstructured":"Antonio Polino, Razvan Pascanu, and Dan Alistarh. 2018. Model compression via distillation and quantization. In ICLR\u201918."},{"key":"e_1_3_2_58_2","unstructured":"PyTorch. 2022. Models and pre-trained weights: Quantized models. Retrieved August 19 2022 from https:\/\/pytorch.org\/vision\/stable\/models.html#quantized-models."},{"key":"e_1_3_2_59_2","volume-title":"ECCV\u201916","author":"Rastegari Mohammad","year":"2016","unstructured":"Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. 2016. XNOR-Net: ImageNet classification using binary convolutional neural networks. In ECCV\u201916."},{"key":"e_1_3_2_60_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2016.2577031"},{"key":"e_1_3_2_61_2","doi-asserted-by":"publisher","DOI":"10.1145\/3368089.3409730"},{"key":"e_1_3_2_62_2","first-page":"696","volume-title":"Learning Representations by Back-Propagating Errors","author":"Rumelhart David E.","year":"1988","unstructured":"David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. 1988. Learning Representations by Back-Propagating Errors. MIT Press, Cambridge, MA, 696\u2013699."},{"key":"e_1_3_2_63_2","article-title":"Adversarial training for free!","volume":"32","author":"Shafahi Ali","year":"2019","unstructured":"Ali Shafahi, Mahyar Najibi, Mohammad Amin Ghiasi, Zheng Xu, John Dickerson, Christoph Studer, Larry S. Davis, Gavin Taylor, and Tom Goldstein. 2019. Adversarial training for free! Adv. Neural Inf. Process. Syst. 32 (2019).","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"e_1_3_2_64_2","first-page":"6519","volume-title":"CVPR\u201919","author":"Shi Yucheng","year":"2019","unstructured":"Yucheng Shi, Siyu Wang, and Yahong Han. 2019. Curls & whey: Boosting black-box adversarial attacks. 
In CVPR\u201919. Computer Vision Foundation\/IEEE, 6519\u20136527."},{"key":"e_1_3_2_65_2","doi-asserted-by":"publisher","DOI":"10.3390\/electronics8060661"},{"key":"e_1_3_2_66_2","doi-asserted-by":"crossref","unstructured":"Pravendra Singh Vinay Kumar Verma Piyush Rai and Vinay P. Namboodiri. 2019. Play and prune: Adaptive filter pruning for deep model compression. In IJCAI\u201919. AAAI Press 3460\u20133466.","DOI":"10.24963\/ijcai.2019\/480"},{"key":"e_1_3_2_67_2","unstructured":"Nvidia TensorRT. 2022. Nvidia. Retrieved May 20 2022 from https:\/\/developer.nvidia.com\/tensorrt."},{"key":"e_1_3_2_68_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10664-021-09985-1"},{"key":"e_1_3_2_69_2","first-page":"303","volume-title":"ICSE\u201918","author":"Tian Yuchi","year":"2018","unstructured":"Yuchi Tian, Kexin Pei, Suman Jana, and Baishakhi Ray. 2018. DeepTest: Automated testing of deep-neural-network-driven autonomous cars. In ICSE\u201918. ACM, New York, NY, 303\u2013314."},{"key":"e_1_3_2_70_2","doi-asserted-by":"publisher","DOI":"10.1145\/3489517.3530400"},{"key":"e_1_3_2_71_2","doi-asserted-by":"publisher","DOI":"10.1145\/3533767.3534386"},{"key":"e_1_3_2_72_2","first-page":"8612","volume-title":"CVPR\u201919","author":"Wang Kuan","year":"2019","unstructured":"Kuan Wang, Zhijian Liu, Yujun Lin, Ji Lin, and Song Han. 2019. HAQ: Hardware-aware automated quantization with mixed precision. In CVPR\u201919. 
Computer Vision Foundation\/IEEE, 8612\u20138620."},{"key":"e_1_3_2_73_2","doi-asserted-by":"publisher","DOI":"10.1145\/3368089.3409761"},{"key":"e_1_3_2_74_2","doi-asserted-by":"publisher","DOI":"10.2307\/3001968"},{"key":"e_1_3_2_75_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.521"},{"key":"e_1_3_2_76_2","doi-asserted-by":"publisher","DOI":"10.1145\/3293882.3330579"},{"key":"e_1_3_2_77_2","first-page":"5772","volume-title":"IJCAI\u201919","author":"Xie Xiaofei","year":"2019","unstructured":"Xiaofei Xie, Lei Ma, Haijun Wang, Yuekang Li, Yang Liu, and Xiaohong Li. 2019. DiffChaser: Detecting disagreements for deep neural networks. In IJCAI\u201919. 5772\u20135778."},{"key":"e_1_3_2_78_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10664-022-10202-w"},{"key":"e_1_3_2_79_2","doi-asserted-by":"publisher","DOI":"10.1145\/3368089.3409750"},{"key":"e_1_3_2_80_2","first-page":"7472","volume-title":"ICML\u201919)","author":"Zhang Hongyang","year":"2019","unstructured":"Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric P. Xing, Laurent El Ghaoui, and Michael I. Jordan. 2019. Theoretically principled trade-off between robustness and accuracy. In ICML\u201919), Kamalika Chaudhuri and Ruslan Salakhutdinov (Eds.), Vol. 97. PMLR, 7472\u20137482. http:\/\/proceedings.mlr.press\/v97\/zhang19p.html."},{"key":"e_1_3_2_81_2","volume-title":"ICLR\u201917","author":"Zhou Aojun","year":"2017","unstructured":"Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, and Yurong Chen. 2017. Incremental network quantization: Towards lossless CNNs with low-precision weights. In ICLR\u201917."},{"key":"e_1_3_2_82_2","volume-title":"ICLR\u201918","author":"Zhu Michael","year":"2018","unstructured":"Michael Zhu and Suyog Gupta. 2018. To prune, or not to prune: Exploring the efficacy of pruning for model compression. In ICLR\u201918. OpenReview.net."},{"key":"e_1_3_2_83_2","unstructured":"Neta Zmora Guy Jacob Lev Zlotnik Bar Elharar and Gal Novik. 2019. 
Neural network distiller: A python package for DNN compression research. arXiv:1910.12232. Retrieved from https:\/\/arxiv.org\/abs\/1910.12232."}],"container-title":["ACM Transactions on Software Engineering and Methodology"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3583564","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3583564","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T16:37:54Z","timestamp":1750178274000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3583564"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,7,24]]},"references-count":82,"journal-issue":{"issue":"5","published-print":{"date-parts":[[2023,9,30]]}},"alternative-id":["10.1145\/3583564"],"URL":"https:\/\/doi.org\/10.1145\/3583564","relation":{},"ISSN":["1049-331X","1557-7392"],"issn-type":[{"value":"1049-331X","type":"print"},{"value":"1557-7392","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,7,24]]},"assertion":[{"value":"2022-05-26","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-01-17","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-07-24","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}