{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2023,1,7]],"date-time":"2023-01-07T05:56:41Z","timestamp":1673071001072},"publisher-location":"New York, NY, USA","reference-count":19,"publisher":"ACM","license":[{"start":{"date-parts":[[2020,11,3]],"date-time":"2020-11-03T00:00:00Z","timestamp":1604361600000},"content-version":"vor","delay-in-days":366,"URL":"http:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/100000001","name":"National Science Foundation","doi-asserted-by":"publisher","award":["1447711, 1514357, 1743418, and 1843025"]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2019,11,3]]},"DOI":"10.1145\/3357384.3358150","type":"proceedings-article","created":{"date-parts":[[2019,11,4]],"date-time":"2019-11-04T14:11:35Z","timestamp":1572876695000},"update-policy":"http:\/\/dx.doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":2,"title":["Adversarial Structured Neural Network Pruning"],"prefix":"10.1145","author":[{"given":"Xingyu","family":"Cai","sequence":"first","affiliation":[{"name":"University of Connecticut, Storrs, CT, USA"}]},{"given":"Jinfeng","family":"Yi","sequence":"additional","affiliation":[{"name":"JD AI Research, Nanjing, China"}]},{"given":"Fan","family":"Zhang","sequence":"additional","affiliation":[{"name":"Zhejiang University, Hangzhou, China"}]},{"given":"Sanguthevar","family":"Rajasekaran","sequence":"additional","affiliation":[{"name":"University of Connecticut, Storrs, CT, USA"}]}],"member":"320","published-online":{"date-parts":[[2019,11,3]]},"reference":[{"key":"e_1_3_2_1_1_1","volume-title":"Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. 
arXiv preprint arXiv:1802.00420","author":"Athalye Anish","year":"2018","unstructured":"Anish Athalye, Nicholas Carlini, and David Wagner. 2018. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. arXiv preprint arXiv:1802.00420 (2018)."},{"key":"e_1_3_2_1_2_1","doi-asserted-by":"publisher","DOI":"10.1145\/3128572.3140444"},{"key":"e_1_3_2_1_3_1","volume-title":"EAD: elastic-net attacks to deep neural networks via adversarial examples. arXiv preprint arXiv:1709.04114","author":"Chen Pin-Yu","year":"2017","unstructured":"Pin-Yu Chen, Yash Sharma, Huan Zhang, Jinfeng Yi, and Cho-Jui Hsieh. 2017. EAD: elastic-net attacks to deep neural networks via adversarial examples. arXiv preprint arXiv:1709.04114 (2017)."},{"key":"e_1_3_2_1_4_1","volume-title":"NIPS","author":"Denil Misha","year":"2013","unstructured":"Misha Denil, Babak Shakibi, Laurent Dinh, Nando de Freitas, et al. 2013. Predicting parameters in deep learning. In NIPS. 2148--2156."},{"key":"e_1_3_2_1_5_1","unstructured":"Emily L Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, and Rob Fergus. 2014. Exploiting linear structure within convolutional networks for efficient evaluation. In NIPS. 
1269--1277."},{"key":"e_1_3_2_1_6_1","volume-title":"Trained Quantization and Huffman Coding. International Conference on Learning Representations (ICLR)","author":"Han Song","year":"2016","unstructured":"Song Han, Huizi Mao, and William J Dally. 2016. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding. International Conference on Learning Representations (ICLR) (2016)."},{"key":"e_1_3_2_1_7_1","unstructured":"Song Han, Jeff Pool, John Tran, and William Dally. 2015. Learning both weights and connections for efficient neural network. In NIPS. 1135--1143."},{"key":"e_1_3_2_1_8_1","volume-title":"Data-driven sparse structure selection for deep neural networks. arXiv preprint arXiv:1707.01213","author":"Huang Zehao","year":"2017","unstructured":"Zehao Huang and Naiyan Wang. 2017. Data-driven sparse structure selection for deep neural networks. arXiv preprint arXiv:1707.01213 (2017)."},{"key":"e_1_3_2_1_9_1","volume-title":"SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and < 0.5MB model size. arXiv preprint","author":"Iandola Forrest N","year":"2016","unstructured":"Forrest N Iandola, Song Han, Matthew W Moskewicz, Khalid Ashraf, William J Dally, and Kurt Keutzer. 2016. 
SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and < 0.5MB model size. arXiv preprint (2016)."},{"key":"e_1_3_2_1_10_1","volume-title":"Extremely low bit neural network: Squeeze the last bit out with admm. arXiv preprint","author":"Leng Cong","year":"2017","unstructured":"Cong Leng, Hao Li, Shenghuo Zhu, and Rong Jin. 2017. Extremely low bit neural network: Squeeze the last bit out with admm. arXiv preprint (2017)."},{"key":"e_1_3_2_1_11_1","unstructured":"Christos Louizos, Karen Ullrich, and Max Welling. 2017a. Bayesian compression for deep learning. In NIPS."},{"key":"e_1_3_2_1_12_1","volume-title":"Learning Sparse Neural Networks through $L_0$ Regularization. arXiv preprint","author":"Louizos Christos","year":"2017","unstructured":"Christos Louizos, Max Welling, and Diederik P Kingma. 2017b. Learning Sparse Neural Networks through $L_0$ Regularization. arXiv preprint (2017)."},{"key":"e_1_3_2_1_13_1","volume-title":"Variational Dropout Sparsifies Deep Neural Networks. arXiv preprint","author":"Molchanov Dmitry","year":"2017","unstructured":"Dmitry Molchanov, Arsenii Ashukha, and Dmitry Vetrov. 2017. Variational Dropout Sparsifies Deep Neural Networks. arXiv preprint (2017)."},{"key":"e_1_3_2_1_14_1","unstructured":"
Kirill Neklyudov, Dmitry Molchanov, Arsenii Ashukha, and Dmitry P Vetrov. 2017. Structured bayesian pruning via log-normal multiplicative noise. In NIPS. 6778--6787."},{"key":"e_1_3_2_1_15_1","volume-title":"Adversarial Dropout for Supervised and Semi-supervised Learning. arXiv preprint arXiv:1707.03631","author":"Park Sungrae","year":"2017","unstructured":"Sungrae Park, Jun-Keon Park, Su-Jin Shin, and Il-Chul Moon. 2017. Adversarial Dropout for Supervised and Semi-supervised Learning. arXiv preprint arXiv:1707.03631 (2017)."},{"key":"e_1_3_2_1_16_1","volume-title":"Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199","author":"Szegedy Christian","year":"2013","unstructured":"Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2013. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199 (2013)."},{"key":"e_1_3_2_1_17_1","volume-title":"Soft weight-sharing for neural network compression. arXiv preprint arXiv:1702.04008","author":"Ullrich Karen","year":"2017","unstructured":"Karen Ullrich, Edward Meeds, and Max Welling. 2017. Soft weight-sharing for neural network compression. arXiv preprint arXiv:1702.04008 (2017)."},{"key":"e_1_3_2_1_18_1","unstructured":"Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. 
2016. Learning structured sparsity in deep neural networks. In NIPS. 2074--2082."},{"key":"e_1_3_2_1_19_1","volume-title":"Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers. arXiv preprint arXiv:1802.00124","author":"Ye Jianbo","year":"2018","unstructured":"Jianbo Ye, Xin Lu, Zhe Lin, and James Z Wang. 2018. Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers. arXiv preprint arXiv:1802.00124 (2018)."}],"event":{"name":"CIKM '19: The 28th ACM International Conference on Information and Knowledge Management","location":"Beijing China","acronym":"CIKM '19","sponsor":["SIGWEB ACM Special Interest Group on Hypertext, Hypermedia, and Web","SIGIR ACM Special Interest Group on Information Retrieval"]},"container-title":["Proceedings of the 28th ACM International Conference on Information and Knowledge 
Management"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3357384.3358150","content-type":"application\/pdf","content-version":"vor","intended-application":"syndication"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3357384.3358150","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,1,6]],"date-time":"2023-01-06T13:27:07Z","timestamp":1673011627000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3357384.3358150"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2019,11,3]]},"references-count":19,"alternative-id":["10.1145\/3357384.3358150","10.1145\/3357384"],"URL":"http:\/\/dx.doi.org\/10.1145\/3357384.3358150","relation":{},"published":{"date-parts":[[2019,11,3]]},"assertion":[{"value":"2019-11-03","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}