{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,31]],"date-time":"2025-10-31T07:53:56Z","timestamp":1761897236273,"version":"3.41.0"},"publisher-location":"New York, NY, USA","reference-count":44,"publisher":"ACM","license":[{"start":{"date-parts":[[2019,12,20]],"date-time":"2019-12-20T00:00:00Z","timestamp":1576800000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2019,12,20]]},"DOI":"10.1145\/3377713.3377739","type":"proceedings-article","created":{"date-parts":[[2020,2,7]],"date-time":"2020-02-07T10:07:26Z","timestamp":1581070046000},"page":"189-196","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":12,"title":["Evolutionary NetArchitecture Search for Deep Neural Networks Pruning"],"prefix":"10.1145","author":[{"given":"Shuxin","family":"Chen","sequence":"first","affiliation":[{"name":"Dalian University of Technology, Dalian, China"}]},{"given":"Lin","family":"Lin","sequence":"additional","affiliation":[{"name":"Dalian University of Technology, Dalian, China"}]},{"given":"Zixun","family":"Zhang","sequence":"additional","affiliation":[{"name":"Dalian University of Technology, Dalian, China"}]},{"given":"Mitsuo","family":"Gen","sequence":"additional","affiliation":[{"name":"Fuzzy Logic Systems Institute, Iizuka, Japan"}]}],"member":"320","published-online":{"date-parts":[[2020,2,7]]},"reference":[{"volume-title":"Advances in neural information processing systems","author":"Krizhevsky Alex","unstructured":"
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems. MIT Press, Cambridge, 1097--1105.","key":"e_1_3_2_1_1_1"},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_2_1","DOI":"10.1109\/CVPR.2015.7298594"},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_3_1","DOI":"10.4316\/AECE.2013.01015"},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_4_1","DOI":"10.1109\/CVPR.2015.7298965"},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_5_1","DOI":"10.1109\/CVPR.2014.81"},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_6_1","DOI":"10.1109\/ICCV.2017.322"},{"key":"e_1_3_2_1_7_1","volume-title":"Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556","author":"Simonyan Karen","year":"2014","unstructured":"Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)."},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_8_1","DOI":"10.1109\/CVPR.2016.90"},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_9_1","DOI":"10.1109\/CVPR.2017.634"},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_10_1","DOI":"10.1109\/CVPR.2017.243"},{"volume-title":"Advances in neural information processing systems","author":"LeCun Yann","unstructured":"Yann LeCun, John S Denker, and Sara A Solla. 1990. Optimal brain damage. In Advances in neural information processing systems. MIT Press, Cambridge, 598--605.","key":"e_1_3_2_1_11_1"},{"key":"e_1_3_2_1_12_1","volume-title":"Distilling the knowledge in a neural network. 
arXiv preprint arXiv:1503.02531","author":"Hinton Geoffrey","year":"2015","unstructured":"Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531 (2015)."},{"key":"e_1_3_2_1_13_1","volume-title":"Antoine Chassang, Carlo Gatta, and Yoshua Bengio.","author":"Romero Adriana","year":"2014","unstructured":"Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. 2014. Fitnets: Hints for thin deep nets. arXiv preprint arXiv:1412.6550 (2014)."},{"key":"e_1_3_2_1_14_1","volume-title":"Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861","author":"Howard Andrew G","year":"2017","unstructured":"Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. 2017. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861 (2017)."},{"key":"e_1_3_2_1_15_1","volume-title":"SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and &lt","author":"Iandola Forrest N","year":"2016","unstructured":"Forrest N Iandola, Song Han, Matthew W Moskewicz, Khalid Ashraf, William J Dally, and Kurt Keutzer. 2016. 
SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and &lt; 0.5 MB model size. arXiv preprint arXiv:1602.07360 (2016)."},{"volume-title":"Advances in neural information processing systems","author":"Denton Emily L","unstructured":"Emily L Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, and Rob Fergus. 2014. Exploiting linear structure within convolutional networks for efficient evaluation. In Advances in neural information processing systems. MIT Press, Cambridge, 1269--1277.","key":"e_1_3_2_1_16_1"},{"unstructured":"Cheng Tai, Tong Xiao, Yi Zhang, Xiaogang Wang, et al. 2015. Convolutional neural networks with low-rank regularization. arXiv preprint arXiv:1511.06067 (2015).","key":"e_1_3_2_1_17_1"},{"key":"e_1_3_2_1_18_1","volume-title":"Accelerating very deep convolutional networks for classification and detection. 38, 10","author":"Zhang Xiangyu","year":"2015","unstructured":"Xiangyu Zhang, Jianhua Zou, Kaiming He, and Jian Sun. 2015. Accelerating very deep convolutional networks for classification and detection. 
38, 10 (2015), 1943--1955."},{"key":"e_1_3_2_1_19_1","volume-title":"Binarized neural networks: Training deep neural networks with weights and activations constrained to +1 or -1. arXiv preprint arXiv:1602.02830","author":"Courbariaux Matthieu","year":"2016","unstructured":"Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. 2016. Binarized neural networks: Training deep neural networks with weights and activations constrained to +1 or -1. arXiv preprint arXiv:1602.02830 (2016)."},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_20_1","DOI":"10.1007\/978-3-319-46493-0_32"},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_21_1","DOI":"10.1109\/CVPR.2016.521"},{"key":"e_1_3_2_1_22_1","volume-title":"Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149","author":"Han Song","year":"2015","unstructured":"Song Han, Huizi Mao, and William J Dally. 2015. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149 (2015)."},{"key":"e_1_3_2_1_23_1","volume-title":"Soft filter pruning for accelerating deep convolutional neural networks. arXiv preprint arXiv:1808.06866","author":"He Yang","year":"2018","unstructured":"
Yang He, Guoliang Kang, Xuanyi Dong, Yanwei Fu, and Yi Yang. 2018. Soft filter pruning for accelerating deep convolutional neural networks. arXiv preprint arXiv:1808.06866 (2018)."},{"key":"e_1_3_2_1_24_1","volume-title":"Pruning filters for efficient convnets. arXiv preprint arXiv:1608.08710","author":"Li Hao","year":"2016","unstructured":"Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf. 2016. Pruning filters for efficient convnets. arXiv preprint arXiv:1608.08710 (2016)."},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_25_1","DOI":"10.1109\/CVPR.2018.00958"},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_26_1","DOI":"10.1007\/978-3-030-01270-0_19"},{"volume-title":"Advances in neural information processing systems","author":"Han Song","unstructured":"Song Han, Jeff Pool, John Tran, and William Dally. 2015. Learning both weights and connections for efficient neural network. In Advances in neural information processing systems. MIT Press, Cambridge, 1135--1143.","key":"e_1_3_2_1_27_1"},{"volume-title":"Advances In Neural Information Processing Systems","author":"Guo Yiwen","unstructured":"Yiwen Guo, Anbang Yao, and Yurong Chen. 2016. Dynamic network surgery for efficient dnns. In Advances In Neural Information Processing Systems. 
MIT Press, Cambridge, 1379--1387.","key":"e_1_3_2_1_28_1"},{"key":"e_1_3_2_1_29_1","volume-title":"Advances in neural information processing systems","author":"Wen Wei","year":"2016","unstructured":"Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. 2016. Learning structured sparsity in deep neural networks. In Advances in neural information processing systems. MIT Press, Cambridge, 2074--2082."},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_30_1","DOI":"10.1007\/978-3-319-46493-0_40"},{"key":"e_1_3_2_1_31_1","volume-title":"Pruning convolutional neural networks for resource efficient transfer learning. arXiv preprint arXiv:1611.06440-3","author":"Molchanov Pavlo","year":"2016","unstructured":"Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, and Jan Kautz. 2016. Pruning convolutional neural networks for resource efficient transfer learning. arXiv preprint arXiv:1611.06440-3 (2016)."},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_32_1","DOI":"10.1109\/ICCV.2017.155"},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_33_1","DOI":"10.1109\/ICCV.2017.298"},{"key":"e_1_3_2_1_34_1","volume-title":"[Proceedings] COGANN-92: International Workshop on Combinations of Genetic Algorithms and Neural Networks. IEEE Computer Society","author":"Dasgupta Dipankar","year":"1992","unstructured":"Dipankar Dasgupta and Douglas R McGregor. 1992. Designing application-specific neural networks using the structured genetic algorithm. In [Proceedings] COGANN-92: International Workshop on Combinations of Genetic Algorithms and Neural Networks. 
IEEE Computer Society, Los Alamitos, CA, 87--96."},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_35_1","DOI":"10.5555\/3305890.3305981"},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_36_1","DOI":"10.1609\/aaai.v33i01.33014780"},{"doi-asserted-by":"crossref","unstructured":"Risto Miikkulainen, Jason Liang, Elliot Meyerson, Aditya Rawal, Daniel Fink, Olivier Francon, Bala Raju, Hormoz Shahrzad, Arshak Navruzyan, Nigel Duffy, et al. 2019. Evolving deep neural networks. In Artificial Intelligence in the Age of Neural Networks and Brain Computing. Elsevier, London, 293--312.","key":"e_1_3_2_1_37_1","DOI":"10.1016\/B978-0-12-815480-9.00015-3"},{"key":"e_1_3_2_1_38_1","first-page":"379","article-title":"Designing Neural Networks using Genetic Algorithms","volume":"89","author":"Miller Geoffrey F","year":"1989","unstructured":"Geoffrey F Miller, Peter M Todd, and Shailesh U Hegde. 1989. Designing Neural Networks using Genetic Algorithms. In ICGA, Vol. 89. 379--384.","journal-title":"ICGA"},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_39_1","DOI":"10.5555\/1704555.1704728"},{"key":"e_1_3_2_1_40_1","volume-title":"Evolving neural networks through augmenting topologies. 
Evolutionary computation 10, 2","author":"Stanley Kenneth O","year":"2002","unstructured":"Kenneth O Stanley and Risto Miikkulainen. 2002. Evolving neural networks through augmenting topologies. Evolutionary computation 10, 2 (2002), 99--127."},{"key":"e_1_3_2_1_41_1","volume-title":"International conference on machine learning. ACM","author":"Xu Kelvin","year":"2015","unstructured":"Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In International conference on machine learning. ACM, New York, NY, 2048--2057."},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_42_1","DOI":"10.1109\/CVPR.2018.00745"},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_43_1","DOI":"10.1109\/CVPR.2017.667"},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_44_1","DOI":"10.1109\/CVPR.2017.683"}],"event":{"sponsor":["Chinese University 
of Hong Kong"],"acronym":"ACAI 2019","name":"ACAI 2019: 2019 2nd International Conference on Algorithms, Computing and Artificial Intelligence","location":"Sanya, China"},"container-title":["Proceedings of the 2019 2nd International Conference on Algorithms, Computing and Artificial Intelligence"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3377713.3377739","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3377713.3377739","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T23:23:55Z","timestamp":1750202635000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3377713.3377739"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2019,12,20]]},"references-count":44,"alternative-id":["10.1145\/3377713.3377739","10.1145\/3377713"],"URL":"https:\/\/doi.org\/10.1145\/3377713.3377739","relation":{},"subject":[],"published":{"date-parts":[[2019,12,20]]},"assertion":[{"value":"2020-02-07","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}