{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,6]],"date-time":"2025-12-06T16:46:28Z","timestamp":1765039588905,"version":"3.41.0"},"publisher-location":"New York, NY, USA","reference-count":7,"publisher":"ACM","license":[{"start":{"date-parts":[[2021,7,23]],"date-time":"2021-07-23T00:00:00Z","timestamp":1626998400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2021,7,23]]},"DOI":"10.1145\/3478905.3478940","type":"proceedings-article","created":{"date-parts":[[2021,9,28]],"date-time":"2021-09-28T15:56:17Z","timestamp":1632844577000},"page":"173-176","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":3,"title":["Model Compression with NAS and Knowledge Distillation for Medical Image Segmentation"],"prefix":"10.1145","author":[{"given":"Zhong","family":"Zheng","sequence":"first","affiliation":[{"name":"Beijing University of Posts and Telecommunications, China"}]},{"given":"Guixia","family":"Kang","sequence":"additional","affiliation":[{"name":"Beijing University of Posts and Telecommunications, China"}]}],"member":"320","published-online":{"date-parts":[[2021,9,28]]},"reference":[{"key":"e_1_3_2_1_1_1","volume-title":"Once-for-all: Train one network and specialize it for efficient deployment. arXiv preprint arXiv:1908.09791(2019).","author":"Cai Han","year":"2019","unstructured":"Han Cai, Chuang Gan, Tianzhe Wang, Zhekai Zhang, and Song Han. 2019. Once-for-all: Train one network and specialize it for efficient deployment. arXiv preprint arXiv:1908.09791(2019)."},{"key":"e_1_3_2_1_2_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.isprsjprs.2020.01.013"},{"key":"e_1_3_2_1_3_1","volume-title":"Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861(2017).","author":"Howard G","year":"2017","unstructured":"Andrew\u00a0G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. 2017. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861(2017)."},{"key":"e_1_3_2_1_4_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.00533"},{"key":"e_1_3_2_1_5_1","volume-title":"The multimodal brain tumor image segmentation benchmark (BRATS)","author":"Menze H","year":"2014","unstructured":"Bjoern\u00a0H Menze, Andras Jakab, Stefan Bauer, Jayashree Kalpathy-Cramer, Keyvan Farahani, Justin Kirby, Yuliya Burren, Nicole Porz, Johannes Slotboom, Roland Wiest, 2014. The multimodal brain tumor image segmentation benchmark (BRATS). IEEE transactions on medical imaging 34, 10 (2014), 1993\u20132024."},{"key":"e_1_3_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2015.2408351"},{"key":"e_1_3_2_1_7_1","unstructured":"Sergey Zagoruyko and Nikos Komodakis. 2016. Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer. arXiv preprint arXiv:1612.03928(2016)."}],"event":{"name":"DSIT 2021: 2021 4th International Conference on Data Science and Information Technology","acronym":"DSIT 2021","location":"Shanghai China"},"container-title":["2021 4th International Conference on Data Science and Information Technology"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3478905.3478940","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3478905.3478940","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T20:18:37Z","timestamp":1750191517000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3478905.3478940"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,7,23]]},"references-count":7,"alternative-id":["10.1145\/3478905.3478940","10.1145\/3478905"],"URL":"https:\/\/doi.org\/10.1145\/3478905.3478940","relation":{},"subject":[],"published":{"date-parts":[[2021,7,23]]},"assertion":[{"value":"2021-09-28","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}