{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,11,18]],"date-time":"2025-11-18T09:26:09Z","timestamp":1763457969542,"version":"3.41.0"},"reference-count":95,"publisher":"Association for Computing Machinery (ACM)","issue":"1","license":[{"start":{"date-parts":[[2020,10,15]],"date-time":"2020-10-15T00:00:00Z","timestamp":1602720000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/100000001","name":"National Science Foundation","doi-asserted-by":"publisher","award":["OAC-1747694"],"award-info":[{"award-number":["OAC-1747694"]}],"id":[{"id":"10.13039\/100000001","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Des. Autom. Electron. Syst."],"published-print":{"date-parts":[[2021,1,31]]},"abstract":"<jats:p>\n            Embedded devices are generally small, battery-powered computers with limited hardware resources. It is difficult to run deep neural networks (DNNs) on these devices, because DNNs perform millions of operations and consume significant amounts of energy. Prior research has shown that a considerable number of a DNN\u2019s memory accesses and computation are redundant when performing tasks like image classification. To reduce this redundancy and thereby reduce the energy consumption of DNNs, we introduce the Modular Neural Network Tree architecture. Instead of using one large DNN for the classifier, this architecture uses multiple smaller DNNs (called\n            <jats:italic>modules<\/jats:italic>\n            ) to progressively classify images into groups of categories based on a novel visual similarity metric. Once a group of categories is selected by a module, another module then continues to distinguish among the similar categories within the selected group. 
This process is repeated over multiple modules until we are left with a single category. The computation needed to distinguish dissimilar groups is avoided, thus reducing redundant operations, memory accesses, and energy. Experimental results using several image datasets reveal the effectiveness of our proposed solution to reduce memory requirements by 50% to 99%, inference time by 55% to 95%, energy consumption by 52% to 94%, and the number of operations by 15% to 99% when compared with existing DNN architectures, running on two different embedded systems: Raspberry Pi 3 and Raspberry Pi Zero.\n          <\/jats:p>","DOI":"10.1145\/3408062","type":"journal-article","created":{"date-parts":[[2020,10,16]],"date-time":"2020-10-16T04:29:14Z","timestamp":1602822554000},"page":"1-35","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":20,"title":["Modular Neural Networks for Low-Power Image Classification on Embedded Devices"],"prefix":"10.1145","volume":"26","author":[{"given":"Abhinav","family":"Goel","sequence":"first","affiliation":[{"name":"Purdue University"}]},{"given":"Sara","family":"Aghajanzadeh","sequence":"additional","affiliation":[{"name":"Purdue University"}]},{"given":"Caleb","family":"Tung","sequence":"additional","affiliation":[{"name":"Purdue University"}]},{"given":"Shuo-Han","family":"Chen","sequence":"additional","affiliation":[{"name":"National Taipei University of Technology, Taiwan, Republic of China"}]},{"given":"George K.","family":"Thiruvathukal","sequence":"additional","affiliation":[{"name":"Loyola University Chicago, Chicago, IL"}]},{"given":"Yung-Hsiang","family":"Lu","sequence":"additional","affiliation":[{"name":"Purdue University"}]}],"member":"320","published-online":{"date-parts":[[2020,10,15]]},"reference":[
{"key":"e_1_2_1_1_1","volume-title":"Proceedings of IEEE ISCAS","author":"Mohan A.","year":"2017","unstructured":"A. Mohan et al. 2017. Internet of Video Things in 2030: A world with many cameras. In Proceedings of IEEE ISCAS 2017. 1--4."},
{"key":"e_1_2_1_2_1","unstructured":"S. Han et al. 2015. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. ArXiv:1510.00149 [cs]."},
{"key":"e_1_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.1145\/3200904"},
{"key":"e_1_2_1_4_1","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3209888","article-title":"Distributed machine learning on smart-gateway network toward real-time smart-grid energy management with behavior cognition","volume":"23","author":"Huang H.","year":"2018","unstructured":"H. Huang et al. 2018. Distributed machine learning on smart-gateway network toward real-time smart-grid energy management with behavior cognition. ACM Transactions on Design Automation of Electronic Systems 23, 5 (2018), 1--26.","journal-title":"ACM Transactions on Design Automation of Electronic Systems"},
{"key":"e_1_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICIP40778.2020.9190851"},
{"key":"e_1_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.1038\/s42256-019-0041-4"},
{"key":"e_1_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.90"},
{"key":"e_1_2_1_8_1","article-title":"Switching predictive control using reconfigurable state-based model","volume":"24","author":"Amir M.","year":"2018","unstructured":"M. Amir et al. 2018. Switching predictive control using reconfigurable state-based model. ACM Transactions on Design Automation of Electronic Systems 24, 1 (2018), Article 2, 21 pages.","journal-title":"ACM Transactions on Design Automation of Electronic Systems"},
{"key":"e_1_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.1145\/3198457"},
{"key":"e_1_2_1_10_1","doi-asserted-by":"publisher","DOI":"10.1109\/TENCON.2017.8228008"},
{"key":"e_1_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.1109\/JETCAS.2019.2911899"},
{"key":"e_1_2_1_12_1","volume-title":"Proceedings of IEEE DATE","author":"Gauen K.","year":"2018","unstructured":"K. Gauen et al. 2018. Three years of low-power image recognition challenge. In Proceedings of IEEE DATE 2018."},
{"key":"e_1_2_1_13_1","unstructured":"K. Simonyan et al. 2014. Very deep convolutional networks for large-scale image recognition. ArXiv:1409.1556 [cs]."},
{"key":"e_1_2_1_14_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2015.327"},
{"key":"e_1_2_1_15_1","volume-title":"Modular Neural Networks. Retrieved","author":"Goel A.","year":"2020","unstructured":"A. Goel. 2019. Modular Neural Networks. Retrieved August 8, 2020 from https:\/\/github.com\/abhinavgoel95\/Modular_Neural_Networks."},
{"key":"e_1_2_1_16_1","first-page":"09091","article-title":"Observing responses to the COVID-19 pandemic using worldwide network cameras","volume":"2005","author":"Ghodgaonkar I.","year":"2020","unstructured":"I. Ghodgaonkar et al. 2020. Observing responses to the COVID-19 pandemic using worldwide network cameras. ArXiv:2005.09091.","journal-title":"ArXiv"},
{"key":"e_1_2_1_17_1","volume-title":"Proceedings of Advances in NeurIPS","author":"Szegedy C.","year":"2013","unstructured":"C. Szegedy et al. 2013. Deep neural networks for object detection. In Proceedings of Advances in NeurIPS 2013. 2553--2561."},
{"key":"e_1_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.1023\/A:1022643204877"},
{"key":"e_1_2_1_19_1","doi-asserted-by":"publisher","DOI":"10.1109\/TIT.1967.1053964"},
{"key":"e_1_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.1023\/A:1007465528199"},
{"key":"e_1_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.1109\/IJCNN.1998.682302"},
{"key":"e_1_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00291"},
{"key":"e_1_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2018.2877890"},
{"key":"e_1_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1109\/WF-IoT48130.2020.9221198"},
{"key":"e_1_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-46493-0_32"},
{"key":"e_1_2_1_26_1","doi-asserted-by":"publisher","DOI":"10.1145\/3057275"},
{"key":"e_1_2_1_27_1","unstructured":"H. Li et al. 2016. Pruning filters for efficient ConvNets. ArXiv:1608.08710 [cs]."},
{"key":"e_1_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.1109\/BigData.2018.8622329"},
{"key":"e_1_2_1_29_1","doi-asserted-by":"publisher","DOI":"10.1145\/3342239"},
{"key":"e_1_2_1_30_1","unstructured":"F. N. Iandola et al. 2016. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size. ArXiv:1602.07360 [cs]."},
{"key":"e_1_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00474"},
{"key":"e_1_2_1_32_1","volume-title":"Proceedings of ACM AAAI","author":"Szegedy C.","year":"2017","unstructured":"C. Szegedy et al. 2017. Inception-v4, Inception-ResNet and the impact of residual connections on learning. In Proceedings of ACM AAAI 2017. 4278--4284."},
{"key":"e_1_2_1_33_1","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2014.2343229"},
{"key":"e_1_2_1_34_1","volume-title":"Proceedings of Advances in NeurIPS","author":"Denton E.","year":"2014","unstructured":"E. Denton et al. 2014. Exploiting linear structure within convolutional networks for efficient evaluation. In Proceedings of Advances in NeurIPS 2014. 1269--1277."},
{"key":"e_1_2_1_35_1","first-page":"3866","article-title":"Speeding up convolutional neural networks with low rank expansions","volume":"1405","author":"Jaderberg M.","year":"2014","unstructured":"M. Jaderberg et al. 2014. Speeding up convolutional neural networks with low rank expansions. ArXiv:1405.3866.","journal-title":"ArXiv"},
{"key":"e_1_2_1_36_1","doi-asserted-by":"publisher","DOI":"10.1145\/3301278"},
{"key":"e_1_2_1_37_1","doi-asserted-by":"publisher","DOI":"10.1145\/3275243"},
{"key":"e_1_2_1_38_1","unstructured":"Y. Cheng et al. 2017. A survey of model compression and acceleration for deep neural networks. ArXiv:1710.09282 [cs]."},
{"key":"e_1_2_1_39_1","unstructured":"G. Hinton et al. 2015. Distilling the knowledge in a neural network. ArXiv:1503.02531 [cs, stat]."},
{"key":"e_1_2_1_40_1","volume-title":"Proceedings of Advances in NeurIPS","author":"Ba J.","year":"2014","unstructured":"J. Ba et al. 2014. Do deep nets really need to be deep? In Proceedings of Advances in NeurIPS 2014. 2654--2662."},
{"key":"e_1_2_1_41_1","doi-asserted-by":"crossref","unstructured":"J. Gu\u00e9rin et al. 2017. CNN features are also great at unsupervised classification. ArXiv:1707.01700 [cs].","DOI":"10.5121\/csit.2018.80308"},
{"key":"e_1_2_1_42_1","doi-asserted-by":"publisher","DOI":"10.1016\/S0167-8655(98)00115-9"},
{"key":"e_1_2_1_43_1","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2015.2467315"},
{"key":"e_1_2_1_44_1","volume-title":"Proceedings of AAAI","author":"Zhu H.","year":"2016","unstructured":"H. Zhu et al. 2016. Deep hashing network for efficient similarity retrieval. In Proceedings of AAAI 2016. 2415--2421."},
{"key":"e_1_2_1_45_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2008.4587410"},
{"key":"e_1_2_1_46_1","volume-title":"Proceedings of Advances in NeurIPS","author":"Deng J.","year":"2011","unstructured":"J. Deng et al. 2011. Fast and balanced: Efficient label tree learning for large scale object recognition. In Proceedings of Advances in NeurIPS 2011."},
{"key":"e_1_2_1_47_1","volume-title":"Proceedings of ACM UAI","author":"Beygelzimer A.","year":"2009","unstructured":"A. Beygelzimer et al. 2009. Conditional probability tree estimation analysis and algorithms. In Proceedings of ACM UAI 2009. 51--58."},
{"key":"e_1_2_1_48_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICIP.2006.313037"},
{"key":"e_1_2_1_49_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-642-33783-3_63"},
{"key":"e_1_2_1_50_1","doi-asserted-by":"publisher","DOI":"10.1109\/TCAD.2017.2681075"},
{"key":"e_1_2_1_51_1","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2008.128"},
{"key":"e_1_2_1_52_1","unstructured":"J. Redmon et al. 2016. YOLO9000: Better, faster, stronger. ArXiv:1612.08242 [cs]."},
{"key":"e_1_2_1_53_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2007.4409064"},
{"key":"e_1_2_1_54_1","doi-asserted-by":"publisher","DOI":"10.1145\/3240765.3240845"},
{"key":"e_1_2_1_55_1","doi-asserted-by":"publisher","DOI":"10.1109\/TNNLS.2014.2366476"},
{"key":"e_1_2_1_56_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-540-88693-8_35"},
{"key":"e_1_2_1_57_1","doi-asserted-by":"publisher","DOI":"10.1109\/MLSP.2016.7738816"},
{"key":"e_1_2_1_58_1","doi-asserted-by":"publisher","DOI":"10.1016\/S0031-3203(96)00068-4"},
{"key":"e_1_2_1_59_1","doi-asserted-by":"publisher","DOI":"10.1006\/cgip.1993.1019"},
{"key":"e_1_2_1_60_1","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2016.2615423"},
{"key":"e_1_2_1_61_1","doi-asserted-by":"publisher","DOI":"10.1145\/219717.219748"},
{"key":"e_1_2_1_62_1","volume-title":"Proceedings of AAAI","author":"Xia R.","year":"2014","unstructured":"R. Xia et al. 2014. Supervised hashing for image retrieval via image representation learning. In Proceedings of AAAI 2014. 2156--2162."},
{"key":"e_1_2_1_63_1","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2018.2789887"},
{"key":"e_1_2_1_64_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2010.5539994"},
{"key":"e_1_2_1_65_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2017.09.061"},
{"key":"e_1_2_1_66_1","first-page":"05800","article-title":"Tree-CNN: A hierarchical deep convolutional neural network for incremental learning","volume":"1802","author":"Roy D.","year":"2018","unstructured":"D. Roy et al. 2018. Tree-CNN: A hierarchical deep convolutional neural network for incremental learning. ArXiv:1802.05800.","journal-title":"ArXiv"},
{"key":"e_1_2_1_67_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2013.40"},
{"key":"e_1_2_1_68_1","volume-title":"Proceedings of Advances in NeurIPS","author":"Guo Y.","year":"2016","unstructured":"Y. Guo et al. 2016. Dynamic network surgery for efficient DNNs. In Proceedings of Advances in NeurIPS 2016. 1379--1387."},
{"key":"e_1_2_1_69_1","doi-asserted-by":"publisher","DOI":"10.1016\/S0019-9958(65)90241-X"},
{"key":"e_1_2_1_70_1","doi-asserted-by":"publisher","DOI":"10.1198\/016214504000001196"},
{"key":"e_1_2_1_71_1","volume-title":"Proceedings of ICML","author":"Tan M.","year":"2019","unstructured":"M. Tan et al. 2019. EfficientNet: Rethinking model scaling for convolutional neural networks. In Proceedings of ICML 2019. 6105--6114."},
{"key":"e_1_2_1_72_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2015.172"},
{"key":"e_1_2_1_73_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.594"},
{"key":"e_1_2_1_74_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.243"},
{"key":"e_1_2_1_75_1","volume-title":"Technical Report TR-2009. University of Toronto.","author":"Krizhevsky A.","year":"2009","unstructured":"A. Krizhevsky et al. 2009. Learning Multiple Layers of Features from Tiny Images. Technical Report TR-2009. University of Toronto."},
{"key":"e_1_2_1_76_1","volume-title":"Proceedings of the NeurIPS 2011 Workshop on Deep Learning and Unsupervised Feature Learning.","author":"Netzer Y.","year":"2011","unstructured":"Y. Netzer et al. 2011. Reading digits in natural images with unsupervised feature learning. In Proceedings of the NeurIPS 2011 Workshop on Deep Learning and Unsupervised Feature Learning."},
{"key":"e_1_2_1_77_1","volume-title":"EMNIST: An extension of MNIST to handwritten letters. ArXiv:1702.05373 [cs].","author":"Cohen G.","year":"2017","unstructured":"G. Cohen et al. 2017. EMNIST: An extension of MNIST to handwritten letters. ArXiv:1702.05373 [cs]."},
{"key":"e_1_2_1_78_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2009.5206848"},
{"key":"e_1_2_1_79_1","unstructured":"G. Griffin et al. 2007. Caltech-256 Object Category Dataset. Technical Report. Available at http:\/\/authors.library.caltech.edu\/7694."},
{"key":"e_1_2_1_80_1","volume-title":"User\u2019s Manual. Retrieved","author":"Yokogawa","year":"2020","unstructured":"Yokogawa. 2017. WT310E\/WT310EH\/WT332E\/WT333E Digital Power Meter: User\u2019s Manual. Retrieved August 9, 2020 from https:\/\/cdn.tmi.yokogawa.com\/IMWT310E-01EN.pdf."},
{"key":"e_1_2_1_81_1","unstructured":"PyTorch. 2019. Torch.utils.data. Retrieved August 9, 2020 from https:\/\/pytorch.org\/docs\/stable\/data.html."},
{"key":"e_1_2_1_82_1","first-page":"07678","article-title":"An analysis of deep neural network models for practical applications","volume":"1605","author":"Canziani A.","year":"2016","unstructured":"A. Canziani et al. 2016. An analysis of deep neural network models for practical applications. ArXiv:1605.07678.","journal-title":"ArXiv"},
{"key":"e_1_2_1_83_1","volume-title":"Introducing the CVPR 2018 On-Device Visual Intelligence Challenge. Google AI Blog. Retrieved","author":"Chen B.","year":"2018","unstructured":"B. Chen et al. 2018. Introducing the CVPR 2018 On-Device Visual Intelligence Challenge. Google AI Blog. Retrieved August 9, 2020 from http:\/\/ai.googleblog.com\/2018\/04\/introducing-cvpr-2018-on-device-visual.html."},
{"key":"e_1_2_1_84_1","volume-title":"Videos: OpenCV-Python Documentation. Retrieved","author":"Mordvintsev A.","year":"2013","unstructured":"A. Mordvintsev et al. 2013. Getting Started with Videos: OpenCV-Python Documentation. Retrieved August 9, 2020 from https:\/\/opencv-python-tutroals.readthedocs.io\/en\/latest\/py_tutorials\/py_gui\/py_video_display\/py_video_display.html."},
{"key":"e_1_2_1_85_1","volume-title":"Adam: A method for stochastic optimization. ArXiv:1412.6980 [cs].","author":"Kingma D. P.","year":"2014","unstructured":"D. P. Kingma et al. 2014. Adam: A method for stochastic optimization. ArXiv:1412.6980 [cs]."},
{"key":"e_1_2_1_86_1","volume-title":"Image Classification with the Modular Neural Network Tree. Retrieved","author":"Goel A.","year":"2020","unstructured":"A. Goel. 2019. Image Classification with the Modular Neural Network Tree. Retrieved August 9, 2020 from https:\/\/youtu.be\/gdae-v-ZyVs."},
{"key":"e_1_2_1_87_1","doi-asserted-by":"crossref","unstructured":"S. Zagoruyko et al. 2016. Wide residual networks. ArXiv:1605.07146 [cs].","DOI":"10.5244\/C.30.87"},
{"key":"e_1_2_1_88_1","doi-asserted-by":"publisher","DOI":"10.1109\/RoboMech.2017.8261132"},
{"key":"e_1_2_1_89_1","doi-asserted-by":"crossref","unstructured":"T. Hastie et al. 2001. The Elements of Statistical Learning. Springer.","DOI":"10.1007\/978-0-387-21606-5"},
{"key":"e_1_2_1_90_1","volume-title":"Proceedings of ROCLING","author":"Jiang J. J.","year":"1997","unstructured":"J. J. Jiang et al. 1997. Semantic similarity based on corpus statistics and lexical taxonomy. In Proceedings of ROCLING 1997. 19--33."},
{"key":"e_1_2_1_91_1","doi-asserted-by":"crossref","unstructured":"R. R. Selvaraju et al. 2016. Grad-CAM: Visual explanations from deep networks via gradient-based localization. ArXiv:1610.02391 [cs].","DOI":"10.1109\/ICCV.2017.74"},
{"key":"e_1_2_1_92_1","volume-title":"Proceedings of NeurIPS","author":"Krizhevsky A.","year":"2012","unstructured":"A. Krizhevsky et al. 2012. ImageNet classification with deep convolutional neural networks. In Proceedings of NeurIPS 2012."},
{"key":"e_1_2_1_93_1","doi-asserted-by":"publisher","DOI":"10.1145\/3352460.3358291"},
{"key":"e_1_2_1_94_1","doi-asserted-by":"publisher","DOI":"10.1145\/3007787.3001138"},
{"key":"e_1_2_1_95_1","doi-asserted-by":"publisher","DOI":"10.1145\/3079856.3080254"}],
"container-title":["ACM Transactions on Design Automation of Electronic Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3408062","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3408062","content-type":"application\/pdf","content-version":"vor","intended-application":"syndication"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3408062","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T22:01:33Z","timestamp":1750197693000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3408062"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,10,15]]},"references-count":95,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2021,1,31]]}},"alternative-id":["10.1145\/3408062"],"URL":"https:\/\/doi.org\/10.1145\/3408062","relation":{},"ISSN":["1084-4309","1557-7309"],"issn-type":[{"type":"print","value":"1084-4309"},{"type":"electronic","value":"1557-7309"}],"subject":[],"published":{"date-parts":[[2020,10,15]]},"assertion":[{"value":"2019-11-01","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2020-06-01","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2020-10-15","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}