{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,11,20]],"date-time":"2025-11-20T18:48:19Z","timestamp":1763664499307,"version":"3.41.0"},"reference-count":82,"publisher":"Association for Computing Machinery (ACM)","issue":"2","license":[{"start":{"date-parts":[[2021,4,15]],"date-time":"2021-04-15T00:00:00Z","timestamp":1618444800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["J. Emerg. Technol. Comput. Syst."],"published-print":{"date-parts":[[2021,4,30]]},"abstract":"<jats:p>Computationally intensive neural network applications often need to run on resource-limited low-power devices. Numerous hardware accelerators have been developed to speed up the performance of neural network applications and reduce power consumption; however, most focus on data centers and full-fledged systems. Acceleration in ultra-low-power systems has been only partially addressed. In this article, we present multiPULPly, an accelerator that integrates memristive technologies within standard low-power CMOS technology, to accelerate multiplication in neural network inference on ultra-low-power systems. This accelerator was designated for PULP, an open-source microcontroller system that uses low-power RISC-V processors. Memristors were integrated into the accelerator to enable power consumption only when the memory is active, to continue the task with no context-restoring overhead, and to enable highly parallel analog multiplication. To reduce the energy consumption, we propose novel dataflows that handle common multiplication scenarios and are tailored for our architecture. The accelerator was tested on FPGA and achieved a peak energy efficiency of 19.5 TOPS\/W, outperforming state-of-the-art accelerators by 1.5\u00d7 to 4.5\u00d7.<\/jats:p>",
"DOI":"10.1145\/3432815","type":"journal-article","created":{"date-parts":[[2021,4,15]],"date-time":"2021-04-15T11:04:42Z","timestamp":1618484682000},"page":"1-27","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":6,"title":["multiPULPly"],"prefix":"10.1145","volume":"17","author":[{"given":"Adi","family":"Eliahu","sequence":"first","affiliation":[{"name":"Technion-Israel Institute of Technology, Haifa, Israel"}]},{"given":"Ronny","family":"Ronen","sequence":"additional","affiliation":[{"name":"Technion-Israel Institute of Technology, Haifa, Israel"}]},{"given":"Pierre-Emmanuel","family":"Gaillardon","sequence":"additional","affiliation":[{"name":"University of Utah, Salt Lake City, Utah"}]},{"given":"Shahar","family":"Kvatinsky","sequence":"additional","affiliation":[{"name":"Technion-Israel Institute of Technology, Haifa, Israel"}]}],"member":"320","published-online":{"date-parts":[[2021,4,15]]},"reference":[{"key":"e_1_2_1_1_1","unstructured":"GAP9. 2021. Retrieved from https:\/\/greenwaves-technologies.com\/gap9_iot_application_processor."},{"key":"e_1_2_1_2_1","unstructured":"Pulp Platform Website. 2021. Retrieved from https:\/\/www.pulp-platform.org."},{"key":"e_1_2_1_3_1","unstructured":"YAML. 2011. Retrieved from https:\/\/yaml.org."},{"key":"e_1_2_1_4_1","unstructured":"stm32h743 datasheet. 2019. 
https:\/\/www.st.com\/resource\/en\/datasheet\/stm32l476je.pdf."},{"key":"e_1_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1109\/JPROC.2010.2070830"},{"key":"e_1_2_1_6_1","first-page":"1","article-title":"YodaNN: An architecture for ultralow power binary-weight CNN acceleration","volume":"37","author":"Andri R.","year":"2018","unstructured":"R. Andri, L. Cavigelli, D. Rossi, and L. Benini. 2018. YodaNN: An architecture for ultralow power binary-weight CNN acceleration. TCAD 37, 1 (Jan. 2018), 48--60. DOI:https:\/\/doi.org\/10.1109\/TCAD.2017.2682138","journal-title":"TCAD"},{"key":"e_1_2_1_7_1","volume-title":"Resistive random-access memory based on ratioed memristors. Nature Electron. 1 (Aug","author":"Lastras-Montano Miguel Angel","year":"2018","unstructured":"Miguel Angel Lastras-Montano and Kwang-Ting Cheng. 2018. Resistive random-access memory based on ratioed memristors. Nature Electron. 1 (Aug. 2018), 466--472. DOI:https:\/\/doi.org\/10.1038\/s41928-018-0115-z"},{"key":"e_1_2_1_8_1","volume-title":"Sai Rahul Chalamalasetti, Geoffrey Ndu, Martin Foltin, R. Stanley Williams, Paolo Faraboschi, Wen-Mei Hwu, John Paul Strachan, Kaushik Roy, and Dejan S. Milojicic.","author":"Ankit Aayush","year":"2019","unstructured":"Aayush Ankit, Izzat El Hajj, Sai Rahul Chalamalasetti, Geoffrey Ndu, Martin Foltin, R. Stanley Williams, Paolo Faraboschi, Wen-Mei Hwu, John Paul Strachan, Kaushik Roy, and Dejan S. Milojicic. 2019. PUMA: A programmable ultra-efficient memristor-based accelerator for machine learning inference. Retrieved from http:\/\/arxiv.org\/abs\/1901.10351."},{"key":"e_1_2_1_9_1","volume-title":"Proceedings of VLSI-SoC. 1--6. DOI:https:\/\/doi.org\/10","author":"Bhattacharjee D.","year":"2016","unstructured":"D. Bhattacharjee, F. Merchant, and A. Chattopadhyay. 2016. Enabling in-memory computation of binary BLAS using ReRAM crossbar arrays. In Proceedings of VLSI-SoC. 1--6. DOI:https:\/\/doi.org\/10.1109\/VLSI-SoC.2016.7753568"},{"volume-title":"Proceedings of SenSys. ACM","author":"Bhattacharya Sourav","key":"e_1_2_1_10_1","unstructured":"Sourav Bhattacharya and Nicholas D. Lane. 2016. Sparsification and separation of deep learning layers for constrained resource inference on wearables. In Proceedings of SenSys. ACM, New York, NY, 176--189. DOI:https:\/\/doi.org\/10.1145\/2994551.2994564"},{"key":"e_1_2_1_11_1","volume-title":"Proceedings of ISSCC. 248--249","author":"Bong K.","year":"2017","unstructured":"K. Bong, S. Choi, C. Kim, S. Kang, Y. Kim, and H. Yoo. 2017. 14.6 A 0.62mW ultra-low-power convolutional-neural-network face-recognition processor and a CIS integrated with always-on haar-like face detector. In Proceedings of ISSCC. 
248--249. DOI:https:\/\/doi.org\/10.1109\/ISSCC.2017.7870354"},{"key":"e_1_2_1_12_1","volume-title":"Mintz","author":"Bridges Robert A.","year":"2016","unstructured":"Robert A. Bridges, Neena Imam, and Tiffany M. Mintz. 2016. Understanding GPU power: A survey of profiling, modeling, and simulation methods. ACM Comput. Surv. 49, 3, Article 41 (Sept. 2016), 27 pages. DOI:https:\/\/doi.org\/10.1145\/2962131"},{"key":"e_1_2_1_13_1","unstructured":"Ermao Cai, Da-Cheng Juan, Dimitrios Stamoulis, and Diana Marculescu. 2017. NeuralPower: Predict and deploy energy-efficient convolutional neural networks. Retrieved from http:\/\/arxiv.org\/abs\/1710.05420."},{"key":"e_1_2_1_14_1","doi-asserted-by":"publisher","DOI":"10.1109\/TCSVT.2016.2592330"},{"key":"e_1_2_1_15_1","doi-asserted-by":"crossref","unstructured":"Yu-Hsin Chen, Joel S. Emer, and Vivienne Sze. 2018. Eyeriss v2: A flexible and high-performance accelerator for emerging deep neural networks. 
Retrieved from http:\/\/arxiv.org\/abs\/1807.07928.","DOI":"10.1109\/JETCAS.2019.2910232"},{"key":"e_1_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.1145\/3007787.3001140"},{"key":"e_1_2_1_17_1","unstructured":"Kyunghyun Cho, Bart van Merrienboer, \u00c7aglar G\u00fcl\u00e7ehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. Retrieved from http:\/\/arxiv.org\/abs\/1406.1078."},{"key":"e_1_2_1_18_1","volume-title":"Proceedings of DATE. 683--688","author":"Conti F.","year":"2015","unstructured":"F. Conti and L. Benini. 2015. A ultra-low-energy convolution engine for fast brain-inspired vision in multicore clusters. In Proceedings of DATE. 683--688. DOI:https:\/\/doi.org\/10.7873\/DATE.2015.0404"},{"key":"e_1_2_1_19_1","doi-asserted-by":"publisher","DOI":"10.1109\/CODES-ISSS.2013.6658992"},{"key":"e_1_2_1_20_1","volume-title":"Proceedings of ISCAS. 121--124","author":"Cui J.","year":"2016","unstructured":"J. Cui and Q. Qiu. 2016. Towards memristor based accelerator for sparse matrix vector multiplication. In Proceedings of ISCAS. 121--124. DOI:https:\/\/doi.org\/10.1109\/ISCAS.2016.7527185"},{"key":"e_1_2_1_21_1","volume-title":"Proceedings of ECCV","volume":"8689","author":"Matthew","year":"2014","unstructured":"Matthew D. Zeiler and Rob Fergus. 2013. Visualizing and understanding convolutional neural networks. In Proceedings of ECCV 2014, Vol. 8689. DOI:https:\/\/doi.org\/10.1007\/978-3-319-10590-1_53"},{"key":"e_1_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.1145\/77626.79170"},{"key":"e_1_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.1145\/42288.42291"},{"key":"e_1_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1145\/2524713.2524725"},{"volume-title":"Proceedings of ISCA. 92--104","author":"Du Z.","key":"e_1_2_1_25_1","unstructured":"Z. Du, R. Fasthuber, T. Chen, P. Ienne, L. Li, T. Luo, X. Feng, Y. Chen, and O. Temam. 2015. ShiDianNao: Shifting vision processing closer to the sensor. In Proceedings of ISCA. 92--104. DOI:https:\/\/doi.org\/10.1145\/2749469.2750389"},{"key":"e_1_2_1_26_1","unstructured":"Ahmed T. Elthakeb, Prannoy Pilligundla, Amir Yazdanbakhsh, Sean Kinzer, and Hadi Esmaeilzadeh. 2018. ReLeQ: A reinforcement learning approach for deep quantization of neural networks. 
Retrieved from http:\/\/arxiv.org\/abs\/1811.01704."},{"key":"e_1_2_1_27_1","doi-asserted-by":"publisher","DOI":"10.1109\/TCSI.2018.2872455"},{"key":"e_1_2_1_28_1","doi-asserted-by":"crossref","unstructured":"Graham Gobieski, Nathan Beckmann, and Brandon Lucia. 2018. Intelligence beyond the edge: Inference on intermittent embedded systems. Retrieved from http:\/\/arxiv.org\/abs\/1810.07751.","DOI":"10.1145\/3297858.3304011"},{"key":"e_1_2_1_29_1","unstructured":"Maximilian Golub, Guy Lemieux, and Mieszko Lis. 2018. DropBack: Continuous pruning during training. Retrieved from http:\/\/arxiv.org\/abs\/1806.06949."},{"key":"e_1_2_1_30_1","volume-title":"Fletcher","author":"Hegde Kartik","year":"2018","unstructured":"Kartik Hegde, Jiyong Yu, Rohit Agrawal, Mengjia Yan, Michael Pellauer, and Christopher W. Fletcher. 2018. UCNN: Exploiting computational reuse in deep neural networks via weight repetition. In Proceedings of ISCA \u201918. IEEE Press, Piscataway, NJ, 674--687. DOI:https:\/\/doi.org\/10.1109\/ISCA.2018.00062"},{"key":"e_1_2_1_31_1","unstructured":"Parker Hill, Babak Zamirai, Shengshuo Lu, Yu-Wei Chao, Michael Laurenzano, Mehrzad Samadi, Marios C. Papaefthymiou, Scott A. Mahlke, Thomas F. Wenisch, Jia Deng, Lingjia Tang, and Jason Mars. 2018. Rethinking numerical representations for deep neural networks. Retrieved from http:\/\/arxiv.org\/abs\/1808.02513."},{"key":"e_1_2_1_32_1","unstructured":"Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. 2017. MobileNets: Efficient convolutional neural networks for mobile vision applications. Retrieved from http:\/\/arxiv.org\/abs\/1704.04861."},{"volume-title":"Proceedings of VLSI. T166--T167","author":"Hsu C.","key":"e_1_2_1_33_1","unstructured":"C. Hsu, I. Wang, C. Lo, M. Chiang, W. Jang, C. Lin, and T. Hou. 2013. Self-rectifying bipolar TaOx\/TiO2 RRAM with superior endurance over 1012 cycles for 3D high-density storage-class memory. In Proceedings of VLSI. T166--T167."},{"key":"e_1_2_1_34_1","volume-title":"Proceedings of IPDPS. 1--8. DOI:https:\/\/doi.org\/10","author":"Huang S.","year":"2009","unstructured":"S. Huang, S. Xiao, and W. Feng. 2009. On the energy efficiency of graphics processing units for scientific computing. In Proceedings of IPDPS. 1--8. 
DOI:https:\/\/doi.org\/10.1109\/IPDPS.2009.5160980"},{"key":"e_1_2_1_35_1","volume-title":"Proceedings of ASP-DAC. 794--799","author":"Huangfu W.","year":"2017","unstructured":"W. Huangfu, L. Xia, M. Cheng, X. Yin, T. Tang, B. Li, K. Chakrabarty, Y. Xie, Y. Wang, and H. Yang. 2017. Computation-oriented fault-tolerance schemes for RRAM computing systems. In Proceedings of ASP-DAC. 794--799. DOI:https:\/\/doi.org\/10.1109\/ASPDAC.2017.7858421"},{"key":"e_1_2_1_36_1","unstructured":"Forrest N. Iandola, Matthew W. Moskewicz, Khalid Ashraf, Song Han, William J. Dally, and Kurt Keutzer. 2016. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and &lt;1MB model size. Retrieved from http:\/\/arxiv.org\/abs\/1602.07360."},{"key":"e_1_2_1_37_1","volume-title":"RAPIDNN: In-memory deep neural network acceleration framework.","author":"Imani Mohsen","year":"2018","unstructured":"Mohsen Imani, Mohammad Samragh, Yeseong Kim, Saransh Gupta, Farinaz Koushanfar, and Tajana Rosing. 2018. RAPIDNN: In-memory deep neural network acceleration framework. 
Retrieved from http:\/\/arxiv.org\/abs\/1806.05794."},{"key":"e_1_2_1_38_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.parco.2017.05.003"},{"key":"e_1_2_1_39_1","volume-title":"Proceedings of SAAHPC. 64--73","author":"Kasichayanula K.","year":"2012","unstructured":"K. Kasichayanula, D. Terpstra, P. Luszczek, S. Tomov, S. Moore, and G. D. Peterson. 2012. Power aware computing on GPUs. In Proceedings of SAAHPC. 64--73. DOI:https:\/\/doi.org\/10.1109\/SAAHPC.2012.26"},{"key":"e_1_2_1_40_1","unstructured":"Yulhwa Kim, Hyungjun Kim, and Jae-Joon Kim. 2018. Neural network-hardware co-design for scalable RRAM-based BNN accelerators. Retrieved from http:\/\/arxiv.org\/abs\/1811.02187."},{"key":"e_1_2_1_41_1","unstructured":"Raghuraman Krishnamoorthi. 2018. Quantizing deep convolutional networks for efficient inference: A whitepaper. Retrieved from http:\/\/arxiv.org\/abs\/1806.08342."},{"key":"e_1_2_1_42_1","doi-asserted-by":"publisher","DOI":"10.1145\/3065386"},{"key":"e_1_2_1_43_1","volume-title":"Proceedings of ISSCC. 468--469","author":"Kull L.","year":"2013","unstructured":"L. Kull, T. Toifl, M. Schmatz, P. A. Francese, C. Menolfi, M. Braendli, M. Kossel, T. Morf, T. M. Andersen, and Y. Leblebici. 2013. A 3.1mW 8b 1.2GS\/s single-channel asynchronous SAR ADC with alternate comparators for enhanced speed in 32nm digital SOI CMOS. In Proceedings of ISSCC. 468--469. DOI:https:\/\/doi.org\/10.1109\/ISSCC.2013.6487818"},{"key":"e_1_2_1_44_1","volume-title":"HERO: Heterogeneous embedded research platform for exploring RISC-V manycore accelerators on FPGA.","author":"Kurth Andreas","year":"2017","unstructured":"Andreas Kurth, Pirmin Vogel, Alessandro Capotondi, Andrea Marongiu, and Luca Benini. 2017. HERO: Heterogeneous embedded research platform for exploring RISC-V manycore accelerators on FPGA. Retrieved from http:\/\/arxiv.org\/abs\/1712.06497."},{"key":"e_1_2_1_45_1","unstructured":"Liangzhen Lai, Naveen Suda, and Vikas Chandra. 2018. CMSIS-NN: Efficient neural network kernels for arm cortex-M CPUs. Retrieved from http:\/\/arxiv.org\/abs\/1801.06601."},{"key":"e_1_2_1_46_1","volume-title":"Proceedings of VLSIT. 71--72","author":"Lee S. R.","year":"2012","unstructured":"S. R. Lee, Y. Kim, M. Chang, K. M. Kim, C. B. Lee, J. H. Hur, G. Park, D. Lee, M. Lee, C. J. Kim, U. Chung, I. 
Yoo, and K. Kim. 2012. Multi-level switching of triple-layered TaOx RRAM with excellent reliability for storage class memory. In Proceedings of VLSIT. 71--72. DOI:https:\/\/doi.org\/10.1109\/VLSIT.2012.6242466"},{"key":"e_1_2_1_47_1","volume-title":"Proceedings of DATE. 815--820","author":"Li B.","year":"2018","unstructured":"B. Li, L. Song, F. Chen, X. Qian, Y. Chen, and H. H. Li. 2018. ReRAM-based accelerator for deep learning. In Proceedings of DATE. 815--820. DOI:https:\/\/doi.org\/10.23919\/DATE.2018.8342118"},{"key":"e_1_2_1_48_1","doi-asserted-by":"crossref","unstructured":"S. Liao, Y. Xie, X. Lin, Y. Wang, M. Zhang, and B. Yuan. 2018. Reduced-complexity deep neural networks design using multi-level compression. IEEE Trans. Sustain. Comput. (2018), 1--1. DOI:https:\/\/doi.org\/10.1109\/TSUSC.2017.2710178","DOI":"10.1109\/TSUSC.2017.2710178"},{"key":"e_1_2_1_49_1","volume-title":"EdgeSpeechNets: Highly efficient deep neural networks for speech recognition on the edge. CoRR abs\/1810.08559","author":"Lin Zhong Qiu","year":"2018","unstructured":"Zhong Qiu Lin, Audrey G. Chung, and Alexander Wong. 2018. EdgeSpeechNets: Highly efficient deep neural networks for speech recognition on the edge. CoRR abs\/1810.08559 (2018)."},{"volume-title":"Proceedings of DAC. 1--6. DOI:https:\/\/doi.org\/10","author":"Liu C.","key":"e_1_2_1_50_1","unstructured":"C. Liu, M. Hu, J. P. Strachan, and H. Li. 2017. Rescuing memristor-based neuromorphic design with high defects. In Proceedings of DAC. 1--6. DOI:https:\/\/doi.org\/10.1145\/3061639.3062310"},{"key":"e_1_2_1_51_1","volume-title":"A survey of ReRAM-based architectures for processing-in-memory and neural networks. Mach. Learn. Knowl. Extract. 1 (04","author":"Mittal Sparsh","year":"2018","unstructured":"Sparsh Mittal. 2018. A survey of ReRAM-based architectures for processing-in-memory and neural networks. Mach. Learn. Knowl. Extract. 1 (04 2018). DOI:https:\/\/doi.org\/10.3390\/make1010005"},{"key":"e_1_2_1_52_1","doi-asserted-by":"publisher","DOI":"10.1109\/ASSCC.2016.7844126"},{"key":"e_1_2_1_53_1","doi-asserted-by":"publisher","DOI":"10.1109\/MM.2018.053631140"},{"volume-title":"Proceedings of ISCA. 27--40","author":"Parashar A.","key":"e_1_2_1_54_1","unstructured":"A. Parashar, M. Rhu, A. Mukkara, A. Puglielli, R. Venkatesan, B. Khailany, J. Emer, S. W. Keckler, and W. J. Dally. 2017. SCNN: An accelerator for compressed-sparse convolutional neural networks. In Proceedings of ISCA. 27--40. DOI:https:\/\/doi.org\/10.1145\/3079856.3080254"},{"key":"e_1_2_1_55_1","volume-title":"Proceedings of ISSCC. 492--494","author":"Pawlowski R.","year":"2012","unstructured":"R. Pawlowski, E. Krimer, J. Crop, J. Postman, N. 
Moezzi-Madani, M. Erez, and P. Chiang. 2012. A 530mV 10-lane SIMD processor with variation resiliency in 45nm SOI. In Proceedings of ISSCC. 492--494. DOI:https:\/\/doi.org\/10.1109\/ISSCC.2012.6177105"},{"key":"e_1_2_1_56_1","first-page":"8","article-title":"A heterogeneous multicore system on chip for energy efficient brain inspired computing","volume":"65","author":"Pullini A.","year":"2018","unstructured":"A. Pullini, F. Conti, D. Rossi, I. Loi, M. Gautschi, and L. Benini. 2018. A heterogeneous multicore system on chip for energy efficient brain inspired computing. TCAS II: Express Briefs 65, 8 (Aug. 2018), 1094--1098. DOI:https:\/\/doi.org\/10.1109\/TCSII.2017.2652982","journal-title":"TCAS II: Express Briefs"},{"key":"e_1_2_1_57_1","doi-asserted-by":"publisher","DOI":"10.1145\/3195970.3195998"},{"key":"e_1_2_1_58_1","doi-asserted-by":"publisher","DOI":"10.1145\/3007787.3001165"},{"volume-title":"Ultra-Low-Power Digital Architectures for the Internet of Things","author":"Rossi Davide","key":"e_1_2_1_59_1","unstructured":"Davide Rossi, Igor Loi, Antonio Pullini, and Luca Benini. 2017. Ultra-Low-Power Digital Architectures for the Internet of Things. Springer, Cham, 69--93. DOI:https:\/\/doi.org\/10.1007\/978-3-319-51482-6_3"},{"key":"e_1_2_1_60_1","volume-title":"170--184. DOI:https:\/\/doi.org\/10.1016\/j.sse.2015.11.015","author":"Rossi Davide","year":"2016","unstructured":"Davide Rossi, Antonio Pullini, Igor Loi, Michael Gautschi, Frank K. Gurkaynak, Andrea Bartolini, Philippe Flatresse, and Luca Benini. 2016. A 60 GOPS\/W, &minus;1.8 V to 0.9 V body bias ULP cluster in 28 nm UTBB FD-SOI technology. Solid-State Electron. 117, C (2016), 170--184. DOI:https:\/\/doi.org\/10.1016\/j.sse.2015.11.015"},{"key":"e_1_2_1_61_1","doi-asserted-by":"publisher","DOI":"10.1109\/S3S.2018.8640145"},{"key":"e_1_2_1_62_1","doi-asserted-by":"crossref","unstructured":"Florian Schroff, Dmitry Kalenichenko, and James Philbin. 2015. FaceNet: A unified embedding for face recognition and clustering. Retrieved from http:\/\/arxiv.org\/abs\/1503.03832.","DOI":"10.1109\/CVPR.2015.7298682"},{"key":"e_1_2_1_63_1","volume-title":"Proceedings of ISLPED. 79--84","author":"Seo S.","year":"2010","unstructured":"S. Seo, R. G. Dreslinski, M. Woh, C. Chakrabarti, S. Mahlke, and T. Mudge. 2010. Diet SODA: A power-efficient processor for digital cameras. In Proceedings of ISLPED. 79--84. 
DOI:https:\/\/doi.org\/10.1145\/1840845.1840862"},{"key":"e_1_2_1_64_1","volume-title":"Proceedings of ISCA. 14--26","author":"Shafiee A.","year":"2016","unstructured":"A. Shafiee, A. Nag, N. Muralimanohar, R. Balasubramonian, J. P. Strachan, M. Hu, R. S. Williams, and V. Srikumar. 2016. ISAAC: A convolutional neural network accelerator with in-situ analog arithmetic in crossbars. In Proceedings of ISCA. 14--26. DOI:https:\/\/doi.org\/10.1109\/ISCA.2016.12"},{"key":"e_1_2_1_65_1","unstructured":"Laurent Sifre and St\u00e9phane Mallat. 2014. Rigid-motion scattering for texture classification. Retrieved from http:\/\/arxiv.org\/abs\/1403.1687."},{"key":"e_1_2_1_66_1","volume-title":"Proceedings of ISSCC. 264--265","author":"Sim J.","year":"2016","unstructured":"J. Sim, J. Park, M. Kim, D. Bae, Y. Choi, and L. Kim. 2016. 14.6 A 1.42TOPS\/W deep convolutional neural network recognition processor for intelligent IoE systems. In Proceedings of ISSCC. 264--265. DOI:https:\/\/doi.org\/10.1109\/ISSCC.2016.7418008"},{"key":"e_1_2_1_67_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.vlsi.2017.02.002"},{"key":"e_1_2_1_69_1","volume-title":"Proceedings of IEEE CVPR. 1--9. DOI:https:\/\/doi.org\/10","author":"Szegedy C.","year":"2015","unstructured":"C. Szegedy, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. 2015. Going deeper with convolutions. In Proceedings of IEEE CVPR. 1--9. DOI:https:\/\/doi.org\/10.1109\/CVPR.2015.7298594"},{"key":"e_1_2_1_70_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2014.220"},{"key":"e_1_2_1_71_1","volume-title":"Proceedings of ICASSP. 5484--5488","author":"Tang R.","year":"2018","unstructured":"R. Tang and J. Lin. 2018. Deep residual learning for small-footprint keyword spotting. In Proceedings of ICASSP. 5484--5488. DOI:https:\/\/doi.org\/10.1109\/ICASSP.2018.8462688"},{"key":"e_1_2_1_72_1","volume-title":"Proceedings of NVMSA. 1--6. DOI:https:\/\/doi.org\/10","author":"Tang S.","year":"2017","unstructured":"S. Tang, S. Yin, S. Zheng, P. Ouyang, F. Tu, L. Yao, J. Wu, W. Cheng, L. Liu, and S. Wei. 2017. AEPE: An area and power efficient RRAM crossbar-based accelerator for deep CNNs. In Proceedings of NVMSA. 1--6. DOI:https:\/\/doi.org\/10.1109\/NVMSA.2017.8064475"},{"key":"e_1_2_1_73_1","unstructured":"Chakkrit Termritthikun, Surachet Kanprachar, and Paisarn Muneesawang. 2018. 
NU-LiteNet: Mobile landmark recognition using convolutional neural networks. Retrieved from http:\/\/arxiv.org\/abs\/1810.01074."},{"key":"e_1_2_1_74_1","doi-asserted-by":"publisher","DOI":"10.1145\/2764454"},{"key":"e_1_2_1_75_1","doi-asserted-by":"publisher","DOI":"10.1145\/3195970.3196116"},{"key":"e_1_2_1_76_1","volume-title":"Proceedings of DATE. 1032--1037","author":"Wang S.","year":"2017","unstructured":"S. Wang , D. Zhou , X. Han , and T. Yoshimura . 2017. Chain-NN: An energy-efficient 1D chain architecture for accelerating deep convolutional neural networks . In Proceedings of DATE. 1032--1037 . DOI:https:\/\/doi.org\/10.23919\/DATE. 2017 .7927142 S. Wang, D. Zhou, X. Han, and T. Yoshimura. 2017. Chain-NN: An energy-efficient 1D chain architecture for accelerating deep convolutional neural networks. In Proceedings of DATE. 1032--1037. DOI:https:\/\/doi.org\/10.23919\/DATE.2017.7927142"},{"key":"e_1_2_1_77_1","doi-asserted-by":"crossref","unstructured":"Andrew Waterman Yunsup Lee and David Patterson. 2014. The RISC-V Instruction Set Manual.  Andrew Waterman Yunsup Lee and David Patterson. 2014. The RISC-V Instruction Set Manual.","DOI":"10.1109\/HOTCHIPS.2013.7478332"},{"key":"e_1_2_1_78_1","first-page":"2","article-title":"Minimizing development and maintenance costs in supporting persistently optimized BLAS. Software","volume":"35","author":"Clint Whaley R.","year":"2005","unstructured":"R. Clint Whaley and Antoine Petitet . 2005 . Minimizing development and maintenance costs in supporting persistently optimized BLAS. Software : Pract. Exper. 35 , 2 (Feb. 2005), 101--121. Retrieved from http:\/\/www.cs.utsa.edu\/ whaley\/papers\/spercw04.ps. R. Clint Whaley and Antoine Petitet. 2005. Minimizing development and maintenance costs in supporting persistently optimized BLAS. Software: Pract. Exper. 35, 2 (Feb. 2005), 101--121. Retrieved from http:\/\/www.cs.utsa.edu\/ whaley\/papers\/spercw04.ps.","journal-title":"Pract. 
Exper."},{"key":"e_1_2_1_79_1","doi-asserted-by":"publisher","DOI":"10.1109\/NANOARCH.2017.8053729"},{"key":"e_1_2_1_80_1","volume-title":"Proceedings of HPCA. 476--488","author":"Xu C.","year":"2015","unstructured":"C. Xu , D. Niu , N. Muralimanohar , R. Balasubramonian , T. Zhang , S. Yu , and Y. Xie . 2015. Overcoming the challenges of crossbar resistive memory architectures . In Proceedings of HPCA. 476--488 . DOI:https:\/\/doi.org\/10.1109\/HPCA. 2015 .7056056 C. Xu, D. Niu, N. Muralimanohar, R. Balasubramonian, T. Zhang, S. Yu, and Y. Xie. 2015. Overcoming the challenges of crossbar resistive memory architectures. In Proceedings of HPCA. 476--488. DOI:https:\/\/doi.org\/10.1109\/HPCA.2015.7056056"},{"key":"e_1_2_1_81_1","doi-asserted-by":"publisher","DOI":"10.1145\/2206781.2206786"},{"key":"e_1_2_1_82_1","first-page":"1815","article-title":"Design and optimization of nonvolatile multibit 1T1R resistive RAM","volume":"22","author":"Zangeneh M.","year":"2014","unstructured":"M. Zangeneh and A. Joshi . 2014 . Design and optimization of nonvolatile multibit 1T1R resistive RAM . TVLSI 22 , 8 (2014), 1815 -- 1828 . M. Zangeneh and A. Joshi. 2014. Design and optimization of nonvolatile multibit 1T1R resistive RAM. TVLSI 22, 8 (2014), 1815--1828.","journal-title":"TVLSI"},{"key":"e_1_2_1_83_1","volume-title":"Proceedings of MICRO. 15--28","author":"Zhou X.","year":"2018","unstructured":"X. Zhou , Z. Du , Q. Guo , S. Liu , C. Liu , C. Wang , X. Zhou , L. Li , T. Chen , and Y. Chen . 2018. Cambricon-S: Addressing irregularity in sparse neural networks through a cooperative software\/hardware approach . In Proceedings of MICRO. 15--28 . DOI:https:\/\/doi.org\/10.1109\/MICRO. 2018 .00011 X. Zhou, Z. Du, Q. Guo, S. Liu, C. Liu, C. Wang, X. Zhou, L. Li, T. Chen, and Y. Chen. 2018. Cambricon-S: Addressing irregularity in sparse neural networks through a cooperative software\/hardware approach. In Proceedings of MICRO. 15--28. 
DOI:https:\/\/doi.org\/10.1109\/MICRO.2018.00011"}],"container-title":["ACM Journal on Emerging Technologies in Computing Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3432815","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3432815","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T20:47:11Z","timestamp":1750193231000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3432815"}},"subtitle":["A Multiplication Engine for Accelerating Neural Networks on Ultra-low-power Architectures"],"short-title":[],"issued":{"date-parts":[[2021,4,15]]},"references-count":82,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2021,4,30]]}},"alternative-id":["10.1145\/3432815"],"URL":"https:\/\/doi.org\/10.1145\/3432815","relation":{},"ISSN":["1550-4832","1550-4840"],"issn-type":[{"type":"print","value":"1550-4832"},{"type":"electronic","value":"1550-4840"}],"subject":[],"published":{"date-parts":[[2021,4,15]]},"assertion":[{"value":"2020-04-01","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2020-10-01","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2021-04-15","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}