{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,6,22]],"date-time":"2025-06-22T04:03:28Z","timestamp":1750565008310,"version":"3.41.0"},"publisher-location":"New York, NY, USA","reference-count":69,"publisher":"ACM","license":[{"start":{"date-parts":[[2025,6,20]],"date-time":"2025-06-20T00:00:00Z","timestamp":1750377600000},"content-version":"vor","delay-in-days":0,"URL":"http:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/100000001","name":"National Science Foundation","doi-asserted-by":"publisher","award":["1763699"],"award-info":[{"award-number":["1763699"]}],"id":[{"id":"10.13039\/100000001","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2025,6,21]]},"DOI":"10.1145\/3695053.3731027","type":"proceedings-article","created":{"date-parts":[[2025,6,20]],"date-time":"2025-06-20T16:43:11Z","timestamp":1750437791000},"page":"916-929","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":0,"title":["Single Spike Artificial Neural Networks"],"prefix":"10.1145",
"author":[{"ORCID":"https:\/\/orcid.org\/0009-0004-1416-5532","authenticated-orcid":false,"given":"Rhys","family":"Gretsch","sequence":"first","affiliation":[{"name":"ECE, UC Santa Barbara, Santa Barbara, California, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-6199-4879","authenticated-orcid":false,"given":"Michael","family":"Beyeler","sequence":"additional","affiliation":[{"name":"CS, UC Santa Barbara, Santa Barbara, California, USA"}]},{"ORCID":"https:\/\/orcid.org\/0009-0007-6195-187X","authenticated-orcid":false,"given":"Jeremy","family":"Lau","sequence":"additional","affiliation":[{"name":"CS, UC Santa Barbara, Santa Barbara, California, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6550-6075","authenticated-orcid":false,"given":"Timothy","family":"Sherwood","sequence":"additional","affiliation":[{"name":"CS, UC Santa Barbara, Santa Barbara, California, USA"}]}],"member":"320","published-online":{"date-parts":[[2025,6,20]]},
"reference":[{"unstructured":"2024. https:\/\/mlcommons.org\/benchmarks\/inference-tiny\/","key":"e_1_3_3_1_2_2"},
{"unstructured":"Mart\u00edn Abadi Ashish Agarwal Paul Barham Eugene Brevdo Zhifeng Chen Craig Citro Greg\u00a0S. Corrado Andy Davis Jeffrey Dean Matthieu Devin Sanjay Ghemawat Ian Goodfellow Andrew Harp Geoffrey Irving Michael Isard Yangqing Jia Rafal Jozefowicz Lukasz Kaiser Manjunath Kudlur Josh Levenberg Dandelion Man\u00e9 Rajat Monga Sherry Moore Derek Murray Chris Olah Mike Schuster Jonathon Shlens Benoit Steiner Ilya Sutskever Kunal Talwar Paul Tucker Vincent Vanhoucke Vijay Vasudevan Fernanda Vi\u00e9gas Oriol Vinyals Pete Warden Martin Wattenberg Martin Wicke Yuan Yu and Xiaoqiang Zheng. 2015. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. https:\/\/www.tensorflow.org\/ Software available from tensorflow.org.","key":"e_1_3_3_1_3_2"},
{"unstructured":"Abien\u00a0Fred Agarap. 2018. Deep learning using rectified linear units (ReLU). arXiv preprint arXiv:1803.08375 (2018).","key":"e_1_3_3_1_4_2"},
{"doi-asserted-by":"crossref","unstructured":"Ankur Agrawal Sae\u00a0Kyu Lee Joel Silberman Matthew\u00a0M. Ziegler Mingu Kang Swagath Venkataramani Nianzheng Cao Bruce\u00a0M. Fleischer Michael Guillorn Matthew Cohen Silvia\u00a0Melitta Mueller Jinwook Oh Martin Lutz Jinwook Jung Siyuranga\u00a0O. Koswatta Ching Zhou Vidhi Zalani James Bonanno Robert Casatuta Chia-Yu Chen Jungwook Choi Howard Haynie A. Herbert Radhika Jain Monodeep Kar Kyu-Hyoun Kim Yulong Li Zhibin Ren Scot Rider Marcel Schaal Kerstin Schelm Michael Scheuermann Xiao Sun Hung Tran Naigang Wang Wei Wang Xin Zhang Vinay Shah Brian\u00a0W. Curran Vijayalakshmi Srinivasan Pong-Fei Lu Sunil Shukla Leland Chang and K. Gopalakrishnan. 2021. A 7nm 4-Core AI Chip with 25.6TFLOPS Hybrid FP8 Training 102.4TOPS INT4 Inference and Workload-Aware Throttling. 2021 IEEE International Solid-State Circuits Conference (ISSCC) 64 (2021) 144\u2013146. https:\/\/api.semanticscholar.org\/CorpusID:232152824","key":"e_1_3_3_1_5_2","DOI":"10.1109\/ISSCC42613.2021.9365791"},
{"doi-asserted-by":"crossref","unstructured":"Filipp Akopyan Jun Sawada Andrew Cassidy Rodrigo Alvarez-Icaza John Arthur Paul Merolla Nabil Imam Yutaka Nakamura Pallab Datta Gi-Joon Nam et\u00a0al. 2015. TrueNorth: Design and tool flow of a 65 mW 1 million neuron programmable neurosynaptic chip. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 34 10 (2015) 1537\u20131557.","key":"e_1_3_3_1_6_2","DOI":"10.1109\/TCAD.2015.2474396"},
{"doi-asserted-by":"crossref","unstructured":"Kartik Audhkhasi Osonde Osoba and Bart Kosko. 2016. Noise-enhanced convolutional neural networks. Neural Networks 78 (2016) 15\u201323.","key":"e_1_3_3_1_7_2","DOI":"10.1016\/j.neunet.2015.09.014"},
{"doi-asserted-by":"crossref","unstructured":"Rajeev Balasubramonian Andrew\u00a0B Kahng Naveen Muralimanohar Ali Shafiee and Vaishnav Srinivas. 2017. CACTI 7: New tools for interconnect exploration in innovative off-chip memories. ACM Transactions on Architecture and Code Optimization (TACO) 14 2 (2017) 1\u201325.","key":"e_1_3_3_1_8_2","DOI":"10.1145\/3085572"},
{"unstructured":"Colby Banbury Vijay\u00a0Janapa Reddi Peter Torelli Jeremy Holleman Nat Jeffries Csaba Kiraly Pietro Montino David Kanter Sebastian Ahmed Danilo Pau Urmish Thakker Antonio Torrini Peter Warden Jay Cordaro Giuseppe Di\u00a0Guglielmo Javier Duarte Stephen Gibellini Videet Parekh Honson Tran Nhan Tran Niu Wenxu and Xu Xuesong. 2021. MLPerf Tiny Benchmark. arXiv:2106.07597","key":"e_1_3_3_1_9_2"},
{"doi-asserted-by":"crossref","unstructured":"Romain Brette. 2015. Philosophy of the spike: rate-based vs. spike-based theories of the brain. Frontiers in Systems Neuroscience 9 (2015) 151.","key":"e_1_3_3_1_10_2","DOI":"10.3389\/fnsys.2015.00151"},
{"doi-asserted-by":"publisher","key":"e_1_3_3_1_11_2","DOI":"10.1109\/ISLPED58423.2023.10244267"},{"doi-asserted-by":"publisher","key":"e_1_3_3_1_12_2","DOI":"10.1109\/MICRO.2014.58"},
{"doi-asserted-by":"crossref","unstructured":"Yu-Hsin Chen Tushar Krishna Joel\u00a0S Emer and Vivienne Sze. 2016. Eyeriss: An energy-efficient reconfigurable accelerator for deep convolutional neural networks. IEEE Journal of Solid-State Circuits 52 1 (2016) 127\u2013138.","key":"e_1_3_3_1_13_2","DOI":"10.1109\/JSSC.2016.2616357"},
{"doi-asserted-by":"publisher","key":"e_1_3_3_1_14_2","DOI":"10.1145\/2934583.2934585"},{"doi-asserted-by":"publisher","key":"e_1_3_3_1_15_2","DOI":"10.1109\/ISSCC.2019.8662340"},
{"doi-asserted-by":"crossref","unstructured":"Zhengyu Chen and Jie Gu. 2020. High-throughput dynamic time warping accelerator for time-series classification with pipelined mixed-signal time-domain computing. IEEE Journal of Solid-State Circuits 56 2 (2020) 624\u2013635.","key":"e_1_3_3_1_16_2","DOI":"10.1109\/JSSC.2020.3021066"},
{"doi-asserted-by":"crossref","unstructured":"Matthew\u00a0W Daniels Advait Madhavan Philippe Talatchian Alice Mizrahi and Mark\u00a0D Stiles. 2020. Energy-efficient stochastic computing with superparamagnetic tunnel junctions. Physical Review Applied 13 3 (2020) 034016.","key":"e_1_3_3_1_17_2","DOI":"10.1103\/PhysRevApplied.13.034016"},
{"doi-asserted-by":"crossref","unstructured":"Mike Davies Narayan Srinivasa Tsung-Han Lin Gautham Chinya Yongqiang Cao Sri\u00a0Harsha Choday Georgios Dimou Prasad Joshi Nabil Imam Shweta Jain et\u00a0al. 2018. Loihi: A neuromorphic manycore processor with on-chip learning. IEEE Micro 38 1 (2018) 82\u201399.","key":"e_1_3_3_1_18_2","DOI":"10.1109\/MM.2018.112130359"},
{"doi-asserted-by":"crossref","unstructured":"Asma Dehghani Mohsen Saneei and Ali Mahani. 2016. A high-resolution time-to-digital converter using a three-level resolution. International Journal of Electronics 103 8 (2016) 1248\u20131261.","key":"e_1_3_3_1_19_2","DOI":"10.1080\/00207217.2015.1092599"},
{"doi-asserted-by":"crossref","unstructured":"Xiangyu Dong Cong Xu Yuan Xie and Norman\u00a0P Jouppi. 2012. NVSim: A circuit-level performance energy and area model for emerging nonvolatile memory. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 31 7 (2012) 994\u20131007.","key":"e_1_3_3_1_20_2","DOI":"10.1109\/TCAD.2012.2185930"},
{"unstructured":"Steve\u00a0K Esser Rathinakumar Appuswamy Paul Merolla John\u00a0V Arthur and Dharmendra\u00a0S Modha. 2015. Backpropagation for energy-efficient neuromorphic computing. Advances in Neural Information Processing Systems 28 (2015).","key":"e_1_3_3_1_21_2"},
{"doi-asserted-by":"crossref","unstructured":"Samanwoy Ghosh-Dastidar and Hojjat Adeli. 2009. Spiking neural networks. International Journal of Neural Systems 19 04 (2009) 295\u2013308.","key":"e_1_3_3_1_22_2","DOI":"10.1142\/S0129065709002002"},
{"key":"e_1_3_3_1_23_2","first-page":"4045","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"Gra\u00e7a Rui","year":"2023","unstructured":"Rui Gra\u00e7a, Brian McReynolds, and Tobi Delbruck. 2023. Shining light on the DVS pixel: A tutorial and discussion about biasing and optimization. In Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition. 4045\u20134053."},
{"doi-asserted-by":"publisher","key":"e_1_3_3_1_24_2","DOI":"10.1145\/3620665.3640395"},
{"unstructured":"Andrew\u00a0G Howard. 2017. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861 (2017).","key":"e_1_3_3_1_25_2"},
{"unstructured":"Yangfan Hu Qian Zheng Guoqi Li Huajin Tang and Gang Pan. 2024. Toward Large-scale Spiking Neural Networks: A Comprehensive Survey and Future Directions. arXiv preprint arXiv:2409.02111 (2024).","key":"e_1_3_3_1_26_2"},
{"doi-asserted-by":"publisher","key":"e_1_3_3_1_27_2","DOI":"10.1109\/CVPR.2018.00286"},{"doi-asserted-by":"publisher","key":"e_1_3_3_1_28_2","DOI":"10.1145\/3079856.3080246"},
{"doi-asserted-by":"crossref","unstructured":"Edin Kadric Paul Gurniak and Andr\u00e9 DeHon. 2016. Accurate parallel floating-point accumulation. IEEE Trans. Comput. 65 11 (2016) 3224\u20133238.","key":"e_1_3_3_1_29_2","DOI":"10.1109\/TC.2016.2532874"},
{"unstructured":"Anders Krogh and John Hertz. 1991. A simple weight decay can improve generalization. Advances in Neural Information Processing Systems 4 (1991).","key":"e_1_3_3_1_30_2"},
{"doi-asserted-by":"crossref","unstructured":"Can Li Miao Hu Yunning Li Hao Jiang Ning Ge Eric Montgomery Jiaming Zhang Wenhao Song Noraica D\u00e1vila Catherine\u00a0E Graves et\u00a0al. 2018. Analogue signal and image processing with large memristor crossbars. Nature Electronics 1 1 (2018) 52\u201359.","key":"e_1_3_3_1_31_2","DOI":"10.1038\/s41928-017-0002-z"},
{"doi-asserted-by":"crossref","unstructured":"Yang Li Dongcheng Zhao and Yi Zeng. 2022. BSNN: Towards faster and better conversion of artificial neural networks to spiking neural networks with bistable neurons. Frontiers in Neuroscience 16 (2022) 991851.","key":"e_1_3_3_1_32_2","DOI":"10.3389\/fnins.2022.991851"},
{"doi-asserted-by":"crossref","unstructured":"Yidong Liu Siting Liu Yanzhi Wang Fabrizio Lombardi and Jie Han. 2020. A survey of stochastic computing neural networks for machine learning applications. IEEE Transactions on Neural Networks and Learning Systems 32 7 (2020) 2809\u20132824.","key":"e_1_3_3_1_33_2","DOI":"10.1109\/TNNLS.2020.3009047"},
{"unstructured":"Sangkug Lym and Mattan Erez. 2020. FlexSA: Flexible systolic array architecture for efficient pruned DNN model training. arXiv preprint arXiv:2004.13027 (2020).","key":"e_1_3_3_1_34_2"},
{"doi-asserted-by":"crossref","unstructured":"Advait Madhavan Matthew\u00a0W Daniels and Mark\u00a0D Stiles. 2021. Temporal state machines: Using temporal memory to stitch time-based graph computations. ACM Journal on Emerging Technologies in Computing Systems (JETC) 17 3 (2021) 1\u201327.","key":"e_1_3_3_1_35_2","DOI":"10.1145\/3451214"},
{"doi-asserted-by":"crossref","unstructured":"Advait Madhavan Timothy Sherwood and Dmitri Strukov. 2014. Race logic: A hardware acceleration for dynamic programming algorithms. ACM SIGARCH Computer Architecture News 42 3 (2014) 517\u2013528.","key":"e_1_3_3_1_36_2","DOI":"10.1145\/2678373.2665747"},
{"doi-asserted-by":"publisher","key":"e_1_3_3_1_37_2","DOI":"10.1109\/CICC.2017.7993630"},{"doi-asserted-by":"publisher","key":"e_1_3_3_1_38_2","DOI":"10.1109\/ISCAS45731.2020.9180662"},{"doi-asserted-by":"publisher","key":"e_1_3_3_1_39_2","DOI":"10.1109\/MicroCom.2016.7522456"},
{"doi-asserted-by":"publisher","unstructured":"M. Maymandi-Nejad and M. Sachdev. 2003. A digitally programmable delay element: design and analysis. IEEE Transactions on Very Large Scale Integration (VLSI) Systems 11 5 (2003) 871\u2013878. 10.1109\/TVLSI.2003.810787","key":"e_1_3_3_1_40_2","DOI":"10.1109\/TVLSI.2003.810787"},
{"unstructured":"Daisuke Miyashita Edward\u00a0H Lee and Boris Murmann. 2016. Convolutional neural networks using logarithmic data representation. arXiv preprint arXiv:1603.01025 (2016).","key":"e_1_3_3_1_41_2"},
{"doi-asserted-by":"crossref","unstructured":"Xunjun Mo Jiaqi Wu Nijwm Wary and Tony\u00a0Chan Carusone. 2021. Design Methodologies for Low-Jitter CMOS Clock Distribution. IEEE Open Journal of the Solid-State Circuits Society 1 (2021) 94\u2013103. https:\/\/api.semanticscholar.org\/CorpusID:238750409","key":"e_1_3_3_1_42_2","DOI":"10.1109\/OJSSCS.2021.3117930"},
{"unstructured":"Markus Nagel Marios Fournarakis Rana\u00a0Ali Amjad Yelysei Bondarenko Mart Van\u00a0Baalen and Tijmen Blankevoort. 2021. A white paper on neural network quantization. arXiv preprint arXiv:2106.08295 (2021).","key":"e_1_3_3_1_43_2"},
{"doi-asserted-by":"publisher","unstructured":"Jens\u00a0E. Pedersen Steven Abreu Matthias Jobst Gregor Lenz Vittorio Fra Felix\u00a0Christian Bauer Dylan\u00a0Richard Muir Peng Zhou Bernhard Vogginger Kade Heckel Gianvito Urgese Sadasivan Shankar Terrence\u00a0C. Stewart Sadique Sheik and Jason\u00a0K. Eshraghian. 2024. Neuromorphic intermediate representation: A unified instruction set for interoperable brain-inspired computing. Nature Communications 15 1 (Sept. 2024) 8122. 10.1038\/s41467-024-52259-9","key":"e_1_3_3_1_44_2","DOI":"10.1038\/s41467-024-52259-9"},
{"doi-asserted-by":"crossref","unstructured":"Juan Riquelme and Ioannis Vourkas. 2024. A Star Network of Bipolar Memristive Devices Enables Sensing and Temporal Computing. Sensors 24 2 (2024) 512.","key":"e_1_3_3_1_45_2","DOI":"10.3390\/s24020512"},
{"doi-asserted-by":"publisher","key":"e_1_3_3_1_46_2","DOI":"10.1109\/WF-IoT51360.2021.9595467"},{"doi-asserted-by":"publisher","key":"e_1_3_3_1_47_2","DOI":"10.1109\/ISPASS48437.2020.00016"},
{"unstructured":"Ananda Samajdar Yuhao Zhu Paul Whatmough Matthew Mattina and Tushar Krishna. 2018. SCALE-Sim: Systolic CNN accelerator simulator. arXiv preprint arXiv:1811.02883 (2018).","key":"e_1_3_3_1_48_2"},
{"doi-asserted-by":"publisher","key":"e_1_3_3_1_49_2","DOI":"10.1109\/ISCAS51556.2021.9401196"},
{"doi-asserted-by":"crossref","unstructured":"Aseem Sayal Shirin Fathima SS\u00a0Teja Nibhanupudi and Jaydeep\u00a0P Kulkarni. 2020. Compac: Compressed time-domain pooling-aware convolution CNN engine with reduced data movement for energy-efficient AI computing. IEEE Journal of Solid-State Circuits 56 7 (2020) 2205\u20132220.","key":"e_1_3_3_1_50_2","DOI":"10.1109\/JSSC.2020.3041502"},
{"doi-asserted-by":"publisher","key":"e_1_3_3_1_51_2","DOI":"10.1109\/CVPR52733.2024.02600"},{"doi-asserted-by":"publisher","key":"e_1_3_3_1_52_2","DOI":"10.1109\/ISCA.2018.00033"},
{"unstructured":"Nitish Srivastava Geoffrey Hinton Alex Krizhevsky Ilya Sutskever and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research 15 1 (2014) 1929\u20131958.","key":"e_1_3_3_1_53_2"},
{"doi-asserted-by":"crossref","unstructured":"Dmitri\u00a0B Strukov Gregory\u00a0S Snider Duncan\u00a0R Stewart and R\u00a0Stanley Williams. 2008. The missing memristor found. Nature 453 7191 (2008) 80\u201383.","key":"e_1_3_3_1_54_2","DOI":"10.1038\/nature06932"},
{"doi-asserted-by":"crossref","unstructured":"Corinne Teeter Ramakrishnan Iyer Vilas Menon Nathan Gouwens David Feng Jim Berg Aaron Szafer Nicholas Cain Hongkui Zeng Michael Hawrylycz et\u00a0al. 2018. Generalized leaky integrate-and-fire models classify multiple neuron types. Nature Communications 9 1 (2018) 709.","key":"e_1_3_3_1_55_2","DOI":"10.1038\/s41467-017-02717-4"},
{"doi-asserted-by":"publisher","key":"e_1_3_3_1_56_2","DOI":"10.1109\/ISSCC42613.2021.9366058"},{"doi-asserted-by":"publisher","key":"e_1_3_3_1_57_2","DOI":"10.1145\/3297858.3304036"},
{"doi-asserted-by":"crossref","unstructured":"Hamed Vakili Mohammad\u00a0Nazmus Sakib Samiran Ganguly Mircea Stan Matthew\u00a0W Daniels Advait Madhavan Mark\u00a0D Stiles and Avik\u00a0W Ghosh. 2020. Temporal memory with magnetic racetracks. IEEE Journal on Exploratory Solid-State Computational Devices and Circuits 6 2 (2020) 107\u2013115.","key":"e_1_3_3_1_58_2","DOI":"10.1109\/JXCDC.2020.3022381"},
{"doi-asserted-by":"crossref","unstructured":"Yuchen Wang Hanwen Liu Malu Zhang Xiaoling Luo and Hong Qu. 2024. A universal ANN-to-SNN framework for achieving high accuracy and low latency deep Spiking Neural Networks. Neural Networks 174 (2024) 106244.","key":"e_1_3_3_1_59_2","DOI":"10.1016\/j.neunet.2024.106244"},
{"doi-asserted-by":"crossref","unstructured":"Refael Whyte Lee Streeter Michael\u00a0J Cree and Adrian\u00a0A Dorrington. 2015. Application of lidar techniques to time-of-flight range imaging. Applied Optics 54 33 (2015) 9654\u20139664.","key":"e_1_3_3_1_60_2","DOI":"10.1364\/AO.54.009654"},
{"doi-asserted-by":"publisher","key":"e_1_3_3_1_61_2","DOI":"10.1109\/LASCAS.2013.6519040"},{"doi-asserted-by":"publisher","key":"e_1_3_3_1_62_2","DOI":"10.1109\/ISCA45697.2020.00040"},{"doi-asserted-by":"publisher","key":"e_1_3_3_1_63_2","DOI":"10.1109\/HPCA53966.2022.00010"},
{"unstructured":"Hao Wu Patrick Judd Xiaojie Zhang Mikhail Isaev and Paulius Micikevicius. 2020. Integer quantization for deep learning inference: Principles and empirical evaluation. arXiv preprint arXiv:2004.09602 (2020).","key":"e_1_3_3_1_64_2"},
{"doi-asserted-by":"publisher","key":"e_1_3_3_1_65_2","DOI":"10.1007\/978-1-4419-0261-0"},
{"doi-asserted-by":"crossref","unstructured":"Lei Yang Zheyu Yan Meng Li Hyoukjun Kwon Liangzhen Lai Tushar Krishna Vikas Chandra Weiwen Jiang and Yiyu Shi. 2020. Co-Exploration of Neural Architectures and Heterogeneous ASIC Accelerator Designs Targeting Multiple Tasks. arXiv:2002.04116\u00a0[cs.LG]","key":"e_1_3_3_1_66_2","DOI":"10.1109\/DAC18072.2020.9218676"},
{"doi-asserted-by":"publisher","unstructured":"Ruokai Yin Abhishek Moitra Abhiroop Bhattacharjee Youngeun Kim and Priyadarshini Panda. 2023. SATA: Sparsity-Aware Training Accelerator for Spiking Neural Networks. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 42 6 (2023) 1926\u20131938. 10.1109\/TCAD.2022.3213211","key":"e_1_3_3_1_67_2","DOI":"10.1109\/TCAD.2022.3213211"},
{"doi-asserted-by":"publisher","key":"e_1_3_3_1_68_2","DOI":"10.1145\/3489517.3530502"},
{"doi-asserted-by":"crossref","unstructured":"Jiawei Zhao Steve Dai Rangharajan Venkatesan Brian Zimmer Mustafa Ali Ming-Yu Liu Brucek Khailany William\u00a0J Dally and Anima Anandkumar. 2022. LNS-Madam: Low-precision training in logarithmic number system using multiplicative weight update. IEEE Trans. Comput. 71 12 (2022) 3179\u20133190.","key":"e_1_3_3_1_69_2","DOI":"10.1109\/TC.2022.3202747"},
{"doi-asserted-by":"crossref","unstructured":"Wei Zhao and Yu Cao. 2006. New generation of predictive technology model for sub-45 nm early design exploration. IEEE Transactions on Electron Devices 53 11 (2006) 2816\u20132823.","key":"e_1_3_3_1_70_2","DOI":"10.1109\/TED.2006.884077"}],
"event":{"sponsor":["SIGARCH ACM Special Interest Group on Computer Architecture"],"acronym":"SIGARCH '25","name":"ISCA '25: Proceedings of the 52nd Annual International Symposium on Computer Architecture","location":"Tokyo Japan"},"container-title":["Proceedings of the 52nd Annual International Symposium on Computer Architecture"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3695053.3731027","content-type":"application\/pdf","content-version":"vor","intended-application":"syndication"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3695053.3731027","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,21]],"date-time":"2025-06-21T11:01:00Z","timestamp":1750503660000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3695053.3731027"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,6,20]]},"references-count":69,"alternative-id":["10.1145\/3695053.3731027","10.1145\/3695053"],"URL":"https:\/\/doi.org\/10.1145\/3695053.3731027","relation":{},"subject":[],"published":{"date-parts":[[2025,6,20]]},"assertion":[{"value":"2025-06-20","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}