{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,10]],"date-time":"2026-03-10T13:07:51Z","timestamp":1773148071030,"version":"3.50.1"},"reference-count":43,"publisher":"Association for Computing Machinery (ACM)","issue":"6","license":[{"start":{"date-parts":[[2022,11,30]],"date-time":"2022-11-30T00:00:00Z","timestamp":1669766400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"Thales Research Technology (Fr), ANRT (Association Nationale de Recherche & Technologie), and University Cote d\u2019Azur"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Embed. Comput. Syst."],"published-print":{"date-parts":[[2022,11,30]]},"abstract":"<jats:p>Spiking neural networks are expected to bring high resource, power, and energy efficiency to machine learning hardware implementations. In this regard, they could facilitate the integration of Artificial Intelligence in highly constrained embedded systems, such as image classification in drones or satellites. While their logic resource efficiency is widely accepted in the literature, their energy efficiency remains debated. In this article, a novel high-level metric is used to characterize the expected energy efficiency gain when using Spiking Neural Networks (SNN) instead of Formal Neural Networks (FNN) for hardware implementation: Synaptic Activity Ratio (SAR). This metric is applied to a selection of classification tasks including images and 1D signals. Moreover, a high-level estimator for logic resources, power usage, execution time, and energy is introduced for neural network hardware implementations on FPGA, based on four existing accelerator architectures covering both sequential and parallel implementation paradigms for both spiking and formal coding domains. 
This estimator is used to evaluate the reliability of the Synaptic Activity Ratio metric in characterizing spiking neural network energy efficiency gain on the proposed dataset benchmark. This study leads to the conclusion that the spiking domain offers significant power and energy savings in sequential implementations. It also shows that synaptic activity is a critical factor that must be taken into account when addressing low-energy systems.<\/jats:p>","DOI":"10.1145\/3520133","type":"journal-article","created":{"date-parts":[[2022,3,21]],"date-time":"2022-03-21T12:37:55Z","timestamp":1647866275000},"page":"1-26","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":10,"title":["Synaptic Activity and Hardware Footprint of Spiking Neural Networks in Digital Neuromorphic Systems"],"prefix":"10.1145","volume":"21","author":[{"given":"Edgar","family":"Lemaire","sequence":"first","affiliation":[{"name":"Thales Research &amp; Technology, France and LEAT, University Cote d\u2019Azur, France"}]},{"given":"Beno\u00eet","family":"Miramond","sequence":"additional","affiliation":[{"name":"LEAT, University Cote d\u2019Azur, France"}]},{"given":"S\u00e9bastien","family":"Bilavarn","sequence":"additional","affiliation":[{"name":"LEAT, University Cote d\u2019Azur, France"}]},{"given":"Hadi","family":"Saoud","sequence":"additional","affiliation":[{"name":"Thales Research Technology, France"}]},{"given":"Nassim","family":"Abderrahmane","sequence":"additional","affiliation":[{"name":"LEAT, University Cote d\u2019Azur, France"}]}],"member":"320","published-online":{"date-parts":[[2022,12,12]]},"reference":[{"key":"e_1_3_2_2_2","doi-asserted-by":"publisher","DOI":"10.1016\/S0361-9230(99)00161-6"},{"key":"e_1_3_2_3_2","volume-title":"Hardware Design of Spiking Neural Networks for Energy Efficient Brain-inspired Computing","author":"Abderrahmane Nassim","year":"2020","unstructured":"Nassim Abderrahmane. 2020. 
Hardware Design of Spiking Neural Networks for Energy Efficient Brain-inspired Computing. Ph.D. Dissertation. Universit\u00e9 C\u00f4te d\u2019Azur."},{"key":"e_1_3_2_4_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.neunet.2019.09.024"},{"key":"e_1_3_2_5_2","article-title":"Deep learning using rectified linear units (ReLU)","author":"Agarap Abien Fred","year":"2018","unstructured":"Abien Fred Agarap. 2018. Deep learning using rectified linear units (ReLU). arXiv preprint arXiv:1803.08375 (2018).","journal-title":"arXiv preprint arXiv:1803.08375"},{"key":"e_1_3_2_6_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.neunet.2018.01.005"},{"key":"e_1_3_2_7_2","article-title":"N2D2-neural network design & deployment","author":"Bichler O.","year":"2017","unstructured":"O. Bichler, D. Briand, V. Gacoin, B. Bertelone, T. Allenet, and J. C. Thiele. 2017. N2D2-neural network design & deployment. Manual Available on Github (2017).","journal-title":"Manual Available on Github"},{"key":"e_1_3_2_8_2","doi-asserted-by":"publisher","DOI":"10.1371\/journal.pcbi.1004566"},{"key":"e_1_3_2_9_2","article-title":"A branching and merging convolutional network with homogeneous filter capsules","author":"Byerly Adam","year":"2020","unstructured":"Adam Byerly, Tatiana Kalganova, and Ian Dear. 2020. A branching and merging convolutional network with homogeneous filter capsules. 
arXiv preprint arXiv:2001.09136 (2020).","journal-title":"arXiv preprint arXiv:2001.09136"},{"key":"e_1_3_2_10_2","doi-asserted-by":"publisher","DOI":"10.3389\/fnins.2018.00063"},{"key":"e_1_3_2_11_2","doi-asserted-by":"publisher","DOI":"10.1007\/s11263-014-0788-3"},{"key":"e_1_3_2_12_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-642-33269-2_15"},{"key":"e_1_3_2_13_2","doi-asserted-by":"publisher","DOI":"10.3389\/fnins.2021.651141"},{"key":"e_1_3_2_14_2","doi-asserted-by":"publisher","DOI":"10.1145\/3400302.3415608"},{"key":"e_1_3_2_15_2","doi-asserted-by":"publisher","DOI":"10.1126\/science.1149639"},{"key":"e_1_3_2_16_2","doi-asserted-by":"publisher","DOI":"10.1016\/0893-6080(88)90023-8"},{"key":"e_1_3_2_17_2","unstructured":"Muhammad K. A. Hamdan. 2018. VHDL auto-generation tool for optimized hardware acceleration of convolutional neural networks on FPGA (VGT). (2018)."},{"key":"e_1_3_2_18_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-58607-2_23"},{"key":"e_1_3_2_19_2","doi-asserted-by":"publisher","DOI":"10.1109\/IJCNN.2016.7727303"},{"key":"e_1_3_2_20_2","doi-asserted-by":"publisher","DOI":"10.1162\/neco_a_01245"},{"key":"e_1_3_2_21_2","doi-asserted-by":"publisher","DOI":"10.1109\/IJCNN.2018.8489241"},{"key":"e_1_3_2_22_2","doi-asserted-by":"publisher","DOI":"10.5281\/zenodo.3515934"},{"key":"e_1_3_2_23_2","first-page":"194","volume-title":"ACM\/SIGDA International Symposium on Field-programmable Gate Arrays","author":"Khodamoradi Alireza","year":"2021","unstructured":"Alireza Khodamoradi, Kristof Denolf, and Ryan Kastner. 2021. S2N2: A FPGA accelerator for streaming spiking neural networks. In ACM\/SIGDA International Symposium on Field-programmable Gate Arrays. 194\u2013205."},{"issue":"7","key":"e_1_3_2_24_2","first-page":"1","article-title":"Convolutional deep belief networks on CIFAR-10","volume":"40","author":"Krizhevsky Alex","year":"2010","unstructured":"Alex Krizhevsky and Geoff Hinton. 2010. 
Convolutional deep belief networks on CIFAR-10. Unpublished Manuscript 40, 7 (2010), 1\u20139.","journal-title":"Unpublished Manuscript"},{"key":"e_1_3_2_25_2","unstructured":"Alex Krizhevsky Geoffrey Hinton et\u00a0al. 2009. Learning multiple layers of features from tiny images. (2009)."},{"key":"e_1_3_2_26_2","doi-asserted-by":"publisher","DOI":"10.1109\/WACV48630.2021.00400"},{"key":"e_1_3_2_27_2","doi-asserted-by":"publisher","DOI":"10.1109\/5.726791"},{"key":"e_1_3_2_28_2","first-page":"1","volume-title":"IEEE International Symposium on Circuits and Systems (ISCAS)","author":"Lemaire Edgar","year":"2020","unstructured":"Edgar Lemaire, Matthieu Moretti, Lionel Daniel, Beno\u00eet Miramond, Philippe Millet, Fr\u00e9d\u00e9ric Feresin, and S\u00e9bastien Bilavarn. 2020. An FPGA-based hybrid neural network accelerator for embedded satellite image classification. In IEEE International Symposium on Circuits and Systems (ISCAS). IEEE, 1\u20135."},{"key":"e_1_3_2_29_2","doi-asserted-by":"publisher","DOI":"10.1109\/MSP.2019.2931595"},{"key":"e_1_3_2_30_2","article-title":"Deep spiking networks","author":"O\u2019Connor Peter","year":"2016","unstructured":"Peter O\u2019Connor and Max Welling. 2016. Deep spiking networks. arXiv preprint arXiv:1602.08323 (2016).","journal-title":"arXiv preprint arXiv:1602.08323"},{"key":"e_1_3_2_31_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2015.2392947"},{"key":"e_1_3_2_32_2","doi-asserted-by":"publisher","DOI":"10.1109\/JSTSP.2018.2797022"},{"key":"e_1_3_2_33_2","doi-asserted-by":"publisher","DOI":"10.3389\/fnins.2020.00653"},{"key":"e_1_3_2_34_2","doi-asserted-by":"publisher","DOI":"10.1523\/ENEURO.0134-15.2016"},{"key":"e_1_3_2_35_2","doi-asserted-by":"crossref","unstructured":"Nitin Rathi and Kaushik Roy. 2021. DIET-SNN: A Low-Latency Spiking Neural Network with Direct Input Encoding & Leakage and Threshold Optimization. 
Retrieved from: https:\/\/openreview.net\/forum?id=u_bGm5lrm72.","DOI":"10.1109\/TNNLS.2021.3111897"},{"key":"e_1_3_2_36_2","doi-asserted-by":"publisher","DOI":"10.1038\/323533a0"},{"key":"e_1_3_2_37_2","doi-asserted-by":"publisher","DOI":"10.3389\/fnins.2019.00095"},{"key":"e_1_3_2_38_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.neunet.2012.02.016"},{"key":"e_1_3_2_39_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.neunet.2018.12.002"},{"key":"e_1_3_2_40_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-1-4615-4831-7_19"},{"key":"e_1_3_2_41_2","doi-asserted-by":"publisher","DOI":"10.3389\/fnins.2013.00014"},{"key":"e_1_3_2_42_2","article-title":"Speech commands: A dataset for limited-vocabulary speech recognition","author":"Warden Pete","year":"2018","unstructured":"Pete Warden. 2018. Speech commands: A dataset for limited-vocabulary speech recognition. arXiv preprint arXiv:1804.03209 (2018).","journal-title":"arXiv preprint arXiv:1804.03209"},{"key":"e_1_3_2_43_2","doi-asserted-by":"publisher","DOI":"10.1109\/IJCNN.2019.8852380"},{"key":"e_1_3_2_44_2","unstructured":"Liu Zhejun. 2019. ResNet for Radio Recognition. 
Retrieved from: https:\/\/github.com\/liuzhejun\/ResNet-for-Radio-Recognition."}],"container-title":["ACM Transactions on Embedded Computing Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3520133","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3520133","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T18:10:32Z","timestamp":1750183832000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3520133"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,11,30]]},"references-count":43,"journal-issue":{"issue":"6","published-print":{"date-parts":[[2022,11,30]]}},"alternative-id":["10.1145\/3520133"],"URL":"https:\/\/doi.org\/10.1145\/3520133","relation":{},"ISSN":["1539-9087","1558-3465"],"issn-type":[{"value":"1539-9087","type":"print"},{"value":"1558-3465","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,11,30]]},"assertion":[{"value":"2021-07-15","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2022-02-19","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2022-12-12","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}