{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,3]],"date-time":"2026-01-03T22:54:55Z","timestamp":1767480895474,"version":"3.41.0"},"reference-count":54,"publisher":"Association for Computing Machinery (ACM)","issue":"3","license":[{"start":{"date-parts":[[2024,7,31]],"date-time":"2024-07-31T00:00:00Z","timestamp":1722384000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["62074100"],"award-info":[{"award-number":["62074100"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["J. Emerg. Technol. Comput. Syst."],"published-print":{"date-parts":[[2024,7,31]]},"abstract":"<jats:p>\n            Artificial Neural Networks (ANNs) have achieved remarkable performance in many artificial intelligence tasks. As the application scenarios become more sophisticated, the computation and energy consumption of ANNs are also constantly increasing, which poses a challenge for deploying ANNs on energy-constrained devices. Spiking Neural Networks (SNNs) provide a promising solution to build energy-efficient neural networks. However, the current training methods of SNNs cannot output values as precise as ANNs. This limits the applications of SNNs to relatively simple image classification tasks. In this article, we extend the application of SNNs to neural rendering tasks and propose an energy-efficient spiking neural rendering model, called Spiking-NeRF (Spiking Neural Radiance Fields). We first analyze the ANN-to-SNN conversion theory and propose an output scheme for SNNs to obtain the precise scene property values. 
Then we customize the parameter normalization method for the special network architecture of neural rendering. Furthermore, we present an early termination strategy (ETS) based on the discrete nature of spikes to reduce energy consumption. We evaluate the performance of Spiking-NeRF on both realistic and synthetic scenes. Experimental results show that Spiking-NeRF can achieve comparable rendering performance to ANN-based NeRF with up to\n            <jats:inline-formula content-type=\"math\/tex\">\n              <jats:tex-math notation=\"LaTeX\" version=\"MathJax\">\\(2.27\\times\\)<\/jats:tex-math>\n            <\/jats:inline-formula>\n            energy reduction.\n          <\/jats:p>","DOI":"10.1145\/3675808","type":"journal-article","created":{"date-parts":[[2024,7,11]],"date-time":"2024-07-11T17:26:07Z","timestamp":1720718767000},"page":"1-23","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":6,"title":["Spiking-NeRF: Spiking Neural Network for Energy-Efficient Neural Rendering"],"prefix":"10.1145","volume":"20","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-8506-7946","authenticated-orcid":false,"given":"Ziwen","family":"Li","sequence":"first","affiliation":[{"name":"School of Information Science and Technology, ShanghaiTech University, Shanghai, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4360-2098","authenticated-orcid":false,"given":"Yu","family":"Ma","sequence":"additional","affiliation":[{"name":"School of Information Science and Technology, ShanghaiTech University, Shanghai, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8638-2174","authenticated-orcid":false,"given":"Jindong","family":"Zhou","sequence":"additional","affiliation":[{"name":"School of Information Science and Technology, ShanghaiTech University, Shanghai, 
China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9515-9302","authenticated-orcid":false,"given":"Pingqiang","family":"Zhou","sequence":"additional","affiliation":[{"name":"School of Information Science and Technology, ShanghaiTech University, Shanghai, China"}]}],"member":"320","published-online":{"date-parts":[[2024,8,26]]},"reference":[{"key":"e_1_3_1_2_2","doi-asserted-by":"publisher","DOI":"10.1109\/TCAD.2015.2474396"},{"key":"e_1_3_1_3_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV48922.2021.00580"},{"key":"e_1_3_1_4_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV48922.2021.01245"},{"key":"e_1_3_1_5_2","unstructured":"Tong Bu Jianhao Ding Zhaofei Yu and Tiejun Huang. 2022. Optimized Potential Initialization for Low-Latency Spiking Neural Networks. arXiv: 2202.01440. Retrieved from https:\/\/arxiv.org\/abs\/2202.01440"},{"key":"e_1_3_1_6_2","doi-asserted-by":"publisher","DOI":"10.1007\/s11263-014-0788-3"},{"key":"e_1_3_1_7_2","doi-asserted-by":"publisher","DOI":"10.1145\/3203205"},{"key":"e_1_3_1_8_2","doi-asserted-by":"publisher","DOI":"10.1109\/JETCAS.2019.2910232"},{"key":"e_1_3_1_9_2","unstructured":"Jungwook Choi Zhuo Wang Swagath Venkataramani Pierce I-Jen Chuang Vijayalakshmi Srinivasan and Kailash Gopalakrishnan. 2018. PACT: Parameterized Clipping Activation for Quantized Neural Networks. arXiv: 1805.06085. Retrieved from https:\/\/arxiv.org\/abs\/1805.06085"},{"key":"e_1_3_1_10_2","unstructured":"Shikuang Deng and Shi Gu. 2021. Optimal Conversion of Conventional Artificial Neural Networks to Spiking Neural Networks. arXiv: 2103.00476. 
Retrieved from https:\/\/arxiv.org\/abs\/2103.00476"},{"key":"e_1_3_1_11_2","doi-asserted-by":"publisher","DOI":"10.1109\/IJCNN.2015.7280696"},{"key":"e_1_3_1_12_2","doi-asserted-by":"publisher","DOI":"10.3389\/fncom.2018.00024"},{"key":"e_1_3_1_13_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR46437.2021.00854"},{"key":"e_1_3_1_14_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.01357"},{"key":"e_1_3_1_15_2","first-page":"1135","article-title":"Learning Both Weights and Connections for Efficient Neural Network","volume":"28","author":"Han Song","year":"2015","unstructured":"Song Han, Jeff Pool, John Tran, and William Dally. 2015. Learning Both Weights and Connections for Efficient Neural Network. Advances in Neural Information Processing Systems (NeurIPS) 28 (2015), 1135\u20131143.","journal-title":"Advances in Neural Information Processing Systems (NeurIPS)"},{"key":"e_1_3_1_16_2","first-page":"1","volume-title":"Handout","author":"Heeger David","year":"2000","unstructured":"David Heeger. 2000. Poisson Model of Spike Generation. Handout, University of Stanford 5, 1\u201313 (2000), 76."},{"key":"e_1_3_1_17_2","doi-asserted-by":"publisher","DOI":"10.1109\/DAC18074.2021.9586266"},{"key":"e_1_3_1_18_2","doi-asserted-by":"publisher","DOI":"10.1113\/jphysiol.1952.sp004764"},{"key":"e_1_3_1_19_2","unstructured":"Eric Hunsberger and Chris Eliasmith. 2015. Spiking Deep Networks with LIF Neurons. arXiv: 1510.08829. Retrieved from https:\/\/arxiv.org\/abs\/1510.08829"},{"key":"e_1_3_1_20_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2019.00143"},{"key":"e_1_3_1_21_2","doi-asserted-by":"publisher","DOI":"10.1162\/tacl_a_00065"},{"key":"e_1_3_1_22_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v34i07.6787"},{"key":"e_1_3_1_23_2","unstructured":"Diederik P. Kingma and Jimmy Ba. 2014. Adam: A Method for Stochastic Optimization. arXiv: 1412.6980. 
Retrieved from https:\/\/arxiv.org\/abs\/1412.6980"},{"key":"e_1_3_1_24_2","first-page":"1097","article-title":"Imagenet Classification with Deep Convolutional Neural Networks","volume":"25","author":"Krizhevsky Alex","year":"2012","unstructured":"Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. 2012. Imagenet Classification with Deep Convolutional Neural Networks. Advances in Neural Information Processing Systems (NeurIPS) 25 (2012), 1097\u20131105.","journal-title":"Advances in Neural Information Processing Systems (NeurIPS)"},{"key":"e_1_3_1_25_2","doi-asserted-by":"publisher","DOI":"10.3389\/fnins.2020.00119"},{"key":"e_1_3_1_26_2","doi-asserted-by":"publisher","DOI":"10.3389\/fnins.2016.00508"},{"key":"e_1_3_1_27_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR46437.2021.00643"},{"key":"e_1_3_1_28_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV48922.2021.00523"},{"key":"e_1_3_1_29_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR46437.2021.00713"},{"key":"e_1_3_1_30_2","doi-asserted-by":"publisher","DOI":"10.1109\/2945.468400"},{"key":"e_1_3_1_31_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-58452-8_24"},{"key":"e_1_3_1_32_2","doi-asserted-by":"publisher","DOI":"10.1145\/3528223.3530127"},{"key":"e_1_3_1_33_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2019.2896880"},{"key":"e_1_3_1_34_2","doi-asserted-by":"publisher","DOI":"10.1109\/MSP.2019.2931595"},{"key":"e_1_3_1_35_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR46437.2021.00288"},{"key":"e_1_3_1_36_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00025"},{"key":"e_1_3_1_37_2","unstructured":"Keunhong Park Utkarsh Sinha Peter Hedman Jonathan T Barron Sofien Bouaziz Dan B Goldman Ricardo Martin-Brualla and Steven M Seitz. 2021. HyperNeRF: A Higher-dimensional Representation for Topologically Varying Neural Radiance Fields. arXiv: 2106.13228. 
Retrieved from https:\/\/arxiv.org\/abs\/2106.13228"},{"key":"e_1_3_1_38_2","first-page":"8024","article-title":"PyTorch: An Imperative Style, High-Performance Deep Learning Library","volume":"32","author":"Paszke Adam","year":"2019","unstructured":"Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas K\u00f6pf, Edward Yang, Zach DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: An Imperative Style, High-Performance Deep Learning Library. Advances in Neural Information Processing Systems (NeurIPS) 32 (2019), 8024\u20138035.","journal-title":"Advances in Neural Information Processing Systems (NeurIPS)"},{"key":"e_1_3_1_39_2","doi-asserted-by":"publisher","DOI":"10.3389\/fnins.2015.00141"},{"key":"e_1_3_1_40_2","doi-asserted-by":"publisher","DOI":"10.1145\/3550454.3555505"},{"key":"e_1_3_1_41_2","doi-asserted-by":"publisher","DOI":"10.1038\/s41586-019-1677-2"},{"key":"e_1_3_1_42_2","doi-asserted-by":"publisher","DOI":"10.3389\/fnins.2017.00682"},{"key":"e_1_3_1_43_2","doi-asserted-by":"publisher","DOI":"10.1145\/3266229"},{"key":"e_1_3_1_44_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR46437.2021.00741"},{"key":"e_1_3_1_45_2","first-page":"12278","article-title":"A-NeRF: Articulated Neural Radiance Fields for Learning Human Shape, Appearance, and Pose","volume":"34","author":"Su Shih-Yang","year":"2021","unstructured":"Shih-Yang Su, Frank Yu, Michael Zollh\u00f6fer, and Helge Rhodin. 2021. A-NeRF: Articulated Neural Radiance Fields for Learning Human Shape, Appearance, and Pose. 
Advances in Neural Information Processing Systems (NeurIPS) 34 (2021), 12278\u201312291.","journal-title":"Advances in Neural Information Processing Systems (NeurIPS)"},{"key":"e_1_3_1_46_2","doi-asserted-by":"publisher","DOI":"10.1109\/JPROC.2017.2761740"},{"key":"e_1_3_1_47_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v35i11.17180"},{"key":"e_1_3_1_48_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR46437.2021.00287"},{"key":"e_1_3_1_49_2","doi-asserted-by":"publisher","DOI":"10.1111\/cgf.14022"},{"key":"e_1_3_1_50_2","doi-asserted-by":"publisher","DOI":"10.1111\/cgf.14507"},{"key":"e_1_3_1_51_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52688.2022.00541"},{"key":"e_1_3_1_52_2","first-page":"14955","article-title":"H-NeRF: Neural Radiance Fields for Rendering and Temporal Reconstruction of Humans in Motion","volume":"34","author":"Xu Hongyi","year":"2021","unstructured":"Hongyi Xu, Thiemo Alldieck, and Cristian Sminchisescu. 2021. H-NeRF: Neural Radiance Fields for Rendering and Temporal Reconstruction of Humans in Motion. 
Advances in Neural Information Processing Systems (NeurIPS) 34 (2021), 14955\u201314966.","journal-title":"Advances in Neural Information Processing Systems (NeurIPS)"},{"key":"e_1_3_1_53_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.643"},{"key":"e_1_3_1_54_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR46437.2021.00455"},{"key":"e_1_3_1_55_2","doi-asserted-by":"publisher","DOI":"10.1109\/SOCC56010.2022.9908135"}],"container-title":["ACM Journal on Emerging Technologies in Computing Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3675808","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3675808","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T00:04:03Z","timestamp":1750291443000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3675808"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,7,31]]},"references-count":54,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2024,7,31]]}},"alternative-id":["10.1145\/3675808"],"URL":"https:\/\/doi.org\/10.1145\/3675808","relation":{},"ISSN":["1550-4832","1550-4840"],"issn-type":[{"type":"print","value":"1550-4832"},{"type":"electronic","value":"1550-4840"}],"subject":[],"published":{"date-parts":[[2024,7,31]]},"assertion":[{"value":"2023-05-08","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-05-25","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-08-26","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}