{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,7,23]],"date-time":"2025-07-23T12:19:39Z","timestamp":1753273179137,"version":"3.41.0"},"publisher-location":"New York, NY, USA","reference-count":38,"publisher":"ACM","license":[{"start":{"date-parts":[[2023,4,11]],"date-time":"2023-04-11T00:00:00Z","timestamp":1681171200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2023,4,11]]},"DOI":"10.1145\/3584954.3584960","type":"proceedings-article","created":{"date-parts":[[2023,4,12]],"date-time":"2023-04-12T13:27:54Z","timestamp":1681306074000},"page":"11-19","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":3,"title":["NEO: Neuron State Dependent Mechanisms for Efficient Continual Learning"],"prefix":"10.1145","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-5531-5235","authenticated-orcid":false,"given":"Anurag","family":"Daram","sequence":"first","affiliation":[{"name":"Neuromorphic Artificial Intelligence Lab, University of Texas at San Antonio, United States"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4462-5224","authenticated-orcid":false,"given":"Dhireesha","family":"Kudithipudi","sequence":"additional","affiliation":[{"name":"Neuromorphic Artificial Intelligence Lab, University of Texas at San Antonio, United States"}]}],"member":"320","published-online":{"date-parts":[[2023,4,12]]},"reference":[{"key":"e_1_3_2_1_1_1","doi-asserted-by":"publisher","DOI":"10.1038\/nrn2356"},{"key":"e_1_3_2_1_2_1","volume-title":"Uncertainty-based continual learning with adaptive regularization. Advances in neural information processing systems 32","author":"Ahn Hongjoon","year":"2019","unstructured":"Hongjoon Ahn , Sungmin Cha , Donggyu Lee , and Taesup Moon . 2019. 
Uncertainty-based continual learning with adaptive regularization. Advances in neural information processing systems 32 (2019)."},{"key":"e_1_3_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-01219-9_9"},{"key":"e_1_3_2_1_4_1","unstructured":"Rahaf Aljundi, Min Lin, Baptiste Goujaud, and Yoshua Bengio. 2019. Gradient based sample selection for online continual learning. In Advances in Neural Information Processing Systems. 11816\u201311825."},{"key":"e_1_3_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-01252-6_33"},{"key":"e_1_3_2_1_6_1","volume-title":"A continual learning survey: Defying forgetting in classification tasks","author":"De\u00a0Lange Matthias","year":"2021","unstructured":"Matthias De\u00a0Lange, Rahaf Aljundi, Marc Masana, Sarah Parisot, Xu Jia, Ale\u0161 Leonardis, Gregory Slabaugh, and Tinne Tuytelaars. 2021. A continual learning survey: Defying forgetting in classification tasks. IEEE transactions on pattern analysis and machine intelligence 44, 7 (2021), 3366\u20133385."},{"key":"e_1_3_2_1_7_1","volume-title":"Adversarial Continual Learning. 
arXiv preprint arXiv:2003.09553","author":"Ebrahimi Sayna","year":"2020","unstructured":"Sayna Ebrahimi, Franziska Meier, Roberto Calandra, Trevor Darrell, and Marcus Rohrbach. 2020. Adversarial Continual Learning. arXiv preprint arXiv:2003.09553 (2020)."},{"key":"e_1_3_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00725"},{"key":"e_1_3_2_1_9_1","volume-title":"Re-evaluating continual learning scenarios: A categorization and case for strong baselines. arXiv preprint arXiv:1810.12488","author":"Hsu Yen-Chang","year":"2018","unstructured":"Yen-Chang Hsu, Yen-Cheng Liu, Anita Ramasamy, and Zsolt Kira. 2018. Re-evaluating continual learning scenarios: A categorization and case for strong baselines. arXiv preprint arXiv:1810.12488 (2018)."},{"key":"e_1_3_2_1_10_1","first-page":"3647","article-title":"Continual learning with node-importance based adaptive group sparse regularization","volume":"33","author":"Jung Sangwon","year":"2020","unstructured":"Sangwon Jung, Hongjoon Ahn, Sungmin Cha, and Taesup Moon. 2020. Continual learning with node-importance based adaptive group sparse regularization. Advances in Neural Information Processing Systems 33 (2020), 3647\u20133658.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_1_11_1","volume-title":"Continual Learning with Neuron Activation Importance. 
In International Conference on Image Analysis and Processing. Springer, 310\u2013321","author":"Kim Sohee","year":"2022","unstructured":"Sohee Kim and Seungkyu Lee. 2022. Continual Learning with Neuron Activation Importance. In International Conference on Image Analysis and Processing. Springer, 310\u2013321."},{"key":"e_1_3_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1073\/pnas.1611835114"},{"key":"e_1_3_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.1073\/pnas.1611835114"},{"key":"e_1_3_2_1_14_1","volume-title":"International Conference on Learning Representations.","author":"Kolouri Soheil","year":"2020","unstructured":"Soheil Kolouri, Nicholas\u00a0A Ketz, Praveen\u00a0K Pilly, and Andrea Soltoggio. 2020. Sliced Cramer synaptic consolidation for preserving deeply learned representations. In International Conference on Learning Representations."},{"key":"e_1_3_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.1038\/s42256-022-00452-0"},{"key":"e_1_3_2_1_17_1","volume-title":"International Conference on Machine Learning. PMLR, 5533\u20135543","author":"Kurtz Mark","year":"2020","unstructured":"Mark Kurtz, Justin Kopinsky, Rati Gelashvili, Alexander Matveev, John Carr, Michael Goin, William Leiserson, Sage Moore, Nir Shavit, and Dan Alistarh. 2020. Inducing and exploiting activation sparsity for fast inference on deep neural networks. In International Conference on Machine Learning. PMLR, 5533\u20135543. 
"},{"key":"e_1_3_2_1_18_1","volume-title":"Synaptic metaplasticity in binarized neural networks. Nature communications 12, 1","author":"Laborieux Axel","year":"2021","unstructured":"Axel Laborieux, Maxence Ernoult, Tifenn Hirtzlin, and Damien Querlioz. 2021. Synaptic metaplasticity in binarized neural networks. Nature communications 12, 1 (2021), 1\u201312."},{"key":"e_1_3_2_1_19_1","doi-asserted-by":"publisher","DOI":"10.1109\/5.726791"},{"key":"e_1_3_2_1_20_1","volume-title":"A neural dirichlet process mixture model for task-free continual learning. arXiv preprint arXiv:2001.00689","author":"Lee Soochan","year":"2020","unstructured":"Soochan Lee, Junsoo Ha, Dongsu Zhang, and Gunhee Kim. 2020. A neural dirichlet process mixture model for task-free continual learning. arXiv preprint arXiv:2001.00689 (2020)."},{"key":"e_1_3_2_1_21_1","unstructured":"Sang-Woo Lee, Jin-Hwa Kim, Jaehyun Jun, Jung-Woo Ha, and Byoung-Tak Zhang. 2017. Overcoming catastrophic forgetting by incremental moment matching. In Advances in Neural Information Processing Systems. 
4652\u20134662."},{"key":"e_1_3_2_1_22_1","volume-title":"Learning without forgetting","author":"Li Zhizhong","year":"2017","unstructured":"Zhizhong Li and Derek Hoiem. 2017. Learning without forgetting. IEEE transactions on pattern analysis and machine intelligence 40, 12 (2017), 2935\u20132947."},{"key":"e_1_3_2_1_23_1","unstructured":"David Lopez-Paz 2017. Gradient episodic memory for continual learning. In Advances in Neural Information Processing Systems. 6467\u20136476."},{"volume-title":"Psychology of Learning and Motivation. Vol.\u00a024","author":"McCloskey Michael","key":"e_1_3_2_1_24_1","unstructured":"Michael McCloskey and Neal\u00a0J. Cohen. 1989. Catastrophic Interference in Connectionist Networks: The Sequential Learning Problem. In Psychology of Learning and Motivation. Vol.\u00a024. Academic Press, 109\u2013165. http:\/\/www.sciencedirect.com\/science\/article\/pii\/S0079742108605368"},{"key":"e_1_3_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.1145\/3381755.3381766"},{"key":"e_1_3_2_1_26_1","volume-title":"On the role of neurogenesis in overcoming catastrophic forgetting. arXiv preprint arXiv:1811.02113","author":"Parisi I","year":"2018","unstructured":"German\u00a0I Parisi, Xu Ji, and Stefan Wermter. 2018. On the role of neurogenesis in overcoming catastrophic forgetting. arXiv preprint arXiv:1811.02113 (2018). 
"},{"key":"e_1_3_2_1_27_1","volume-title":"Progressive neural networks. arXiv preprint arXiv:1606.04671","author":"Rusu A","year":"2016","unstructured":"Andrei\u00a0A Rusu, Neil\u00a0C Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. 2016. Progressive neural networks. arXiv preprint arXiv:1606.04671 (2016)."},{"key":"e_1_3_2_1_28_1","unstructured":"Steger.\u00a0Angelika Schug.\u00a0Simon Benzing.\u00a0Frederik. 2020. Task-Agnostic Continual Learning via Stochastic Synapses. https:\/\/sites.google.com\/view\/cl-icml\/accepted-papers?authuser=0. (Accessed on 09\/21\/2020)."},{"key":"e_1_3_2_1_29_1","volume-title":"A scalable framework for continual learning. arXiv preprint arXiv:1805.06370","author":"Schwarz Jonathan","year":"2018","unstructured":"Jonathan Schwarz, Jelena Luketina, Wojciech\u00a0M Czarnecki, Agnieszka Grabska-Barwinska, Yee\u00a0Whye Teh, Razvan Pascanu, and Raia Hadsell. 2018. Progress & compress: A scalable framework for continual learning. 
arXiv preprint arXiv:1805.06370 (2018)."},{"key":"e_1_3_2_1_30_1","volume-title":"Always Be Dreaming: A New Approach for Data-Free Class-Incremental Learning. International Conference on Computer Vision (ICCV)","author":"Smith James","year":"2021","unstructured":"James Smith, Yen-Chang Hsu, Jonathan Balloch, Yilin Shen, Hongxia Jin, and Zsolt Kira. 2021. Always Be Dreaming: A New Approach for Data-Free Class-Incremental Learning. International Conference on Computer Vision (ICCV) (2021)."},{"key":"e_1_3_2_1_31_1","volume-title":"TACOS: Task Agnostic Continual Learning in Spiking Neural Networks. In Theory and Foundation of Continual Learning Workshop at ICML\u20192021","author":"Soures Nicholas","year":"2021","unstructured":"Nicholas Soures, Peter Helfer, Anurag Daram, Tej Pandit, and Dhireesha Kudithipudi. July 2021. TACOS: Task Agnostic Continual Learning in Spiking Neural Networks. In Theory and Foundation of Continual Learning Workshop at ICML\u20192021."},{"key":"e_1_3_2_1_32_1","volume-title":"Brain-inspired replay for continual learning with artificial neural networks. Nature communications 11, 1","author":"van\u00a0de Ven M","year":"2020","unstructured":"Gido\u00a0M van\u00a0de Ven, Hava\u00a0T Siegelmann, and Andreas\u00a0S Tolias. 2020. 
Brain-inspired replay for continual learning with artificial neural networks. Nature communications 11, 1 (2020), 1\u201314."},{"key":"e_1_3_2_1_33_1","volume-title":"Three Scenarios for Continual Learning. arXiv:1904.07734 [cs, stat] (April","author":"van de Ven M.","year":"2019","unstructured":"Gido\u00a0M. van de Ven and Andreas\u00a0S. Tolias. 2019. Three Scenarios for Continual Learning. arXiv:1904.07734 [cs, stat] (April 2019). arxiv:1904.07734\u00a0[cs, stat]"},{"key":"e_1_3_2_1_34_1","volume-title":"Three types of incremental learning. Nature Machine Intelligence","author":"van\u00a0de Ven M","year":"2022","unstructured":"Gido\u00a0M van\u00a0de Ven, Tinne Tuytelaars, and Andreas\u00a0S Tolias. 2022. Three types of incremental learning. Nature Machine Intelligence (2022), 1\u201313."},{"key":"e_1_3_2_1_35_1","doi-asserted-by":"publisher","DOI":"10.1080\/00401706.1962.10490022"},{"key":"e_1_3_2_1_36_1","volume-title":"Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms. CoRR abs\/1708.07747","author":"Xiao Han","year":"2017","unstructured":"Han Xiao, Kashif Rasul, and Roland Vollgraf. 2017. Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms. CoRR abs\/1708.07747 (2017), 6\u00a0pages. 
arxiv:1708.07747 http:\/\/arxiv.org\/abs\/1708.07747"},{"key":"e_1_3_2_1_37_1","doi-asserted-by":"publisher","DOI":"10.5555\/3305890.3306093"},{"key":"e_1_3_2_1_38_1","volume-title":"Task agnostic continual learning using online variational bayes. arXiv preprint arXiv:1803.10123","author":"Zeno Chen","year":"2018","unstructured":"Chen Zeno, Itay Golan, Elad Hoffer, and Daniel Soudry. 2018. Task agnostic continual learning using online variational bayes. arXiv preprint arXiv:1803.10123 (2018)."},{"key":"e_1_3_2_1_39_1","volume-title":"Task agnostic continual learning using online variational bayes. arXiv preprint arXiv:1803.10123","author":"Zeno Chen","year":"2018","unstructured":"Chen Zeno, Itay Golan, Elad Hoffer, and Daniel Soudry. 2018. Task agnostic continual learning using online variational bayes. 
arXiv preprint arXiv:1803.10123 (2018)."}],"event":{"name":"NICE 2023: Neuro-Inspired Computational Elements Conference","acronym":"NICE 2023","location":"San Antonio TX USA"},"container-title":["Neuro-Inspired Computational Elements Conference"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3584954.3584960","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3584954.3584960","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T16:37:07Z","timestamp":1750178227000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3584954.3584960"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,4,11]]},"references-count":38,"alternative-id":["10.1145\/3584954.3584960","10.1145\/3584954"],"URL":"https:\/\/doi.org\/10.1145\/3584954.3584960","relation":{},"subject":[],"published":{"date-parts":[[2023,4,11]]},"assertion":[{"value":"2023-04-12","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}