{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T04:28:26Z","timestamp":1750220906353,"version":"3.41.0"},"publisher-location":"New York, NY, USA","reference-count":20,"publisher":"ACM","license":[{"start":{"date-parts":[[2019,11,10]],"date-time":"2019-11-10T00:00:00Z","timestamp":1573344000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2019,11,10]]},"DOI":"10.1145\/3362743.3362965","type":"proceedings-article","created":{"date-parts":[[2019,10,23]],"date-time":"2019-10-23T15:44:57Z","timestamp":1571845497000},"page":"31-36","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":10,"title":["Skipping RNN State Updates without Retraining the Original Model"],"prefix":"10.1145","author":[{"given":"Jin","family":"Tao","sequence":"first","affiliation":[{"name":"Washington State University, Pullman, Washington"}]},{"given":"Urmish","family":"Thakker","sequence":"additional","affiliation":[{"name":"Arm ML Research Lab, Austin, Texas"}]},{"given":"Ganesh","family":"Dasika","sequence":"additional","affiliation":[{"name":"Arm ML Research Lab, Austin, Texas"}]},{"given":"Jesse","family":"Beu","sequence":"additional","affiliation":[{"name":"Arm ML Research Lab, Austin, Texas"}]}],"member":"320","published-online":{"date-parts":[[2019,11,10]]},"reference":[{"key":"e_1_3_2_1_1_1","volume-title":"Jordi Torres, and Shih-Fu Chang. Skip RNN: learning to skip state updates in recurrent neural networks. CoRR, abs\/1708.06834","author":"Campos Victor","year":"2017","unstructured":"Victor Campos , Brendan Jou , Xavier Gir\u00f3 i Nieto , Jordi Torres, and Shih-Fu Chang. Skip RNN: learning to skip state updates in recurrent neural networks. CoRR, abs\/1708.06834 , 2017 . Victor Campos, Brendan Jou, Xavier Gir\u00f3 i Nieto, Jordi Torres, and Shih-Fu Chang. Skip RNN: learning to skip state updates in recurrent neural networks. CoRR, abs\/1708.06834, 2017."},{"key":"e_1_3_2_1_2_1","volume-title":"Adaptive mixture of low-rank factorizations for compact neural modeling. Advances in neural information processing systems (CDNNRIA workshop)","author":"Chen Ting","year":"2018","unstructured":"Ting Chen , Ji Lin , Tian Lin , Song Han , Chong Wang , and Denny Zhou . Adaptive mixture of low-rank factorizations for compact neural modeling. Advances in neural information processing systems (CDNNRIA workshop) , 2018 . Ting Chen, Ji Lin, Tian Lin, Song Han, Chong Wang, and Denny Zhou. Adaptive mixture of low-rank factorizations for compact neural modeling. Advances in neural information processing systems (CDNNRIA workshop), 2018."},{"key":"e_1_3_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.1145\/3123939.3124552"},{"key":"e_1_3_2_1_4_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/D18-1474"},{"key":"e_1_3_2_1_5_1","volume-title":"The state of sparsity in deep neural networks. CoRR, abs\/1902.09574","author":"Gale Trevor","year":"2019","unstructured":"Trevor Gale , Erich Elsen , and Sara Hooker . The state of sparsity in deep neural networks. CoRR, abs\/1902.09574 , 2019 . Trevor Gale, Erich Elsen, and Sara Hooker. The state of sparsity in deep neural networks. 
CoRR, abs\/1902.09574, 2019."},{"key":"e_1_3_2_1_6_1","volume-title":"Ternary hybrid neural-tree networks for highly constrained iot applications. CoRR, abs\/1903.01531","author":"Gope Dibakar","year":"2019","unstructured":"Dibakar Gope , Ganesh Dasika , and Matthew Mattina . Ternary hybrid neural-tree networks for highly constrained iot applications. CoRR, abs\/1903.01531 , 2019 . Dibakar Gope, Ganesh Dasika, and Matthew Mattina. Ternary hybrid neural-tree networks for highly constrained iot applications. CoRR, abs\/1903.01531, 2019."},{"key":"e_1_3_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-69900-4_44"},{"key":"e_1_3_2_1_8_1","volume-title":"A clockwork RNN. CoRR, abs\/1402.3511","author":"Koutn\u00edk Jan","year":"2014","unstructured":"Jan Koutn\u00edk , Klaus Greff , Faustino J. Gomez , and J\u00fcrgen Schmidhuber . A clockwork RNN. CoRR, abs\/1402.3511 , 2014 . Jan Koutn\u00edk, Klaus Greff, Faustino J. Gomez, and J\u00fcrgen Schmidhuber. A clockwork RNN. CoRR, abs\/1402.3511, 2014."},{"key":"e_1_3_2_1_9_1","volume-title":"Phased LSTM: accelerating recurrent network training for long or event-based sequences. CoRR, abs\/1610.09513","author":"Neil Daniel","year":"2016","unstructured":"Daniel Neil , Michael Pfeiffer , and Shih-Chii Liu . Phased LSTM: accelerating recurrent network training for long or event-based sequences. CoRR, abs\/1610.09513 , 2016 . Daniel Neil, Michael Pfeiffer, and Shih-Chii Liu. Phased LSTM: accelerating recurrent network training for long or event-based sequences. CoRR, abs\/1610.09513, 2016."},{"key":"e_1_3_2_1_10_1","first-page":"1310","volume-title":"Proceedings of the 30th International Conference on International Conference on Machine Learning -","volume":"28","author":"Pascanu Razvan","unstructured":"Razvan Pascanu , Tomas Mikolov , and Yoshua Bengio . On the difficulty of training recurrent neural networks . In Proceedings of the 30th International Conference on International Conference on Machine Learning - Volume 28 , ICML'13, pages III-- 1310 --III--1318. JMLR.org, 2013. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. In Proceedings of the 30th International Conference on International Conference on Machine Learning - Volume 28, ICML'13, pages III--1310--III--1318. JMLR.org, 2013."},{"key":"e_1_3_2_1_11_1","volume-title":"Neural speed reading via skim-rnn. CoRR, abs\/1711.02085","author":"Seo Min Joon","year":"2017","unstructured":"Min Joon Seo , Sewon Min , Ali Farhadi , and Hannaneh Hajishirzi . Neural speed reading via skim-rnn. CoRR, abs\/1711.02085 , 2017 . Min Joon Seo, Sewon Min, Ali Farhadi, and Hannaneh Hajishirzi. Neural speed reading via skim-rnn. CoRR, abs\/1711.02085, 2017."},{"key":"e_1_3_2_1_12_1","volume-title":"Run-time efficient RNN compression for inference on edge devices. CoRR, abs\/1906.04886","author":"Thakker Urmish","year":"2019","unstructured":"Urmish Thakker , Jesse G. Beu , Dibakar Gope , Ganesh Dasika , and Matthew Mattina . Run-time efficient RNN compression for inference on edge devices. CoRR, abs\/1906.04886 , 2019 . Urmish Thakker, Jesse G. Beu, Dibakar Gope, Ganesh Dasika, and Matthew Mattina. Run-time efficient RNN compression for inference on edge devices. CoRR, abs\/1906.04886, 2019."},{"key":"e_1_3_2_1_13_1","volume-title":"Compressing rnns for iot devices by 15-38x using kronecker products. CoRR, abs\/1906.02876","author":"Thakker Urmish","year":"2019","unstructured":"Urmish Thakker , Jesse G. 
Beu , Dibakar Gope , Chu Zhou , Igor Fedorov , Ganesh Dasika , and Matthew Mattina . Compressing rnns for iot devices by 15-38x using kronecker products. CoRR, abs\/1906.02876 , 2019 . Urmish Thakker, Jesse G. Beu, Dibakar Gope, Chu Zhou, Igor Fedorov, Ganesh Dasika, and Matthew Mattina. Compressing rnns for iot devices by 15-38x using kronecker products. CoRR, abs\/1906.02876, 2019."},{"key":"e_1_3_2_1_14_1","volume-title":"Measuring scheduling efficiency of rnns for NLP applications. CoRR, abs\/1904.03302","author":"Thakker Urmish","year":"2019","unstructured":"Urmish Thakker , Ganesh Dasika , Jesse G. Beu , and Matthew Mattina . Measuring scheduling efficiency of rnns for NLP applications. CoRR, abs\/1904.03302 , 2019 . Urmish Thakker, Ganesh Dasika, Jesse G. Beu, and Matthew Mattina. Measuring scheduling efficiency of rnns for NLP applications. CoRR, abs\/1904.03302, 2019."},{"key":"e_1_3_2_1_15_1","volume-title":"Strassennets: Deep learning with a multiplication budget. CoRR, abs\/1712.03942","author":"Tschannen Michael","year":"2017","unstructured":"Michael Tschannen , Aran Khanna , and Anima Anandkumar . Strassennets: Deep learning with a multiplication budget. CoRR, abs\/1712.03942 , 2017 . Michael Tschannen, Aran Khanna, and Anima Anandkumar. Strassennets: Deep learning with a multiplication budget. CoRR, abs\/1712.03942, 2017."},{"key":"e_1_3_2_1_16_1","volume-title":"Deep Learning and Unsupervised Feature Learning Workshop, NIPS 2011","author":"Vanhoucke Vincent","year":"2011","unstructured":"Vincent Vanhoucke , Andrew Senior , and Mark Z. Mao . Improving the speed of neural networks on cpus . In Deep Learning and Unsupervised Feature Learning Workshop, NIPS 2011 , 2011 . Vincent Vanhoucke, Andrew Senior, and Mark Z. Mao. Improving the speed of neural networks on cpus. In Deep Learning and Unsupervised Feature Learning Workshop, NIPS 2011, 2011."},{"key":"e_1_3_2_1_17_1","volume-title":"Learning to skim text. CoRR, abs\/1704.06877","author":"Yu Adams Wei","year":"2017","unstructured":"Adams Wei Yu , Hongrae Lee , and Quoc V. Le . Learning to skim text. CoRR, abs\/1704.06877 , 2017 . Adams Wei Yu, Hongrae Lee, and Quoc V. Le. Learning to skim text. CoRR, abs\/1704.06877, 2017."},{"key":"e_1_3_2_1_18_1","volume-title":"Fast and accurate text classification: Skimming, rereading and early stopping","author":"Yu Keyi","year":"2018","unstructured":"Keyi Yu , Yang Liu , Alexander G. Schwing , and Jian Peng . Fast and accurate text classification: Skimming, rereading and early stopping , 2018 . Keyi Yu, Yang Liu, Alexander G. Schwing, and Jian Peng. Fast and accurate text classification: Skimming, rereading and early stopping, 2018."},{"key":"e_1_3_2_1_19_1","volume-title":"Trained ternary quantization. CoRR, abs\/1612.01064","author":"Zhu Chenzhuo","year":"2016","unstructured":"Chenzhuo Zhu , Song Han , Huizi Mao , and William J. Dally . Trained ternary quantization. CoRR, abs\/1612.01064 , 2016 . Chenzhuo Zhu, Song Han, Huizi Mao, and William J. Dally. Trained ternary quantization. CoRR, abs\/1612.01064, 2016."},{"key":"e_1_3_2_1_20_1","volume-title":"October","author":"Zhu Michael","year":"2017","unstructured":"Michael Zhu and Suyog Gupta . To prune, or not to prune: exploring the efficacy of pruning for model compression. arXiv e-prints, page arXiv:1710.01878 , October 2017 . Michael Zhu and Suyog Gupta. To prune, or not to prune: exploring the efficacy of pruning for model compression. 
arXiv e-prints, page arXiv:1710.01878, October 2017."}],"event":{"name":"SenSys '19: The 17th ACM Conference on Embedded Networked Sensor Systems","sponsor":["SIGMETRICS ACM Special Interest Group on Measurement and Evaluation","SIGCOMM ACM Special Interest Group on Data Communication","SIGMOBILE ACM Special Interest Group on Mobility of Systems, Users, Data and Computing","SIGOPS ACM Special Interest Group on Operating Systems","SIGBED ACM Special Interest Group on Embedded Systems","SIGARCH ACM Special Interest Group on Computer Architecture"],"location":"New York NY USA","acronym":"SenSys '19"},"container-title":["Proceedings of the 1st Workshop on Machine Learning on Edge in Sensor Systems"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3362743.3362965","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3362743.3362965","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T23:44:54Z","timestamp":1750203894000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3362743.3362965"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2019,11,10]]},"references-count":20,"alternative-id":["10.1145\/3362743.3362965","10.1145\/3362743"],"URL":"https:\/\/doi.org\/10.1145\/3362743.3362965","relation":{},"subject":[],"published":{"date-parts":[[2019,11,10]]},"assertion":[{"value":"2019-11-10","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}