{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,28]],"date-time":"2026-03-28T05:08:20Z","timestamp":1774674500495,"version":"3.50.1"},"reference-count":76,"publisher":"Association for Computing Machinery (ACM)","issue":"5s","license":[{"start":{"date-parts":[[2021,9,17]],"date-time":"2021-09-17T00:00:00Z","timestamp":1631836800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Embed. Comput. Syst."],"published-print":{"date-parts":[[2021,10,31]]},"abstract":"<jats:p>\n            As deep learning algorithms are widely adopted, an increasing number of them are positioned in embedded application domains with strict reliability constraints. The expenditure of significant resources to satisfy performance requirements in deep neural network accelerators has thinned out the margins for delivering safety in embedded deep learning applications, thus precluding the adoption of conventional fault tolerance methods. The potential of exploiting the inherent resilience characteristics of deep neural networks remains though unexplored, offering a promising low-cost path towards safety in embedded deep learning applications. This work demonstrates the possibility of such exploitation by juxtaposing the reduction of the vulnerability surface through the proper design of the quantization schemes with shaping the parameter distributions at each layer through the guidance offered by appropriate training methods, thus delivering deep neural networks of high resilience merely through algorithmic modifications. 
Unequaled error resilience characteristics can thus be injected into safety-critical deep learning applications to tolerate bit error rates of up to\n            <jats:inline-formula>\n              <jats:alternatives>\n                <jats:tex-math>\n                  \n                <\/jats:tex-math>\n              <\/jats:alternatives>\n            <\/jats:inline-formula>\n            at absolutely zero hardware, energy, and performance costs while improving the error-free model accuracy even further.\n          <\/jats:p>","DOI":"10.1145\/3477007","type":"journal-article","created":{"date-parts":[[2021,9,17]],"date-time":"2021-09-17T18:36:51Z","timestamp":1631903811000},"page":"1-25","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":30,"title":["SNR: <u>S<\/u>queezing <u>N<\/u>umerical <u>R<\/u>ange Defuses Bit Error Vulnerability Surface in Deep Neural Networks"],"prefix":"10.1145","volume":"20","author":[{"given":"Elbruz","family":"Ozen","sequence":"first","affiliation":[{"name":"University of California, San Diego, USA"}]},{"given":"Alex","family":"Orailoglu","sequence":"additional","affiliation":[{"name":"University of California, San Diego, USA"}]}],"member":"320","published-online":{"date-parts":[[2021,9,17]]},"reference":[{"key":"e_1_2_1_1_1","doi-asserted-by":"publisher","DOI":"10.1145\/951710.951734"},{"key":"e_1_2_1_2_1","volume-title":"Language models are few-shot learners. arXiv preprint arXiv:2005.14165","author":"Brown Tom B.","year":"2020","unstructured":"Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. arXiv preprint arXiv:2005.14165 (2020)."},{"key":"e_1_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.1109\/HPCA.2019.00034"},{"key":"e_1_2_1_4_1","doi-asserted-by":"publisher","DOI":"10.5555\/3130379.3130384"},{"key":"e_1_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1109\/JETCAS.2019.2910232"},{"key":"e_1_2_1_6_1","volume-title":"A low-cost fault corrector for deep neural networks through range restriction. arXiv preprint arXiv:2003.13874","author":"Chen Zitao","year":"2021","unstructured":"Zitao Chen, Guanpeng Li, and Karthik Pattabiraman. 2021. A low-cost fault corrector for deep neural networks through range restriction. 
arXiv preprint arXiv:2003.13874 (2021)."},{"key":"e_1_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.1145\/3295500.3356177"},{"key":"e_1_2_1_8_1","volume-title":"Proceedings of the 2nd SysML Conference.","author":"Choi Jungwook","year":"2019","unstructured":"Jungwook Choi, Swagath Venkataramani, Vijayalakshmi Srinivasan, Kailash Gopalakrishnan, Zhuo Wang, and Pierce Chuang. 2019. Accurate and efficient 2-bit quantized neural networks. In Proceedings of the 2nd SysML Conference."},{"key":"e_1_2_1_9_1","volume-title":"Vijayalakshmi Srinivasan, and Kailash Gopalakrishnan.","author":"Choi Jungwook","year":"2018","unstructured":"Jungwook Choi, Zhuo Wang, Swagath Venkataramani, Pierce I-Jen Chuang, Vijayalakshmi Srinivasan, and Kailash Gopalakrishnan. 2018. PACT: Parameterized clipping activation for quantized neural networks. arXiv preprint arXiv:1805.06085 (2018)."},{"key":"e_1_2_1_10_1","doi-asserted-by":"publisher","DOI":"10.1145\/3316781.3317908"},{"key":"e_1_2_1_11_1","volume-title":"Training deep neural networks with low precision multiplications. arXiv preprint arXiv:1412.7024","author":"Courbariaux Matthieu","year":"2014","unstructured":"Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. 2014. Training deep neural networks with low precision multiplications. arXiv preprint arXiv:1412.7024 (2014)."},{"key":"e_1_2_1_12_1","volume-title":"BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805","author":"Devlin Jacob","year":"2018","unstructured":"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)."},{"key":"e_1_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.1109\/TR.2018.2878387"},{"key":"e_1_2_1_14_1","doi-asserted-by":"publisher","DOI":"10.1109\/HPCA.2018.00015"},{"key":"e_1_2_1_15_1","volume-title":"A survey of quantization methods for efficient neural network inference. arXiv preprint arXiv:2103.13630","author":"Gholami Amir","year":"2021","unstructured":"Amir Gholami, Sehoon Kim, Zhen Dong, Zhewei Yao, Michael W. Mahoney, and Kurt Keutzer. 2021. A survey of quantization methods for efficient neural network inference. arXiv preprint arXiv:2103.13630 (2021)."},{"key":"e_1_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.1109\/ISCAS.2019.8702382"},{"key":"e_1_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1002\/j.1538-7305.1950.tb00463.x"},{"key":"e_1_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.5555\/2969239.2969366"},{"key":"e_1_2_1_19_1","volume-title":"Keckler","author":"Sastry Hari Siva Kumar","year":"2021","unstructured":"Siva Kumar Sastry Hari, Michael Sullivan, Timothy Tsai, and Stephen W. Keckler. 2021. Making convolutions resilient via algorithm-based error detection techniques. 
IEEE Transactions on Dependable and Secure Computing (2021)."},{"key":"e_1_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.90"},{"key":"e_1_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00447"},{"key":"e_1_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.1145\/3316781.3317870"},{"key":"e_1_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.1109\/AICAS.2019.8771544"},{"key":"e_1_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.5555\/3408352.3408634"},{"key":"e_1_2_1_25_1","volume-title":"MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861","author":"Howard Andrew G.","year":"2017","unstructured":"Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. 2017. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861 (2017)."},{"key":"e_1_2_1_26_1","doi-asserted-by":"publisher","DOI":"10.5555\/3157382.3157557"},{"key":"e_1_2_1_27_1","volume-title":"SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size. arXiv preprint arXiv:1602.07360","author":"Iandola Forrest N.","year":"2016","unstructured":"Forrest N. Iandola, Song Han, Matthew W. Moskewicz, Khalid Ashraf, William J. Dally, and Kurt Keutzer. 2016. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size. arXiv preprint arXiv:1602.07360 (2016)."},{"key":"e_1_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.5555\/3199700.3199830"},{"key":"e_1_2_1_29_1","doi-asserted-by":"publisher","DOI":"10.1109\/IJCNN.2019.8851966"},{"key":"e_1_2_1_30_1","unstructured":"Alex Krizhevsky. 2009. Learning multiple layers of features from tiny images. http:\/\/citeseerx.ist.psu.edu\/viewdoc\/summary?doi=10.1.1.222.9220. (2009)."},{"key":"e_1_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.1109\/5.726791"},{"key":"e_1_2_1_32_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP.2014.6854560"},{"key":"e_1_2_1_33_1","doi-asserted-by":"publisher","DOI":"10.1145\/3126908.3126964"},{"key":"e_1_2_1_34_1","doi-asserted-by":"publisher","DOI":"10.1109\/ISSREW.2018.00024"},{"key":"e_1_2_1_35_1","doi-asserted-by":"publisher","DOI":"10.1109\/ASP-DAC47756.2020.9045324"},{"key":"e_1_2_1_36_1","doi-asserted-by":"publisher","DOI":"10.1145\/3061639.3062310"},{"key":"e_1_2_1_37_1","doi-asserted-by":"publisher","DOI":"10.1109\/TEST.2018.8624687"},{"key":"e_1_2_1_38_1","doi-asserted-by":"publisher","DOI":"10.1145\/3287624.3288743"},{"key":"e_1_2_1_39_1","doi-asserted-by":"publisher","DOI":"10.1145\/3316781.3317742"},{"key":"e_1_2_1_40_1","volume-title":"Berg","author":"Liu Wei","year":"2016","unstructured":"Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, and Alexander C. Berg. 2016. 
SSD: Single shot multibox detector. In European Conference on Computer Vision. Springer, 21\u201337."},{"key":"e_1_2_1_41_1","doi-asserted-by":"publisher","DOI":"10.23919\/DATE.2019.8715178"},{"key":"e_1_2_1_42_1","doi-asserted-by":"publisher","DOI":"10.1109\/ITC44170.2019.9000150"},{"key":"e_1_2_1_43_1","doi-asserted-by":"publisher","DOI":"10.1109\/DSN-W50199.2020.00014"},{"key":"e_1_2_1_44_1","doi-asserted-by":"publisher","DOI":"10.1109\/TCAD.2016.2524556"},{"key":"e_1_2_1_45_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.patcog.2016.11.008"},{"key":"e_1_2_1_46_1","doi-asserted-by":"publisher","DOI":"10.1109\/JETCAS.2020.3022920"},{"key":"e_1_2_1_47_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCD.2018.00077"},{"key":"e_1_2_1_48_1","doi-asserted-by":"publisher","DOI":"10.1109\/MDAT.2019.2952336"},{"key":"e_1_2_1_49_1","doi-asserted-by":"publisher","DOI":"10.1109\/ATS47505.2019.000-8"},{"key":"e_1_2_1_50_1","doi-asserted-by":"publisher","DOI":"10.1109\/TCAD.2020.3012209"},{"key":"e_1_2_1_51_1","doi-asserted-by":"publisher","DOI":"10.1109\/ASP-DAC47756.2020.9045662"},{"key":"e_1_2_1_52_1","doi-asserted-by":"publisher","DOI":"10.1145\/3400302.3415680"},{"key":"e_1_2_1_53_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10836-020-05920-2"},{"key":"e_1_2_1_54_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-67661-2_4"},{"key":"e_1_2_1_55_1","doi-asserted-by":"publisher","DOI":"10.5555\/3454287.3455008"},{"key":"e_1_2_1_56_1","doi-asserted-by":"publisher","DOI":"10.1145\/3195970.3195997"},{"key":"e_1_2_1_57_1","doi-asserted-by":"publisher","DOI":"10.1145\/3007787.3001165"},{"key":"e_1_2_1_58_1","doi-asserted-by":"publisher","DOI":"10.1145\/3316781.3317770"},{"key":"e_1_2_1_59_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICESS.2019.8782505"},{"key":"e_1_2_1_60_1","doi-asserted-by":"publisher","DOI":"10.1007\/s00521-020-04969-6"},{"key":"e_1_2_1_61_1","volume-title":"Automation & Test in Europe Conference & Exhibition (DATE). IEEE, 979\u2013984","author":"Schorn Christoph","year":"2018","unstructured":"Christoph Schorn, Andre Guntoro, and Gerd Ascheid. 2018. Accurate neuron resilience prediction for a flexible reliability management in neural network accelerators. In Design, Automation & Test in Europe Conference & Exhibition (DATE). IEEE, 979\u2013984."},{"key":"e_1_2_1_62_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-99130-6_14"},{"key":"e_1_2_1_63_1","doi-asserted-by":"publisher","DOI":"10.1109\/MDAT.2020.2971217"},{"key":"e_1_2_1_64_1","volume-title":"Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556","author":"Simonyan Karen","year":"2014","unstructured":"Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)."},{"key":"e_1_2_1_65_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.neunet.2012.02.016"},{"key":"e_1_2_1_66_1","doi-asserted-by":"publisher","DOI":"10.2200\/S01004ED1V01Y202004CAC050"},{"key":"e_1_2_1_67_1","volume-title":"International Conference on Machine Learning. PMLR, 6105\u20136114","author":"Tan Mingxing","year":"2019","unstructured":"Mingxing Tan and Quoc Le. 2019. EfficientNet: Rethinking model scaling for convolutional neural networks. In International Conference on Machine Learning. 
PMLR, 6105\u20136114."},{"key":"e_1_2_1_68_1","doi-asserted-by":"publisher","DOI":"10.1109\/PACRIM.1991.160742"},{"key":"e_1_2_1_69_1","doi-asserted-by":"publisher","DOI":"10.1109\/TCAD.2018.2855145"},{"key":"e_1_2_1_70_1","doi-asserted-by":"publisher","DOI":"10.5555\/3454287.3455514"},{"key":"e_1_2_1_71_1","doi-asserted-by":"publisher","DOI":"10.1109\/ITC44170.2019.9000149"},{"key":"e_1_2_1_72_1","doi-asserted-by":"publisher","DOI":"10.1109\/ASP-DAC47756.2020.9045134"},{"key":"e_1_2_1_73_1","doi-asserted-by":"publisher","DOI":"10.1109\/JPROC.2018.2790840"},{"key":"e_1_2_1_74_1","doi-asserted-by":"publisher","DOI":"10.1145\/3195970.3196129"},{"key":"e_1_2_1_75_1","doi-asserted-by":"publisher","DOI":"10.1109\/VTS.2018.8368656"},{"key":"e_1_2_1_76_1","doi-asserted-by":"publisher","DOI":"10.1145\/3178115"}],"container-title":["ACM Transactions on Embedded Computing Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3477007","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3477007","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T19:30:46Z","timestamp":1750188646000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3477007"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,9,17]]},"references-count":76,"journal-issue":{"issue":"5s","published-print":{"date-parts":[[2021,10,31]]}},"alternative-id":["10.1145\/3477007"],"URL":"https:\/\/doi.org\/10.1145\/3477007","relation":{},"ISSN":["1539-9087","1558-3465"],"issn-type":[{"value":"1539-9087","type":"print"},{"value":"1558-3465","type":"electronic"}],"subject":[],"published":{"date-parts":[[2021,9,17]]},"assertion":[{"value":"2021-04-01","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publi
cation History"}},{"value":"2021-07-01","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2021-09-17","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}