{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T04:32:18Z","timestamp":1750221138497,"version":"3.41.0"},"publisher-location":"New York, NY, USA","reference-count":19,"publisher":"ACM",
"license":[{"start":{"date-parts":[[2019,1,21]],"date-time":"2019-01-21T00:00:00Z","timestamp":1548028800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],
"funder":[{"name":"National Natural Science Foundation of China","award":["No. (61522406, 61834006, 61532017, 61521092)"],"award-info":[{"award-number":["No. (61522406, 61834006, 61532017, 61521092)"]}]},{"name":"Strategic Priority Research Program of the Chinese Academy of Sciences","award":["XDPB12"],"award-info":[{"award-number":["XDPB12"]}]},{"name":"Beijing Municipal Science & Technology Commission","award":["Z171100000117019, Z181100008918006"],"award-info":[{"award-number":["Z171100000117019, Z181100008918006"]}]}],
"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2019,1,21]]},"DOI":"10.1145\/3287624.3287628","type":"proceedings-article","created":{"date-parts":[[2019,1,18]],"date-time":"2019-01-18T21:45:18Z","timestamp":1547847918000},"page":"323-328","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":4,"title":["Simulate-the-hardware"],"prefix":"10.1145",
"author":[{"given":"Jiajun","family":"Li","sequence":"first","affiliation":[{"name":"Chinese Academy of Sciences, Beijing, P.R. China and University of Chinese Academy of Sciences, Beijing, P.R. China"}]},{"given":"Ying","family":"Wang","sequence":"additional","affiliation":[{"name":"Chinese Academy of Sciences, Beijing, P.R. China"}]},{"given":"Bosheng","family":"Liu","sequence":"additional","affiliation":[{"name":"Chinese Academy of Sciences, Beijing, P.R. China and University of Chinese Academy of Sciences, Beijing, P.R. China"}]},{"given":"Yinhe","family":"Han","sequence":"additional","affiliation":[{"name":"Chinese Academy of Sciences, Beijing, P.R. China"}]},{"given":"Xiaowei","family":"Li","sequence":"additional","affiliation":[{"name":"Chinese Academy of Sciences, Beijing, P.R. China and University of Chinese Academy of Sciences, Beijing, P.R. China"}]}],
"member":"320","published-online":{"date-parts":[[2019,1,21]]},
"reference":[{"volume-title":"Query-by-example keyword spotting using long short-term memory networks","author":"Chen Guoguo","key":"e_1_3_2_1_1_1","unstructured":"Guoguo Chen, Carolina Parada, and Tara N Sainath. 2015. Query-by-example keyword spotting using long short-term memory networks. In ICASSP. IEEE, 5236--5240."},
{"key":"e_1_3_2_1_2_1","volume-title":"Binaryconnect: Training deep neural networks with binary weights during propagations. In Advances in neural information processing systems. 3123--3131.","author":"Courbariaux Matthieu","year":"2015","unstructured":"Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. 2015. Binaryconnect: Training deep neural networks with binary weights during propagations. In Advances in neural information processing systems. 3123--3131."},
{"key":"e_1_3_2_1_3_1","volume-title":"Binarized neural networks: Training deep neural networks with weights and activations constrained to + 1 or -1. arXiv preprint arXiv:1602.02830","author":"Courbariaux Matthieu","year":"2016","unstructured":"Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. 2016. Binarized neural networks: Training deep neural networks with weights and activations constrained to + 1 or -1. arXiv preprint arXiv:1602.02830 (2016)."},
{"key":"e_1_3_2_1_4_1","unstructured":"Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Deep sparse rectifier neural networks. In AISTATS. 315--323."},
{"key":"e_1_3_2_1_5_1","unstructured":"Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In CVPR. 770--778."},
{"key":"e_1_3_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.1162\/neco.1997.9.8.1735"},
{"key":"e_1_3_2_1_7_1","volume-title":"From hashing to CNNs: Training Binary Weight networks via hashing. arXiv preprint arXiv:1802.02733","author":"Hu Qinghao","year":"2018","unstructured":"Qinghao Hu, Peisong Wang, and Jian Cheng. 2018. From hashing to CNNs: Training Binary Weight networks via hashing. arXiv preprint arXiv:1802.02733 (2018)."},
{"key":"e_1_3_2_1_8_1","volume-title":"Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167","author":"Ioffe Sergey","year":"2015","unstructured":"Sergey Ioffe and Christian Szegedy. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167 (2015)."},
{"key":"e_1_3_2_1_9_1","unstructured":"Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. 2014. The CIFAR-10 dataset. (2014)."},
{"key":"e_1_3_2_1_10_1","doi-asserted-by":"publisher","DOI":"10.1145\/3195970.3196020"},
{"key":"e_1_3_2_1_11_1","volume-title":"Xnor-net: Imagenet classification using binary convolutional neural networks","author":"Rastegari Mohammad","year":"2016","unstructured":"Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. 2016. Xnor-net: Imagenet classification using binary convolutional neural networks. In ECCV. Springer, 525--542."},
{"key":"e_1_3_2_1_12_1","volume-title":"Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556","author":"Simonyan Karen","year":"2014","unstructured":"Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)."},
{"key":"e_1_3_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.1145\/2897937.2897995"},
{"key":"e_1_3_2_1_14_1","doi-asserted-by":"crossref","unstructured":"Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, et al. 2015. Going deeper with convolutions. In CVPR.","DOI":"10.1109\/CVPR.2015.7298594"},
{"key":"e_1_3_2_1_15_1","doi-asserted-by":"crossref","unstructured":"Wei Tang, Gang Hua, and Liang Wang. 2017. How to train a compact binary neural network with high accuracy? In AAAI. 2625--2631.","DOI":"10.1609\/aaai.v31i1.10862"},
{"key":"e_1_3_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.1145\/2897937.2898003"},
{"key":"e_1_3_2_1_17_1","volume-title":"Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition. arXiv preprint arXiv:1804.03209","author":"Warden Pete","year":"2018","unstructured":"Pete Warden. 2018. Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition. arXiv preprint arXiv:1804.03209 (2018)."},
{"key":"e_1_3_2_1_18_1","volume-title":"Incremental network quantization: Towards lossless cnns with low-precision weights. arXiv preprint arXiv:1702.03044","author":"Zhou Aojun","year":"2017","unstructured":"Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, and Yurong Chen. 2017. Incremental network quantization: Towards lossless cnns with low-precision weights. arXiv preprint arXiv:1702.03044 (2017)."},
{"key":"e_1_3_2_1_19_1","volume-title":"DoReFa-Net: Training low bitwidth convolutional neural networks with low bitwidth gradients. arXiv preprint arXiv:1606.06160","author":"Zhou Shuchang","year":"2016","unstructured":"Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, and Yuheng Zou. 2016. DoReFa-Net: Training low bitwidth convolutional neural networks with low bitwidth gradients. arXiv preprint arXiv:1606.06160 (2016)."}],
"event":{"name":"ASPDAC '19: 24th Asia and South Pacific Design Automation Conference","sponsor":["SIGDA ACM Special Interest Group on Design Automation","IEICE ESS Institute of Electronics, Information and Communication Engineers, Engineering Sciences Society","IEEE CAS","IEEE CEDA","IPSJ SIG-SLDM Information Processing Society of Japan, SIG System LSI Design Methodology"],"location":"Tokyo Japan","acronym":"ASPDAC '19"},
"container-title":["Proceedings of the 24th Asia and South Pacific Design Automation Conference"],"original-title":[],
"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3287624.3287628","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3287624.3287628","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],
"deposited":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T01:08:04Z","timestamp":1750208884000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3287624.3287628"}},
"subtitle":["training accurate binarized neural networks for low-precision neural accelerators"],"short-title":[],"issued":{"date-parts":[[2019,1,21]]},"references-count":19,"alternative-id":["10.1145\/3287624.3287628","10.1145\/3287624"],"URL":"https:\/\/doi.org\/10.1145\/3287624.3287628","relation":{},"subject":[],"published":{"date-parts":[[2019,1,21]]},
"assertion":[{"value":"2019-01-21","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}