{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,29]],"date-time":"2026-01-29T05:25:11Z","timestamp":1769664311171,"version":"3.49.0"},"reference-count":7,"publisher":"Wiley","license":[{"start":{"date-parts":[[2020,2,18]],"date-time":"2020-02-18T00:00:00Z","timestamp":1581984000000},"content-version":"unspecified","delay-in-days":0,"URL":"http:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["61602494"],"award-info":[{"award-number":["61602494"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100004735","name":"Natural Science Foundation of Hunan Province","doi-asserted-by":"publisher","award":["61602494"],"award-info":[{"award-number":["61602494"]}],"id":[{"id":"10.13039\/501100004735","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Computational Intelligence and Neuroscience"],"published-print":{"date-parts":[[2020,2,18]]},"abstract":"<jats:p>The increase in sophistication of neural network models in recent years has exponentially expanded memory consumption and computational cost, thereby hindering their applications on ASIC, FPGA, and other mobile devices. Therefore, compressing and accelerating the neural networks are necessary. In this study, we introduce a novel strategy to train low-bit networks with weights and activations quantized by several bits and address two corresponding fundamental issues. One is to approximate activations through low-bit discretization for decreasing network computational cost and dot-product memory. The other is to specify weight quantization and update mechanism for discrete weights to avoid gradient mismatch. 
With quantized low-bit weights and activations, costly full-precision operations can be replaced by shift operations. We evaluate the proposed method on common datasets, and the results show that it can dramatically compress a neural network with only slight accuracy loss.<\/jats:p>","DOI":"10.1155\/2020\/7839064","type":"journal-article","created":{"date-parts":[[2020,2,18]],"date-time":"2020-02-18T18:32:49Z","timestamp":1582050769000},"page":"1-7","source":"Crossref","is-referenced-by-count":11,"title":["A Novel Low-Bit Quantization Strategy for Compressing Deep Neural Networks"],"prefix":"10.1155","volume":"2020","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-3338-3223","authenticated-orcid":true,"given":"Xin","family":"Long","sequence":"first","affiliation":[{"name":"College of Systems Engineering, National University of Defense Technology, Changsha 410073, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4247-9684","authenticated-orcid":true,"given":"XiangRong","family":"Zeng","sequence":"additional","affiliation":[{"name":"College of Systems Engineering, National University of Defense Technology, Changsha 410073, China"}]},{"given":"Zongcheng","family":"Ben","sequence":"additional","affiliation":[{"name":"College of Systems Engineering, National University of Defense Technology, Changsha 410073, China"},{"name":"College of Computer, National University of Defense Technology, Changsha 410073, China"}]},{"given":"Dianle","family":"Zhou","sequence":"additional","affiliation":[{"name":"College of Intelligent Science, National University of Defense Technology, Changsha 410073, China"}]},{"given":"Maojun","family":"Zhang","sequence":"additional","affiliation":[{"name":"College of Systems Engineering, National University of Defense Technology, Changsha 410073, 
China"}]}],"member":"311","reference":[{"key":"1","volume":"264","year":"2012"},{"key":"4","doi-asserted-by":"publisher","DOI":"10.1145\/3065386"},{"key":"9","doi-asserted-by":"publisher","DOI":"10.1145\/3007787.3001163"},{"key":"14","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-01237-3_12"},{"key":"16","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2019.2911536"},{"key":"25","doi-asserted-by":"publisher","DOI":"10.1109\/TIT.1982.1056489"},{"key":"26","doi-asserted-by":"publisher","DOI":"10.1016\/j.neunet.2018.01.010"}],"container-title":["Computational Intelligence and Neuroscience"],"original-title":[],"language":"en","link":[{"URL":"http:\/\/downloads.hindawi.com\/journals\/cin\/2020\/7839064.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"http:\/\/downloads.hindawi.com\/journals\/cin\/2020\/7839064.xml","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"http:\/\/downloads.hindawi.com\/journals\/cin\/2020\/7839064.pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2020,2,18]],"date-time":"2020-02-18T18:32:50Z","timestamp":1582050770000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.hindawi.com\/journals\/cin\/2020\/7839064\/"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,2,18]]},"references-count":7,"alternative-id":["7839064","7839064"],"URL":"https:\/\/doi.org\/10.1155\/2020\/7839064","relation":{},"ISSN":["1687-5265","1687-5273"],"issn-type":[{"value":"1687-5265","type":"print"},{"value":"1687-5273","type":"electronic"}],"subject":[],"published":{"date-parts":[[2020,2,18]]}}}