{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,22]],"date-time":"2025-10-22T23:37:14Z","timestamp":1761176234995,"version":"build-2065373602"},"reference-count":0,"publisher":"IOS Press","isbn-type":[{"value":"9781643686318","type":"electronic"}],"license":[{"start":{"date-parts":[[2025,10,21]],"date-time":"2025-10-21T00:00:00Z","timestamp":1761004800000},"content-version":"unspecified","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by-nc\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2025,10,21]]},"abstract":"<jats:p>Large Language Models (LLMs) have demonstrated remarkable performance across various natural language processing (NLP) tasks. However, their deployment is challenging due to the substantial computational resources required. Power-of-two (PoT) quantization is a general tool to counteract this difficulty. Albeit previous works on PoT quantization can be efficiently dequantized on CPUs using fixed-point addition, it showed less effectiveness on GPUs. The reason is entanglement of the sign bit and sequential bit manipulations needed for dequantization. We propose a novel POT quantization framework for LLM weights that (i) outperforms state-of-the-art accuracy in extremely low-precision number formats, and (ii) enables faster inference through more efficient dequantization. To maintain the accuracy of the quantized model, we introduce a two-step post-training algorithm: (i) initialize the quantization scales with a robust starting point, and (ii) refine these scales using a minimal calibration set. The performance of our PoT post-training algorithm surpasses the current state-of-the-art in integer quantization, particularly at low precisions such as 2- and 3-bit formats. 
Our PoT quantization accelerates the dequantization step required for floating-point inference, yielding a 3.67\u00d7 speedup on an NVIDIA V100 and a 1.63\u00d7 speedup on an NVIDIA RTX 4090 compared to uniform integer dequantization.<\/jats:p>","DOI":"10.3233\/faia251188","type":"book-chapter","created":{"date-parts":[[2025,10,22]],"date-time":"2025-10-22T09:54:01Z","timestamp":1761126841000},"source":"Crossref","is-referenced-by-count":0,"title":["PoT-PTQ: Two-Step Power-of-Two Post-Training for LLMs"],"prefix":"10.3233","author":[{"given":"Xinyu","family":"Wang","sequence":"first","affiliation":[{"name":"McGill University, Canada"}]},{"given":"Vahid","family":"Partovi Nia","sequence":"additional","affiliation":[{"name":"Noah\u2019s Ark Lab, Canada"}]},{"given":"Peng","family":"Lu","sequence":"additional","affiliation":[{"name":"Universit\u00e9 de Montr\u00e9al, Canada"}]},{"given":"Jerry","family":"Huang","sequence":"additional","affiliation":[{"name":"Universit\u00e9 de Montr\u00e9al, Canada"},{"name":"Mila \u2013 Quebec AI Institute, Canada"}]},{"given":"Xiao-Wen","family":"Chang","sequence":"additional","affiliation":[{"name":"McGill University, Canada"}]},{"given":"Boxing","family":"Chen","sequence":"additional","affiliation":[{"name":"Noah\u2019s Ark Lab, Canada"}]},{"given":"Yufei","family":"Cui","sequence":"additional","affiliation":[{"name":"Noah\u2019s Ark Lab, Canada"}]}],"member":"7437","container-title":["Frontiers in Artificial Intelligence and Applications","ECAI 2025"],"original-title":[],"link":[{"URL":"https:\/\/ebooks.iospress.nl\/pdf\/doi\/10.3233\/FAIA251188","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,22]],"date-time":"2025-10-22T09:54:01Z","timestamp":1761126841000},"score":1,"resource":{"primary":{"URL":"https:\/\/ebooks.iospress.nl\/doi\/10.3233\/FAIA251188"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,10,21]]},"ISBN":["9781643686318"],"references-count":0,"URL":"https:\/\/doi.org\/10.3233\/faia251188","relation":{},"ISSN":["0922-6389","1879-8314"],"issn-type":[{"value":"0922-6389","type":"print"},{"value":"1879-8314","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,10,21]]}}}