{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,12]],"date-time":"2026-03-12T00:58:58Z","timestamp":1773277138794,"version":"3.50.1"},"publisher-location":"New York, NY, USA","reference-count":20,"publisher":"ACM","license":[{"start":{"date-parts":[[2020,8,10]],"date-time":"2020-08-10T00:00:00Z","timestamp":1597017600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2020,8,10]]},"DOI":"10.1145\/3370748.3406567","type":"proceedings-article","created":{"date-parts":[[2020,8,7]],"date-time":"2020-08-07T16:10:32Z","timestamp":1596816632000},"page":"175-180","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":186,"title":["FTRANS"],"prefix":"10.1145","author":[{"given":"Bingbing","family":"Li","sequence":"first","affiliation":[{"name":"University of Connecticut"}]},{"given":"Santosh","family":"Pandey","sequence":"additional","affiliation":[{"name":"Stevens Institute of Technology"}]},{"given":"Haowen","family":"Fang","sequence":"additional","affiliation":[{"name":"Syracuse University"}]},{"given":"Yanjun","family":"Lyv","sequence":"additional","affiliation":[{"name":"University of Connecticut"}]},{"given":"Ji","family":"Li","sequence":"additional","affiliation":[{"name":"Microsoft Corporation"}]},{"given":"Jieyang","family":"Chen","sequence":"additional","affiliation":[{"name":"Oak Ridge National Laboratory"}]},{"given":"Mimi","family":"Xie","sequence":"additional","affiliation":[{"name":"University of Texas at San Antonio"}]},{"given":"Lipeng","family":"Wan","sequence":"additional","affiliation":[{"name":"Oak Ridge National Laboratory"}]},{"given":"Hang","family":"Liu","sequence":"additional","affiliation":[{"name":"Stevens Institute of 
Technology"}]},{"given":"Caiwen","family":"Ding","sequence":"additional","affiliation":[{"name":"University of Connecticut"}]}],"member":"320","published-online":{"date-parts":[[2020,8,10]]},"reference":[{"key":"e_1_3_2_2_1_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/W17-4303"},
{"key":"e_1_3_2_2_2_1","volume-title":"Dzmitry Bahdanau, and Yoshua Bengio.","author":"Cho Kyunghyun","year":"2014","unstructured":"Kyunghyun Cho, Bart Van Merri\u00ebnboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: encoder-decoder approaches. arXiv preprint arXiv:1409.1259."},
{"key":"e_1_3_2_2_3_1","unstructured":"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805."},
{"key":"e_1_3_2_2_4_1","unstructured":"Caiwen Ding, Siyu Liao, Yanzhi Wang, Zhe Li, et al. 2017. CirCNN: Accelerating and Compressing Deep Neural Networks using Block-circulant Weight Matrices. In MICRO. ACM, 395--408."},
{"key":"e_1_3_2_2_5_1","doi-asserted-by":"crossref","unstructured":"Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation 9, 8, 1735--1780.","DOI":"10.1162\/neco.1997.9.8.1735"},
{"key":"e_1_3_2_2_6_1","doi-asserted-by":"publisher","DOI":"10.1016\/0167-8191(92)90066-G"},
{"key":"e_1_3_2_2_7_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/D18-1106"},
{"key":"e_1_3_2_2_8_1","unstructured":"Yoon Kim, Carl Denton, Luong Hoang, and Alexander M Rush. 2017. Structured attention networks. arXiv preprint arXiv:1702.00887."},
{"key":"e_1_3_2_2_9_1","unstructured":"Yinhan Liu et al. 2019. Roberta: a robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692."},
{"key":"e_1_3_2_2_10_1","volume-title":"Learning word vectors for sentiment analysis","author":"Maas Andrew L","unstructured":"Andrew L Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In ACL. Association for Computational Linguistics, 142--150."},
{"key":"e_1_3_2_2_11_1","volume-title":"5th International Conference on Learning Representations, ICLR 2017","author":"Merity Stephen","year":"2017","unstructured":"Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. [n. d.] Pointer sentinel mixture models. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017."},
{"key":"e_1_3_2_2_12_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/N19-4009"},
{"key":"e_1_3_2_2_13_1","volume-title":"Structured matrices and polynomials: unified superfast algorithms","author":"Pan Victor","unstructured":"Victor Pan. 2012. Structured matrices and polynomials: unified superfast algorithms. Springer Science & Business Media."},
{"key":"e_1_3_2_2_14_1","unstructured":"Adam Paszke et al. 2017. Automatic differentiation in pytorch."},
{"key":"e_1_3_2_2_15_1","doi-asserted-by":"crossref","unstructured":"Peng Shi, Jinfeng Rao, and Jimmy Lin. 2018. Simple attention-based representation learning for ranking short social media posts. arXiv preprint arXiv:1811.01013.","DOI":"10.18653\/v1\/N19-1229"},
{"key":"e_1_3_2_2_16_1","unstructured":"Julius Orion Smith. 2007. Mathematics of the discrete Fourier transform (DFT): with audio applications. Julius Smith."},
{"key":"e_1_3_2_2_17_1","unstructured":"2019. Supercharge Your AI and Database Applications with Xilinx's HBM-Enabled UltraScale+ Devices Featuring Samsung HBM2. Xilinx white paper WP508 (v1.1.2)."},
{"key":"e_1_3_2_2_18_1","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, 5998--6008."},
{"key":"e_1_3_2_2_19_1","doi-asserted-by":"publisher","DOI":"10.1145\/3174243.3174253"},
{"key":"e_1_3_2_2_20_1","volume-title":"International Conference on Machine Learning, 4082--4090","author":"Zhao Liang","year":"2017","unstructured":"Liang Zhao, Siyu Liao, Yanzhi Wang, Zhe Li, Jian Tang, and Bo Yuan. 2017. Theoretical properties for neural networks with weight matrices of low displacement rank. In International Conference on Machine Learning, 4082--4090."}],
"event":{"name":"ISLPED '20: ACM\/IEEE International Symposium on Low Power Electronics and Design","location":"Boston Massachusetts","acronym":"ISLPED '20","sponsor":["SIGDA ACM Special Interest Group on Design Automation","IEEE CAS"]},"container-title":["Proceedings of the ACM\/IEEE International Symposium on Low Power Electronics and Design"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3370748.3406567","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3370748.3406567","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T23:23:35Z","timestamp":1750202615000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3370748.3406567"}},"subtitle":["energy-efficient acceleration of transformers using 
FPGA"],"short-title":[],"issued":{"date-parts":[[2020,8,10]]},"references-count":20,"alternative-id":["10.1145\/3370748.3406567","10.1145\/3370748"],"URL":"https:\/\/doi.org\/10.1145\/3370748.3406567","relation":{},"subject":[],"published":{"date-parts":[[2020,8,10]]},"assertion":[{"value":"2020-08-10","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}