{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,20]],"date-time":"2026-03-20T15:37:26Z","timestamp":1774021046584,"version":"3.50.1"},"publisher-location":"New York, NY, USA","reference-count":82,"publisher":"ACM","funder":[{"name":"National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT)","award":["RS-2024-00347090"],"award-info":[{"award-number":["RS-2024-00347090"]}]},{"name":"RISM and CoCoSys, centers in JUMP 2.0, an SRC program sponsored by DARPA","award":["434690"],"award-info":[{"award-number":["434690"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2025,6,21]]},"DOI":"10.1145\/3695053.3731109","type":"proceedings-article","created":{"date-parts":[[2025,6,20]],"date-time":"2025-06-20T16:46:17Z","timestamp":1750437977000},"page":"1155-1170","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":3,"title":["Hybrid SLC-MLC RRAM Mixed-Signal Processing-in-Memory Architecture for Transformer Acceleration via Gradient Redistribution"],"prefix":"10.1145","author":[{"ORCID":"https:\/\/orcid.org\/0009-0002-4235-6243","authenticated-orcid":false,"given":"Chang Eun","family":"Song","sequence":"first","affiliation":[{"name":"University of California, San Diego, La Jolla, USA"}]},{"ORCID":"https:\/\/orcid.org\/0009-0003-1926-2078","authenticated-orcid":false,"given":"Priyansh","family":"Bhatnagar","sequence":"additional","affiliation":[{"name":"University of California, San Diego, La Jolla, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5409-321X","authenticated-orcid":false,"given":"Zihan","family":"Xia","sequence":"additional","affiliation":[{"name":"University of California, San Diego, La Jolla, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0442-5634","authenticated-orcid":false,"given":"Nam 
Sung","family":"Kim","sequence":"additional","affiliation":[{"name":"UIUC, Champaign, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6954-997X","authenticated-orcid":false,"given":"Tajana S","family":"Rosing","sequence":"additional","affiliation":[{"name":"University of California, San Diego, La Jolla, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8104-5136","authenticated-orcid":false,"given":"Mingu","family":"Kang","sequence":"additional","affiliation":[{"name":"University of California, San Diego, La Jolla, USA"}]}],"member":"320","published-online":{"date-parts":[[2025,6,20]]},"reference":[{"key":"e_1_3_3_2_2_2","unstructured":"Herv\u00e9 Abdi. 2007. Singular value decomposition (SVD) and generalized singular value decomposition. Encyclopedia of measurement and statistics 907 912 (2007) 44."},{"key":"e_1_3_3_2_3_2","doi-asserted-by":"crossref","unstructured":"Amogh Agrawal Akhilesh Jaiswal Deboleena Roy Bing Han Gopalakrishnan Srinivasan Aayush Ankit and Kaushik Roy. 2019. Xcel-RAM: Accelerating binary neural networks in high-throughput SRAM compute arrays. IEEE Transactions on Circuits and Systems I: Regular Papers 66 8 (2019) 3064\u20133076.","DOI":"10.1109\/TCSI.2019.2907488"},{"key":"e_1_3_3_2_4_2","unstructured":"Meta AI. 2024. Llama 3.2: Multilingual Large Language Models. https:\/\/www.llama.com."},{"key":"e_1_3_3_2_5_2","doi-asserted-by":"crossref","unstructured":"Mustafa\u00a0F Ali Akhilesh Jaiswal and Kaushik Roy. 2019. In-memory low-cost bit-serial addition using commodity DRAM technology. IEEE Transactions on Circuits and Systems I: Regular Papers 67 1 (2019) 155\u2013165.","DOI":"10.1109\/TCSI.2019.2945617"},{"key":"e_1_3_3_2_6_2","doi-asserted-by":"publisher","DOI":"10.1145\/3579371.3589062"},{"key":"e_1_3_3_2_7_2","unstructured":"ARM. 2021. Artisan Memory Compilers. https:\/\/developer.arm.com\/ip-products\/physical-ip\/embedded-memory. 
Accessed: 2021-11-08."},{"key":"e_1_3_3_2_8_2","unstructured":"Iz Beltagy Matthew\u00a0E Peters and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/2004.05150 (2020)."},{"key":"e_1_3_3_2_9_2","first-page":"2206","volume-title":"International conference on machine learning","author":"Borgeaud Sebastian","year":"2022","unstructured":"Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George\u00a0Bm Van Den\u00a0Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, et\u00a0al. 2022. Improving language models by retrieving from trillions of tokens. In International conference on machine learning. PMLR, 2206\u20132240."},{"key":"e_1_3_3_2_10_2","unstructured":"Weidong Cao Yilong Zhao Adith Boloor Yinhe Han Xuan Zhang and Li Jiang. 2021. Neural-pim: Efficient processing-in-memory with neural approximation of peripherals. IEEE Trans. Comput. 71 9 (2021) 2142\u20132155."},{"key":"e_1_3_3_2_11_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP48485.2024.10447756"},{"key":"e_1_3_3_2_12_2","doi-asserted-by":"publisher","DOI":"10.1145\/3007787.3001140"},{"key":"e_1_3_3_2_13_2","unstructured":"Jacob Devlin Ming-Wei Chang Kenton Lee and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/1810.04805 (2018)."},{"key":"e_1_3_3_2_14_2","doi-asserted-by":"publisher","unstructured":"Xiangyu Dong Cong Xu Yuan Xie and Norman\u00a0P. Jouppi. 2012. NVSim: A Circuit-Level Performance Energy and Area Model for Emerging Nonvolatile Memory. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 31 7 (2012) 994\u20131007. 
10.1109\/TCAD.2012.2185930","DOI":"10.1109\/TCAD.2012.2185930"},{"key":"e_1_3_3_2_15_2","volume-title":"International Conference on Learning Representations","author":"Dosovitskiy Alexey","year":"2021","unstructured":"Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. 2021. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In International Conference on Learning Representations."},{"key":"e_1_3_3_2_16_2","unstructured":"Keming Fan Wei-Chen Chen Sumukh Pinge H.\u00a0S.\u00a0Philip Wong and Tajana Rosing. 2024. Efficient Open Modification Spectral Library Searching in High-Dimensional Space with Multi-Level-Cell Memory. arxiv:https:\/\/arXiv.org\/abs\/2405.02756\u00a0[cs.AR] https:\/\/arxiv.org\/abs\/2405.02756"},{"key":"e_1_3_3_2_17_2","doi-asserted-by":"publisher","DOI":"10.1145\/3352460.3358260"},{"key":"e_1_3_3_2_18_2","doi-asserted-by":"crossref","unstructured":"Sujan\u00a0K Gonugondla Mingu Kang and Naresh\u00a0R Shanbhag. 2018. A variation-tolerant in-memory machine learning classifier via on-chip training. JSSC 53 11 (Nov. 2018) 3163\u20133173.","DOI":"10.1109\/JSSC.2018.2867275"},{"key":"e_1_3_3_2_19_2","doi-asserted-by":"publisher","unstructured":"Alessandro Grossi Elisa Vianello Mohamed\u00a0M. Sabry Marios Barlas Laurent Grenouillet Jean Coignus Edith Beigne Tony Wu Binh\u00a0Q. Le Mary\u00a0K. Wootters Cristian Zambelli Etienne Nowak and Subhasish Mitra. 2019. Resistive RAM Endurance: Array-Level Characterization and Correction Techniques Targeting Deep Learning Applications. IEEE Transactions on Electron Devices 66 3 (2019) 1281\u20131288. 10.1109\/TED.2019.2894387","DOI":"10.1109\/TED.2019.2894387"},{"key":"e_1_3_3_2_20_2","unstructured":"Albert Gu and Tri Dao. 2023. Mamba: Linear-time sequence modeling with selective state spaces. 
arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/2312.00752 (2023)."},{"key":"e_1_3_3_2_21_2","doi-asserted-by":"publisher","DOI":"10.1109\/ISSCC42615.2023.10067610"},{"key":"e_1_3_3_2_22_2","doi-asserted-by":"crossref","unstructured":"Je-Min Hung Cheng-Xin Xue Hui-Yao Kao Yen-Hsiang Huang Fu-Chun Chang Sheng-Po Huang Ta-Wei Liu Chuan-Jia Jhang Chin-I Su Win-San Khwa et\u00a0al. 2021. A four-megabit compute-in-memory macro with eight-bit precision based on CMOS and resistive random-access memory for AI edge devices. Nature Electronics 4 12 (2021) 921\u2013930.","DOI":"10.1038\/s41928-021-00676-9"},{"key":"e_1_3_3_2_23_2","doi-asserted-by":"publisher","DOI":"10.1145\/3307650.3322237"},{"key":"e_1_3_3_2_24_2","doi-asserted-by":"crossref","unstructured":"Yuning Jiang Peng Huang Zheng Zhou and Jinfeng Kang. 2019. Circuit design of RRAM-based neuromorphic hardware systems for classification and modified Hebbian learning. Science China Information Sciences 62 (2019) 1\u201319.","DOI":"10.1007\/s11432-018-9863-6"},{"key":"e_1_3_3_2_25_2","doi-asserted-by":"crossref","unstructured":"Hai Jin Cong Liu Haikun Liu Ruikun Luo Jiahong Xu Fubing Mao and Xiaofei Liao. 2021. ReHy: A ReRAM-based digital\/analog hybrid PIM architecture for accelerating CNN training. IEEE Transactions on Parallel and Distributed Systems 33 11 (2021) 2872\u20132884.","DOI":"10.1109\/TPDS.2021.3138087"},{"key":"e_1_3_3_2_26_2","doi-asserted-by":"publisher","unstructured":"Hai Jin Cong Liu Haikun Liu Ruikun Luo Jiahong Xu Fubing Mao and Xiaofei Liao. 2022. ReHy: A ReRAM-Based Digital\/Analog Hybrid PIM Architecture for Accelerating CNN Training. IEEE Transactions on Parallel and Distributed Systems 33 11 (2022) 2872\u20132884. 10.1109\/TPDS.2021.3138087","DOI":"10.1109\/TPDS.2021.3138087"},{"key":"e_1_3_3_2_27_2","doi-asserted-by":"publisher","DOI":"10.1109\/ISCA52012.2021.00010"},{"key":"e_1_3_3_2_28_2","doi-asserted-by":"crossref","unstructured":"Dong\u00a0Eun Kim Aayush Ankit Cheng Wang and Kaushik Roy. 
2023. SAMBA: sparsity aware in-memory computing based machine learning accelerator. IEEE Trans. Comput. 72 9 (2023) 2615\u20132627.","DOI":"10.1109\/TC.2023.3257513"},{"key":"e_1_3_3_2_29_2","volume-title":"Learning multiple layers of features from tiny images","author":"Krizhevsky Alex","year":"2009","unstructured":"Alex Krizhevsky. 2009. Learning multiple layers of features from tiny images. Technical Report."},{"key":"e_1_3_3_2_30_2","doi-asserted-by":"crossref","unstructured":"Lukas Kull Thomas Toifl Martin Schmatz Pier\u00a0Andrea Francese Christian Menolfi Matthias Braendli Marcel Kossel Thomas Morf Toke\u00a0Meyer Andersen and Yusuf Leblebici. 2013. A 3.1 mW 8b 1.2 GS\/s single-channel asynchronous SAR ADC with alternate comparators for enhanced speed in 32 nm digital SOI CMOS. IEEE Journal of Solid-State Circuits 48 12 (2013) 3049\u20133058.","DOI":"10.1109\/JSSC.2013.2279571"},{"key":"e_1_3_3_2_31_2","doi-asserted-by":"publisher","DOI":"10.1109\/ISSCC42613.2021.9365862"},{"key":"e_1_3_3_2_32_2","doi-asserted-by":"publisher","DOI":"10.1109\/HPCA57654.2024.00065"},{"key":"e_1_3_3_2_33_2","unstructured":"Shiwei Liu Chen Mu Hao Jiang Yunzhengmao Wang Jinshan Zhang Feng Lin Keji Zhou Qi Liu and Chixiao Chen. 2023. Hardsea: Hybrid analog-reram clustering and digital-sram in-memory computing accelerator for dynamic sparse self-attention in transformer. IEEE Transactions on Very Large Scale Integration (VLSI) Systems (2023)."},{"key":"e_1_3_3_2_34_2","doi-asserted-by":"crossref","unstructured":"Shuang Liu JJ Wang JT Zhou SG Hu Qi Yu TP Chen and Yang Liu. 2023. An area-and energy-efficient spiking neural network with spike-time-dependent plasticity realized with SRAM processing-in-memory macro and on-chip unsupervised learning. 
IEEE Transactions on Biomedical Circuits and Systems 17 1 (2023) 92\u2013104.","DOI":"10.1109\/TBCAS.2023.3242413"},{"key":"e_1_3_3_2_35_2","doi-asserted-by":"publisher","DOI":"10.1109\/SOCC49529.2020.9524802"},{"key":"e_1_3_3_2_36_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2023.findings-acl.656"},{"key":"e_1_3_3_2_37_2","unstructured":"Mitchell\u00a0P. Marcus Beatrice Santorini and Mary\u00a0Ann Marcinkiewicz. 1993. Building a Large Annotated Corpus of English: The Penn Treebank. Computational Linguistics 19 2 (1993) 313\u2013330. https:\/\/aclanthology.org\/J93-2004\/"},{"key":"e_1_3_3_2_38_2","unstructured":"Stephen Merity. 2021. The WikiText Long Term Dependency Language Modeling Dataset. https:\/\/blog.salesforceairesearch.com\/the-wikitext-long-term-dependency-language-modeling-dataset\/. Accessed: 2021-11-08."},{"key":"e_1_3_3_2_39_2","unstructured":"Stephen Merity Caiming Xiong James Bradbury and Richard Socher. 2016. Pointer Sentinel Mixture Models. arxiv:https:\/\/arXiv.org\/abs\/1609.07843\u00a0[cs.CL]"},{"key":"e_1_3_3_2_40_2","doi-asserted-by":"publisher","DOI":"10.1109\/IMW59701.2024.10536980"},{"key":"e_1_3_3_2_41_2","unstructured":"R Mohan et\u00a0al. 2023. Processing-in-Memory (PIM) Based Defect Prediction of Metal Surfaces Using Spiking Neural Network. \u4e2d\u570b\u6a5f\u68b0\u5de5\u7a0b\u5b78\u520a 44 4 (2023) 379\u2013388."},{"key":"e_1_3_3_2_42_2","unstructured":"Tsendsuren Munkhdalai Manaal Faruqui and Siddharth Gopal. 2024. Leave no context behind: Efficient infinite context transformers with infini-attention. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/2404.07143 (2024)."},{"key":"e_1_3_3_2_43_2","unstructured":"Boris Murmann. [n. d.]. ADC Performance Survey 1997-2024. [Online]. 
Available: https:\/\/github.com\/bmurmann\/ADC-survey."},{"key":"e_1_3_3_2_44_2","doi-asserted-by":"publisher","DOI":"10.1145\/3123939.3124545"},{"key":"e_1_3_3_2_45_2","doi-asserted-by":"crossref","unstructured":"Mirko Prezioso Farnood Merrikh-Bayat Brian\u00a0D Hoskins Gina\u00a0C Adam Konstantin\u00a0K Likharev and Dmitri\u00a0B Strukov. 2015. Training and operation of an integrated neuromorphic network based on metal-oxide memristors. Nature 521 7550 (2015) 61\u201364.","DOI":"10.1038\/nature14441"},{"key":"e_1_3_3_2_46_2","doi-asserted-by":"publisher","DOI":"10.1145\/3579371.3589057"},{"key":"e_1_3_3_2_47_2","unstructured":"A. Radford Jeffrey Wu R. Child David Luan Dario Amodei and Ilya Sutskever. 2019. Language Models are Unsupervised Multitask Learners."},{"key":"e_1_3_3_2_48_2","unstructured":"Alec Radford Jeffrey Wu Rewon Child David Luan Dario Amodei Ilya Sutskever et\u00a0al. 2019. Language models are unsupervised multitask learners. OpenAI blog 1 8 (2019) 9."},{"key":"e_1_3_3_2_49_2","doi-asserted-by":"crossref","unstructured":"Misbah Ramadan Nicol\u00e1s Wainstein Ran Ginosar and Shahar Kvatinsky. 2019. Adaptive programming in multi-level cell ReRAM. Microelectronics Journal 90 (2019) 169\u2013180.","DOI":"10.1016\/j.mejo.2019.06.004"},{"key":"e_1_3_3_2_50_2","doi-asserted-by":"publisher","unstructured":"Misbah Ramadan Nicol\u00e1s Wainstein Ran Ginosar and Shahar Kvatinsky. 2019. Adaptive programming in multi-level cell ReRAM. Microelectronics Journal 90 (2019) 169\u2013180. 10.1016\/j.mejo.2019.06.004","DOI":"10.1016\/j.mejo.2019.06.004"},{"key":"e_1_3_3_2_51_2","doi-asserted-by":"crossref","unstructured":"Yiming Ren Bobo Tian Mengge Yan Guangdi Feng Bin Gao Fangyu Yue Hui Peng Xiaodong Tang Qiuxiang Zhu Junhao Chu et\u00a0al. 2023. Associative learning of a three-terminal memristor network for digits recognition. 
Science China Information Sciences 66 2 (2023) 122403.","DOI":"10.1007\/s11432-022-3503-4"},{"key":"e_1_3_3_2_52_2","doi-asserted-by":"crossref","unstructured":"Mehdi Saberi Reza Lotfi Khalil Mafinezhad and Wouter\u00a0A Serdijn. 2011. Analysis of power consumption and linearity in capacitive digital-to-analog converters used in successive approximation ADCs. IEEE Transactions on Circuits and Systems I: Regular Papers 58 8 (2011) 1736\u20131748.","DOI":"10.1109\/TCSI.2011.2107214"},{"key":"e_1_3_3_2_53_2","doi-asserted-by":"publisher","DOI":"10.1145\/3007787.3001139"},{"key":"e_1_3_3_2_54_2","doi-asserted-by":"publisher","unstructured":"Ali Shafiee Anirban Nag Naveen Muralimanohar Rajeev Balasubramonian John\u00a0Paul Strachan Miao Hu R.\u00a0Stanley Williams and Vivek Srikumar. 2016. ISAAC: a convolutional neural network accelerator with in-situ analog arithmetic in crossbars. SIGARCH Comput. Archit. News 44 3 (jun 2016) 14\u201326. 10.1145\/3007787.3001139","DOI":"10.1145\/3007787.3001139"},{"key":"e_1_3_3_2_55_2","doi-asserted-by":"crossref","unstructured":"Ali Shafiee Anirban Nag Naveen Muralimanohar Rajeev Balasubramonian John\u00a0Paul Strachan Miao Hu R\u00a0Stanley Williams and Vivek Srikumar. 2016. ISAAC: A convolutional neural network accelerator with in-situ analog arithmetic in crossbars. ACM SIGARCH Computer Architecture News 44 3 (2016) 14\u201326.","DOI":"10.1145\/3007787.3001139"},{"key":"e_1_3_3_2_56_2","doi-asserted-by":"publisher","DOI":"10.1109\/HOTI51249.2020.00016"},{"key":"e_1_3_3_2_57_2","doi-asserted-by":"publisher","DOI":"10.5555\/2014698.2014895"},{"key":"e_1_3_3_2_58_2","first-page":"1","volume-title":"2024 IEEE Custom Integrated Circuits Conference (CICC)","author":"Song Chang\u00a0Eun","year":"2024","unstructured":"Chang\u00a0Eun Song, Yidong Li, Amardeep Ramnani, Pulkit Agrawal, Purvi Agrawal, Sung-Joon Jang, Sang-Seol Lee, Tajana Rosing, and Mingu Kang. 2024. 
52.5 TOPS\/W 1.7 GHz Reconfigurable XGBoost Inference Accelerator Based on Modular-Unit-Tree with Dynamic Data and Compute Gating. In 2024 IEEE Custom Integrated Circuits Conference (CICC). IEEE, 1\u20132."},{"key":"e_1_3_3_2_59_2","doi-asserted-by":"publisher","DOI":"10.1145\/3665314.3670798"},{"key":"e_1_3_3_2_60_2","doi-asserted-by":"crossref","unstructured":"Aaron Stillmaker and Bevan Baas. 2017. Scaling equations for the accurate prediction of CMOS device performance from 180 nm to 7 nm. Integration 58 (2017) 74\u201381.","DOI":"10.1016\/j.vlsi.2017.02.002"},{"key":"e_1_3_3_2_61_2","doi-asserted-by":"crossref","unstructured":"Nishil Talati Saransh Gupta Pravin Mane and Shahar Kvatinsky. 2016. Logic design within memristive memories using memristor-aided loGIC (MAGIC). IEEE Transactions on Nanotechnology 15 4 (2016) 635\u2013650.","DOI":"10.1109\/TNANO.2016.2570248"},{"key":"e_1_3_3_2_62_2","unstructured":"M\u00a0Onat Topal Anil Bas and Imke van Heerden. 2021. Exploring transformers in natural language generation: Gpt bert and xlnet. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/2102.08036 (2021)."},{"key":"e_1_3_3_2_63_2","doi-asserted-by":"publisher","DOI":"10.1145\/3466752.3480071"},{"key":"e_1_3_3_2_64_2","doi-asserted-by":"publisher","DOI":"10.1109\/ISSCC19947.2020.9062979"},{"key":"e_1_3_3_2_65_2","unstructured":"Alex Wang. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/1804.07461 (2018)."},{"key":"e_1_3_3_2_66_2","doi-asserted-by":"crossref","unstructured":"Alex Wang Amanpreet Singh Julian Michael Felix Hill Omer Levy and Samuel\u00a0R. Bowman. 2019. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. 
arxiv:https:\/\/arXiv.org\/abs\/1804.07461\u00a0[cs.CL]","DOI":"10.18653\/v1\/W18-5446"},{"key":"e_1_3_3_2_67_2","doi-asserted-by":"crossref","unstructured":"Hanrui Wang Zhanghao Wu Zhijian Liu Han Cai Ligeng Zhu Chuang Gan and Song Han. 2020. Hat: Hardware-aware transformers for efficient natural language processing. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/2005.14187 (2020).","DOI":"10.18653\/v1\/2020.acl-main.686"},{"key":"e_1_3_3_2_68_2","unstructured":"Weizhi Wang Li Dong Hao Cheng Xiaodong Liu Xifeng Yan Jianfeng Gao and Furu Wei. 2024. Augmenting language models with long-term memory. Advances in Neural Information Processing Systems 36 (2024)."},{"key":"e_1_3_3_2_69_2","doi-asserted-by":"publisher","DOI":"10.1109\/HPCA57654.2024.00033"},{"key":"e_1_3_3_2_70_2","doi-asserted-by":"crossref","unstructured":"H-S\u00a0Philip Wong Simone Raoux SangBum Kim Jiale Liang John\u00a0P Reifenberg Bipin Rajendran Mehdi Asheghi and Kenneth\u00a0E Goodson. 2010. Phase change memory. Proc. IEEE 98 12 (2010) 2201\u20132227.","DOI":"10.1109\/JPROC.2010.2070050"},{"key":"e_1_3_3_2_71_2","doi-asserted-by":"crossref","unstructured":"Yuting Wu Ziyu Wang and Wei\u00a0D Lu. 2024. PIM GPT a hybrid process in memory accelerator for autoregressive transformers. npj Unconventional Computing 1 1 (2024) 4.","DOI":"10.1038\/s44335-024-00004-2"},{"key":"e_1_3_3_2_72_2","doi-asserted-by":"publisher","DOI":"10.1109\/ISSCC42613.2021.9365769"},{"key":"e_1_3_3_2_73_2","doi-asserted-by":"publisher","DOI":"10.1109\/ISLPED58423.2023.10244409"},{"key":"e_1_3_3_2_74_2","doi-asserted-by":"publisher","DOI":"10.1109\/ESSERC62670.2024.10719453"},{"key":"e_1_3_3_2_75_2","doi-asserted-by":"publisher","DOI":"10.1145\/3400302.3415640"},{"key":"e_1_3_3_2_76_2","doi-asserted-by":"publisher","DOI":"10.1145\/3400302.3415640"},{"key":"e_1_3_3_2_77_2","doi-asserted-by":"crossref","unstructured":"Peng Yao Huaqiang Wu Bin Gao Jianshi Tang Qingtian Zhang Wenqiang Zhang J\u00a0Joshua Yang and He Qian. 2020. 
Fully hardware-implemented memristor convolutional neural network. Nature 577 7792 (2020) 641\u2013646.","DOI":"10.1038\/s41586-020-1942-4"},{"key":"e_1_3_3_2_78_2","doi-asserted-by":"publisher","DOI":"10.1109\/MICRO56248.2022.00059"},{"key":"e_1_3_3_2_79_2","unstructured":"Kentaro Yoshioka. 2024. vision-transformers-cifar10: Training Vision Transformers (ViT) and related models on CIFAR-10. https:\/\/github.com\/kentaroy47\/vision-transformers-cifar10."},{"key":"e_1_3_3_2_80_2","doi-asserted-by":"crossref","unstructured":"Shimeng Yu Wonbo Shim Xiaochen Peng and Yandong Luo. 2021. RRAM for compute-in-memory: From inference to training. IEEE Transactions on Circuits and Systems I: Regular Papers 68 7 (2021) 2753\u20132765.","DOI":"10.1109\/TCSI.2021.3072200"},{"key":"e_1_3_3_2_81_2","doi-asserted-by":"crossref","unstructured":"Wenqiang Zhang Bin Gao Jianshi Tang Peng Yao Shimeng Yu Meng-Fan Chang Hoi-Jun Yoo He Qian and Huaqiang Wu. 2020. Neuro-inspired computing chips. Nature electronics 3 7 (2020) 371\u2013382.","DOI":"10.1038\/s41928-020-0435-7"},{"key":"e_1_3_3_2_82_2","doi-asserted-by":"publisher","DOI":"10.1109\/HPCA53966.2022.00082"},{"key":"e_1_3_3_2_83_2","doi-asserted-by":"crossref","unstructured":"Mohammed\u00a0A Zidan John\u00a0Paul Strachan and Wei\u00a0D Lu. 2018. The future of electronics based on memristive systems. 
Nature electronics 1 1 (2018) 22\u201329.","DOI":"10.1038\/s41928-017-0006-8"}],"event":{"name":"ISCA '25: Proceedings of the 52nd Annual International Symposium on Computer Architecture","location":"Tokyo Japan","acronym":"ISCA '25","sponsor":["SIGARCH ACM Special Interest Group on Computer Architecture"]},"container-title":["Proceedings of the 52nd Annual International Symposium on Computer Architecture"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3695053.3731109","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,21]],"date-time":"2025-06-21T11:04:10Z","timestamp":1750503850000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3695053.3731109"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,6,20]]},"references-count":82,"alternative-id":["10.1145\/3695053.3731109","10.1145\/3695053"],"URL":"https:\/\/doi.org\/10.1145\/3695053.3731109","relation":{},"subject":[],"published":{"date-parts":[[2025,6,20]]},"assertion":[{"value":"2025-06-20","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}