{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,2]],"date-time":"2026-04-02T09:38:36Z","timestamp":1775122716680,"version":"3.50.1"},"reference-count":56,"publisher":"Association for Computing Machinery (ACM)","issue":"5","content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["Proc. VLDB Endow."],"published-print":{"date-parts":[[2023,1]]},"abstract":"<jats:p>Embedding models are effective for learning high-dimensional sparse data. Traditionally, they are deployed in DRAM parameter servers (PS) for online inference access. However, the ever-increasing model capacity makes this practice suffer from both high storage costs and long recovery time. Rapidly developing Persistent Memory (PM) offers new opportunities to PSs owing to its large capacity at low cost, as well as its persistence, while applying PM also raises two challenges: high read latency and a heavy CPU burden. To provide a low-cost but still high-performance parameter service for online inference, we introduce PetPS, the first production-deployed PM parameter server. (1) To mitigate high PM latency, PetPS introduces a PM hash index tailored for embedding model workloads to minimize PM accesses. (2) To alleviate the CPU burden, PetPS offloads parameter gathering to NICs, avoiding CPU stalls when accessing parameters on PM and thus improving CPU efficiency. Our evaluation shows that PetPS boosts throughput by 1.3--1.7X compared to PSs that use state-of-the-art PM hash indexes, or achieves a 2.9--5.5X latency reduction at the same throughput. Since 2020, PetPS has been deployed at Kuaishou, a world-leading short video company, and has reduced TCO by 30% without performance degradation.<\/jats:p>","DOI":"10.14778\/3579075.3579077","type":"journal-article","created":{"date-parts":[[2023,3,6]],"date-time":"2023-03-06T17:10:26Z","timestamp":1678122626000},"page":"1013-1022","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":11,"title":["PetPS: Supporting Huge Embedding Models with Persistent Memory"],"prefix":"10.14778","volume":"16","author":[{"given":"Minhui","family":"Xie","sequence":"first","affiliation":[{"name":"Tsinghua University"}]},{"given":"Youyou","family":"Lu","sequence":"additional","affiliation":[{"name":"Tsinghua University"}]},{"given":"Qing","family":"Wang","sequence":"additional","affiliation":[{"name":"Tsinghua University"}]},{"given":"Yangyang","family":"Feng","sequence":"additional","affiliation":[{"name":"Tsinghua University"}]},{"given":"Jiaqiang","family":"Liu","sequence":"additional","affiliation":[{"name":"Kuaishou"}]},{"given":"Kai","family":"Ren","sequence":"additional","affiliation":[{"name":"Kuaishou"}]},{"given":"Jiwu","family":"Shu","sequence":"additional","affiliation":[{"name":"Tsinghua University"}]}],"member":"320","published-online":{"date-parts":[[2023,3,6]]},"reference":[{"key":"e_1_2_1_1_1","unstructured":"2022. Apache Kafka. https:\/\/kafka.apache.org\/."},{"key":"e_1_2_1_2_1","unstructured":"2022. Apache MXNet | A flexible and efficient library for deep learning. https:\/\/mxnet.apache.org\/versions\/1.9.1\/."},{"key":"e_1_2_1_3_1","unstructured":"2022. Compute Express Link. https:\/\/www.computeexpresslink.org\/."},{"key":"e_1_2_1_4_1","unstructured":"2022.
dmlc\/ps-lite: A lightweight parameter server interface. https:\/\/github.com\/dmlc\/ps-lite."},{"key":"e_1_2_1_5_1","unstructured":"2022. DPDK. https:\/\/www.dpdk.org\/."},{"key":"e_1_2_1_6_1","unstructured":"2022. facebook\/folly. https:\/\/github.com\/facebook\/folly\/blob\/main\/folly\/container\/F14.md."},{"key":"e_1_2_1_7_1","unstructured":"2022. Intel(r) Data Direct IO A Primer. https:\/\/www.intel.com\/content\/dam\/www\/public\/us\/en\/documents\/technology-briefs\/data-direct-i-o-technology-brief.pdf."},{"key":"e_1_2_1_8_1","unstructured":"2022. Intel\u00ae Optane\u2122 Persistent Memory. https:\/\/www.intel.com\/content\/www\/us\/en\/architecture-and-technology\/optane-dc-persistent-memory.html."},{"key":"e_1_2_1_9_1","unstructured":"2022. Kuaishou: Storage Upgrade for Short Video Services. https:\/\/www.intel.com.au\/content\/www\/au\/en\/customer-spotlight\/stories\/kuaishou-customer-story.html."},{"key":"e_1_2_1_10_1","unstructured":"2022. rdma-core\/ibverbs.h at master \u00b7 linux-rdma\/rdma-core. https:\/\/github.com\/linux-rdma\/rdma-core\/blob\/master\/libibverbs\/ibverbs.h."},{"key":"e_1_2_1_11_1","unstructured":"2022. Samsung Electronics Unveils Far-Reaching Next-Generation Memory Solutions at Flash Memory Summit 2022 - Samsung Global Newsroom. https:\/\/news.samsung.com\/global\/samsung-electronics-unveils-far-reaching-next-generation-memory-solutions-at-flash-memory-summit-2022."},{"key":"e_1_2_1_12_1","volume-title":"Luoshang Pan, Valmiki Rampersad, Jens Axboe, Banit Agrawal, Fuxun Yu, Ansha Yu, Trung Le, et al.","author":"Ardestani Ehsan K","year":"2021","unstructured":"Ehsan K Ardestani, Changkyu Kim, Seung Jae Lee, Luoshang Pan, Valmiki Rampersad, Jens Axboe, Banit Agrawal, Fuxun Yu, Ansha Yu, Trung Le, et al. 2021. Supporting Massive DLRM Inference Through Software Defined Memory. arXiv preprint arXiv:2110.11489 (2021)."},{"key":"e_1_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.14778\/3461535.3461543"},{"key":"e_1_2_1_14_1","doi-asserted-by":"publisher","DOI":"10.1145\/3373376.3378515"},{"key":"e_1_2_1_15_1","volume-title":"2020 USENIX Annual Technical Conference (USENIX ATC 20)","author":"Chen Zhangyu","year":"2020","unstructured":"Zhangyu Chen, Yu Hua, Bo Ding, and Pengfei Zuo. 2020. Lock-free concurrent level hashing for persistent memory. In 2020 USENIX Annual Technical Conference (USENIX ATC 20). 799--812."},{"key":"e_1_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.1145\/2988450.2988454"},{"key":"e_1_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1145\/1807128.1807152"},{"key":"e_1_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.1145\/2901318.2901323"},{"key":"e_1_2_1_19_1","volume-title":"Large Scale Distributed Deep Networks. In Advances in Neural Information Processing Systems 25: 26th Annual Conference on Neural Information Processing Systems 2012. Proceedings of a meeting held","author":"Dean Jeffrey","year":"2012","unstructured":"Jeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Quoc V. Le, Mark Z. Mao, Marc'Aurelio Ranzato, Andrew W. Senior, Paul A. Tucker, Ke Yang, and Andrew Y. Ng. 2012. Large Scale Distributed Deep Networks. In Advances in Neural Information Processing Systems 25: 26th Annual Conference on Neural Information Processing Systems 2012. Proceedings of a meeting held December 3-6, 2012, Lake Tahoe, Nevada, United States, Peter L. Bartlett, Fernando C. N. Pereira, Christopher J. C. Burges, L\u00e9on Bottou, and Kilian Q. Weinberger (Eds.). 1232--1240. https:\/\/proceedings.neurips.cc\/paper\/2012\/hash\/6aca97005c68f1206823815f66102863-Abstract.html"},{"key":"e_1_2_1_20_1","volume-title":"Proceedings of Machine Learning and Systems 2019","author":"Eisenman Assaf","year":"2019","unstructured":"Assaf Eisenman, Maxim Naumov, Darryl Gardner, Misha Smelyanskiy, Sergey Pupyrev, Kim M. Hazelwood, Asaf Cidon, and Sachin Katti. 2019. Bandana: Using Non-Volatile Memory for Storing Deep Learning Models. In Proceedings of Machine Learning and Systems 2019, MLSys 2019, Stanford, CA, USA, March 31 - April 2, 2019, Ameet Talwalkar, Virginia Smith, and Matei Zaharia (Eds.). mlsys.org. https:\/\/proceedings.mlsys.org\/book\/277.pdf"},{"key":"e_1_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.1145\/2966884.2966918"},{"key":"e_1_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.1145\/3224419"},{"key":"e_1_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.24963\/ijcai.2017\/239"},{"key":"e_1_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.1109\/HPCA47549.2020.00047"},{"key":"e_1_2_1_26_1","doi-asserted-by":"publisher","DOI":"10.1145\/3038912.3052569"},{"key":"e_1_2_1_27_1","doi-asserted-by":"publisher","DOI":"10.5555\/3488766.3488792"},{"key":"e_1_2_1_28_1","volume-title":"2016 USENIX Annual Technical Conference (USENIX ATC 16)","author":"Kalia Anuj","year":"2016","unstructured":"Anuj Kalia, Michael Kaminsky, and David G Andersen. 2016. Design guidelines for high performance {RDMA} systems. In 2016 USENIX Annual Technical Conference (USENIX ATC 16). 437--450."},{"key":"e_1_2_1_29_1","doi-asserted-by":"publisher","DOI":"10.1145\/3341301.3359635"},{"key":"e_1_2_1_30_1","doi-asserted-by":"publisher","DOI":"10.1145\/2640087.2644155"},{"key":"e_1_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.1145\/3219819.3220023"},{"key":"e_1_2_1_32_1","volume-title":"Persia: A Hybrid System Scaling Deep Learning Based Recommenders up to 100 Trillion Parameters. arXiv preprint arXiv:2111.05897","author":"Lian Xiangru","year":"2021","unstructured":"Xiangru Lian, Binhang Yuan, Xuefeng Zhu, Yulong Wang, Yongjun He, Honghuan Wu, Lei Sun, Haodong Lyu, Chengjun Liu, Xing Dong, et al. 2021. Persia: A Hybrid System Scaling Deep Learning Based Recommenders up to 100 Trillion Parameters. arXiv preprint arXiv:2111.05897 (2021)."},{"key":"e_1_2_1_33_1","doi-asserted-by":"publisher","DOI":"10.14778\/3389133.3389134"},{"key":"e_1_2_1_34_1","volume-title":"19th USENIX Conference on File and Storage Technologies (FAST 21)","author":"Ma Shaonan","year":"2021","unstructured":"Shaonan Ma, Kang Chen, Shimin Chen, Mengxing Liu, Jianglang Zhu, Hongbo Kang, and Yongwei Wu. 2021. {ROART}: Range-query Optimized Persistent {ART}. In 19th USENIX Conference on File and Storage Technologies (FAST 21). 1--16."},{"key":"e_1_2_1_35_1","doi-asserted-by":"publisher","DOI":"10.1145\/3514221.3517902"},{"key":"e_1_2_1_36_1","doi-asserted-by":"publisher","DOI":"10.14778\/3489496.3489511"},{"key":"e_1_2_1_37_1","doi-asserted-by":"publisher","DOI":"10.5555\/3323298.3323302"},{"key":"e_1_2_1_38_1","doi-asserted-by":"publisher","DOI":"10.1145\/2882903.2915251"},{"key":"e_1_2_1_39_1","doi-asserted-by":"publisher","DOI":"10.1145\/3340531.3412744"},{"key":"e_1_2_1_40_1","doi-asserted-by":"publisher","DOI":"10.1145\/3514221.3517860"},{"key":"e_1_2_1_41_1","doi-asserted-by":"publisher","DOI":"10.1145\/3503222.3507777"},{"key":"e_1_2_1_42_1","volume-title":"Ekko: A Large-Scale Deep Learning Recommender System with Low-Latency Model Update. In 16th USENIX Symposium on Operating Systems Design and Implementation (OSDI 22)","author":"Sima Chijun","year":"2022","unstructured":"Chijun Sima, Yao Fu, Man-Kit Sit, Liyi Guo, Xuri Gong, Feng Lin, Junyu Wu, Yongsheng Li, Haidong Rong, Pierre-Louis Aublin, and Luo Mai. 2022. Ekko: A Large-Scale Deep Learning Recommender System with Low-Latency Model Update. In 16th USENIX Symposium on Operating Systems Design and Implementation (OSDI 22). USENIX Association, Carlsbad, CA, 821--839. https:\/\/www.usenix.org\/conference\/osdi22\/presentation\/sima"},{"key":"e_1_2_1_43_1","volume-title":"13th USENIX Symposium on Operating Systems Design and Implementation (OSDI 18)","author":"Sriraman Akshitha","year":"2018","unstructured":"Akshitha Sriraman and Thomas F Wenisch. 2018.
{&mu;Tune}:{Auto-Tuned} Threading for {OLDI} Microservices. In 13th USENIX Symposium on Operating Systems Design and Implementation (OSDI 18). 177--194."},{"key":"e_1_2_1_44_1","doi-asserted-by":"publisher","DOI":"10.1007\/s00778-020-00622-9"},{"key":"e_1_2_1_45_1","volume-title":"COSPlay: Leveraging Task-Level Parallelism for High-Throughput Synchronous Persistence. In MICRO-54: 54th Annual IEEE\/ACM International Symposium on Microarchitecture. 86--99","author":"Vemmou Marina","year":"2021","unstructured":"Marina Vemmou and Alexandros Daglis. 2021. COSPlay: Leveraging Task-Level Parallelism for High-Throughput Synchronous Persistence. In MICRO-54: 54th Annual IEEE\/ACM International Symposium on Microarchitecture. 86--99."},{"key":"e_1_2_1_46_1","volume-title":"2022 USENIX Annual Technical Conference (USENIX ATC 22)","author":"Wang Jing","year":"2022","unstructured":"Jing Wang, Youyou Lu, Qing Wang, Minhui Xie, Keji Huang, and Jiwu Shu. 2022. Pacman: An Efficient Compaction Approach for {Log-Structured}{Key-Value} Store on Persistent Memory. In 2022 USENIX Annual Technical Conference (USENIX ATC 22). 773--788."},{"key":"e_1_2_1_47_1","volume-title":"15th USENIX Symposium on Operating Systems Design and Implementation (OSDI 21)","author":"Wang Qing","year":"2021","unstructured":"Qing Wang, Youyou Lu, Junru Li, and Jiwu Shu. 2021. Nap: A {Black-Box} Approach to {NUMA-Aware} Persistent Memory Indexes. In 15th USENIX Symposium on Operating Systems Design and Implementation (OSDI 21). 93--111."},{"key":"e_1_2_1_48_1","doi-asserted-by":"publisher","DOI":"10.1145\/3124749.3124754"},{"key":"e_1_2_1_49_1","doi-asserted-by":"publisher","DOI":"10.1109\/SC41405.2020.00025"},{"key":"e_1_2_1_50_1","volume-title":"2017 USENIX Annual Technical Conference (USENIX ATC 17)","author":"Zhang Hao","year":"2017","unstructured":"Hao Zhang, Zeyu Zheng, Shizhen Xu, Wei Dai, Qirong Ho, Xiaodan Liang, Zhiting Hu, Jinliang Wei, Pengtao Xie, and Eric P Xing. 2017. Poseidon: An efficient communication architecture for distributed deep learning on {GPU} clusters. In 2017 USENIX Annual Technical Conference (USENIX ATC 17). 181--193."},{"key":"e_1_2_1_51_1","doi-asserted-by":"publisher","DOI":"10.1145\/3447786.3456237"},{"key":"e_1_2_1_52_1","first-page":"412","article-title":"Distributed hierarchical gpu parameter server for massive scale deep learning ads systems","volume":"2","author":"Zhao Weijie","year":"2020","unstructured":"Weijie Zhao, Deping Xie, Ronglai Jia, Yulei Qian, Ruiquan Ding, Mingming Sun, and Ping Li. 2020.
Distributed hierarchical gpu parameter server for massive scale deep learning ads systems. Proceedings of Machine Learning and Systems 2 (2020), 412--428.","journal-title":"Proceedings of Machine Learning and Systems"},{"key":"e_1_2_1_53_1","doi-asserted-by":"publisher","DOI":"10.1145\/3357384.3358045"},{"key":"e_1_2_1_54_1","unstructured":"Guorui Zhou, Weijie Bian, Kailun Wu, Lejian Ren, Qi Pi, Yujing Zhang, Can Xiao, Xiang-Rong Sheng, Na Mou, Xinchen Luo, et al. 2020. CAN: revisiting feature co-action for click-through rate prediction. arXiv preprint arXiv:2011.05625 (2020)."},{"key":"e_1_2_1_55_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v33i01.33015941"},{"key":"e_1_2_1_56_1","doi-asserted-by":"publisher","DOI":"10.1145\/3219819.3219823"},{"key":"e_1_2_1_57_1","volume-title":"13th USENIX Symposium on Operating Systems Design and Implementation (OSDI 18)","author":"Zuo Pengfei","year":"2018","unstructured":"Pengfei Zuo, Yu Hua, and Jie Wu. 2018. {Write-Optimized} and {High-Performance} Hashing Index Scheme for Persistent Memory. In 13th USENIX Symposium on Operating Systems Design and Implementation (OSDI 18). 461--476."}],"container-title":["Proceedings of the VLDB Endowment"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.14778\/3579075.3579077","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,3,6]],"date-time":"2023-03-06T17:14:33Z","timestamp":1678122873000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.14778\/3579075.3579077"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,1]]},"references-count":56,"journal-issue":{"issue":"5","published-print":{"date-parts":[[2023,1]]}},"alternative-id":["10.14778\/3579075.3579077"],"URL":"https:\/\/doi.org\/10.14778\/3579075.3579077","relation":{},"ISSN":["2150-8097"],"issn-type":[{"value":"2150-8097","type":"print"}],"subject":[],"published":{"date-parts":[[2023,1]]},"assertion":[{"value":"2023-03-06","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}