{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T04:47:11Z","timestamp":1750308431977,"version":"3.41.0"},"reference-count":71,"publisher":"Association for Computing Machinery (ACM)","issue":"4","license":[{"start":{"date-parts":[[2022,9,16]],"date-time":"2022-09-16T00:00:00Z","timestamp":1663286400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Archit. Code Optim."],"published-print":{"date-parts":[[2022,12,31]]},"abstract":"<jats:p>Deep Neural Networks\u00a0(DNNs) tend to go deeper and wider, which poses a significant challenge to the training of DNNs, due to the limited memory capacity of DNN accelerators. Existing solutions for memory-efficient DNN training are densely coupled with the application features of DNN workloads, e.g., layer structures or computational graphs of DNNs are necessary for these solutions. This would result in weak versatility for DNNs with sophisticated layer structures or complicated computation graphs. These schemes usually need to be re-implemented or re-adapted due to the new layer structures or the unusual operators in the computational graphs introduced by these DNNs.<\/jats:p>\n          <jats:p>\n            In this article, we review the memory pressure issues of DNN training from the perspective of runtime systems and model the memory access behaviors of DNN workloads. We identify the\n            <jats:italic>iterative, regularity<\/jats:italic>\n            , and\n            <jats:italic>extremalization<\/jats:italic>\n            properties of memory access patterns for DNN workloads. Based on these observations, we propose AppObMem, an application-oblivious memory scheduling system. AppObMem automatically traces the memory behaviors of DNN workloads and schedules the memory swapping to reduce the memory pressure of the device accelerators without the perception of high-level information of layer structures or computation graphs. Evaluations on a variety of DNN models show that, AppObMem obtains 40\u201360% memory savings with acceptable performance loss. 
AppObMem is also competitive with other open sourced SOTA schemes.\n          <\/jats:p>\n          <jats:p\/>","DOI":"10.1145\/3535355","type":"journal-article","created":{"date-parts":[[2022,5,9]],"date-time":"2022-05-09T12:33:14Z","timestamp":1652099594000},"page":"1-26","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":1,"title":["An Application-oblivious Memory Scheduling System for DNN Accelerators"],"prefix":"10.1145","volume":"19","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-2924-5189","authenticated-orcid":false,"given":"Jiansong","family":"Li","sequence":"first","affiliation":[{"name":"Huawei Galois Lab, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7835-113X","authenticated-orcid":false,"given":"Xueying","family":"Wang","sequence":"additional","affiliation":[{"name":"Institute of Computing Technology, Chinese Academy of Sciences and University of Chinese Academy of Sciences, Shijingshan District, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3156-7221","authenticated-orcid":false,"given":"Xiaobing","family":"Chen","sequence":"additional","affiliation":[{"name":"Institute of Computing Technology, Chinese Academy of Sciences and University of Chinese Academy of Sciences, Shijingshan District, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9738-261X","authenticated-orcid":false,"given":"Guangli","family":"Li","sequence":"additional","affiliation":[{"name":"Institute of Computing Technology, Chinese Academy of Sciences and University of Chinese Academy of Sciences, Shijingshan District, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3593-2249","authenticated-orcid":false,"given":"Xiao","family":"Dong","sequence":"additional","affiliation":[{"name":"NVIDIA Corporation, Shanghai, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4668-1852","authenticated-orcid":false,"given":"Peng","family":"Zhao","sequence":"additional","affiliation":[{"name":"Huawei 2012 Lab, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1497-5525","authenticated-orcid":false,"given":"Xianzhi","family":"Yu","sequence":"additional","affiliation":[{"name":"Huawei 2012 Lab, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1258-1351","authenticated-orcid":false,"given":"Yongxin","family":"Yang","sequence":"additional","affiliation":[{"name":"Institute of Computing Technology, Chinese Academy of Sciences and University of Chinese Academy of Sciences, Shijingshan District, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1620-2419","authenticated-orcid":false,"given":"Wei","family":"Cao","sequence":"additional","affiliation":[{"name":"Institute of Computing Technology, Chinese Academy of Sciences and University of Chinese Academy of Sciences, Shijingshan District, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5063-2864","authenticated-orcid":false,"given":"Lei","family":"Liu","sequence":"additional","affiliation":[{"name":"Institute of Computing Technology, Chinese Academy of Sciences and University of Chinese Academy of Sciences, Shijingshan District, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2909-7750","authenticated-orcid":false,"given":"Xiaobing","family":"Feng","sequence":"additional","affiliation":[{"name":"Institute of Computing Technology, Chinese Academy of Sciences and University of Chinese Academy of Sciences, Shijingshan District, Beijing, 
China"}]}],"member":"320","published-online":{"date-parts":[[2022,9,16]]},"reference":[{"key":"e_1_3_2_2_2","unstructured":"Apache Singa Team.2020. Singa: A Distributed Deep Learning Library. Retrieved September 20 2020 from http:\/\/singa.apache.org\/."},{"key":"e_1_3_2_3_2","unstructured":"Javier Artiles and Satoshi Sekine. 2008. Tagged and Cleaned Wikipedia (TC Wikipedia) and Its Ngram. Retrieved September 20 2020 from https:\/\/nlp.cs.nyu.edu\/wikipedia-data\/."},{"key":"e_1_3_2_4_2","doi-asserted-by":"crossref","unstructured":"Olivier Beaumont Lionel Eyraud-Dubois and Alena Shilova. 2019. Optimal GPU-CPU Offloading Strategies for Deep Neural Network Training. Retrieved from https:\/\/hal.inria.fr\/hal-02316266.","DOI":"10.1007\/978-3-030-57675-2_10"},{"key":"e_1_3_2_5_2","volume-title":"Machine Learning Algorithms: A Reference Guide to Popular Algorithms for Data Science and Machine Learning","author":"Bonaccorso Giuseppe","year":"2017","unstructured":"Giuseppe Bonaccorso. 2017. Machine Learning Algorithms: A Reference Guide to Popular Algorithms for Data Science and Machine Learning. Packt Publishing."},{"key":"e_1_3_2_6_2","unstructured":"Cambricon. 2020. Cambricon BANG C Developer Guide. Retrieved September 20 2020 from http:\/\/www.cambricon.com\/docs\/bangc\/developer_guide_html\/."},{"key":"e_1_3_2_7_2","article-title":"AdderNet: Do we really need multiplications in deep learning? In","author":"Chen Hanting","year":"2020","unstructured":"Hanting Chen, Yunhe Wang, Chunjing Xu, Boxin Shi, Chao Xu, Qi Tian, and Chang Xu. 2020. AdderNet: Do we really need multiplications in deep learning? In Proceedings of the IEEE\/CVF Computer Vision and Pattern Recognition Conference (CVPR\u201920).","journal-title":"Proceedings of the IEEE\/CVF Computer Vision and Pattern Recognition Conference (CVPR\u201920)"},{"key":"e_1_3_2_8_2","unstructured":"Tianqi Chen Mu Li Yutian Li Min Lin Naiyan Wang Minjie Wang Tianjun Xiao Bing Xu Chiyuan Zhang and Zheng Zhang. 2015. MXNet: A flexible and efficient machine learning library for heterogeneous distributed systems. arXiv:1512.01274. Retrieved from http:\/\/arxiv.org\/abs\/1512.01274."},{"key":"e_1_3_2_9_2","article-title":"Training deep nets with sublinear memory cost","volume":"1604","author":"Chen Tianqi","year":"2016","unstructured":"Tianqi Chen, Bing Xu, Chiyuan Zhang, and Carlos Guestrin. 2016. Training deep nets with sublinear memory cost. CoRR abs\/1604.06174 (2016). arXiv:1604.06174 http:\/\/arxiv.org\/abs\/1604.06174.","journal-title":"CoRR"},{"key":"e_1_3_2_10_2","doi-asserted-by":"publisher","DOI":"10.23919\/DATE.2018.8341972"},{"key":"e_1_3_2_11_2","doi-asserted-by":"publisher","DOI":"10.1109\/MICRO.2014.58"},{"key":"e_1_3_2_12_2","unstructured":"Yu Cheng Duo Wang Pan Zhou and Tao Zhang. 2017. A survey of model compression and acceleration for deep neural networks. Retrieved from http:\/\/dblp.uni-trier.de\/db\/journals\/corr\/corr1710.html#abs-1710-09282."},{"key":"e_1_3_2_13_2","unstructured":"Sharan Chetlur Cliff Woolley Philippe Vandermersch Jonathan Cohen John Tran Bryan Catanzaro and Evan Shelhamer. 2014. cuDNN: Efficient primitives for deep learning. Retrieved from http:\/\/dblp.uni-trier.de\/db\/journals\/corr\/corr1410.html#ChetlurWVCTCS14."},{"key":"e_1_3_2_14_2","first-page":"3123","volume-title":"Advances in Neural Information Processing Systems 28","author":"Courbariaux Matthieu","year":"2015","unstructured":"Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. 2015. 
BinaryConnect: Training deep neural networks with binary weights during propagations. In Advances in Neural Information Processing Systems 28, C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett (Eds.). Curran Associates, Inc., 3123\u20133131."},{"key":"e_1_3_2_15_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIT.1967.1053964"},{"key":"e_1_3_2_16_2","doi-asserted-by":"publisher","DOI":"10.1145\/2901318.2901323"},{"key":"e_1_3_2_17_2","first-page":"1223","volume-title":"Advances in Neural Information Processing Systems 25","author":"Dean Jeffrey","year":"2012","unstructured":"Jeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Marc'aurelio Ranzato, Andrew Senior, Paul Tucker, Ke Yang, Quoc V. Le, and Andrew Y. Ng. 2012. Large scale distributed deep networks. In Advances in Neural Information Processing Systems 25, F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger (Eds.). Curran Associates, Inc., 1223\u20131231."},{"key":"e_1_3_2_18_2","first-page":"2148","volume-title":"Advances in Neural Information Processing Systems 26","author":"Denil Misha","year":"2013","unstructured":"Misha Denil, Babak Shakibi, Laurent Dinh, Marc'Aurelio Ranzato, and Nando de Freitas. 2013. Predicting parameters in deep learning. In Advances in Neural Information Processing Systems 26, C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger (Eds.). Curran Associates, Inc., 2148\u20132156."},{"key":"e_1_3_2_19_2","unstructured":"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv:1810.04805. Retrieved from http:\/\/arxiv.org\/abs\/1810.04805."},{"key":"e_1_3_2_20_2","unstructured":"Amir Gholami, Zhewei Yao, Sehoon Kim, Michael W. Mahoney, and Kurt Keutzer. 2021. AI and Memory Wall. Retrieved from https:\/\/medium.com\/riselab\/ai-and-memory-wall-2cb4265cb0b8."},{"key":"e_1_3_2_21_2","first-page":"4125","volume-title":"Advances in Neural Information Processing Systems","author":"Gruslys Audrunas","year":"2016","unstructured":"Audrunas Gruslys, Remi Munos, Ivo Danihelka, Marc Lanctot, and Alex Graves. 2016. Memory-efficient backpropagation through time. In Advances in Neural Information Processing Systems, D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett (Eds.), Vol. 29. Curran Associates, Inc., 4125\u20134133."},{"key":"e_1_3_2_22_2","volume-title":"Proceedings of the 4th International Conference on Learning Representations (ICLR\u201916)","author":"Han Song","year":"2016","unstructured":"Song Han, Huizi Mao, and William J. Dally. 2016. Deep compression: Compressing deep neural network with pruning, trained quantization and Huffman coding. In Proceedings of the 4th International Conference on Learning Representations (ICLR\u201916), Yoshua Bengio and Yann LeCun (Eds.)."},{"key":"e_1_3_2_23_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.90"},{"key":"e_1_3_2_24_2","doi-asserted-by":"publisher","DOI":"10.1016\/0893-6080(89)90020-8"},{"key":"e_1_3_2_25_2","doi-asserted-by":"publisher","DOI":"10.1145\/3373376.3378530"},{"key":"e_1_3_2_26_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.243"},{"key":"e_1_3_2_27_2","unstructured":"Huawei. 2020. Atlas 300T Training Card (Model: 9000). Retrieved September 20 2020 from https:\/\/e.huawei.com\/en\/products\/cloud-computing-dc\/atlas\/atlas-300t-training-9000."},{"key":"e_1_3_2_28_2","unstructured":"Huawei. 2020. Huawei Ascend Programming Guide. 
Retrieved September 20 2020 from https:\/\/www.huaweicloud.com\/intl\/en-us\/ascend\/doc\/."},{"key":"e_1_3_2_29_2","doi-asserted-by":"publisher","DOI":"10.5555\/3122009.3242044"},{"key":"e_1_3_2_30_2","doi-asserted-by":"publisher","DOI":"10.1145\/3341301.3359630"},{"key":"e_1_3_2_31_2","doi-asserted-by":"publisher","DOI":"10.1145\/3140659.3080246"},{"key":"e_1_3_2_32_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v30i1.10362"},{"key":"e_1_3_2_33_2","unstructured":"Alex Krizhevsky. 2014. One weird trick for parallelizing convolutional neural networks. arXiv:1404.5997. Retrieved from http:\/\/arxiv.org\/abs\/1404.5997."},{"key":"e_1_3_2_34_2","first-page":"265","volume-title":"Proceedings of the 28th International Conference on International Conference on Machine Learning (ICML\u201911)","author":"Le Quoc V.","year":"2011","unstructured":"Quoc V. Le, Jiquan Ngiam, Adam Coates, Abhik Lahiri, Bobby Prochnow, and Andrew Y. Ng. 2011. On optimization methods for deep learning. In Proceedings of the 28th International Conference on International Conference on Machine Learning (ICML\u201911). Omnipress, Madison, WI, 265\u2013272."},{"key":"e_1_3_2_35_2","doi-asserted-by":"publisher","DOI":"10.1145\/3315573.3329984"},{"key":"e_1_3_2_36_2","doi-asserted-by":"publisher","DOI":"10.1109\/5.726791"},{"key":"e_1_3_2_37_2","doi-asserted-by":"publisher","DOI":"10.1145\/3472456.3472464"},{"key":"e_1_3_2_38_2","doi-asserted-by":"publisher","DOI":"10.1109\/TCAD.2020.3013050"},{"key":"e_1_3_2_39_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.sysarc.2022.102431"},{"key":"e_1_3_2_40_2","doi-asserted-by":"publisher","DOI":"10.1145\/3503222.3507752"},{"key":"e_1_3_2_41_2","first-page":"281","volume-title":"Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability","volume":"1","author":"MacQueen J. B.","year":"1967","unstructured":"J. B. MacQueen. 1967. Some methods for classification and analysis of multivariate observations. In Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability, L. M. Le Cam and J. Neyman (Eds.), Vol. 1. University of California Press, 281\u2013297."},{"key":"e_1_3_2_42_2","doi-asserted-by":"publisher","DOI":"10.1093\/oxfordjournals.oep.a041050"},{"key":"e_1_3_2_43_2","doi-asserted-by":"crossref","unstructured":"Deepak Mittal, Shweta Bhardwaj, Mitesh M. Khapra, and Balaraman Ravindran. 2018. Recovering from random pruning: On the plasticity of deep convolutional neural networks. In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV). 848\u2013857.","DOI":"10.1109\/WACV.2018.00098"},{"key":"e_1_3_2_44_2","doi-asserted-by":"publisher","DOI":"10.1145\/3373376.3378534"},{"key":"e_1_3_2_45_2","unstructured":"Nvidia. 2020. CNMeM Library: Simple Library to Help the Deep Learning Frameworks Manage CUDA Memory. Retrieved September 20 2020 from https:\/\/github.com\/NVIDIA\/cnmem."},{"key":"e_1_3_2_46_2","unstructured":"Nvidia. 2020. CUDA Runtime API: CUDA Toolkit Documentation. Retrieved September 20 2020 from https:\/\/docs.nvidia.com\/cuda\/cuda-runtime-api."},{"key":"e_1_3_2_47_2","unstructured":"NVIDIA. 2020. CUDA Samples. Retrieved September 20 2020 from https:\/\/docs.nvidia.com\/cuda\/cuda-samples\/index.html."},{"key":"e_1_3_2_48_2","unstructured":"NVIDIA. 2020. NVIDIA Ampere Architecture. Retrieved September 20 2020 from https:\/\/www.nvidia.com\/en-us\/data-center\/nvidia-ampere-gpu-architecture\/."},{"key":"e_1_3_2_49_2","unstructured":"OpenAI. 2020. AI and Compute. 
Retrieved September 20 2020 from https:\/\/openai.com\/blog\/ai-and-compute."},{"key":"e_1_3_2_50_2","unstructured":"PCI-SIG. 2020. 5.0 out of 5 Stars: PCI-SIG Member Companies Announce Support for the PCI Express 5.0 Specification. Retrieved September 20 2020 from https:\/\/pcisig.com\/50-out-5-stars-pci-sig%C2%AE-member-companies-announce-support-pci-express%C2%AE-50-specification."},{"key":"e_1_3_2_51_2","doi-asserted-by":"publisher","DOI":"10.1145\/3373376.3378505"},{"key":"e_1_3_2_52_2","unstructured":"Yury Pisarchyk and Juhyun Lee. 2020. Efficient Memory Management for Deep Neural Net Inference. arXiv:2001.03288 [cs.LG]. Retrieved from https:\/\/arxiv.org\/abs\/2001.03288."},{"key":"e_1_3_2_53_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-71273-4_44"},{"key":"e_1_3_2_54_2","volume-title":"Proceedings of the 4th International Conference on Learning Representations (ICLR\u201916)","author":"Radford Alec","year":"2016","unstructured":"Alec Radford, Luke Metz, and Soumith Chintala. 2016. Unsupervised representation learning with deep convolutional generative adversarial networks. In Proceedings of the 4th International Conference on Learning Representations (ICLR\u201916), Yoshua Bengio and Yann LeCun (Eds.)."},{"key":"e_1_3_2_55_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/D16-1264"},{"key":"e_1_3_2_56_2","doi-asserted-by":"publisher","DOI":"10.1109\/MICRO.2016.7783721"},{"key":"e_1_3_2_57_2","doi-asserted-by":"crossref","unstructured":"S. B. Shriram, Anshuj Garg, and Purushottam Kulkarni. 2019. Dynamic memory management for GPU-based training of deep neural networks. In IEEE International Parallel and Distributed Processing Symposium (IPDPS\u201919). 200\u2013209.","DOI":"10.1109\/IPDPS.2019.00030"},{"key":"e_1_3_2_58_2","doi-asserted-by":"publisher","DOI":"10.1109\/78.650093"},{"key":"e_1_3_2_59_2","doi-asserted-by":"crossref","unstructured":"Alex Sherstinsky. 2020. Fundamentals of recurrent neural network (RNN) and long short-term memory (LSTM) network. Physica D: Nonlinear Phenomena 404 (2020) 132306.","DOI":"10.1016\/j.physd.2019.132306"},{"key":"e_1_3_2_60_2","volume-title":"Proceedings of the International Conference on Learning Representations","author":"Simonyan Karen","year":"2015","unstructured":"Karen Simonyan and Andrew Zisserman. 2015. Very deep convolutional networks for large-scale image recognition. In Proceedings of the International Conference on Learning Representations."},{"key":"e_1_3_2_61_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v31i1.11231"},{"key":"e_1_3_2_62_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2015.7298594"},{"key":"e_1_3_2_63_2","doi-asserted-by":"publisher","DOI":"10.21437\/Interspeech.2020-1156"},{"key":"e_1_3_2_64_2","first-page":"63","volume-title":"Optimization","author":"Topa A.","year":"1998","unstructured":"A. Topa, L. N. Vicente, C. R. Paiva, and A. Barbosa. 1998. Application of the least squares method to the analysis of planar waveguides for integrated optics. In Optimization, Vol. 1. 
63\u201363."},{"key":"e_1_3_2_65_2","doi-asserted-by":"publisher","DOI":"10.5555\/3433701.3433726"},{"key":"e_1_3_2_66_2","doi-asserted-by":"publisher","DOI":"10.1145\/3178487.3178491"},{"key":"e_1_3_2_67_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-57675-2_14"},{"key":"e_1_3_2_68_2","doi-asserted-by":"publisher","DOI":"10.1109\/SWAT.1973.13"},{"key":"e_1_3_2_69_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPDS.2021.3064966"},{"key":"e_1_3_2_70_2","doi-asserted-by":"publisher","DOI":"10.1145\/3369583.3392684"},{"key":"e_1_3_2_71_2","unstructured":"Junzhe Zhang Sai-Ho Yeung Yao Shu Bingsheng He and Wei Wang. 2019. Efficient memory management for GPU-based deep learning systems. arXiv:1903.06631. Retrieved from http:\/\/arxiv.org\/abs\/1903.06631."},{"key":"e_1_3_2_72_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00907"}],"container-title":["ACM Transactions on Architecture and Code Optimization"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3535355","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3535355","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T17:49:47Z","timestamp":1750268987000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3535355"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,9,16]]},"references-count":71,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2022,12,31]]}},"alternative-id":["10.1145\/3535355"],"URL":"https:\/\/doi.org\/10.1145\/3535355","relation":{},"ISSN":["1544-3566","1544-3973"],"issn-type":[{"type":"print","value":"1544-3566"},{"type":"electronic","value":"1544-3973"}],"subject":[],"published":{"date-parts":[[2022,9,16]]},"assertion":[{"value":"2021-06-08","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2022-05-02","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2022-09-16","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}