{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,8]],"date-time":"2026-02-08T10:20:06Z","timestamp":1770546006008,"version":"3.49.0"},"reference-count":39,"publisher":"Association for Computing Machinery (ACM)","issue":"1","license":[{"start":{"date-parts":[[2020,12,30]],"date-time":"2020-12-30T00:00:00Z","timestamp":1609286400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/100000001","name":"NSF","doi-asserted-by":"publisher","award":["1533737"],"award-info":[{"award-number":["1533737"]}],"id":[{"id":"10.13039\/100000001","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Archit. Code Optim."],"published-print":{"date-parts":[[2021,3,31]]},"abstract":"<jats:p>State-of-the-art machine learning frameworks support a wide variety of design features to enable a flexible machine learning programming interface and to ease the programmability burden on machine learning developers. Identifying and using a performance-optimal setting in feature-rich frameworks, however, involves a non-trivial amount of performance profiling efforts and often relies on domain-specific knowledge. This article takes a deep dive into analyzing the performance impact of key design features in a machine learning framework and quantifies the role of parallelism. The observations and insights distill into a simple set of guidelines that one can use to achieve much higher training and inference speedup. 
Across a diverse set of real-world deep learning models, the evaluation results show that the proposed performance tuning guidelines outperform the Intel and TensorFlow recommended settings by 1.30\u00d7 and 1.38\u00d7, respectively.<\/jats:p>","DOI":"10.1145\/3431388","type":"journal-article","created":{"date-parts":[[2020,12,30]],"date-time":"2020-12-30T12:30:51Z","timestamp":1609331451000},"page":"1-23","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":17,"title":["Exploiting Parallelism Opportunities with Deep Learning Frameworks"],"prefix":"10.1145","volume":"18","author":[{"given":"Yu Emma","family":"Wang","sequence":"first","affiliation":[{"name":"Harvard University, Cambridge, MA"}]},{"given":"Carole-Jean","family":"Wu","sequence":"additional","affiliation":[{"name":"Facebook AI"}]},{"given":"Xiaodong","family":"Wang","sequence":"additional","affiliation":[{"name":"Facebook AI"}]},{"given":"Kim","family":"Hazelwood","sequence":"additional","affiliation":[{"name":"Facebook AI"}]},{"given":"David","family":"Brooks","sequence":"additional","affiliation":[{"name":"Harvard University, Cambridge, MA"}]}],"member":"320","published-online":{"date-parts":[[2020,12,30]]},"reference":[{"key":"e_1_2_1_1_1","volume-title":"Folly: Facebook Open-source Library.","year":"2019","unstructured":"Facebook. 2019. Folly: Facebook Open-source Library. Retrieved from https:\/\/github.com\/facebook\/fol"},{"key":"e_1_2_1_2_1","volume-title":"Proceedings of the USENIX Symposium on Operating Systems Design and Implementation (OSDI\u201916)","volume":"16","author":"Abadi Mart\u00edn","year":"2016","unstructured":"Mart\u00edn Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et\u00a0al. 2016. 
TensorFlow: A system for large-scale machine learning. In Proceedings of the USENIX Symposium on Operating Systems Design and Implementation (OSDI\u201916), Vol. 16. 265--283."},{"key":"e_1_2_1_3_1","volume-title":"Tips to improve performance for popular deep learning frameworks on CPUs. Intel Dev.Zone","author":"Anju P.","year":"2018","unstructured":"P. Anju. 2018. Tips to improve performance for popular deep learning frameworks on CPUs. Intel Dev.Zone (2018). https:\/\/software.intel.com\/content\/www\/us\/en\/develop\/articles\/tips-to-improve-performance-for-popular-deep-learning-frameworks-on-multi-core-cpus.html."},{"key":"e_1_2_1_4_1","unstructured":"Soheil Bahrampour Naveen Ramakrishnan Lukas Schott and Mohak Shah. 2015. Comparative study of Caffe Neon Theano and Torch for deep learning. arXiv preprint arXiv:1511.06435."},{"key":"e_1_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1137\/141000671"},{"key":"e_1_2_1_6_1","unstructured":"Ashraf Bhuiyan Mahmoud Abuzaina Niranjan Hasabnis Niroop Ammbashankar Faijul Amin Sheng Fu and Bhavani Subramanian. [n.d.]. Improving TensorFlow inference performance on Intel Xeon processors. Intel AI Blog ([n.d.]). intel.com. https:\/\/www.intel.com\/content\/www\/us\/en\/artificial-intelligence\/posts\/improving-tensorflow-inference-performance-on-intel-xeon-processors.html."},{"key":"e_1_2_1_7_1","unstructured":"Google AI Blog. 2019. Introducing GPipe an open source library for efficiently training large-scale neural network models. Retrieved from https:\/\/ai.googleblog.com\/2019\/03\/introducing-gpipe-open-source-library.html."},{"key":"e_1_2_1_8_1","volume-title":"MXNet: A flexible and efficient machine learning library for heterogeneous distributed systems. arXiv preprint arXiv:1512.01274","author":"Chen Tianqi","year":"2015","unstructured":"Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. 2015. MXNet: A flexible and efficient machine learning library for heterogeneous distributed systems. arXiv preprint arXiv:1512.01274 (2015)."},{"key":"e_1_2_1_9_1","volume-title":"Proceedings of the 13th USENIX Symposium on Operating Systems Design and Implementation (OSDI\u201918)","author":"Chen Tianqi","year":"2018","unstructured":"Tianqi Chen, Thierry Moreau, Ziheng Jiang, Lianmin Zheng, Eddie Yan, Haichen Shen, Meghan Cowan, Leyuan Wang, Yuwei Hu, Luis Ceze et\u00a0al. 2018. TVM: An automated end-to-end optimizing compiler for deep learning. In Proceedings of the 13th USENIX Symposium on Operating Systems Design and Implementation (OSDI\u201918). 578--594."},{"key":"e_1_2_1_10_1","unstructured":"Heng-Tze Cheng. 2016. Wide and deep learning: Better together with TensorFlow. Google AI Blog. Retrieved from https:\/\/ai.googleblog.com\/2016\/06\/wide-deep-learning-better-together-with.html."},{"key":"e_1_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.1145\/2959100.2959190"},{"key":"e_1_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2009.5206848"},{"key":"e_1_2_1_13_1","unstructured":"Eigen. 2019. Eigen thread pool. (2019). Retrieved from https:\/\/bitbucket.org\/eigen\/eigen\/src\/default\/unsupported\/Eigen\/CXX11\/src\/ThreadPool\/."},{"key":"e_1_2_1_14_1","volume-title":"Bandana: Using non-volatile memory for storing deep learning models. 
arXiv preprint arXiv:1811.05922","author":"Eisenman Assaf","year":"2018","unstructured":"Assaf Eisenman, Maxim Naumov, Darryl Gardner, Misha Smelyanskiy, Sergey Pupyrev, Kim Hazelwood, Asaf Cidon, and Sachin Katti. 2018. Bandana: Using non-volatile memory for storing deep learning models. arXiv preprint arXiv:1811.05922 (2018)."},{"key":"e_1_2_1_15_1","unstructured":"Google. 2019. TensorFlow Performance Guide. Retrieved from https:\/\/docs.w3cub.com\/tensorflow~guide\/performance\/performance_guide\/#general_best_practices."},{"key":"e_1_2_1_16_1","volume-title":"The architectural implications of Facebook\u2019s DNN-based personalized recommendation. arXiv preprint arXiv:1906.03109","author":"Gupta Udit","year":"2019","unstructured":"Udit Gupta, Xiaodong Wang, Maxim Naumov, Carole-Jean Wu, Brandon Reagen, David Brooks, Bradford Cottel, Kim Hazelwood, Bill Jia, Hsien-Hsin S. Lee, Andrey Malevich, Dheevatsa Mudigere, Mikhail Smelyanskiy, Liang Xiong, and Xuan Zhang. 2019. The architectural implications of Facebook\u2019s DNN-based personalized recommendation. arXiv preprint arXiv:1906.03109 (2019)."},{"key":"e_1_2_1_17_1","volume-title":"Auto-tuning TensorFlow threading model for CPU backend. arXiv preprint arXiv:1812.01665","author":"Hasabnis Niranjan","year":"2018","unstructured":"Niranjan Hasabnis. 2018. Auto-tuning TensorFlow threading model for CPU backend. arXiv preprint arXiv:1812.01665 (2018)."},{"key":"e_1_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.1109\/HPCA.2018.00059"},{"key":"e_1_2_1_19_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.90"},{"key":"e_1_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.1145\/3038912.3052569"},{"key":"e_1_2_1_21_1","volume-title":"Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 4700--4708","author":"Huang Gao","unstructured":"Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q. Weinberger. 2017. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 4700--4708."},{"key":"e_1_2_1_22_1","volume-title":"SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and &lt;0.5 MB model size. arXiv preprint arXiv:1602.07360","author":"Iandola Forrest N.","year":"2016","unstructured":"Forrest N. Iandola, Song Han, Matthew W. Moskewicz, Khalid Ashraf, William J. Dally, and Kurt Keutzer. 2016. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and &lt;0.5 MB model size. arXiv preprint arXiv:1602.07360 (2016)."},{"key":"e_1_2_1_23_1","volume-title":"Proceedings of the IEEE International Conference on Cluster Computing (CLUSTER\u201919)","author":"Jain Arpan","unstructured":"Arpan Jain, Ammar Ahmad Awan, Quentin Anthony, Hari Subramoni, and Dhableswar K. D. K. Panda. 2019. Performance characterization of DNN training using Tensorflow and PyTorch on modern clusters. In Proceedings of the IEEE International Conference on Cluster Computing (CLUSTER\u201919). IEEE, 1--11."},{"key":"e_1_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1145\/2647868.2654889"},{"key":"e_1_2_1_25_1","volume-title":"Deep Learning with Python","author":"Ketkar Nikhil","unstructured":"Nikhil Ketkar. 2017. Introduction to PyTorch. In Deep Learning with Python. Springer, 195--208."},{"key":"e_1_2_1_26_1","unstructured":"Primate Labs. 2019. GeekBench v4. Retrieved from https:\/\/www.geekbench.com\/."},{"key":"e_1_2_1_27_1","volume-title":"Introduction to Intel advanced vector extensions. Intel White Paper","author":"Lomont Chris","year":"2011","unstructured":"Chris Lomont. 2011. Introduction to Intel advanced vector extensions. Intel White Paper (2011), 1--21. https:\/\/software.intel.com\/content\/dam\/develop\/external\/us\/en\/documents\/intro-to-intel-avx-183287.pdf."},{"key":"e_1_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.1109\/MM.2020.2974843"},{"key":"e_1_2_1_29_1","unstructured":"Maxim Naumov Dheevatsa Mudigere Hao-Jun Michael Shi Jianyu Huang Narayanan Sundaraman Jongsoo Park Xiaodong Wang Udit Gupta Carole-Jean Wu Alisson G. Azzolini Dmytro Dzhulgakov Andrey Mallevich Ilia Cherniavskii Yinghai Lu Raghuraman Krishnamoorthi Ansha Yu Volodymyr Kondratenko Stephanie Pereira Xianjie Chen Wenlin Chen Vijay Rao Bill Jia Liang Xiong and Misha Smelyanskiy. 2019. Deep learning recommendation model for personalization and recommendation systems. arXiv preprint arXiv:1906.00091 (2019)."},{"key":"e_1_2_1_30_1","volume-title":"et\u00a0al","author":"Reddi Vijay Janapa","year":"2019","unstructured":"Vijay Janapa Reddi, Christine Cheng, David Kanter, Peter Mattson, Guenther Schmuelling, Carole-Jean Wu, Brian Anderson, Maximilien Breughe, Mark Charlebois, William Chou, et\u00a0al. 2019. MLPerf inference benchmark. arXiv preprint arXiv:1911.02549 (2019)."},{"key":"e_1_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.1109\/CCBD.2016.029"},{"key":"e_1_2_1_32_1","volume-title":"Proceedings of the 46th International Symposium on Computer Architecture. ACM, 513--526","author":"Sriraman Akshitha","unstructured":"Akshitha Sriraman, Abhishek Dhanotia, and Thomas F. Wenisch. 2019. SoftSKU: Optimizing server architectures for microservice diversity at scale. In Proceedings of the 46th International Symposium on Computer Architecture. ACM, 513--526."},{"key":"e_1_2_1_33_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2015.7298594"},{"key":"e_1_2_1_34_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.308"},{"key":"e_1_2_1_35_1","volume-title":"Tensor comprehensions: Framework-agnostic high-performance machine learning abstractions. arXiv preprint arXiv:1802.04730","author":"Vasilache Nicolas","year":"2018","unstructured":"Nicolas Vasilache, Oleksandr Zinenko, Theodoros Theodoridis, Priya Goyal, Zachary DeVito, William S. Moses, Sven Verdoolaege, Andrew Adams, and Albert Cohen. 2018. Tensor comprehensions: Framework-agnostic high-performance machine learning abstractions. arXiv preprint arXiv:1802.04730 (2018)."},{"key":"e_1_2_1_36_1","volume-title":"Proceedings of the International Conference on Advances in Neural Information Processing Systems. 
5998--6008","author":"Vaswani Ashish","year":"2017","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the International Conference on Advances in Neural Information Processing Systems. 5998--6008."},{"key":"e_1_2_1_37_1","doi-asserted-by":"publisher","DOI":"10.1109\/HPCA.2019.00048"},{"key":"e_1_2_1_38_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.634"},{"key":"e_1_2_1_39_1","doi-asserted-by":"publisher","DOI":"10.1109\/ISPASS.2014.6844459"}],"container-title":["ACM Transactions on Architecture and Code Optimization"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3431388","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3431388","content-type":"application\/pdf","content-version":"vor","intended-application":"syndication"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3431388","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T21:24:46Z","timestamp":1750195486000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3431388"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,12,30]]},"references-count":39,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2021,3,31]]}},"alternative-id":["10.1145\/3431388"],"URL":"https:\/\/doi.org\/10.1145\/3431388","relation":{},"ISSN":["1544-3566","1544-3973"],"issn-type":[{"value":"1544-3566","type"
:"print"},{"value":"1544-3973","type":"electronic"}],"subject":[],"published":{"date-parts":[[2020,12,30]]},"assertion":[{"value":"2020-05-01","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2020-10-01","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2020-12-30","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}