{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,11,17]],"date-time":"2025-11-17T03:01:48Z","timestamp":1763348508945,"version":"3.41.0"},"reference-count":50,"publisher":"Association for Computing Machinery (ACM)","issue":"3","license":[{"start":{"date-parts":[[2023,4,10]],"date-time":"2023-04-10T00:00:00Z","timestamp":1681084800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["61972162"],"award-info":[{"award-number":["61972162"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"name":"Guangdong Natural Science Funds for Distinguished Young Scholar","award":["2023B1515020097"],"award-info":[{"award-number":["2023B1515020097"]}]},{"name":"Guangdong International Science and Technology Cooperation Project","award":["2021A0505030009"],"award-info":[{"award-number":["2021A0505030009"]}]},{"DOI":"10.13039\/501100003453","name":"Guangdong Natural Science Foundation","doi-asserted-by":"crossref","award":["2021A1515012625"],"award-info":[{"award-number":["2021A1515012625"]}],"id":[{"id":"10.13039\/501100003453","id-type":"DOI","asserted-by":"crossref"}]},{"name":"Guangzhou Basic and Applied Research Project","award":["202102021074"],"award-info":[{"award-number":["202102021074"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Graph."],"published-print":{"date-parts":[[2023,6,30]]},"abstract":"<jats:p>\n            Anime is an abstract art form that is substantially different from the human portrait, leading to a challenging misaligned image translation problem that is beyond the capability of existing methods. This can be boiled down to a highly ambiguous unconstrained translation between two domains. 
To this end, we design a new anime translation framework by deriving the prior knowledge of a pre-trained StyleGAN model. We introduce disentangled encoders to separately embed structure and appearance information into the same latent code, governed by four tailored losses. Moreover, we develop a FaceBank aggregation method that leverages the generated data of the StyleGAN, anchoring the prediction to produce in-domain animes. To empower our model and promote research on anime translation, we propose the first anime portrait parsing dataset,\n            <jats:italic>Danbooru-Parsing<\/jats:italic>\n            , containing 4,921 densely labeled images across 17 classes. This dataset connects the face semantics with appearances, enabling our new constrained translation setting. We further show the editability of our results, and extend our method to manga images by generating the first manga parsing pseudo data. Extensive experiments demonstrate the value of our new dataset and method, resulting in the first feasible solution to anime translation.\n          <\/jats:p>","DOI":"10.1145\/3585002","type":"journal-article","created":{"date-parts":[[2023,2,21]],"date-time":"2023-02-21T11:26:49Z","timestamp":1676978809000},"page":"1-14","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":9,"title":["Parsing-Conditioned Anime Translation: A New Dataset and Method"],"prefix":"10.1145","volume":"42","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-7077-7796","authenticated-orcid":false,"given":"Zhansheng","family":"Li","sequence":"first","affiliation":[{"name":"South China University of Technology, China and Singapore Management University, Singapore"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3383-4349","authenticated-orcid":false,"given":"Yangyang","family":"Xu","sequence":"additional","affiliation":[{"name":"The University of Hong Kong, China and South China University of Technology, Guangzhou, 
China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4007-2776","authenticated-orcid":false,"given":"Nanxuan","family":"Zhao","sequence":"additional","affiliation":[{"name":"Adobe Research, San Jose, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5708-0959","authenticated-orcid":false,"given":"Yang","family":"Zhou","sequence":"additional","affiliation":[{"name":"South China University of Technology, China and Singapore Management University, Singapore"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5953-2771","authenticated-orcid":false,"given":"Yongtuo","family":"Liu","sequence":"additional","affiliation":[{"name":"University of Amsterdam, Netherlands and South China University of Technology, Guangzhou, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8865-7896","authenticated-orcid":false,"given":"Dahua","family":"Lin","sequence":"additional","affiliation":[{"name":"The Chinese University of Hong Kong, Hong Kong, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3802-4644","authenticated-orcid":false,"given":"Shengfeng","family":"He","sequence":"additional","affiliation":[{"name":"Singapore Management University, Singapore"}]}],"member":"320","published-online":{"date-parts":[[2023,4,10]]},"reference":[{"key":"e_1_3_3_2_1","first-page":"4432","volume-title":"ICCV","author":"Abdal Rameen","year":"2019","unstructured":"Rameen Abdal, Yipeng Qin, and Peter Wonka. 2019. Image2StyleGAN: How to embed images into the StyleGAN latent space?. In ICCV. 4432\u20134441."},{"key":"e_1_3_3_3_1","first-page":"6711","volume-title":"ICCV","author":"Alaluf Yuval","year":"2021","unstructured":"Yuval Alaluf, Or Patashnik, and Daniel Cohen-Or. 2021. ReStyle: A residual-based StyleGAN encoder via iterative refinement. In ICCV. 6711\u20136720."},{"key":"e_1_3_3_4_1","article-title":"Danbooru2020: A Large-Scale Crowdsourced and Tagged Anime Illustration Dataset","year":"2021","unstructured":"Anonymous, Danbooru community, and Gwern Branwen. 2021. 
Danbooru2020: A Large-Scale Crowdsourced and Tagged Anime Illustration Dataset. (January 2021). https:\/\/www.gwern.net\/Danbooru2020.","journal-title":"https:\/\/www.gwern.net\/Danbooru2020"},{"key":"e_1_3_3_5_1","volume-title":"ICLR","author":"Brock Andrew","year":"2019","unstructured":"Andrew Brock, Jeff Donahue, and Karen Simonyan. 2019. Large scale GAN training for high fidelity natural image synthesis. In ICLR."},{"issue":"6","key":"e_1_3_3_6_1","first-page":"1","article-title":"CariGANs: Unpaired photo-to-caricature translation","volume":"37","author":"Cao Kaidi","year":"2018","unstructured":"Kaidi Cao, Jing Liao, and Lu Yuan. 2018. CariGANs: Unpaired photo-to-caricature translation. ACM TOG 37, 6 (2018), 1\u201314.","journal-title":"ACM TOG"},{"key":"e_1_3_3_7_1","first-page":"242","volume-title":"ISICA","author":"Chen Jie","year":"2019","unstructured":"Jie Chen, Gang Liu, and Xin Chen. 2019. AnimeGAN: A novel lightweight GAN for photo animation. In ISICA. 242\u2013256."},{"issue":"4","key":"e_1_3_3_8_1","doi-asserted-by":"crossref","first-page":"834","DOI":"10.1109\/TPAMI.2017.2699184","article-title":"DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs","volume":"40","author":"Chen Liang-Chieh","year":"2017","unstructured":"Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille. 2017. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE TPAMI 40, 4 (2017), 834\u2013848.","journal-title":"IEEE TPAMI"},{"key":"e_1_3_3_9_1","first-page":"801","volume-title":"ECCV","author":"Chen Liang-Chieh","year":"2018","unstructured":"Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, and Hartwig Adam. 2018. Encoder-decoder with atrous separable convolution for semantic image segmentation. In ECCV. 
801\u2013818."},{"issue":"4","key":"e_1_3_3_10_1","article-title":"DeepFaceDrawing: Deep generation of face images from sketches","volume":"39","author":"Chen Shu-Yu","year":"2020","unstructured":"Shu-Yu Chen, Wanchao Su, Lin Gao, Shihong Xia, and Hongbo Fu. 2020. DeepFaceDrawing: Deep generation of face images from sketches. ACM TOG 39, 4 (2020), 72\u20131.","journal-title":"ACM TOG"},{"key":"e_1_3_3_11_1","first-page":"3146","volume-title":"CVPR","author":"Fu Jun","year":"2019","unstructured":"Jun Fu, Jing Liu, Haijie Tian, Yong Li, Yongjun Bao, Zhiwei Fang, and Hanqing Lu. 2019. Dual attention network for scene segmentation. In CVPR. 3146\u20133154."},{"key":"e_1_3_3_12_1","article-title":"StyleGAN-NADA: Clip-guided domain adaptation of image generators","author":"Gal Rinon","year":"2021","unstructured":"Rinon Gal, Or Patashnik, Haggai Maron, Gal Chechik, and Daniel Cohen-Or. 2021. StyleGAN-NADA: Clip-guided domain adaptation of image generators. arXiv preprint arXiv:2108.00946 (2021).","journal-title":"arXiv preprint arXiv:2108.00946"},{"key":"e_1_3_3_13_1","unstructured":"Rafael C. Gonzalez Richard E. Woods and Barry R. Masters. 2009. Digital image processing. (2009)."},{"key":"e_1_3_3_14_1","first-page":"6629","volume-title":"NeurIPS","author":"Heusel Martin","year":"2017","unstructured":"Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. 2017. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In NeurIPS. 6629\u20136640."},{"key":"e_1_3_3_15_1","article-title":"Unsupervised image-to-image translation via pre-trained StyleGAN2 network","author":"Huang Jialu","year":"2021","unstructured":"Jialu Huang, Sam Kwong, and Jing Liao. 2021. Unsupervised image-to-image translation via pre-trained StyleGAN2 network. 
IEEE TMM (2021).","journal-title":"IEEE TMM"},{"key":"e_1_3_3_16_1","first-page":"172","volume-title":"ECCV","author":"Huang Xun","year":"2018","unstructured":"Xun Huang, Ming-Yu Liu, Serge Belongie, and Jan Kautz. 2018. Multimodal unsupervised image-to-image translation. In ECCV. 172\u2013189."},{"key":"e_1_3_3_17_1","first-page":"1125","volume-title":"CVPR","author":"Isola Phillip","year":"2017","unstructured":"Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros. 2017. Image-to-image translation with conditional adversarial networks. In CVPR. 1125\u20131134."},{"key":"e_1_3_3_18_1","doi-asserted-by":"publisher","DOI":"10.1145\/3450626.3459860"},{"key":"e_1_3_3_19_1","volume-title":"ICLR","author":"Karras Tero","year":"2018","unstructured":"Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. 2018. Progressive growing of GANs for improved quality, stability, and variation. In ICLR."},{"key":"e_1_3_3_20_1","first-page":"4401","volume-title":"CVPR","author":"Karras Tero","year":"2019","unstructured":"Tero Karras, Samuli Laine, and Timo Aila. 2019. A style-based generator architecture for generative adversarial networks. In CVPR. 4401\u20134410."},{"key":"e_1_3_3_21_1","first-page":"8110","volume-title":"CVPR","author":"Karras Tero","year":"2020","unstructured":"Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. 2020. Analyzing and improving the image quality of StyleGAN. In CVPR. 8110\u20138119."},{"key":"e_1_3_3_22_1","volume-title":"ICLR","author":"Kim Junho","year":"2020","unstructured":"Junho Kim, Minjae Kim, Hyeonwoo Kang, and Kwang Hee Lee. 2020. U-GAT-IT: Unsupervised generative attentional networks with adaptive layer-instance normalization for image-to-image translation. In ICLR."},{"key":"e_1_3_3_23_1","volume-title":"ICLR","author":"Kingma Diederik P.","year":"2015","unstructured":"Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. 
In ICLR."},{"key":"e_1_3_3_24_1","first-page":"5549","volume-title":"CVPR","author":"Lee Cheng-Han","year":"2020","unstructured":"Cheng-Han Lee, Ziwei Liu, Lingyun Wu, and Ping Luo. 2020. MaskGAN: Towards diverse and interactive facial image manipulation. In CVPR. 5549\u20135558."},{"key":"e_1_3_3_25_1","first-page":"35","volume-title":"ECCV","author":"Lee Hsin-Ying","year":"2018","unstructured":"Hsin-Ying Lee, Hung-Yu Tseng, Jia-Bin Huang, Maneesh Kumar Singh, and Ming-Hsuan Yang. 2018. Diverse image-to-image translation via disentangled representations. In ECCV. 35\u201351."},{"issue":"10","key":"e_1_3_3_26_1","first-page":"2402","article-title":"DRIT++: Diverse image-to-image translation via disentangled representations","volume":"128","author":"Lee Hsin-Ying","year":"2020","unstructured":"Hsin-Ying Lee, Hung-Yu Tseng, Qi Mao, Jia-Bin Huang, Yu-Ding Lu, Maneesh Singh, and Ming-Hsuan Yang. 2020. DRIT++: Diverse image-to-image translation via disentangled representations. IJCV 128, 10 (2020), 2402\u20132417.","journal-title":"IJCV"},{"key":"e_1_3_3_27_1","article-title":"AniGAN: Style-guided generative adversarial networks for unsupervised anime face generation","author":"Li Bing","year":"2021","unstructured":"Bing Li, Yuanlue Zhu, Yitong Wang, Chia-Wen Lin, Bernard Ghanem, and Linlin Shen. 2021. AniGAN: Style-guided generative adversarial networks for unsupervised anime face generation. IEEE TMM (2021).","journal-title":"IEEE TMM"},{"key":"e_1_3_3_28_1","first-page":"645","volume-title":"ACM MM","author":"Li Tingting","year":"2018","unstructured":"Tingting Li, Ruihe Qian, Chao Dong, Si Liu, Qiong Yan, Wenwu Zhu, and Liang Lin. 2018. BeautyGAN: Instance-level facial makeup transfer with deep generative adversarial network. In ACM MM. 645\u2013653."},{"key":"e_1_3_3_29_1","first-page":"700","volume-title":"NeurIPS","author":"Liu Ming-Yu","year":"2017","unstructured":"Ming-Yu Liu, Thomas Breuel, and Jan Kautz. 2017. Unsupervised image-to-image translation networks. 
In NeurIPS. 700\u2013708."},{"key":"e_1_3_3_30_1","first-page":"10551","volume-title":"ICCV","author":"Liu Ming-Yu","year":"2019","unstructured":"Ming-Yu Liu, Xun Huang, Arun Mallya, Tero Karras, Timo Aila, Jaakko Lehtinen, and Jan Kautz. 2019. Few-shot unsupervised image-to-image translation. In ICCV. 10551\u201310560."},{"key":"e_1_3_3_31_1","first-page":"3431","volume-title":"CVPR","author":"Long Jonathan","year":"2015","unstructured":"Jonathan Long, Evan Shelhamer, and Trevor Darrell. 2015. Fully convolutional networks for semantic segmentation. In CVPR. 3431\u20133440."},{"issue":"9","key":"e_1_3_3_32_1","doi-asserted-by":"crossref","first-page":"3135","DOI":"10.3390\/app10093135","article-title":"EHANet: An effective hierarchical aggregation network for face parsing","volume":"10","author":"Luo Ling","year":"2020","unstructured":"Ling Luo, Dingyu Xue, and Xinglong Feng. 2020. EHANet: An effective hierarchical aggregation network for face parsing. Applied Sciences 10, 9 (2020), 3135.","journal-title":"Applied Sciences"},{"key":"e_1_3_3_33_1","article-title":"lbpcascade animeface","year":"2014","unstructured":"Nagadomi and Youkaichao. 2014. lbpcascade animeface. (2014). https:\/\/github.com\/nagadomi\/lbpcascade_animeface\/.","journal-title":"https:\/\/github.com\/nagadomi\/lbpcascade_animeface\/"},{"key":"e_1_3_3_34_1","first-page":"7860","volume-title":"CVPR","author":"Nizan Ori","year":"2020","unstructured":"Ori Nizan and Ayellet Tal. 2020. Breaking the cycle-colleagues are all you need. In CVPR. 7860\u20137869."},{"key":"e_1_3_3_35_1","first-page":"7198","article-title":"Swapping autoencoder for deep image manipulation","volume":"33","author":"Park Taesung","year":"2020","unstructured":"Taesung Park, Jun-Yan Zhu, Oliver Wang, Jingwan Lu, Eli Shechtman, Alexei Efros, and Richard Zhang. 2020. Swapping autoencoder for deep image manipulation. 
NeurIPS 33 (2020), 7198\u20137211.","journal-title":"NeurIPS"},{"key":"e_1_3_3_36_1","volume-title":"NeurIPS Workshop","author":"Pinkney Justin N. M.","year":"2020","unstructured":"Justin N. M. Pinkney and Doron Adler. 2020. Resolution dependent GAN interpolation for controllable image synthesis between domains. In NeurIPS Workshop."},{"key":"e_1_3_3_37_1","first-page":"8748","volume-title":"ICML","author":"Radford Alec","year":"2021","unstructured":"Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et\u00a0al. 2021. Learning transferable visual models from natural language supervision. In ICML. 8748\u20138763."},{"key":"e_1_3_3_38_1","first-page":"2287","volume-title":"CVPR","author":"Richardson Elad","year":"2021","unstructured":"Elad Richardson, Yuval Alaluf, Or Patashnik, Yotam Nitzan, Yaniv Azar, Stav Shapiro, and Daniel Cohen-Or. 2021. Encoding in style: A StyleGAN encoder for image-to-image translation. In CVPR. 2287\u20132296."},{"issue":"1","key":"e_1_3_3_39_1","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3544777","article-title":"Pivotal tuning for latent-based editing of real images","volume":"42","author":"Roich Daniel","year":"2022","unstructured":"Daniel Roich, Ron Mokady, Amit H. Bermano, and Daniel Cohen-Or. 2022. Pivotal tuning for latent-based editing of real images. ACM TOG 42, 1 (2022), 1\u201313.","journal-title":"ACM TOG"},{"key":"e_1_3_3_40_1","first-page":"10762","volume-title":"CVPR","author":"Shi Yichun","year":"2019","unstructured":"Yichun Shi, Debayan Deb, and Anil K. Jain. 2019. WarpGAN: Automatic caricature generation. In CVPR. 10762\u201310771."},{"key":"e_1_3_3_41_1","volume-title":"CVPR","author":"Siyao Li","year":"2021","unstructured":"Li Siyao, Shiyu Zhao, Weijiang Yu, Wenxiu Sun, Dimitris Metaxas, Chen Change Loy, and Ziwei Liu. 2021. Deep animation video interpolation in the wild. 
In CVPR."},{"key":"e_1_3_3_42_1","doi-asserted-by":"publisher","DOI":"10.1145\/3450626.3459771"},{"issue":"4","key":"e_1_3_3_43_1","article-title":"MichiGAN: Multi-input-conditioned hair image generation for portrait editing","volume":"39","author":"Tan Zhentao","year":"2020","unstructured":"Zhentao Tan, Menglei Chai, Dongdong Chen, Jing Liao, Qi Chu, Lu Yuan, Sergey Tulyakov, and Nenghai Yu. 2020. MichiGAN: Multi-input-conditioned hair image generation for portrait editing. ACM TOG 39, 4 (2020), 95\u20131.","journal-title":"ACM TOG"},{"key":"e_1_3_3_44_1","doi-asserted-by":"publisher","DOI":"10.1145\/3450626.3459838"},{"key":"e_1_3_3_45_1","first-page":"8798","volume-title":"CVPR","author":"Wang Ting-Chun","year":"2018","unstructured":"Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. 2018. High-resolution image synthesis and semantic manipulation with conditional GANs. In CVPR. 8798\u20138807."},{"issue":"11","key":"e_1_3_3_46_1","first-page":"1","article-title":"iHairRecolorer: Deep image-to-video hair color transfer","volume":"64","author":"Wu Keyu","year":"2021","unstructured":"Keyu Wu, Lingchen Yang, Hongbo Fu, and Youyi Zheng. 2021. iHairRecolorer: Deep image-to-video hair color transfer. Science China Information Sciences 64, 11 (2021), 1\u201315.","journal-title":"Science China Information Sciences"},{"key":"e_1_3_3_47_1","first-page":"325","volume-title":"ECCV","author":"Yu Changqian","year":"2018","unstructured":"Changqian Yu, Jingbo Wang, Chao Peng, Changxin Gao, Gang Yu, and Nong Sang. 2018. BiSeNet: Bilateral segmentation network for real-time semantic segmentation. In ECCV. 325\u2013341."},{"key":"e_1_3_3_48_1","first-page":"586","volume-title":"CVPR","author":"Zhang Richard","year":"2018","unstructured":"Richard Zhang, Phillip Isola, Alexei A. Efros, Eli Shechtman, and Oliver Wang. 2018. The unreasonable effectiveness of deep features as a perceptual metric. In CVPR. 
586\u2013595."},{"key":"e_1_3_3_49_1","first-page":"800","volume-title":"ECCV","author":"Zhao Yihao","year":"2020","unstructured":"Yihao Zhao, Ruihai Wu, and Hao Dong. 2020. Unpaired image-to-image translation using adversarial consistency loss. In ECCV. 800\u2013815."},{"key":"e_1_3_3_50_1","first-page":"633","volume-title":"CVPR","author":"Zhou Bolei","year":"2017","unstructured":"Bolei Zhou, Hang Zhao, Xavier Puig, Sanja Fidler, Adela Barriuso, and Antonio Torralba. 2017. Scene parsing through ADE20K dataset. In CVPR. 633\u2013641."},{"key":"e_1_3_3_51_1","first-page":"2223","volume-title":"ICCV","author":"Zhu Jun-Yan","year":"2017","unstructured":"Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. 2017. Unpaired image-to-image translation using cycle-consistent adversarial networks. In ICCV. 2223\u20132232."}],"container-title":["ACM Transactions on Graphics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3585002","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3585002","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T16:37:07Z","timestamp":1750178227000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3585002"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,4,10]]},"references-count":50,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2023,6,30]]}},"alternative-id":["10.1145\/3585002"],"URL":"https:\/\/doi.org\/10.1145\/3585002","relation":{},"ISSN":["0730-0301","1557-7368"],"issn-type":[{"type":"print","value":"0730-0301"},{"type":"electronic","value":"1557-7368"}],"subject":[],"published":{"date-parts":[[2023,4,10]]},"assertion":[{"value":"2021-09-19","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Pub
lication History"}},{"value":"2023-02-13","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-04-10","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}