{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,28]],"date-time":"2026-02-28T16:13:45Z","timestamp":1772295225607,"version":"3.50.1"},"reference-count":68,"publisher":"Association for Computing Machinery (ACM)","issue":"2s","license":[{"start":{"date-parts":[[2023,2,17]],"date-time":"2023-02-17T00:00:00Z","timestamp":1676592000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["62101326, 61831015, U1908210, 61927809, and 61771305"],"award-info":[{"award-number":["62101326, 61831015, U1908210, 61927809, and 61771305"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/501100012166","name":"National Key R&D Program of China","doi-asserted-by":"crossref","award":["2021YFF0900503, 2019YFB1405900, and 2019YFB1405902"],"award-info":[{"award-number":["2021YFF0900503, 2019YFB1405900, and 2019YFB1405902"]}],"id":[{"id":"10.13039\/501100012166","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/501100002858","name":"China Postdoctoral Science Foundation","doi-asserted-by":"crossref","award":["2022M712090"],"award-info":[{"award-number":["2022M712090"]}],"id":[{"id":"10.13039\/501100002858","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Multimedia Comput. Commun. Appl."],"published-print":{"date-parts":[[2023,6,30]]},"abstract":"<jats:p>Augmented reality (AR) overlays digital content onto reality. In an AR system, correct and precise estimations of user visual fixations and head movements can enhance the quality of experience by allocating more computational resources for analyzing, rendering, and 3D registration on the areas of interest. 
However, little research exists on understanding how users visually explore scenes in an AR system or on modeling visual attention in AR. To bridge the gap between saliency prediction on real-world scenes and on scenes augmented by virtual information, we construct the ARVR saliency dataset. Virtual reality (VR) is employed to simulate the real world. Object recognition and tracking annotations are blended into omnidirectional videos as augmented content. Saliency annotations of head and eye movements for both the original and augmented videos are collected and together constitute the ARVR dataset. We also design a model for saliency prediction in AR. Local block images are extracted to simulate the viewport and offset the projection distortion. Conspicuous visual cues in the local block images are extracted to form the spatial features. Optical flow is estimated as an important temporal feature. We also consider the interplay between virtual information and reality: the composition of the augmentation information is distinguished, and the joint effects of adversarial augmentation and complementary augmentation are estimated. A Markov chain is constructed with block images as graph nodes. Edge weights are determined by considering both the characteristics of viewing behaviors and visual saliency mechanisms. The importance ranking of the block images is estimated from the equilibrium state of the Markov chain. 
Extensive experiments are conducted to demonstrate the effectiveness of the proposed method.<\/jats:p>","DOI":"10.1145\/3565024","type":"journal-article","created":{"date-parts":[[2022,9,29]],"date-time":"2022-09-29T11:48:36Z","timestamp":1664452116000},"page":"1-24","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":6,"title":["Toward Visual Behavior and Attention Understanding for Augmented 360 Degree Videos"],"prefix":"10.1145","volume":"19","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-3069-060X","authenticated-orcid":false,"given":"Yucheng","family":"Zhu","sequence":"first","affiliation":[{"name":"Shanghai Jiao Tong University, Shanghai, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5693-0416","authenticated-orcid":false,"given":"Xiongkuo","family":"Min","sequence":"additional","affiliation":[{"name":"Shanghai Jiao Tong University, Shanghai, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0329-6321","authenticated-orcid":false,"given":"Dandan","family":"Zhu","sequence":"additional","affiliation":[{"name":"Shanghai Jiao Tong University, Shanghai, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8165-9322","authenticated-orcid":false,"given":"Guangtao","family":"Zhai","sequence":"additional","affiliation":[{"name":"Shanghai Jiao Tong University, Shanghai, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4029-3322","authenticated-orcid":false,"given":"Xiaokang","family":"Yang","sequence":"additional","affiliation":[{"name":"Shanghai Jiao Tong University, Shanghai, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8799-1182","authenticated-orcid":false,"given":"Wenjun","family":"Zhang","sequence":"additional","affiliation":[{"name":"Shanghai Jiao Tong University, Shanghai, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5540-3235","authenticated-orcid":false,"given":"Ke","family":"Gu","sequence":"additional","affiliation":[{"name":"Beijing University of Technology, Beijing, 
China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6015-2618","authenticated-orcid":false,"given":"Jiantao","family":"Zhou","sequence":"additional","affiliation":[{"name":"University of Macau, Macau, China"}]}],"member":"320","published-online":{"date-parts":[[2023,2,17]]},"reference":[{"key":"e_1_3_1_2_2","doi-asserted-by":"publisher","DOI":"10.1145\/159544.159581"},{"key":"e_1_3_1_3_2","doi-asserted-by":"publisher","DOI":"10.20870\/IJVR.2010.9.2.2767"},{"key":"e_1_3_1_4_2","doi-asserted-by":"publisher","DOI":"10.5555\/2051760"},{"key":"e_1_3_1_5_2","doi-asserted-by":"publisher","DOI":"10.1145\/514236.514265"},{"key":"e_1_3_1_6_2","first-page":"3","volume-title":"Proceedings of the IEEE International Symposium on Mixed and Augmented Reality","author":"Kruijff E.","year":"2010","unstructured":"E. Kruijff, J. E. Swan, and S. Feiner. 2010. Perceptual issues in augmented reality revisited. In Proceedings of the IEEE International Symposium on Mixed and Augmented Reality. 3\u201312."},{"key":"e_1_3_1_7_2","unstructured":"L. Itti and A. Borji. 2015. Computational models: Bottom-up and top-down aspects. Retrieved from https:\/\/arXiv:cs.CV\/1510.07748."},{"issue":"3","key":"e_1_3_1_8_2","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3337066","article-title":"Visual attention analysis and prediction on human faces for children with autism spectrum disorder","volume":"15","author":"Duan Huiyu","year":"2019","unstructured":"Huiyu Duan, Xiongkuo Min, Yi Fang, Lei Fan, Xiaokang Yang, and Guangtao Zhai. 2019. Visual attention analysis and prediction on human faces for children with autism spectrum disorder. ACM Trans. Multimedia Comput. Commun. Appl. 15, 3s (2019), 1\u201323.","journal-title":"ACM Trans. Multimedia Comput. Commun. 
Appl."},{"key":"e_1_3_1_9_2","doi-asserted-by":"publisher","DOI":"10.1109\/TCYB.2017.2690452"},{"key":"e_1_3_1_10_2","first-page":"1","volume-title":"Proceedings of the International Conference on Quality of Multimedia Experience","author":"Rai Yashas","year":"2017","unstructured":"Yashas Rai, Patrick Le Callet, and Philippe Guillotel. 2017. Which saliency weighting for omni directional image quality assessment? In Proceedings of the International Conference on Quality of Multimedia Experience. IEEE, 1\u20136."},{"key":"e_1_3_1_11_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.image.2018.03.013"},{"key":"e_1_3_1_12_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.image.2018.03.008"},{"key":"e_1_3_1_13_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.image.2018.03.007"},{"key":"e_1_3_1_14_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.image.2018.03.006"},{"key":"e_1_3_1_15_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.image.2018.05.010"},{"key":"e_1_3_1_16_2","first-page":"1420","volume-title":"Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition","author":"Cheng Hsien-Tzu","year":"2018","unstructured":"Hsien-Tzu Cheng, Chun-Hung Chao, Jin-Dong Dong, Hao-Kai Wen, Tyng-Luh Liu, and Min Sun. 2018. Cube padding for weakly supervised saliency prediction in 360 videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 1420\u20131429."},{"key":"e_1_3_1_17_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00559"},{"issue":"1","key":"e_1_3_1_18_2","doi-asserted-by":"crossref","first-page":"99","DOI":"10.1146\/annurev-neuro-072116-031526","article-title":"Toward a rational and mechanistic account of mental effort","volume":"40","author":"Shenhav Amitai","year":"2017","unstructured":"Amitai Shenhav, Sebastian Musslick, Falk Lieder, Wouter Kool, Thomas L. Griffiths, Jonathan D. Cohen, Matthew M. Botvinick, et\u00a0al. 2017. Toward a rational and mechanistic account of mental effort. Annu. 
Rev. Neurosci. 40, 1 (2017), 99\u2013124.","journal-title":"Annu. Rev. Neurosci."},{"key":"e_1_3_1_19_2","doi-asserted-by":"publisher","DOI":"10.1109\/TCYB.2018.2889376"},{"key":"e_1_3_1_20_2","first-page":"545","volume-title":"Advances in Neural Information Processing Systems","author":"Harel Jonathan","year":"2007","unstructured":"Jonathan Harel, Christof Koch, and Pietro Perona. 2007. Graph-based visual saliency. In Advances in Neural Information Processing Systems. MIT Press, 545\u2013552."},{"key":"e_1_3_1_21_2","unstructured":"Dataset. 2017. Large-scale scene understanding (LSUN) database. Retrieved from http:\/\/salicon.net\/challenge-2017\/."},{"key":"e_1_3_1_22_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2015.316"},{"key":"e_1_3_1_23_2","first-page":"1","volume-title":"Proceedings of the 9th International Conference on Quality of Multimedia Experience","author":"Abreu Ana De","year":"2017","unstructured":"Ana De Abreu, Cagri Ozcinar, and Aljosa Smolic. 2017. Look around you: Saliency maps for omnidirectional images in VR applications. In Proceedings of the 9th International Conference on Quality of Multimedia Experience. IEEE, 1\u20136."},{"key":"e_1_3_1_24_2","article-title":"The prediction of saliency map for head and eye movements in 360 degree images","author":"Zhu Yucheng","year":"2019","unstructured":"Yucheng Zhu, Guangtao Zhai, Xiongkuo Min, and Jiantao Zhou. 2019. The prediction of saliency map for head and eye movements in 360 degree images. IEEE Trans. Multimedia 22, 9 (2019), 2331\u20132344.","journal-title":"IEEE Trans. Multimedia"},{"key":"e_1_3_1_25_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2018.2851672"},{"key":"e_1_3_1_26_2","doi-asserted-by":"crossref","first-page":"103887","DOI":"10.1016\/j.imavis.2020.103887","article-title":"Eml-net: An expandable multi-layer network for saliency prediction","volume":"95","author":"Jia Sen","year":"2020","unstructured":"Sen Jia and Neil D. B. Bruce. 2020. 
Eml-net: An expandable multi-layer network for saliency prediction. Image Vision Comput. 95 (2020), 103887.","journal-title":"Image Vision Comput."},{"key":"e_1_3_1_27_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00514"},{"key":"e_1_3_1_28_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2019.2936112"},{"key":"e_1_3_1_29_2","doi-asserted-by":"publisher","DOI":"10.1109\/TVCG.2017.2750671"},{"key":"e_1_3_1_30_2","first-page":"1396","volume-title":"Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition","author":"Hu Hou Ning","year":"2017","unstructured":"Hou Ning Hu, Yen Chen Lin, Ming Yu Liu, Hsien Tzu Cheng, Yung Ju Chang, and Min Sun. 2017. Deep 360 pilot: Learning a deep agent for piloting through 360 sports video. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 1396\u20131405."},{"issue":"4","key":"e_1_3_1_31_2","first-page":"1","article-title":"Learning a deep agent to predict head movement in 360-degree images","volume":"16","author":"Zhu Yucheng","year":"2020","unstructured":"Yucheng Zhu, Guangtao Zhai, Xiongkuo Min, and Jiantao Zhou. 2020. Learning a deep agent to predict head movement in 360-degree images. ACM Trans. Multimedia Comput. Commun. Appl. 16, 4 (2020), 1\u201323.","journal-title":"ACM Trans. Multimedia Comput. Commun. Appl."},{"key":"e_1_3_1_32_2","first-page":"6","volume-title":"Proceedings of the IEEE Workshop on Multimedia Signal Processing","author":"Chao Fang-Yi","year":"2021","unstructured":"Fang-Yi Chao, Cagri Ozcinar, and Aljosa Smolic. 2021. Transformer-based long-term viewport prediction in \\(360^\\circ\\) video: Scanpath is all you need. In Proceedings of the IEEE Workshop on Multimedia Signal Processing. 6\u20138."},{"key":"e_1_3_1_33_2","first-page":"529","volume-title":"Advances in Neural Information Processing Systems","author":"Su Yu-Chuan","year":"2017","unstructured":"Yu-Chuan Su and Kristen Grauman. 2017. 
Learning spherical convolution for fast features from \\(360^\\circ\\) imagery. In Advances in Neural Information Processing Systems. MIT Press, 529\u2013539."},{"key":"e_1_3_1_34_2","unstructured":"Taco S. Cohen Mario Geiger Jonas K\u00f6hler and Max Welling. 2018. Spherical CNNs. Retrieved from https:\/\/arxiv.org\/abs\/1801.10130."},{"key":"e_1_3_1_35_2","first-page":"3742","volume-title":"Proceedings of the IEEE\/CVF International Conference on Computer Vision","author":"Li Yunhao","year":"2021","unstructured":"Yunhao Li, Wei Shen, Zhongpai Gao, Yucheng Zhu, Guangtao Zhai, and Guodong Guo. 2021. Looking here or there? Gaze following in 360-degree images. In Proceedings of the IEEE\/CVF International Conference on Computer Vision. 3742\u20133751."},{"key":"e_1_3_1_36_2","doi-asserted-by":"publisher","DOI":"10.1109\/TVCG.2022.3150502"},{"key":"e_1_3_1_37_2","doi-asserted-by":"publisher","DOI":"10.1145\/3511603"},{"key":"e_1_3_1_38_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2021.3050861"},{"key":"e_1_3_1_39_2","first-page":"1","volume-title":"Proceedings of the International Conference on Visual Communications and Image Processing (VCIP\u201921)","author":"Yang Yiwei","year":"2021","unstructured":"Yiwei Yang, Yucheng Zhu, Zhongpai Gao, and Guangtao Zhai. 2021. SalGFCN: Graph based fully convolutional network for panoramic saliency prediction. In Proceedings of the International Conference on Visual Communications and Image Processing (VCIP\u201921). IEEE, 1\u20135."},{"key":"e_1_3_1_40_2","doi-asserted-by":"publisher","DOI":"10.1201\/b16191"},{"key":"e_1_3_1_41_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.image.2018.05.003"},{"key":"e_1_3_1_42_2","unstructured":"Christopher Carlson. [n.d.]. How I Made Wine Glasses from Sunflowers. 
Retrieved from http:\/\/blog.wolfram.com\/2011\/07\/28\/how-i-made-wine-glasses-from-sunflowers\/."},{"key":"e_1_3_1_43_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.90"},{"key":"e_1_3_1_44_2","doi-asserted-by":"publisher","DOI":"10.1109\/TMM.2017.2777665"},{"key":"e_1_3_1_45_2","doi-asserted-by":"publisher","DOI":"10.1145\/2996463"},{"key":"e_1_3_1_46_2","volume-title":"International Encyclopedia of Ergonomics and Human Factors","author":"Karwowski Waldemar","year":"2006","unstructured":"Waldemar Karwowski. 2006. International Encyclopedia of Ergonomics and Human Factors. CRC Press, Boca Raton, FL."},{"key":"e_1_3_1_47_2","doi-asserted-by":"publisher","DOI":"10.1007\/BF00234474"},{"key":"e_1_3_1_48_2","doi-asserted-by":"publisher","DOI":"10.4310\/CMS.2008.v6.n2.a12"},{"key":"e_1_3_1_49_2","unstructured":"Vive Pro. 2019. VIVE Pro Eye: HMD with Precise Eye Tracking. Retrieved from https:\/\/enterprise.vive.com\/us\/product\/vive-pro-eye\/."},{"key":"e_1_3_1_50_2","first-page":"3","volume-title":"Eye Movement Research","author":"Rayner Keith","year":"1995","unstructured":"Keith Rayner. 1995. Eye movements and cognitive processes in reading, visual search, and scene perception. In Eye Movement Research, Vol. 6. North-Holland, 3\u201322."},{"key":"e_1_3_1_51_2","doi-asserted-by":"publisher","DOI":"10.1145\/3304109.3325820"},{"key":"e_1_3_1_52_2","unstructured":"Zoya Bylinskii Tilke Judd Ali Borji Laurent Itti Fr\u00e9do Durand Aude Oliva and Antonio Torralba. [n.d.]. MIT Saliency Benchmark. Retrieved from http:\/\/saliency.mit.edu\/."},{"key":"e_1_3_1_53_2","doi-asserted-by":"publisher","DOI":"10.1142\/S1793351X19400142"},{"key":"e_1_3_1_54_2","unstructured":"Junting Pan Cristian Canton Ferrer Kevin McGuinness Noel E. O\u2019Connor Jordi Torres Elisa Sayrol and Xavier Giro-i Nieto. 2017. SalGAN: Visual saliency prediction with generative adversarial networks. 
Retrieved from https:\/\/arxiv.org\/abs\/1701.01081."},{"key":"e_1_3_1_55_2","first-page":"153","volume-title":"Proceedings of the IEEE International Conference on Computer Vision","author":"Zhang Jianming","year":"2013","unstructured":"Jianming Zhang and Stan Sclaroff. 2013. Saliency detection: A boolean map approach. In Proceedings of the IEEE International Conference on Computer Vision. 153\u2013160."},{"key":"e_1_3_1_56_2","doi-asserted-by":"publisher","DOI":"10.1109\/34.730558"},{"key":"e_1_3_1_57_2","first-page":"3488","volume-title":"Proceedings of the International Conference on Pattern Recognition","author":"Cornia Marcella","year":"2016","unstructured":"Marcella Cornia, Lorenzo Baraldi, Giuseppe Serra, and Rita Cucchiara. 2016. A deep multi-level network for saliency prediction. In Proceedings of the International Conference on Pattern Recognition. IEEE, 3488\u20133493."},{"key":"e_1_3_1_58_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2015.7298710"},{"key":"e_1_3_1_59_2","first-page":"1","volume-title":"Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition","author":"Guo Chenlei","year":"2008","unstructured":"Chenlei Guo, Qi Ma, and Liming Zhang. 2008. Spatio-temporal saliency detection using phase spectrum of quaternion fourier transform. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 1\u20138."},{"key":"e_1_3_1_60_2","doi-asserted-by":"publisher","DOI":"10.1167\/9.12.15"},{"key":"e_1_3_1_61_2","unstructured":"Panagiotis Linardos Eva Mohedano Juan Jose Nieto Noel E. O\u2019Connor Xavier Giro-i Nieto and Kevin McGuinness. 2019. Simple vs. complex temporal recurrences for video saliency prediction. Retrieved from https:\/\/arXiv:1907.01869."},{"key":"e_1_3_1_62_2","first-page":"2394","volume-title":"Proceedings of the IEEE International Conference on Computer Vision","author":"Min Kyle","year":"2019","unstructured":"Kyle Min and Jason J. Corso. 2019. 
TASED-net: Temporally aggregating spatial encoder-decoder network for video saliency detection. In Proceedings of the IEEE International Conference on Computer Vision. 2394\u20132403."},{"key":"e_1_3_1_63_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.image.2018.05.005"},{"key":"e_1_3_1_64_2","doi-asserted-by":"publisher","DOI":"10.1109\/TCSVT.2018.2883305"},{"key":"e_1_3_1_65_2","unstructured":"Tilke Judd Fr\u00e9do Durand and Antonio Torralba. 2012. A Benchmark of Computational Models of Saliency to Predict Human Fixations. MIT Tech. Rep. http:\/\/hdl.handle.net\/1721.1\/68590."},{"key":"e_1_3_1_66_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.visres.2005.03.019"},{"key":"e_1_3_1_67_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.visres.2004.09.017"},{"key":"e_1_3_1_68_2","doi-asserted-by":"publisher","DOI":"10.5555\/1120076.1649158"},{"key":"e_1_3_1_69_2","first-page":"3750","volume-title":"Proceedings of the IEEE\/CVF International Conference on Computer Vision","author":"Djilali Yasser Abdelaziz Dahou","year":"2021","unstructured":"Yasser Abdelaziz Dahou Djilali, Kevin McGuinness, and Noel E. O\u2019Connor. 2021. Simple baselines can fool 360deg saliency metrics. In Proceedings of the IEEE\/CVF International Conference on Computer Vision. 
3750\u20133756."}],"container-title":["ACM Transactions on Multimedia Computing, Communications, and Applications"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3565024","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3565024","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T19:02:52Z","timestamp":1750186972000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3565024"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,2,17]]},"references-count":68,"journal-issue":{"issue":"2s","published-print":{"date-parts":[[2023,6,30]]}},"alternative-id":["10.1145\/3565024"],"URL":"https:\/\/doi.org\/10.1145\/3565024","relation":{},"ISSN":["1551-6857","1551-6865"],"issn-type":[{"value":"1551-6857","type":"print"},{"value":"1551-6865","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,2,17]]},"assertion":[{"value":"2022-03-30","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2022-09-25","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-02-17","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}