{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,9]],"date-time":"2025-12-09T18:10:24Z","timestamp":1765303824375,"version":"3.41.0"},"reference-count":54,"publisher":"Association for Computing Machinery (ACM)","issue":"3-4","license":[{"start":{"date-parts":[[2021,9,3]],"date-time":"2021-09-03T00:00:00Z","timestamp":1630627200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/100000185","name":"Defense Advanced Research Projects Agency","doi-asserted-by":"crossref","id":[{"id":"10.13039\/100000185","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/501100000038","name":"Natural Sciences and Engineering Research Council of Canada","doi-asserted-by":"crossref","id":[{"id":"10.13039\/501100000038","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Interact. Intell. Syst."],"published-print":{"date-parts":[[2021,12,31]]},"abstract":"<jats:p>While the computer vision problem of searching for activities in videos is usually addressed by using discriminative models, their decisions tend to be opaque and difficult for people to understand. We propose a case study of a novel machine learning approach for generative searching and ranking of motion capture activities with visual explanation. Instead of directly ranking videos in the database given a text query, our approach uses a variant of Generative Adversarial Networks (GANs) to generate exemplars based on the query and uses them to search for the activity of interest in a large database. Our model is able to achieve comparable results to its discriminative counterpart, while being able to dynamically generate visual explanations. 
In addition to our searching and ranking method, we present an explanation interface that enables the user to successfully explore the model\u2019s explanations and its confidence by revealing query-based, model-generated motion capture clips that contributed to the model\u2019s decision. Finally, we conducted a user study with 44 participants to show that by using our model and interface, participants benefit from a deeper understanding of the model\u2019s conceptualization of the search query. We discovered that the XAI system yielded a comparable level of efficiency, accuracy, and user-machine synchronization as its black-box counterpart, if the user exhibited a high level of trust for AI explanation.<\/jats:p>","DOI":"10.1145\/3465407","type":"journal-article","created":{"date-parts":[[2021,9,3]],"date-time":"2021-09-03T19:30:07Z","timestamp":1630697407000},"page":"1-34","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":6,"title":["Learn, Generate, Rank, Explain: A Case Study of Visual Explanation by Generative Machine Learning"],"prefix":"10.1145","volume":"11","author":[{"given":"Chris","family":"Kim","sequence":"first","affiliation":[{"name":"Ontario Tech University, Oshawa, Canada"}]},{"given":"Xiao","family":"Lin","sequence":"additional","affiliation":[{"name":"SRI International, Princeton, NJ, USA"}]},{"given":"Christopher","family":"Collins","sequence":"additional","affiliation":[{"name":"Ontario Tech University, Oshawa, Canada"}]},{"given":"Graham W.","family":"Taylor","sequence":"additional","affiliation":[{"name":"University of Guelph and Vector Institute for AI, Guelph, Canada"}]},{"given":"Mohamed R.","family":"Amer","sequence":"additional","affiliation":[{"name":"SRI International, Princeton, NJ, USA"}]}],"member":"320","published-online":{"date-parts":[[2021,9,3]]},"reference":[
{"key":"e_1_2_1_1_1","unstructured":"CMU. 2018. CMU Graphics Lab Motion Capture Database. Retrieved from http:\/\/mocap.cs.cmu.edu\/."},
{"key":"e_1_2_1_2_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICRA.2018.8460608"},
{"key":"e_1_2_1_3_1","unstructured":"Martin Arjovsky Soumith Chintala and L\u00e9on Bottou. 2017. Wasserstein GAN. Retrieved from https:\/\/arXiv:1701.07875."},
{"key":"e_1_2_1_4_1","volume-title":"TRECVID 2017: Evaluating ad-hoc and instance video search, events detection, video captioning and hyperlinking. In Proceedings of the Annual TREC Video Retrieval Evaluation (TRECVID\u201917)","author":"Awad George","year":"2017","unstructured":"George Awad, Asad Butt, Jonathan Fiscus, David Joy, Andrew Delgado, Martial Michel, Alan F. Smeaton, Yvette Graham, Wessel Kraaij, Georges Quenot, Maria Eskevich, Roeland Ordelman, Gareth J. F. Jones, and Benoit Huet. 2017. TRECVID 2017: Evaluating ad-hoc and instance video search, events detection, video captioning and hyperlinking. In Proceedings of the Annual TREC Video Retrieval Evaluation (TRECVID\u201917). NIST."},
{"key":"e_1_2_1_5_1","doi-asserted-by":"crossref","unstructured":"Emad Barsoum John Kender and Zicheng Liu. 2017. HP-GAN: Probabilistic 3D human motion prediction via GAN. Retrieved from https:\/\/abs\/1711.09561.","DOI":"10.1109\/CVPRW.2018.00191"},
{"key":"e_1_2_1_6_1","volume-title":"MINE: Mutual information neural estimation.","author":"Belghazi Ishmael","year":"2018","unstructured":"Ishmael Belghazi, Sai Rajeswar, Aristide Baratin, R. Devon Hjelm, and Aaron Courville. 2018. MINE: Mutual information neural estimation. Retrieved from https:\/\/arXiv:1801.04062."},
{"key":"e_1_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.1145\/3301275.3302289"},
{"key":"e_1_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.143"},
{"key":"e_1_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.24963\/ijcai.2018\/84"},
{"key":"e_1_2_1_10_1","doi-asserted-by":"publisher","DOI":"10.1109\/MCG.2018.042731661"},
{"volume-title":"Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.","author":"Chuang J.","key":"e_1_2_1_11_1","unstructured":"J. Chuang, D. Ramage, C. Manning, and J. Heer. 2012. Interpretation and trust: Designing model-driven visualizations for text analysis. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems."},
{"key":"e_1_2_1_12_1","volume-title":"Proceedings of the American Association for Artificial Intelligence National Conference. 183\u2013188","author":"de Kleer Johan","year":"1987","unstructured":"Johan de Kleer and Raymond Reiter. 1987. Foundations for assumption-based truth maintenance systems: Preliminary report. In Proceedings of the American Association for Artificial Intelligence National Conference. 183\u2013188."},
{"key":"e_1_2_1_13_1","doi-asserted-by":"crossref","unstructured":"Jonathan Dodge Q. Vera Liao Yunfeng Zhang Rachel K. E. Bellamy and Casey Dugan. 2019. Explaining models: An empirical study of how explanations impact fairness judgment. Retrieved from https:\/\/arXiv:1901.07694.","DOI":"10.1145\/3301275.3302310"},
{"key":"e_1_2_1_14_1","doi-asserted-by":"crossref","unstructured":"Upol Ehsan Pradyumna Tambwekar Larry Chan Brent Harrison and Mark Riedl. 2019. Automated rationale generation: A technique for explainable AI and its effects on human perceptions. Retrieved from https:\/\/arXiv:1901.03729.","DOI":"10.1145\/3301275.3302316"},
{"key":"e_1_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.1145\/3301275.3302262"},
{"key":"e_1_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.5555\/3071534.3071580"},
{"volume-title":"Proceedings of the International Workshop on Intelligent Visual Interfaces for Text Analysis.","author":"D. Gotz","key":"e_1_2_1_17_1","unstructured":"D. Gotz et al. 2010. HARVEST: An intelligent visual analytic tool for the masses. In Proceedings of the International Workshop on Intelligent Visual Interfaces for Text Analysis."},
{"key":"e_1_2_1_18_1","volume-title":"Jamie Ryan Kiros, and Sanja Fidler","author":"Faghri Fartash","year":"2017","unstructured":"Fartash Faghri, David J. Fleet, Jamie Ryan Kiros, and Sanja Fidler. 2017. VSE++: Improved visual-semantic embeddings. Retrieved from https:\/\/arXiv:1707.05612."},
{"key":"e_1_2_1_19_1","volume-title":"Proceedings of the Conference on Computer Vision and Pattern Recognition.","author":"Fragkiadaki Katerina","year":"2015","unstructured":"Katerina Fragkiadaki, Sergey Levine, and Jitendra Malik. 2015. Recurrent network models for kinematic tracking. In Proceedings of the Conference on Computer Vision and Pattern Recognition."},
{"key":"e_1_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.1109\/3DV.2017.00059"},
{"key":"e_1_2_1_22_1","unstructured":"Ian Goodfellow Jean Pouget-Abadie Mehdi Mirza Bing Xu David Warde-Farley Sherjil Ozair Aaron Courville and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in Neural Information Processing Systems. 2672\u20132680."},
{"key":"e_1_2_1_23_1","volume-title":"Courville","author":"Gulrajani Ishaan","year":"2017","unstructured":"Ishaan Gulrajani, Faruk Ahmed, Mart\u00edn Arjovsky, Vincent Dumoulin, and Aaron C. Courville. 2017. Improved training of Wasserstein GANS. In Advances in Neural Information Processing Systems. 5767\u20135777."},
{"key":"e_1_2_1_25_1","unstructured":"Bruce Hahn. Accessed 2018. CMU Graphics Lab Motion Capture Database Motionbuilder-friendly BVH conversion. Retrieved from https:\/\/sites.google.com\/a\/cgspeed.com\/cgspeed\/motion-capture\/cmu-bvh-conversion."},
{"key":"e_1_2_1_26_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.artint.2009.11.010"},
{"key":"e_1_2_1_27_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-46493-0_1"},
{"key":"e_1_2_1_28_1","volume-title":"Proceedings of the International Conference on Learning Representations.","author":"Higgins Irina","year":"2017","unstructured":"Irina Higgins, Arka Pal Loic Matthey, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. 2017. Beta-VAE: Learning basic visual concepts with a constrained variational framework. In Proceedings of the International Conference on Learning Representations."},
{"key":"e_1_2_1_29_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.573"},
{"volume-title":"International Conference on Learning Representations.","author":"Karpathy A.","key":"e_1_2_1_30_1","unstructured":"A. Karpathy, J. Johnson, and L. Fei-Fei. 2016. Visualizing and understanding recurrent networks. In International Conference on Learning Representations."},
{"volume-title":"International Conference on Learning Representations.","author":"Diederik","key":"e_1_2_1_31_1","unstructured":"Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations."},
{"key":"e_1_2_1_32_1","unstructured":"Ryan Kiros Yukun Zhu Ruslan R Salakhutdinov Richard Zemel Raquel Urtasun Antonio Torralba and Sanja Fidler. 2015. Skip-thought vectors. In Advances in Neural Information Processing Systems. 3294\u20133302."},
{"key":"e_1_2_1_33_1","volume-title":"Proceedings of the International Conference on Machine Learning.","author":"Koh Pang Wei","year":"2017","unstructured":"Pang Wei Koh and Percy Liang. 2017. Understanding black-box predictions via influence functions. In Proceedings of the International Conference on Machine Learning."},
{"key":"e_1_2_1_34_1","doi-asserted-by":"publisher","DOI":"10.1145\/3301275.3302306"},
{"volume-title":"Proceedings of the International Conference on Intelligent User Interfaces.","author":"Kulesza T.","key":"e_1_2_1_35_1","unstructured":"T. Kulesza, M. Burnett, W. K. Wong, and S. Stumpf. 2015. Principles of explanatory debugging to personalize interactive machine learning. In Proceedings of the International Conference on Intelligent User Interfaces."},
{"key":"e_1_2_1_36_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00960"},
{"key":"e_1_2_1_37_1","volume-title":"Amer","author":"Lin Xiao","year":"2018","unstructured":"Xiao Lin and Mohamed R. Amer. 2018. Human motion modeling using DVGANs. Retrieved from https:\/\/arXiv:1804.10652."},
{"key":"e_1_2_1_38_1","doi-asserted-by":"publisher","DOI":"10.1109\/TVCG.2016.2598831"},
{"key":"e_1_2_1_39_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.497"},
{"key":"e_1_2_1_40_1","unstructured":"Christoph Molnar. 2019. Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. Retrieved from https:\/\/christophm.github.io\/interpretable-ml-book\/."},
{"key":"e_1_2_1_41_1","doi-asserted-by":"publisher","DOI":"10.1109\/TIT.2010.2068870"},
{"key":"e_1_2_1_42_1","doi-asserted-by":"crossref","unstructured":"Chris Olah Arvind Satyanarayan Ian Johnson Shan Carter Ludwig Schubert Katherine Ye and Alexander Mordvintsev. 2018. The building blocks of interpretability. In Distill Publication. Retrieved from https:\/\/distill.pub\/2018\/building-blocks\/.","DOI":"10.23915\/distill.00010"},
{"key":"e_1_2_1_43_1","doi-asserted-by":"publisher","DOI":"10.1145\/3172944.3172946"},
{"key":"e_1_2_1_44_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.690"},
{"key":"e_1_2_1_45_1","doi-asserted-by":"publisher","DOI":"10.1145\/2939672.2939778"},
{"key":"e_1_2_1_46_1","doi-asserted-by":"publisher","DOI":"10.1109\/TVCG.2015.2467591"},
{"key":"e_1_2_1_47_1","doi-asserted-by":"publisher","DOI":"10.1145\/3301275.3302308"},
{"volume-title":"Proceedings of the International Joint Conference on Artificial Intelligence. 5868\u20135870","author":"Sokol Kacper","key":"e_1_2_1_48_1","unstructured":"Kacper Sokol and Peter A. Flach. 2018. Glass-Box: Explaining AI decisions with counterfactual statements through conversation with a voice-enabled virtual assistant. In Proceedings of the International Joint Conference on Artificial Intelligence. 5868\u20135870."},
{"key":"e_1_2_1_49_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2014.220"},
{"key":"e_1_2_1_50_1","first-page":"2579","article-title":"Visualizing data using t-SNE","volume":"9","author":"van der Maaten L.","year":"2008","unstructured":"L. van der Maaten and G. Hinton. 2008. Visualizing data using t-SNE. J. Mach. Learn. Res. 9 (2008), 2579\u20132605.","journal-title":"J. Mach. Learn. Res."},
{"key":"e_1_2_1_51_1","volume-title":"Proceedings of the Conference on Computer Vision and Pattern Recognition.","author":"Vinyals Oriol","year":"2014","unstructured":"Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2014. Show and tell: A neural image caption generator. In Proceedings of the Conference on Computer Vision and Pattern Recognition."},
{"key":"e_1_2_1_52_1","doi-asserted-by":"publisher","DOI":"10.1109\/BigData.2016.7840748"},
{"key":"e_1_2_1_53_1","doi-asserted-by":"publisher","DOI":"10.1145\/3301275.3302277"},
{"key":"e_1_2_1_54_1","volume-title":"Proceedings of the International Conference on Machine Learning.","author":"Zahavy Tom","year":"2016","unstructured":"Tom Zahavy, Nir Ben Zrihem, and Shie Mannor. 2016. Graying the black box: Understanding DQNs. In Proceedings of the International Conference on Machine Learning."},
{"volume-title":"Proceedings of the European Conference on Computer Vision.","author":"Zeiler M. D.","key":"e_1_2_1_55_1","unstructured":"M. D. Zeiler and R. Fergus. 2014. Visualizing and understanding convolutional networks. In Proceedings of the European Conference on Computer Vision."},
{"key":"e_1_2_1_56_1","doi-asserted-by":"publisher","DOI":"10.1109\/TVCG.2013.167"}],"container-title":["ACM Transactions on Interactive Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3465407","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3465407","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T20:18:30Z","timestamp":1750191510000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3465407"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,9,3]]},"references-count":54,"journal-issue":{"issue":"3-4","published-print":{"date-parts":[[2021,12,31]]}},"alternative-id":["10.1145\/3465407"],"URL":"https:\/\/doi.org\/10.1145\/3465407","relation":{},"ISSN":["2160-6455","2160-6463"],"issn-type":[{"type":"print","value":"2160-6455"},{"type":"electronic","value":"2160-6463"}],"subject":[],"published":{"date-parts":[[2021,9,3]]},"assertion":[{"value":"2019-11-01","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2021-05-01","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2021-09-03","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}