{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,7]],"date-time":"2025-10-07T05:29:58Z","timestamp":1759814998566,"version":"3.41.0"},"publisher-location":"New York, NY, USA","reference-count":50,"publisher":"ACM","license":[{"start":{"date-parts":[[2023,8,4]],"date-time":"2023-08-04T00:00:00Z","timestamp":1691107200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2023,8,6]]},"DOI":"10.1145\/3580305.3599818","type":"proceedings-article","created":{"date-parts":[[2023,8,4]],"date-time":"2023-08-04T18:13:58Z","timestamp":1691172838000},"page":"5016-5027","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":2,"title":["Experimentation Platforms Meet Reinforcement Learning: Bayesian Sequential Decision-Making for Continuous Monitoring"],"prefix":"10.1145","author":[{"ORCID":"https:\/\/orcid.org\/0009-0000-7820-4271","authenticated-orcid":false,"given":"Runzhe","family":"Wan","sequence":"first","affiliation":[{"name":"Amazon, Seattle, WA, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2450-1964","authenticated-orcid":false,"given":"Yu","family":"Liu","sequence":"additional","affiliation":[{"name":"Amazon, Seattle, WA, USA"}]},{"ORCID":"https:\/\/orcid.org\/0009-0008-6636-8488","authenticated-orcid":false,"given":"James","family":"McQueen","sequence":"additional","affiliation":[{"name":"Amazon, Seattle, WA, USA"}]},{"ORCID":"https:\/\/orcid.org\/0009-0002-6413-1969","authenticated-orcid":false,"given":"Doug","family":"Hains","sequence":"additional","affiliation":[{"name":"Amazon, Seattle, WA, 
USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1875-2115","authenticated-orcid":false,"given":"Rui","family":"Song","sequence":"additional","affiliation":[{"name":"Amazon, Seattle, WA, USA"}]}],"member":"320","published-online":{"date-parts":[[2023,8,4]]},"reference":[{"volume-title":"Thomas Cook and Oscar A. Perez. 2022-02","author":"Casper Charlie","key":"e_1_3_2_2_1_1","unstructured":"Charlie Casper, Thomas Cook and Oscar A. Perez. 2022-02. An R Package for Group Sequential Boundaries Using Alpha Spending Functions. https:\/\/cran.r-project.org\/web\/packages\/ldbounds\/index.html."},
{"key":"e_1_3_2_2_2_1","doi-asserted-by":"publisher","DOI":"10.18438\/B8F918"},
{"key":"e_1_3_2_2_3_1","doi-asserted-by":"publisher","DOI":"10.1177\/0278364919887447"},
{"key":"e_1_3_2_2_4_1","volume-title":"Reinforcement learning with long short-term memory. Advances in neural information processing systems 14","author":"Bakker Bram","year":"2001","unstructured":"Bram Bakker. 2001. Reinforcement learning with long short-term memory. Advances in neural information processing systems 14 (2001)."},
{"key":"e_1_3_2_2_5_1","volume-title":"CARL: A benchmark for contextual and adaptive reinforcement learning. arXiv preprint arXiv:2110.02102","author":"Benjamins Carolin","year":"2021","unstructured":"Carolin Benjamins, Theresa Eimer, Frederik Schubert, Andr\u00e9 Biedenkapp, Bodo Rosenhahn, Frank Hutter, and Marius Lindauer. 2021. CARL: A benchmark for contextual and adaptive reinforcement learning. arXiv preprint arXiv:2110.02102 (2021)."},
{"volume-title":"Statistical decision theory and Bayesian analysis","author":"Berger James O","key":"e_1_3_2_2_6_1","unstructured":"James O Berger. 2013. Statistical decision theory and Bayesian analysis. Springer Science & Business Media."},
{"key":"e_1_3_2_2_7_1","doi-asserted-by":"publisher","DOI":"10.1093\/biomet\/65.3.625"},
{"key":"e_1_3_2_2_8_1","unstructured":"George Casella and Roger L Berger. 2021. Statistical inference. Cengage Learning."},
{"key":"e_1_3_2_2_9_1","volume-title":"Bayesian experimental design: A review. Statistical science","author":"Chaloner Kathryn","year":"1995","unstructured":"Kathryn Chaloner and Isabella Verdinelli. 1995. Bayesian experimental design: A review. Statistical science (1995), 273--304."},
{"key":"e_1_3_2_2_10_1","volume-title":"Interim analysis: the alpha spending function approach. Statistics in medicine 13, 13--14","author":"Demets David L","year":"1994","unstructured":"David L Demets and KK Gordon Lan. 1994. Interim analysis: the alpha spending function approach. Statistics in medicine 13, 13--14 (1994), 1341--1352."},
{"key":"e_1_3_2_2_11_1","doi-asserted-by":"publisher","DOI":"10.1109\/DSAA.2016.33"},
{"key":"e_1_3_2_2_12_1","unstructured":"Theresa Eimer, Andr\u00e9 Biedenkapp, Frank Hutter and Marius Lindauer. [n. d.]. Towards Self-Paced Context Evaluation for Contextual Reinforcement Learning. ([n. d.])."},
{"key":"e_1_3_2_2_13_1","volume-title":"International Conference on Machine Learning. PMLR, 3384--3395","author":"Foster Adam","year":"2021","unstructured":"Adam Foster, Desi R Ivanova, Ilyas Malik, and Tom Rainforth. 2021. Deep adaptive design: Amortizing sequential bayesian experimental design. In International Conference on Machine Learning. PMLR, 3384--3395."},
{"volume-title":"Bayesian data analysis","author":"Gelman Andrew","key":"e_1_3_2_2_14_1","unstructured":"Andrew Gelman, John B Carlin, Hal S Stern, and Donald B Rubin. 1995. Bayesian data analysis. Chapman and Hall\/CRC."},
{"key":"e_1_3_2_2_15_1","doi-asserted-by":"publisher","DOI":"10.1093\/biomet\/70.3.659"},
{"key":"e_1_3_2_2_16_1","doi-asserted-by":"publisher","DOI":"10.1145\/3331651.3331655"},
{"key":"e_1_3_2_2_17_1","volume-title":"International conference on machine learning. PMLR","author":"Haarnoja Tuomas","year":"2018","unstructured":"Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. 2018. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In International conference on machine learning. PMLR, 1861--1870."},
{"key":"e_1_3_2_2_18_1","volume-title":"Dotan Di Castro, and Shie Mannor","author":"Hallak Assaf","year":"2015","unstructured":"Assaf Hallak, Dotan Di Castro, and Shie Mannor. 2015. Contextual markov decision processes. arXiv preprint arXiv:1502.02259 (2015)."},
{"key":"e_1_3_2_2_19_1","volume-title":"Sequential Bayesian optimal experimental design via approximate dynamic programming. arXiv preprint arXiv:1604.08320","author":"Huan Xun","year":"2016","unstructured":"Xun Huan and Youssef M Marzouk. 2016. Sequential Bayesian optimal experimental design via approximate dynamic programming. arXiv preprint arXiv:1604.08320 (2016)."},
{"key":"e_1_3_2_2_20_1","first-page":"25785","article-title":"Implicit deep adaptive design: policy-based experimental design without likelihoods","volume":"34","author":"Ivanova Desi R","year":"2021","unstructured":"Desi R Ivanova, Adam Foster, Steven Kleinegesse, Michael U Gutmann, and Thomas Rainforth. 2021. Implicit deep adaptive design: policy-based experimental design without likelihoods. Advances in Neural Information Processing Systems 34 (2021), 25785--25798.","journal-title":"Advances in Neural Information Processing Systems"},
{"volume-title":"The theory of probability","author":"Jeffreys Harold","key":"e_1_3_2_2_21_1","unstructured":"Harold Jeffreys. 1961. The theory of probability. Oxford University Press."},
{"key":"e_1_3_2_2_22_1","doi-asserted-by":"publisher","DOI":"10.1145\/3097983.3097992"},
{"key":"e_1_3_2_2_23_1","doi-asserted-by":"publisher","DOI":"10.1287\/opre.2021.2135"},
{"key":"e_1_3_2_2_24_1","volume-title":"Always valid inference: Bringing sequential analysis to A\/B testing. arXiv preprint arXiv:1512.04922","author":"Johari Ramesh","year":"2015","unstructured":"Ramesh Johari, Leo Pekelis, and David J Walsh. 2015. Always valid inference: Bringing sequential analysis to A\/B testing. arXiv preprint arXiv:1512.04922 (2015)."},
{"key":"e_1_3_2_2_25_1","volume-title":"International journal of Ayurveda research 1, 1","author":"Kadam Prashant","year":"2010","unstructured":"Prashant Kadam and Supriya Bhalerao. 2010. Sample size calculation. International journal of Ayurveda research 1, 1 (2010), 55."},
{"key":"e_1_3_2_2_26_1","unstructured":"Madan Gopal Kundu, Sandipan Samanta and Shoubhik Mondal. 2021. Conditional power, predictive power and probability of success in clinical trials with continuous, binary and time-to-event endpoints. (2021)."},
{"key":"e_1_3_2_2_27_1","volume-title":"International Conference on Machine Learning. PMLR, 3053--3062","author":"Liang Eric","year":"2018","unstructured":"Eric Liang, Richard Liaw, Robert Nishihara, Philipp Moritz, Roy Fox, Ken Goldberg, Joseph Gonzalez, Michael Jordan, and Ion Stoica. 2018. RLlib: Abstractions for distributed reinforcement learning. In International Conference on Machine Learning. PMLR, 3053--3062."},
{"key":"e_1_3_2_2_28_1","unstructured":"Winston Lin. 2013. Agnostic notes on regression adjustments to experimental data: Reexamining Freedman's critique. (2013)."},
{"key":"e_1_3_2_2_29_1","doi-asserted-by":"crossref","unstructured":"James K Lindsey et al. 1999. Models for repeated measurements. OUP Catalogue (1999).","DOI":"10.1093\/oso\/9780198505594.001.0001"},
{"volume-title":"Empirical bayes methods","author":"Maritz Johannes S","key":"e_1_3_2_2_30_1","unstructured":"Johannes S Maritz. 2018. Empirical bayes methods. Chapman and Hall\/CRC."},
{"key":"e_1_3_2_2_31_1","volume-title":"International conference on machine learning. PMLR","author":"Mnih Volodymyr","year":"2016","unstructured":"Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. 2016. Asynchronous methods for deep reinforcement learning. In International conference on machine learning. PMLR, 1928--1937."},
{"key":"e_1_3_2_2_32_1","volume-title":"et al","author":"Mnih Volodymyr","year":"2015","unstructured":"Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. 2015. Human-level control through deep reinforcement learning. Nature 518, 7540 (2015), 529--533."},
{"key":"e_1_3_2_2_33_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.jspi.2006.05.021"},
{"key":"e_1_3_2_2_34_1","doi-asserted-by":"publisher","DOI":"10.1198\/016214501753382327"},
{"volume-title":"Markov decision processes: discrete stochastic dynamic programming","author":"Puterman Martin L","key":"e_1_3_2_2_35_1","unstructured":"Martin L Puterman. 2014. Markov decision processes: discrete stochastic dynamic programming. John Wiley & Sons."},
{"key":"e_1_3_2_2_36_1","volume-title":"International Conference on Artificial Intelligence and Statistics. PMLR, 1775--1785","author":"Richardson Thomas S","year":"2022","unstructured":"Thomas S Richardson, Yu Liu, James McQueen, and Doug Hains. 2022. A Bayesian Model for Online Activity Sample Sizes. In International Conference on Artificial Intelligence and Statistics. PMLR, 1775--1785."},
{"key":"e_1_3_2_2_37_1","doi-asserted-by":"publisher","DOI":"10.2352\/ISSN.2470-1173.2017.19.AVM-023"},
{"key":"e_1_3_2_2_38_1","volume-title":"Sequential hypothesis testing with Bayes factors: Efficiently testing mean differences. Psychological methods 22, 2","author":"Sch\u00f6nbrodt Felix D","year":"2017","unstructured":"Felix D Sch\u00f6nbrodt, Eric-Jan Wagenmakers, Michael Zehetleitner, and Marco Perugini. 2017. Sequential hypothesis testing with Bayes factors: Efficiently testing mean differences. Psychological methods 22, 2 (2017), 322."},
{"key":"e_1_3_2_2_39_1","volume-title":"International conference on machine learning. PMLR","author":"Schulman John","year":"2015","unstructured":"John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. 2015. Trust region policy optimization. In International conference on machine learning. PMLR, 1889--1897."},
{"key":"e_1_3_2_2_40_1","volume-title":"Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347","author":"Schulman John","year":"2017","unstructured":"John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347 (2017)."},
{"key":"e_1_3_2_2_41_1","volume-title":"Bayesian sequential optimal experimental design for nonlinear models using policy gradient reinforcement learning. arXiv preprint arXiv:2110.15335","author":"Shen Wanggang","year":"2021","unstructured":"Wanggang Shen and Xun Huan. 2021. Bayesian sequential optimal experimental design for nonlinear models using policy gradient reinforcement learning. arXiv preprint arXiv:2110.15335 (2021)."},
{"key":"e_1_3_2_2_42_1","volume-title":"Multi-Task Reinforcement Learning with Context-based Representations. arXiv preprint arXiv:2102.06177","author":"Sodhani Shagun","year":"2021","unstructured":"Shagun Sodhani, Amy Zhang, and Joelle Pineau. 2021. Multi-Task Reinforcement Learning with Context-based Representations. arXiv preprint arXiv:2102.06177 (2021)."},
{"volume-title":"Reinforcement learning: An introduction","author":"Sutton Richard S","key":"e_1_3_2_2_43_1","unstructured":"Richard S Sutton and Andrew G Barto. 2018. Reinforcement learning: An introduction. MIT press."},
{"key":"e_1_3_2_2_44_1","volume-title":"A Comparative Tutorial of Bayesian Sequential Design and Reinforcement Learning. arXiv preprint arXiv:2205.04023","author":"Tec Mauricio","year":"2022","unstructured":"Mauricio Tec, Yunshan Duan, and Peter M\u00fcller. 2022. A Comparative Tutorial of Bayesian Sequential Design and Reinforcement Learning. arXiv preprint arXiv:2205.04023 (2022)."},
{"key":"e_1_3_2_2_45_1","doi-asserted-by":"publisher","DOI":"10.21105\/joss.01026"},
{"key":"e_1_3_2_2_46_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v30i1.10295"},
{"key":"e_1_3_2_2_47_1","volume-title":"Bayesian reinforcement learning. Reinforcement learning","author":"Vlassis Nikos","year":"2012","unstructured":"Nikos Vlassis, Mohammad Ghavamzadeh, Shie Mannor, and Pascal Poupart. 2012. Bayesian reinforcement learning. Reinforcement learning (2012), 359--386."},
{"key":"e_1_3_2_2_48_1","volume-title":"Optimum character of the sequential probability ratio test. The Annals of Mathematical Statistics","author":"Wald Abraham","year":"1948","unstructured":"Abraham Wald and Jacob Wolfowitz. 1948. Optimum character of the sequential probability ratio test. The Annals of Mathematical Statistics (1948), 326--339."},
{"key":"e_1_3_2_2_49_1","doi-asserted-by":"publisher","DOI":"10.1145\/3447548.3467303"},
{"key":"e_1_3_2_2_50_1","doi-asserted-by":"publisher","DOI":"10.1093\/biomet\/34.1-2.1"}],"event":{"name":"KDD '23: The 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining","sponsor":["SIGMOD ACM Special Interest Group on Management of Data","SIGKDD ACM Special Interest Group on Knowledge Discovery in Data"],"location":"Long Beach CA USA","acronym":"KDD '23"},"container-title":["Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3580305.3599818","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3580305.3599818","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T17:49:23Z","timestamp":1750182563000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3580305.3599818"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,8,4]]},"references-count":50,"alternative-id":["10.1145\/3580305.3599818","10.1145\/3580305"],"URL":"https:\/\/doi.org\/10.1145\/3580305.3599818","relation":{},"subject":[],"published":{"date-parts":[[2023,8,4]]},"assertion":[{"value":"2023-08-04","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}