{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T00:24:29Z","timestamp":1760142269481,"version":"build-2065373602"},"publisher-location":"Cham","reference-count":47,"publisher":"Springer Nature Switzerland","isbn-type":[{"type":"print","value":"9783031986840"},{"type":"electronic","value":"9783031986857"}],"license":[{"start":{"date-parts":[[2025,1,1]],"date-time":"2025-01-01T00:00:00Z","timestamp":1735689600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,7,23]],"date-time":"2025-07-23T00:00:00Z","timestamp":1753228800000},"content-version":"vor","delay-in-days":203,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2025]]},"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>Active automata learning from membership and equivalence queries is a foundational problem with numerous applications. We propose a novel variant of the active automata learning problem: actively learn finite automata using <jats:italic>preference queries<\/jats:italic>\u2014i.e., queries about the relative position of two sequences in a total preorder\u2014instead of membership queries. Our solution is <jats:sc>Remap<\/jats:sc>, a novel algorithm which leverages a symbolic observation table along with unification and constraint solving to navigate a space of symbolic hypotheses (each representing a set of automata), and uses satisfiability-solving to construct a concrete automaton (specifically a Moore machine) from a symbolic hypothesis. 
<jats:sc>Remap<\/jats:sc> is guaranteed to correctly infer the minimal automaton with polynomial query complexity under exact equivalence queries, and achieves PAC\u2013identification (<jats:inline-formula>\n              <jats:alternatives>\n                <jats:tex-math>$$\\varepsilon $$<\/jats:tex-math>\n                <mml:math xmlns:mml=\"http:\/\/www.w3.org\/1998\/Math\/MathML\">\n                  <mml:mi>\u03b5<\/mml:mi>\n                <\/mml:math>\n              <\/jats:alternatives>\n            <\/jats:inline-formula>-approximate, with high probability) of the minimal automaton using sampling-based equivalence queries. Our empirical evaluations of <jats:sc>Remap<\/jats:sc> on the task of learning reward machines for two reinforcement learning domains indicate <jats:sc>Remap<\/jats:sc> scales to large automata and is effective at learning correct automata from consistent teachers, under both exact and sampling-based equivalence queries.<\/jats:p>","DOI":"10.1007\/978-3-031-98685-7_5","type":"book-chapter","created":{"date-parts":[[2025,7,22]],"date-time":"2025-07-22T03:32:39Z","timestamp":1753155159000},"page":"104-126","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Automata Learning from\u00a0Preference and\u00a0Equivalence Queries"],"prefix":"10.1007","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-4188-4127","authenticated-orcid":false,"given":"Eric","family":"Hsiung","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1211-1731","authenticated-orcid":false,"given":"Joydeep","family":"Biswas","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6859-1391","authenticated-orcid":false,"given":"Swarat","family":"Chaudhuri","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,7,23]]},"reference":[{"key":"5_CR1","unstructured":"Aarts, F., Kuppens, H., Tretmans, J., 
Vaandrager, F.W., Verwer, S.: Learning and testing the bounded retransmission protocol. In: International Conference on Grammatical Inference (2012). https:\/\/api.semanticscholar.org\/CorpusID:2641499"},{"key":"5_CR2","unstructured":"Abel, D., et al.: On the expressivity of Markov reward. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol.\u00a034, pp. 7799\u20137812, Curran Associates, Inc. (2021). https:\/\/proceedings.neurips.cc\/paper_files\/paper\/2021\/file\/4079016d940210b4ae9ae7d41c4a2065-Paper.pdf"},{"key":"5_CR3","doi-asserted-by":"crossref","unstructured":"Almeida, M., Moreira, N., Reis, R.: Testing the equivalence of regular languages. In: Workshop on Descriptional Complexity of Formal Systems (2009). https:\/\/api.semanticscholar.org\/CorpusID:9014414","DOI":"10.4204\/EPTCS.3.4"},{"key":"5_CR4","unstructured":"Andreas, J., Klein, D., Levine, S.: Modular multitask reinforcement learning with policy sketches. In: Precup, D., Teh, Y.W. (eds.) Proceedings of the 34th International Conference on Machine Learning, Proceedings of Machine Learning Research, vol.\u00a070, pp. 166\u2013175, PMLR (2017). https:\/\/proceedings.mlr.press\/v70\/andreas17a.html"},{"key":"5_CR5","doi-asserted-by":"publisher","unstructured":"Angluin, D.: Learning regular sets from queries and counterexamples. Inf. Comput. 75(2), 87\u2013106 (1987). https:\/\/doi.org\/10.1016\/0890-5401(87)90052-6","DOI":"10.1016\/0890-5401(87)90052-6"},{"key":"5_CR6","doi-asserted-by":"publisher","first-page":"319","DOI":"10.1023\/A:1022821128753","volume":"2","author":"D Angluin","year":"1988","unstructured":"Angluin, D.: Queries and concept learning. Mach. Learn. 2, 319\u2013342 (1988)","journal-title":"Mach. Learn."},{"key":"5_CR7","doi-asserted-by":"crossref","unstructured":"Argyros, G., D\u2019Antoni, L.: The learnability of symbolic automata. 
In: International Conference on Computer Aided Verification (2018)","DOI":"10.1007\/978-3-319-96145-3_23"},{"key":"5_CR8","doi-asserted-by":"crossref","unstructured":"Balle, B., Mohri, M.: Learning weighted automata. In: Conference on Algebraic Informatics (2015)","DOI":"10.1007\/978-3-319-23021-4_1"},{"key":"5_CR9","doi-asserted-by":"publisher","first-page":"1268","DOI":"10.1137\/S009753979326091X","volume":"25","author":"F Bergadano","year":"1994","unstructured":"Bergadano, F., Varricchio, S.: Learning behaviors of automata from multiplicity and equivalence queries. SIAM J. Comput. 25, 1268\u20131280 (1994)","journal-title":"SIAM J. Comput."},{"key":"5_CR10","unstructured":"Bewley, T., L\u00e9cu\u00e9, F.: Interpretable preference-based reinforcement learning with tree-structured reward functions. arXiv:abs\/2112.11230 (2021). https:\/\/api.semanticscholar.org\/CorpusID:245353680"},{"key":"5_CR11","doi-asserted-by":"crossref","unstructured":"B\u0131y\u0131k, E., Talati, A., Sadigh, D.: APReL: A library for active preference-based reward learning algorithms (2022)","DOI":"10.1109\/HRI53351.2022.9889650"},{"key":"5_CR12","unstructured":"Christiano, P.F., Leike, J., Brown, T., Martic, M., Legg, S., Amodei, D.: Deep reinforcement learning from human preferences. In: Guyon, I., Luxburg, U.V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R. (eds.) Advances in Neural Information Processing Systems, vol.\u00a030, Curran Associates, Inc. (2017). https:\/\/proceedings.neurips.cc\/paper_files\/paper\/2017\/file\/d5e2c0adad503c91f91df240d0cd4e49-Paper.pdf"},{"key":"5_CR13","unstructured":"Christiano, P.F., Leike, J., Brown, T.B., Martic, M., Legg, S., Amodei, D.: Deep reinforcement learning from human preferences. arXiv:abs\/1706.03741 (2017). https:\/\/api.semanticscholar.org\/CorpusID:4787508"},{"key":"5_CR14","doi-asserted-by":"crossref","unstructured":"Corazza, J., Gavran, I., Neider, D.: Reinforcement learning with stochastic reward machines. 
In: Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI 2022, Thirty-Fourth Conference on Innovative Applications of Artificial Intelligence, IAAI 2022, The Twelfth Symposium on Educational Advances in Artificial Intelligence, EAAI 2022 Virtual Event, February 22 - March 1, 2022, pp. 6429\u20136436, AAAI Press (2022). https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/20594","DOI":"10.1609\/aaai.v36i6.20594"},{"key":"5_CR15","doi-asserted-by":"crossref","unstructured":"De\u00a0Moura, L., Bj\u00f8rner, N.: Z3: an efficient SMT solver. In: Proceedings of the Theory and Practice of Software, 14th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, pp. 337\u2013340, TACAS\u201908\/ETAPS\u201908, Springer-Verlag, Berlin, Heidelberg (2008). ISBN 3540787992","DOI":"10.1007\/978-3-540-78800-3_24"},{"key":"5_CR16","doi-asserted-by":"crossref","unstructured":"Dohmen, T., Topper, N., Atia, G.K., Beckus, A., Trivedi, A., Velasquez, A.: Inferring probabilistic reward machines from non-Markovian reward signals for reinforcement learning. In: Kumar, A., Thi\u00e9baux, S., Varakantham, P., Yeoh, W. (eds.) Proceedings of the Thirty-Second International Conference on Automated Planning and Scheduling, ICAPS 2022, Singapore (virtual), June 13-24, 2022, pp. 574\u2013582, AAAI Press (2022). https:\/\/ojs.aaai.org\/index.php\/ICAPS\/article\/view\/19844","DOI":"10.1609\/icaps.v32i1.19844"},{"key":"5_CR17","doi-asserted-by":"crossref","unstructured":"Drews, S., D\u2019Antoni, L.: Learning symbolic automata. In: International Conference on Tools and Algorithms for Construction and Analysis of Systems (2017)","DOI":"10.1007\/978-3-662-54577-5_10"},{"key":"5_CR18","doi-asserted-by":"publisher","unstructured":"Dupont, P.: Regular grammatical inference from positive and negative samples by genetic search: the GIG method. In: Carrasco, R.C., Oncina, J. (eds.) 
Grammatical Inference and Applications, Second International Colloquium, ICGI-94, Alicante, Spain, September 21-23, 1994, Proceedings, Lecture Notes in Computer Science, vol. 862, pp. 236\u2013245, Springer (1994), https:\/\/doi.org\/10.1007\/3-540-58473-0_152","DOI":"10.1007\/3-540-58473-0_152"},{"key":"5_CR19","doi-asserted-by":"publisher","unstructured":"Fleischner, H.: On the equivalence of Mealy-type and Moore-type automata and a relation between reducibility and Moore-reducibility. J. Comput. Syst. Sci. 14(1), 1\u201316 (1977). ISSN 0022-0000, https:\/\/doi.org\/10.1016\/S0022-0000(77)80038-X, URL https:\/\/www.sciencedirect.com\/science\/article\/pii\/S002200007780038X","DOI":"10.1016\/S0022-0000(77)80038-X"},{"key":"5_CR20","doi-asserted-by":"crossref","unstructured":"Gaon, M., Brafman, R.I.: Reinforcement learning with non-Markovian rewards. In: The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pp. 3980\u20133987, AAAI Press (2020)","DOI":"10.1609\/aaai.v34i04.5814"},{"key":"5_CR21","doi-asserted-by":"crossref","unstructured":"Giannakopoulou, D., Rakamaric, Z., Raman, V.: Symbolic learning of component interfaces. In: Static Analysis Symposium (2012). https:\/\/api.semanticscholar.org\/CorpusID:1449946","DOI":"10.1007\/978-3-642-33125-1_18"},{"key":"5_CR22","doi-asserted-by":"crossref","unstructured":"Gold, E.M.: Complexity of automaton identification from given data. Inf. Control. 37, 302\u2013320 (1978). https:\/\/api.semanticscholar.org\/CorpusID:8943792","DOI":"10.1016\/S0019-9958(78)90562-4"},{"key":"5_CR23","doi-asserted-by":"crossref","unstructured":"Guerin, J.T., Allen, T.E., Goldsmith, J.: Learning CP-net preferences online from user queries. In: AAAI Conference on Artificial Intelligence (2013). 
https:\/\/api.semanticscholar.org\/CorpusID:15976671","DOI":"10.1007\/978-3-642-41575-3_16"},{"key":"5_CR24","unstructured":"Hopcroft, J.E., Karp, R.M.: A linear algorithm for testing equivalence of finite automata. (1971). https:\/\/api.semanticscholar.org\/CorpusID:120207847"},{"key":"5_CR25","unstructured":"Icarte, R.T., Klassen, T., Valenzano, R., McIlraith, S.: Using reward machines for high-level task specification and decomposition in reinforcement learning. In: Dy, J., Krause, A. (eds.) Proceedings of the 35th International Conference on Machine Learning, Proceedings of Machine Learning Research, vol.\u00a080, pp. 2107\u20132116, PMLR (2018). https:\/\/proceedings.mlr.press\/v80\/icarte18a.html"},{"issue":"6","key":"5_CR26","doi-asserted-by":"publisher","first-page":"995","DOI":"10.1037\/0022-3514.79.6.995","volume":"79","author":"SS Iyengar","year":"2000","unstructured":"Iyengar, S.S., Lepper, M.R.: When choice is demotivating: can one desire too much of a good thing? J. Pers. Soc. Psychol. 79(6), 995 (2000)","journal-title":"J. Pers. Soc. Psychol."},{"key":"5_CR27","unstructured":"Kalra, A., Brown, D.S.: Can differentiable decision trees learn interpretable reward functions? arXiv:abs\/2306.13004 (2023). https:\/\/api.semanticscholar.org\/CorpusID:259224487"},{"key":"5_CR28","doi-asserted-by":"crossref","unstructured":"Koriche, F., Zanuttini, B.: Learning conditional preference networks with queries. Artif. Intell. 174, 685\u2013703 (2009). https:\/\/api.semanticscholar.org\/CorpusID:3060370","DOI":"10.1016\/j.artint.2010.04.019"},{"key":"5_CR29","doi-asserted-by":"crossref","unstructured":"Lin, S.W., \u00c9tienne Andr\u00e9, Liu, Y., Sun, J., Dong, J.S.: Learning assumptions for compositional verification of timed systems. IEEE Trans. Softw. Eng. 40(2), 137\u2013153 (2014)","DOI":"10.1109\/TSE.2013.57"},{"key":"5_CR30","unstructured":"MacGlashan, J., et al.: Interactive learning from policy-dependent human feedback. In: Precup, D., Teh, Y.W. (eds.) 
Proceedings of the 34th International Conference on Machine Learning, Proceedings of Machine Learning Research, vol.\u00a070, pp. 2285\u20132294, PMLR (2017). https:\/\/proceedings.mlr.press\/v70\/macglashan17a.html"},{"key":"5_CR31","volume-title":"Unification in linear time and space: a structured presentation","author":"A Martelli","year":"1976","unstructured":"Martelli, A., Montanari, U.: Unification in linear time and space: a structured presentation. Tech. rep, Istituto di Elaborazione della Informazione, Pisa (1976)"},{"issue":"5","key":"5_CR32","doi-asserted-by":"publisher","first-page":"1045","DOI":"10.1002\/j.1538-7305.1955.tb03788.x","volume":"34","author":"GH Mealy","year":"1955","unstructured":"Mealy, G.H.: A method for synthesizing sequential circuits. Bell Syst. Tech. J. 34(5), 1045\u20131079 (1955). https:\/\/doi.org\/10.1002\/j.1538-7305.1955.tb03788.x","journal-title":"Bell Syst. Tech. J."},{"key":"5_CR33","first-page":"129","volume-title":"Automata Studies","author":"EF Moore","year":"1956","unstructured":"Moore, E.F.: Gedanken-experiments on sequential machines. In: Shannon, C., McCarthy, J. (eds.) Automata Studies, pp. 129\u2013153. Princeton University Press, Princeton, NJ (1956)"},{"key":"5_CR34","unstructured":"Ouyang, L., et al.: Training language models to follow instructions with human feedback. arXiv:abs\/2203.02155 (2022). https:\/\/api.semanticscholar.org\/CorpusID:246426909"},{"key":"5_CR35","doi-asserted-by":"publisher","first-page":"23","DOI":"10.1145\/321250.321253","volume":"12","author":"JA Robinson","year":"1965","unstructured":"Robinson, J.A.: A machine-oriented logic based on the resolution principle. J. ACM 12, 23\u201341 (1965)","journal-title":"J. ACM"},{"key":"5_CR36","doi-asserted-by":"crossref","unstructured":"Sadigh, D., Dragan, A.D., Sastry, S., Seshia, S.A.: Active preference-based learning of reward functions. 
In: Robotics: Science and Systems (2017)","DOI":"10.15607\/RSS.2017.XIII.053"},{"key":"5_CR37","doi-asserted-by":"crossref","unstructured":"Schuts, M., Hooman, J., Vaandrager, F.: Refactoring of Legacy Software Using Model Learning and Equivalence Checking: An Industrial Experience Report. Springer International Publishing (2016)","DOI":"10.1007\/978-3-319-33693-0_20"},{"key":"5_CR38","unstructured":"Shah, A., Vazquez-Chanlatte, M., Junges, S., Seshia, S.A.: Learning formal specifications from membership and preference queries (2023)"},{"key":"5_CR39","doi-asserted-by":"crossref","unstructured":"Tappler, M., Aichernig, B.K., Bacci, G., Eichlseder, M., Larsen, K.G.: L*-based learning of Markov decision processes. In: International Symposium on Formal Methods, pp. 651\u2013669, Springer (2019)","DOI":"10.1007\/978-3-030-30942-8_38"},{"key":"5_CR40","unstructured":"Topper, N., Velasquez, A., Atia, G.: Bayesian inverse reinforcement learning for non-Markovian rewards (2024)"},{"key":"5_CR41","doi-asserted-by":"publisher","unstructured":"Toro Icarte, R., Klassen, T.Q., Valenzano, R.A., McIlraith, S.A.: Reward machines: exploiting reward function structure in reinforcement learning. J. Artif. Intell. Res. (JAIR) 73, 173\u2013208 (2022). https:\/\/doi.org\/10.1613\/jair.1.12440","DOI":"10.1613\/jair.1.12440"},{"key":"5_CR42","unstructured":"Toro\u00a0Icarte, R., Waldie, E., Klassen, T., Valenzano, R., Castro, M., McIlraith, S.: Learning reward machines for partially observable reinforcement learning. In: Wallach, H., Larochelle, H., Beygelzimer, A., d\u2019Alch\u00e9-Buc, F., Fox, E., Garnett, R. (eds.) Advances in Neural Information Processing Systems, vol.\u00a032, Curran Associates, Inc. (2019). https:\/\/proceedings.neurips.cc\/paper\/2019\/file\/532435c44bec236b471a47a88d63513d-Paper.pdf"},{"key":"5_CR43","doi-asserted-by":"publisher","unstructured":"Valiant, L.G.: A theory of the learnable. Commun. ACM 27(11), 1134\u20131142 (1984). 
ISSN 0001-0782, https:\/\/doi.org\/10.1145\/1968.1972","DOI":"10.1145\/1968.1972"},{"key":"5_CR44","unstructured":"Weiss, G., Goldberg, Y., Yahav, E.: Learning deterministic weighted automata with queries and counterexamples. In: Wallach, H., Larochelle, H., Beygelzimer, A., d\u2019Alch\u00e9-Buc, F., Fox, E., Garnett, R. (eds.) Advances in Neural Information Processing Systems, vol.\u00a032, Curran Associates, Inc. (2019). https:\/\/proceedings.neurips.cc\/paper_files\/paper\/2019\/file\/d3f93e7766e8e1b7ef66dfdd9a8be93b-Paper.pdf"},{"key":"5_CR45","doi-asserted-by":"publisher","unstructured":"Xu, Z., Gavran, I., Ahmad, Y., Majumdar, R., Neider, D., Topcu, U., Wu, B.: Joint inference of reward machines and policies for reinforcement learning. Proc. Int. Conf. Autom. Plann. Sched. 30(1), 590\u2013598 (2020). https:\/\/doi.org\/10.1609\/icaps.v30i1.6756, https:\/\/ojs.aaai.org\/index.php\/ICAPS\/article\/view\/6756","DOI":"10.1609\/icaps.v30i1.6756"},{"key":"5_CR46","doi-asserted-by":"publisher","unstructured":"Xu, Z., Wu, B., Ojha, A., Neider, D., Topcu, U.: Active finite reward automaton inference and reinforcement learning using queries and counterexamples. In: Machine Learning and Knowledge Extraction: 5th IFIP TC 5, TC 12, WG 8.4, WG 8.9, WG 12.9 International Cross-Domain Conference, CD-MAKE 2021, Virtual Event, August 17\u201320, 2021, Proceedings, pp. 115\u2013135, Springer-Verlag, Berlin, Heidelberg (2021). ISBN 978-3-030-84059-4, https:\/\/doi.org\/10.1007\/978-3-030-84060-0_8","DOI":"10.1007\/978-3-030-84060-0_8"},{"key":"5_CR47","unstructured":"Zhou, W., Li, W.: A hierarchical Bayesian approach to inverse reinforcement learning with symbolic reward machines. In: Chaudhuri, K., Jegelka, S., Song, L., Szepesvari, C., Niu, G., Sabato, S. (eds.) Proceedings of the 39th International Conference on Machine Learning, Proceedings of Machine Learning Research, vol. 162, pp. 27159\u201327178, PMLR (2022). 
https:\/\/proceedings.mlr.press\/v162\/zhou22b.html"}],"container-title":["Lecture Notes in Computer Science","Computer Aided Verification"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/978-3-031-98685-7_5","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T08:54:49Z","timestamp":1760086489000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/978-3-031-98685-7_5"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025]]},"ISBN":["9783031986840","9783031986857"],"references-count":47,"URL":"https:\/\/doi.org\/10.1007\/978-3-031-98685-7_5","relation":{},"ISSN":["0302-9743","1611-3349"],"issn-type":[{"type":"print","value":"0302-9743"},{"type":"electronic","value":"1611-3349"}],"subject":[],"published":{"date-parts":[[2025]]},"assertion":[{"value":"23 July 2025","order":1,"name":"first_online","label":"First Online","group":{"name":"ChapterHistory","label":"Chapter History"}},{"value":"The authors have no competing interests to declare that are relevant to the content of this article.","order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Disclosure of Interests"}},{"value":"CAV","order":1,"name":"conference_acronym","label":"Conference Acronym","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"International Conference on Computer Aided Verification","order":2,"name":"conference_name","label":"Conference Name","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"Zagreb","order":3,"name":"conference_city","label":"Conference City","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"Croatia","order":4,"name":"conference_country","label":"Conference Country","group":{"name":"ConferenceInfo","label":"Conference 
Information"}},{"value":"2025","order":5,"name":"conference_year","label":"Conference Year","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"21 July 2025","order":7,"name":"conference_start_date","label":"Conference Start Date","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"25 July 2025","order":8,"name":"conference_end_date","label":"Conference End Date","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"37","order":9,"name":"conference_number","label":"Conference Number","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"cav2025","order":10,"name":"conference_id","label":"Conference ID","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"https:\/\/conferences.i-cav.org\/2025\/","order":11,"name":"conference_url","label":"Conference URL","group":{"name":"ConferenceInfo","label":"Conference Information"}}]}}