{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,8]],"date-time":"2026-04-08T11:44:41Z","timestamp":1775648681380,"version":"3.50.1"},"reference-count":138,"publisher":"Association for Computing Machinery (ACM)","issue":"4","license":[{"start":{"date-parts":[[2024,11,28]],"date-time":"2024-11-28T00:00:00Z","timestamp":1732752000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"National Science Foundation","award":["1948017"],"award-info":[{"award-number":["1948017"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Evol. Learn. Optim."],"published-print":{"date-parts":[[2024,12,31]]},"abstract":"<jats:p>This article pursues the insight that language models naturally enable an intelligent variation operator similar in spirit to evolutionary crossover. In particular, language models of sufficient scale demonstrate in-context learning, i.e., they can learn from associations between a small number of input patterns to generate outputs incorporating such associations (also called few-shot prompting). This ability can be leveraged to form a simple but powerful variation operator, i.e., to prompt a language model with a few text-based genotypes (such as code, plain-text sentences, or equations), and to parse its corresponding output as those genotypes\u2019 offspring. The promise of such language model crossover (which is simple to implement and can leverage many different open source language models) is that it enables a simple mechanism to evolve semantically rich text representations (with few domain-specific tweaks), and naturally benefits from current progress in language models. 
Experiments in this article highlight the versatility of language-model crossover, through evolving binary bit-strings, sentences, equations, text-to-image prompts, and Python code. The conclusion is that language model crossover is a flexible and effective method for evolving genomes representable as text.<\/jats:p>","DOI":"10.1145\/3694791","type":"journal-article","created":{"date-parts":[[2024,9,5]],"date-time":"2024-09-05T15:21:32Z","timestamp":1725549692000},"page":"1-40","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":38,"title":["Language Model Crossover: Variation through Few-Shot Prompting"],"prefix":"10.1145","volume":"4","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-1871-2757","authenticated-orcid":false,"given":"Elliot","family":"Meyerson","sequence":"first","affiliation":[{"name":"Cognizant AI Labs, Madison, WI, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1882-8896","authenticated-orcid":false,"given":"Mark J.","family":"Nelson","sequence":"additional","affiliation":[{"name":"American University, Washington, DC, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5390-1257","authenticated-orcid":false,"given":"Herbie","family":"Bradley","sequence":"additional","affiliation":[{"name":"University of Cambridge, Cambridge, United Kingdom and CarperAI, Providence, RI, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4632-0929","authenticated-orcid":false,"given":"Adam","family":"Gaier","sequence":"additional","affiliation":[{"name":"Autodesk Research, San Raphael, CA, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6791-4988","authenticated-orcid":false,"given":"Arash","family":"Moradi","sequence":"additional","affiliation":[{"name":"New Jersey Institute of Technology, Newark, NJ, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4661-8178","authenticated-orcid":false,"given":"Amy K.","family":"Hoover","sequence":"additional","affiliation":[{"name":"New Jersey Institute of Technology, 
Newark, NJ, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9535-1123","authenticated-orcid":false,"given":"Joel","family":"Lehman","sequence":"additional","affiliation":[{"name":"CarperAI, Providence, RI, USA"}]}],"member":"320","published-online":{"date-parts":[[2024,11,28]]},"reference":[{"key":"e_1_3_2_2_1","doi-asserted-by":"publisher","DOI":"10.1142\/9789814282673_0010"},{"key":"e_1_3_2_3_1","doi-asserted-by":"publisher","DOI":"10.5555\/865123"},{"key":"e_1_3_2_4_1","first-page":"371","volume-title":"Proceedings of the International Conference on Parallel Problem Solving from Nature","author":"Bentley Peter J.","year":"2022","unstructured":"Peter J. Bentley, Soo Ling Lim, Adam Gaier, and Linh Tran. 2022. Evolving through the looking glass: Learning improved search spaces with variational autoencoders. In Proceedings of the International Conference on Parallel Problem Solving from Nature. Springer, 371\u2013384."},{"key":"e_1_3_2_5_1","doi-asserted-by":"publisher","DOI":"10.1023\/A:1015059928466"},{"key":"e_1_3_2_6_1","doi-asserted-by":"publisher","DOI":"10.1109\/CEC.2019.8790077"},{"key":"e_1_3_2_7_1","first-page":"2397","volume-title":"Proceedings of the International Conference on Machine Learning.","author":"Biderman Stella","year":"2023","unstructured":"Stella Biderman, Hailey Schoelkopf, Quentin Gregory Anthony, Herbie Bradley, Kyle O\u2019Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, U. S. V. S. N. Sai Prashanth, Edward Raff, Aviya Skowron, Lintang Sutawika, and Oskar van der Wal. 2023. Pythia: A suite for analyzing large language models across training and scaling. In Proceedings of the International Conference on Machine Learning. PMLR, 2397\u20132430."},{"key":"e_1_3_2_8_1","first-page":"936","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Biggio Luca","year":"2021","unstructured":"Luca Biggio, Tommaso Bendinelli, Alexander Neitz, Aurelien Lucchi, and Giambattista Parascandolo. 2021. 
Neural symbolic regression that scales. In Proceedings of the International Conference on Machine Learning. PMLR, 936\u2013945."},{"key":"e_1_3_2_9_1","unstructured":"Rishi Bommasani Drew A. Hudson Ehsan Adeli Russ Altman Simran Arora Sydney von Arx Michael S. Bernstein Jeannette Bohg Antoine Bosselut Emma Brunskill et al. 2021. On the opportunities and risks of foundation models. arXiv:2108.07258. Retrieved from https:\/\/arxiv.org\/abs\/2108.07258"},{"key":"e_1_3_2_10_1","doi-asserted-by":"crossref","first-page":"267","DOI":"10.1007\/978-3-319-77583-8_18","volume-title":"Proceedings of the Computational Intelligence in Music, Sound, Art and Design: 7th International Conference, EvoMUSART \u201918","author":"Bontrager Philip","year":"2018","unstructured":"Philip Bontrager, Wending Lin, Julian Togelius, and Sebastian Risi. 2018. Deep interactive evolution. In Proceedings of the Computational Intelligence in Music, Sound, Art and Design: 7th International Conference, EvoMUSART \u201918. Springer, 267\u2013282."},{"key":"e_1_3_2_11_1","volume-title":"Proceedings of the 12th International Conference on Learning Representations","author":"Bradley Herbie","year":"2024","unstructured":"Herbie Bradley, Andrew Dai, Hannah Benita Teufel, Jenny Zhang, Koen Oostermeijer, Marco Bellagente, Jeff Clune, Kenneth Stanley, Gregory Schott, and Joel Lehman. 2024a. Quality-diversity through AI feedback. In Proceedings of the 12th International Conference on Learning Representations. ICLR."},{"key":"e_1_3_2_12_1","doi-asserted-by":"crossref","first-page":"177","DOI":"10.1007\/978-981-99-8413-8_10","volume-title":"Genetic Programming Theory and Practice XX","author":"Bradley Herbie","year":"2024","unstructured":"Herbie Bradley, Honglu Fan, Theodoros Galanos, Ryan Zhou, Daniel Scott, and Joel Lehman. 2024b. The OpenELM library: Leveraging progress in language models for novel evolutionary algorithms. In Genetic Programming Theory and Practice XX. 
Springer, 177\u2013201."},{"key":"e_1_3_2_13_1","first-page":"1877","article-title":"Language models are few-shot learners","volume":"33","author":"Brown Tom","year":"2020","unstructured":"Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems 33 (2020), 1877\u20131901.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_14_1","doi-asserted-by":"crossref","unstructured":"Jose Camacho-Collados Kiamehr Rezaee Talayeh Riahi Asahi Ushio Daniel Loureiro Dimosthenis Antypas Joanne Boisson Luis Espinosa-Anke Fangyu Liu and Eugenio Mart\u00ednez-C\u00e1mara. 2022. TweetNLP: Cutting-edge natural language processing for social media. arXiv:2206.14774.","DOI":"10.18653\/v1\/2022.emnlp-demos.5"},{"key":"e_1_3_2_15_1","first-page":"18878","volume-title":"Proceedings of the Conference on Neural Information Processing Systems (NeurIPS)","author":"Chan Stephanie C. Y.","year":"2022","unstructured":"Stephanie C. Y. Chan, Adam Santoro, Andrew K. Lampinen, Jane X. Wang, Aaditya Singh, Pierre H. Richemond, Jay McClelland, and Felix Hill. 2022. Data distributional properties drive emergent few-shot learning in transformers. In Proceedings of the Conference on Neural Information Processing Systems (NeurIPS), 18878\u201318891."},{"key":"e_1_3_2_16_1","first-page":"7787","article-title":"EvoPrompting: Language models for code-level neural architecture search","volume":"36","author":"Chen Angelica","year":"2023","unstructured":"Angelica Chen, David M. Dohan, and David R. So. 2023. EvoPrompting: Language models for code-level neural architecture search. 
Advances in Neural Information Processing Systems 36 (2023), 7787\u20137817.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_17_1","unstructured":"Mark Chen Jerry Tworek Heewoo Jun Qiming Yuan Henrique Ponde de Oliveira Pinto Jared Kaplan Harri Edwards Yuri Burda Nicholas Joseph Greg Brockman et al. 2021. Evaluating large language models trained on code. arXiv:2107.03374."},{"key":"e_1_3_2_18_1","first-page":"1137","volume-title":"Proceedings of the IEEE Congress on Evolutionary Computation (CEC)","author":"Chen Qi","year":"2015","unstructured":"Qi Chen, Bing Xue, and Mengjie Zhang. 2015. Generalisation and domain adaptation in GP with gradient descent for symbolic regression. In Proceedings of the IEEE Congress on Evolutionary Computation (CEC). IEEE, 1137\u20131144."},{"key":"e_1_3_2_19_1","first-page":"3080","volume-title":"Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing","volume":"1","author":"Cheng Hao","year":"2021","unstructured":"Hao Cheng, Yelong Shen, Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2021. UnitedQA: A hybrid approach for open domain question answering. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Vol. 1: Long Papers). Association for Computational Linguistics, 3080\u20133090."},{"key":"e_1_3_2_20_1","doi-asserted-by":"crossref","first-page":"753","DOI":"10.1145\/3071178.3071285","volume-title":"Proceedings of the Genetic and Evolutionary Computation Conference (GECCO)","author":"Chicano Francisco","year":"2017","unstructured":"Francisco Chicano, Darrell Whitley, Gabriela Ochoa, and Renato Tin\u00f3s. 2017. Optimizing one million variable NK landscapes by hybridizing deterministic recombination and local search. 
In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), 753\u2013760."},{"issue":"70","key":"e_1_3_2_21_1","first-page":"1","article-title":"Scaling instruction-finetuned language models","volume":"25","author":"Chung Hyung Won","year":"2024","unstructured":"Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. 2024. Scaling instruction-finetuned language models. Journal of Machine Learning Research 25, 70 (2024), 1\u201353.","journal-title":"Journal of Machine Learning Research"},{"key":"e_1_3_2_22_1","unstructured":"Alexander W. Churchill Siddharth Sigtia and Chrisantha Fernando. 2014. A denoising autoencoder that guides stochastic search. arXiv:1404.1614."},{"key":"e_1_3_2_23_1","doi-asserted-by":"publisher","DOI":"10.1007\/s40747-019-0113-4"},{"key":"e_1_3_2_24_1","first-page":"16318","article-title":"Towards automated circuit discovery for mechanistic interpretability","volume":"36","author":"Conmy Arthur","year":"2023","unstructured":"Arthur Conmy, Augustine Mavor-Parker, Aengus Lynch, Stefan Heimersheim, and Adri\u00e0 Garriga-Alonso. 2023. Towards automated circuit discovery for mechanistic interpretability. 
Advances in Neural Information Processing Systems 36 (2023), 16318\u201316352.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_25_1","doi-asserted-by":"publisher","DOI":"10.1109\/TEVC.2017.2704781"},{"key":"e_1_3_2_26_1","doi-asserted-by":"publisher","DOI":"10.1007\/BF02551274"},{"key":"e_1_3_2_27_1","volume-title":"Evolutionary Computation: A Unified Approach","author":"Jong Kenneth A. De","year":"2006","unstructured":"Kenneth A. De Jong. 2006. Evolutionary Computation: A Unified Approach. MIT Press, Cambridge, Massachusetts."},{"key":"e_1_3_2_28_1","first-page":"653","volume-title":"Proceedings of the Genetic and Evolutionary Computation Conference (GECCO)","author":"Deb Kalyanmoy","year":"2016","unstructured":"Kalyanmoy Deb and Christie Myburgh. 2016. Breaking the billion-variable barrier in real-world optimization using a customized evolutionary algorithm. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), 653\u2013660."},{"key":"e_1_3_2_29_1","doi-asserted-by":"publisher","DOI":"10.1109\/4235.996017"},{"key":"e_1_3_2_30_1","doi-asserted-by":"publisher","DOI":"10.1109\/MSP.2017.2696576"},{"key":"e_1_3_2_31_1","first-page":"255","article-title":"Keel data-mining software tool: Data set repository, integration of algorithms and experimental analysis framework","volume":"17","author":"Derrac J.","year":"2015","unstructured":"J. Derrac, S. Garcia, L. Sanchez, and F. Herrera. 2015. Keel data-mining software tool: Data set repository, integration of algorithms and experimental analysis framework. Journal of Multiple-Valued Logic and Soft Computing 17 (2015), 255\u2013287.","journal-title":"Journal of Multiple-Valued Logic and Soft Computing"},{"key":"e_1_3_2_32_1","volume-title":"Proceedings of NAACL-HLT","author":"Devlin Jacob","year":"2019","unstructured":"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. 
BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT. Association for Computational Linguistics."},{"key":"e_1_3_2_33_1","unstructured":"Benjamin Doerr and Anne Auger. 2011. Theory of Randomized Search Heuristics: Foundations and Recent Developments. World Scientific Singapore. DOI: https:\/\/cds.cern.ch\/record\/1413962"},{"key":"e_1_3_2_34_1","doi-asserted-by":"crossref","first-page":"854","DOI":"10.18653\/v1\/2021.findings-emnlp.73","volume-title":"Findings of the Association for Computational Linguistics: EMNLP 2021","author":"Fajcik Martin","year":"2021","unstructured":"Martin Fajcik, Martin Docekal, Karel Ondrej, and Pavel Smrz. 2021. R2-D2: A modular baseline for open-domain question answering. In Findings of the Association for Computational Linguistics: EMNLP 2021. Association for Computational Linguistics, 854\u2013870."},{"key":"e_1_3_2_35_1","unstructured":"Chrisantha Fernando Dylan Banarse Henryk Michalewski Simon Osindero and Tim Rockt\u00e4schel. 2023. Promptbreeder: Self-referential self-improvement via prompt evolution. arXiv:2309.16797."},{"key":"e_1_3_2_36_1","first-page":"10040","article-title":"Differentiable quality diversity","volume":"34","author":"Fontaine Matthew","year":"2021","unstructured":"Matthew Fontaine and Stefanos Nikolaidis. 2021. Differentiable quality diversity. Advances in Neural Information Processing Systems 34 (2021), 10040\u201310052.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_37_1","first-page":"103","volume-title":"Proceedings of the Genetic and Evolutionary Computation Conference (GECCO)","author":"Gaier Adam","year":"2020","unstructured":"Adam Gaier, Alexander Asteroth, and Jean-Baptiste Mouret. 2020. Discovering representations for black-box optimization. 
In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), 103\u2013111."},{"key":"e_1_3_2_38_1","doi-asserted-by":"crossref","first-page":"849","DOI":"10.1145\/3205455.3205645","volume-title":"Proceedings of the Genetic and Evolutionary Computation Conference (GECCO)","author":"Garciarena Unai","year":"2018","unstructured":"Unai Garciarena, Roberto Santana, and Alexander Mendiburu. 2018. Expanding variational autoencoders for learning and exploiting latent representations in search distributions. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), 849\u2013856."},{"key":"e_1_3_2_39_1","volume-title":"Proceedings of the International Conference on Machine Learning (ICML)","author":"Giannou Angeliki","year":"2023","unstructured":"Angeliki Giannou, Shashank Rajput, Jy-yong Sohn, Kangwook Lee, Jason D. Lee, and Dimitris Papailiopoulos. 2023. Looped transformers as programmable computers. In Proceedings of the International Conference on Machine Learning (ICML)."},{"key":"e_1_3_2_40_1","first-page":"41","volume-title":"Genetic Algorithms and Their Applications: Proceedings of the Second International Conference on Genetic Algorithms","author":"Goldberg David E.","year":"1987","unstructured":"David E. Goldberg and Jon Richardson. 1987. Genetic algorithms with sharing for multimodal function optimization. In Genetic Algorithms and Their Applications: Proceedings of the Second International Conference on Genetic Algorithms. Hillsdale, NJ: Lawrence Erlbaum, 41\u201349."},{"key":"e_1_3_2_41_1","unstructured":"Alex Graves. 2013. Generating sequences with recurrent neural networks. arXiv:1308.0850."},{"key":"e_1_3_2_42_1","volume-title":"Proceedings of the International Conference on Learning Representations (ICLR)","author":"Guo Qingyan","year":"2024","unstructured":"Qingyan Guo, Rui Wang, Junliang Guo, Bei Li, Kaitao Song, Xu Tan, Guoqing Liu, Jiang Bian, and Yujiu Yang. 2024. 
Connecting large language models with evolutionary algorithms yields powerful prompt optimizers. In Proceedings of the International Conference on Learning Representations (ICLR). ICLR."},{"key":"e_1_3_2_43_1","unstructured":"Nikolaus Hansen. 2016. The CMA evolution strategy: A tutorial. arXiv:1604.00772."},{"key":"e_1_3_2_44_1","doi-asserted-by":"publisher","DOI":"10.1162\/106365601750190398"},{"key":"e_1_3_2_45_1","doi-asserted-by":"publisher","DOI":"10.1109\/4235.797971"},{"key":"e_1_3_2_46_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.swevo.2011.08.003"},{"key":"e_1_3_2_47_1","doi-asserted-by":"publisher","DOI":"10.1126\/science.1127647"},{"key":"e_1_3_2_48_1","doi-asserted-by":"publisher","DOI":"10.1038\/scientificamerican0792-66"},{"key":"e_1_3_2_49_1","doi-asserted-by":"publisher","DOI":"10.1016\/0893-6080(89)90020-8"},{"key":"e_1_3_2_50_1","doi-asserted-by":"publisher","DOI":"10.1023\/B:JMMA.0000049378.57591.c6"},{"key":"e_1_3_2_51_1","doi-asserted-by":"publisher","DOI":"10.1162\/coli_a_00426"},{"key":"e_1_3_2_52_1","doi-asserted-by":"publisher","DOI":"10.1371\/journal.pone.0213918"},{"key":"e_1_3_2_53_1","first-page":"10269","volume-title":"Proceedings of the Advances in Neural Information Processing Systems","author":"Kamienny Pierre-Alexandre","year":"2022","unstructured":"Pierre-Alexandre Kamienny, St\u00e9phane d\u2019Ascoli, Guillaume Lample, and Francois Charton. 2022. End-to-end symbolic regression with transformers. In Proceedings of the Advances in Neural Information Processing Systems, 10269\u201310281."},{"key":"e_1_3_2_54_1","doi-asserted-by":"publisher","DOI":"10.1145\/3555858.3563267"},{"key":"e_1_3_2_55_1","first-page":"953","article-title":"On the representation of continuous functions of many variables by superposition of continuous functions of one variable and addition","volume":"114","author":"Kolmogorov Andrei Nikolaevich","year":"1957","unstructured":"Andrei Nikolaevich Kolmogorov. 1957. 
On the representation of continuous functions of many variables by superposition of continuous functions of one variable and addition. In Doklady Akademii Nauk, Vol. 114. Russian Academy of Sciences, 953\u2013956.","journal-title":"Doklady Akademii Nauk"},{"key":"e_1_3_2_56_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10710-019-09371-3"},{"key":"e_1_3_2_57_1","doi-asserted-by":"crossref","first-page":"662","DOI":"10.1007\/978-3-319-46448-0_40","volume-title":"Proceedings of the Computer Vision\u2013ECCV 2016: 14th European Conference","author":"Kong Shu","year":"2016","unstructured":"Shu Kong, Xiaohui Shen, Zhe Lin, Radomir Mech, and Charless Fowlkes. 2016. Photo aesthetics ranking network with attributes and content adaptation. In Proceedings of the Computer Vision\u2013ECCV 2016: 14th European Conference. Springer, 662\u2013679."},{"key":"e_1_3_2_58_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-29414-4_9"},{"key":"e_1_3_2_59_1","unstructured":"William La Cava Patryk Orzechowski Bogdan Burlacu Fabricio Olivetti de Franca Marco Virgolin Ying Jin Michael Kommenda and Jason H. Moore. 2021. Contemporary symbolic regression methods and their relative performance. In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks. J. Vanschoren and S. Yeung (Eds.). Retrieved from https:\/\/datasets-benchmarks-proceedings.neurips.cc\/paper_files\/paper\/2021\/file\/c0c7c76d30bd3dcaefc96f40275bdc0a-Paper-round1.pdf"},{"key":"e_1_3_2_60_1","volume-title":"Foundations of Genetic Programming","author":"Langdon William B.","year":"2013","unstructured":"William B. Langdon and Riccardo Poli. 2013. Foundations of Genetic Programming. 
Springer Science & Business Media."},{"key":"e_1_3_2_61_1","doi-asserted-by":"crossref","first-page":"57","DOI":"10.1007\/978-1-4615-1539-5_3","volume-title":"Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation","author":"Larranaga Pedro","year":"2002","unstructured":"Pedro Larranaga. 2002. A review on estimation of distribution algorithms: 3. In Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation, 57\u2013100."},{"key":"e_1_3_2_62_1","volume":"2","author":"Larra\u00f1aga Pedro","year":"2001","unstructured":"Pedro Larra\u00f1aga and Jose A. Lozano. 2001. Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation, Vol. 2. Pedro Larra\u00f1aga and Jose A. Lozano (Eds.), Springer Science & Business Media.","journal-title":"Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation"},{"key":"e_1_3_2_63_1","doi-asserted-by":"publisher","DOI":"10.1109\/TSE.2011.104"},{"key":"e_1_3_2_64_1","first-page":"331","volume-title":"Handbook of Evolutionary Machine Learning","author":"Lehman Joel","year":"2023","unstructured":"Joel Lehman, Jonathan Gordon, Shawn Jain, Kamal Ndousse, Cathy Yeh, and Kenneth O. Stanley. 2023. Evolution through large models. In Handbook of Evolutionary Machine Learning. Springer, 331\u2013366."},{"key":"e_1_3_2_65_1","doi-asserted-by":"publisher","DOI":"10.1162\/EVCO_a_00025"},{"key":"e_1_3_2_66_1","first-page":"211","volume-title":"Proceedings of the Genetic and Evolutionary Computation Conference (GECCO),","author":"Lehman Joel","year":"2011","unstructured":"Joel Lehman and Kenneth O. Stanley. 2011b. Evolving a diversity of virtual creatures through novelty search and local competition. 
In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), 211\u2013218."},{"key":"e_1_3_2_67_1","doi-asserted-by":"publisher","DOI":"10.1126\/science.abq1158"},{"key":"e_1_3_2_68_1","unstructured":"Fei Liu Xi Lin Zhenkun Wang Shunyu Yao Xialiang Tong Mingxuan Yuan and Qingfu Zhang. 2023a. Large language model for multi-objective evolutionary optimization. arXiv:2310.12541. Retrieved from https:\/\/arxiv.org\/abs\/2310.12541"},{"key":"e_1_3_2_69_1","unstructured":"Fei Liu Xialiang Tong Mingxuan Yuan and Qingfu Zhang. 2023b. Algorithm evolution using large language model. arXiv:2311.15249. Retrieved from https:\/\/arxiv.org\/abs\/2311.15249"},{"key":"e_1_3_2_70_1","doi-asserted-by":"publisher","DOI":"10.1162\/tacl_a_00638"},{"key":"e_1_3_2_71_1","volume-title":"Proceedings of the Genetic and Evolutionary Computation Conference (GECCO)","author":"Liventsev Vadim","year":"2023","unstructured":"Vadim Liventsev, Anastasiia Grishina, Aki H\u00e4rm\u00e4, and Leon Moonen. 2023. Fully autonomous programming with large language models. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), ACM."},{"key":"e_1_3_2_72_1","first-page":"2507","article-title":"Learn to explain: Multimodal reasoning via thought chains for science question answering","volume":"35","author":"Lu Pan","year":"2022","unstructured":"Pan Lu, Swaroop Mishra, Tony Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. 2022b. Learn to explain: Multimodal reasoning via thought chains for science question answering. 
Advances in Neural Information Processing Systems 35 (2022), 2507\u20132521.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_73_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2022.acl-long.556"},{"key":"e_1_3_2_74_1","volume-title":"Proceedings of the International Conference on Learning Representations (ICLR)","author":"Ma Yecheng Jason","year":"2024","unstructured":"Yecheng Jason Ma, William Liang, Guanzhi Wang, De-An Huang, Osbert Bastani, Dinesh Jayaraman, Yuke Zhu, Linxi Fan, and Anima Anandkumar. 2024. Eureka: Human-level reward design via coding large language models. In Proceedings of the International Conference on Learning Representations (ICLR). ICLR."},{"key":"e_1_3_2_75_1","volume-title":"Niching Methods for Genetic Algorithms","author":"Mahfoud Samir W.","year":"1995","unstructured":"Samir W. Mahfoud. 1995. Niching Methods for Genetic Algorithms. Ph.D. Dissertation. University of Illinois at Urbana-Champaign."},{"key":"e_1_3_2_76_1","first-page":"791","volume-title":"Proceedings of the Genetic and Evolutionary Computation Conference (GECCO)","author":"McDermott James","year":"2012","unstructured":"James McDermott, David R. White, Sean Luke, Luca Manzoni, Mauro Castelli, Leonardo Vanneschi, Wojciech Jaskowski, Krzysztof Krawiec, Robin Harper, Kenneth De Jong, et al. 2012. Genetic programming needs better benchmarks. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), 791\u2013798."},{"key":"e_1_3_2_77_1","doi-asserted-by":"crossref","first-page":"327","DOI":"10.1117\/12.336896","volume-title":"Proceedings of the SPIE Conference on Precision Agriculture and Biological Quality","volume":"3543","author":"Meyer George E.","year":"1999","unstructured":"George E. Meyer, Timothy W. Hindman, and Koppolu Laksmi. 1999. Machine vision detection parameters for plant species identification. In Proceedings of the SPIE Conference on Precision Agriculture and Biological Quality, Vol. 
3543, 327\u2013335."},{"key":"e_1_3_2_78_1","doi-asserted-by":"crossref","unstructured":"Elliot Meyerson Mark J. Nelson Herbie Bradley Adam Gaier Arash Moradi Amy K. Hoover and Joel Lehman. 2023. Language model crossover: Variation through few-shot prompting. arXiv:2302.12170. Retrieved from https:\/\/arxiv.org\/abs\/2302.12170","DOI":"10.1145\/3694791"},{"key":"e_1_3_2_79_1","doi-asserted-by":"crossref","first-page":"739","DOI":"10.1145\/3512290.3528746","volume-title":"Proceedings of the Genetic and Evolutionary Computation Conference (GECCO)","author":"Meyerson Elliot","year":"2022","unstructured":"Elliot Meyerson, Xin Qiu, and Risto Miikkulainen. 2022. Simple genetic operators are universal approximators of probability distributions (and other advantages of expressive encodings). In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), 739\u2013748."},{"key":"e_1_3_2_80_1","doi-asserted-by":"crossref","first-page":"983","DOI":"10.1145\/3205455.3205597","volume-title":"Proceedings of the Genetic and Evolutionary Computation Conference (GECCO)","author":"Moreno Matthew Andres","year":"2018","unstructured":"Matthew Andres Moreno, Wolfgang Banzhaf, and Charles Ofria. 2018. Learning an evolvable genotype-phenotype mapping. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), 983\u2013990."},{"key":"e_1_3_2_81_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-642-18272-3_10"},{"key":"e_1_3_2_82_1","unstructured":"Jean-Baptiste Mouret and Jeff Clune. 2015. Illuminating search spaces by mapping elites. 
arXiv:1504.04909."},{"key":"e_1_3_2_83_1","doi-asserted-by":"publisher","DOI":"10.5555\/645823.670694"},{"key":"e_1_3_2_84_1","doi-asserted-by":"publisher","DOI":"10.1007\/s13278-021-00776-6"},{"key":"e_1_3_2_85_1","doi-asserted-by":"crossref","first-page":"1110","DOI":"10.1145\/3638529.3654017","volume-title":"Proceedings of the Genetic and Evolutionary Computation Conference (GECCO)","author":"Nasir Muhammad U.","year":"2024","unstructured":"Muhammad U. Nasir, Sam Earle, Julian Togelius, Steven James, and Christopher Cleghorn. 2024. LLMatic: Neural architecture search via large language models and quality diversity optimization. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), 1110\u20131118."},{"key":"e_1_3_2_86_1","volume-title":"Proceedings of the International Conference on Learning Representations (ICLR)","author":"Nijkamp Erik","year":"2023","unstructured":"Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. 2023. CodeGen: An open large language model for code with multi-turn program synthesis. In Proceedings of the International Conference on Learning Representations (ICLR). ICLR."},{"key":"e_1_3_2_87_1","doi-asserted-by":"publisher","DOI":"10.1162\/evco.2006.14.2.157"},{"key":"e_1_3_2_88_1","volume-title":"Behaviour & Information Technology","author":"Oppenlaender Jonas","year":"2023","unstructured":"Jonas Oppenlaender. 2023. A taxonomy of prompt modifiers for text-to-image generation. Behaviour & Information Technology (2023)."},{"key":"e_1_3_2_89_1","doi-asserted-by":"publisher","DOI":"10.1145\/3205455.3205539"},{"key":"e_1_3_2_90_1","first-page":"525","volume-title":"Proceedings of the Genetic and Evolutionary Computation Conference (GECCO)","volume":"1","author":"Pelikan Martin","year":"1999","unstructured":"Martin Pelikan, David E. Goldberg, and Erick Cant\u00fa-Paz. 1999. BOA: The Bayesian optimization algorithm. 
In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), Vol. 1, 525\u2013532."},{"key":"e_1_3_2_91_1","doi-asserted-by":"publisher","DOI":"10.1007\/b10910"},{"key":"e_1_3_2_92_1","volume-title":"Proceedings of the International Conference on Learning Representations","author":"Petersen Brenden K.","year":"2021","unstructured":"Brenden K. Petersen, Mikel Landajuela Larma, Terrell N. Mundhenk, Claudio Prata Santiago, Soo Kyung Kim, and Joanne Taery Kim. 2021. Deep symbolic regression: Recovering mathematical expressions from data via risk-seeking policy gradients. In Proceedings of the International Conference on Learning Representations. ICLR."},{"key":"e_1_3_2_93_1","doi-asserted-by":"publisher","DOI":"10.1145\/2739482.2764691"},{"key":"e_1_3_2_94_1","doi-asserted-by":"publisher","DOI":"10.5555\/3455716.3455794"},{"key":"e_1_3_2_95_1","first-page":"8748","volume-title":"Proceedings of the 38th International Conference on Machine Learning","author":"Radford Alec","year":"2021","unstructured":"Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. Learning transferable visual models from natural language supervision. In Proceedings of the 38th International Conference on Machine Learning, 8748\u20138763."},{"key":"e_1_3_2_96_1","doi-asserted-by":"crossref","first-page":"901","DOI":"10.1145\/3449639.3459320","volume-title":"Proceedings of the Genetic and Evolutionary Computation Conference (GECCO)","author":"Rakicevic Nemanja","year":"2021","unstructured":"Nemanja Rakicevic, Antoine Cully, and Petar Kormushev. 2021. Policy manifold search: Exploring the manifold hypothesis for diversity-based neuroevolution. 
In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), 901\u2013909."},{"key":"e_1_3_2_97_1","first-page":"8821","volume-title":"Proceedings of the International Conference on Machine Learning.","author":"Ramesh Aditya","year":"2021","unstructured":"Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. 2021. Zero-shot text-to-image generation. In Proceedings of the International Conference on Machine Learning. PMLR, 8821\u20138831."},{"key":"e_1_3_2_98_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/D19-1410"},{"key":"e_1_3_2_99_1","first-page":"10684","volume-title":"Proceedings of the Computer Vision and Pattern Recognition Conference","author":"Rombach Robin","year":"2022","unstructured":"Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bj\u00f6rn Ommer. 2022. High-resolution image synthesis with latent diffusion models. In Proceedings of the Computer Vision and Pattern Recognition Conference, 10684\u201310695."},{"key":"e_1_3_2_100_1","doi-asserted-by":"crossref","unstructured":"Bernardino Romera-Paredes Mohammadamin Barekatain Alexander Novikov Matej Balog M. Pawan Kumar Emilien Dupont Francisco J. R. Ruiz Jordan S. Ellenberg Pengming Wang Omar Fawzi et al. 2024. Mathematical discoveries from program search with large language models. Nature 625 (2024) 468\u2013475.","DOI":"10.1038\/s41586-023-06924-6"},{"key":"e_1_3_2_101_1","volume-title":"Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL),2655\u20132671","author":"Rubin Ohad","year":"2022","unstructured":"Ohad Rubin, Jonathan Herzig, and Jonathan Berant. 2022. Learning to retrieve prompts for in-context learning. 
In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2655\u20132671."},{"key":"e_1_3_2_102_1","volume-title":"Proceedings of the International Conference on Learning Representations (ICLR)","author":"Sanh Victor","year":"2022","unstructured":"Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, et al. 2022. Multitask prompted training enables zero-shot task generalization. In Proceedings of the International Conference on Learning Representations (ICLR). ICLR."},{"key":"e_1_3_2_103_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.eacl-main.20"},{"key":"e_1_3_2_104_1","doi-asserted-by":"publisher","DOI":"10.1126\/science.1165893"},{"key":"e_1_3_2_105_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-1-4419-7747-2_8"},{"key":"e_1_3_2_106_1","first-page":"148","volume-title":"Proceedings of the Genetic and Evolutionary Computation Conference (GECCO)","author":"Schrum Jacob","year":"2020","unstructured":"Jacob Schrum, Jake Gutierrez, Vanessa Volz, Jialin Liu, Simon Lucas, and Sebastian Risi. 2020. Interactive evolution and exploration within latent level-design space of generative adversarial networks. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), 148\u2013156."},{"key":"e_1_3_2_107_1","unstructured":"Christoph Schuhmann. 2022. LAION-Aesthetics. Retrieved February 9 2023 from https:\/\/laion.ai\/blog\/laion-aesthetics\/"},{"key":"e_1_3_2_108_1","unstructured":"Dale Schuurmans. 2023. Memory augmented large language models are computationally universal. arXiv:2301.04589."},{"key":"e_1_3_2_109_1","doi-asserted-by":"crossref","first-page":"1759","DOI":"10.1145\/1357054.1357328","volume-title":"Proceedings of the SIGCHI Conference on Human Factors in Computing Systems","author":"Secretan Jimmy","year":"2008","unstructured":"Jimmy Secretan, Nicholas Beato, David B. 
D. Ambrosio, Adelein Rodriguez, Adam Campbell, and Kenneth O. Stanley. 2008. Picbreeder: Evolving pictures collaboratively online. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 1759\u20131768."},{"key":"e_1_3_2_110_1","doi-asserted-by":"crossref","first-page":"21","DOI":"10.1007\/978-3-642-28900-2_2","volume-title":"Markov Networks in Evolutionary Computation","author":"Shakya Siddhartha","year":"2012","unstructured":"Siddhartha Shakya and Roberto Santana. 2012. A review of estimation of distribution algorithms and Markov networks. Markov Networks in Evolutionary Computation (2012), 21\u201337."},{"key":"e_1_3_2_111_1","doi-asserted-by":"publisher","DOI":"10.1145\/584091.584093"},{"key":"e_1_3_2_112_1","unstructured":"Haihao Shen Hanwen Chang Bo Dong Yu Luo and Hengyu Meng. 2023. Efficient LLM inference on CPUs. arXiv:2311.00502."},{"key":"e_1_3_2_113_1","unstructured":"Chandan Singh Jeevana Priya Inala Michel Galley Rich Caruana and Jianfeng Gao. 2024. Rethinking interpretability in the era of large language models. arXiv:2402.01761. Retrieved from https:\/\/arxiv.org\/abs\/2402.01761"},{"key":"e_1_3_2_114_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10710-007-9028-8"},{"key":"e_1_3_2_115_1","first-page":"3008","article-title":"Learning to summarize with human feedback","volume":"33","author":"Stiennon Nisan","year":"2020","unstructured":"Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F. Christiano. 2020. Learning to summarize with human feedback. 
Advances in Neural Information Processing Systems 33 (2020), 3008\u20133021.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_116_1","first-page":"54213","article-title":"MarioGPT: Open-ended text2level generation through large language models","volume":"36","author":"Sudhakaran Shyam","year":"2023","unstructured":"Shyam Sudhakaran, Miguel Gonz\u00e1lez-Duque, Matthias Freiberger, Claire Glanois, Elias Najarro, and Sebastian Risi. 2023. MarioGPT: Open-ended text2level generation through large language models. Advances in Neural Information Processing Systems 36 (2023), 54213\u201354227.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_117_1","first-page":"218","volume-title":"Proceedings of the 12th European Conference on Artificial Life (ECAL \u201913)","author":"Szerlip Paul","year":"2013","unstructured":"Paul Szerlip and Kenneth Stanley. 2013. Indirectly encoded sodarace for artificial life. In Proceedings of the 12th European Conference on Artificial Life (ECAL \u201913). MIT Press, 218\u2013225."},{"key":"e_1_3_2_118_1","unstructured":"Ross Taylor Marcin Kardas Guillem Cucurull Thomas Scialom Anthony Hartshorn Elvis Saravia Andrew Poulton Viktor Kerkez and Robert Stojnic. 2022. Galactica: A large language model for science. arXiv:2211.09085."},{"key":"e_1_3_2_119_1","article-title":"GSR: A generalized symbolic regression approach","author":"Tohme Tony","year":"2022","unstructured":"Tony Tohme, Dehong Liu, and Kamal Youcef-Toumi. 2022. GSR: A generalized symbolic regression approach. Transactions on Machine Learning Research (2022).","journal-title":"Transactions on Machine Learning Research"},{"key":"e_1_3_2_120_1","first-page":"3","volume-title":"Proceedings of the NeurIPS 2020 Competition and Demonstration Track","author":"Turner Ryan","year":"2021","unstructured":"Ryan Turner, David Eriksson, Michael McCourt, Juha Kiili, Eero Laaksonen, Zhen Xu, and Isabelle Guyon. 2021. 
Bayesian optimization is superior to random search for machine learning hyperparameter tuning: Analysis of the black-box optimization challenge 2020. In Proceedings of the NeurIPS 2020 Competition and Demonstration Track. PMLR, 3\u201326."},{"issue":"16","key":"e_1_3_2_121_1","article-title":"AI Feynman: A physics-inspired method for symbolic regression","volume":"6","author":"Udrescu Silviu-Marian","year":"2020","unstructured":"Silviu-Marian Udrescu and Max Tegmark. 2020. AI Feynman: A physics-inspired method for symbolic regression. Science Advances 6, 16 (2020).","journal-title":"Science Advances"},{"key":"e_1_3_2_122_1","article-title":"Attention is all you need","volume":"30","author":"Vaswani Ashish","year":"2017","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in Neural Information Processing Systems 30 (2017), 5998\u20136008.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_123_1","first-page":"35151","volume-title":"Proceedings of the International Conference on Machine Learning (ICML)","author":"Oswald Johannes von","year":"2023","unstructured":"Johannes von Oswald, Eyvind Niklasson, Ettore Randazzo, Jo\u00e3o Sacramento, Alexander Mordvintsev, Andrey Zhmoginov, and Max Vladymyrov. 2023. Transformers learn in-context by gradient descent. In Proceedings of the International Conference on Machine Learning (ICML), 35151\u201335174."},{"key":"e_1_3_2_124_1","first-page":"128","volume-title":"Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)","author":"Werra Leandro von","year":"2022","unstructured":"Leandro von Werra, Lewis Tunstall, Abhishek Thakur, Alexandra Sasha Luccioni, Tristan Thrush, Aleksandra Piktus, Felix Marty, Nazneen Rajani, Victor Mustar, Helen Ngo, et al. 2022. 
Evaluate & evaluation on the hub: Better best practices for data and model measurement. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), 128\u2013136."},{"key":"e_1_3_2_125_1","doi-asserted-by":"crossref","first-page":"142","DOI":"10.1145\/3321707.3321799","volume-title":"Proceedings of the Genetic and Evolutionary Computation Conference (GECCO)","author":"Wang Rui","year":"2019","unstructured":"Rui Wang, Joel Lehman, Jeff Clune, and Kenneth O. Stanley. 2019a. Poet: Open-ended coevolution of environments and their optimized solutions. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), 142\u2013151."},{"key":"e_1_3_2_126_1","doi-asserted-by":"publisher","DOI":"10.1557\/mrc.2019.85"},{"key":"e_1_3_2_127_1","volume-title":"Proceedings of the International Conference on Learning Representations (ICLR)","author":"Wei Jason","year":"2022","unstructured":"Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. 2022a. Finetuned language models are zero-shot learners. In Proceedings of the International Conference on Learning Representations (ICLR). ICLR."},{"key":"e_1_3_2_128_1","unstructured":"Jason Wei Yi Tay Rishi Bommasani Colin Raffel Barret Zoph Sebastian Borgeaud Dani Yogatama Maarten Bosma Denny Zhou Donald Metzler Ed H. Chi Tatsunori Hashimoto Oriol Vinyals Percy Liang Jeff Dean and William Fedus. 2022b. Emergent abilities of large language models. arXiv:2206.07682. Retrieved from https:\/\/arxiv.org\/abs\/2206.07682"},{"key":"e_1_3_2_129_1","doi-asserted-by":"publisher","DOI":"10.5555\/2627435.2638566"},{"key":"e_1_3_2_130_1","doi-asserted-by":"crossref","first-page":"102","DOI":"10.1007\/978-3-031-02056-8_7","volume-title":"European Conference on Genetic Programming (Part of EvoStar)","author":"Wittenberg David","year":"2022","unstructured":"David Wittenberg. 2022. 
Using denoising autoencoder genetic programming to control exploration and exploitation in search. In European Conference on Genetic Programming (Part of EvoStar). Springer, 102\u2013117."},{"key":"e_1_3_2_131_1","first-page":"1037","volume-title":"Proceedings of the Genetic and Evolutionary Computation Conference (GECCO)","author":"Wittenberg David","year":"2020","unstructured":"David Wittenberg, Franz Rothlauf, and Dirk Schweim. 2020. DAE-GP: Denoising autoencoder LSTM networks as probabilistic models in estimation of distribution genetic programming. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), 1037\u20131045."},{"key":"e_1_3_2_132_1","first-page":"28","volume-title":"Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)","author":"Wolf Thomas","year":"2020","unstructured":"Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, et al. 2020. Huggingface's transformers: State-of-the-art natural language processing. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). 28\u201345."},{"key":"e_1_3_2_133_1","volume-title":"Proceedings of the International Conference on Learning Representations (ICLR)","author":"Xie Sang Michael","year":"2022","unstructured":"Sang Michael Xie, Aditi Raghunathan, Percy Liang, and Tengyu Ma. 2022. An explanation of in-context learning as implicit Bayesian inference. In Proceedings of the International Conference on Learning Representations (ICLR). ICLR."},{"key":"e_1_3_2_134_1","volume-title":"Proceedings of the International Conference on Learning Representations (ICLR)","author":"Xu Can","year":"2024","unstructured":"Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. 2024. WizardLM: Empowering large language models to follow complex instructions. 
In Proceedings of the International Conference on Learning Representations (ICLR). ICLR."},{"key":"e_1_3_2_135_1","volume-title":"Proceedings of the International Conference on Learning Representations (ICLR)","author":"Yang Chengrun","year":"2024","unstructured":"Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, and Xinyun Chen. 2024. Large language models as optimizers. In Proceedings of the International Conference on Learning Representations (ICLR). ICLR."},{"key":"e_1_3_2_136_1","volume-title":"Transactions on Machine Learning Research","author":"Yu Jiahui","year":"2022","unstructured":"Jiahui Yu, Zirui Wang, Vijay Vasudevan, Legg Yeung, Mojtaba Seyedhosseini, and Yonghui Wu. 2022. CoCa: Contrastive captioners are image-text foundation models. Transactions on Machine Learning Research (Aug. 2022)."},{"key":"e_1_3_2_137_1","volume-title":"Proceedings of the International Conference on Learning Representations (ICLR)","author":"Yun Chulhee","year":"2019","unstructured":"Chulhee Yun, Srinadh Bhojanapalli, Ankit Singh Rawat, Sashank Reddi, and Sanjiv Kumar. 2019. Are Transformers universal approximators of sequence-to-sequence functions? In Proceedings of the International Conference on Learning Representations (ICLR). ICLR."},{"key":"e_1_3_2_138_1","doi-asserted-by":"publisher","DOI":"10.1109\/TEVC.2003.820663"},{"key":"e_1_3_2_139_1","unstructured":"Tianyi Zhang Jonah Wonkyu Yi Bowen Yao Zhaozhuo Xu and Anshumali Shrivastava. 2024. NoMAD-attention: Efficient LLM inference on CPUs through multiply-add-free attention. arXiv:2403.01273. 
Retrieved from https:\/\/arxiv.org\/abs\/2403.01273"}],"container-title":["ACM Transactions on Evolutionary Learning and Optimization"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3694791","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3694791","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T00:05:48Z","timestamp":1750291548000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3694791"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,11,28]]},"references-count":138,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2024,12,31]]}},"alternative-id":["10.1145\/3694791"],"URL":"https:\/\/doi.org\/10.1145\/3694791","relation":{},"ISSN":["2688-3007"],"issn-type":[{"value":"2688-3007","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,11,28]]},"assertion":[{"value":"2023-07-31","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-08-21","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-11-28","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}