{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,24]],"date-time":"2026-03-24T19:45:49Z","timestamp":1774381549071,"version":"3.50.1"},"publisher-location":"New York, NY, USA","reference-count":25,"publisher":"ACM","license":[{"start":{"date-parts":[[2023,7,15]],"date-time":"2023-07-15T00:00:00Z","timestamp":1689379200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2023,7,15]]},"DOI":"10.1145\/3583133.3590581","type":"proceedings-article","created":{"date-parts":[[2023,7,24]],"date-time":"2023-07-24T23:30:33Z","timestamp":1690241433000},"page":"699-702","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":5,"title":["Efficient Quality-Diversity Optimization through Diverse Quality Species"],"prefix":"10.1145","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-6886-6973","authenticated-orcid":false,"given":"Ryan","family":"Wickman","sequence":"first","affiliation":[{"name":"University of Memphis, Memphis, United States of America"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1535-0743","authenticated-orcid":false,"given":"Bibek","family":"Poudel","sequence":"additional","affiliation":[{"name":"University of Memphis, Memphis, United States of America"}]},{"ORCID":"https:\/\/orcid.org\/0009-0004-4919-0236","authenticated-orcid":false,"given":"Taylor Michael","family":"Villarreal","sequence":"additional","affiliation":[{"name":"University of Memphis, Memphis, United States of America"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5605-6295","authenticated-orcid":false,"given":"Xiaofei","family":"Zhang","sequence":"additional","affiliation":[{"name":"University of Memphis, Memphis, United States of 
America"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0780-738X","authenticated-orcid":false,"given":"Weizi","family":"Li","sequence":"additional","affiliation":[{"name":"University of Memphis, Memphis, United States of America"}]}],"member":"320","published-online":{"date-parts":[[2023,7,24]]},"reference":[{"key":"e_1_3_2_2_1_1","volume-title":"Proceedings of the 16th International Conference on Neural Information Processing Systems (Whistler","author":"Barber David","year":"2003","unstructured":"David Barber and Felix Agakov. 2003. The IM Algorithm: A Variational Approach to Information Maximization. In Proceedings of the 16th International Conference on Neural Information Processing Systems (Whistler, British Columbia, Canada) (NIPS'03). MIT Press, Cambridge, MA, USA, 201--208."},{"key":"e_1_3_2_2_2_1","volume-title":"Black Box Optimization, Machine Learning, and No-Free Lunch Theorems","author":"Chatzilygeroudis Konstantinos","unstructured":"Konstantinos Chatzilygeroudis, Antoine Cully, Vassilis Vassiliades, and Jean-Baptiste Mouret. 2021. Quality-Diversity Optimization: a novel branch of stochastic optimization. In Black Box Optimization, Machine Learning, and No-Free Lunch Theorems. 
Springer, 109--135."},{"key":"e_1_3_2_2_3_1","doi-asserted-by":"publisher","DOI":"10.1145\/3377930.3390217"},{"key":"e_1_3_2_2_4_1","doi-asserted-by":"publisher","DOI":"10.1109\/TEVC.2017.2704781"},{"key":"e_1_3_2_2_5_1","volume-title":"Deep Reinforcement Learning Workshop NeurIPS","author":"D'Oro Pierluca","year":"2022","unstructured":"Pierluca D'Oro, Max Schwarzer, Evgenii Nikishin, Pierre-Luc Bacon, Marc G Bellemare, and Aaron Courville. 2022. Sample-Efficient Reinforcement Learning by Breaking the Replay Ratio Barrier. In Deep Reinforcement Learning Workshop NeurIPS 2022."},{"key":"e_1_3_2_2_6_1","volume-title":"First return, then explore. Nature 590, 7847","author":"Ecoffet Adrien","year":"2021","unstructured":"Adrien Ecoffet, Joost Huizinga, Joel Lehman, Kenneth O Stanley, and Jeff Clune. 2021. First return, then explore. Nature 590, 7847 (2021), 580--586."},{"key":"e_1_3_2_2_7_1","volume-title":"International Conference on Learning Representations. https:\/\/openreview.net\/forum?id=SJx63jRqFm","author":"Eysenbach Benjamin","year":"2019","unstructured":"Benjamin Eysenbach, Abhishek Gupta, Julian Ibarz, and Sergey Levine. 2019. Diversity is All You Need: Learning Skills without a Reward Function. In International Conference on Learning Representations. 
https:\/\/openreview.net\/forum?id=SJx63jRqFm"},{"key":"e_1_3_2_2_8_1","doi-asserted-by":"publisher","DOI":"10.1145\/3577203"},{"key":"e_1_3_2_2_9_1","doi-asserted-by":"publisher","DOI":"10.1145\/3377930.3390232"},{"key":"e_1_3_2_2_10_1","volume-title":"International conference on machine learning. PMLR, 1587--1596","author":"Fujimoto Scott","year":"2018","unstructured":"Scott Fujimoto, Herke Hoof, and David Meger. 2018. Addressing function approximation error in actor-critic methods. In International conference on machine learning. PMLR, 1587--1596."},{"key":"e_1_3_2_2_11_1","volume-title":"Hillsdale, NJ: Lawrence Erlbaum.","author":"Goldberg David E","year":"1987","unstructured":"David E Goldberg, Jon Richardson, et al. 1987. Genetic algorithms with sharing for multimodal function optimization. In Genetic algorithms and their applications: Proceedings of the Second International Conference on Genetic Algorithms, Vol. 4149. Hillsdale, NJ: Lawrence Erlbaum."},{"key":"e_1_3_2_2_12_1","first-page":"8198","article-title":"One solution is not all you need: Few-shot extrapolation via structured maxent rl","volume":"33","author":"Kumar Saurabh","year":"2020","unstructured":"Saurabh Kumar, Aviral Kumar, Sergey Levine, and Chelsea Finn. 2020. 
One solution is not all you need: Few-shot extrapolation via structured maxent rl. Advances in Neural Information Processing Systems 33 (2020), 8198--8210.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_2_13_1","volume-title":"Map-based multi-policy reinforcement learning: enhancing adaptability of robots by deep reinforcement learning. arXiv preprint arXiv:1710.06117","author":"Kume Ayaka","year":"2017","unstructured":"Ayaka Kume, Eiichi Matsumoto, Kuniyuki Takahashi, Wilson Ko, and Jethro Tan. 2017. Map-based multi-policy reinforcement learning: enhancing adaptability of robots by deep reinforcement learning. arXiv preprint arXiv:1710.06117 (2017)."},{"key":"e_1_3_2_2_14_1","volume-title":"Abandoning objectives: Evolution through the search for novelty alone. Evolutionary computation 19, 2","author":"Lehman Joel","year":"2011","unstructured":"Joel Lehman and Kenneth O Stanley. 2011. Abandoning objectives: Evolution through the search for novelty alone. Evolutionary computation 19, 2 (2011), 189--223."},{"key":"e_1_3_2_2_15_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICRA46639.2022.9811559"},{"key":"e_1_3_2_2_16_1","volume-title":"Illuminating search spaces by mapping elites. arXiv preprint arXiv:1504.04909","author":"Mouret Jean-Baptiste","year":"2015","unstructured":"Jean-Baptiste Mouret and Jeff Clune. 2015. Illuminating search spaces by mapping elites. 
arXiv preprint arXiv:1504.04909 (2015)."},{"key":"e_1_3_2_2_17_1","volume-title":"International Conference on Machine Learning. PMLR, 16828--16847","author":"Nikishin Evgenii","year":"2022","unstructured":"Evgenii Nikishin, Max Schwarzer, Pierluca D'Oro, Pierre-Luc Bacon, and Aaron Courville. 2022. The primacy bias in deep reinforcement learning. In International Conference on Machine Learning. PMLR, 16828--16847."},{"key":"e_1_3_2_2_18_1","doi-asserted-by":"publisher","DOI":"10.1145\/3449639.3459304"},{"key":"e_1_3_2_2_19_1","doi-asserted-by":"publisher","DOI":"10.1145\/3512290.3528845"},{"key":"e_1_3_2_2_20_1","volume-title":"Quality diversity: A new frontier for evolutionary computation. Frontiers in Robotics and AI","author":"Pugh Justin K","year":"2016","unstructured":"Justin K Pugh, Lisa B Soros, and Kenneth O Stanley. 2016. Quality diversity: A new frontier for evolutionary computation. Frontiers in Robotics and AI (2016), 40."},{"key":"e_1_3_2_2_21_1","volume-title":"Equivalence between policy gradients and soft q-learning. arXiv preprint arXiv:1704.06440","author":"Schulman John","year":"2017","unstructured":"John Schulman, Xi Chen, and Pieter Abbeel. 2017. Equivalence between policy gradients and soft q-learning. 
arXiv preprint arXiv:1704.06440 (2017)."},{"key":"e_1_3_2_2_22_1","volume-title":"Why greatness cannot be planned: The myth of the objective","author":"Stanley Kenneth O","unstructured":"Kenneth O Stanley and Joel Lehman. 2015. Why greatness cannot be planned: The myth of the objective. Springer."},{"key":"e_1_3_2_2_23_1","volume-title":"Evolving neural networks through augmenting topologies. Evolutionary computation 10, 2","author":"Stanley Kenneth O","year":"2002","unstructured":"Kenneth O Stanley and Risto Miikkulainen. 2002. Evolving neural networks through augmenting topologies. Evolutionary computation 10, 2 (2002), 99--127."},{"key":"e_1_3_2_2_24_1","doi-asserted-by":"publisher","DOI":"10.1109\/TEVC.2017.2735550"},{"key":"e_1_3_2_2_25_1","volume-title":"Aaai","volume":"8","author":"Ziebart Brian D","year":"2008","unstructured":"Brian D Ziebart, Andrew L Maas, J Andrew Bagnell, Anind K Dey, et al. 2008. Maximum entropy inverse reinforcement learning. In Aaai, Vol. 8. 
Chicago, IL, USA, 1433--1438."}],"event":{"name":"GECCO '23 Companion: Companion Conference on Genetic and Evolutionary Computation","location":"Lisbon Portugal","acronym":"GECCO '23 Companion","sponsor":["SIGEVO ACM Special Interest Group on Genetic and Evolutionary Computation"]},"container-title":["Proceedings of the Companion Conference on Genetic and Evolutionary Computation"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3583133.3590581","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3583133.3590581","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T16:37:44Z","timestamp":1750178264000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3583133.3590581"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,7,15]]},"references-count":25,"alternative-id":["10.1145\/3583133.3590581","10.1145\/3583133"],"URL":"https:\/\/doi.org\/10.1145\/3583133.3590581","relation":{},"subject":[],"published":{"date-parts":[[2023,7,15]]},"assertion":[{"value":"2023-07-24","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}