{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T04:59:25Z","timestamp":1750309165853,"version":"3.41.0"},"reference-count":42,"publisher":"Association for Computing Machinery (ACM)","issue":"1","license":[{"start":{"date-parts":[[2024,8,2]],"date-time":"2024-08-02T00:00:00Z","timestamp":1722556800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Recomm. Syst."],"published-print":{"date-parts":[[2025,3,31]]},"abstract":"<jats:p>\n            Podcasting is an increasingly popular medium for entertainment and discourse around the world, with tens of thousands of new podcasts released on a monthly basis. We consider the problem of identifying from these newly released podcasts those with the largest potential audiences so they can be considered for personalized recommendation to users. We first study and then discard a supervised approach due to the inadequacy of either content or consumption features for this task and instead propose a novel non-contextual bandit algorithm in the fixed-budget infinitely armed pure-exploration setting. We demonstrate that our algorithm is well suited to the best-arm identification task for a broad class of arm reservoir distributions, out-competing a large number of state-of-the-art algorithms. We then apply the algorithm to identifying podcasts with broad appeal in a simulated study and show that it efficiently sorts podcasts into groups by increasing appeal while avoiding the popularity bias inherent in supervised approaches. 
Finally, we study a setting in which users are more likely to stream more-streamed podcasts independent of their general appeal and find that our proposed algorithm is robust to this type of popularity bias.\n            <jats:xref ref-type=\"fn\">\n              <jats:sup>1<\/jats:sup>\n            <\/jats:xref>\n          <\/jats:p>","DOI":"10.1145\/3626324","type":"journal-article","created":{"date-parts":[[2023,10,5]],"date-time":"2023-10-05T15:43:31Z","timestamp":1696520611000},"page":"1-22","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":0,"title":["Unbiased Identification of Broadly Appealing Content Using a Pure Exploration Infinitely Armed Bandit Strategy"],"prefix":"10.1145","volume":"3","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-7377-7262","authenticated-orcid":false,"given":"Maryam","family":"Aziz","sequence":"first","affiliation":[{"name":"Spotify, New York, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0405-2441","authenticated-orcid":false,"given":"Jesse","family":"Anderton","sequence":"additional","affiliation":[{"name":"Spotify, New York, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2054-2985","authenticated-orcid":false,"given":"Kevin","family":"Jamieson","sequence":"additional","affiliation":[{"name":"University of Washington, Seattle, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8827-3780","authenticated-orcid":false,"given":"Alice","family":"Wang","sequence":"additional","affiliation":[{"name":"Spotify, New York, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2315-8268","authenticated-orcid":false,"given":"Hugues","family":"Bouchard","sequence":"additional","affiliation":[{"name":"Spotify, Barcelona, Spain"}]},{"ORCID":"https:\/\/orcid.org\/0009-0006-5098-6594","authenticated-orcid":false,"given":"Javed","family":"Aslam","sequence":"additional","affiliation":[{"name":"Northeastern University, Boston, 
USA"}]}],"member":"320","published-online":{"date-parts":[[2024,8,2]]},"reference":[{"key":"e_1_3_3_2_2","first-page":"II\u20131638\u2013II\u20131646","volume-title":"Proceedings of the 31st International Conference on International Conference on Machine Learning, Volume 32 (ICML\u201914)","author":"Agarwal Alekh","year":"2014","unstructured":"Alekh Agarwal, Daniel Hsu, Satyen Kale, John Langford, Lihong Li, and Robert E. Schapire. 2014. Taming the monster: A fast and simple algorithm for contextual bandits. In Proceedings of the 31st International Conference on International Conference on Machine Learning, Volume 32 (ICML\u201914). JMLR.org, II\u20131638\u2013II\u20131646."},{"key":"e_1_3_3_3_2","first-page":"III\u20131220\u2013III\u20131228","volume-title":"Proceedings of the 30th International Conference on International Conference on Machine Learning, Volume 28 (ICML\u201913)","author":"Agrawal Shipra","year":"2013","unstructured":"Shipra Agrawal and Navin Goyal. 2013. Thompson sampling for contextual bandits with linear payoffs. In Proceedings of the 30th International Conference on International Conference on Machine Learning, Volume 28 (ICML\u201913). JMLR.org, III\u20131220\u2013III\u20131228."},{"key":"e_1_3_3_4_2","volume-title":"Proceedings of the 23rd Conference on Learning Theory (COLT\u201910)","author":"Audibert Jean-Yves","year":"2010","unstructured":"Jean-Yves Audibert and S\u00e9bastien Bubeck. 2010. Best arm identification in multi-armed bandits. In Proceedings of the 23rd Conference on Learning Theory (COLT\u201910). 13 pages."},{"key":"e_1_3_3_5_2","doi-asserted-by":"publisher","DOI":"10.1023\/A:1013689704352"},{"key":"e_1_3_3_6_2","doi-asserted-by":"publisher","DOI":"10.1145\/3523227.3546766"},{"key":"e_1_3_3_7_2","volume-title":"ALT 2018-Algorithmic Learning Theory","author":"Aziz Maryam","year":"2018","unstructured":"Maryam Aziz, Jesse Anderton, Emilie Kaufmann, and Javed Aslam. 2018. 
Pure exploration in infinitely-armed bandit models with fixed-confidence. In ALT 2018-Algorithmic Learning Theory."},{"key":"e_1_3_3_8_2","doi-asserted-by":"publisher","DOI":"10.1177\/1354856517736979"},{"key":"e_1_3_3_9_2","doi-asserted-by":"publisher","DOI":"10.1214\/aos\/1069362389"},{"key":"e_1_3_3_10_2","article-title":"Two-target algorithms for infinite-armed bandits with Bernoulli rewards","author":"Bonald Thomas","year":"2013","unstructured":"Thomas Bonald and Alexandre Proutiere. 2013. Two-target algorithms for infinite-armed bandits with Bernoulli rewards. In Advances in Neural Information Processing Systems, C. J. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger (Eds.). Vol. 26. Curran Associates, Inc. https:\/\/proceedings.neurips.cc\/paper_files\/paper\/2013\/file\/fc2c7c47b918d0c2d792a719dfb602ef-Paper.pdf","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_3_11_2","doi-asserted-by":"publisher","DOI":"10.1561\/2200000024"},{"key":"e_1_3_3_12_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-642-04414-4_7"},{"key":"e_1_3_3_13_2","unstructured":"Giuseppe Burtini, Jason Loeppky, and Ramon Lawrence. 2015. A survey of online experiment design with the stochastic multi-armed bandit. (2015). arXiv:1510.00757 [stat.ML]."},{"key":"e_1_3_3_14_2","article-title":"Simple regret for infinitely many armed bandits","author":"Carpentier Alexandra","year":"2015","unstructured":"Alexandra Carpentier and Michal Valko. 2015. Simple regret for infinitely many armed bandits. In Proceedings of the 32nd International Conference on Machine Learning (Proceedings of Machine Learning Research, Vol. 37), Francis Bach and David Blei (Eds.). PMLR, Lille, France, 1133--1141. https:\/\/proceedings.mlr.press\/v37\/carpentier15.html","journal-title":"Proceedings of the 32nd International Conference on Machine Learning (Proceedings of Machine Learning Research, Vol. 37)"},{"key":"e_1_3_3_15_2","unstructured":"Hock Peng Chan and Shouri Hu. 
2018. Infinite arms bandit: Optimality via confidence bounds. CoRR (2020). arXiv:1805.11793 [stat.ML]."},{"key":"e_1_3_3_16_2","volume-title":"Conference on Learning Theory","author":"Chandrasekaran Karthekeyan","year":"2014","unstructured":"Karthekeyan Chandrasekaran and Richard Karp. 2014. Finding a most biased coin with fewest flips. In Conference on Learning Theory. 394--407."},{"key":"e_1_3_3_17_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v31i1.10802"},{"key":"e_1_3_3_18_2","first-page":"425","volume-title":"Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI\u201918)","author":"Chaudhuri Arghya Roy","year":"2018","unstructured":"Arghya Roy Chaudhuri and Shivaram Kalyanakrishnan. 2018. Quantile-regret minimisation in infinitely many-armed bandits. In Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI\u201918). 425\u2013434."},{"key":"e_1_3_3_19_2","series-title":"Proceedings of Machine Learning Research","first-page":"208","volume-title":"Proceedings of the 14th International Conference on Artificial Intelligence and Statistics","volume":"15","author":"Chu Wei","year":"2011","unstructured":"Wei Chu, Lihong Li, Lev Reyzin, and Robert Schapire. 2011. Contextual bandits with linear payoff functions. In Proceedings of the 14th International Conference on Artificial Intelligence and Statistics (Proceedings of Machine Learning Research, Vol. 15), Geoffrey Gordon, David Dunson, and Miroslav Dud\u00edk (Eds.). PMLR, 208\u2013214. 
https:\/\/proceedings.mlr.press\/v15\/chu11a.html"},{"key":"e_1_3_3_20_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-662-44848-9_20"},{"key":"e_1_3_3_21_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-23528-8_29"},{"key":"e_1_3_3_22_2","doi-asserted-by":"publisher","DOI":"10.1145\/1935826.1935925"},{"key":"e_1_3_3_23_2","doi-asserted-by":"publisher","DOI":"10.1145\/3530790"},{"key":"e_1_3_3_24_2","doi-asserted-by":"publisher","DOI":"10.1145\/2827872"},{"key":"e_1_3_3_25_2","first-page":"775","volume-title":"Advances in Neural Information Processing Systems","author":"Jamieson Kevin G.","year":"2016","unstructured":"Kevin G. Jamieson, Daniel Haas, and Benjamin Recht. 2016. The power of adaptivity in identifying statistical alternatives. In Advances in Neural Information Processing Systems. 775\u2013783."},{"key":"e_1_3_3_26_2","article-title":"Next: A system for real-world development, evaluation, and application of active learning","author":"Jamieson Kevin G.","year":"2015","unstructured":"Kevin G. Jamieson, Lalit Jain, Chris Fernandez, Nicholas J. Glattard, and Rob Nowak. 2015. Next: A system for real-world development, evaluation, and application of active learning. Advances in Neural Information Processing Systems, C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett (Eds.). Vol. 28. Curran Associates, Inc. https:\/\/proceedings.neurips.cc\/paper_files\/paper\/2015\/file\/89ae0fe22c47d374bc9350ef99e01685-Paper.pdf","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_3_27_2","volume-title":"Proceedings of the Conference on Learning Theory (COLT\u201914)","author":"Jamieson Kevin G.","year":"2014","unstructured":"Kevin G. Jamieson, Matthew Malloy, Robert D. Nowak, and S\u00e9bastien Bubeck. 2014. lil\u2019 UCB: An optimal exploration algorithm for multi-armed bandits. 
In Proceedings of the Conference on Learning Theory (COLT\u201914)."},{"key":"e_1_3_3_28_2","first-page":"1238","volume-title":"Proceedings of the 30th International Conference on Machine Learning (ICML\u201913)","volume":"28","author":"Karnin Zohar","year":"2013","unstructured":"Zohar Karnin, Tomer Koren, and Oren Somekh. 2013. Almost optimal exploration in multi-armed bandits. In Proceedings of the 30th International Conference on Machine Learning (ICML\u201913), Vol. 28. 1238\u20131246."},{"key":"e_1_3_3_29_2","article-title":"On the complexity of best-arm identification in multi-armed bandit models","author":"Kaufmann Emilie","year":"2016","unstructured":"Emilie Kaufmann, Olivier Capp\u00e9, and Aur\u00e9lien Garivier. 2016. On the complexity of best-arm identification in multi-armed bandit models. Journal of Machine Learning Research 17, 1 (2016), 1--42. http:\/\/jmlr.org\/papers\/v17\/kaufman16a.html","journal-title":"Journal of Machine Learning Research"},{"key":"e_1_3_3_30_2","doi-asserted-by":"publisher","DOI":"10.5555\/2981562.2981665"},{"key":"e_1_3_3_31_2","doi-asserted-by":"publisher","DOI":"10.1145\/1772690.1772758"},{"key":"e_1_3_3_32_2","article-title":"Hyperband: A novel bandit-based approach to hyperparameter optimization","author":"Li Lisha","year":"2017","unstructured":"Lisha Li, Kevin G. Jamieson, Giulia DeSalvo, Afshin Rostamizadeh, and Ameet Talwalkar. 2017. Hyperband: A novel bandit-based approach to hyperparameter optimization. Journal of Machine Learning Research 18, 1 (January 2017), 6765--6816.","journal-title":"Journal of Machine Learning Research"},{"key":"e_1_3_3_33_2","unstructured":"Larkin Liu, Richard Downe, and Joshua Reid. 2019. Multi-armed bandit strategies for non-stationary reward distributions and delayed feedback processes. CoRR abs\/1902.08593 (2019). 
arXiv:1902.08593 http:\/\/arxiv.org\/abs\/1902.08593"},{"key":"e_1_3_3_34_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.ins.2016.07.043"},{"key":"e_1_3_3_35_2","doi-asserted-by":"publisher","DOI":"10.1109\/CISS.2012.6310773"},{"key":"e_1_3_3_36_2","article-title":"Scikit-learn: Machine learning in Python","author":"Pedregosa F.","year":"2011","unstructured":"F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research 12 (2011), 2825--2830.","journal-title":"Journal of Machine Learning Research"},{"key":"e_1_3_3_37_2","doi-asserted-by":"publisher","DOI":"10.1109\/TAI.2021.3117743"},{"key":"e_1_3_3_38_2","doi-asserted-by":"publisher","DOI":"10.1145\/2433396.2433443"},{"key":"e_1_3_3_39_2","unstructured":"Wenbo Ren, Jia Liu, and Ness Shroff. 2018. Exploring k out of top \\(\\rho\\) fraction of arms in stochastic bandits. Proceedings of Machine Learning Research 89 (16--18 Apr 2019), 2820--2828. https:\/\/proceedings.mlr.press\/v89\/ren19a.html"},{"key":"e_1_3_3_40_2","doi-asserted-by":"publisher","DOI":"10.1126\/science.1121066"},{"key":"e_1_3_3_41_2","doi-asserted-by":"publisher","DOI":"10.1145\/1787234.1787254"},{"key":"e_1_3_3_42_2","article-title":"Anytime many-armed bandits","author":"Teytaud Olivier","year":"2007","unstructured":"Olivier Teytaud, Sylvain Gelly, and Michele Sebag. 2007. Anytime many-armed bandits. In CAP07. https:\/\/inria.hal.science\/inria-00173263","journal-title":"CAP07"},{"key":"e_1_3_3_43_2","article-title":"Algorithms for infinitely many-armed bandits","author":"Wang Yizao","year":"2008","unstructured":"Yizao Wang, Jean-Yves Audibert, and R\u00e9mi Munos. 2008. Algorithms for infinitely many-armed bandits. In Advances in Neural Information Processing Systems, D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou (Eds.). Vol. 21. Curran Associates, Inc. 
https:\/\/proceedings.neurips.cc\/paper_files\/paper\/2008\/file\/49ae49a23f67c759bf4fc791ba842aa2-Paper.pdf","journal-title":"Advances in Neural Information Processing Systems"}],"container-title":["ACM Transactions on Recommender Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3626324","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3626324","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T22:53:37Z","timestamp":1750287217000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3626324"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,8,2]]},"references-count":42,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2025,3,31]]}},"alternative-id":["10.1145\/3626324"],"URL":"https:\/\/doi.org\/10.1145\/3626324","relation":{},"ISSN":["2770-6699"],"issn-type":[{"type":"electronic","value":"2770-6699"}],"subject":[],"published":{"date-parts":[[2024,8,2]]},"assertion":[{"value":"2022-12-18","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-09-10","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-08-02","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}