{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,11]],"date-time":"2026-03-11T20:33:52Z","timestamp":1773261232920,"version":"3.50.1"},"reference-count":71,"publisher":"Association for Computing Machinery (ACM)","issue":"CoNEXT4","license":[{"start":{"date-parts":[[2024,11,25]],"date-time":"2024-11-25T00:00:00Z","timestamp":1732492800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["Proc. ACM Netw."],"published-print":{"date-parts":[[2024,12]]},"abstract":"<jats:p>Despite offering early promise, Deep Reinforcement Learning (DRL) suffers from several challenges in adaptive bitrate streaming stemming from the uncertainty and noise in network conditions. However, in this paper, we find that although these challenges complicate the training process, in practice, we can substantially mitigate their effects by addressing a key overlooked factor: the skewed input trace distribution in DRL training datasets.<\/jats:p>\n          <jats:p>\n            We introduce a generalized framework,\n            <jats:italic toggle=\"yes\">Plume<\/jats:italic>\n            , to automatically identify and balance the skew using a three-stage process. First, we identify the critical features that determine the behavior of the traces. Second, we classify the traces into clusters. Finally, we prioritize the salient clusters to improve the\n            <jats:italic toggle=\"yes\">overall<\/jats:italic>\n            performance of the controller. 
We implement our ideas with a novel ABR controller,\n            <jats:italic toggle=\"yes\">Gelato<\/jats:italic>\n            , and evaluate the performance against state-of-the-art controllers in the real world for more than a year, streaming 59 stream-years of television to over 280,000 users on the live streaming platform Puffer. Gelato trained with Plume outperforms all baseline solutions and becomes the first controller on the platform to deliver statistically significant improvements in both video quality and stalling, decreasing stalls by as much as 75%.\n          <\/jats:p>","DOI":"10.1145\/3696401","type":"journal-article","created":{"date-parts":[[2024,11,25]],"date-time":"2024-11-25T11:15:47Z","timestamp":1732533347000},"page":"1-23","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":5,"title":["Practically High Performant Neural Adaptive Video Streaming"],"prefix":"10.1145","volume":"2","author":[{"ORCID":"https:\/\/orcid.org\/0009-0000-6122-2705","authenticated-orcid":false,"given":"Sagar","family":"Patel","sequence":"first","affiliation":[{"name":"University of California, Irvine, Irvine, CA, USA"}]},{"ORCID":"https:\/\/orcid.org\/0009-0007-7234-4557","authenticated-orcid":false,"given":"Junyang","family":"Zhang","sequence":"additional","affiliation":[{"name":"University of California, Irvine, Irvine, CA, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5181-4560","authenticated-orcid":false,"given":"Nina","family":"Narodytska","sequence":"additional","affiliation":[{"name":"VMware Research by Broadcom, Palo Alto, CA, USA"}]},{"ORCID":"https:\/\/orcid.org\/0009-0000-0503-4478","authenticated-orcid":false,"given":"Sangeetha Abdu","family":"Jyothi","sequence":"additional","affiliation":[{"name":"University of California, Irvine &amp; VMware Research by Broadcom, Irvine, CA, 
USA"}]}],"member":"320","published-online":{"date-parts":[[2024,11,25]]},"reference":[{"key":"e_1_2_1_1_1","unstructured":"[n. d.]. Expectation--maximization algorithm - Wikipedia. https:\/\/en.wikipedia.org\/wiki\/Expectation%E2%80%93maximization_algorithm. (Accessed on 01\/16\/2023)."},{"key":"e_1_2_1_2_1","unstructured":"[n. d.]. Puffer. https:\/\/puffer.stanford.edu\/results\/. (Accessed on 04\/20\/2022)."},{"key":"e_1_2_1_3_1","unstructured":"[n. d.]. Puffer. https:\/\/puffer.stanford.edu\/bola\/. (Accessed on 06\/09\/2024)."},{"key":"e_1_2_1_4_1","unstructured":"[n. d.]. Scikit-Learn Recursive Feature Elimination. https:\/\/scikit-learn.org\/stable\/modules\/generated\/sklearn.feature_selection.RFE.html#sklearn.feature_selection.RFE. (Accessed on 01\/15\/2023)."},{"key":"e_1_2_1_5_1","unstructured":"[n. d.]. Silhouette (clustering) - Wikipedia. https:\/\/en.wikipedia.org\/wiki\/Silhouette_(clustering). (Accessed on 01\/16\/2023)."},{"key":"e_1_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.1145\/3387514.3405892"},{"key":"e_1_2_1_7_1","unstructured":"Joshua Achiam. 2018. Spinning Up in Deep Reinforcement Learning. (2018)."},{"key":"e_1_2_1_8_1","volume-title":"Aaron C Courville, and Marc Bellemare.","author":"Agarwal Rishabh","year":"2021","unstructured":"Rishabh Agarwal, Max Schwarzer, Pablo Samuel Castro, Aaron C Courville, and Marc Bellemare. 2021. Deep reinforcement learning at the edge of the statistical precipice. Advances in neural information processing systems 34 (2021), 29304--29320."},{"key":"e_1_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.1145\/3230543.3230558"},{"key":"e_1_2_1_10_1","volume-title":"20th USENIX Symposium on Networked Systems Design and Implementation (NSDI 23)","author":"Alomar Abdullah","year":"2023","unstructured":"Abdullah Alomar, Pouya Hamadanian, Arash Nasr-Esfahany, Anish Agarwal, Mohammad Alizadeh, and Devavrat Shah. 2023. {CausalSim}: A Causal Framework for Unbiased {Trace-Driven} Simulation. 
In 20th USENIX Symposium on Networked Systems Design and Implementation (NSDI 23). 1115--1147."},{"key":"e_1_2_1_11_1","volume-title":"International conference on learning representations.","author":"Andrychowicz Marcin","year":"2020","unstructured":"Marcin Andrychowicz, Anton Raichuk, Piotr Stanczyk, Manu Orsini, Sertan Girgin, Rapha\u00ebl Marinier, Leonard Hussenot, Matthieu Geist, Olivier Pietquin, Marcin Michalski, et al. 2020. What matters for on-policy deep actor-critic methods? a large-scale study. In International conference on learning representations."},{"key":"e_1_2_1_12_1","volume-title":"Openai gym. arXiv preprint arXiv:1606.01540","author":"Brockman Greg","year":"2016","unstructured":"Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. 2016. Openai gym. arXiv preprint arXiv:1606.01540 (2016)."},{"key":"e_1_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2018.03.067"},{"key":"e_1_2_1_14_1","volume-title":"Cisco visual networking index: Forecast and trends","author":"Cisco V","year":"2017","unstructured":"V Cisco. 2018. Cisco visual networking index: Forecast and trends, 2017--2022. White paper 1, 1 (2018)."},{"key":"e_1_2_1_15_1","volume-title":"Let's Play Again: Variability of Deep Reinforcement Learning Agents in Atari Environments. arXiv preprint arXiv:1904.06312","author":"Clary Kaleigh","year":"2019","unstructured":"Kaleigh Clary, Emma Tosch, John Foley, and David Jensen. 2019. Let's Play Again: Variability of Deep Reinforcement Learning Agents in Atari Environments. arXiv preprint arXiv:1904.06312 (2019)."},{"key":"e_1_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.eswa.2021.114885"},{"key":"e_1_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1109\/90.944338"},{"key":"e_1_2_1_18_1","volume-title":"International conference on machine learning. 
PMLR, 1146--1155","author":"Foerster Jakob","year":"2017","unstructured":"Jakob Foerster, Nantas Nardelli, Gregory Farquhar, Triantafyllos Afouras, Philip HS Torr, Pushmeet Kohli, and Shimon Whiteson. 2017. Stabilising experience replay for deep multi-agent reinforcement learning. In International conference on machine learning. PMLR, 1146--1155."},{"key":"e_1_2_1_19_1","doi-asserted-by":"publisher","DOI":"10.1145\/3365609.3365862"},{"key":"e_1_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.1038\/s41586-020-2649-2"},{"key":"e_1_2_1_21_1","volume-title":"Gaussian error linear units (gelus). arXiv preprint arXiv:1606.08415","author":"Hendrycks Dan","year":"2016","unstructured":"Dan Hendrycks and Kevin Gimpel. 2016. Gaussian error linear units (gelus). arXiv preprint arXiv:1606.08415 (2016)."},{"key":"e_1_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v32i1.11796"},{"key":"e_1_2_1_23_1","volume-title":"Hado Van Hasselt, and David Silver","author":"Horgan Dan","year":"2018","unstructured":"Dan Horgan, John Quan, David Budden, Gabriel Barth-Maron, Matteo Hessel, Hado Van Hasselt, and David Silver. 2018. Distributed prioritized experience replay. arXiv preprint arXiv:1803.00933 (2018)."},{"key":"e_1_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1145\/2619239.2626296"},{"key":"e_1_2_1_25_1","volume-title":"When to trust your model: Model-based policy optimization. Advances in neural information processing systems 32","author":"Janner Michael","year":"2019","unstructured":"Michael Janner, Justin Fu, Marvin Zhang, and Sergey Levine. 2019. When to trust your model: Model-based policy optimization. Advances in neural information processing systems 32 (2019)."},{"key":"e_1_2_1_26_1","volume-title":"International conference on machine learning. PMLR, 3050--3059","author":"Jay Nathan","year":"2019","unstructured":"Nathan Jay, Noga Rotman, Brighten Godfrey, Michael Schapira, and Aviv Tamar. 2019. 
A deep reinforcement learning perspective on internet congestion control. In International conference on machine learning. PMLR, 3050--3059."},{"key":"e_1_2_1_27_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-67235-9_9"},{"key":"e_1_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10796-020-10022-7"},{"key":"e_1_2_1_29_1","volume-title":"International conference on learning representations.","author":"Kapturowski Steven","year":"2018","unstructured":"Steven Kapturowski, Georg Ostrovski, John Quan, Remi Munos, and Will Dabney. 2018. Recurrent experience replay in distributed reinforcement learning. In International conference on learning representations."},{"key":"e_1_2_1_30_1","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3343440","article-title":"A systematic review on imbalanced data challenges in machine learning: Applications and solutions","volume":"52","author":"Kaur Harsurinder","year":"2019","unstructured":"Harsurinder Kaur, Husanbir Singh Pannu, and Avleen Kaur Malhi. 2019. A systematic review on imbalanced data challenges in machine learning: Applications and solutions. ACM Computing Surveys (CSUR) 52, 4 (2019), 1--36.","journal-title":"ACM Computing Surveys (CSUR)"},{"key":"e_1_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.1109\/BigDataService52369.2021.00023"},{"key":"e_1_2_1_32_1","volume-title":"Offline reinforcement learning: Tutorial, review, and perspectives on open problems. arXiv preprint arXiv:2005.01643","author":"Levine Sergey","year":"2020","unstructured":"Sergey Levine, Aviral Kumar, George Tucker, and Justin Fu. 2020. Offline reinforcement learning: Tutorial, review, and perspectives on open problems. arXiv preprint arXiv:2005.01643 (2020)."},{"key":"e_1_2_1_33_1","volume-title":"RLlib: Abstractions for Distributed Reinforcement Learning. 
In International Conference on Machine Learning (ICML).","author":"Liang Eric","year":"2018","unstructured":"Eric Liang, Richard Liaw, Robert Nishihara, Philipp Moritz, Roy Fox, Ken Goldberg, Joseph E. Gonzalez, Michael I. Jordan, and Ion Stoica. 2018. RLlib: Abstractions for Distributed Reinforcement Learning. In International Conference on Machine Learning (ICML)."},{"key":"e_1_2_1_34_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCNC.2019.8685519"},{"key":"e_1_2_1_35_1","doi-asserted-by":"publisher","DOI":"10.1145\/3005745.3005750"},{"key":"e_1_2_1_36_1","volume-title":"Real-world video adaptation with reinforcement learning. arXiv preprint arXiv:2008.12858","author":"Mao Hongzi","year":"2020","unstructured":"Hongzi Mao, Shannon Chen, Drew Dimmery, Shaun Singh, Drew Blaisdell, Yuandong Tian, Mohammad Alizadeh, and Eytan Bakshy. 2020. Real-world video adaptation with reinforcement learning. arXiv preprint arXiv:2008.12858 (2020)."},{"key":"e_1_2_1_37_1","volume-title":"Songtao He, Vikram Nathan, et al.","author":"Mao Hongzi","year":"2019","unstructured":"Hongzi Mao, Parimarjan Negi, Akshay Narayan, Hanrui Wang, Jiacheng Yang, Haonan Wang, Ryan Marcus, Mehrdad Khani Shirkoohi, Songtao He, Vikram Nathan, et al. 2019. Park: An open platform for learning-augmented computer systems. Advances in Neural Information Processing Systems 32 (2019)."},{"key":"e_1_2_1_38_1","doi-asserted-by":"publisher","DOI":"10.1145\/3098822.3098843"},{"key":"e_1_2_1_39_1","volume-title":"Malte Schwarzkopf, and Mohammad Alizadeh.","author":"Mao Hongzi","year":"2018","unstructured":"Hongzi Mao, Shaileshh Bojja Venkatakrishnan, Malte Schwarzkopf, and Mohammad Alizadeh. 2018. Variance reduction for reinforcement learning in input-driven environments. arXiv preprint arXiv:1807.02264 (2018)."},{"key":"e_1_2_1_40_1","doi-asserted-by":"publisher","DOI":"10.1109\/SOFTCOM.2015.7314110"},{"key":"e_1_2_1_41_1","volume-title":"International conference on machine learning. 
PMLR","author":"Mnih Volodymyr","year":"2016","unstructured":"Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. 2016. Asynchronous methods for deep reinforcement learning. In International conference on machine learning. PMLR, 1928--1937."},{"key":"e_1_2_1_42_1","volume-title":"Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602","author":"Mnih Volodymyr","year":"2013","unstructured":"Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. 2013. Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602 (2013)."},{"key":"e_1_2_1_43_1","volume-title":"13th USENIX Symposium on Operating Systems Design and Implementation (OSDI 18)","author":"Moritz Philipp","year":"2018","unstructured":"Philipp Moritz, Robert Nishihara, Stephanie Wang, Alexey Tumanov, Richard Liaw, Eric Liang, Melih Elibol, Zongheng Yang, William Paul, Michael I Jordan, et al. 2018. Ray: A distributed framework for emerging {AI} applications. In 13th USENIX Symposium on Operating Systems Design and Implementation (OSDI 18). 561--577."},{"key":"e_1_2_1_44_1","volume-title":"Language understanding for text-based games using deep reinforcement learning. arXiv preprint arXiv:1506.08941","author":"Narasimhan Karthik","year":"2015","unstructured":"Karthik Narasimhan, Tejas Kulkarni, and Regina Barzilay. 2015. Language understanding for text-based games using deep reinforcement learning. arXiv preprint arXiv:1506.08941 (2015)."},{"key":"e_1_2_1_45_1","volume-title":"Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32","author":"Paszke Adam","year":"2019","unstructured":"Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. 
Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019)."},{"key":"e_1_2_1_46_1","doi-asserted-by":"publisher","DOI":"10.1145\/268437.268737"},{"key":"e_1_2_1_47_1","doi-asserted-by":"publisher","DOI":"10.5555\/1953048.2078195"},{"key":"e_1_2_1_48_1","volume-title":"Dan Horgan, David Budden, Gabriel Barth-Maron, Hado Van Hasselt, John Quan, Mel Vecer\u00edk, et al.","author":"Pohlen Tobias","year":"2018","unstructured":"Tobias Pohlen, Bilal Piot, Todd Hester, Mohammad Gheshlaghi Azar, Dan Horgan, David Budden, Gabriel Barth-Maron, Hado Van Hasselt, John Quan, Mel Vecer\u00edk, et al. 2018. Observe and look further: Achieving consistent performance on Atari. arXiv preprint arXiv:1805.11593 (2018)."},{"key":"e_1_2_1_49_1","first-page":"1","article-title":"Stable-Baselines3: Reliable Reinforcement Learning Implementations","volume":"22","author":"Raffin Antonin","year":"2021","unstructured":"Antonin Raffin, Ashley Hill, Adam Gleave, Anssi Kanervisto, Maximilian Ernestus, and Noah Dormann. 2021. Stable-Baselines3: Reliable Reinforcement Learning Implementations. Journal of Machine Learning Research 22, 268 (2021), 1--8. http:\/\/jmlr.org\/papers\/v22\/20-1364.html","journal-title":"Journal of Machine Learning Research"},{"key":"e_1_2_1_50_1","volume-title":"Conference on Robot Learning. PMLR, 1634--1644","author":"Raffin Antonin","year":"2022","unstructured":"Antonin Raffin, Jens Kober, and Freek Stulp. 2022. Smooth exploration for robotic reinforcement learning. In Conference on Robot Learning. PMLR, 1634--1644."},{"key":"e_1_2_1_51_1","volume-title":"Prioritized experience replay. arXiv preprint arXiv:1511.05952","author":"Schaul Tom","year":"2015","unstructured":"Tom Schaul, John Quan, Ioannis Antonoglou, and David Silver. 2015. Prioritized experience replay. 
arXiv preprint arXiv:1511.05952 (2015)."},{"key":"e_1_2_1_52_1","doi-asserted-by":"crossref","unstructured":"Julian Schrittwieser Ioannis Antonoglou Thomas Hubert Karen Simonyan Laurent Sifre Simon Schmitt Arthur Guez Edward Lockhart Demis Hassabis Thore Graepel et al. 2020. Mastering Atari Go Chess and Shogi by planning with a learned model. Nature 588 7839 (2020) 604--609.","DOI":"10.1038\/s41586-020-03051-4"},{"key":"e_1_2_1_53_1","volume-title":"International conference on machine learning. PMLR","author":"Schulman John","year":"2015","unstructured":"John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. 2015. Trust region policy optimization. In International conference on machine learning. PMLR, 1889--1897."},{"key":"e_1_2_1_54_1","volume-title":"Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347","author":"Schulman John","year":"2017","unstructured":"John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347 (2017)."},{"key":"e_1_2_1_55_1","unstructured":"David Silver. 2015. Lectures on Reinforcement Learning. url: https:\/\/www.davidsilver.uk\/teaching\/."},{"key":"e_1_2_1_56_1","doi-asserted-by":"publisher","DOI":"10.1109\/TNET.2020.2996964"},{"key":"e_1_2_1_57_1","doi-asserted-by":"publisher","DOI":"10.1145\/1943552.1943572"},{"key":"e_1_2_1_58_1","doi-asserted-by":"publisher","DOI":"10.1145\/2934872.2934898"},{"key":"e_1_2_1_59_1","volume-title":"Reinforcement learning: An introduction","author":"Sutton Richard S","unstructured":"Richard S Sutton and Andrew G Barto. 2018. Reinforcement learning: An introduction. MIT press."},{"key":"e_1_2_1_60_1","volume-title":"Sample efficient actor-critic with experience replay. arXiv preprint arXiv:1611.01224","author":"Wang Ziyu","year":"2016","unstructured":"Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, and Nando de Freitas. 2016. 
Sample efficient actor-critic with experience replay. arXiv preprint arXiv:1611.01224 (2016)."},{"key":"e_1_2_1_61_1","volume-title":"Image quality assessment: from error visibility to structural similarity","author":"Wang Zhou","year":"2004","unstructured":"Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. 2004. Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13, 4 (2004), 600--612."},{"key":"e_1_2_1_62_1","volume-title":"International conference on machine learning. PMLR","author":"Wang Ziyu","year":"2016","unstructured":"Ziyu Wang, Tom Schaul, Matteo Hessel, Hado Hasselt, Marc Lanctot, and Nando Freitas. 2016. Dueling network architectures for deep reinforcement learning. In International conference on machine learning. PMLR, 1995--2003."},{"key":"e_1_2_1_63_1","doi-asserted-by":"publisher","DOI":"10.1145\/3544216.3544243"},{"key":"e_1_2_1_64_1","doi-asserted-by":"publisher","DOI":"10.1145\/3603269.3604857"},{"key":"e_1_2_1_65_1","volume-title":"17th USENIX Symposium on Networked Systems Design and Implementation (NSDI 20)","author":"Yan Francis Y","year":"2020","unstructured":"Francis Y Yan, Hudson Ayers, Chenzhi Zhu, Sadjad Fouladi, James Hong, Keyi Zhang, Philip Levis, and Keith Winstein. 2020. Learning in situ: a randomized experiment in video streaming. In 17th USENIX Symposium on Networked Systems Design and Implementation (NSDI 20). 495--511."},{"key":"e_1_2_1_66_1","volume-title":"Identifying and compensating for feature deviation in imbalanced deep learning. arXiv preprint arXiv:2001.01385","author":"Ye Han-Jia","year":"2020","unstructured":"Han-Jia Ye, Hong-You Chen, De-Chuan Zhan, and Wei-Lun Chao. 2020. Identifying and compensating for feature deviation in imbalanced deep learning. 
arXiv preprint arXiv:2001.01385 (2020)."},{"key":"e_1_2_1_67_1","doi-asserted-by":"publisher","DOI":"10.1145\/2785956.2787486"},{"key":"e_1_2_1_68_1","doi-asserted-by":"publisher","DOI":"10.1145\/2785956.2787498"},{"key":"e_1_2_1_69_1","doi-asserted-by":"publisher","DOI":"10.1145\/3452296.3472926"},{"key":"e_1_2_1_70_1","volume-title":"18th USENIX Symposium on Networked Systems Design and Implementation (NSDI 21)","author":"Zhang Xu","year":"2021","unstructured":"Xu Zhang, Yiyang Ou, Siddhartha Sen, and Junchen Jiang. 2021. {SENSEI}: Aligning video streaming quality with dynamic user sensitivity. In 18th USENIX Symposium on Networked Systems Design and Implementation (NSDI 21). 303--320."},{"key":"e_1_2_1_71_1","doi-asserted-by":"publisher","DOI":"10.1145\/2699343.2699359"}],"container-title":["Proceedings of the ACM on Networking"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3696401","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3696401","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,8,23]],"date-time":"2025-08-23T01:24:25Z","timestamp":1755912265000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3696401"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,11,25]]},"references-count":71,"journal-issue":{"issue":"CoNEXT4","published-print":{"date-parts":[[2024,12]]}},"alternative-id":["10.1145\/3696401"],"URL":"https:\/\/doi.org\/10.1145\/3696401","relation":{},"ISSN":["2834-5509"],"issn-type":[{"value":"2834-5509","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,11,25]]},"assertion":[{"value":"2024-11-25","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}