{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,7]],"date-time":"2026-03-07T19:03:58Z","timestamp":1772910238165,"version":"3.50.1"},"publisher-location":"New York, NY, USA","reference-count":106,"publisher":"ACM","license":[{"start":{"date-parts":[[2021,6,9]],"date-time":"2021-06-09T00:00:00Z","timestamp":1623196800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"Innosuisse\/SNF BRIDGE Discovery","award":["40B2-0_187132"],"award-info":[{"award-number":["40B2-0_187132"]}]},{"name":"European Union Horizon 2020 Research and Innovation Programme","award":["DAPHNE, 957407"],"award-info":[{"award-number":["DAPHNE, 957407"]}]},{"name":"Swiss National Science Foundation","award":["200021_184628"],"award-info":[{"award-number":["200021_184628"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2021,6,9]]},"DOI":"10.1145\/3448016.3459240","type":"proceedings-article","created":{"date-parts":[[2021,6,18]],"date-time":"2021-06-18T17:22:39Z","timestamp":1624036959000},"page":"857-871","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":96,"title":["Towards Demystifying Serverless Machine Learning Training"],"prefix":"10.1145","author":[{"given":"Jiawei","family":"Jiang","sequence":"first","affiliation":[{"name":"ETH Zurich, Zurich, Switzerland"}]},{"given":"Shaoduo","family":"Gan","sequence":"additional","affiliation":[{"name":"ETH Zurich, Zurich, Switzerland"}]},{"given":"Yue","family":"Liu","sequence":"additional","affiliation":[{"name":"ETH Zurich, Zurich, Switzerland"}]},{"given":"Fanlin","family":"Wang","sequence":"additional","affiliation":[{"name":"ETH Zurich, Zurich, Switzerland"}]},{"given":"Gustavo","family":"Alonso","sequence":"additional","affiliation":[{"name":"ETH 
Zurich, Zurich, Switzerland"}]},{"given":"Ana","family":"Klimovic","sequence":"additional","affiliation":[{"name":"ETH Zurich, Zurich, Switzerland"}]},{"given":"Ankit","family":"Singla","sequence":"additional","affiliation":[{"name":"ETH Zurich, Zurich, Switzerland"}]},{"given":"Wentao","family":"Wu","sequence":"additional","affiliation":[{"name":"Microsoft Research, Redmond, WA, USA"}]},{"given":"Ce","family":"Zhang","sequence":"additional","affiliation":[{"name":"ETH Zurich, Zurich, Switzerland"}]}],"member":"320","published-online":{"date-parts":[[2021,6,18]]},"reference":[{"key":"e_1_3_2_2_1_1","doi-asserted-by":"publisher","DOI":"10.1145\/1376616.1376712"},{"key":"e_1_3_2_2_2_1","volume-title":"12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16)","author":"Abadi Mart\u00edn","year":"2016","unstructured":"Mart\u00edn Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. 2016. Tensorflow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16). 265--283."},{"key":"e_1_3_2_2_3_1","volume-title":"Yggdrasil: An Optimized System for Training Deep Decision Trees at Scale. In NIPS . 3810--3818.","author":"Abuzaid Firas","year":"2016","unstructured":"Firas Abuzaid, Joseph K Bradley, Feynman T Liang, Andrew Feng, Lee Yang, Matei Zaharia, and Ameet S Talwalkar. 2016. Yggdrasil: An Optimized System for Training Deep Decision Trees at Scale. In NIPS. 3810--3818. Firas Abuzaid, Joseph K Bradley, Feynman T Liang, Andrew Feng, Lee Yang, Matei Zaharia, and Ameet S Talwalkar. 
2016. Yggdrasil: An Optimized System for Training Deep Decision Trees at Scale. In NIPS . 3810--3818."},{"key":"e_1_3_2_2_4_1","doi-asserted-by":"publisher","DOI":"10.1109\/CDC.2012.6426626"},{"key":"e_1_3_2_2_5_1","volume-title":"SAND: Towards High-Performance Serverless Computing. In 2018 USENIX Annual Technical Conference (USENIX ATC 18)","author":"Akkus Istemi Ekin","year":"2018","unstructured":"Istemi Ekin Akkus , Ruichuan Chen , Ivica Rimac , Manuel Stein , Klaus Satzke , Andre Beck , Paarijaat Aditya , and Volker Hilt . 2018 . SAND: Towards High-Performance Serverless Computing. In 2018 USENIX Annual Technical Conference (USENIX ATC 18) . 923--935. Istemi Ekin Akkus, Ruichuan Chen, Ivica Rimac, Manuel Stein, Klaus Satzke, Andre Beck, Paarijaat Aditya, and Volker Hilt. 2018. SAND: Towards High-Performance Serverless Computing. In 2018 USENIX Annual Technical Conference (USENIX ATC 18). 923--935."},{"key":"e_1_3_2_2_6_1","volume-title":"QSGD: Communication-efficient SGD via gradient quantization and encoding. In Advances in Neural Information Processing Systems. 1709--1720.","author":"Alistarh Dan","year":"2017","unstructured":"Dan Alistarh , Demjan Grubic , Jerry Li , Ryota Tomioka , and Milan Vojnovic . 2017 . QSGD: Communication-efficient SGD via gradient quantization and encoding. In Advances in Neural Information Processing Systems. 1709--1720. Dan Alistarh, Demjan Grubic, Jerry Li, Ryota Tomioka, and Milan Vojnovic. 2017. QSGD: Communication-efficient SGD via gradient quantization and encoding. In Advances in Neural Information Processing Systems. 1709--1720."},{"key":"e_1_3_2_2_7_1","unstructured":"Dan Alistarh Torsten Hoefler Mikael Johansson Sarit Kririrat Nikola Konstantinov and Cedric Renggli. 2018b. The convergence of sparsified gradient methods. In Advances in Neural Information Processing Systems 31. 5973--5983.  Dan Alistarh Torsten Hoefler Mikael Johansson Sarit Kririrat Nikola Konstantinov and Cedric Renggli. 2018b. 
The convergence of sparsified gradient methods. In Advances in Neural Information Processing Systems 31. 5973--5983."},{"key":"e_1_3_2_2_8_1","unstructured":"Dan-Adrian Alistarh Zeyuan Allen-Zhu and Jerry Li. 2018a. Byzantine Stochastic Gradient Descent. In Advances in Neural Information Processing Systems. 4618--4628.  Dan-Adrian Alistarh Zeyuan Allen-Zhu and Jerry Li. 2018a. Byzantine Stochastic Gradient Descent. In Advances in Neural Information Processing Systems. 4618--4628."},{"key":"e_1_3_2_2_9_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-46759-7_15"},{"key":"e_1_3_2_2_10_1","unstructured":"Amazon. [n.d.] a. Amazon Serverless ML Training. https:\/\/aws.amazon.com\/blogs\/machine-learning\/code-free-machine-learning-automl-with-autogluon-amazon-sagemaker-and-aws-lambda\/.  Amazon. [n.d.] a. Amazon Serverless ML Training. https:\/\/aws.amazon.com\/blogs\/machine-learning\/code-free-machine-learning-automl-with-autogluon-amazon-sagemaker-and-aws-lambda\/."},{"key":"e_1_3_2_2_11_1","unstructured":"Amazon. [n.d.] b. AWS DynamoDB. https:\/\/aws.amazon.com\/dynamodb\/.  Amazon. [n.d.] b. AWS DynamoDB. https:\/\/aws.amazon.com\/dynamodb\/."},{"key":"e_1_3_2_2_12_1","unstructured":"Amazon. [n.d.] c. AWS DynamoDB Limitations. https:\/\/docs.aws.amazon.com\/amazondynamodb\/latest\/developerguide\/Limits.html##limits-items .  Amazon. [n.d.] c. AWS DynamoDB Limitations. https:\/\/docs.aws.amazon.com\/amazondynamodb\/latest\/developerguide\/Limits.html##limits-items ."},{"key":"e_1_3_2_2_13_1","unstructured":"Amazon. [n.d.] d. AWS ElastiCache. https:\/\/aws.amazon.com\/elasticache\/.  Amazon. [n.d.] d. AWS ElastiCache. https:\/\/aws.amazon.com\/elasticache\/."},{"key":"e_1_3_2_2_14_1","unstructured":"Amazon. [n.d.] e. AWS Lambda. https:\/\/aws.amazon.com\/lambda\/.  Amazon. [n.d.] e. AWS Lambda. https:\/\/aws.amazon.com\/lambda\/."},{"key":"e_1_3_2_2_15_1","unstructured":"Amazon. [n.d.] f. AWS Lambda Limitations. 
https:\/\/docs.aws.amazon.com\/lambda\/latest\/dg\/gettingstarted-limits.html."},{"key":"e_1_3_2_2_16_1","unstructured":"Amazon. [n.d.] g. AWS Lambda Pricing. https:\/\/aws.amazon.com\/ec2\/pricing\/on-demand\/."},{"key":"e_1_3_2_2_17_1","unstructured":"Amazon. [n.d.] h. AWS Lambda Redis vs. Memcached. https:\/\/aws.amazon.com\/elasticache\/redis-vs-memcached\/."},{"key":"e_1_3_2_2_18_1","unstructured":"Amazon. [n.d.] i. AWS S3. https:\/\/aws.amazon.com\/s3\/."},{"key":"e_1_3_2_2_19_1","volume-title":"International Conference on Machine Learning. 2454--2462","author":"Aybat Necdet","year":"2015","unstructured":"Necdet Aybat, Zi Wang, and Garud Iyengar. 2015. An asynchronous distributed proximal gradient method for composite convex optimization. In International Conference on Machine Learning. 2454--2462."},{"key":"e_1_3_2_2_20_1","volume-title":"Research Advances in Cloud Computing","author":"Baldini Ioana","year":"2017","unstructured":"Ioana Baldini, Paul Castro, Kerry Chang, Perry Cheng, Stephen Fink, Vatche Ishakian, Nick Mitchell, Vinod Muthusamy, Rodric Rabbah, Aleksander Slominski, et al. 2017. Serverless computing: Current trends and open problems. In Research Advances in Cloud Computing. 1--20."},{"key":"e_1_3_2_2_21_1","volume-title":"2019 USENIX Conference on Operational Machine Learning (OpML 19)","author":"Bhattacharjee Anirban","year":"2019","unstructured":"Anirban Bhattacharjee, Yogesh Barve, Shweta Khare, Shunxing Bao, Aniruddha Gokhale, and Thomas Damiano. 2019. Stratum: A serverless framework for the lifecycle management of machine learning-based data analytics tasks. In 2019 USENIX Conference on Operational Machine Learning (OpML 19). 59--61."},{"key":"e_1_3_2_2_22_1","doi-asserted-by":"publisher","DOI":"10.14778\/2732286.2732292"},{"key":"e_1_3_2_2_23_1","volume-title":"Foundations and Trends in Machine Learning","author":"Boyd Stephen","year":"2011","unstructured":"Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, Jonathan Eckstein, et al. 2011. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, Vol. 3, 1 (2011), 1--122."},{"key":"e_1_3_2_2_24_1","volume-title":"Workshop on Systems for ML and Open Source Software at NeurIPS","volume":"2018","author":"Carreira Joao","year":"2018","unstructured":"Joao Carreira, Pedro Fonseca, Alexey Tumanov, Andrew Zhang, and Randy Katz. 2018. A case for serverless machine learning. In Workshop on Systems for ML and Open Source Software at NeurIPS, Vol. 2018. 
"},{"key":"e_1_3_2_2_25_1","doi-asserted-by":"publisher","DOI":"10.1145\/3357223.3362711"},{"key":"e_1_3_2_2_26_1","unstructured":"Sorathan Chaturapruek, John C Duchi, and Christopher R\u00e9. 2015. Asynchronous stochastic convex optimization: the noise is in the noise and SGD don't care. In Advances in Neural Information Processing Systems. 1531--1539."},{"key":"e_1_3_2_2_27_1","doi-asserted-by":"publisher","DOI":"10.1145\/2939672.2939785"},{"key":"e_1_3_2_2_28_1","volume-title":"International Conference on Machine Learning. 1388--1396","author":"Colin Igor","year":"2016","unstructured":"Igor Colin, Aur\u00e9lien Bellet, Joseph Salmon, and St\u00e9phan Cl\u00e9men\u00e7on. 2016. Gossip dual averaging for decentralized optimization of pairwise functions. In International Conference on Machine Learning. 1388--1396."},{"key":"e_1_3_2_2_29_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v29i1.9195"},{"key":"e_1_3_2_2_30_1","volume-title":"Advances in Neural Information Processing Systems","author":"Dean Jeffrey","year":"2012","unstructured":"Jeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Marc'aurelio Ranzato, Andrew Senior, Paul Tucker, Ke Yang, et al. 2012. Large scale distributed deep networks. In Advances in Neural Information Processing Systems. 1223--1231. 
Jeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Marc'aurelio Ranzato, Andrew Senior, Paul Tucker, Ke Yang, et almbox. 2012. Large scale distributed deep networks. In Advances in Neural Information Processing Systems. 1223--1231."},{"key":"e_1_3_2_2_31_1","doi-asserted-by":"publisher","DOI":"10.14778\/2994509.2994515"},{"key":"e_1_3_2_2_32_1","doi-asserted-by":"publisher","DOI":"10.1145\/2094114.2094126"},{"key":"e_1_3_2_2_33_1","doi-asserted-by":"publisher","DOI":"10.1145\/3318464.3386137"},{"key":"e_1_3_2_2_34_1","doi-asserted-by":"publisher","DOI":"10.1109\/CLOUD.2018.00049"},{"key":"e_1_3_2_2_35_1","doi-asserted-by":"publisher","DOI":"10.1145\/3343737.3343750"},{"key":"e_1_3_2_2_36_1","doi-asserted-by":"publisher","DOI":"10.14778\/3342263.3342273"},{"key":"e_1_3_2_2_37_1","unstructured":"Google. [n.d.]. Google Cloud Functions. https:\/\/cloud.google.com\/functions\/.  Google. [n.d.]. Google Cloud Functions. https:\/\/cloud.google.com\/functions\/."},{"key":"e_1_3_2_2_38_1","volume-title":"Argonne Distinguished Fellow Emeritus Ewing Lusk, and Anthony Skjellum","author":"Gropp William","year":"1999","unstructured":"William Gropp , William D Gropp , Ewing Lusk , Argonne Distinguished Fellow Emeritus Ewing Lusk, and Anthony Skjellum . 1999 . Using MPI: portable parallel programming with the message-passing interface. Vol. 1 . MIT press . William Gropp, William D Gropp, Ewing Lusk, Argonne Distinguished Fellow Emeritus Ewing Lusk, and Anthony Skjellum. 1999. Using MPI: portable parallel programming with the message-passing interface. Vol. 1. MIT press."},{"key":"e_1_3_2_2_39_1","volume-title":"Oversketched newton: Fast convex optimization for serverless systems. arXiv preprint arXiv:1903.08857","author":"Gupta Vipul","year":"2019","unstructured":"Vipul Gupta , Swanand Kadhe , Thomas Courtade , Michael W Mahoney , and Kannan Ramchandran . 2019. Oversketched newton: Fast convex optimization for serverless systems. 
arXiv preprint arXiv:1903.08857 ( 2019 ). Vipul Gupta, Swanand Kadhe, Thomas Courtade, Michael W Mahoney, and Kannan Ramchandran. 2019. Oversketched newton: Fast convex optimization for serverless systems. arXiv preprint arXiv:1903.08857 (2019)."},{"key":"e_1_3_2_2_40_1","volume-title":"Sangeetha Abdu Jyothi, and Roy H Campbell","author":"Hashemi Sayed Hadi","year":"2019","unstructured":"Sayed Hadi Hashemi , Sangeetha Abdu Jyothi, and Roy H Campbell . 2019 . Tictac : Accelerating distributed deep learning with communication scheduling. In SysML . Sayed Hadi Hashemi, Sangeetha Abdu Jyothi, and Roy H Campbell. 2019. Tictac: Accelerating distributed deep learning with communication scheduling. In SysML ."},{"key":"e_1_3_2_2_41_1","volume-title":"Advances In Neural Information Processing Systems","volume":"31","author":"He Lie","year":"2018","unstructured":"Lie He , An Bian , and Martin Jaggi . 2018 . COLA: Decentralized Linear Learning . In Advances In Neural Information Processing Systems , Vol. 31 . Lie He, An Bian, and Martin Jaggi. 2018. COLA: Decentralized Linear Learning. In Advances In Neural Information Processing Systems, Vol. 31."},{"key":"e_1_3_2_2_42_1","volume-title":"Serverless Computing: One Step Forward, Two Steps Back. In CIDR .","author":"Hellerstein Joseph M.","year":"2019","unstructured":"Joseph M. Hellerstein , Jose M. Faleiro , Joseph Gonzalez , Johann Schleier-Smith , Vikram Sreekanti , Alexey Tumanov , and Chenggang Wu . 2019 . Serverless Computing: One Step Forward, Two Steps Back. In CIDR . Joseph M. Hellerstein, Jose M. Faleiro, Joseph Gonzalez, Johann Schleier-Smith, Vikram Sreekanti, Alexey Tumanov, and Chenggang Wu. 2019. Serverless Computing: One Step Forward, Two Steps Back. 
In CIDR ."},{"key":"e_1_3_2_2_43_1","volume-title":"Eugene Fratkin, Aleksander Gorajek, Kee Siong Ng, Caleb Welton, Xixuan Feng, Kun Li, et almbox.","author":"Hellerstein Joseph M","year":"2012","unstructured":"Joseph M Hellerstein , Christoper R\u00e9 , Florian Schoppmann , Daisy Zhe Wang , Eugene Fratkin, Aleksander Gorajek, Kee Siong Ng, Caleb Welton, Xixuan Feng, Kun Li, et almbox. 2012 . The MADlib Analytics Library . Proceedings of the VLDB Endowment , Vol. 5 , 12 (2012). Joseph M Hellerstein, Christoper R\u00e9, Florian Schoppmann, Daisy Zhe Wang, Eugene Fratkin, Aleksander Gorajek, Kee Siong Ng, Caleb Welton, Xixuan Feng, Kun Li, et almbox. 2012. The MADlib Analytics Library. Proceedings of the VLDB Endowment , Vol. 5, 12 (2012)."},{"key":"e_1_3_2_2_44_1","doi-asserted-by":"publisher","DOI":"10.5555\/3027041.3027047"},{"key":"e_1_3_2_2_45_1","volume-title":"Phillip B Gibbons, Garth A Gibson, Greg Ganger, and Eric P Xing.","author":"Ho Qirong","year":"2013","unstructured":"Qirong Ho , James Cipar , Henggang Cui , Seunghak Lee , Jin Kyu Kim , Phillip B Gibbons, Garth A Gibson, Greg Ganger, and Eric P Xing. 2013 . More effective distributed ml via a stale synchronous parallel parameter server. In Advances in Neural Information Processing Systems . 1223--1231. Qirong Ho, James Cipar, Henggang Cui, Seunghak Lee, Jin Kyu Kim, Phillip B Gibbons, Garth A Gibson, Greg Ganger, and Eric P Xing. 2013. More effective distributed ml via a stale synchronous parallel parameter server. In Advances in Neural Information Processing Systems. 1223--1231."},{"key":"e_1_3_2_2_46_1","volume-title":"14th USENIX Symposium on Networked Systems Design and Implementation (NSDI 17)","author":"Hsieh Kevin","year":"2017","unstructured":"Kevin Hsieh , Aaron Harlap , Nandita Vijaykumar , Dimitris Konomis , Gregory R Ganger , Phillip B Gibbons , and Onur Mutlu . 2017 . Gaia: Geo-distributed machine learning approaching LAN speeds . 
In 14th USENIX Symposium on Networked Systems Design and Implementation (NSDI 17) . 629--647. Kevin Hsieh, Aaron Harlap, Nandita Vijaykumar, Dimitris Konomis, Gregory R Ganger, Phillip B Gibbons, and Onur Mutlu. 2017. Gaia: Geo-distributed machine learning approaching LAN speeds. In 14th USENIX Symposium on Networked Systems Design and Implementation (NSDI 17). 629--647."},{"key":"e_1_3_2_2_47_1","doi-asserted-by":"publisher","DOI":"10.1145\/3187009.3177734"},{"key":"e_1_3_2_2_48_1","doi-asserted-by":"publisher","DOI":"10.1109\/IC2E.2018.00052"},{"key":"e_1_3_2_2_49_1","doi-asserted-by":"publisher","DOI":"10.1145\/3318464.3380575"},{"key":"e_1_3_2_2_50_1","doi-asserted-by":"publisher","DOI":"10.1145\/3183713.3196892"},{"key":"e_1_3_2_2_51_1","doi-asserted-by":"publisher","DOI":"10.1145\/3035918.3035933"},{"key":"e_1_3_2_2_52_1","doi-asserted-by":"publisher","DOI":"10.1145\/3183713.3196894"},{"key":"e_1_3_2_2_53_1","doi-asserted-by":"publisher","DOI":"10.1007\/s00778-019-00596-3"},{"key":"e_1_3_2_2_54_1","doi-asserted-by":"publisher","DOI":"10.1145\/3035918.3064042"},{"key":"e_1_3_2_2_55_1","doi-asserted-by":"publisher","DOI":"10.14778\/3297753.3297756"},{"key":"e_1_3_2_2_56_1","doi-asserted-by":"crossref","unstructured":"Can Karakus Yifan Sun Suhas Diggavi and Wotao Yin. 2017. Straggler mitigation in distributed optimization through data encoding. In Advances in Neural Information Processing Systems. 5434--5442.  Can Karakus Yifan Sun Suhas Diggavi and Wotao Yin. 2017. Straggler mitigation in distributed optimization through data encoding. In Advances in Neural Information Processing Systems. 5434--5442.","DOI":"10.1109\/ISIT.2017.8007058"},{"key":"e_1_3_2_2_57_1","volume-title":"2018 USENIX Annual Technical Conference (USENIX ATC 18)","author":"Klimovic Ana","year":"2018","unstructured":"Ana Klimovic , Yawen Wang , Christos Kozyrakis , Patrick Stuedi , Jonas Pfefferle , and Animesh Trivedi . 2018 a. Understanding ephemeral storage for serverless analytics . 
In 2018 USENIX Annual Technical Conference (USENIX ATC 18) . 789--794. Ana Klimovic, Yawen Wang, Christos Kozyrakis, Patrick Stuedi, Jonas Pfefferle, and Animesh Trivedi. 2018a. Understanding ephemeral storage for serverless analytics. In 2018 USENIX Annual Technical Conference (USENIX ATC 18). 789--794."},{"key":"e_1_3_2_2_58_1","volume-title":"13th USENIX Symposium on Operating Systems Design and Implementation (OSDI 18)","author":"Klimovic Ana","year":"2018","unstructured":"Ana Klimovic , Yawen Wang , Patrick Stuedi , Animesh Trivedi , Jonas Pfefferle , and Christos Kozyrakis . 2018 b. Pocket: Elastic ephemeral storage for serverless analytics . In 13th USENIX Symposium on Operating Systems Design and Implementation (OSDI 18) . 427--444. Ana Klimovic, Yawen Wang, Patrick Stuedi, Animesh Trivedi, Jonas Pfefferle, and Christos Kozyrakis. 2018b. Pocket: Elastic ephemeral storage for serverless analytics. In 13th USENIX Symposium on Operating Systems Design and Implementation (OSDI 18). 427--444."},{"key":"e_1_3_2_2_59_1","volume-title":"International Conference on Machine Learning . PMLR, 3478--3487","author":"Koloskova Anastasia","year":"2019","unstructured":"Anastasia Koloskova , Sebastian Stich , and Martin Jaggi . 2019 . Decentralized stochastic optimization and gossip algorithms with compressed communication . In International Conference on Machine Learning . PMLR, 3478--3487 . Anastasia Koloskova, Sebastian Stich, and Martin Jaggi. 2019. Decentralized stochastic optimization and gossip algorithms with compressed communication. In International Conference on Machine Learning . PMLR, 3478--3487."},{"key":"e_1_3_2_2_60_1","volume-title":"CIDR","volume":"1","author":"Kraska Tim","year":"2013","unstructured":"Tim Kraska , Ameet Talwalkar , John C Duchi , Rean Griffith , Michael J Franklin , and Michael I Jordan . 2013 . MLbase: A Distributed Machine-learning System .. In CIDR , Vol. 1 . 2--1. 
Tim Kraska, Ameet Talwalkar, John C Duchi, Rean Griffith, Michael J Franklin, and Michael I Jordan. 2013. MLbase: A Distributed Machine-learning System.. In CIDR, Vol. 1. 2--1."},{"key":"e_1_3_2_2_61_1","unstructured":"Alex Krizhevsky Ilya Sutskever and Geoffrey E Hinton. 2012. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems. 1097--1105.  Alex Krizhevsky Ilya Sutskever and Geoffrey E Hinton. 2012. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems. 1097--1105."},{"key":"e_1_3_2_2_62_1","doi-asserted-by":"publisher","DOI":"10.5555\/1005332.1005345"},{"key":"e_1_3_2_2_63_1","doi-asserted-by":"crossref","unstructured":"Mu Li David G Andersen Alexander J Smola and Kai Yu. 2014. Communication efficient distributed machine learning with the parameter server. In Advances in Neural Information Processing Systems. 19--27.  Mu Li David G Andersen Alexander J Smola and Kai Yu. 2014. Communication efficient distributed machine learning with the parameter server. In Advances in Neural Information Processing Systems. 19--27.","DOI":"10.1145\/2640087.2644155"},{"key":"e_1_3_2_2_64_1","doi-asserted-by":"publisher","DOI":"10.14778\/3415478.3415530"},{"key":"e_1_3_2_2_65_1","unstructured":"Xiangru Lian Ce Zhang Huan Zhang Cho-Jui Hsieh Wei Zhang and Ji Liu. 2017. Can decentralized algorithms outperform centralized algorithms? a case study for decentralized parallel stochastic gradient descent. In Advances in Neural Information Processing Systems. 5336--5346.  Xiangru Lian Ce Zhang Huan Zhang Cho-Jui Hsieh Wei Zhang and Ji Liu. 2017. Can decentralized algorithms outperform centralized algorithms? a case study for decentralized parallel stochastic gradient descent. In Advances in Neural Information Processing Systems. 5336--5346."},{"key":"e_1_3_2_2_66_1","volume-title":"Use Local SGD. 
In International Conference on Learning Representations.","author":"Lin Tao","year":"2019","unstructured":"Tao Lin, Sebastian U Stich, Kumar Kshitij Patel, and Martin Jaggi. 2019. Don't Use Large Mini-batches, Use Local SGD. In International Conference on Learning Representations."},{"key":"e_1_3_2_2_67_1","doi-asserted-by":"publisher","DOI":"10.1561\/1900000062"},{"key":"e_1_3_2_2_68_1","volume-title":"International Conference on Machine Learning. 1973--1982","author":"Ma Chenxin","year":"2015","unstructured":"Chenxin Ma, Virginia Smith, Martin Jaggi, Michael Jordan, Peter Richt\u00e1rik, and Martin Tak\u00e1\u010d. 2015. Adding vs. averaging in distributed primal-dual optimization. In International Conference on Machine Learning. 1973--1982."},{"key":"e_1_3_2_2_69_1","doi-asserted-by":"publisher","DOI":"10.14778\/3236187.3236188"},{"key":"e_1_3_2_2_70_1","volume-title":"KungFu: Making Training in Distributed Machine Learning Adaptive. In 14th USENIX Symposium on Operating Systems Design and Implementation (OSDI 20)","author":"Mai Luo","year":"2020","unstructured":"Luo Mai, Guo Li, Marcel Wagenl\u00e4nder, Konstantinos Fertakis, Andrei-Octavian Brabete, and Peter Pietzuch. 2020. KungFu: Making Training in Distributed Machine Learning Adaptive. In 14th USENIX Symposium on Operating Systems Design and Implementation (OSDI 20). 937--954. Luo Mai, Guo Li, Marcel Wagenl\u00e4nder, Konstantinos Fertakis, Andrei-Octavian Brabete, and Peter Pietzuch. 2020. KungFu: Making Training in Distributed Machine Learning Adaptive. 
In 14th USENIX Symposium on Operating Systems Design and Implementation (OSDI 20) . 937--954."},{"key":"e_1_3_2_2_71_1","volume-title":"15th Workshop on Hot Topics in Operating Systems (HotOS 15)","author":"McSherry Frank","year":"2015","unstructured":"Frank McSherry , Michael Isard , and Derek G Murray . 2015 . Scalability! But at what COST? . In 15th Workshop on Hot Topics in Operating Systems (HotOS 15) . Frank McSherry, Michael Isard, and Derek G Murray. 2015. Scalability! But at what COST?. In 15th Workshop on Hot Topics in Operating Systems (HotOS 15) ."},{"key":"e_1_3_2_2_72_1","doi-asserted-by":"publisher","DOI":"10.5555\/2946645.2946679"},{"key":"e_1_3_2_2_73_1","unstructured":"Microsoft. [n.d.] a. Azure Functions. https:\/\/azure.microsoft.com\/en-us\/services\/functions\/.  Microsoft. [n.d.] a. Azure Functions. https:\/\/azure.microsoft.com\/en-us\/services\/functions\/."},{"key":"e_1_3_2_2_74_1","unstructured":"Microsoft. [n.d.] b. Azure HDInsight. https:\/\/docs.microsoft.com\/en-us\/azure\/hdinsight\/.  Microsoft. [n.d.] b. Azure HDInsight. https:\/\/docs.microsoft.com\/en-us\/azure\/hdinsight\/."},{"key":"e_1_3_2_2_75_1","unstructured":"MIT. [n.d.]. StarCluster. http:\/\/star.mit.edu\/cluster\/.  MIT. [n.d.]. StarCluster. http:\/\/star.mit.edu\/cluster\/."},{"key":"e_1_3_2_2_76_1","doi-asserted-by":"publisher","DOI":"10.1145\/3318464.3389758"},{"key":"e_1_3_2_2_77_1","doi-asserted-by":"publisher","DOI":"10.1145\/2733373.2807410"},{"key":"e_1_3_2_2_78_1","unstructured":"Oracle. 2019. Scaling R to the Enterprise. https:\/\/www.oracle.com\/technetwork\/database\/database-technologies\/r\/r-enterprise\/bringing-r-to-the-enterprise-1956618.pdf .  Oracle. 2019. Scaling R to the Enterprise. 
https:\/\/www.oracle.com\/technetwork\/database\/database-technologies\/r\/r-enterprise\/bringing-r-to-the-enterprise-1956618.pdf ."},{"key":"e_1_3_2_2_79_1","doi-asserted-by":"publisher","DOI":"10.1145\/3341301.3359642"},{"key":"e_1_3_2_2_80_1","doi-asserted-by":"publisher","DOI":"10.1145\/3318464.3380609"},{"key":"e_1_3_2_2_81_1","volume-title":"16th USENIX Symposium on Networked Systems Design and Implementation (NSDI 19)","author":"Pu Qifan","year":"2019","unstructured":"Qifan Pu , Shivaram Venkataraman , and Ion Stoica . 2019 . Shuffling, fast and slow: Scalable analytics on serverless infrastructure . In 16th USENIX Symposium on Networked Systems Design and Implementation (NSDI 19) . 193--206. Qifan Pu, Shivaram Venkataraman, and Ion Stoica. 2019. Shuffling, fast and slow: Scalable analytics on serverless infrastructure. In 16th USENIX Symposium on Networked Systems Design and Implementation (NSDI 19) . 193--206."},{"key":"e_1_3_2_2_82_1","volume-title":"2nd USENIX Workshop on Hot Topics in Edge Computing (HotEdge 19)","author":"Rausch Thomas","year":"2019","unstructured":"Thomas Rausch , Waldemar Hummer , Vinod Muthusamy , Alexander Rashed , and Schahram Dustdar . 2019 . Towards a serverless platform for edge AI . In 2nd USENIX Workshop on Hot Topics in Edge Computing (HotEdge 19) . Thomas Rausch, Waldemar Hummer, Vinod Muthusamy, Alexander Rashed, and Schahram Dustdar. 2019. Towards a serverless platform for edge AI. In 2nd USENIX Workshop on Hot Topics in Edge Computing (HotEdge 19) ."},{"key":"e_1_3_2_2_83_1","volume-title":"Hogwild: A lock-free approach to parallelizing stochastic gradient descent. In Advances in Neural Information Processing Systems. 693--701.","author":"Recht Benjamin","year":"2011","unstructured":"Benjamin Recht , Christopher Re , Stephen Wright , and Feng Niu . 2011 . Hogwild: A lock-free approach to parallelizing stochastic gradient descent. In Advances in Neural Information Processing Systems. 693--701. 
"},{"key":"e_1_3_2_2_84_1","doi-asserted-by":"publisher","DOI":"10.14778\/3407790.3407796"},{"key":"e_1_3_2_2_85_1","doi-asserted-by":"publisher","DOI":"10.14778\/3352063.3352083"},{"key":"e_1_3_2_2_86_1","volume-title":"Numpywren: Serverless linear algebra. arXiv preprint arXiv:1810.09679","author":"Shankar Vaishaal","year":"2018","unstructured":"Vaishaal Shankar, Karl Krauth, Qifan Pu, Eric Jonas, Shivaram Venkataraman, Ion Stoica, Benjamin Recht, and Jonathan Ragan-Kelley. 2018. Numpywren: Serverless linear algebra. arXiv preprint arXiv:1810.09679 (2018)."},{"key":"e_1_3_2_2_87_1","first-page":"230","article-title":"CoCoA: A general framework for communication-efficient distributed optimization","volume":"18","author":"Smith Virginia","year":"2018","unstructured":"Virginia Smith, Simone Forte, Chenxin Ma, Martin Tak\u00e1\u010d, Michael I Jordan, and Martin Jaggi. 2018. CoCoA: A general framework for communication-efficient distributed optimization. Journal of Machine Learning Research, Vol. 18 (2018), 230.","journal-title":"Journal of Machine Learning Research"},{"key":"e_1_3_2_2_88_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICDE.2017.109"},{"key":"e_1_3_2_2_89_1","volume-title":"International Conference on Machine Learning. 
3368--3376","author":"Tandon Rashish","year":"2017","unstructured":"Rashish Tandon , Qi Lei , Alexandros G Dimakis , and Nikos Karampatziakis . 2017 . Gradient coding: Avoiding stragglers in distributed learning . In International Conference on Machine Learning. 3368--3376 . Rashish Tandon, Qi Lei, Alexandros G Dimakis, and Nikos Karampatziakis. 2017. Gradient coding: Avoiding stragglers in distributed learning. In International Conference on Machine Learning. 3368--3376."},{"key":"e_1_3_2_2_90_1","unstructured":"Hanlin Tang Shaoduo Gan Ce Zhang Tong Zhang and Ji Liu. 2018a. Communication Compression for Decentralized Training. In Advances in Neural Information Processing Systems. 7663--7673.  Hanlin Tang Shaoduo Gan Ce Zhang Tong Zhang and Ji Liu. 2018a. Communication Compression for Decentralized Training. In Advances in Neural Information Processing Systems. 7663--7673."},{"key":"e_1_3_2_2_91_1","volume-title":"International Conference on Machine Learning. 4848--4856","author":"Tang Hanlin","year":"2018","unstructured":"Hanlin Tang , Xiangru Lian , Ming Yan , Ce Zhang , and Ji Liu . 2018 b. D$^2$: Decentralized training over decentralized data . In International Conference on Machine Learning. 4848--4856 . Hanlin Tang, Xiangru Lian, Ming Yan, Ce Zhang, and Ji Liu. 2018b. D$^2$: Decentralized training over decentralized data. In International Conference on Machine Learning. 4848--4856."},{"key":"e_1_3_2_2_92_1","volume-title":"Distributed machine learning with a serverless architecture","author":"Wang Hao","unstructured":"Hao Wang , Di Niu , and Baochun Li. 2019. Distributed machine learning with a serverless architecture . In IEEE INFOCOM . 1288--1296. Hao Wang, Di Niu, and Baochun Li. 2019. Distributed machine learning with a serverless architecture. In IEEE INFOCOM . 1288--1296."},{"key":"e_1_3_2_2_93_1","volume-title":"ATOMO: Communication-efficient Learning via Atomic Sparsification. In Advances in Neural Information Processing Systems. 
9872--9883.","author":"Wang H","year":"2018","unstructured":"H Wang , S Sievert , S Liu , Z Charles , D Papailiopoulos , and SJ Wright . 2018 b. ATOMO: Communication-efficient Learning via Atomic Sparsification. In Advances in Neural Information Processing Systems. 9872--9883. H Wang, S Sievert, S Liu, Z Charles, D Papailiopoulos, and SJ Wright. 2018b. ATOMO: Communication-efficient Learning via Atomic Sparsification. In Advances in Neural Information Processing Systems. 9872--9883."},{"key":"e_1_3_2_2_94_1","volume-title":"Adaptive communication strategies to achieve the best error-runtime trade-off in local-update SGD. arXiv preprint arXiv:1810.08313","author":"Wang Jianyu","year":"2018","unstructured":"Jianyu Wang and Gauri Joshi . 2018. Adaptive communication strategies to achieve the best error-runtime trade-off in local-update SGD. arXiv preprint arXiv:1810.08313 ( 2018 ). Jianyu Wang and Gauri Joshi. 2018. Adaptive communication strategies to achieve the best error-runtime trade-off in local-update SGD. arXiv preprint arXiv:1810.08313 (2018)."},{"key":"e_1_3_2_2_95_1","volume-title":"2018 USENIX Annual Technical Conference (USENIX ATC 18)","author":"Wang Liang","year":"2018","unstructured":"Liang Wang , Mengyuan Li , Yinqian Zhang , Thomas Ristenpart , and Michael Swift . 2018 a. Peeking behind the curtains of serverless platforms . In 2018 USENIX Annual Technical Conference (USENIX ATC 18) . 133--146. Liang Wang, Mengyuan Li, Yinqian Zhang, Thomas Ristenpart, and Michael Swift. 2018a. Peeking behind the curtains of serverless platforms. In 2018 USENIX Annual Technical Conference (USENIX ATC 18). 133--146."},{"key":"e_1_3_2_2_96_1","unstructured":"Wei Wen Cong Xu Feng Yan Chunpeng Wu Yandan Wang Yiran Chen and Hai Li. 2017. TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning. In Advances in Neural Information Processing Systems. 1508--1518.  Wei Wen Cong Xu Feng Yan Chunpeng Wu Yandan Wang Yiran Chen and Hai Li. 2017. 
TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning. In Advances in Neural Information Processing Systems. 1508--1518."},{"key":"e_1_3_2_2_97_1","volume-title":"International Conference on Machine Learning . 5325--5333","author":"Wu Jiaxiang","year":"2018","unstructured":"Jiaxiang Wu , Weidong Huang , Junzhou Huang , and Tong Zhang . 2018 . Error compensated quantized SGD and its applications to large-scale distributed optimization . In International Conference on Machine Learning . 5325--5333 . Jiaxiang Wu, Weidong Huang, Junzhou Huang, and Tong Zhang. 2018. Error compensated quantized SGD and its applications to large-scale distributed optimization. In International Conference on Machine Learning . 5325--5333."},{"key":"e_1_3_2_2_98_1","volume-title":"International Conference on Machine Learning. 6893--6901","author":"Xie Cong","year":"2019","unstructured":"Cong Xie , Sanmi Koyejo , and Indranil Gupta . 2019 . Zeno: Distributed stochastic gradient descent with suspicion-based fault-tolerance . In International Conference on Machine Learning. 6893--6901 . Cong Xie, Sanmi Koyejo, and Indranil Gupta. 2019. Zeno: Distributed stochastic gradient descent with suspicion-based fault-tolerance. In International Conference on Machine Learning. 6893--6901."},{"key":"e_1_3_2_2_99_1","unstructured":"Yahoo. [n.d.]. YFCC100M. http:\/\/projects.dfki.uni-kl.de\/yfcc100m\/.  Yahoo. [n.d.]. YFCC100M. http:\/\/projects.dfki.uni-kl.de\/yfcc100m\/."},{"key":"e_1_3_2_2_100_1","doi-asserted-by":"publisher","DOI":"10.1137\/130943170"},{"key":"e_1_3_2_2_101_1","volume-title":"International Conference on Machine Learning . 4035--4043","author":"Zhang Hantian","year":"2017","unstructured":"Hantian Zhang , Jerry Li , Kaan Kara , Dan Alistarh , Ji Liu , and Ce Zhang . 2017 . Zipml: Training linear models with end-to-end low precision, and a little bit of deep learning . In International Conference on Machine Learning . 4035--4043 . 
Hantian Zhang, Jerry Li, Kaan Kara, Dan Alistarh, Ji Liu, and Ce Zhang. 2017. Zipml: Training linear models with end-to-end low precision, and a little bit of deep learning. In International Conference on Machine Learning . 4035--4043."},{"key":"e_1_3_2_2_102_1","doi-asserted-by":"publisher","DOI":"10.5555\/2567709.2567769"},{"key":"e_1_3_2_2_103_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICDE.2019.00194"},{"key":"e_1_3_2_2_104_1","volume-title":"International Conference on Machine Learning. 4120--4129","author":"Zheng Shuxin","year":"2017","unstructured":"Shuxin Zheng , Qi Meng , Taifeng Wang , Wei Chen , Nenghai Yu , Zhi-Ming Ma , and Tie-Yan Liu . 2017 a. Asynchronous stochastic gradient descent with delay compensation . In International Conference on Machine Learning. 4120--4129 . Shuxin Zheng, Qi Meng, Taifeng Wang, Wei Chen, Nenghai Yu, Zhi-Ming Ma, and Tie-Yan Liu. 2017a. Asynchronous stochastic gradient descent with delay compensation. In International Conference on Machine Learning. 4120--4129."},{"key":"e_1_3_2_2_105_1","volume-title":"International Conference on Machine Learning. 4120--4129","author":"Zheng Shuxin","year":"2017","unstructured":"Shuxin Zheng , Qi Meng , Taifeng Wang , Wei Chen , Nenghai Yu , Zhi-Ming Ma , and Tie-Yan Liu . 2017 b. Asynchronous stochastic gradient descent with delay compensation . In International Conference on Machine Learning. 4120--4129 . Shuxin Zheng, Qi Meng, Taifeng Wang, Wei Chen, Nenghai Yu, Zhi-Ming Ma, and Tie-Yan Liu. 2017b. Asynchronous stochastic gradient descent with delay compensation. In International Conference on Machine Learning. 4120--4129."},{"key":"e_1_3_2_2_106_1","unstructured":"Martin Zinkevich Markus Weimer Alexander J. Smola and Lihong Li. 2010. Parallelized Stochastic Gradient Descent. In Advances in Neural Information Processing Systems. 2595--2603.  Martin Zinkevich Markus Weimer Alexander J. Smola and Lihong Li. 2010. Parallelized Stochastic Gradient Descent. 
In Advances in Neural Information Processing Systems. 2595--2603."}],"event":{"name":"SIGMOD\/PODS '21: International Conference on Management of Data","location":"Virtual Event China","acronym":"SIGMOD\/PODS '21","sponsor":["SIGMOD ACM Special Interest Group on Management of Data"]},"container-title":["Proceedings of the 2021 International Conference on Management of Data"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3448016.3459240","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3448016.3459240","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T21:25:04Z","timestamp":1750195504000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3448016.3459240"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,6,9]]},"references-count":106,"alternative-id":["10.1145\/3448016.3459240","10.1145\/3448016"],"URL":"https:\/\/doi.org\/10.1145\/3448016.3459240","relation":{},"subject":[],"published":{"date-parts":[[2021,6,9]]},"assertion":[{"value":"2021-06-18","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}