{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T05:05:09Z","timestamp":1750309509080,"version":"3.41.0"},"reference-count":108,"publisher":"Association for Computing Machinery (ACM)","issue":"1","license":[{"start":{"date-parts":[[2025,3,12]],"date-time":"2025-03-12T00:00:00Z","timestamp":1741737600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"King Abdullah University of Science and Technology (KAUST) Office of Research Administration","award":["ORA-CRG2021-4699"],"award-info":[{"award-number":["ORA-CRG2021-4699"]}]},{"DOI":"10.13039\/501100004329","name":"Slovenian Research Agency","doi-asserted-by":"crossref","award":["J2-3047"],"award-info":[{"award-number":["J2-3047"]}],"id":[{"id":"10.13039\/501100004329","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Model. Perform. Eval. Comput. Syst."],"published-print":{"date-parts":[[2025,3,31]]},"abstract":"<jats:p>Growing concerns about centralized mining of personal data threatens to stifle further proliferation of machine learning (ML) applications. Consequently, a recent trend in ML training advocates for a paradigm shift \u2013 moving the computation of ML models from a centralized server to a federation of edge devices owned by the users whose data is to be mined. Though such decentralization aims to alleviate concerns related to raw data sharing, it introduces a set of challenges due to the hardware heterogeneity among the devices possessing the data. 
The heterogeneity may, in the most extreme cases, impede the participation of low-end devices in the training or even prevent the deployment of the ML model to such devices.<\/jats:p>\n          <jats:p\/>\n          <jats:p>Recent research in distributed collaborative machine learning (DCML) promises to address the issue of ML model training over heterogeneous devices. However, the actual extent to which the issue is solved remains unclear, especially as an independent investigation of the proposed methods\u2019 performance in realistic settings is missing. In this paper, we present a detailed survey and an evaluation of algorithms that aim to enable collaborative model training across diverse devices. We explore approaches that harness three major strategies for DCML, namely Knowledge Distillation, Split Learning, and Partial Training, and we conduct a thorough experimental evaluation of these approaches on a real-world testbed of 14 heterogeneous devices. Our analysis compares algorithms based on the resulting model accuracy, memory consumption, CPU utilization, network activity, and other relevant metrics, and provides guidelines for practitioners as well as pointers for future research in DCML.<\/jats:p>","DOI":"10.1145\/3708983","type":"journal-article","created":{"date-parts":[[2024,12,20]],"date-time":"2024-12-20T10:07:23Z","timestamp":1734689243000},"page":"1-35","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":1,"title":["Review and Comparative Evaluation of Resource-Adaptive Collaborative Training for Heterogeneous Edge Devices"],"prefix":"10.1145","volume":"10","author":[{"ORCID":"https:\/\/orcid.org\/0009-0008-4142-931X","authenticated-orcid":false,"given":"Boris","family":"Radovi\u010d","sequence":"first","affiliation":[{"name":"King Abdullah University of Science and Technology, Thuwal, Saudi Arabia"},{"name":"University of Ljubljana, Faculty of Computer and Information Science, Ljubljana, 
Slovenia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5051-4283","authenticated-orcid":false,"given":"Marco","family":"Canini","sequence":"additional","affiliation":[{"name":"King Abdullah University of Science and Technology, Thuwal, Saudi Arabia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9009-0024","authenticated-orcid":false,"given":"Veljko","family":"Pejovi\u0107","sequence":"additional","affiliation":[{"name":"University of Ljubljana, Faculty of Computer and Information Science, Ljubljana, Slovenia"},{"name":"Jozef Stefan Institute, Ljubljana, Slovenia"}]}],"member":"320","published-online":{"date-parts":[[2025,3,12]]},"reference":[{"key":"e_1_3_3_2_2","doi-asserted-by":"publisher","DOI":"10.1145\/3517207.3526969"},{"key":"e_1_3_3_3_2","doi-asserted-by":"publisher","DOI":"10.1145\/3552326.3567485"},{"key":"e_1_3_3_4_2","doi-asserted-by":"publisher","DOI":"10.1145\/3320269.3384740"},{"key":"e_1_3_3_5_2","volume-title":"NeurIPS","author":"Alam Samiul","year":"2022","unstructured":"Samiul Alam, Luyang Liu, Ming Yan, and Mi Zhang. 2022. FedRolex: Model-heterogeneous federated learning with rolling sub-model extraction. In NeurIPS."},{"key":"e_1_3_3_6_2","volume-title":"ICLR","author":"Anil Rohan","year":"2018","unstructured":"Rohan Anil, Gabriel Pereyra, Alexandre Passos, R\u00f3bert Orm\u00e1ndi, George E. Dahl, and Geoffrey E. Hinton. 2018. Large scale distributed neural network training through online distillation. In ICLR."},{"key":"e_1_3_3_7_2","unstructured":"Manoj Ghuhan Arivazhagan Vinay Aggarwal Aaditya Kumar Singh and Sunav Choudhary. 2019. Federated learning with personalization layers. (2019). arxiv:1912.00818 [cs.DC]"},{"key":"e_1_3_3_8_2","unstructured":"Gustav A. Baumgart Jaemin Shin Ali Payani Myungjin Lee and Ramana Rao Kompella. 2024. Not all federated learning algorithms are created equal: A performance evaluation study. (2024). arxiv:2403.17287 [cs.DC]"},{"key":"e_1_3_3_9_2","unstructured":"Daniel J. 
Beutel Taner Topal Akhil Mathur Xinchi Qiu Titouan Parcollet and Nicholas D. Lane. 2020. Flower: A friendly federated learning research framework. (2020). arxiv:2007.14390 [cs.DC]"},{"key":"e_1_3_3_10_2","unstructured":"Keith Bonawitz Hubert Eichner Wolfgang Grieskamp Dzmitry Huba Alex Ingerman Vladimir Ivanov Chloe Kiddon Jakub Kone\u010dn\u00fd Stefano Mazzocchi H. Brendan McMahan Timon Van Overveldt David Petrou Daniel Ramage and Jason Roselander. 2019. Towards federated learning at scale: System design. (2019). arxiv:1902.01046 [cs.DC]"},{"key":"e_1_3_3_11_2","volume-title":"ACM\/IEEE Symposium on Edge Computing (SEC)","author":"Bo\u017ei\u010d Janez","year":"2024","unstructured":"Janez Bo\u017ei\u010d, Am\u00e2ndio R. Faustino, Boris Radovi\u010d, Marco Canini, and Veljko Pejovi\u0107. 2024. Where is the testbed for my federated learning research?. In ACM\/IEEE Symposium on Edge Computing (SEC)."},{"key":"e_1_3_3_12_2","doi-asserted-by":"publisher","DOI":"10.1109\/IJCNN48605.2020.9207469"},{"key":"e_1_3_3_13_2","doi-asserted-by":"publisher","DOI":"10.1145\/1150402.1150464"},{"key":"e_1_3_3_14_2","unstructured":"S. Caldas J. Kone\u010dny H. B. McMahan et\u00a0al. 2018. Expanding the reach of federated learning by reducing client resource requirements. (2018). arxiv:1812.07210 [cs.DC]"},{"key":"e_1_3_3_15_2","unstructured":"Hongyan Chang Virat Shejwalkar Reza Shokri and Amir Houmansadr. 2019. Cronus: Robust and heterogeneous collaborative learning with black-box knowledge transfer. (2019). arxiv:1912.11279 [cs.DC]"},{"key":"e_1_3_3_16_2","unstructured":"Zachary Charles Kallista A. Bonawitz Stanislav Chiknavaryan Brendan McMahan and Blaise Ag\u00fcera y Arcas. 2022. Federated select: A primitive for communication- and memory-efficient federated learning. (2022). 
arxiv:2208.09432 [cs.DC]"},{"key":"e_1_3_3_17_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2022.findings-acl.277"},{"key":"e_1_3_3_18_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPRW56347.2022.00382"},{"key":"e_1_3_3_19_2","unstructured":"Sijie Cheng Jingwen Wu Yanghua Xiao and Yang Liu. 2021. FedGEMS: Federated learning of larger server models via selective knowledge fusion. (2021). arxiv:2110.11027 [cs.DC]"},{"key":"e_1_3_3_20_2","doi-asserted-by":"publisher","DOI":"10.24963\/ijcai.2022\/399"},{"key":"e_1_3_3_21_2","doi-asserted-by":"publisher","DOI":"10.1109\/JSTSP.2022.3231527"},{"key":"e_1_3_3_22_2","unstructured":"Ayush Chopra Surya Kant Sahu Abhishek Singh Abhinav Java Praneeth Vepakomma Vivek Sharma and Ramesh Raskar. 2021. AdaSplit: Adaptive trade-offs for resource-constrained distributed deep learning. arxiv:2112.01637 [cs.LG]"},{"key":"e_1_3_3_23_2","unstructured":"Luke Nicholas Darlow Elliot J. Crowley Antreas Antoniou and Amos J. Storkey. 2018. CINIC-10 is not ImageNet or CIFAR-10. (2018). arxiv:1810.03505 [cs.DC]"},{"key":"e_1_3_3_24_2","volume-title":"ICLR","author":"Diao Enmao","year":"2021","unstructured":"Enmao Diao, Jie Ding, and Vahid Tarokh. 2021. HeteroFL: Computation and communication efficient federated learning for heterogeneous clients. In ICLR."},{"key":"e_1_3_3_25_2","doi-asserted-by":"publisher","DOI":"10.3390\/s22165983"},{"key":"e_1_3_3_26_2","volume-title":"AISTATS","author":"Dun Chen","year":"2023","unstructured":"Chen Dun, Mirian Hipolito Garcia, Chris Jermaine, Dimitrios Dimitriadis, and Anastasios Kyrillidis. 2023. Efficient and light-weight federated learning via asynchronous distributed dropout. In AISTATS."},{"key":"e_1_3_3_27_2","doi-asserted-by":"crossref","DOI":"10.1201\/9781003214892","volume-title":"Conference on Uncertainty in Artificial Intelligence (UAI)","author":"Dun Chen","year":"2022","unstructured":"Chen Dun, Cameron R. Wolfe, Christopher M. Jermaine, and Anastasios Kyrillidis. 2022. 
ResIST: Layer-wise decomposition of ResNets for distributed training. In Conference on Uncertainty in Artificial Intelligence (UAI)."},{"key":"e_1_3_3_28_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICC51166.2024.10622512"},{"key":"e_1_3_3_29_2","doi-asserted-by":"publisher","DOI":"10.1109\/SRDS51746.2020.00017"},{"key":"e_1_3_3_30_2","doi-asserted-by":"publisher","DOI":"10.1109\/TC.2021.3135752"},{"key":"e_1_3_3_31_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-80432-9_34"},{"key":"e_1_3_3_32_2","volume-title":"NeurIPS","author":"Ghosh Avishek","year":"2020","unstructured":"Avishek Ghosh, Jichan Chung, Dong Yin, and Kannan Ramchandran. 2020. An efficient framework for clustered federated learning. In NeurIPS."},{"key":"e_1_3_3_33_2","volume-title":"ICML","author":"Gilad-Bachrach Ran","year":"2016","unstructured":"Ran Gilad-Bachrach, Nathan Dowlin, Kim Laine, Kristin E. Lauter, Michael Naehrig, and John Wernsing. 2016. CryptoNets: Applying neural networks to encrypted data with high throughput and accuracy. In ICML."},{"key":"e_1_3_3_34_2","unstructured":"Google. 2022. How Messages improves suggestions with federated technology. https:\/\/support.google.com\/messages\/answer\/9327902?hl=en. Accessed: 2023-11-11."},{"key":"e_1_3_3_35_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.jnca.2018.05.003"},{"key":"e_1_3_3_36_2","unstructured":"Andrew Hard Kanishka Rao Rajiv Mathews Fran\u00e7oise Beaufays Sean Augenstein Hubert Eichner Chlo\u00e9 Kiddon and Daniel Ramage. 2018. Federated learning for mobile keyboard prediction. (2018). arxiv:1811.03604"},{"key":"e_1_3_3_37_2","volume-title":"NeurIPS","author":"He Chaoyang","year":"2020","unstructured":"Chaoyang He, Murali Annavaram, and Salman Avestimehr. 2020. Group knowledge transfer: Federated learning of large CNNs at the edge. In NeurIPS."},{"key":"e_1_3_3_38_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.90"},{"key":"e_1_3_3_39_2","unstructured":"Geoffrey E. 
Hinton Oriol Vinyals and Jeffrey Dean. 2015. Distilling the knowledge in a neural network. (2015). arxiv:1503.02531 [cs.DC]"},{"key":"e_1_3_3_40_2","volume-title":"NeurIPS","author":"Horv\u00e1th Samuel","year":"2021","unstructured":"Samuel Horv\u00e1th, Stefanos Laskaridis, M\u00e1rio Almeida, Ilias Leontiadis, Stylianos I. Venieris, and Nicholas D. Lane. 2021. FjORD: Fair and accurate federated learning under heterogeneous targets with ordered dropout. In NeurIPS."},{"key":"e_1_3_3_41_2","volume-title":"ICML","author":"Hsieh Kevin","year":"2020","unstructured":"Kevin Hsieh, Amar Phanishayee, Onur Mutlu, and Phillip B. Gibbons. 2020. The non-IID data quagmire of decentralized machine learning. In ICML."},{"key":"e_1_3_3_42_2","unstructured":"Tzu-Ming Harry Hsu Hang Qi and Matthew Brown. 2019. Measuring the effects of non-identical data distribution for federated visual classification. (2019). arxiv:1909.06335 [cs.DC]"},{"key":"e_1_3_3_43_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.ins.2021.01.046"},{"key":"e_1_3_3_44_2","volume-title":"NeurIPS","author":"Huang Yanping","year":"2019","unstructured":"Yanping Huang, Youlong Cheng, Ankur Bapna, Orhan Firat, Dehao Chen, Mia Xu Chen, HyoukJoong Lee, Jiquan Ngiam, Quoc V. Le, Yonghui Wu, and Zhifeng Chen. 2019. GPipe: Efficient training of giant neural networks using pipeline parallelism. In NeurIPS."},{"key":"e_1_3_3_45_2","doi-asserted-by":"publisher","DOI":"10.1109\/TMC.2021.3070013"},{"key":"e_1_3_3_46_2","unstructured":"Eunjeong Jeong Seungeun Oh Hyesung Kim Jihong Park Mehdi Bennis and Seong-Lyun Kim. 2018. Communication-efficient on-device machine learning: Federated distillation and augmentation under non-IID private data. (2018). 
arXiv:1811.11479"},{"key":"e_1_3_3_47_2","doi-asserted-by":"publisher","DOI":"10.1109\/TNNLS.2022.3166101"},{"key":"e_1_3_3_48_2","article-title":"Computation and communication efficient federated learning with adaptive model pruning","author":"Jiang Zhida","year":"2024","unstructured":"Zhida Jiang, Yang Xu, Hongli Xu, Zhiyuan Wang, Jianchun Liu, Chen Qian, and Chunming Qiao. 2024. Computation and communication efficient federated learning with adaptive model pruning. IEEE Transactions on Mobile Computing (2024).","journal-title":"IEEE Transactions on Mobile Computing"},{"key":"e_1_3_3_49_2","doi-asserted-by":"publisher","DOI":"10.1109\/WCNC55385.2023.10118601"},{"key":"e_1_3_3_50_2","unstructured":"James Kirkpatrick Razvan Pascanu Neil C. Rabinowitz et\u00a0al. 2016. Overcoming catastrophic forgetting in neural networks. (2016). arxiv:1612.00796"},{"key":"e_1_3_3_51_2","unstructured":"A. Krizhevsky. 2009. Learning multiple layers of features from tiny images. (2009)."},{"key":"e_1_3_3_52_2","doi-asserted-by":"publisher","DOI":"10.1145\/3447993.3483278"},{"key":"e_1_3_3_53_2","unstructured":"Daliang Li and Junpu Wang. 2019. FedMD: Heterogenous federated learning via model distillation. (2019). arxiv:1910.03581 [cs.DC]"},{"key":"e_1_3_3_54_2","volume-title":"MLSys","author":"Li Tian","year":"2020","unstructured":"Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, and Virginia Smith. 2020. Federated optimization in heterogeneous networks. In MLSys."},{"key":"e_1_3_3_55_2","unstructured":"Paul Pu Liang Terrance Liu Ziyin Liu Ruslan Salakhutdinov and Louis-Philippe Morency. 2020. Think locally act globally: Federated learning with local and global representations. (2020). arxiv:2001.01523 [cs.DC]"},{"key":"e_1_3_3_56_2","article-title":"Accelerating federated learning with data and model parallelism in edge computing","author":"Liao Yunming","year":"2023","unstructured":"Yunming Liao, Yang Xu, Hongli Xu, Zhiwei Yao, Lun Wang, and Chunming Qiao. 
2023. Accelerating federated learning with data and model parallelism in edge computing. IEEE\/ACM Transactions on Networking (2023).","journal-title":"IEEE\/ACM Transactions on Networking"},{"key":"e_1_3_3_57_2","volume-title":"NeurIPS","author":"Lin Tao","year":"2020","unstructured":"Tao Lin, Lingjing Kong, Sebastian U. Stich, and Martin Jaggi. 2020. Ensemble distillation for robust model fusion in federated learning. In NeurIPS."},{"key":"e_1_3_3_58_2","unstructured":"Terrance Liu and Paul Liang. 2020. Federated learning with local and global representations. https:\/\/github.com\/pliang279\/LG-FedAvg. Accessed: 2024-03-06."},{"key":"e_1_3_3_59_2","volume-title":"AISTATS","author":"McMahan Brendan","year":"2017","unstructured":"Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Ag\u00fcera y Arcas. 2017. Communication-efficient learning of deep networks from decentralized data. In AISTATS."},{"key":"e_1_3_3_60_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-031-43415-0_23"},{"key":"e_1_3_3_61_2","volume-title":"AISTATS","author":"Nguyen John","year":"2022","unstructured":"John Nguyen, Kshitiz Malik, Hongyuan Zhan, Ashkan Yousefpour, Mike Rabbat, Mani Malek, and Dzmitry Huba. 2022. Federated learning with buffered asynchronous aggregation. In AISTATS."},{"key":"e_1_3_3_62_2","volume-title":"ICLR","author":"Nguyen John","year":"2023","unstructured":"John Nguyen, Jianyu Wang, Kshitiz Malik, Maziar Sanjabi, and Michael G. Rabbat. 2023. Where to begin? On the impact of pre-training and initialization in federated learning. In ICLR."},{"key":"e_1_3_3_63_2","unstructured":"Yue Niu Saurav Prakash Souvik Kundu Sunwoo Lee and Salman Avestimehr. 2022. Federated learning of large models at the edge via principal sub-model training. (2022). arxiv:2208.13141 [cs.DC]"},{"key":"e_1_3_3_64_2","unstructured":"Ziru Niu Hai Dong and A. Kai Qin. 2024. FedSPU: Personalized federated learning for resource-constrained devices with stochastic parameter update. 
(2024). arxiv:2403.11464 [cs.DC]"},{"key":"e_1_3_3_65_2","volume-title":"WWW","author":"Oh Seungeun","year":"2022","unstructured":"Seungeun Oh, Jihong Park, Praneeth Vepakomma, Sihun Baek, Ramesh Raskar, Mehdi Bennis, and Seong-Lyun Kim. 2022. LocFedMix-SL: Localize, federate, and mix for improved scalability, convergence, and latency in split learning. In WWW."},{"key":"e_1_3_3_66_2","unstructured":"Shraman Pal Mansi Uniyal Jihong Park Praneeth Vepakomma Ramesh Raskar Mehdi Bennis Moongu Jeon and Jinho Choi. 2021. Server-side local gradient averaging and learning rate acceleration for scalable split learning. (2021). arxiv:2112.05929 [cs.DC]"},{"key":"e_1_3_3_67_2","doi-asserted-by":"publisher","DOI":"10.1145\/3446382.3448362"},{"key":"e_1_3_3_68_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-01225-0_36"},{"key":"e_1_3_3_69_2","doi-asserted-by":"publisher","DOI":"10.1145\/3460120.3485259"},{"key":"e_1_3_3_70_2","article-title":"Advances and open problems in federated learning","author":"Kairouz Peter","year":"2021","unstructured":"Peter Kairouz et al. 2021. Advances and open problems in federated learning. Foundations and Trends\u00ae in Machine Learning (2021).","journal-title":"Foundations and Trends\u00ae in Machine Learning"},{"key":"e_1_3_3_71_2","volume-title":"ICML","author":"Pillutla Krishna","year":"2022","unstructured":"Krishna Pillutla, Kshitiz Malik, Abdelrahman Mohamed, Michael G. Rabbat, Maziar Sanjabi, and Lin Xiao. 2022. Federated learning with partial model personalization. In ICML."},{"key":"e_1_3_3_72_2","unstructured":"Maarten G. Poirot Praneeth Vepakomma Ken Chang Jayashree Kalpathy-Cramer Rajiv Gupta and Ramesh Raskar. 2019. Split learning for collaborative deep learning in healthcare. (2019). arxiv:1912.12115 [cs.DC]"},{"key":"e_1_3_3_73_2","volume-title":"ICLR","author":"Reddi Sashank J.","year":"2021","unstructured":"Sashank J. 
Reddi, Zachary Charles, Manzil Zaheer, Zachary Garrett, Keith Rush, Jakub Kone\u010dn\u00fd, Sanjiv Kumar, and Hugh Brendan McMahan. 2021. Adaptive federated optimization. In ICLR."},{"key":"e_1_3_3_74_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICDM.2010.127"},{"key":"e_1_3_3_75_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.comnet.2022.109380"},{"key":"e_1_3_3_76_2","doi-asserted-by":"publisher","DOI":"10.1109\/TNNLS.2021.3129371"},{"key":"e_1_3_3_77_2","doi-asserted-by":"publisher","DOI":"10.1109\/TNNLS.2020.3015958"},{"key":"e_1_3_3_78_2","unstructured":"Tao Shen Jie Zhang Xinkang Jia Fengda Zhang Gang Huang Pan Zhou Kun Kuang Fei Wu and Chao Wu. 2020. Federated mutual learning. (2020). arxiv:2006.16765 [cs.LG]"},{"key":"e_1_3_3_79_2","doi-asserted-by":"publisher","DOI":"10.1109\/TSP.2020.3046971"},{"key":"e_1_3_3_80_2","volume-title":"ICML","author":"Shulgin Egor","year":"2024","unstructured":"Egor Shulgin and Peter Richt\u00e1rik. 2024. Towards a better theoretical understanding of independent subnetwork training. In ICML."},{"key":"e_1_3_3_81_2","unstructured":"Dan Simmons. 2022. 17 Countries with GDPR-like data privacy laws. https:\/\/insights.comforte.com\/countries-with-gdpr-like-data-privacy-laws. Accessed: 2023-12-06."},{"key":"e_1_3_3_82_2","unstructured":"Abhishek Singh Praneeth Vepakomma Otkrist Gupta and Ramesh Raskar. 2019. Detailed comparison of communication efficiency of split learning and federated learning. (2019). arXiv:1909.09145"},{"key":"e_1_3_3_83_2","volume-title":"NeurIPS","author":"Singhal Karan","year":"2021","unstructured":"Karan Singhal, Hakim Sidahmed, Zachary Garrett, Shanshan Wu, John Rush, and Sushant Prakash. 2021. Federated reconstruction: Partially local federated learning. In NeurIPS."},{"key":"e_1_3_3_84_2","volume-title":"ICLR","author":"Springenberg Jost Tobias","year":"2015","unstructured":"Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin A. Riedmiller. 2015. 
Striving for simplicity: The all convolutional net. In ICLR."},{"key":"e_1_3_3_85_2","unstructured":"Sebastian U. Stich. 2018. Local SGD converges fast and communicates little. (2018). arxiv:1805.09767 [cs.DC]"},{"key":"e_1_3_3_86_2","doi-asserted-by":"publisher","DOI":"10.1109\/TNNLS.2022.3160699"},{"key":"e_1_3_3_87_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v36i8.20819"},{"key":"e_1_3_3_88_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v36i8.20825"},{"key":"e_1_3_3_89_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.cose.2021.102402"},{"key":"e_1_3_3_90_2","doi-asserted-by":"publisher","DOI":"10.1145\/3386367.3431678"},{"key":"e_1_3_3_91_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICDCSW53096.2021.00012"},{"key":"e_1_3_3_92_2","unstructured":"Praneeth Vepakomma Otkrist Gupta Tristan Swedish and Ramesh Raskar. 2018. Split learning for health: Distributed deep learning without sharing raw patient data. (2018). arxiv:1812.00564 [cs.DC]"},{"key":"e_1_3_3_93_2","doi-asserted-by":"publisher","DOI":"10.1109\/LWC.2022.3149783"},{"key":"e_1_3_3_94_2","unstructured":"Herbert Woisetschl\u00e4ger Alexander Isenko Ruben Mayer and Hans-Arno Jacobsen. 2023. FLEDGE: Benchmarking federated machine learning applications in edge computing systems. (2023). arxiv:2306.05172"},{"key":"e_1_3_3_95_2","doi-asserted-by":"publisher","DOI":"10.1007\/s41468-023-00127-8"},{"key":"e_1_3_3_96_2","unstructured":"Kok-Seng Wong Manh Nguyen-Duc Khiem Le-Huy et\u00a0al. 2023. An empirical study of federated learning on IoT-edge devices: Resource allocation and heterogeneity. (2023). arxiv:2305.19831"},{"key":"e_1_3_3_97_2","article-title":"Communication-efficient federated learning via knowledge distillation","author":"Wu Chuhan","year":"2022","unstructured":"Chuhan Wu, Fangzhao Wu, Lingjuan Lyu, Yongfeng Huang, and Xing Xie. 2022. Communication-efficient federated learning via knowledge distillation. 
Nature Communications (2022).","journal-title":"Nature Communications"},{"key":"e_1_3_3_98_2","doi-asserted-by":"publisher","DOI":"10.1109\/JIOT.2022.3176469"},{"key":"e_1_3_3_99_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.cosrev.2023.100595"},{"key":"e_1_3_3_100_2","article-title":"Accelerating federated learning for IoT in big data analytics with pruning, quantization and selective updating","author":"Xu Wenyuan","year":"2021","unstructured":"Wenyuan Xu, Weiwei Fang, Yi Ding, Meixia Zou, and Naixue Xiong. 2021. Accelerating federated learning for IoT in big data analytics with pruning, quantization and selective updating. IEEE Access (2021).","journal-title":"IEEE Access"},{"key":"e_1_3_3_101_2","unstructured":"Mark Xue and Julien Freudiger. 2019. Designing for privacy. https:\/\/developer.apple.com\/videos\/play\/wwdc2019\/708. Accessed: 2023-11-11."},{"key":"e_1_3_3_102_2","doi-asserted-by":"publisher","DOI":"10.1145\/3581783.3611781"},{"key":"e_1_3_3_103_2","unstructured":"Ashkan Yousefpour Shen Guo Ashish Shenoy Sayan Ghosh Pierre Stock Kiwan Maeng Schalk-Willem Kr\u00fcger Michael G. Rabbat Carole-Jean Wu and Ilya Mironov. 2023. Green federated learning. (2023). arxiv:2303.14604 [cs.DC]"},{"key":"e_1_3_3_104_2","doi-asserted-by":"publisher","DOI":"10.14778\/3529337.3529343"},{"key":"e_1_3_3_105_2","volume-title":"ICML","author":"Yurochkin Mikhail","year":"2019","unstructured":"Mikhail Yurochkin, Mayank Agarwal, Soumya Ghosh, Kristjan Greenewald, Nghia Hoang, and Yasaman Khazaeni. 2019. Bayesian nonparametric federated learning of neural networks. In ICML."},{"key":"e_1_3_3_106_2","volume-title":"NeurIPS","author":"Zhang Jie","year":"2021","unstructured":"Jie Zhang, Song Guo, Xiaosong Ma, Haozhao Wang, Wenchao Xu, and Feijie Wu. 2021. Parameterized knowledge transfer for personalized federated learning. 
In NeurIPS."},{"key":"e_1_3_3_107_2","article-title":"Edge-assisted u-shaped split federated learning with privacy-preserving for internet of things","author":"Zhang Shiqiang","year":"2025","unstructured":"Shiqiang Zhang, Zihang Zhao, Detian Liu, Yang Cao, Hengliang Tang, and Siqing You. 2025. Edge-assisted u-shaped split federated learning with privacy-preserving for internet of things. Expert Systems with Applications (2025).","journal-title":"Expert Systems with Applications"},{"key":"e_1_3_3_108_2","volume-title":"NeurIPS","author":"Zhang Xiang","year":"2015","unstructured":"Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In NeurIPS."},{"key":"e_1_3_3_109_2","unstructured":"Yue Zhao Meng Li Liangzhen Lai Naveen Suda Damon Civin and Vikas Chandra. 2018. Federated learning with non-IID data. (2018). arxiv:1806.00582 [cs.DC]"}],"container-title":["ACM Transactions on Modeling and Performance Evaluation of Computing Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3708983","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3708983","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T01:17:55Z","timestamp":1750295875000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3708983"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,3,12]]},"references-count":108,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2025,3,31]]}},"alternative-id":["10.1145\/3708983"],"URL":"https:\/\/doi.org\/10.1145\/3708983","relation":{},"ISSN":["2376-3639","2376-3647"],"issn-type":[{"type":"print","value":"2376-3639"},{"type":"electronic","value":"2376-3647"}],"subject":[],"published":{"date-parts":[[2025,3,12]]},"assertion":
[{"value":"2024-04-22","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-11-30","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-03-12","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}