{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,9]],"date-time":"2026-04-09T14:45:27Z","timestamp":1775745927651,"version":"3.50.1"},"publisher-location":"New York, NY, USA","reference-count":145,"publisher":"ACM","license":[{"start":{"date-parts":[[2021,10,17]],"date-time":"2021-10-17T00:00:00Z","timestamp":1634428800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100003725","name":"National Research Foundation of Korea","doi-asserted-by":"publisher","award":["NRF-2021R1C1C1008617"],"award-info":[{"award-number":["NRF-2021R1C1C1008617"]}],"id":[{"id":"10.13039\/501100003725","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100000001","name":"National Science Foundation","doi-asserted-by":"publisher","award":["CCF-1652132,CCF-1618039"],"award-info":[{"award-number":["CCF-1652132,CCF-1618039"]}],"id":[{"id":"10.13039\/100000001","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2021,10,18]]},"DOI":"10.1145\/3466752.3480129","type":"proceedings-article","created":{"date-parts":[[2021,10,17]],"date-time":"2021-10-17T19:12:05Z","timestamp":1634497925000},"page":"183-198","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":77,"title":["AutoFL: Enabling Heterogeneity-Aware Energy Efficient Federated Learning"],"prefix":"10.1145","author":[{"given":"Young Geun","family":"Kim","sequence":"first","affiliation":[{"name":"Soongsil University, United States of America"}]},{"given":"Carole-Jean","family":"Wu","sequence":"additional","affiliation":[{"name":"Arizona State University, United States of 
America"}]}],"member":"320","published-online":{"date-parts":[[2021,10,17]]},"reference":[{"key":"e_1_3_2_1_1_1","doi-asserted-by":"publisher","DOI":"10.1109\/HPCA51647.2021.00072"},{"key":"e_1_3_2_1_2_1","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2020.2981434"},{"key":"e_1_3_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.1109\/TETC.2014.2387752"},{"key":"e_1_3_2_1_4_1","volume-title":"https:\/\/developer.amazon.com\/en-US\/alexa","year":"2021","unstructured":"Amazon. 2021. Alexa. ( 2021 ). https:\/\/developer.amazon.com\/en-US\/alexa Amazon. 2021. Alexa. (2021). https:\/\/developer.amazon.com\/en-US\/alexa"},{"key":"e_1_3_2_1_5_1","volume-title":"https:\/\/aws.amazon.com\/ec2","author":"Amazon","year":"2021","unstructured":"Amazon. 2021. Amazon EC2. ( 2021 ). https:\/\/aws.amazon.com\/ec2 Amazon. 2021. Amazon EC2. (2021). https:\/\/aws.amazon.com\/ec2"},{"key":"e_1_3_2_1_6_1","unstructured":"Android. 2021. Android Neural Networks API. (2021). https:\/\/developer.android.com\/ndk\/guides\/neuralnetworks  Android. 2021. Android Neural Networks API. (2021). https:\/\/developer.android.com\/ndk\/guides\/neuralnetworks"},{"key":"e_1_3_2_1_7_1","volume-title":"https:\/\/developer.apple.com\/documentation\/coreml","author":"ML.","year":"2021","unstructured":"Apple. 2021. Core ML. ( 2021 ). https:\/\/developer.apple.com\/documentation\/coreml Apple. 2021. CoreML. (2021). https:\/\/developer.apple.com\/documentation\/coreml"},{"key":"e_1_3_2_1_8_1","volume-title":"https:\/\/www.apple.com\/siri","year":"2021","unstructured":"Apple. 2021. Siri. ( 2021 ). https:\/\/www.apple.com\/siri Apple. 2021. Siri. (2021). https:\/\/www.apple.com\/siri"},{"key":"e_1_3_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.1145\/3146347.3146356"},{"key":"e_1_3_2_1_10_1","doi-asserted-by":"publisher","DOI":"10.1137\/1.9781611976700.41"},{"key":"e_1_3_2_1_11_1","volume-title":"Towards Federated Learning at Scale: System Design. 
arXiv:1902.01046","author":"Bonawitz Keith","year":"2019","unstructured":"Keith Bonawitz , Hubert Eichner , Wolfgang Grieskamp , Dzmitry Huba , Alex Ingerman , Vladimir Ivanov , Chloe Kiddon , Jakub Konecny , Stefano Mazzocchi , H\u00a0B McMahan , Timon\u00a0Van Overveldt , David Petrou , Daniel Ramage , and Jason Roselander . 2019. Towards Federated Learning at Scale: System Design. arXiv:1902.01046 ( 2019 ). Keith Bonawitz, Hubert Eichner, Wolfgang Grieskamp, Dzmitry Huba, Alex Ingerman, Vladimir Ivanov, Chloe Kiddon, Jakub Konecny, Stefano Mazzocchi, H\u00a0B McMahan, Timon\u00a0Van Overveldt, David Petrou, Daniel Ramage, and Jason Roselander. 2019. Towards Federated Learning at Scale: System Design. arXiv:1902.01046 (2019)."},{"key":"e_1_3_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.ijmedinf.2018.01.007"},{"key":"e_1_3_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.1145\/339647.339657"},{"key":"e_1_3_2_1_14_1","volume-title":"Proceedings of the Asian Conference on Machine Learning (ACML).","author":"Cai Ermao","year":"2017","unstructured":"Ermao Cai , Da-Cheng Juan , Dimitrios Stamoulis , and Diana Maculescu . 2017 . NeuralPower: Predict and Deploy Energy-Efficient Convolutional Neural Networks . In Proceedings of the Asian Conference on Machine Learning (ACML). Ermao Cai, Da-Cheng Juan, Dimitrios Stamoulis, and Diana Maculescu. 2017. NeuralPower: Predict and Deploy Energy-Efficient Convolutional Neural Networks. In Proceedings of the Asian Conference on Machine Learning (ACML)."},{"key":"e_1_3_2_1_15_1","volume-title":"Proceedings of the USENIX Conference on Operational Machine Learning (OpML).","author":"Chai Zheng","year":"2019","unstructured":"Zheng Chai , Hannan Fayyaz , Zeshan Fayyaz , Ali Anwar , Yi Zhou , Heiko Ludwig , and Yue Cheng . 2019 . Towards Taming the Resource and Data Heterogeneity in Federated Learning . In Proceedings of the USENIX Conference on Operational Machine Learning (OpML). 
Zheng Chai, Hannan Fayyaz, Zeshan Fayyaz, Ali Anwar, Yi Zhou, Heiko Ludwig, and Yue Cheng. 2019. Towards Taming the Resource and Data Heterogeneity in Federated Learning. In Proceedings of the USENIX Conference on Operational Machine Learning (OpML)."},{"key":"e_1_3_2_1_16_1","volume-title":"Proceedings of the USENIX Symposium on Operating Systems Design and Implementation.","author":"Chen Tianqi","year":"2018","unstructured":"Tianqi Chen , Thierry Moreau , Ziheng Jiang , Lianmin Zheng , Eddie Yan , Meghan Cowan , Haichen Shen , Leyuan Wang , Yuwei Hu , Luis Ceze , Carlos Guestrin , and Arvind Krishnamurthy . 2018 . TVM: An Automated End-to-End Optimizing Compiler for Deep Learning . In Proceedings of the USENIX Symposium on Operating Systems Design and Implementation. Tianqi Chen, Thierry Moreau, Ziheng Jiang, Lianmin Zheng, Eddie Yan, Meghan Cowan, Haichen Shen, Leyuan Wang, Yuwei Hu, Luis Ceze, Carlos Guestrin, and Arvind Krishnamurthy. 2018. TVM: An Automated End-to-End Optimizing Compiler for Deep Learning. In Proceedings of the USENIX Symposium on Operating Systems Design and Implementation."},{"key":"e_1_3_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1145\/3318216.3363316"},{"key":"e_1_3_2_1_18_1","volume-title":"Asynchronous Online Federated Learning for Edge Devices with Non-IID Data. arXiv:1911.02134","author":"Chen Yujing","year":"2019","unstructured":"Yujing Chen , Yue Ning , Martin Slawski , and Huzefa Rangwala . 2019. Asynchronous Online Federated Learning for Edge Devices with Non-IID Data. arXiv:1911.02134 ( 2019 ). Yujing Chen, Yue Ning, Martin Slawski, and Huzefa Rangwala. 2019. Asynchronous Online Federated Learning for Edge Devices with Non-IID Data. 
arXiv:1911.02134 (2019)."},{"key":"e_1_3_2_1_19_1","doi-asserted-by":"publisher","DOI":"10.1109\/BigData50022.2020.9378161"},{"key":"e_1_3_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.1145\/3300061.3345449"},{"key":"e_1_3_2_1_21_1","volume-title":"Adapting Multi-Armed Bandits Policies to Contextual Bandits Scenarios. arXiv:1811.04383","author":"Cortes David","year":"2018","unstructured":"David Cortes . 2018. Adapting Multi-Armed Bandits Policies to Contextual Bandits Scenarios. arXiv:1811.04383 ( 2018 ). David Cortes. 2018. Adapting Multi-Armed Bandits Policies to Contextual Bandits Scenarios. arXiv:1811.04383 (2018)."},{"key":"e_1_3_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.1145\/1814433.1814441"},{"key":"e_1_3_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2009.5206848"},{"key":"e_1_3_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1145\/3064176.3064206"},{"key":"e_1_3_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.1145\/2465529.2466586"},{"key":"e_1_3_2_1_26_1","doi-asserted-by":"publisher","DOI":"10.1109\/TNET.2020.3035770"},{"key":"e_1_3_2_1_27_1","volume-title":"https:\/\/deeplearning4j.org","year":"2021","unstructured":"DL4j. 2021. Deeplearning4j. ( 2021 ). https:\/\/deeplearning4j.org DL4j. 2021. Deeplearning4j. (2021). https:\/\/deeplearning4j.org"},{"key":"e_1_3_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.1109\/TPDS.2020.3009406"},{"key":"e_1_3_2_1_29_1","volume-title":"JointDNN: An Efficient Training and Inference Engine for Intelligent Mobile Cloud Computing Services","author":"Eshratifar Amir\u00a0Erfan","year":"2020","unstructured":"Amir\u00a0Erfan Eshratifar , Mohammad\u00a0Saeed Abrishami , and Massoud Pedram . 2020. JointDNN: An Efficient Training and Inference Engine for Intelligent Mobile Cloud Computing Services . IEEE Transactions on Mobile Computing( 2020 ). Amir\u00a0Erfan Eshratifar, Mohammad\u00a0Saeed Abrishami, and Massoud Pedram. 2020. 
JointDNN: An Efficient Training and Inference Engine for Intelligent Mobile Cloud Computing Services. IEEE Transactions on Mobile Computing(2020)."},{"key":"e_1_3_2_1_30_1","doi-asserted-by":"publisher","DOI":"10.5555\/1248547.1248586"},{"key":"e_1_3_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.1109\/ISPASS.2015.7095808"},{"key":"e_1_3_2_1_32_1","doi-asserted-by":"publisher","DOI":"10.1109\/HPCA.2016.7446053"},{"key":"e_1_3_2_1_33_1","doi-asserted-by":"publisher","DOI":"10.1109\/TMC.2018.2883619"},{"key":"e_1_3_2_1_34_1","volume-title":"Differentially Private Federated Learning: A Client Level Perspective. arXiv:1712.07557","author":"Geyer C.","year":"2017","unstructured":"Robin\u00a0 C. Geyer , Tassilo Klein , and Moin Nabi . 2017. Differentially Private Federated Learning: A Client Level Perspective. arXiv:1712.07557 ( 2017 ). Robin\u00a0C. Geyer, Tassilo Klein, and Moin Nabi. 2017. Differentially Private Federated Learning: A Client Level Perspective. arXiv:1712.07557 (2017)."},{"key":"e_1_3_2_1_35_1","doi-asserted-by":"publisher","DOI":"10.1145\/3410463.3414655"},{"key":"e_1_3_2_1_36_1","unstructured":"Google. 2021. Google Cloud Vision. (2021). https:\/\/cloud.google.com\/vision  Google. 2021. Google Cloud Vision. (2021). https:\/\/cloud.google.com\/vision"},{"key":"e_1_3_2_1_37_1","volume-title":"https:\/\/store.google.com\/us\/product\/pixel_5?hl=en-US","author":"Google Pixel","year":"2021","unstructured":"Google. 2021. Google Pixel 5. ( 2021 ). https:\/\/store.google.com\/us\/product\/pixel_5?hl=en-US Google. 2021. Google Pixel 5. (2021). https:\/\/store.google.com\/us\/product\/pixel_5?hl=en-US"},{"key":"e_1_3_2_1_38_1","volume-title":"https:\/\/translate.google.com","author":"Translate Google","year":"2021","unstructured":"Google. 2021. Google Translate . ( 2021 ). https:\/\/translate.google.com Google. 2021. Google Translate. (2021). 
https:\/\/translate.google.com"},{"key":"e_1_3_2_1_39_1","volume-title":"https:\/\/cloud.google.com\/speech-to-text","year":"2021","unstructured":"Google. 2021. Speech-to-Text. ( 2021 ). https:\/\/cloud.google.com\/speech-to-text Google. 2021. Speech-to-Text. (2021). https:\/\/cloud.google.com\/speech-to-text"},{"key":"e_1_3_2_1_40_1","volume-title":"Proceedings of Machine Learning and Systems (MLSys).","author":"Guan Hui","year":"2020","unstructured":"Hui Guan , Laxmikant\u00a0Kishor Mokadam , Xipeng Shen , Seung-Hwan Lim , and Robert Patton . 2020 . FLEET: Flexible Efficient Ensemble Training for Heterogeneout Deep Neural Networks . In Proceedings of Machine Learning and Systems (MLSys). Hui Guan, Laxmikant\u00a0Kishor Mokadam, Xipeng Shen, Seung-Hwan Lim, and Robert Patton. 2020. FLEET: Flexible Efficient Ensemble Training for Heterogeneout Deep Neural Networks. In Proceedings of Machine Learning and Systems (MLSys)."},{"key":"e_1_3_2_1_41_1","doi-asserted-by":"publisher","DOI":"10.1109\/PACT.2019.00021"},{"key":"e_1_3_2_1_42_1","volume-title":"WAFFLe: Weight Anonymized Factorization for Federated Learning. arXiv:2008.05687","author":"Hao Weituo","year":"2020","unstructured":"Weituo Hao , Nikhil Mehta , Kevin\u00a0 J. Liang , Pengyu Cheng , Mostafa El-Khamy , and Lawrence Carin . 2020. WAFFLe: Weight Anonymized Factorization for Federated Learning. arXiv:2008.05687 ( 2020 ). Weituo Hao, Nikhil Mehta, Kevin\u00a0J. Liang, Pengyu Cheng, Mostafa El-Khamy, and Lawrence Carin. 2020. WAFFLe: Weight Anonymized Factorization for Federated Learning. arXiv:2008.05687 (2020)."},{"key":"e_1_3_2_1_43_1","volume-title":"Federated Learning for Mobile Keyboard Prediction. arXiv:1811.03604","author":"Hard Andrew","year":"2018","unstructured":"Andrew Hard , Kanishka Rao , Rajiv Mathews , Swaroop Ramaswamy , Francoise Beaufays , Sean Augenstein , Hubert Eichner , Chloe Kiddon , and Daniel Ramage . 2018. Federated Learning for Mobile Keyboard Prediction. arXiv:1811.03604 ( 2018 ). 
Andrew Hard, Kanishka Rao, Rajiv Mathews, Swaroop Ramaswamy, Francoise Beaufays, Sean Augenstein, Hubert Eichner, Chloe Kiddon, and Daniel Ramage. 2018. Federated Learning for Mobile Keyboard Prediction. arXiv:1811.03604 (2018)."},{"key":"e_1_3_2_1_44_1","doi-asserted-by":"publisher","DOI":"10.1109\/HPCA.2018.00059"},{"key":"e_1_3_2_1_45_1","volume-title":"Piotr Dollar, and Ross Girshick","author":"He Kaiming","year":"2018","unstructured":"Kaiming He , Georgia Gkioxari , Piotr Dollar, and Ross Girshick . 2018 . Mask R-CNN. arXiv:1703.06870v3 (2018). Kaiming He, Georgia Gkioxari, Piotr Dollar, and Ross Girshick. 2018. Mask R-CNN. arXiv:1703.06870v3 (2018)."},{"key":"e_1_3_2_1_46_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2019.00140"},{"key":"e_1_3_2_1_47_1","volume-title":"MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv:1704.04861","author":"Howard G.","year":"2017","unstructured":"Andrew\u00a0 G. Howard , Menglong Zhu , Bo Chen , Dimitry Kalenichenko , Weijun Wang , Tobias Weyand , Marco Andreetto , and Hartwig Adam . 2017. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv:1704.04861 ( 2017 ). Andrew\u00a0G. Howard, Menglong Zhu, Bo Chen, Dimitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. 2017. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv:1704.04861 (2017)."},{"key":"e_1_3_2_1_48_1","doi-asserted-by":"publisher","DOI":"10.1109\/TMC.2019.2928811"},{"key":"e_1_3_2_1_49_1","doi-asserted-by":"publisher","DOI":"10.1109\/ISPASS.2014.6844460"},{"key":"e_1_3_2_1_50_1","unstructured":"Huawei. 2021. Kirin 980 the World\u2019s First 7nm Process Mobile AI Chipset. (2021). https:\/\/consumer.huawei.com\/en\/campaign\/kirin980\/  Huawei. 2021. Kirin 980 the World\u2019s First 7nm Process Mobile AI Chipset. (2021). 
https:\/\/consumer.huawei.com\/en\/campaign\/kirin980\/"},{"key":"e_1_3_2_1_51_1","volume-title":"Rethink Evolution.","year":"2021","unstructured":"Huawei. 2021. Kirin 990 Series , Rethink Evolution. ( 2021 ). https:\/\/consumer.huawei.com\/en\/campaign\/kirin-990-series\/ Huawei. 2021. Kirin 990 Series, Rethink Evolution. (2021). https:\/\/consumer.huawei.com\/en\/campaign\/kirin-990-series\/"},{"key":"e_1_3_2_1_52_1","volume-title":"Lottery Hypothesis based Unsupervised Pre-training for Model Compression in Federated Learning. arXiv:2004.09817","author":"Itahara Sohei","year":"2020","unstructured":"Sohei Itahara , Takayuki Nishio , Masahiro Morikura , and Koji Yamamoto . 2020. Lottery Hypothesis based Unsupervised Pre-training for Model Compression in Federated Learning. arXiv:2004.09817 ( 2020 ). Sohei Itahara, Takayuki Nishio, Masahiro Morikura, and Koji Yamamoto. 2020. Lottery Hypothesis based Unsupervised Pre-training for Model Compression in Federated Learning. arXiv:2004.09817 (2020)."},{"key":"e_1_3_2_1_53_1","volume-title":"MicroRec: Efficient Recommendation Inference by Hardware and Data Structure Solutions. arXiv:2010.05894","author":"Jiang Wenqi","year":"2020","unstructured":"Wenqi Jiang , Zhenhao He , Shuai Zhang , Thomas\u00a0 B. Preuber , Kai Zeng , Liang Feng , Jiansong Zhang , Tongxuan Liu , Yong Li , Jingren Zhou , and Ce Zhang . 2020. MicroRec: Efficient Recommendation Inference by Hardware and Data Structure Solutions. arXiv:2010.05894 ( 2020 ). Wenqi Jiang, Zhenhao He, Shuai Zhang, Thomas\u00a0B. Preuber, Kai Zeng, Liang Feng, Jiansong Zhang, Tongxuan Liu, Yong Li, Jingren Zhou, and Ce Zhang. 2020. MicroRec: Efficient Recommendation Inference by Hardware and Data Structure Solutions. 
arXiv:2010.05894 (2020)."},{"key":"e_1_3_2_1_54_1","doi-asserted-by":"publisher","DOI":"10.1145\/383082.383119"},{"key":"e_1_3_2_1_55_1","doi-asserted-by":"publisher","DOI":"10.1145\/3079856.3080246"},{"key":"e_1_3_2_1_56_1","volume-title":"Proceedings of International Symposium on Performance Analysis of Systems and Software (ISPASS).","author":"Ju Minho","year":"2016","unstructured":"Minho Ju , Hyeonggyu Kim , and Soontae Kim . 2016 . MofySim: A Mobile Full System Simulation Framework for Energy Consumption and Performance Analysis . In Proceedings of International Symposium on Performance Analysis of Systems and Software (ISPASS). Minho Ju, Hyeonggyu Kim, and Soontae Kim. 2016. MofySim: A Mobile Full System Simulation Framework for Energy Consumption and Performance Analysis. In Proceedings of International Symposium on Performance Analysis of Systems and Software (ISPASS)."},{"key":"e_1_3_2_1_57_1","volume-title":"Advances and Open Problems in Federated Learning. arXiv:1912.04977","author":"Kairouz Peter","year":"2019","unstructured":"Peter Kairouz , H.\u00a0 Brendan McMahan , Brendan Avent , Aurelien Bellet , Mehdi Bennis , Arjun\u00a0Nitin Bhagoji , Kallista Bonawitz , Zachary Charles , Graham Cormode , Rachel Cummings , Rafael G.\u00a0L. D\u2019Oliveira , Hubert Eichner , Salim\u00a0El Rouayheb , David Evans , Josh Gardner , Zachary Garrett , Adria Gascon , Badih Ghazi , Phillip\u00a0 B. Gibbons , Marco Gruteser , Zaid Harchaoui , Chaoyang He , Lie He , Zhouyuan Huo , Ben Hutchinson , Justin Hsu , Martin Jaggi , Tara Javidi , Gauri Joshi , Mikhali Khodak , Jakub Konecny , Aleksandra Korolova , Farinaz Koushanfar , Sanmi Koyejo , Tancrede Lepoint , Yang Liu , Prateek Mittal , Mehryar Mohri , Richard Nock , Ayfer Ozgur , Rasmus Pagh , Mariana Raykova , Hang Qi , Daniel Ramage , Ramesh Raskar , Dawn Song , Weikang Song , Sebastian\u00a0 U. 
Stich , Ziteng Sun , Ananda\u00a0Theertha Suresh , Florian Tramer , Praneeth Vepakomma , Jianyu Wang , Li Xiong , Zheng Xu , Qiang Yang , Felix\u00a0 X Yu , Han Yu , and Sen Zhao . 2019. Advances and Open Problems in Federated Learning. arXiv:1912.04977 ( 2019 ). Peter Kairouz, H.\u00a0Brendan McMahan, Brendan Avent, Aurelien Bellet, Mehdi Bennis, Arjun\u00a0Nitin Bhagoji, Kallista Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, Rafael G.\u00a0L. D\u2019Oliveira, Hubert Eichner, Salim\u00a0El Rouayheb, David Evans, Josh Gardner, Zachary Garrett, Adria Gascon, Badih Ghazi, Phillip\u00a0B. Gibbons, Marco Gruteser, Zaid Harchaoui, Chaoyang He, Lie He, Zhouyuan Huo, Ben Hutchinson, Justin Hsu, Martin Jaggi, Tara Javidi, Gauri Joshi, Mikhali Khodak, Jakub Konecny, Aleksandra Korolova, Farinaz Koushanfar, Sanmi Koyejo, Tancrede Lepoint, Yang Liu, Prateek Mittal, Mehryar Mohri, Richard Nock, Ayfer Ozgur, Rasmus Pagh, Mariana Raykova, Hang Qi, Daniel Ramage, Ramesh Raskar, Dawn Song, Weikang Song, Sebastian\u00a0U. Stich, Ziteng Sun, Ananda\u00a0Theertha Suresh, Florian Tramer, Praneeth Vepakomma, Jianyu Wang, Li Xiong, Zheng Xu, Qiang Yang, Felix\u00a0X Yu, Han Yu, and Sen Zhao. 2019. Advances and Open Problems in Federated Learning. arXiv:1912.04977 (2019)."},{"key":"e_1_3_2_1_58_1","doi-asserted-by":"publisher","DOI":"10.1145\/3037697.3037698"},{"key":"e_1_3_2_1_59_1","doi-asserted-by":"publisher","DOI":"10.1109\/MICRO50266.2020.00058"},{"key":"e_1_3_2_1_60_1","doi-asserted-by":"publisher","DOI":"10.1109\/ISLPED.2017.8009182"},{"key":"e_1_3_2_1_61_1","doi-asserted-by":"publisher","DOI":"10.1109\/TC.2017.2710317"},{"key":"e_1_3_2_1_62_1","doi-asserted-by":"publisher","DOI":"10.1109\/TC.2019.2939239"},{"key":"e_1_3_2_1_63_1","doi-asserted-by":"publisher","DOI":"10.1109\/MICRO50266.2020.00090"},{"key":"e_1_3_2_1_64_1","volume-title":"Federated Learning: Strategies for Improving Communication Efficiency. 
arXiv:1610.05492","author":"Konecny Jakub","year":"2016","unstructured":"Jakub Konecny , H.\u00a0 Brendan McMahan , Felix\u00a0 X. Yu , Peter Richtarik , Ananda\u00a0Theertha Suresh , and Dave Bacon . 2016 . Federated Learning: Strategies for Improving Communication Efficiency. arXiv:1610.05492 (2016). Jakub Konecny, H.\u00a0Brendan McMahan, Felix\u00a0X. Yu, Peter Richtarik, Ananda\u00a0Theertha Suresh, and Dave Bacon. 2016. Federated Learning: Strategies for Improving Communication Efficiency. arXiv:1610.05492 (2016)."},{"key":"e_1_3_2_1_65_1","doi-asserted-by":"crossref","unstructured":"D.\u00a0E. Koulouriotis and A. Xanthopoluos. 2008. Reinforcement Learning and Evolutionary Algorithms for Non-stationary Multi-armed Bandit Problems. Appl. Math. Comput. 196(2008).  D.\u00a0E. Koulouriotis and A. Xanthopoluos. 2008. Reinforcement Learning and Evolutionary Algorithms for Non-stationary Multi-armed Bandit Problems. Appl. Math. Comput. 196(2008).","DOI":"10.1016\/j.amc.2007.07.043"},{"key":"e_1_3_2_1_66_1","doi-asserted-by":"publisher","DOI":"10.1109\/IPSN.2016.7460664"},{"key":"e_1_3_2_1_67_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.435"},{"key":"e_1_3_2_1_68_1","doi-asserted-by":"publisher","DOI":"10.1109\/5.726791"},{"key":"e_1_3_2_1_69_1","volume-title":"\u00a0C. Burges","author":"LeCun Yann","year":"1998","unstructured":"Yann LeCun , Corinna Cortes , and Christopher J . \u00a0C. Burges . 1998 . The MNIST Database of handwritten digits. (1998). http:\/\/yann.lecun.com\/exdb\/mnist\/ Yann LeCun, Corinna Cortes, and Christopher J.\u00a0C. Burges. 1998. The MNIST Database of handwritten digits. (1998). http:\/\/yann.lecun.com\/exdb\/mnist\/"},{"key":"e_1_3_2_1_70_1","volume-title":"GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding. 
arXiv:2006.16668","author":"Lepikhin Dmitry","year":"2020","unstructured":"Dmitry Lepikhin , HyoukJoong Lee , Yuanzhong Xu , Dehao Chen , Orhan Firat , Yanping Huang , Maxim Krikun , Noam Shazeer , and Zhifeng Chen . 2020. GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding. arXiv:2006.16668 ( 2020 ). Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, Maxim Krikun, Noam Shazeer, and Zhifeng Chen. 2020. GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding. arXiv:2006.16668 (2020)."},{"key":"e_1_3_2_1_71_1","unstructured":"Ang Li Jingwei Sun Binghui Wang Lin Duan Sicheng Li Yiran Chen and Hai Li. 2020. LotteryFL: Personalized and Communication-Efficient Federated Learning with Lottery Ticket Hypothesis on Non-IID Datasets. arXiv:2008.03371 (2020).  Ang Li Jingwei Sun Binghui Wang Lin Duan Sicheng Li Yiran Chen and Hai Li. 2020. LotteryFL: Personalized and Communication-Efficient Federated Learning with Lottery Ticket Hypothesis on Non-IID Datasets. arXiv:2008.03371 (2020)."},{"key":"e_1_3_2_1_72_1","volume-title":"Federated Learning on Non-IID Data Silos: An Experimental Study. arXiv:2102:02079v2","author":"Li Qinbin","year":"2021","unstructured":"Qinbin Li , Yiqun Diao , Quan Chen , and Bingsheng He. 2021. Federated Learning on Non-IID Data Silos: An Experimental Study. arXiv:2102:02079v2 ( 2021 ). Qinbin Li, Yiqun Diao, Quan Chen, and Bingsheng He. 2021. Federated Learning on Non-IID Data Silos: An Experimental Study. arXiv:2102:02079v2 (2021)."},{"key":"e_1_3_2_1_73_1","volume-title":"Proceedings of International Conference on Machine Learning and Systems (MLSys).","author":"Li Tian","year":"2020","unstructured":"Tian Li , Anit\u00a0Kumar Sahu , Manzil Zaheer , Maziar Sanjabi , Ameet Talwalkar , and Virginia Smith . 2020 . Federated Optimization in Heterogeneous Networks . In Proceedings of International Conference on Machine Learning and Systems (MLSys). 
Tian Li, Anit\u00a0Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, and Virginia Smith. 2020. Federated Optimization in Heterogeneous Networks. In Proceedings of International Conference on Machine Learning and Systems (MLSys)."},{"key":"e_1_3_2_1_74_1","volume-title":"Proceedings of the International Conference on Learning Representations (ICLR).","author":"Li Tian","year":"2020","unstructured":"Tian Li , Maziar Sanjabi , Ahmad Beirami , and Virginia Smith . 2020 . Fair Resource Allocation in Federated Learning . In Proceedings of the International Conference on Learning Representations (ICLR). Tian Li, Maziar Sanjabi, Ahmad Beirami, and Virginia Smith. 2020. Fair Resource Allocation in Federated Learning. In Proceedings of the International Conference on Learning Representations (ICLR)."},{"key":"e_1_3_2_1_75_1","volume-title":"Proceedings of the International Conference on Learing Representation (ICLR).","author":"Li Xiang","year":"2020","unstructured":"Xiang Li , Kaixuan Huang , Wenhao Yang , Shusen Wang , and Zhihua Zhang . 2020 . On the Convergence of FedAvg on Non-IID Data . In Proceedings of the International Conference on Learing Representation (ICLR). Xiang Li, Kaixuan Huang, Wenhao Yang, Shusen Wang, and Zhihua Zhang. 2020. On the Convergence of FedAvg on Non-IID Data. In Proceedings of the International Conference on Learing Representation (ICLR)."},{"key":"e_1_3_2_1_76_1","doi-asserted-by":"publisher","DOI":"10.1109\/MICRO.2018.00023"},{"key":"e_1_3_2_1_77_1","doi-asserted-by":"publisher","DOI":"10.1109\/COMST.2020.2986024"},{"key":"e_1_3_2_1_78_1","volume-title":"Proceedings of the International Conference on Neural Information Processing Systems (NIPS).","author":"Lin Tao","year":"2020","unstructured":"Tao Lin , Lingjing Kong , Sebastian\u00a0 U. Stich , and Martin Jaggi . 2020 . Ensemble Distillation for Robust Model Fusion in Federated Learning . 
In Proceedings of the International Conference on Neural Information Processing Systems (NIPS). Tao Lin, Lingjing Kong, Sebastian\u00a0U. Stich, and Martin Jaggi. 2020. Ensemble Distillation for Robust Model Fusion in Federated Learning. In Proceedings of the International Conference on Neural Information Processing Systems (NIPS)."},{"key":"e_1_3_2_1_79_1","doi-asserted-by":"publisher","DOI":"10.1109\/TII.2019.2942179"},{"key":"e_1_3_2_1_80_1","volume-title":"No Fear of Heterogeneity: Classifier Calibration for Federated Learning with Non-IID Data. arXiv:2106.05001","author":"Luo Mi","year":"2021","unstructured":"Mi Luo , Fei Chen , Dapeng Hu , Yifan Zhang , Jian Liang , and Jiashi Feng . 2021. No Fear of Heterogeneity: Classifier Calibration for Federated Learning with Non-IID Data. arXiv:2106.05001 ( 2021 ). Mi Luo, Fei Chen, Dapeng Hu, Yifan Zhang, Jian Liang, and Jiashi Feng. 2021. No Fear of Heterogeneity: Classifier Calibration for Federated Learning with Non-IID Data. arXiv:2106.05001 (2021)."},{"key":"e_1_3_2_1_81_1","volume-title":"An Energy-aware Online Learning Framework for Resource Management in Heterogeneous Platforms. ACM Transactions on Design Automation and Electronic Systems","author":"Mandal K.","year":"2020","unstructured":"Sumit\u00a0 K. Mandal , Ganapati Bhat , Janardhan\u00a0Rao Doppa , Partha\u00a0Pratim Pande , and Umit\u00a0 Y. Ogras . 2020. An Energy-aware Online Learning Framework for Resource Management in Heterogeneous Platforms. ACM Transactions on Design Automation and Electronic Systems ( 2020 ). Sumit\u00a0K. Mandal, Ganapati Bhat, Janardhan\u00a0Rao Doppa, Partha\u00a0Pratim Pande, and Umit\u00a0Y. Ogras. 2020. An Energy-aware Online Learning Framework for Resource Management in Heterogeneous Platforms. 
ACM Transactions on Design Automation and Electronic Systems (2020)."},{"key":"e_1_3_2_1_82_1","volume-title":"Proceedings of International Conference on Learning Representations (ICLR).","author":"Mathieu Michael","year":"2014","unstructured":"Michael Mathieu , Mikael Henaff , and Yann LeCun . 2014 . Fast Training of Convolutional Networks through FFTs . In Proceedings of International Conference on Learning Representations (ICLR). Michael Mathieu, Mikael Henaff, and Yann LeCun. 2014. Fast Training of Convolutional Networks through FFTs. In Proceedings of International Conference on Learning Representations (ICLR)."},{"key":"e_1_3_2_1_83_1","volume-title":"Proceedings of Machine Learning and Systems (MLSys).","author":"Mattson Peter","year":"2020","unstructured":"Peter Mattson , Christine Cheng , Cody Coleman , Greg Diamos , Paulius Micikevicius , David Patterson , Hanlin Tang , Gu-Yeon Wei , Peter Bailis , Victor Bittorf , David Brooks , Dehao Chen , Debojyoti Dutta , Udit Gupta , Kim Hazelwood , Andrew Hock , Xinyuan Huang , Atsushi Ike , Bill Jia , Daniel Kang , David Kanter , Naveen Kumar , Jeffery Liao , Guokai Ma , Deepak Narayanan , Tayo Oguntebi , Gennady Pekhimenko , Lillian Pentecost , Vijay\u00a0Janapa Reddi , Taylor Robie , Tom\u00a0 St. John , Carole-Jean Wu , Lingjie Xu , Masafumi Yamazaki , Cliff Young , and Matei Zaharia . 2020 . MLPerf Training Benchmark . In Proceedings of Machine Learning and Systems (MLSys). Peter Mattson, Christine Cheng, Cody Coleman, Greg Diamos, Paulius Micikevicius, David Patterson, Hanlin Tang, Gu-Yeon Wei, Peter Bailis, Victor Bittorf, David Brooks, Dehao Chen, Debojyoti Dutta, Udit Gupta, Kim Hazelwood, Andrew Hock, Xinyuan Huang, Atsushi Ike, Bill Jia, Daniel Kang, David Kanter, Naveen Kumar, Jeffery Liao, Guokai Ma, Deepak Narayanan, Tayo Oguntebi, Gennady Pekhimenko, Lillian Pentecost, Vijay\u00a0Janapa Reddi, Taylor Robie, Tom\u00a0St. 
John, Carole-Jean Wu, Lingjie Xu, Masafumi Yamazaki, Cliff Young, and Matei Zaharia. 2020. MLPerf Training Benchmark. In Proceedings of Machine Learning and Systems (MLSys)."},{"key":"e_1_3_2_1_84_1","volume-title":"Communication-Efficient Learning of Deep Networks from Decentralized Data. arXiv:1602.05629","author":"McMahan Brendan","year":"2017","unstructured":"H.\u00a0 Brendan McMahan , Eider Moore , Daniel Ramage , Seth Hampson , and Blaise\u00a0Aguera Arcas . 2017. Communication-Efficient Learning of Deep Networks from Decentralized Data. arXiv:1602.05629 ( 2017 ). H.\u00a0Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise\u00a0Aguera Arcas. 2017. Communication-Efficient Learning of Deep Networks from Decentralized Data. arXiv:1602.05629 (2017)."},{"key":"e_1_3_2_1_85_1","volume-title":"Human-level Control thourgh Deep Reinforcement Learning. Nature 518, 7540","author":"Mnih Volodymyr","year":"2015","unstructured":"Volodymyr Mnih , Koray Kavukcuoglu , David Silver , Andrei\u00a0 A. Rusu , Joel Veness , Marc\u00a0 G. Bellemare , Alex Graves , Martin Riedmiller , Andreas\u00a0 K. Fidjeland , Georg Ostrovski , Stig Petersen , Charles Beattie , Amir Sadik , Ioannis Antonoglou , Helen King , Dharshan Kumaran , Daan Wiersta , Shane Legg , and Demis Hassabis . 2015. Human-level Control thourgh Deep Reinforcement Learning. Nature 518, 7540 ( 2015 ), 529\u2013533. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei\u00a0A. Rusu, Joel Veness, Marc\u00a0G. Bellemare, Alex Graves, Martin Riedmiller, Andreas\u00a0K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wiersta, Shane Legg, and Demis Hassabis. 2015. Human-level Control thourgh Deep Reinforcement Learning. Nature 518, 7540 (2015), 529\u2013533."},{"key":"e_1_3_2_1_86_1","volume-title":"Measuring Sample Efficiency and Generalization in Reinforcement Learning Benchmarks: NeurIPS 2020 Procgen Benchmark. 
arXiv:2103.15332","author":"Mohanty Sharada","year":"2021","unstructured":"Sharada Mohanty , Jyotish Poonganam , Adrien Gaidon , Andrey Kolobov , Blake Wulfe , Dipam Chakraborty , Grazvydas Semetulskis , Joao Schapke , Jonas Kubilius , Jurgis Pasukonis , Linas Klimas , Matthew Hausknecht , Patrick MacAlpine , Quang\u00a0Nhat Tran , Thomas Tumiel , Xiaocheng Tang , Xinwei Chen , Christopher Hesse , Jacob Hilton , William\u00a0Hebgen Guss , Sahika Genc , John Schulman , and Karl Cobbe . 2021. Measuring Sample Efficiency and Generalization in Reinforcement Learning Benchmarks: NeurIPS 2020 Procgen Benchmark. arXiv:2103.15332 ( 2021 ). Sharada Mohanty, Jyotish Poonganam, Adrien Gaidon, Andrey Kolobov, Blake Wulfe, Dipam Chakraborty, Grazvydas Semetulskis, Joao Schapke, Jonas Kubilius, Jurgis Pasukonis, Linas Klimas, Matthew Hausknecht, Patrick MacAlpine, Quang\u00a0Nhat Tran, Thomas Tumiel, Xiaocheng Tang, Xinwei Chen, Christopher Hesse, Jacob Hilton, William\u00a0Hebgen Guss, Sahika Genc, John Schulman, and Karl Cobbe. 2021. Measuring Sample Efficiency and Generalization in Reinforcement Learning Benchmarks: NeurIPS 2020 Procgen Benchmark. arXiv:2103.15332 (2021)."},{"key":"e_1_3_2_1_87_1","volume-title":"Proceedings of International Conference on Machine Learning (ICML).","author":"Mohri Mehryar","year":"2019","unstructured":"Mehryar Mohri , Gary Sivek , and Ananda\u00a0Theertha Suresh . 2019 . Agnostic Federated Learning . In Proceedings of International Conference on Machine Learning (ICML). Mehryar Mohri, Gary Sivek, and Ananda\u00a0Theertha Suresh. 2019. Agnostic Federated Learning. In Proceedings of International Conference on Machine Learning (ICML)."},{"key":"e_1_3_2_1_88_1","unstructured":"Monsoon. 2021. High Voltage Power Monitor. (2021). https:\/\/www.msoon.com\/high-voltage-power-monitor  Monsoon. 2021. High Voltage Power Monitor. (2021). https:\/\/www.msoon.com\/high-voltage-power-monitor"},{"key":"e_1_3_2_1_89_1","unstructured":"Motorola. 2021. 
Moto X Force - Technical Specs. (2021). https:\/\/support.motorola.com\/uk\/en\/solution\/MS112171  Motorola. 2021. Moto X Force - Technical Specs. (2021). https:\/\/support.motorola.com\/uk\/en\/solution\/MS112171"},{"key":"e_1_3_2_1_90_1","volume-title":"Deep Learning Training in Facebook Data Centers: Design of Scale-up and Scale-out Systems. arXiv:2003.09518","author":"Naumov Maxim","year":"2020","unstructured":"Maxim Naumov , John Kim , Dheevatsa Mudigere , Srinivas Sridharan , Xiaodong Wang , Whitney Zhao , Serhat Yilmaz , Changkyu Kim , Hector Yuen , Mustafa Ozdal , Krishnakumar Nair , Isabel Gao , Bor-Ying Su , Jiyan Yang , and Mikhail Smelyanskiy . 2020. Deep Learning Training in Facebook Data Centers: Design of Scale-up and Scale-out Systems. arXiv:2003.09518 ( 2020 ). Maxim Naumov, John Kim, Dheevatsa Mudigere, Srinivas Sridharan, Xiaodong Wang, Whitney Zhao, Serhat Yilmaz, Changkyu Kim, Hector Yuen, Mustafa Ozdal, Krishnakumar Nair, Isabel Gao, Bor-Ying Su, Jiyan Yang, and Mikhail Smelyanskiy. 2020. Deep Learning Training in Facebook Data Centers: Design of Scale-up and Scale-out Systems. arXiv:2003.09518 (2020)."},{"key":"e_1_3_2_1_91_1","doi-asserted-by":"publisher","DOI":"10.1109\/HPCA.2017.13"},{"key":"e_1_3_2_1_92_1","volume-title":"https:\/\/developer.nvidia.com\/tensorrt","author":"NVIDIA.","year":"2021","unstructured":"NVIDIA. 2021. NVIDIA TensorRT. ( 2021 ). https:\/\/developer.nvidia.com\/tensorrt NVIDIA. 2021. NVIDIA TensorRT. (2021). https:\/\/developer.nvidia.com\/tensorrt"},{"key":"e_1_3_2_1_93_1","doi-asserted-by":"publisher","DOI":"10.1109\/TCAD.2018.2878168"},{"key":"e_1_3_2_1_94_1","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2019.2942052"},{"key":"e_1_3_2_1_95_1","volume-title":"Proceedings of IEEE International Symposium on Workload Characterization (IISWC).","author":"Pandiyan Dhinakaran","year":"2013","unstructured":"Dhinakaran Pandiyan , Shin-Ying Lee , and Carole-Jean Wu . 2013 . 
Performance, Energy Characterizations and Architectural Implications of an Emerging Mobile Platform Benchmark Suite . In Proceedings of IEEE International Symposium on Workload Characterization (IISWC). Dhinakaran Pandiyan, Shin-Ying Lee, and Carole-Jean Wu. 2013. Performance, Energy Characterizations and Architectural Implications of an Emerging Mobile Platform Benchmark Suite. In Proceedings of IEEE International Symposium on Workload Characterization (IISWC)."},{"key":"e_1_3_2_1_96_1","doi-asserted-by":"publisher","DOI":"10.1109\/IISWC.2014.6983056"},{"key":"e_1_3_2_1_97_1","volume-title":"https:\/\/pytorch.org","year":"2021","unstructured":"PyTorch. 2021. PyTorch. ( 2021 ). https:\/\/pytorch.org PyTorch. 2021. PyTorch. (2021). https:\/\/pytorch.org"},{"key":"e_1_3_2_1_98_1","unstructured":"PyTorch. 2021. PyTorch Mobile. (2021). https:\/\/pytorch.org\/mobile\/home\/  PyTorch. 2021. PyTorch Mobile. (2021). https:\/\/pytorch.org\/mobile\/home\/"},{"key":"e_1_3_2_1_99_1","volume-title":"Can Federated Learning Save the Planet? arXiv:2010.06537","author":"Qiu Xinchi","year":"2020","unstructured":"Xinchi Qiu , Titouan Parcollet , Daniel\u00a0 J. Beutel , Taner Topal , Akhil Mathur , and Nicholas\u00a0 D. Lane . 2020. Can Federated Learning Save the Planet? arXiv:2010.06537 ( 2020 ). Xinchi Qiu, Titouan Parcollet, Daniel\u00a0J. Beutel, Taner Topal, Akhil Mathur, and Nicholas\u00a0D. Lane. 2020. Can Federated Learning Save the Planet? arXiv:2010.06537 (2020)."},{"key":"e_1_3_2_1_100_1","unstructured":"Qualcomm. 2021. Qualcomm Neural Processing SDK for AI. (2021). https:\/\/developer.qualcomm.com\/software\/qualcomm-neural-processing-sdk  Qualcomm. 2021. Qualcomm Neural Processing SDK for AI. (2021). https:\/\/developer.qualcomm.com\/software\/qualcomm-neural-processing-sdk"},{"key":"e_1_3_2_1_101_1","doi-asserted-by":"publisher","DOI":"10.3390\/fi10070060"},{"key":"e_1_3_2_1_102_1","volume-title":"ZeRO-Infinity: Breaking the GPU Memory Wall for Extreme Scale Deep Learning. 
arXiv:2104.07857","author":"Rajbhandari Samyam","year":"2021","unstructured":"Samyam Rajbhandari , Olatunji Ruwase , Jeff Rasley , Shaden Smith , and Yuxiong He. 2021. ZeRO-Infinity: Breaking the GPU Memory Wall for Extreme Scale Deep Learning. arXiv:2104.07857 ( 2021 ). Samyam Rajbhandari, Olatunji Ruwase, Jeff Rasley, Shaden Smith, and Yuxiong He. 2021. ZeRO-Infinity: Breaking the GPU Memory Wall for Extreme Scale Deep Learning. arXiv:2104.07857 (2021)."},{"key":"e_1_3_2_1_103_1","doi-asserted-by":"publisher","DOI":"10.1109\/MM.2021.3066343"},{"key":"e_1_3_2_1_104_1","doi-asserted-by":"publisher","DOI":"10.1109\/ISCA45697.2020.00045"},{"key":"e_1_3_2_1_105_1","volume-title":"https:\/\/www.samsung.com\/semiconductor\/minisite\/exynos\/products\/mobileprocessor\/exynos-9825\/","author":"Processors Exynos","year":"2021","unstructured":"Samsung. 2021. Exynos 9825 Processors . ( 2021 ). https:\/\/www.samsung.com\/semiconductor\/minisite\/exynos\/products\/mobileprocessor\/exynos-9825\/ Samsung. 2021. Exynos 9825 Processors. (2021). https:\/\/www.samsung.com\/semiconductor\/minisite\/exynos\/products\/mobileprocessor\/exynos-9825\/"},{"key":"e_1_3_2_1_106_1","unstructured":"Samsung. 2021. Exynos 990 Mobile Processors. (2021). https:\/\/www.samsung.com\/semiconductor\/minisite\/exynos\/products\/mobileprocessor\/exynos-990\/  Samsung. 2021. Exynos 990 Mobile Processors. (2021). https:\/\/www.samsung.com\/semiconductor\/minisite\/exynos\/products\/mobileprocessor\/exynos-990\/"},{"key":"e_1_3_2_1_107_1","volume-title":"S10, & S10+.","year":"2021","unstructured":"Samsung. 2021. Samsung Galaxy S10e , S10, & S10+. ( 2021 ). https:\/\/www.samsung.com\/global\/galaxy\/galaxy-s10 Samsung. 2021. Samsung Galaxy S10e, S10, & S10+. (2021). https:\/\/www.samsung.com\/global\/galaxy\/galaxy-s10"},{"key":"e_1_3_2_1_108_1","unstructured":"Samsung. 2021. Samsung Neural SDK. (2021). https:\/\/developer.samsung.com\/neural\/overview.html#Release-Notes  Samsung. 2021. Samsung Neural SDK. 
(2021). https:\/\/developer.samsung.com\/neural\/overview.html#Release-Notes"},{"key":"e_1_3_2_1_109_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00474"},{"key":"e_1_3_2_1_110_1","doi-asserted-by":"publisher","DOI":"10.1109\/ISPASS.2018.00015"},{"key":"e_1_3_2_1_111_1","doi-asserted-by":"publisher","DOI":"10.1109\/IISWC.2015.9"},{"key":"e_1_3_2_1_112_1","volume-title":"Proceedings of the International Conference on Neural Information Processing Systems (NIPS).","author":"Smith Virginia","year":"2017","unstructured":"Virginia Smith , Chao-Kai Chiang , Maziar Sanjabi , and Ameet Talwalkar . 2017 . Federated Multi-Task Learning . In Proceedings of the International Conference on Neural Information Processing Systems (NIPS). Virginia Smith, Chao-Kai Chiang, Maziar Sanjabi, and Ameet Talwalkar. 2017. Federated Multi-Task Learning. In Proceedings of the International Conference on Neural Information Processing Systems (NIPS)."},{"key":"e_1_3_2_1_113_1","volume-title":"Proceedings of the International Conference on Learning Representations (ICLR).","author":"Springenberg Jost\u00a0Tobias","year":"2015","unstructured":"Jost\u00a0Tobias Springenberg , Alexey Dosovitskiy , Thomas Brox , and Martin Riedmiller . 2015 . Striving for Simplicity: The All Convolutional Net . In Proceedings of the International Conference on Learning Representations (ICLR). Jost\u00a0Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. 2015. Striving for Simplicity: The All Convolutional Net. In Proceedings of the International Conference on Learning Representations (ICLR)."},{"key":"e_1_3_2_1_114_1","volume-title":"MobileBERT: A Compact Task-agnostic BERT for Resource-Limited Devices. arXiv:2004.02984","author":"Sun Zhiqing","year":"2020","unstructured":"Zhiqing Sun , Hongkun Yu , Xiaodan Song , Renjie Liu , Yiming Yang , and Denny Zhou . 2020. MobileBERT: A Compact Task-agnostic BERT for Resource-Limited Devices. arXiv:2004.02984 ( 2020 ). 
Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. 2020. MobileBERT: A Compact Task-agnostic BERT for Resource-Limited Devices. arXiv:2004.02984 (2020)."},{"key":"e_1_3_2_1_115_1","volume-title":"EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. arXiv:1905.11946","author":"Tan Mingxing","year":"2019","unstructured":"Mingxing Tan and Quoc\u00a0 V Le. 2019. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. arXiv:1905.11946 ( 2019 ). Mingxing Tan and Quoc\u00a0V Le. 2019. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. arXiv:1905.11946 (2019)."},{"key":"e_1_3_2_1_116_1","volume-title":"USENIX Workshop on Hot Topics in Edge Computing (HotEdge).","author":"Tao Zeyi","year":"2018","unstructured":"Zeyi Tao and Qun Li . 2018 . eSGD: Communication Efficient Distributed Deep Learning on the Edge . In USENIX Workshop on Hot Topics in Edge Computing (HotEdge). Zeyi Tao and Qun Li. 2018. eSGD: Communication Efficient Distributed Deep Learning on the Edge. In USENIX Workshop on Hot Topics in Edge Computing (HotEdge)."},{"key":"e_1_3_2_1_117_1","volume-title":"https:\/\/tensorflow.org\/lite","year":"2021","unstructured":"TensorFlow. 2021. TFLite. ( 2021 ). https:\/\/tensorflow.org\/lite TensorFlow. 2021. TFLite. (2021). https:\/\/tensorflow.org\/lite"},{"key":"e_1_3_2_1_118_1","doi-asserted-by":"publisher","DOI":"10.47738\/jads.v2i2.28"},{"key":"e_1_3_2_1_119_1","unstructured":"Marten van Dijk Nhuong\u00a0V. Nguyen Toan\u00a0N. Nguyen Lam\u00a0M. Nguyen Quoc Tran-Dinh and Phuong\u00a0Ha Nguyen. 2020. Asynchronous Federated Learning with Reduced Number of Rounds and with Differential Privacy from Less Aggregated Gaussian Noise. (2020).  Marten van Dijk Nhuong\u00a0V. Nguyen Toan\u00a0N. Nguyen Lam\u00a0M. Nguyen Quoc Tran-Dinh and Phuong\u00a0Ha Nguyen. 2020. 
Asynchronous Federated Learning with Reduced Number of Rounds and with Differential Privacy from Less Aggregated Gaussian Noise. (2020)."},{"key":"e_1_3_2_1_120_1","doi-asserted-by":"publisher","DOI":"10.1109\/TPDS.2020.3023905"},{"key":"e_1_3_2_1_121_1","volume-title":"Proceedings of Conference on Neural Information Processing Systems (NeurIPS).","author":"Wang Jianyu","year":"2020","unstructured":"Jianyu Wang , Qinghua Liu , Hao Liang , Gauri Joshi , and H.\u00a0 Vincent Poor . 2020 . Tackling the Objective Inconsistency Problem in Heterogeneous Federated Optimization . In Proceedings of Conference on Neural Information Processing Systems (NeurIPS). Jianyu Wang, Qinghua Liu, Hao Liang, Gauri Joshi, and H.\u00a0Vincent Poor. 2020. Tackling the Objective Inconsistency Problem in Heterogeneous Federated Optimization. In Proceedings of Conference on Neural Information Processing Systems (NeurIPS)."},{"key":"e_1_3_2_1_122_1","doi-asserted-by":"publisher","DOI":"10.1109\/MDAT.2020.2968258"},{"key":"e_1_3_2_1_123_1","doi-asserted-by":"publisher","DOI":"10.1109\/MDAT.2020.2968258"},{"key":"e_1_3_2_1_124_1","volume-title":"Proceedings of Machine Learning Systems (MLSys).","author":"Wang Yu\u00a0Emma","year":"2020","unstructured":"Yu\u00a0Emma Wang , Gu-Yeon Wei , and David Brooks . 2020 . A Systematic Methodology for Analysis of Deep Learning Hardware and Software Platforms . In Proceedings of Machine Learning Systems (MLSys). Yu\u00a0Emma Wang, Gu-Yeon Wei, and David Brooks. 2020. A Systematic Methodology for Analysis of Deep Learning Hardware and Software Platforms. In Proceedings of Machine Learning Systems (MLSys)."},{"key":"e_1_3_2_1_125_1","doi-asserted-by":"publisher","DOI":"10.1145\/3390523"},{"key":"e_1_3_2_1_126_1","volume-title":"FBNet: Hardware-Aware Efficient ConvNet Design via Differentiable Neural Architecture Search. 
arXiv:1812.03443","author":"Wu Bichen","year":"2018","unstructured":"Bichen Wu , Xiaoliang Dai , Peizhao Zhang , Yanghan Wang , Fei Sun , Yiming Wu , Yuandong Tian , Peter Vajda , Yangqing Jia , and Kurt Keutzer . 2018. FBNet: Hardware-Aware Efficient ConvNet Design via Differentiable Neural Architecture Search. arXiv:1812.03443 ( 2018 ). arXiv:cs.CV\/1812.03443 Bichen Wu, Xiaoliang Dai, Peizhao Zhang, Yanghan Wang, Fei Sun, Yiming Wu, Yuandong Tian, Peter Vajda, Yangqing Jia, and Kurt Keutzer. 2018. FBNet: Hardware-Aware Efficient ConvNet Design via Differentiable Neural Architecture Search. arXiv:1812.03443 (2018). arXiv:cs.CV\/1812.03443"},{"key":"e_1_3_2_1_127_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.01099"},{"key":"e_1_3_2_1_128_1","doi-asserted-by":"publisher","DOI":"10.1109\/HPCA.2019.00048"},{"key":"e_1_3_2_1_129_1","doi-asserted-by":"publisher","DOI":"10.1109\/ISPASS.2011.5762710"},{"key":"e_1_3_2_1_130_1","doi-asserted-by":"publisher","DOI":"10.1145\/3321408.3323080"},{"key":"e_1_3_2_1_131_1","doi-asserted-by":"publisher","DOI":"10.1145\/2783258.2783270"},{"key":"e_1_3_2_1_132_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-63076-8"},{"key":"e_1_3_2_1_133_1","volume-title":"Improving Sample Efficiency in Model-Free Reinforcement Learning from Images. arXiv:1910.01741v3","author":"Yarats Denis","year":"2019","unstructured":"Denis Yarats , Amy Zhang , Ilya Kostrikov , Brandon Amos , Joelle Pineau , and Rob Fergus . 2019. Improving Sample Efficiency in Model-Free Reinforcement Learning from Images. arXiv:1910.01741v3 ( 2019 ). Denis Yarats, Amy Zhang, Ilya Kostrikov, Brandon Amos, Joelle Pineau, and Rob Fergus. 2019. Improving Sample Efficiency in Model-Free Reinforcement Learning from Images. arXiv:1910.01741v3 (2019)."},{"key":"e_1_3_2_1_134_1","volume-title":"Proceedings of Machine Learning and Systems (MLSys).","author":"Yin Chunxing","year":"2021","unstructured":"Chunxing Yin , Bilge Acun , Xing Liu , and Carole-Jean Wu . 
2021 . TT-Rec: Tensor Train Compression for Deep Learning Recommendation Model Embeddings . In Proceedings of Machine Learning and Systems (MLSys). Chunxing Yin, Bilge Acun, Xing Liu, and Carole-Jean Wu. 2021. TT-Rec: Tensor Train Compression for Deep Learning Recommendation Model Embeddings. In Proceedings of Machine Learning and Systems (MLSys)."},{"key":"e_1_3_2_1_135_1","doi-asserted-by":"publisher","DOI":"10.1109\/IPDPS47924.2020.00033"},{"key":"e_1_3_2_1_136_1","doi-asserted-by":"crossref","unstructured":"Bingxin Zhang Guopeng Zhang Weice Sun and Kun Yang. 2020. Task Offloading with Power Control for Mobile Edge Computing Using Reinforcement Learning-based Markov Decision Process. Mobile Information Systems (2020).  Bingxin Zhang Guopeng Zhang Weice Sun and Kun Yang. 2020. Task Offloading with Power Control for Mobile Edge Computing Using Reinforcement Learning-based Markov Decision Process. Mobile Information Systems (2020).","DOI":"10.1155\/2020\/7630275"},{"key":"e_1_3_2_1_137_1","doi-asserted-by":"publisher","DOI":"10.1007\/s11036-020-01586-4"},{"key":"e_1_3_2_1_138_1","doi-asserted-by":"publisher","DOI":"10.1145\/1878961.1878982"},{"key":"e_1_3_2_1_139_1","doi-asserted-by":"publisher","DOI":"10.1109\/GLOBECOM38437.2019.9013498"},{"key":"e_1_3_2_1_140_1","volume-title":"Improving Semi-supervised Federated Learning by Reducing the Gradient Diversity of Models. arXiv:2008.11364","author":"Zhang Zhengming","year":"2020","unstructured":"Zhengming Zhang , Yaoqing Yang , Zhewei Yao , Yujun Yan , Joseph\u00a0 E. Gonzalez , and Michael\u00a0 W. Mahoney . 2020. Improving Semi-supervised Federated Learning by Reducing the Gradient Diversity of Models. arXiv:2008.11364 ( 2020 ). Zhengming Zhang, Yaoqing Yang, Zhewei Yao, Yujun Yan, Joseph\u00a0E. Gonzalez, and Michael\u00a0W. Mahoney. 2020. Improving Semi-supervised Federated Learning by Reducing the Gradient Diversity of Models. 
arXiv:2008.11364 (2020)."},{"key":"e_1_3_2_1_141_1","volume-title":"Federated Learning with Non-IID Data. arXiv:1806.00582","author":"Zhao Yue","year":"2018","unstructured":"Yue Zhao , Meng Li , Liangzhen Lai , Naveen Suda , Damon Civin , and Vikas Chandra . 2018. Federated Learning with Non-IID Data. arXiv:1806.00582 ( 2018 ). Yue Zhao, Meng Li, Liangzhen Lai, Naveen Suda, Damon Civin, and Vikas Chandra. 2018. Federated Learning with Non-IID Data. arXiv:1806.00582 (2018)."},{"key":"e_1_3_2_1_142_1","doi-asserted-by":"publisher","DOI":"10.1109\/ISCA45697.2020.00092"},{"key":"e_1_3_2_1_143_1","doi-asserted-by":"publisher","DOI":"10.1145\/3301278"},{"key":"e_1_3_2_1_144_1","volume-title":"Proceedings of the IEEE International Symposium on High Performance Computer Architecture (HPCA). 13\u201324","author":"Zhu Yuhao","year":"2013","unstructured":"Yuhao Zhu and Vijay\u00a0Janapa Reddi . 2013 . High-performance and energy-efficient mobile web browsing on big\/little systems . In Proceedings of the IEEE International Symposium on High Performance Computer Architecture (HPCA). 13\u201324 . Yuhao Zhu and Vijay\u00a0Janapa Reddi. 2013. High-performance and energy-efficient mobile web browsing on big\/little systems. In Proceedings of the IEEE International Symposium on High Performance Computer Architecture (HPCA). 
13\u201324."},{"key":"e_1_3_2_1_145_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00907"}],"event":{"name":"MICRO '21: 54th Annual IEEE\/ACM International Symposium on Microarchitecture","location":"Virtual Event Greece","acronym":"MICRO '21","sponsor":["SIGMICRO ACM Special Interest Group on Microarchitectural Research and Processing"]},"container-title":["MICRO-54: 54th Annual IEEE\/ACM International Symposium on Microarchitecture"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3466752.3480129","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/abs\/10.1145\/3466752.3480129","content-type":"text\/html","content-version":"vor","intended-application":"syndication"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3466752.3480129","content-type":"application\/pdf","content-version":"vor","intended-application":"syndication"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3466752.3480129","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T20:18:57Z","timestamp":1750191537000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3466752.3480129"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,10,17]]},"references-count":145,"alternative-id":["10.1145\/3466752.3480129","10.1145\/3466752"],"URL":"https:\/\/doi.org\/10.1145\/3466752.3480129","relation":{},"subject":[],"published":{"date-parts":[[2021,10,17]]},"assertion":[{"value":"2021-10-17","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}