{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,9]],"date-time":"2026-03-09T20:52:35Z","timestamp":1773089555372,"version":"3.50.1"},"reference-count":20,"publisher":"Wiley","issue":"1","license":[{"start":{"date-parts":[[2022,1,6]],"date-time":"2022-01-06T00:00:00Z","timestamp":1641427200000},"content-version":"vor","delay-in-days":5,"URL":"http:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["61972358"],"award-info":[{"award-number":["61972358"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["62072146"],"award-info":[{"award-number":["62072146"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["onlinelibrary.wiley.com"],"crossmark-restriction":true},"short-container-title":["Wireless Communications and Mobile Computing"],"published-print":{"date-parts":[[2022,1]]},"abstract":"<jats:p>Federated learning is a new framework of machine learning, it trains models locally on multiple clients and then uploads local models to the server for model aggregation iteratively until the model converges. In most cases, the local epochs of all clients are set to the same value in federated learning. In practice, the clients are usually heterogeneous, which leads to the inconsistent training speed of clients. The faster clients will remain idle for a long time to wait for the slower clients, which prolongs the model training time. 
Since the time cost of clients\u2019 local training reflects their training speed and can be used to guide the dynamic setting of local epochs, we propose a deep-learning-based method to predict the training time of models on heterogeneous clients. First, a neural network is designed to extract the influence of different model features on training time. Second, we propose a dimensionality reduction rule that uses this influence to extract the key features with the greatest impact on training time. Finally, we use the key features extracted by the dimensionality reduction rule to train the time prediction model. Our experiments show that, compared with the current prediction method, our method reduces model features by 30% and training data by 25% for the convolutional layer, and model features by 20% and training data by 20% for the dense layer, while maintaining the same level of prediction error.<\/jats:p>","DOI":"10.1155\/2022\/6887040","type":"journal-article","created":{"date-parts":[[2022,1,6]],"date-time":"2022-01-06T18:05:16Z","timestamp":1641492316000},"update-policy":"https:\/\/doi.org\/10.1002\/crossmark_policy","source":"Crossref","is-referenced-by-count":7,"title":["Local Epochs Inefficiency Caused by Device Heterogeneity in Federated 
Learning"],"prefix":"10.1155","volume":"2022","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-2026-417X","authenticated-orcid":false,"given":"Yan","family":"Zeng","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2199-3879","authenticated-orcid":false,"given":"Xin","family":"Wang","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4277-9967","authenticated-orcid":false,"given":"Junfeng","family":"Yuan","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0241-0727","authenticated-orcid":false,"given":"Jilin","family":"Zhang","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0775-4254","authenticated-orcid":false,"given":"Jian","family":"Wan","sequence":"additional","affiliation":[]}],"member":"311","published-online":{"date-parts":[[2022,1,6]]},"reference":[{"key":"e_1_2_9_1_2","first-page":"1273","volume-title":"Artificial intelligence and statistics","author":"McMahan H. B.","year":"2017"},{"key":"e_1_2_9_2_2","unstructured":"HardA. RaoK. MathewsR. RamaswamyS. BeaufaysF. AugensteinS. EichnerH. KiddonC. andRamageD. Federated learning for mobile keyboard prediction 2018 https:\/\/arxiv.org\/abs\/1811.03604."},{"key":"e_1_2_9_3_2","unstructured":"ai.google. Under the hood of the Pixel 2: How AI is supercharging hardware 2018 2018 https:\/\/ai.google\/stories\/ai-in-hardware\/."},{"key":"e_1_2_9_4_2","unstructured":"support.google. Your chats stay private while Messages improves suggestions 2019 2019 http:\/\/support.google.com\/messages\/answer\/9327902."},{"key":"e_1_2_9_5_2","unstructured":"Apple Private federated learning (NeurIPS 2019 Expo Talk Abstract) 2019 https:\/\/nips.cc\/ExpoConferences\/2019\/schedule?talk_id=40."},{"key":"e_1_2_9_6_2","unstructured":"LiT. SahuA. K. ZaheerM. SanjabiM. TalwalkarA. andSmithV. 
Federated optimization in heterogeneous networks 2018 https:\/\/arxiv.org\/abs\/1812.06127."},{"key":"e_1_2_9_7_2","unstructured":"WangJ. LiuQ. LiangH. JoshiG. andPoorH. V. Tackling the Objective Inconsistency Problem in Heterogeneous Federated Optimization 2020."},{"key":"e_1_2_9_8_2","doi-asserted-by":"crossref","unstructured":"JustusD. BrennanJ. BonnerS. andMcGoughA. S. Predicting the computational cost of deep learning models 2018 IEEE International Conference on Big Data (Big data) December 2018 Seattle WA USA https:\/\/doi.org\/10.1109\/bigdata.2018.8622396 2-s2.0-85062601734.","DOI":"10.1109\/BigData.2018.8622396"},{"key":"e_1_2_9_9_2","doi-asserted-by":"publisher","DOI":"10.3389\/fmed.2017.00085"},{"key":"e_1_2_9_10_2","doi-asserted-by":"publisher","DOI":"10.1111\/mice.12315"},{"key":"e_1_2_9_11_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2018.2886549"},{"key":"e_1_2_9_12_2","unstructured":"HangQ. SparksE. R. andTalwalkarA. Paleo: A Performance Model for Deep Neural Networks 2016."},{"key":"e_1_2_9_13_2","doi-asserted-by":"crossref","unstructured":"PengY. BaoY. ChenY. WuC. andGuoC. Optimus: An Efficient Dynamic Resource Scheduler for Deep Learning Clusters 2018.","DOI":"10.1145\/3190508.3190517"},{"key":"e_1_2_9_14_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10489-019-01426-3"},{"key":"e_1_2_9_15_2","doi-asserted-by":"publisher","DOI":"10.1109\/JIOT.2020.2981684"},{"key":"e_1_2_9_16_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.eswa.2018.11.028"},{"key":"e_1_2_9_17_2","article-title":"Very deep convolutional networks for large-scale image recognition","author":"Simonyan K.","year":"2015","journal-title":"Computer Science"},{"key":"e_1_2_9_18_2","doi-asserted-by":"crossref","unstructured":"HeK. ZhangX. RenS. andSunJ. 
Deep residual learning for image recognition 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) July 2016 Las Vegas NV USA 770\u2013778 https:\/\/doi.org\/10.1109\/CVPR.2016.90 2-s2.0-84986274465.","DOI":"10.1109\/CVPR.2016.90"},{"key":"e_1_2_9_19_2","doi-asserted-by":"crossref","unstructured":"AdolfR. RamaS. ReagenB. WeiG.-y. andBrooksD. Fathom: reference workloads for modern deep learning methods 2016 IEEE International Symposium on Workload Characterization (IISWC) September 2016 Providence RI USA https:\/\/doi.org\/10.1109\/iiswc.2016.7581275 2-s2.0-84994709898.","DOI":"10.1109\/IISWC.2016.7581275"},{"key":"e_1_2_9_20_2","article-title":"Batch Normalization: Accelerating deep network training by reducing internal covariate shift","author":"Ioffe S.","year":"2015","journal-title":"JMLR"}],"container-title":["Wireless Communications and Mobile Computing"],"original-title":[],"language":"en","link":[{"URL":"http:\/\/downloads.hindawi.com\/journals\/wcmc\/2022\/6887040.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"http:\/\/downloads.hindawi.com\/journals\/wcmc\/2022\/6887040.xml","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/pdf\/10.1155\/2022\/6887040","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,9,10]],"date-time":"2025-09-10T00:30:33Z","timestamp":1757464233000},"score":1,"resource":{"primary":{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/10.1155\/2022\/6887040"}},"subtitle":[],"editor":[{"given":"Jinbo","family":"Xiong","sequence":"additional","affiliation":[]}],"short-title":[],"issued":{"date-parts":[[2022,1]]},"references-count":20,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2022,1]]}},"alternative-id":["10.1155\/2022\/6887040"],"URL":"https:\/\/doi.org\/10.1155\/2022\/6887040","archive":["Portico"],"relation":{},"ISSN":["1530-8669","1530-8677"],"issn-type":[{"value":"1530-8669","type":"print"},{"value":"1530-8677","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,1]]},"assertion":[{"value":"2021-08-05","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2021-11-29","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2022-01-06","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}],"article-number":"6887040"}}