{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,30]],"date-time":"2025-12-30T15:38:34Z","timestamp":1767109114325},"reference-count":33,"publisher":"Springer Science and Business Media LLC","issue":"5","license":[{"start":{"date-parts":[[2023,3,13]],"date-time":"2023-03-13T00:00:00Z","timestamp":1678665600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2023,3,13]],"date-time":"2023-03-13T00:00:00Z","timestamp":1678665600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Complex Intell. Syst."],"published-print":{"date-parts":[[2023,10]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Federated learning (FL) enables clients learning a shared global model from multiple distributed devices while keeping training data locally. Due to the synchronous update mode between server and devices, the straggler problem has become a significant bottleneck for efficient FL. Existing approaches attempt to tackle this issue by using asynchronous-based model aggregation. However, these researches are only from the perspective of changing global model updating manner to mitigate straggler effect. They do not investigate the intrinsic reasons for the generation of the straggler effect, which could not fundamentally solve this problem. Furthermore, asynchronous-based approaches usually ignore those slow-responding but important local updates while frequently aggregating fast-responding ones during the whole training process, which may come with degradation in model accuracy. Thus, we propose FedTCR, a novel Federated learning approach via Taming Computing Resources. 
FedTCR includes a coarse-grained logical computing cluster construction algorithm (LCC) and a fine-grained intra-cluster collaborative training mechanism (ICT) as part of the FL process. The computing resource heterogeneity among devices and the communication frequency between devices and the server are indirectly tamed during this process, which substantially mitigates the straggler problem and significantly improves communication efficiency for FL. Experimental results show that FedTCR trains much faster, reducing communication cost by up to <jats:inline-formula><jats:alternatives><jats:tex-math>$$8.59\\,\\times $$<\/jats:tex-math><mml:math xmlns:mml=\"http:\/\/www.w3.org\/1998\/Math\/MathML\">\n                  <mml:mrow>\n                    <mml:mn>8.59<\/mml:mn>\n                    <mml:mspace \/>\n                    <mml:mo>\u00d7<\/mml:mo>\n                  <\/mml:mrow>\n                <\/mml:math><\/jats:alternatives><\/jats:inline-formula> while improving model accuracy by <jats:inline-formula><jats:alternatives><jats:tex-math>$$13.85\\%$$<\/jats:tex-math><mml:math xmlns:mml=\"http:\/\/www.w3.org\/1998\/Math\/MathML\">\n                  <mml:mrow>\n                    <mml:mn>13.85<\/mml:mn>\n                    <mml:mo>%<\/mml:mo>\n                  <\/mml:mrow>\n                <\/mml:math><\/jats:alternatives><\/jats:inline-formula>, compared to state-of-the-art FL methods.<\/jats:p>","DOI":"10.1007\/s40747-023-01006-6","type":"journal-article","created":{"date-parts":[[2023,3,26]],"date-time":"2023-03-26T22:13:59Z","timestamp":1679868839000},"page":"5199-5219","update-policy":"http:\/\/dx.doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":14,"title":["FedTCR: communication-efficient federated learning via taming computing 
resources"],"prefix":"10.1007","volume":"9","author":[{"given":"Kaiju","family":"Li","sequence":"first","affiliation":[]},{"given":"Hao","family":"Wang","sequence":"additional","affiliation":[]},{"given":"Qinghua","family":"Zhang","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2023,3,13]]},"reference":[{"issue":"11","key":"1006_CR1","doi-asserted-by":"publisher","first-page":"2245","DOI":"10.28991\/cej-2020-03091615","volume":"6","author":"S Arhin","year":"2020","unstructured":"Arhin S, Manandhar B, Baba-Adam H (2020) Predicting travel times of bus transit in Washington DC using artificial neural networks. Civ Eng J 6(11):2245\u20132261","journal-title":"Civ Eng J"},{"key":"1006_CR2","doi-asserted-by":"publisher","DOI":"10.1093\/comjnl\/bxac062","author":"K Li","year":"2022","unstructured":"Li K, Wang H (2022) Federated learning communication-efficiency framework via corset construction [J]. Comput J. https:\/\/doi.org\/10.1093\/comjnl\/bxac062","journal-title":"Comput J"},{"issue":"4","key":"1006_CR3","doi-asserted-by":"publisher","first-page":"5572","DOI":"10.1109\/JSYST.2021.3119152","volume":"16","author":"K Li","year":"2021","unstructured":"Li K, Xiao CH (2021) CBFL: a communication-efficient federated learning framework from data redundancy perspective [J]. IEEE Syst J 16(4):5572\u20135583","journal-title":"IEEE Syst J"},{"key":"1006_CR4","unstructured":"McMahan B, Moore E, Ramage D, Hampson S, y Arcas BA (2017) Communication-efficient learning of deep networks from decentralized data. In: Artificial intelligence and statistics, PMLR, pp 1273\u20131282"},{"issue":"3","key":"1006_CR5","doi-asserted-by":"publisher","first-page":"50","DOI":"10.1109\/MSP.2020.2975749","volume":"37","author":"T Li","year":"2020","unstructured":"Li T, Sahu AK, Talwalkar A, Smith V (2020) Federated learning: challenges, methods, and future directions. 
IEEE Signal Process Mag 37(3):50\u201360","journal-title":"IEEE Signal Process Mag"},{"key":"1006_CR6","unstructured":"Hamer J, Mohri M, Suresh AT Advances and open problems in federated learning. arXiv preprint arXiv:1912.04977"},{"issue":"33","key":"1006_CR7","first-page":"1","volume":"1","author":"S Wang","year":"2019","unstructured":"Wang S, Tuor T, Salonidis T (2019) Adaptive federated learning in resource-constrained edge computing systems. IEEE J Sel Areas Commun 1(33):1\u20131","journal-title":"IEEE J Sel Areas Commun"},{"issue":"99","key":"1006_CR8","first-page":"1","volume":"1","author":"Y Chen","year":"2020","unstructured":"Chen Y, Sun X, Jin Y (2020) Communication-efficient federated deep learning with asynchronous model update and temporally weighted aggregation. IEEE Trans Neural Netw Learn Syst 1(99):1\u201310","journal-title":"IEEE Trans Neural Netw Learn Syst"},{"key":"1006_CR9","unstructured":"Kone\u010dn\u00fd J, McMahan HB, Yu FX, Richt\u00e1rik P, Suresh AT, Bacon D Federated learning: strategies for improving communication efficiency. arXiv preprint arXiv:1610.05492"},{"issue":"4","key":"1006_CR10","first-page":"1","volume":"2","author":"K Li","year":"2021","unstructured":"Li K, Xiao C (2021) CBFL: a communication-efficient federated learning framework from data redundancy perspective. IEEE Syst J 2(4):1\u201312","journal-title":"IEEE Syst J"},{"key":"1006_CR11","unstructured":"Li X, Huang K, Yang W, Wang S, Zhang Z (2019b) On the convergence of FedAvg on non-IID data. In International conference on learning representations"},{"key":"1006_CR12","doi-asserted-by":"crossref","unstructured":"Nishio T, Yonetani R (2019) Client selection for federated learning with heterogeneous resources in mobile edge. arXiv preprint arXiv:1804.08333","DOI":"10.1109\/ICC.2019.8761315"},{"key":"1006_CR13","unstructured":"Michael R, Amir J, Marco S, Catalin C, Moritz N, Lyman D, Michael K (2019) Asynchronous federated learning for geospatial applications. 
In: European conference on machine learning and principles and practice of knowledge discovery in databases (ECML PKDD), pp 21\u201328"},{"key":"1006_CR14","unstructured":"Xie C, Koyejo S, Gupta I (2019) Asynchronous federated optimization. In: Submitted to international conference on learning representations. arXiv preprint arXiv:1903.03934"},{"key":"1006_CR15","unstructured":"Chen M, Mao B, Ma T (2019) Efficient and robust asynchronous federated learning with stragglers. In: Submitted to international conference on learning representations"},{"key":"1006_CR16","doi-asserted-by":"crossref","unstructured":"Chen Y, Ning Y, Rangwala H (2019) Asynchronous online federated learning for edge devices. In: Submitted to international conference on learning representations. arXiv preprint arXiv:1911.02134","DOI":"10.1109\/BigData50022.2020.9378161"},{"issue":"99","key":"1006_CR17","first-page":"1","volume":"PP","author":"Y Wei","year":"2020","unstructured":"Wei Y, Luong NC, Hoang DT (2020) Federated learning in mobile edge networks: a comprehensive survey. IEEE Commun Surv Tutor PP(99):1\u20131","journal-title":"IEEE Commun Surv Tutor"},{"key":"1006_CR18","doi-asserted-by":"crossref","unstructured":"Chai Z, Ali A, Zawad S, Truex S, Anwar A, Bara CN, Zhou Y, Ludwig H, Yan F, Cheng Y (2020) Tifl: a tier-based federated learning system. In Proceedings of the 29th international symposium on high-performance parallel and distributed computing (HPDC), pp 125\u2013136","DOI":"10.1145\/3369583.3392686"},{"key":"1006_CR19","doi-asserted-by":"crossref","unstructured":"Yao X, Huang C, Sun L (2018) Two-stream federated learning: reduce the communication costs. In: 2018 IEEE visual communications and image processing (VCIP). IEEE","DOI":"10.1109\/VCIP.2018.8698609"},{"key":"1006_CR20","unstructured":"Amiri MM, Gunduz D, Kulkarni SR, Poor HV Federated learning with quantized global model updates. 
arXiv preprint arXiv:2006.10672"},{"key":"1006_CR21","unstructured":"Hsieh K, Harlap A, Vijaykumar N Gaia: geo-distributed machine learning approaching LAN speeds. In: 14th USENIX symposium on networked systems design and implementation (NSDI 17), USENIX, pp 629\u2013647"},{"key":"1006_CR22","doi-asserted-by":"crossref","unstructured":"Wang LP, Wang W, Li B (2019) CMFL: mitigating communication overhead for federated learning. In: 2019 IEEE 39th international conference on distributed computing systems (ICDCS). IEEE, pp 954\u2013964","DOI":"10.1109\/ICDCS.2019.00099"},{"key":"1006_CR23","unstructured":"Tao Z, Li Q (2018) eSGD: communication-efficient distributed deep learning on the edge. In: USENIX workshop on hot topics in edge computing (HotEdge 18). USENIX, pp 1\u20136"},{"key":"1006_CR24","unstructured":"Dai X, Yan X, Zhou K, Yang H, Ng KK, Cheng J, Fan Y Hyper-sphere quantization: communication-efficient sgd for federated learning. arXiv preprint arXiv:1911.04655"},{"key":"1006_CR25","unstructured":"Reisizadeh A, Mokhtari A, Hassani H (2020) FedPAQ: a communication-efficient federated learning method with periodic averaging and quantization. In: International conference on artificial intelligence and statistics, pp 2021\u20132031"},{"key":"1006_CR26","unstructured":"Reisizadeh A, Mokhtari A, Hassani H, Jadbabaie A, Pedarsani R (2020) Fedpaq: a communication-efficient federated learning method with periodic averaging and quantization. In: International conference on artificial intelligence and statistics, PMLR, pp 2021\u20132031"},{"issue":"9","key":"1006_CR27","doi-asserted-by":"publisher","first-page":"3400","DOI":"10.1109\/TNNLS.2019.2944481","volume":"31","author":"F Sattler","year":"2019","unstructured":"Sattler F, Wiedemann S, M\u00fcller K-R, Samek W (2019) Robust and communication-efficient federated learning from non-iid data. 
IEEE Trans Neural Netw Learn Syst 31(9):3400\u20133413","journal-title":"IEEE Trans Neural Netw Learn Syst"},{"key":"1006_CR28","unstructured":"Li A, Sun J, Wang B (2020) Lotteryfl: personalized and communication-efficient federated learning with lottery ticket hypothesis on non-iid datasets. arXiv preprint arXiv:2008.03371"},{"issue":"31","key":"1006_CR29","doi-asserted-by":"publisher","first-page":"1310","DOI":"10.1109\/TNNLS.2019.2919699","volume":"4","author":"H Zhu","year":"2020","unstructured":"Zhu H, Jin Y (2020) Multi-objective evolutionary federated learning. IEEE Trans Neural Netw Learn Syst 4(31):1310\u20131322","journal-title":"IEEE Trans Neural Netw Learn Syst"},{"key":"1006_CR30","unstructured":"Chen Y, Ning Y, Rangwala H Asynchronous online federated learning for edge devices. arXiv preprint arXiv:1911.02134"},{"issue":"7553","key":"1006_CR31","doi-asserted-by":"publisher","first-page":"436","DOI":"10.1038\/nature14539","volume":"521","author":"Y Lecun","year":"2015","unstructured":"Lecun Y, Bengio Y, Hinton G (2015) Deep learning. Nature 521(7553):436","journal-title":"Nature"},{"key":"1006_CR32","doi-asserted-by":"crossref","unstructured":"LeCun Y, Bottou L, Bengio Y, Haffner P (1998) Gradient-based learning applied to document recognition. In: Proceedings of the IEEE, vol 86(11)","DOI":"10.1109\/5.726791"},{"key":"1006_CR33","unstructured":"Krizhevsky A, Hinton G (2009) Learning multiple layers of features from tiny images. 
Technical report, Citeseer"}],"container-title":["Complex &amp; Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-023-01006-6.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s40747-023-01006-6\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-023-01006-6.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,9,22]],"date-time":"2023-09-22T17:17:37Z","timestamp":1695403057000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s40747-023-01006-6"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,3,13]]},"references-count":33,"journal-issue":{"issue":"5","published-print":{"date-parts":[[2023,10]]}},"alternative-id":["1006"],"URL":"https:\/\/doi.org\/10.1007\/s40747-023-01006-6","relation":{},"ISSN":["2199-4536","2198-6053"],"issn-type":[{"value":"2199-4536","type":"print"},{"value":"2198-6053","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,3,13]]},"assertion":[{"value":"31 March 2022","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"7 February 2023","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"13 March 2023","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}