{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,20]],"date-time":"2026-03-20T16:30:38Z","timestamp":1774024238840,"version":"3.50.1"},"reference-count":27,"publisher":"MDPI AG","issue":"9","license":[{"start":{"date-parts":[[2022,9,12]],"date-time":"2022-09-12T00:00:00Z","timestamp":1662940800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/100000001","name":"National Science Foundation","doi-asserted-by":"publisher","award":["DMS-1555072"],"award-info":[{"award-number":["DMS-1555072"]}],"id":[{"id":"10.13039\/100000001","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100000001","name":"National Science Foundation","doi-asserted-by":"publisher","award":["DMS-2053746"],"award-info":[{"award-number":["DMS-2053746"]}],"id":[{"id":"10.13039\/100000001","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100000001","name":"National Science Foundation","doi-asserted-by":"publisher","award":["DMS-2134209"],"award-info":[{"award-number":["DMS-2134209"]}],"id":[{"id":"10.13039\/100000001","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100000001","name":"National Science Foundation","doi-asserted-by":"publisher","award":["DE-SC0021142"],"award-info":[{"award-number":["DE-SC0021142"]}],"id":[{"id":"10.13039\/100000001","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Brookhaven National Laboratory","award":["DMS-1555072"],"award-info":[{"award-number":["DMS-1555072"]}]},{"name":"Brookhaven National Laboratory","award":["DMS-2053746"],"award-info":[{"award-number":["DMS-2053746"]}]},{"name":"Brookhaven National Laboratory","award":["DMS-2134209"],"award-info":[{"award-number":["DMS-2134209"]}]},{"name":"Brookhaven National Laboratory","award":["DE-SC0021142"],"award-info":[{"award-number":["DE-SC0021142"]}]},{"DOI":"10.13039\/100000015","name":"U.S. 
Department of Energy (DOE) Office of Science Advanced Scientific Computing Research","doi-asserted-by":"publisher","award":["DMS-1555072"],"award-info":[{"award-number":["DMS-1555072"]}],"id":[{"id":"10.13039\/100000015","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100000015","name":"U.S. Department of Energy (DOE) Office of Science Advanced Scientific Computing Research","doi-asserted-by":"publisher","award":["DMS-2053746"],"award-info":[{"award-number":["DMS-2053746"]}],"id":[{"id":"10.13039\/100000015","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100000015","name":"U.S. Department of Energy (DOE) Office of Science Advanced Scientific Computing Research","doi-asserted-by":"publisher","award":["DMS-2134209"],"award-info":[{"award-number":["DMS-2134209"]}],"id":[{"id":"10.13039\/100000015","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100000015","name":"U.S. Department of Energy (DOE) Office of Science Advanced Scientific Computing Research","doi-asserted-by":"publisher","award":["DE-SC0021142"],"award-info":[{"award-number":["DE-SC0021142"]}],"id":[{"id":"10.13039\/100000015","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Algorithms"],"abstract":"<jats:p>The Deep Operator Network (DeepONet) framework is a class of neural network architectures trained to learn nonlinear operators, i.e., mappings between infinite-dimensional spaces. Traditionally, DeepONets are trained using a centralized strategy that requires transferring the training data to a centralized location. Such a strategy, however, limits our ability to preserve data privacy or use high-performance distributed\/parallel computing platforms. To alleviate such limitations, in this paper, we study the federated training of DeepONets for the first time. 
That is, we develop a framework, which we refer to as Fed-DeepONet, that allows multiple clients to train DeepONets collaboratively under the coordination of a centralized server. To achieve Fed-DeepONets, we propose an efficient stochastic gradient-based algorithm that enables the distributed optimization of the DeepONet parameters by averaging first-order estimates of the DeepONet loss gradient. Then, to accelerate the training convergence of Fed-DeepONets, we propose a moment-enhanced (i.e., adaptive) stochastic gradient-based strategy. Finally, we verify the performance of Fed-DeepONet by learning, for different configurations of the number of clients and fractions of available clients, (i) the solution operator of a gravity pendulum and (ii) the dynamic response of a parametric library of pendulums.<\/jats:p>","DOI":"10.3390\/a15090325","type":"journal-article","created":{"date-parts":[[2022,9,12]],"date-time":"2022-09-12T20:52:25Z","timestamp":1663015945000},"page":"325","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":14,"title":["Fed-DeepONet: Stochastic Gradient-Based Federated Training of Deep Operator Networks"],"prefix":"10.3390","volume":"15","author":[{"given":"Christian","family":"Moya","sequence":"first","affiliation":[{"name":"Department of Mathematics, Purdue University, West Lafayette, IN 47906, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0976-1987","authenticated-orcid":false,"given":"Guang","family":"Lin","sequence":"additional","affiliation":[{"name":"Department of Mathematics and School of Mechanical Engineering, Purdue University, West Lafayette, IN 47906, USA"}]}],"member":"1968","published-online":{"date-parts":[[2022,9,12]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"436","DOI":"10.1038\/nature14539","article-title":"Deep 
learning","volume":"521","author":"LeCun","year":"2015","journal-title":"Nature"},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"620","DOI":"10.1016\/j.jcp.2019.06.042","article-title":"Data driven governing equations approximation using deep neural networks","volume":"395","author":"Qin","year":"2019","journal-title":"J. Comput. Phys."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"A1607","DOI":"10.1137\/20M1342859","article-title":"Data-driven learning of nonautonomous systems","volume":"43","author":"Qin","year":"2021","journal-title":"SIAM J. Sci. Comput."},{"key":"ref_4","unstructured":"Raissi, M., Perdikaris, P., and Karniadakis, G.E. (2018). Multistep neural networks for data-driven discovery of nonlinear dynamical systems. arXiv."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"686","DOI":"10.1016\/j.jcp.2018.10.045","article-title":"Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations","volume":"378","author":"Raissi","year":"2019","journal-title":"J. Comput. Phys."},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"3932","DOI":"10.1073\/pnas.1517384113","article-title":"Discovering governing equations from data by sparse identification of nonlinear dynamical systems","volume":"113","author":"Brunton","year":"2016","journal-title":"Proc. Natl. Acad. Sci. USA"},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"710","DOI":"10.1016\/j.ifacol.2016.10.249","article-title":"Sparse identification of nonlinear dynamics with control (SINDYc)","volume":"49","author":"Brunton","year":"2016","journal-title":"IFAC-PapersOnLine"},{"key":"ref_8","first-page":"20160446","article-title":"Learning partial differential equations via data discovery and sparse optimization","volume":"473","author":"Schaeffer","year":"2017","journal-title":"Proc. R. Soc. A Math. Phys. Eng. 
Sci."},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"1785","DOI":"10.1109\/JPROC.2020.2998530","article-title":"Digital twin in the IoT context: A survey on technical features, scenarios, and architectural models","volume":"108","author":"Minerva","year":"2020","journal-title":"Proc. IEEE"},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"218","DOI":"10.1038\/s42256-021-00302-5","article-title":"Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators","volume":"3","author":"Lu","year":"2021","journal-title":"Nat. Mach. Intell."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"911","DOI":"10.1109\/72.392253","article-title":"Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems","volume":"6","author":"Chen","year":"1995","journal-title":"IEEE Trans. Neural Netw."},{"key":"ref_12","unstructured":"Kingma, D.P., and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv."},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"110296","DOI":"10.1016\/j.jcp.2021.110296","article-title":"DeepM&Mnet: Inferring the electroconvection multiphysics fields based on operator approximation by neural networks","volume":"436","author":"Cai","year":"2021","journal-title":"J. Comput. Phys."},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Moya, C., Zhang, S., Yue, M., and Lin, G. (2022). DeepONet-Grid-UQ: A Trustworthy Deep Operator Framework for Predicting the Power Grid\u2019s Post-Fault Trajectories. arXiv.","DOI":"10.1016\/j.neucom.2023.03.015"},{"key":"ref_15","unstructured":"Li, G., Moya, C., and Zhang, Z. (2022). On Learning the Dynamical Response of Nonlinear Control Systems with Deep Operator Networks. arXiv."},{"key":"ref_16","unstructured":"McMahan, B., Moore, E., Ramage, D., Hampson, S., and y Arcas, B.A. (2017, January 20\u201322). 
Communication-efficient learning of deep networks from decentralized data. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, PMLR, Fort Lauderdale, FL, USA."},{"key":"ref_17","first-page":"50","article-title":"Federated Learning: Challenges, Methods, and Future Directions","volume":"37","author":"Li","year":"2020","journal-title":"IEEE Signal Process. Mag."},{"key":"ref_18","unstructured":"McMahan, H.B., Moore, E., Ramage, D., and y Arcas, B.A. (2016). Federated learning of deep networks using model averaging. arXiv."},{"key":"ref_19","unstructured":"Huang, B., Li, X., Song, Z., and Yang, X. (2021, January 18\u201324). Fl-ntk: A neural tangent kernel-based framework for federated learning analysis. Proceedings of the International Conference on Machine Learning, PMLR, Virtual Event."},{"key":"ref_20","unstructured":"Dean, J., Corrado, G., Monga, R., Chen, K., Devin, M., Mao, M., Ranzato, M., Senior, A., Tucker, P., and Yang, K. (2012, January 3\u20136). Large scale distributed deep networks. Proceedings of the 26th Annual Conference on Neural Information Processing Systems 2012, Lake Tahoe, NV, USA."},{"key":"ref_21","unstructured":"Li, X., Huang, K., Yang, W., Wang, S., and Zhang, Z. (2019). On the convergence of fedavg on non-iid data. arXiv."},{"key":"ref_22","unstructured":"Li, X., Jiang, M., Zhang, X., Kamp, M., and Dou, Q. (2021). Fedbn: Federated learning on non-iid features via local batch normalization. arXiv."},{"key":"ref_23","unstructured":"Khaled, A., Mishchenko, K., and Richt\u00e1rik, P. (2019). First analysis of local gd on heterogeneous data. arXiv."},{"key":"ref_24","unstructured":"Karimireddy, S.P., Kale, S., Mohri, M., Reddi, S., Stich, S., and Suresh, A.T. (2020, January 13\u201318). Scaffold: Stochastic controlled averaging for federated learning. 
Proceedings of the International Conference on Machine Learning, PMLR, Virtual Event."},{"key":"ref_25","unstructured":"Deng, W., Ma, Y.A., Song, Z., Zhang, Q., and Lin, G. (2021). On convergence of federated averaging Langevin dynamics. arXiv."},{"key":"ref_26","unstructured":"Lin, G., Moya, C., and Zhang, Z. (2021). Accelerated replica exchange stochastic gradient Langevin diffusion enhanced Bayesian DeepONet for solving noisy parametric PDEs. arXiv."},{"key":"ref_27","unstructured":"Goodfellow, I., Bengio, Y., and Courville, A. (2016). Deep Learning, MIT Press."}],"container-title":["Algorithms"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1999-4893\/15\/9\/325\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T00:29:58Z","timestamp":1760142598000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1999-4893\/15\/9\/325"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,9,12]]},"references-count":27,"journal-issue":{"issue":"9","published-online":{"date-parts":[[2022,9]]}},"alternative-id":["a15090325"],"URL":"https:\/\/doi.org\/10.3390\/a15090325","relation":{},"ISSN":["1999-4893"],"issn-type":[{"value":"1999-4893","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,9,12]]}}}