{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,14]],"date-time":"2026-03-14T06:14:56Z","timestamp":1773468896391,"version":"3.50.1"},"reference-count":45,"publisher":"Springer Science and Business Media LLC","issue":"20","license":[{"start":{"date-parts":[[2024,7,29]],"date-time":"2024-07-29T00:00:00Z","timestamp":1722211200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2024,7,29]],"date-time":"2024-07-29T00:00:00Z","timestamp":1722211200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/100017142","name":"GNCS","doi-asserted-by":"crossref","award":["Progetti Annuali"],"award-info":[{"award-number":["Progetti Annuali"]}],"id":[{"id":"10.13039\/100017142","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/501100003407","name":"Ministero dell\u2019Istruzione, dell\u2019Universit\u00e0 e della Ricerca","doi-asserted-by":"publisher","award":["PRIN 2022 2022N3ZNAX"],"award-info":[{"award-number":["PRIN 2022 2022N3ZNAX"]}],"id":[{"id":"10.13039\/501100003407","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100003407","name":"Ministero dell\u2019Istruzione, dell\u2019Universit\u00e0 e della Ricerca","doi-asserted-by":"publisher","award":["PRIN 2022 PNRR P2022WC2ZZ"],"award-info":[{"award-number":["PRIN 2022 PNRR P2022WC2ZZ"]}],"id":[{"id":"10.13039\/501100003407","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Soft Comput"],"published-print":{"date-parts":[[2024,10]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Artificial Neural Networks (ANNs) are a tool in approximation theory widely used to solve interpolation problems. 
Indeed, an ANN can be regarded as a function, since it takes an input and returns an output. The structure of the adopted network determines the underlying approximation space, while the specific function is selected by fixing the parameters of the network. In the present paper, we consider one-hidden-layer feedforward ANNs, also referred to as shallow or two-layer networks, so that the structure is determined by the number and types of neurons. The determination of the parameters that define the function, called training, is performed by solving the approximation problem, that is, by imposing interpolation at a set of specified nodes. We consider the case where the parameters are trained via the procedure known as Extreme Learning Machine (ELM), which leads to a linear interpolation problem. Under these hypotheses, the existence of an ANN interpolating function is guaranteed. Since the ANN interpolates, the error occurs outside the sampling interpolation nodes provided by the user. In this study, various choices of nodes are analyzed: equispaced, Chebychev, and randomly selected ones. The focus is then on regular target functions, for which it is known that interpolation can lead to spurious oscillations, a phenomenon that in the ANN literature is referred to as overfitting. We obtain good accuracy of the ANN interpolating function in all tested cases, using these different types of interpolation nodes and different types of neurons. The study starts from the well-known bell-shaped Runge example, which makes it clear that a global interpolating polynomial is accurate only if constructed on suitably chosen nodes, for example the Chebychev ones. In order to evaluate the behavior as the number of interpolation nodes increases, we increase the number of neurons in our network and compare the result with the interpolating polynomial. 
We test using Runge\u2019s function and other well-known examples with different regularities. As expected, the accuracy of the global polynomial approximation improves only when Chebychev nodes are considered. In contrast, the error of the ANN interpolating function always decays, and in most cases we observe that its convergence matches what is observed in the polynomial case on Chebychev nodes, regardless of the set of nodes used for training. We can therefore conclude that the use of such an ANN defeats the Runge phenomenon. Our results show the power of ANNs to achieve excellent approximations when interpolating regular functions, even starting from uniform and random nodes, particularly for Runge\u2019s function.<\/jats:p>","DOI":"10.1007\/s00500-024-09918-2","type":"journal-article","created":{"date-parts":[[2024,7,29]],"date-time":"2024-07-29T14:03:08Z","timestamp":1722261788000},"page":"11767-11785","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":5,"title":["On the accuracy of interpolation based on single-layer artificial neural networks with a focus on defeating the Runge phenomenon"],"prefix":"10.1007","volume":"28","author":[{"given":"Ferdinando","family":"Auricchio","sequence":"first","affiliation":[]},{"given":"Maria Roberta","family":"Belardo","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1484-0217","authenticated-orcid":false,"given":"Francesco","family":"Calabr\u00f2","sequence":"additional","affiliation":[]},{"given":"Gianluca","family":"Fabiani","sequence":"additional","affiliation":[]},{"given":"Ariel F.","family":"Pascaner","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,7,29]]},"reference":[{"issue":"3","key":"9918_CR1","doi-asserted-by":"publisher","first-page":"1360","DOI":"10.1093\/imanum\/dry024","volume":"39","author":"B Adcock","year":"2018","unstructured":"Adcock B, Platte RB, Shadrin A 
(2018) Optimal sampling rates for approximating analytic functions from pointwise samples. IMA J Numer Anal 39(3):1360\u20131390 (05)","journal-title":"IMA J Numer Anal"},{"issue":"3","key":"9918_CR2","doi-asserted-by":"publisher","first-page":"930","DOI":"10.1109\/18.256500","volume":"39","author":"AR Barron","year":"1993","unstructured":"Barron AR (1993) Universal approximation bounds for superpositions of a sigmoidal function. IEEE Trans Inf Theory 39(3):930\u2013945","journal-title":"IEEE Trans Inf Theory"},{"issue":"5","key":"9918_CR3","doi-asserted-by":"publisher","first-page":"1743","DOI":"10.1137\/S1064827503430126","volume":"25","author":"Z Battles","year":"2004","unstructured":"Battles Z, Trefethen LN (2004) An extension of Matlab to continuous functions and operators. SIAM J Sci Comput 25(5):1743\u20131770","journal-title":"SIAM J Sci Comput"},{"key":"9918_CR4","volume-title":"Dynamic programming","author":"RE Bellman","year":"1957","unstructured":"Bellman RE (1957) Dynamic programming. Princeton University Press, Princeton"},{"key":"9918_CR5","volume-title":"Pattern recognition and machine learning","author":"CM Bishop","year":"2006","unstructured":"Bishop CM (2006) Pattern recognition and machine learning. Springer, New York"},{"issue":"2\u20134","key":"9918_CR6","first-page":"484","volume":"5","author":"JP Boyd","year":"2009","unstructured":"Boyd JP, Ong JR (2009) Exponentially-convergent strategies for defeating the Runge phenomenon for the approximation of non-periodic functions, part i: single-interval schemes. Comput Phys 5(2\u20134):484\u2013497","journal-title":"Comput Phys"},{"key":"9918_CR7","unstructured":"Broomhead D, Lowe D (1988) Radial basis functions, multi-variable functional interpolation and adaptive networks. 
Royal Signals and Radar Establishment Malvern (UK), 4148, 03"},{"issue":"1","key":"9918_CR8","doi-asserted-by":"publisher","first-page":"120","DOI":"10.1016\/j.cam.2008.10.022","volume":"229","author":"F Calabr\u00f2","year":"2009","unstructured":"Calabr\u00f2 F, Esposito AC (2009) An evaluation of Clenshaw\u2013Curtis quadrature rule for integration wrt singular measures. J Comput Appl Math 229(1):120\u2013128","journal-title":"J Comput Appl Math"},{"key":"9918_CR9","doi-asserted-by":"publisher","DOI":"10.1016\/j.cma.2021.114188","volume":"387","author":"F Calabr\u00f2","year":"2021","unstructured":"Calabr\u00f2 F, Fabiani G, Siettos C (2021) Extreme learning machine collocation for the numerical solution of elliptic PDEs with sharp gradients. Comput Methods Appl Mech Eng 387:114188","journal-title":"Comput Methods Appl Mech Eng"},{"issue":"1","key":"9918_CR10","doi-asserted-by":"publisher","first-page":"231","DOI":"10.1137\/18M1181985","volume":"62","author":"RM Corless","year":"2020","unstructured":"Corless RM, Sevyeri LR (2020) The Runge example for interpolation and Wilkinson\u2019s examples for rootfinding. SIAM Rev 62(1):231\u2013243","journal-title":"SIAM Rev"},{"key":"9918_CR11","unstructured":"Cyr EC, Gulian MA, Patel RG, Perego M, Trask NA (2020) Robust training and initialization of deep neural networks: An adaptive basis viewpoint. In: Mathematical and scientific machine learning, PMLR. pp 512\u2013536"},{"key":"9918_CR12","doi-asserted-by":"publisher","first-page":"103","DOI":"10.1007\/s10462-013-9405-z","volume":"44","author":"S Ding","year":"2015","unstructured":"Ding S, Zhao H, Zhang Y, Xu X, Nie R (2015) Extreme learning machine: algorithm, theory and applications. 
Artif Intell Rev 44:103\u2013115","journal-title":"Artif Intell Rev"},{"key":"9918_CR13","doi-asserted-by":"publisher","DOI":"10.1016\/j.jcp.2022.111290","volume":"463","author":"S Dong","year":"2022","unstructured":"Dong S, Yang J (2022) On computing the hyperparameter of extreme learning machines: algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. J Comput Phys 463:111290","journal-title":"J Comput Phys"},{"key":"9918_CR14","volume-title":"Chebfun guide","author":"TA Driscoll","year":"2014","unstructured":"Driscoll TA, Hale N, Trefethen LN (2014) Chebfun guide. Pafnuty Publications, Oxford"},{"key":"9918_CR15","unstructured":"Fornasier M, Klock T, Mondelli M, Rauchensteiner M (2022) Finite sample identification of wide shallow neural networks with biases. Preprint arXiv:2211.04589"},{"issue":"2","key":"9918_CR16","doi-asserted-by":"publisher","first-page":"869","DOI":"10.1137\/09076756X","volume":"33","author":"B Fornberg","year":"2011","unstructured":"Fornberg B, Larsson E, Flyer N (2011) Stable computations with Gaussian radial basis functions. SIAM J Sci Comput 33(2):869\u2013892","journal-title":"SIAM J Sci Comput"},{"issue":"34","key":"9918_CR17","doi-asserted-by":"publisher","first-page":"8505","DOI":"10.1073\/pnas.1718942115","volume":"115","author":"J Han","year":"2018","unstructured":"Han J, Jentzen A, Weinan E (2018) Solving high-dimensional partial differential equations using deep learning. Proc Natl Acad Sci 115(34):8505\u20138510","journal-title":"Proc Natl Acad Sci"},{"issue":"4","key":"9918_CR18","doi-asserted-by":"publisher","first-page":"860","DOI":"10.1137\/18M1165748","volume":"61","author":"CF Higham","year":"2019","unstructured":"Higham CF, Higham DJ (2019) Deep learning: an introduction for applied mathematicians. 
SIAM Rev 61(4):860\u2013891","journal-title":"SIAM Rev"},{"issue":"5","key":"9918_CR19","doi-asserted-by":"publisher","first-page":"551","DOI":"10.1016\/0893-6080(90)90005-6","volume":"3","author":"K Hornik","year":"1990","unstructured":"Hornik K, Stinchcombe M, White H (1990) Universal approximation of an unknown mapping and its derivatives using multilayer feedforward networks. Neural netw 3(5):551\u2013560","journal-title":"Neural netw"},{"key":"9918_CR20","unstructured":"Hryniowski A, Wong A (2019) Deeplabnet: End-to-end learning of deep radial basis networks with fully learnable basis functions. arXiv preprint arXiv:1911.09257"},{"issue":"1\u20133","key":"9918_CR21","doi-asserted-by":"publisher","first-page":"489","DOI":"10.1016\/j.neucom.2005.12.126","volume":"70","author":"G-B Huang","year":"2006","unstructured":"Huang G-B, Zhu Q-Y, Siew C-K (2006) Extreme learning machine: theory and applications. Neurocomputing 70(1\u20133):489\u2013501","journal-title":"Neurocomputing"},{"key":"9918_CR22","doi-asserted-by":"publisher","first-page":"32","DOI":"10.1016\/j.neunet.2014.10.001","volume":"61","author":"G Huang","year":"2015","unstructured":"Huang G, Huang G-B, Song S, You K (2015) Trends in extreme learning machines: a review. Neural Netw 61:32\u201348","journal-title":"Neural Netw"},{"key":"9918_CR23","doi-asserted-by":"crossref","unstructured":"Jagtap AD, Shin Y, Kawaguchi K, Karniadakis GE (2021) Deep kronecker neural networks: a general framework for neural networks with adaptive activation functions. arXiv preprint arXiv:2105.09513","DOI":"10.1016\/j.neucom.2021.10.036"},{"issue":"9","key":"9918_CR24","doi-asserted-by":"publisher","first-page":"4509","DOI":"10.1109\/TIP.2017.2713099","volume":"26","author":"KH Jin","year":"2017","unstructured":"Jin KH, McCann MT, Froustey E, Unser M (2017) Deep convolutional neural network for inverse problems in imaging. 
IEEE Trans Image Process 26(9):4509\u20134522","journal-title":"IEEE Trans Image Process"},{"key":"9918_CR25","doi-asserted-by":"publisher","DOI":"10.1038\/s42254-021-00314-5","author":"GE Karniadakis","year":"2021","unstructured":"Karniadakis GE, Kevrekidis IG, Lu L, Perdikaris P, Wang S, Yang L (2021) Physics-informed machine learning. Nat Rev Phys. https:\/\/doi.org\/10.1038\/s42254-021-00314-5","journal-title":"Nat Rev Phys"},{"issue":"5\u20136","key":"9918_CR26","doi-asserted-by":"publisher","first-page":"435","DOI":"10.1007\/s10472-020-09723-1","volume":"89","author":"A Kratsios","year":"2021","unstructured":"Kratsios A (2021) The universal approximation property: characterizations, existence, and a canonical topology for deep-learning. Ann Math Artif Intell 89(5\u20136):435\u2013469","journal-title":"Ann Math Artif Intell"},{"issue":"6","key":"9918_CR27","doi-asserted-by":"publisher","first-page":"861","DOI":"10.1016\/S0893-6080(05)80131-5","volume":"6","author":"M Leshno","year":"1993","unstructured":"Leshno M, Lin VY, Pinkus A, Schocken S (1993) Multilayer feedforward networks with a nonpolynomial activation function can approximate any function. Neural Netw 6(6):861\u2013867","journal-title":"Neural Netw"},{"issue":"3","key":"9918_CR28","doi-asserted-by":"publisher","first-page":"218","DOI":"10.1038\/s42256-021-00302-5","volume":"3","author":"L Lu","year":"2021","unstructured":"Lu L, Jin P, Pang G, Zhang Z, Karniadakis GE (2021) Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators. Nat Mach Intell 3(3):218\u2013229","journal-title":"Nat Mach Intell"},{"issue":"1","key":"9918_CR29","doi-asserted-by":"publisher","first-page":"369","DOI":"10.1007\/s00365-021-09549-y","volume":"55","author":"WEC Ma","year":"2022","unstructured":"Ma WEC, Wu L (2022) The Barron space and the flow-induced function spaces for neural network models. 
Constr Approx 55(1):369\u2013406","journal-title":"Constr Approx"},{"key":"9918_CR30","unstructured":"Ma WEC, Wojtowytsch S, Wu L (2020) Towards a mathematical understanding of neural network-based machine learning: what we know and what we don\u2019t. arXiv:2009.10713v3"},{"key":"9918_CR31","doi-asserted-by":"publisher","first-page":"981","DOI":"10.1093\/imanum\/drab032","volume":"42","author":"S Mishra","year":"2021","unstructured":"Mishra S, Molinaro R (2021) Estimates on the generalization error of physics-informed neural networks for approximating a class of inverse problems for PDEs. IMA J Numer Anal 42:981\u20131022","journal-title":"IMA J Numer Anal"},{"key":"9918_CR32","unstructured":"Neufeld A, Schmocker P (2023) Universal approximation property of random neural networks. arXiv preprint arXiv:2312.08410"},{"issue":"2","key":"9918_CR33","doi-asserted-by":"publisher","first-page":"246","DOI":"10.1162\/neco.1991.3.2.246","volume":"3","author":"J Park","year":"1991","unstructured":"Park J, Sandberg IW (1991) Universal approximation using radial-basis-function networks. Neural Comput 3(2):246\u2013257","journal-title":"Neural Comput"},{"key":"9918_CR34","doi-asserted-by":"publisher","first-page":"143","DOI":"10.1017\/S0962492900002919","volume":"8","author":"A Pinkus","year":"1999","unstructured":"Pinkus A (1999) Approximation theory of the MLP model. Acta Numer 8:143\u2013195","journal-title":"Acta Numer"},{"key":"9918_CR35","doi-asserted-by":"publisher","DOI":"10.1017\/CBO9781316408124","volume-title":"Ridge functions","author":"A Pinkus","year":"2015","unstructured":"Pinkus A (2015) Ridge functions, vol 205. Cambridge University Press, Cambridge"},{"issue":"2","key":"9918_CR36","doi-asserted-by":"publisher","first-page":"308","DOI":"10.1137\/090774707","volume":"53","author":"RB Platte","year":"2011","unstructured":"Platte RB, Trefethen LN, Kuijlaars ABJ (2011) Impossibility of fast stable approximation of analytic functions from equispaced samples. 
SIAM Rev 53(2):308\u2013318","journal-title":"SIAM Rev"},{"key":"9918_CR37","doi-asserted-by":"publisher","first-page":"826","DOI":"10.1016\/j.neucom.2015.11.009","volume":"175","author":"B Qu","year":"2016","unstructured":"Qu B, Lang BF, Liang JJ, Qin AK, Crisalle OD (2016) Two-hidden-layer extreme learning machine for regression and classification. Neurocomputing 175:826\u2013834","journal-title":"Neurocomputing"},{"key":"9918_CR38","doi-asserted-by":"crossref","unstructured":"Siegel JW, Xu J (2020) Approximation rates for neural networks with general activation functions. Neural Netw 128:313\u2013321","DOI":"10.1016\/j.neunet.2020.05.019"},{"key":"9918_CR39","doi-asserted-by":"crossref","unstructured":"Siegel JW, Xu J (2022) High-order approximation rates for shallow neural networks with cosine and ReLU$$^k$$ activation functions. Appl Comput Harmon Anal 58:1\u201326","DOI":"10.1016\/j.acha.2021.12.005"},{"issue":"1","key":"9918_CR40","doi-asserted-by":"publisher","first-page":"67","DOI":"10.1137\/060659831","volume":"50","author":"LN Trefethen","year":"2008","unstructured":"Trefethen LN (2008) Is Gauss quadrature better than Clenshaw\u2013Curtis? SIAM Rev 50(1):67\u201387","journal-title":"SIAM Rev"},{"key":"9918_CR41","doi-asserted-by":"crossref","unstructured":"Trefethen LN (2019) Approximation theory and approximation practice, extended edition. SIAM","DOI":"10.1137\/1.9781611975949"},{"key":"9918_CR42","unstructured":"Vidal R, Bruna J, Giryes R, Soatto S (2017) Mathematics of deep learning. arXiv preprint arXiv:1712.04741"},{"issue":"16","key":"9918_CR43","doi-asserted-by":"publisher","first-page":"2483","DOI":"10.1016\/j.neucom.2010.11.030","volume":"74","author":"Y Wang","year":"2011","unstructured":"Wang Y, Cao F, Yuan Y (2011) A study on effectiveness of extreme learning machine. 
Neurocomputing 74(16):2483\u20132490","journal-title":"Neurocomputing"},{"issue":"29","key":"9918_CR44","doi-asserted-by":"publisher","first-page":"41611","DOI":"10.1007\/s11042-021-11007-7","volume":"81","author":"J Wang","year":"2022","unstructured":"Wang J, Lu S, Wang S-H, Zhang Y-D (2022) A review on extreme learning machine. Multimedia Tools Appl 81(29):41611\u201341660","journal-title":"Multimedia Tools Appl"},{"issue":"16","key":"9918_CR45","doi-asserted-by":"publisher","first-page":"2475","DOI":"10.1016\/j.neucom.2010.12.037","volume":"74","author":"Y Yuan","year":"2011","unstructured":"Yuan Y, Wang Y, Cao F (2011) Optimization approximation solution for regression problem based on extreme learning machine. Neurocomputing 74(16):2475\u20132482","journal-title":"Neurocomputing"}],"container-title":["Soft Computing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s00500-024-09918-2.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s00500-024-09918-2\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s00500-024-09918-2.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,10,23]],"date-time":"2024-10-23T01:07:13Z","timestamp":1729645633000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s00500-024-09918-2"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,7,29]]},"references-count":45,"journal-issue":{"issue":"20","published-print":{"date-parts":[[2024,10]]}},"alternative-id":["9918"],"URL":"https:\/\/doi.org\/10.1007\/s00500-024-09918-2","relation":{},"ISSN":["1432-7643","1433-7479"],"issn-type":[{"value":"1432-7643","type":"print"},{"value":"1433-7479","ty
pe":"electronic"}],"subject":[],"published":{"date-parts":[[2024,7,29]]},"assertion":[{"value":"7 May 2024","order":1,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"29 July 2024","order":2,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}},{"value":"This article does not contain any studies with human participants or animals performed by any of the authors.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethical approval"}}]}}