{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T00:36:53Z","timestamp":1760143013041,"version":"build-2065373602"},"reference-count":42,"publisher":"MDPI AG","issue":"1","license":[{"start":{"date-parts":[[2024,1,22]],"date-time":"2024-01-22T00:00:00Z","timestamp":1705881600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Algorithms"],
"abstract":"<jats:p>One of the main disadvantages of the traditional mean square error (MSE)-based constructive networks is their poor performance in the presence of non-Gaussian noises. In this paper, we propose a new incremental constructive network based on the correntropy objective function (correntropy-based constructive neural network (C2N2)), which is robust to non-Gaussian noises. In the proposed learning method, input and output side optimizations are separated. It is proved theoretically that the new hidden node, which is obtained from the input side optimization problem, is not orthogonal to the residual error function. Regarding this fact, it is proved that the correntropy of the residual error converges to its optimum value. During the training process, the weighted linear least square problem is iteratively applied to update the parameters of the newly added node. Experiments on both synthetic and benchmark datasets demonstrate the robustness of the proposed method in comparison with the MSE-based constructive network and the radial basis function (RBF) network. Moreover, the proposed method outperforms other robust learning methods including the cascade correntropy network (CCOEN), Multi-Layer Perceptron based on the Minimum Error Entropy objective function (MLPMEE), Multi-Layer Perceptron based on the correntropy objective function (MLPMCC) and the Robust Least Square Support Vector Machine (RLS-SVM).<\/jats:p>",
"DOI":"10.3390\/a17010049","type":"journal-article","created":{"date-parts":[[2024,1,22]],"date-time":"2024-01-22T12:01:13Z","timestamp":1705924873000},"page":"49","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":1,"title":["Correntropy-Based Constructive One Hidden Layer Neural Network"],"prefix":"10.3390","volume":"17",
"author":[{"given":"Mojtaba","family":"Nayyeri","sequence":"first","affiliation":[{"name":"Institute for Artificial Intelligence, University of Stuttgart, 70569 Stuttgart, Germany"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2423-6715","authenticated-orcid":false,"given":"Modjtaba","family":"Rouhani","sequence":"additional","affiliation":[{"name":"Computer Engineering Department, Ferdowsi University of Mashhad, Mashhad 1696700, Iran"}]},{"given":"Hadi Sadoghi","family":"Yazdi","sequence":"additional","affiliation":[{"name":"Computer Engineering Department, Ferdowsi University of Mashhad, Mashhad 1696700, Iran"}]},{"given":"Marko M.","family":"M\u00e4kel\u00e4","sequence":"additional","affiliation":[{"name":"Department of Mathematics and Statistics, University of Turku, 20014 Turku, Finland"}]},{"given":"Alaleh","family":"Maskooki","sequence":"additional","affiliation":[{"name":"Department of Mathematics and Statistics, University of Turku, 20014 Turku, Finland"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-6409-9423","authenticated-orcid":false,"given":"Yury","family":"Nikulin","sequence":"additional","affiliation":[{"name":"Department of Mathematics and Statistics, University of Turku, 20014 Turku, Finland"}]}],
"member":"1968","published-online":{"date-parts":[[2024,1,22]]},"reference":[
{"key":"ref_1","doi-asserted-by":"crossref","first-page":"1780","DOI":"10.1109\/TSP.2002.1011217","article-title":"An error-entropy minimization algorithm for supervised training of nonlinear adaptive systems","volume":"50","author":"Erdogmus","year":"2002","journal-title":"Signal Process. IEEE Trans."},
{"key":"ref_2","unstructured":"Fahlman, S.E., and Lebiere, C. (1989, January 27\u201330). The cascade-correlation learning architecture. Proceedings of the Advances in Neural Information Processing Systems 2, NIPS Conference, Denver, CO, USA."},
{"key":"ref_3","doi-asserted-by":"crossref","first-page":"1131","DOI":"10.1109\/72.623214","article-title":"Objective functions for training new hidden units in constructive neural networks","volume":"8","author":"Kwok","year":"1997","journal-title":"Neural Netw. IEEE Trans."},
{"key":"ref_4","doi-asserted-by":"crossref","first-page":"2629","DOI":"10.1109\/TCSI.2012.2189060","article-title":"Orthogonal least squares algorithm for training cascade neural networks","volume":"59","author":"Huang","year":"2012","journal-title":"Circuits Syst. Regul. Pap. IEEE Trans."},
{"key":"ref_5","doi-asserted-by":"crossref","first-page":"589","DOI":"10.1016\/j.neunet.2004.02.002","article-title":"New training strategies for constructive neural networks with application to regression problems","volume":"17","author":"Ma","year":"2004","journal-title":"Neural Netw."},
{"key":"ref_6","doi-asserted-by":"crossref","first-page":"821","DOI":"10.1109\/TNN.2005.851786","article-title":"Constructive feedforward neural networks using Hermite polynomial activation functions","volume":"16","author":"Ma","year":"2005","journal-title":"Neural Netw. IEEE Trans."},
{"key":"ref_7","doi-asserted-by":"crossref","first-page":"740","DOI":"10.1109\/72.248452","article-title":"Pruning algorithms-a survey","volume":"4","author":"Reed","year":"1993","journal-title":"Neural Netw. IEEE Trans."},
{"key":"ref_8","doi-asserted-by":"crossref","first-page":"519","DOI":"10.1109\/72.572092","article-title":"An iterative pruning algorithm for feedforward neural networks","volume":"8","author":"Castellano","year":"1997","journal-title":"Neural Netw. IEEE Trans."},
{"key":"ref_9","doi-asserted-by":"crossref","first-page":"1386","DOI":"10.1109\/72.963775","article-title":"A new pruning heuristic based on variance analysis of sensitivity information","volume":"12","author":"Engelbrecht","year":"2001","journal-title":"Neural Netw. IEEE Trans."},
{"key":"ref_10","doi-asserted-by":"crossref","first-page":"825","DOI":"10.1016\/j.neucom.2005.04.010","article-title":"Hidden neuron pruning of multilayer perceptrons using a quantified sensitivity measure","volume":"69","author":"Zeng","year":"2006","journal-title":"Neurocomputing"},
{"key":"ref_11","doi-asserted-by":"crossref","first-page":"291","DOI":"10.1109\/12.210172","article-title":"Growing and pruning neural tree networks","volume":"42","author":"Sakar","year":"1993","journal-title":"Comput. IEEE Trans."},
{"key":"ref_12","doi-asserted-by":"crossref","first-page":"57","DOI":"10.1109\/TNN.2004.836241","article-title":"A generalized growing and pruning RBF (GGAP-RBF) neural network for function approximation","volume":"16","author":"Huang","year":"2005","journal-title":"Neural Netw. IEEE Trans."},
{"key":"ref_13","doi-asserted-by":"crossref","first-page":"2284","DOI":"10.1109\/TSMCB.2004.834428","article-title":"An efficient sequential learning algorithm for growing and pruning RBF (GAP-RBF) networks","volume":"34","author":"Huang","year":"2004","journal-title":"Syst. Man. Cybern. Part Cybern. IEEE Trans."},
{"key":"ref_14","doi-asserted-by":"crossref","first-page":"1659","DOI":"10.1109\/TNNLS.2014.2350957","article-title":"A Hybrid Constructive Algorithm for Single-Layer Feedforward Networks Learning","volume":"26","author":"Wu","year":"2014","journal-title":"IEEE Trans. Neural Netw. Learn. Syst."},
{"key":"ref_15","doi-asserted-by":"crossref","first-page":"2187","DOI":"10.1109\/TSP.2006.872524","article-title":"Generalized correlation function: Definition, properties, and application to blind equalization","volume":"54","author":"Pokharel","year":"2006","journal-title":"Signal Process. IEEE Trans."},
{"key":"ref_16","doi-asserted-by":"crossref","first-page":"5286","DOI":"10.1109\/TSP.2007.896065","article-title":"Correntropy: Properties and applications in non-Gaussian signal processing","volume":"55","author":"Liu","year":"2007","journal-title":"Signal Process. IEEE Trans."},
{"key":"ref_17","doi-asserted-by":"crossref","first-page":"1657","DOI":"10.1109\/TPWRS.2009.2030291","article-title":"Entropy and correntropy against minimum square error in offline and online three-day ahead wind power forecasting","volume":"24","author":"Bessa","year":"2009","journal-title":"Power Syst. IEEE Trans."},
{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Singh, A., and Principe, J.C. (2009, January 14\u201319). Using correntropy as a cost function in linear adaptive filters. Proceedings of the 2009 International Joint Conference on Neural Networks, Atlanta, GA, USA.","DOI":"10.1109\/IJCNN.2009.5178823"},
{"key":"ref_19","doi-asserted-by":"crossref","first-page":"1385","DOI":"10.1109\/LSP.2014.2337899","article-title":"Convex Combination of Adaptive Filters under the Maximum Correntropy Criterion in Impulsive Interference","volume":"21","author":"Shi","year":"2014","journal-title":"Signal Process. Lett. IEEE"},
{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Zhao, S., Chen, B., and Principe, J.C. (August, January 31). Kernel adaptive filtering with maximum correntropy criterion. Proceedings of the 2011 International Joint Conference on Neural Networks, San Jose, CA, USA.","DOI":"10.1109\/IJCNN.2011.6033473"},
{"key":"ref_21","doi-asserted-by":"crossref","first-page":"7149","DOI":"10.3390\/e17107149","article-title":"Robust Hammerstein Adaptive Filtering under Maximum Correntropy Criterion","volume":"17","author":"Wu","year":"2015","journal-title":"Entropy"},
{"key":"ref_22","doi-asserted-by":"crossref","first-page":"1723","DOI":"10.1109\/LSP.2015.2428713","article-title":"Convergence of a fixed-point algorithm under Maximum Correntropy Criterion","volume":"22","author":"Chen","year":"2015","journal-title":"Signal Process. Lett. IEEE"},
{"key":"ref_23","doi-asserted-by":"crossref","first-page":"880","DOI":"10.1109\/LSP.2014.2319308","article-title":"Steady-state mean-square error analysis for adaptive filtering under the maximum correntropy criterion","volume":"21","author":"Chen","year":"2014","journal-title":"Signal Process. Lett. IEEE"},
{"key":"ref_24","doi-asserted-by":"crossref","first-page":"1019","DOI":"10.1007\/s00521-015-1916-x","article-title":"Efficient and robust deep learning with Correntropy-induced loss function","volume":"27","author":"Chen","year":"2015","journal-title":"Neural Comput. Appl."},
{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Singh, A., and Principe, J.C. (2010, January 18\u201323). A loss function for classification based on a robust similarity metric. Proceedings of the 2010 International Joint Conference on Neural Networks (IJCNN), Barcelona, Spain.","DOI":"10.1109\/IJCNN.2010.5596485"},
{"key":"ref_26","first-page":"993","article-title":"Learning with the maximum correntropy criterion induced losses for regression","volume":"16","author":"Feng","year":"2015","journal-title":"J. Mach. Learn. Res."},
{"key":"ref_27","doi-asserted-by":"crossref","first-page":"491","DOI":"10.1109\/LSP.2012.2204435","article-title":"Maximum correntropy estimation is a smoothed MAP estimation","volume":"19","author":"Chen","year":"2012","journal-title":"Signal Process. Lett. IEEE"},
{"key":"ref_28","doi-asserted-by":"crossref","first-page":"4515","DOI":"10.1109\/TNNLS.2017.2753725","article-title":"Universal Approximation by Using the Correntropy Objective Function","volume":"29","author":"Nayyeri","year":"2018","journal-title":"IEEE Trans. Neural Netw. Learn. Syst."},
{"key":"ref_29","unstructured":"Athreya, K.B., and Lahiri, S.N. (2006). Measure Theory and Probability Theory, Springer Science & Business Media."},
{"key":"ref_30","doi-asserted-by":"crossref","first-page":"707","DOI":"10.1007\/s00440-014-0583-7","article-title":"On the rate of convergence in Wasserstein distance of the empirical measure","volume":"162","author":"Fournier","year":"2015","journal-title":"Probab. Theory Relat. Fields"},
{"key":"ref_31","doi-asserted-by":"crossref","first-page":"861","DOI":"10.1016\/S0893-6080(05)80131-5","article-title":"Multilayer feedforward networks with a nonpolynomial activation function can approximate any function","volume":"6","author":"Leshno","year":"1993","journal-title":"Neural Netw."},
{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Yuan, X.-T., and Hu, B.-G. (2009, January 14\u201318). Robust feature extraction via information theoretic learning. Proceedings of the 26th Annual International Conference on Machine Learning, Montreal, QC, Canada.","DOI":"10.1145\/1553374.1553526"},
{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Klenke, A. (2013). Probability Theory: A Comprehensive Course, Springer Science & Business Media.","DOI":"10.1007\/978-1-4471-5361-0"},
{"key":"ref_34","unstructured":"Rudin, W. (1964). Principles of Mathematical Analysis, McGraw-Hill."},
{"key":"ref_35","doi-asserted-by":"crossref","first-page":"41","DOI":"10.1016\/j.neucom.2014.03.037","article-title":"A robust least squares support vector machine for regression and classification with noise","volume":"140","author":"Yang","year":"2014","journal-title":"Neurocomputing"},
{"key":"ref_36","unstructured":"Newman, D., Hettich, S., Blake, C., Merz, C., and Aha, D. (1998). UCI Repository of Machine Learning Databases, Department of Information and Computer Science, University of California. Available online: https:\/\/archive.ics.uci.edu\/."},
{"key":"ref_37","unstructured":"Meyer, M., and Vlachos, P. (2023, November 29). Statlib. Available online: https:\/\/lib.stat.cmu.edu\/datasets\/."},
{"key":"ref_38","doi-asserted-by":"crossref","first-page":"1902","DOI":"10.1016\/j.sigpro.2009.03.027","article-title":"A low complexity robust detector in impulsive noise","volume":"89","author":"Pokharel","year":"2009","journal-title":"Signal Process."},
{"key":"ref_39","first-page":"1","article-title":"A Statistical Learning Approach to Modal Regression","volume":"21","author":"Feng","year":"2020","journal-title":"J. Mach. Learn. Res."},
{"key":"ref_40","doi-asserted-by":"crossref","first-page":"157","DOI":"10.1162\/neco_a_01334","article-title":"New Insights into Learning with Correntropy-Based Regression","volume":"33","author":"Feng","year":"2021","journal-title":"Neural Comput."},
{"key":"ref_41","doi-asserted-by":"crossref","first-page":"2485","DOI":"10.1007\/s11071-021-06759-8","article-title":"Cross-sample entropy estimation for time series analysis: A nonparametric approach","volume":"105","year":"2021","journal-title":"Nonlinear Dyn."},
{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Bagirov, A., Karmitsa, N., and M\u00e4kel\u00e4, M.M. (2014). Introduction to Nonsmooth Optimization: Theory, Practice and Software, Springer International Publishing.","DOI":"10.1007\/978-3-319-08114-4"}
],"container-title":["Algorithms"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1999-4893\/17\/1\/49\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T13:47:27Z","timestamp":1760104047000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1999-4893\/17\/1\/49"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,1,22]]},"references-count":42,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2024,1]]}},"alternative-id":["a17010049"],"URL":"https:\/\/doi.org\/10.3390\/a17010049","relation":{},"ISSN":["1999-4893"],"issn-type":[{"type":"electronic","value":"1999-4893"}],"subject":[],"published":{"date-parts":[[2024,1,22]]}}}