{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,2,21]],"date-time":"2025-02-21T07:24:07Z","timestamp":1740122647414,"version":"3.37.3"},"reference-count":68,"publisher":"Springer Science and Business Media LLC","license":[{"start":{"date-parts":[[2022,7,2]],"date-time":"2022-07-02T00:00:00Z","timestamp":1656720000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2022,7,2]],"date-time":"2022-07-02T00:00:00Z","timestamp":1656720000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100009398","name":"Medizinische Fakult\u00e4t, RWTH Aachen University","doi-asserted-by":"publisher","award":["FP7\/2007-2013"],"award-info":[{"award-number":["FP7\/2007-2013"]}],"id":[{"id":"10.13039\/501100009398","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100007210","name":"RWTH Aachen University","doi-asserted-by":"crossref","id":[{"id":"10.13039\/501100007210","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Appl Intell"],"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Artificial neural networks are referred to as universal<jats:italic>approximators<\/jats:italic>due to their inherent ability to reconstruct complex linear and nonlinear output maps conceived as input-output relationships from data sets. This can be done by reducing large networks via regularization in order to establish compact models containing fewer parameters aimed at describing vital dependencies in data sets. In situations where the data sets contain non-informative input features, devising a continuous, optimal input feature selection technique can lead to improved prediction or classification. 
We propose a continuous input selection technique through a dimensional reduction mechanism using a \u2018structured\u2019 <jats:italic>l<\/jats:italic><jats:sub>2<\/jats:sub>\u2212 norm regularization. The implementation is done by identifying the most informative feature subsets from a given data set via an adaptive training mechanism. The adaptation involves introducing a novel, modified gradient approach during training to deal with the <jats:italic>non-differentiability<\/jats:italic> associated with the gradient of the structured norm penalty. When the method is applied to process data sets, results indicate that the most informative inputs of artificial neural networks can be selected using a structured <jats:italic>l<\/jats:italic><jats:sub>2<\/jats:sub>\u2212 norm penalization.<\/jats:p>","DOI":"10.1007\/s10489-022-03539-8","type":"journal-article","created":{"date-parts":[[2022,7,2]],"date-time":"2022-07-02T21:32:16Z","timestamp":1656797536000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":1,"title":["Neural network input feature selection using structured l2 \u2212 norm penalization"],"prefix":"10.1007","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-8587-4000","authenticated-orcid":false,"given":"Nathaniel","family":"Egwu","sequence":"first","affiliation":[]},{"given":"Thomas","family":"Mrziglod","sequence":"additional","affiliation":[]},{"given":"Andreas","family":"Schuppert","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2022,7,2]]},"reference":[{"issue":"3","key":"3539_CR1","doi-asserted-by":"publisher","first-page":"1110","DOI":"10.1109\/TNNLS.2020.2980383","volume":"32","author":"J Wang","year":"2021","unstructured":"Wang J, Zhang H, Wang J, Pu Y, Pal YR (2021) Feature selection using a neural network with group lasso regularization and controlled redundancy. 
IEEE Trans Neural Netw Learn Syst 32(3):1110\u20131123","journal-title":"IEEE Trans Neural Netw Learn Syst"},{"issue":"1","key":"3539_CR2","doi-asserted-by":"publisher","first-page":"16","DOI":"10.1016\/j.compeleceng.2013.11.024","volume":"40","author":"G Chandrashekar","year":"2014","unstructured":"Chandrashekar G, Sahin F (2014) A survey on feature selection methods. Comput Electr Eng 40(1):16\u201328","journal-title":"Comput Electr Eng"},{"key":"3539_CR3","doi-asserted-by":"crossref","unstructured":"Dhal P, Azad C (2021) A comprehensive survey on feature selection in the various fields of machine learning. Appl Intell","DOI":"10.1007\/s10489-021-02550-9"},{"issue":"11","key":"3539_CR4","doi-asserted-by":"publisher","first-page":"1323","DOI":"10.1016\/S0167-8655(02)00081-8","volume":"23","author":"A Verikas","year":"2002","unstructured":"Verikas A, Bacauskiene M (2002) Feature selection with neural networks. Pattern Recogn Lett 23(11):1323\u20131335","journal-title":"Pattern Recogn Lett"},{"key":"3539_CR5","doi-asserted-by":"publisher","first-page":"436","DOI":"10.1038\/nature14539","volume":"521","author":"Y LeCun","year":"2015","unstructured":"LeCun Y, Bengio Y, Hinton G (2015) Deep learning. Nature 521:436\u2013444","journal-title":"Nature"},{"issue":"10","key":"3539_CR6","doi-asserted-by":"publisher","first-page":"78","DOI":"10.1145\/2347736.2347755","volume":"55","author":"P Domingos","year":"2012","unstructured":"Domingos P (2012) A few useful things to know about machine learning. Commun ACM 55(10):78\u201387","journal-title":"Commun ACM"},{"key":"3539_CR7","doi-asserted-by":"publisher","first-page":"85","DOI":"10.1016\/j.neunet.2014.09.003","volume":"61","author":"J Schmidhuber","year":"2015","unstructured":"Schmidhuber J (2015) Deep learning in neural networks: An overview. 
Neural Netw 61:85\u2013117","journal-title":"Neural Netw"},{"key":"3539_CR8","doi-asserted-by":"publisher","first-page":"533","DOI":"10.1038\/323533a0","volume":"323","author":"D Rumelhart","year":"1986","unstructured":"Rumelhart D, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. Nature 323:533\u2013536","journal-title":"Nature"},{"issue":"3","key":"3539_CR9","doi-asserted-by":"publisher","first-page":"930","DOI":"10.1109\/18.256500","volume":"39","author":"AR Barron","year":"1993","unstructured":"Barron AR (1993) Universal approximation bounds for superpositions of a sigmoidal function. IEEE Trans Inf Theory 39(3):930\u2013945","journal-title":"IEEE Trans Inf Theory"},{"issue":"5","key":"3539_CR10","doi-asserted-by":"publisher","first-page":"359","DOI":"10.1016\/0893-6080(89)90020-8","volume":"2","author":"K Hornik","year":"1989","unstructured":"Hornik K, Stinchcombe M, White H (1989) Multilayer feedforward neural networks are universal approximators. Neural Netw 2(5):359\u2013366","journal-title":"Neural Netw"},{"issue":"3","key":"3539_CR11","doi-asserted-by":"publisher","first-page":"501","DOI":"10.1016\/0893-6080(92)90012-8","volume":"5","author":"V Kurkova","year":"1992","unstructured":"Kurkova V (1992) Kolmogorov\u2019s theorem and multilayer neural networks. Neural Netw 5 (3):501\u2013506","journal-title":"Neural Netw"},{"key":"3539_CR12","doi-asserted-by":"publisher","first-page":"245","DOI":"10.1016\/S0004-3702(97)00063-5","volume":"97","author":"AL Blum","year":"1997","unstructured":"Blum AL, Langley P (1997) Selection of relevant features and examples in machine learning. Artif Intell 97:245\u2013271","journal-title":"Artif Intell"},{"key":"3539_CR13","doi-asserted-by":"crossref","unstructured":"Nguyen BH, Xue B, Zhang M (2020) A survey on swarm intelligence approaches to feature selection in data mining. 
Swarm and Evolutionary Computation, 54(100663)","DOI":"10.1016\/j.swevo.2020.100663"},{"key":"3539_CR14","doi-asserted-by":"crossref","unstructured":"Chen CW, Tsai YH, Chang FR, Lin WC (2020) Ensemble feature selection in medical datasets: Combining filter, wrapper, and embedded feature selection results. Expert Syst","DOI":"10.1111\/exsy.12553"},{"key":"3539_CR15","doi-asserted-by":"publisher","first-page":"385","DOI":"10.1260\/1748-3018.6.3.385","volume":"6","author":"E Blessie","year":"2012","unstructured":"Blessie E, Eswaramurthy K (2012) Sigmis: A feature selection algorithm using correlation based method. J Algorithm Comput Technol 6:385\u2013394","journal-title":"J Algorithm Comput Technol"},{"key":"3539_CR16","doi-asserted-by":"publisher","first-page":"389","DOI":"10.1023\/A:1012487302797","volume":"46","author":"I Guyon","year":"2002","unstructured":"Guyon I, Weston J, Barnhill S, Vapnik V (2002) Gene selection for cancer classification using support vector machines. Mach Learn 46:389\u2013422","journal-title":"Mach Learn"},{"key":"3539_CR17","doi-asserted-by":"publisher","first-page":"273","DOI":"10.1016\/S0004-3702(97)00043-X","volume":"97","author":"R Kohavi","year":"1997","unstructured":"Kohavi R, John GH (1997) Wrappers for feature subset selection. Artif Intell 97:273\u2013324","journal-title":"Artif Intell"},{"key":"3539_CR18","unstructured":"Xiaoping L, Yadi W, Ruben R (2020) A survey on sparse learning models for feature selection. IEEE Transactions on Cybernetics, pp 1\u201319"},{"key":"3539_CR19","doi-asserted-by":"crossref","unstructured":"Got A, Moussaoui A, Zouache D (2021) Hybrid filter-wrapper feature selection using whale optimization algorithm: A multi-objective approach. Expert Syst Appl, 183","DOI":"10.1016\/j.eswa.2021.115312"},{"key":"3539_CR20","doi-asserted-by":"crossref","unstructured":"Kira K, Rendell LA (1992) A practical approach to feature selection. 
Machine Learning Proceedings 1992, pp 249\u2013256","DOI":"10.1016\/B978-1-55860-247-2.50037-1"},{"key":"3539_CR21","doi-asserted-by":"crossref","unstructured":"Kononenko I (1994) Estimating attributes: Analysis and extensions of relief. Machine Learning: ECML-94, pp 171\u2013182","DOI":"10.1007\/3-540-57868-4_57"},{"key":"3539_CR22","first-page":"77","volume":"27","author":"H Peng","year":"2005","unstructured":"Peng H, Long F, Ding C (2005) Feature selection based on mutual information: Criteria of max-dependency, max-relevance, and min-redundancy. IEEE Trans Pattern Anal Mach Intell 27:77\u201393","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"key":"3539_CR23","doi-asserted-by":"publisher","first-page":"77","DOI":"10.1023\/B:AMAI.0000018580.96245.c6","volume":"41","author":"LE Raileanu","year":"2004","unstructured":"Raileanu LE, Stoffel K (2004) Theoretical comparison between the gini index and information gain criteria. Ann Math Artif Intell 41:77\u201393","journal-title":"Ann Math Artif Intell"},{"issue":"1","key":"3539_CR24","doi-asserted-by":"publisher","first-page":"81","DOI":"10.1016\/j.neucom.2017.02.029","volume":"241","author":"S Scardapane","year":"2017","unstructured":"Scardapane S, Hussain A, Uncini A (2017) Group sparse regularization for deep neural networks. Neurocomputing 241(1):81\u201389","journal-title":"Neurocomputing"},{"issue":"258","key":"3539_CR25","first-page":"241","volume":"24","author":"D Kong","year":"2014","unstructured":"Kong D, Fujimaki R, Liu J, Nie F, Ding C (2014) Exclusive feature learning on arbitrary structures via l1,2 - norm. Advan Neural Process Syst (NIPS) 24(258):241\u2013258","journal-title":"Advan Neural Process Syst (NIPS)"},{"key":"3539_CR26","unstructured":"Labach A, Salehinejad H, Valaee S (2019) Survey of dropout methods for deep neural networks. 
CoRR abs\/1904.13310"},{"key":"3539_CR27","doi-asserted-by":"crossref","unstructured":"May R, Dandy G, Maier H (2011) Review of input variable selection methods for artificial neural networks. Methodological Advances and Biomedical Applications","DOI":"10.5772\/16004"},{"key":"3539_CR28","first-page":"1929","volume":"15","author":"N Srivastava","year":"2014","unstructured":"Srivastava N, Hinton G, Krizhevsky A, Sutskever I, Salakhutdinov R (2014) Dropout: a simple way to prevent neural networks from overfitting. J Mach Learn Res 15:1929\u20131958","journal-title":"J Mach Learn Res"},{"key":"3539_CR29","unstructured":"Stalin S, Sreenivas TV (2002) Vectorized backpropagation and automatic pruning for MLP network optimization. IEEE International Conference on Neural Networks"},{"issue":"3","key":"3539_CR30","doi-asserted-by":"publisher","first-page":"241","DOI":"10.1007\/s11063-011-9196-7","volume":"34","author":"MA Gethsiyal","year":"2011","unstructured":"Gethsiyal MA, Kathirvalavakumer T (2011) A novel pruning algorithm for optimizing feedforward neural networks of classification problems. Neural Process Lett 34(3):241\u2013258","journal-title":"Neural Process Lett"},{"key":"3539_CR31","first-page":"265","volume":"2","author":"Z Hui","year":"2006","unstructured":"Hui Z, Hastie T, Tibshirani R (2006) Sparse principal component analysis. J Comput Graph Stat 2:265\u2013286","journal-title":"J Comput Graph Stat"},{"key":"3539_CR32","doi-asserted-by":"publisher","first-page":"77","DOI":"10.1109\/TEVC.2012.2185847","volume":"1","author":"DK Saxena","year":"2013","unstructured":"Saxena DK, Duro JA, Tiwari A, Deb K, Zhang Q (2013) Objective reduction in many-objective optimization: linear and nonlinear algorithms. IEEE Trans Evolut Comput 1:77\u201399","journal-title":"IEEE Trans Evolut Comput"},{"key":"3539_CR33","unstructured":"Ioannou Y, Robertson D, Shotton J, Cipolla R, Criminisi A (2015) Training CNNs with low\u2013rank filters for efficient image classification. 
CoRR abs\/1511.06744"},{"key":"3539_CR34","doi-asserted-by":"publisher","first-page":"696","DOI":"10.1162\/neco.1994.6.4.696","volume":"6","author":"K Doya","year":"1994","unstructured":"Doya K, Selverston A (1994) Dimension reduction of biological neuron models by artificial neural networks. Neural Comput 6:696\u2013717","journal-title":"Neural Comput"},{"issue":"4","key":"3539_CR35","doi-asserted-by":"publisher","first-page":"450","DOI":"10.1214\/12-STS394","volume":"27","author":"FR Bach","year":"2012","unstructured":"Bach FR, Jenatton R, Mairal J, Obozinski G (2012) Structured sparsity through convex optimization. Stat Sci 27(4):450\u2013468","journal-title":"Stat Sci"},{"issue":"258","key":"3539_CR36","first-page":"2082","volume":"24","author":"W Wen","year":"2016","unstructured":"Wen W, Wu C, Wang Y, Chen Y, Li H (2016) Learning structured sparsity in deep neural networks. Proceedings of the 30th International Conference on Neural Information Processing Systems 24(258):2082\u20132090","journal-title":"Proceedings of the 30th International Conference on Neural Information Processing Systems"},{"issue":"3","key":"3539_CR37","doi-asserted-by":"publisher","first-page":"1095","DOI":"10.1214\/12-AOAS549","volume":"6","author":"S Kim","year":"2012","unstructured":"Kim S, Xing EP (2012) Tree-guided lasso for multi-response regression with structured sparsity, with an application to eQTL mapping. Ann Appl Stat 6(3):1095\u20131117","journal-title":"Ann Appl Stat"},{"issue":"258","key":"3539_CR38","first-page":"59","volume":"2","author":"JA Cruz","year":"2007","unstructured":"Cruz JA, Wishart DS (2007) Application of machine learning in cancer prediction and prognosis. 
Cancer Informat 2(258):59\u201377","journal-title":"Cancer Informat"},{"issue":"258","key":"3539_CR39","doi-asserted-by":"publisher","first-page":"8","DOI":"10.1016\/j.csbj.2014.11.005","volume":"13","author":"K Kourou","year":"2015","unstructured":"Kourou K, Exarchos TP, Exarchos KP, Karamouzis MV, Fotiadis DI (2015) Machine learning in cancer prognosis and prediction. Comput Struct Biotechnol J 13(258):8\u201317","journal-title":"Comput Struct Biotechnol J"},{"issue":"20","key":"3539_CR40","first-page":"80","volume":"33","author":"L Goerlitz","year":"2010","unstructured":"Goerlitz L, Mrziglod T, Loosen R (2010) Topology optimization of artificial neural networks using l1 \u2212 penalization. Proc Work Comput Intell 33(20):80\u201387","journal-title":"Proc Work Comput Intell"},{"issue":"1","key":"3539_CR41","doi-asserted-by":"crossref","first-page":"267","DOI":"10.1111\/j.2517-6161.1996.tb02080.x","volume":"58","author":"R Tibshirani","year":"1996","unstructured":"Tibshirani R (1996) Regression shrinkage and selection via lasso. J Stat Soc B 58(1):267\u2013288","journal-title":"J Stat Soc B"},{"issue":"1\u201312","key":"3539_CR42","doi-asserted-by":"publisher","first-page":"385","DOI":"10.1002\/(SICI)1097-0258(19970228)16:4<385::AID-SIM380>3.0.CO;2-3","volume":"16","author":"R Tibshirani","year":"1997","unstructured":"Tibshirani R (1997) The lasso method for variable selection in the Cox model. Stat Med 16(1\u201312):385\u2013395","journal-title":"Stat Med"},{"issue":"4","key":"3539_CR43","doi-asserted-by":"publisher","first-page":"782","DOI":"10.1198\/106186007X255676","volume":"16","author":"Y Liu","year":"2007","unstructured":"Liu Y, Wu Y (2007) Variable selection via a combination of the l0 and l1 penalties. 
J Comput Graph Stat 16(4):782\u2013798","journal-title":"J Comput Graph Stat"},{"key":"3539_CR44","first-page":"2777","volume":"12","author":"R Jenatton","year":"2011","unstructured":"Jenatton R, Audibert JY, Bach F (2011) Structured variable selection with sparsity-inducing norms. J Mach Learn Res 12:2777\u20132824","journal-title":"J Mach Learn Res"},{"issue":"5","key":"3539_CR45","first-page":"1","volume":"22","author":"I Lemhadri","year":"2021","unstructured":"Lemhadri I, Ruan F, Abraham L, Tibshirani R (2021) LassoNet: A neural network with feature sparsity. J Mach Learn Res 22(5):1\u201329","journal-title":"J Mach Learn Res"},{"key":"3539_CR46","doi-asserted-by":"crossref","unstructured":"Du G, Zhang J, Luo Z, Ma F, Ma L, Li S (2020) Joint imbalanced classification and feature selection for hospital readmissions. Knowledge Based Systems, 200","DOI":"10.1016\/j.knosys.2020.106020"},{"key":"3539_CR47","doi-asserted-by":"publisher","first-page":"101663","DOI":"10.1016\/j.compmedimag.2019.101663","volume":"80","author":"W Shao","year":"2021","unstructured":"Shao W, Peng Y, Zu C, Wang M, Zhang D (2021) Hypergraph based multi-task feature selection for multimodal classification of Alzheimer\u2019s disease. Comput Med Imaging Graph 80:101663","journal-title":"Comput Med Imaging Graph"},{"key":"3539_CR48","doi-asserted-by":"crossref","unstructured":"Amini F, Hu G (2021) A two-layer feature selection method using genetic algorithm and elastic net. Expert Systems With Applications, 166","DOI":"10.1016\/j.eswa.2020.114072"},{"key":"3539_CR49","doi-asserted-by":"crossref","unstructured":"Zhang X, Fan M, Wang D, Zhou P, Tao D (2021) Top-k feature selection framework using robust 0-1 integer programming. 
IEEE Transactions on Neural Networks and Learning Systems, 32(7)","DOI":"10.1109\/TNNLS.2020.3009209"},{"issue":"35","key":"3539_CR50","first-page":"2173","volume":"5","author":"H Zou","year":"2000","unstructured":"Zou H, Hastie T, Tibshirani R (2000) On the degrees of freedom of the lasso. The Annals of Statistics 5(35):2173\u20132192","journal-title":"The Annals of Statistics"},{"key":"3539_CR51","unstructured":"Dehua W, Yang Z, Yi Z (2017) Lightgbm: An effective miRNA classification method in breast cancer patients. Proceedings of the 2017 International Conference on Computational Biology and Bioinformatics, pp 7\u201311"},{"key":"3539_CR52","doi-asserted-by":"crossref","unstructured":"Dreiseitl S, Ohno-Machado L (2002) Logistic regression and artificial neural network classification models: A methodology review. J Biomed Inform, pp 352\u2013359","DOI":"10.1016\/S1532-0464(03)00034-0"},{"key":"3539_CR53","doi-asserted-by":"crossref","unstructured":"Chen T, Guestrin C (2016) Xgboost: A scalable tree boosting system. CoRR, abs\/1603.02754, pp 785\u2013794","DOI":"10.1145\/2939672.2939785"},{"issue":"2","key":"3539_CR54","doi-asserted-by":"publisher","first-page":"431","DOI":"10.1137\/0111030","volume":"11","author":"DW Marquardt","year":"1963","unstructured":"Marquardt DW (1963) An algorithm for least\u2013squares estimation of nonlinear parameters. J Soc Ind Appl Math 11(2):431\u2013441","journal-title":"J Soc Ind Appl Math"},{"key":"3539_CR55","doi-asserted-by":"crossref","unstructured":"Hastie T, Tibshirani R, Friedman J (2001) The elements of statistical learning. Springer Series in Statistics Springer New York Inc","DOI":"10.1007\/978-0-387-21606-5"},{"key":"3539_CR56","doi-asserted-by":"crossref","unstructured":"Kim IY, de Weck OL (2006) Adaptive weighted sum method for multiobjective optimization: A new method for Pareto front generation. 
Struct Multidiscip Optim","DOI":"10.1007\/s00158-005-0557-6"},{"key":"3539_CR57","unstructured":"Dheeru D, Casey G (2017) UCI machine learning repository. University of California, Irvine, School of Information and Computer Sciences. http:\/\/archive.ics.uci.edu\/ml"},{"key":"3539_CR58","doi-asserted-by":"publisher","first-page":"181","DOI":"10.1109\/TNSRE.2013.2293575","volume":"22","author":"A Tsanas","year":"2014","unstructured":"Tsanas A, Little MA, Fox C, Ramig LO (2014) Objective automatic assessment of rehabilitative speech treatment in Parkinson\u2019s disease. IEEE Trans Neural Syst Rehabil Eng 22:181\u2013190","journal-title":"IEEE Trans Neural Syst Rehabil Eng"},{"issue":"Oct","key":"3539_CR59","first-page":"2825","volume":"12","author":"F Pedregosa","year":"2011","unstructured":"Pedregosa F, Varoquaux G, Gramfort A, Michel V, Thirion B, Grisel O, Blondel M, Prettenhofer P, Weiss R, Dubourg V et al (2011) Scikit-learn: Machine learning in Python. J Mach Learn Res 12(Oct):2825\u20132830","journal-title":"J Mach Learn Res"},{"key":"3539_CR60","unstructured":"Van Rossum G, Drake Jr FL (1995) Python tutorial. Centrum voor Wiskunde en Informatica Amsterdam, The Netherlands"},{"key":"3539_CR61","unstructured":"Abadi M, Agarwal A, Barham P, Brevdo E, Chen Z, Citro C, Corrado GS, Davis A, Dean J, Devin M, Ghemawat S, Goodfellow I, Harp A, Irving G, Isard M, Jia Y, Jozefowicz R, Kaiser L, Kudlur M, Levenberg J, Man\u00e9 D, Monga R, Moore S, Murray D, Olah C, Schuster M, Shlens J, Steiner B, Sutskever I, Talwar K, Tucker P, Vanhoucke V, Vasudevan V, Vi\u00e9gas F, Vinyals O, Warden P, Wattenberg M, Wicke M, Yu Y, Zheng X (2015) Tensorflow: Large-scale machine learning on heterogeneous systems. TensorFlow"},{"key":"3539_CR62","first-page":"281","volume":"13","author":"J Bergstra","year":"2012","unstructured":"Bergstra J, Bengio Y (2012) Random search for hyper-parameter optimization. 
J Mach Learn Res 13:281\u2013305","journal-title":"J Mach Learn Res"},{"issue":"489","key":"3539_CR63","doi-asserted-by":"publisher","first-page":"312","DOI":"10.1198\/jasa.2009.tm08013","volume":"105","author":"Y Zhang","year":"2010","unstructured":"Zhang Y, Li R, Tsai CL (2010) Regularization parameter selection via generalized information criterion. J Am Stat Assoc 105(489):312\u2013323","journal-title":"J Am Stat Assoc"},{"key":"3539_CR64","doi-asserted-by":"crossref","unstructured":"Luc DT (2008) Pareto optimality. Pareto Optimality, Game Theory and Equilibria. Springer Optimization and Its Applications, 17","DOI":"10.1007\/978-0-387-77247-9_18"},{"key":"3539_CR65","doi-asserted-by":"crossref","unstructured":"Legriel J, Guernic CL, Cotton S, Maler O (2010) Approximating the Pareto front of multi-criteria optimization problems. Tools and Algorithms for the Construction and Analysis of Systems, pp 69\u201383","DOI":"10.1007\/978-3-642-12002-2_6"},{"issue":"3","key":"3539_CR66","first-page":"18","volume":"2","author":"A Liaw","year":"2002","unstructured":"Liaw A, Wiener M (2002) Classification and regression by randomForest. R News 2(3):18\u201322","journal-title":"R News"},{"key":"3539_CR67","doi-asserted-by":"publisher","first-page":"301","DOI":"10.1111\/j.1467-9868.2005.00503.x","volume":"67","author":"H Zou","year":"2005","unstructured":"Zou H, Hastie T (2005) Regularization and variable selection via the elastic net. J R Stat Soc Ser B 67:301\u2013320","journal-title":"J R Stat Soc Ser B"},{"key":"3539_CR68","doi-asserted-by":"crossref","unstructured":"Efron B, Hastie T, Johnstone I, Tibshirani R (2004) Least angle regression. 
The Annals of Statistics, 32(2)","DOI":"10.1214\/009053604000000067"}],"container-title":["Applied Intelligence"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10489-022-03539-8.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10489-022-03539-8\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10489-022-03539-8.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,9,28]],"date-time":"2024-09-28T10:16:40Z","timestamp":1727518600000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10489-022-03539-8"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,7,2]]},"references-count":68,"alternative-id":["3539"],"URL":"https:\/\/doi.org\/10.1007\/s10489-022-03539-8","relation":{},"ISSN":["0924-669X","1573-7497"],"issn-type":[{"type":"print","value":"0924-669X"},{"type":"electronic","value":"1573-7497"}],"subject":[],"published":{"date-parts":[[2022,7,2]]},"assertion":[{"value":"22 March 2022","order":1,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"2 July 2022","order":2,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of Interests"}}]}}