{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,10]],"date-time":"2026-04-10T05:06:51Z","timestamp":1775797611928,"version":"3.50.1"},"reference-count":16,"publisher":"MDPI AG","issue":"9","license":[{"start":{"date-parts":[[2022,9,2]],"date-time":"2022-09-02T00:00:00Z","timestamp":1662076800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Algorithms"],"abstract":"<jats:p>Hyperparameters in machine learning (ML) have received a fair amount of attention, and hyperparameter tuning has come to be regarded as an important step in the ML pipeline. However, just how useful is said tuning? While smaller-scale experiments have been previously conducted, herein we carry out a large-scale investigation, specifically one involving 26 ML algorithms, 250 datasets (regression and both binary and multinomial classification), 6 score metrics, and 28,857,600 algorithm runs. Analyzing the results we conclude that for many ML algorithms, we should not expect considerable gains from hyperparameter tuning on average; however, there may be some datasets for which default hyperparameters perform poorly, especially for some algorithms. By defining a single hp_score value, which combines an algorithm\u2019s accumulated statistics, we are able to rank the 26 ML algorithms from those expected to gain the most from hyperparameter tuning to those expected to gain the least. We believe such a study shall serve ML practitioners at large.<\/jats:p>","DOI":"10.3390\/a15090315","type":"journal-article","created":{"date-parts":[[2022,9,5]],"date-time":"2022-09-05T20:48:25Z","timestamp":1662410905000},"page":"315","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":37,"title":["High Per Parameter: A Large-Scale Study of Hyperparameter Tuning for Machine Learning Algorithms"],"prefix":"10.3390","volume":"15","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-1811-472X","authenticated-orcid":false,"given":"Moshe","family":"Sipper","sequence":"first","affiliation":[{"name":"Department of Computer Science, Ben-Gurion University, Beer-Sheva 8410501, Israel"}]}],"member":"1968","published-online":{"date-parts":[[2022,9,2]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"Bergstra, J., Yamins, D., and Cox, D.D. (2013, January 11\u201317). Hyperopt: A python library for optimizing the hyperparameters of machine learning algorithms. Proceedings of the 12th Python in Science Conference, Austin, TX, USA.","DOI":"10.25080\/Majora-8b375195-003"},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Akiba, T., Sano, S., Yanase, T., Ohta, T., and Koyama, M. (2019, January 4\u20138). Optuna: A Next-Generation Hyperparameter Optimization Framework. Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Anchorage, AK, USA.","DOI":"10.1145\/3292500.3330701"},{"key":"ref_3","first-page":"100243","article-title":"AddGBoost: A gradient boosting-style algorithm based on strong learners","volume":"7","author":"Sipper","year":"2022","journal-title":"Mach. Learn. 
Appl."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1007\/s42979-021-00885-1","article-title":"Neural networks with \u00e0 la carte selection of activation functions","volume":"2","author":"Sipper","year":"2021","journal-title":"SN Comput. Sci."},{"key":"ref_5","unstructured":"Bischl, B., Binder, M., Lang, M., Pielok, T., Richter, J., Coors, S., Thomas, J., Ullmann, T., Becker, M., and Boulesteix, A.L. (2021). Hyperparameter Optimization: Foundations, Algorithms, Best Practices and Open Challenges. arXiv."},{"key":"ref_6","first-page":"1","article-title":"Tunability: Importance of Hyperparameters of Machine Learning Algorithms","volume":"20","author":"Probst","year":"2019","journal-title":"J. Mach. Learn. Res."},{"key":"ref_7","unstructured":"Weerts, H.J.P., Mueller, A.C., and Vanschoren, J. (2020). Importance of Tuning Hyperparameters of Machine Learning Algorithms. arXiv."},{"key":"ref_8","unstructured":"Turner, R., Eriksson, D., McCourt, M., Kiili, J., Laaksonen, E., Xu, Z., and Guyon, I. (2020, January 6\u201312). Bayesian Optimization is Superior to Random Search for Machine Learning Hyperparameter Tuning: Analysis of the Black-Box Optimization Challenge 2020. Proceedings of the NeurIPS 2020 Competition and Demonstration Track, Virtual Event\/Vancouver, BC, Canada."},{"key":"ref_9","first-page":"281","article-title":"Random Search for Hyper-Parameter Optimization","volume":"13","author":"Bergstra","year":"2012","journal-title":"J. Mach. Learn. Res."},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Romano, J.D., Le, T.T., La Cava, W., Gregg, J.T., Goldberg, D.J., Chakraborty, P., Ray, N.L., Himmelstein, D., Fu, W., and Moore, J.H. (2021). PMLB v1.0: An open source dataset collection for benchmarking machine learning methods. arXiv.","DOI":"10.1093\/bioinformatics\/btab727"},{"key":"ref_11","first-page":"2825","article-title":"Scikit-learn: Machine Learning in Python","volume":"12","author":"Pedregosa","year":"2011","journal-title":"J. Mach. Learn. Res."},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Chen, T., and Guestrin, C. (2016, January 13\u201317). XGBoost: A Scalable Tree Boosting System. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA.","DOI":"10.1145\/2939672.2939785"},{"key":"ref_13","first-page":"3146","article-title":"LightGBM: A highly efficient gradient boosting decision tree","volume":"30","author":"Ke","year":"2017","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_14","unstructured":"(2022, June 22). Scikit-Learn: Machine Learning in Python. Available online: https:\/\/scikit-learn.org\/."},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"75","DOI":"10.1016\/j.jpdc.2019.07.007","article-title":"Estimation of energy consumption in machine learning","volume":"134","author":"Rodrigues","year":"2019","journal-title":"J. Parallel Distrib. Comput."},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"035022","DOI":"10.1088\/2632-2153\/abee59","article-title":"Efficient hyperparameter tuning for kernel ridge regression with Bayesian optimization","volume":"2","author":"Stuke","year":"2021","journal-title":"Mach. Learn. Sci. 
Technol."}],"container-title":["Algorithms"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1999-4893\/15\/9\/315\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T00:22:10Z","timestamp":1760142130000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1999-4893\/15\/9\/315"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,9,2]]},"references-count":16,"journal-issue":{"issue":"9","published-online":{"date-parts":[[2022,9]]}},"alternative-id":["a15090315"],"URL":"https:\/\/doi.org\/10.3390\/a15090315","relation":{},"ISSN":["1999-4893"],"issn-type":[{"value":"1999-4893","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,9,2]]}}}