{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,8]],"date-time":"2026-04-08T16:36:22Z","timestamp":1775666182308,"version":"3.50.1"},"reference-count":89,"publisher":"Association for Computing Machinery (ACM)","issue":"3","license":[{"start":{"date-parts":[[2025,3,11]],"date-time":"2025-03-11T00:00:00Z","timestamp":1741651200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Knowl. Discov. Data"],"published-print":{"date-parts":[[2025,4,30]]},"abstract":"<jats:p>\n            Time-series forecasting models are used in a variety of domains for crucial decision-making. Traditionally, these models are constructed by experts with considerable manual effort. Unfortunately, this approach scales poorly when accurate forecasts are needed for new datasets belonging to diverse applications. Without access to skilled domain knowledge, one approach is to train all the models on the new time-series data and then select the best one. However, this approach is nonviable in practice. In this work, we develop techniques for fast automatic selection of the best forecasting model for a new unseen time-series dataset, without having to first train (or evaluate) all the models on the new time-series data to select the best one. In particular, we develop a forecasting meta-learning approach called\n            <jats:sc>AutoForecast<\/jats:sc>\n            that allows for the quick inference of the best time-series forecasting model for an unseen dataset. Our approach learns both the forecasting models\u2019 performances over the time horizon of the same dataset and the task similarity across different datasets. 
The experiments demonstrate the effectiveness of the approach over state-of-the-art (SOTA) single and ensemble methods and several SOTA meta-learners (adapted to our problem) in terms of selecting better forecasting models (i.e., 2\n            <jats:inline-formula content-type=\"math\/tex\">\n              <jats:tex-math notation=\"LaTeX\" version=\"MathJax\">\\(\\times\\)<\/jats:tex-math>\n            <\/jats:inline-formula>\n            gain) for unseen tasks on univariate and multivariate testbeds.\n            <jats:sc>AutoForecast<\/jats:sc>\n            also achieves a significant reduction in inference time compared to the na\u00efve approach (doing inference using all possible models and then selecting the best one), with a median of 42\n            <jats:inline-formula content-type=\"math\/tex\">\n              <jats:tex-math notation=\"LaTeX\" version=\"MathJax\">\\(\\times\\)<\/jats:tex-math>\n            <\/jats:inline-formula>\n            across the two testbeds. We release our meta-learning database corpus (348 datasets), the performances of the 322 forecasting models on the database corpus, meta-features, and source code so that the community can use them for forecasting model selection and build on them with new datasets and models, helping to advance the automation of time-series forecasting. 
In our released database corpus, we unveil new traces of Adobe computing cluster usage for production workloads.\n          <\/jats:p>","DOI":"10.1145\/3715149","type":"journal-article","created":{"date-parts":[[2025,1,24]],"date-time":"2025-01-24T15:38:07Z","timestamp":1737733087000},"page":"1-41","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":5,"title":["Evaluation-free Time-series Forecasting Model Selection via Meta-learning"],"prefix":"10.1145","volume":"19","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-9554-9260","authenticated-orcid":false,"given":"Mustafa","family":"Abdallah","sequence":"first","affiliation":[{"name":"Purdue University in Indianapolis, Indianapolis, IN, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9758-0635","authenticated-orcid":false,"given":"Ryan A.","family":"Rossi","sequence":"additional","affiliation":[{"name":"Adobe Systems Inc, San Jose, CA, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6780-4199","authenticated-orcid":false,"given":"Kanak","family":"Mahadik","sequence":"additional","affiliation":[{"name":"Adobe Systems Inc, San Jose, CA, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3580-5290","authenticated-orcid":false,"given":"Sungchul","family":"Kim","sequence":"additional","affiliation":[{"name":"Adobe Systems Inc, San Jose, CA, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3775-2954","authenticated-orcid":false,"given":"Handong","family":"Zhao","sequence":"additional","affiliation":[{"name":"Adobe Systems Inc, San Jose, CA, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4239-5632","authenticated-orcid":false,"given":"Saurabh","family":"Bagchi","sequence":"additional","affiliation":[{"name":"Department of Electrical and Computer Engineering, Purdue Univ, West Lafayette, IN, 
USA"}]}],"member":"320","published-online":{"date-parts":[[2025,3,11]]},"reference":[{"key":"e_1_3_2_2_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.cie.2020.106435"},{"key":"e_1_3_2_3_1","unstructured":"Mustafa Abdallah Wo Jae Lee Nithin Raghunathan Charilaos Mousoulis John W. Sutherland and Saurabh Bagchi. 2021. Anomaly detection through transfer learning in agriculture and manufacturing IoT systems. arXiv:2102.05814. Retrieved from https:\/\/arxiv.org\/abs\/2102.05814"},{"key":"e_1_3_2_4_1","doi-asserted-by":"publisher","DOI":"10.1145\/3511808.3557241"},{"key":"e_1_3_2_5_1","doi-asserted-by":"publisher","unstructured":"Salisu Mamman Abdulrahman Pavel Brazdil Jan N. van Rijn and Joaquin Vanschoren. 2018. Speeding up algorithm selection using average ranking and active testing by introducing runtime. Machine Learning 107 1 (2018) 79\u2013108. DOI:10.1007\/s10994-017-5687-8","DOI":"10.1007\/s10994-017-5687-8"},{"key":"e_1_3_2_6_1","first-page":"116","article-title":"GluonTS: Probabilistic and neural time series modeling in Python","volume":"21","author":"Alexandrov Alexander","year":"2020","unstructured":"Alexander Alexandrov, Konstantinos Benidis, Michael Bohlke-Schneider, Valentin Flunkert, Jan Gasthaus, Tim Januschowski, Danielle C. Maddix, Syama Sundar Rangapuram, David Salinas, Jasper Schulz, et\u00a0al. 2020. GluonTS: Probabilistic and neural time series modeling in Python. Journal of Machine Learning Research 21, 116 (2020), 1\u20136.","journal-title":"Journal of Machine Learning Research"},{"key":"e_1_3_2_7_1","doi-asserted-by":"publisher","DOI":"10.1016\/S0305-0548(96)00062-7"},{"key":"e_1_3_2_8_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.ins.2011.12.028"},{"key":"e_1_3_2_9_1","first-page":"1","article-title":"Algorithms for hyper-parameter optimization","author":"Bergstra James","year":"2011","unstructured":"James Bergstra, R\u00e9mi Bardenet, Yoshua Bengio, and Bal\u00e1zs K\u00e9gl. 2011. Algorithms for hyper-parameter optimization. 
In 25th Annual Conference on Neural Information Processing Systems (NIPS 2011), Vol. 24. Neural Information Processing Systems Foundation, 1\u20139.","journal-title":"25th Annual Conference on Neural Information Processing Systems (NIPS 2011)"},{"issue":"2","key":"e_1_3_2_10_1","first-page":"281","article-title":"Random search for hyper-parameter optimization","volume":"13","author":"Bergstra James","year":"2012","unstructured":"James Bergstra and Yoshua Bengio. 2012. Random search for hyper-parameter optimization. Journal of Machine Learning Research 13, 2 (2012), 281\u2013305.","journal-title":"Journal of Machine Learning Research"},{"key":"e_1_3_2_11_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.ijforecast.2020.07.007"},{"key":"e_1_3_2_12_1","volume-title":"Metalearning: Applications to Data Mining","author":"Brazdil Pavel","year":"2008","unstructured":"Pavel Brazdil, Christophe Giraud Carrier, Carlos Soares, and Ricardo Vilalta. 2008. Metalearning: Applications to Data Mining. Springer Science & Business Media."},{"key":"e_1_3_2_13_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10994-020-05910-7"},{"key":"e_1_3_2_14_1","unstructured":"Vitor Cerqueira Luis Torgo and Carlos Soares. 2021. Model selection for time series forecasting: Empirical analysis of different estimators. arXiv:2104.00584. Retrieved from https:\/\/arxiv.org\/abs\/2104.00584"},{"key":"e_1_3_2_15_1","doi-asserted-by":"publisher","DOI":"10.2307\/2347162"},{"key":"e_1_3_2_16_1","doi-asserted-by":"publisher","DOI":"10.1109\/JIOT.2020.3036087"},{"key":"e_1_3_2_17_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2018.03.067"},{"key":"e_1_3_2_18_1","unstructured":"Christophmark. 2020. Python implementation of FFORMA. Retrieved from https:\/\/github.com\/christophmark\/fforma"},{"key":"e_1_3_2_19_1","first-page":"636","article-title":"Model averaging","author":"Clyde Merlise","year":"2003","unstructured":"Merlise Clyde. 2003. Model averaging. 
In Subjective and Objective Bayesian Statistics, 636\u2013642.","journal-title":"Subjective and Objective Bayesian Statistics,"},{"key":"e_1_3_2_20_1","doi-asserted-by":"publisher","DOI":"10.5555\/2912955.2912957"},{"key":"e_1_3_2_21_1","doi-asserted-by":"publisher","DOI":"10.1198\/jasa.2011.tm09771"},{"key":"e_1_3_2_22_1","doi-asserted-by":"publisher","DOI":"10.1109\/BigData.2018.8621990"},{"key":"e_1_3_2_23_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-05318-5_1"},{"key":"e_1_3_2_24_1","article-title":"Efficient and robust automated machine learning","volume":"28","author":"Feurer Matthias","year":"2015","unstructured":"Matthias Feurer, Aaron Klein, Katharina Eggensperger, Jost Springenberg, Manuel Blum, and Frank Hutter. 2015. Efficient and robust automated machine learning. In Advances in Neural Information Processing Systems. C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett (Eds.), Vol. 28. Curran Associates, Inc. Retrieved from https:\/\/proceedings.neurips.cc\/paper\/2015\/file\/11d0e6287202fced83f79975ec59a3a6-Paper.pdf","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_25_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v29i1.9354"},{"key":"e_1_3_2_26_1","first-page":"1126","volume-title":"International Conference on Machine Learning","author":"Finn Chelsea","year":"2017","unstructured":"Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. In International Conference on Machine Learning. PMLR, 1126\u20131135."},{"key":"e_1_3_2_27_1","unstructured":"Jean-Yves Franceschi Aymeric Dieuleveut and Martin Jaggi. 2019. Unsupervised scalable representation learning for multivariate Time series. arXiv:1901.10738. Retrieved from https:\/\/arxiv.org\/abs\/1901.10738"},{"key":"e_1_3_2_28_1","doi-asserted-by":"crossref","unstructured":"Ben D. Fulcher Carl H. Lubba Sarab S. Sethi and Nick S. Jones. 2019. 
CompEngine: A self-organizing living library of time-series data. arXiv:1905.01042. Retrieved from https:\/\/arxiv.org\/abs\/1905.01042","DOI":"10.1038\/s41597-020-0553-0"},{"key":"e_1_3_2_29_1","doi-asserted-by":"publisher","DOI":"10.1002\/for.3980040103"},{"key":"e_1_3_2_30_1","article-title":"Monash time series forecasting archive","author":"Godahewa Rakshitha","year":"2021","unstructured":"Rakshitha Godahewa, Christoph Bergmeir, Geoffrey I. Webb, Rob J. Hyndman, and Pablo Montero-Manso. 2021. Monash time series forecasting archive. In Neural Information Processing Systems Track on Datasets and Benchmarks.","journal-title":"Neural Information Processing Systems Track on Datasets and Benchmarks"},{"key":"e_1_3_2_31_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.ijforecast.2022.01.008"},{"key":"e_1_3_2_32_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.ijforecast.2020.06.008"},{"key":"e_1_3_2_33_1","doi-asserted-by":"publisher","DOI":"10.1145\/3307772.3328284"},{"issue":"0","key":"e_1_3_2_34_1","article-title":"tsfeatures: Time series feature extraction","volume":"1","author":"Hyndman Rob","year":"2019","unstructured":"Rob Hyndman, Yanfei Kang, Pablo Montero-Manso, Mitchell O\u2019Hara-Wild, Thiyanga Talagala, Earo Wang, Yangzhuoran Yang. 2019. tsfeatures: Time series feature extraction. R package version 1, 0.","journal-title":"R package version"},{"key":"e_1_3_2_35_1","first-page":"177","volume-title":"Business Forecasting: Practical Problems and Solutions","author":"Hyndman Rob J.","year":"2014","unstructured":"Rob J. Hyndman. 2014. Measuring forecast accuracy. 
In Business Forecasting: Practical Problems and Solutions, 177\u2013183."},{"key":"e_1_3_2_36_1","doi-asserted-by":"publisher","DOI":"10.18637\/jss.v027.i03"},{"key":"e_1_3_2_37_1","doi-asserted-by":"publisher","DOI":"10.1016\/S0169-2070(01)00110-8"},{"key":"e_1_3_2_38_1","first-page":"751","article-title":"ISAC-instance-specific algorithm configuration","volume":"215","author":"Kadioglu Serdar","year":"2010","unstructured":"Serdar Kadioglu, Yuri Malitsky, Meinolf Sellmann, and Kevin Tierney. 2010. ISAC-instance-specific algorithm configuration. In European Conference on Artificial Intelligence, Vol. 215. Citeseer, 751\u2013756.","journal-title":"European Conference on Artificial Intelligence"},{"key":"e_1_3_2_39_1","unstructured":"Kaggle. 2021. Time series forecasting datasets. Retrieved May 21 2021 from https:\/\/www.kaggle.com\/search?q=time+series+forecasting+in%3Adatasets"},{"key":"e_1_3_2_40_1","first-page":"13","article-title":"Time series forecasting using Holt-Winters exponential smoothing","volume":"4329008","author":"Kalekar Prajakta S.","year":"2004","unstructured":"Prajakta S. Kalekar. 2004. Time series forecasting using Holt-Winters exponential smoothing. Kanwal School of Information Technology 4329008, 13 (2004), 1\u201313.","journal-title":"Kanwal School of Information Technology"},{"key":"e_1_3_2_41_1","volume-title":"Algorithm Selection via Meta-learning","author":"Kalousis Alexandros","year":"2002","unstructured":"Alexandros Kalousis. 2002. Algorithm Selection via Meta-learning. Ph.D. Dissertation. 
University of Geneva."},{"key":"e_1_3_2_42_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.ijforecast.2010.04.006"},{"key":"e_1_3_2_43_1","doi-asserted-by":"publisher","DOI":"10.1109\/IJCNN.2016.7727376"},{"key":"e_1_3_2_44_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2009.09.020"},{"key":"e_1_3_2_45_1","doi-asserted-by":"publisher","DOI":"10.1016\/0047-259X(85)90027-2"},{"key":"e_1_3_2_46_1","doi-asserted-by":"publisher","DOI":"10.5555\/3122009.3242042"},{"issue":"3","key":"e_1_3_2_47_1","first-page":"18","article-title":"Classification and regression by randomForest","volume":"2","author":"Liaw Andy","year":"2002","unstructured":"Andy Liaw and Matthew Wiener. 2002. Classification and regression by randomForest. R news 2, 3 (2002), 18\u201322.","journal-title":"R news"},{"key":"e_1_3_2_48_1","doi-asserted-by":"publisher","DOI":"10.1145\/3071178.3071208"},{"key":"e_1_3_2_49_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10618-019-00647-x"},{"key":"e_1_3_2_50_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.eswa.2013.01.047"},{"key":"e_1_3_2_51_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.00876"},{"key":"e_1_3_2_52_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.ijforecast.2019.02.011"},{"key":"e_1_3_2_53_1","doi-asserted-by":"publisher","DOI":"10.1002\/0471667196.ess5050"},{"key":"e_1_3_2_54_1","doi-asserted-by":"publisher","DOI":"10.1145\/3371158.3371162"},{"key":"e_1_3_2_55_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10462-011-9290-2"},{"key":"e_1_3_2_56_1","unstructured":"Boris N. Oreshkin Dmitri Carpov Nicolas Chapados and Yoshua Bengio. 2020. Meta-learning framework with applications to zero-shot time-series forecasting. arXiv:2002.02887. Retrieved from https:\/\/arxiv.org\/abs\/2002.02887"},{"key":"e_1_3_2_57_1","volume-title":"International Conference on Learning Representations","author":"Oreshkin Boris N.","year":"2020","unstructured":"Boris N. Oreshkin, Dmitri Carpov, Nicolas Chapados, and Yoshua Bengio. 
2020. N-BEATS: Neural basis expansion analysis for interpretable time series forecasting. In International Conference on Learning Representations. Retrieved from https:\/\/openreview.net\/forum?id=r1ecqn4YwB"},{"key":"e_1_3_2_58_1","doi-asserted-by":"publisher","DOI":"10.1109\/TKDE.2020.2995855"},{"key":"e_1_3_2_59_1","doi-asserted-by":"publisher","DOI":"10.5555\/1953048.2078195"},{"key":"e_1_3_2_60_1","doi-asserted-by":"publisher","DOI":"10.3390\/s21051590"},{"key":"e_1_3_2_61_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2004.03.008"},{"key":"e_1_3_2_62_1","doi-asserted-by":"publisher","DOI":"10.1016\/S0377-2217(00)00171-5"},{"key":"e_1_3_2_63_1","volume-title":"International Conference on Learning Representations","author":"Raghu Aniruddh","year":"2020","unstructured":"Aniruddh Raghu, Maithra Raghu, Samy Bengio, and Oriol Vinyals. 2020. Rapid learning or feature reuse? Towards understanding the effectiveness of MAML. In International Conference on Learning Representations. Retrieved from https:\/\/openreview.net\/forum?id=rkgMkCEtPB"},{"key":"e_1_3_2_64_1","volume-title":"5th International Conference on Learning Representations (ICLR \u201917)","author":"Ravi Sachin","year":"2017","unstructured":"Sachin Ravi and Hugo Larochelle. 2017. Optimization as a model for few-shot learning. In 5th International Conference on Learning Representations (ICLR \u201917). Retrieved from https:\/\/openreview.net\/forum?id=rJY0-Kcll"},{"key":"e_1_3_2_65_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.enbuild.2018.01.034"},{"key":"e_1_3_2_66_1","volume-title":"International Conference on Learning Representations","author":"Rusu Andrei A.","year":"2019","unstructured":"Andrei A. Rusu, Dushyant Rao, Jakub Sygnowski, Oriol Vinyals, Razvan Pascanu, Simon Osindero, and Raia Hadsell. 2019. Meta-learning with latent embedding optimization. In International Conference on Learning Representations. 
Retrieved from https:\/\/openreview.net\/forum?id=BJgklhAcK7"},{"key":"e_1_3_2_67_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.ijforecast.2019.07.001"},{"key":"e_1_3_2_68_1","doi-asserted-by":"publisher","DOI":"10.5555\/2749482.2749833"},{"key":"e_1_3_2_69_1","doi-asserted-by":"publisher","DOI":"10.25080\/Majora-92bf1922-011"},{"key":"e_1_3_2_70_1","doi-asserted-by":"publisher","DOI":"10.1145\/3448016.3457557"},{"key":"e_1_3_2_71_1","first-page":"1168","article-title":"Unbounded Bayesian optimization via regularization","author":"Shahriari Bobak","year":"2016","unstructured":"Bobak Shahriari, Alexandre Bouchard-C\u00f4t\u00e9, and Nando Freitas. 2016. Unbounded Bayesian optimization via regularization. In Artificial Intelligence and Statistics. PMLR, 1168\u20131176.","journal-title":"Artificial Intelligence and Statistics"},{"key":"e_1_3_2_72_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.aap.2015.12.001"},{"key":"e_1_3_2_73_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.ijforecast.2019.03.017"},{"key":"e_1_3_2_74_1","unstructured":"Jake Snell Kevin Swersky and Richard Zemel. 2017. Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems I. Guyon U. V. Luxburg S. Bengio H. Wallach R. Fergus S. Vishwanathan and R. Garnett (Eds.) Vol. 30. Curran Associates Inc. Retrieved from https:\/\/proceedings.neurips.cc\/paper\/2017\/file\/cb8da6767461f2812ae4290eac7cbc42-Paper.pdf"},{"key":"e_1_3_2_75_1","doi-asserted-by":"publisher","DOI":"10.1002\/for.2963"},{"key":"e_1_3_2_76_1","first-page":"18","article-title":"Meta-learning how to forecast time series","volume":"6","author":"Talagala Thiyanga S.","year":"2018","unstructured":"Thiyanga S. Talagala, Rob J. Hyndman, and George Athanasopoulos. 2018. Meta-learning how to forecast time series. 
Monash Econometrics Working Papers 6 (2018), 18.","journal-title":"Monash Econometrics Working Papers"},{"key":"e_1_3_2_77_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.ijforecast.2021.07.002"},{"key":"e_1_3_2_78_1","doi-asserted-by":"publisher","DOI":"10.1080\/00031305.2017.1380080"},{"key":"e_1_3_2_79_1","unstructured":"Evaldas Vaiciukynas Paulius Danenas Vilius Kontrimas and Rimantas Butleris. 2020. Meta-learning for time series forecasting ensemble. arXiv:2011.10545. Retrieved from https:\/\/arxiv.org\/abs\/2011.10545"},{"key":"e_1_3_2_80_1","unstructured":"Joaquin Vanschoren. 2018. Meta-learning: A survey. arXiv:1810.03548. Retrieved from https:\/\/arxiv.org\/abs\/1810.03548"},{"key":"e_1_3_2_81_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2008.10.017"},{"key":"e_1_3_2_82_1","first-page":"6607","volume-title":"International Conference on Machine Learning","author":"Wang Yuyang","year":"2019","unstructured":"Yuyang Wang, Alex Smola, Danielle Maddix, Jan Gasthaus, Dean Foster, and Tim Januschowski. 2019. Deep factors for forecasting. In International Conference on Machine Learning. PMLR, 6607\u20136617."},{"key":"e_1_3_2_83_1","unstructured":"Tailai Wen and Roy Keyes. 2019. Time series anomaly detection using convolutional neural networks and transfer learning. arXiv:1905.13628. Retrieved from https:\/\/arxiv.org\/abs\/1905.13628"},{"key":"e_1_3_2_84_1","first-page":"57","article-title":"Model selection using dimensionality reduction of time series characteristics","author":"Widodo Agus","year":"2013","unstructured":"Agus Widodo and Indra Budi. 2013. Model selection using dimensionality reduction of time series characteristics. 
In International Symposium on Forecasting, 57\u2013118.","journal-title":"International Symposium on Forecasting,"},{"key":"e_1_3_2_85_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10994-017-5684-y"},{"key":"e_1_3_2_86_1","doi-asserted-by":"publisher","DOI":"10.1162\/neco.1996.8.7.1341"},{"key":"e_1_3_2_87_1","doi-asserted-by":"publisher","DOI":"10.1109\/IJCNN.2009.5178729"},{"key":"e_1_3_2_88_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.knosys.2018.05.021"},{"key":"e_1_3_2_89_1","unstructured":"Yue Zhao Ryan A. Rossi and Leman Akoglu. 2020. Automating outlier detection via meta-learning. arXiv:2009.10606. Retrieved from https:\/\/arxiv.org\/abs\/2009.10606"},{"key":"e_1_3_2_90_1","doi-asserted-by":"publisher","DOI":"10.1145\/3357384.3358106"}],"container-title":["ACM Transactions on Knowledge Discovery from Data"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3715149","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3715149","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T01:18:18Z","timestamp":1750295898000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3715149"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,3,11]]},"references-count":89,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2025,4,30]]}},"alternative-id":["10.1145\/3715149"],"URL":"https:\/\/doi.org\/10.1145\/3715149","relation":{},"ISSN":["1556-4681","1556-472X"],"issn-type":[{"value":"1556-4681","type":"print"},{"value":"1556-472X","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,3,11]]},"assertion":[{"value":"2022-11-21","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication 
History"}},{"value":"2024-12-23","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-03-11","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}