{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,10]],"date-time":"2026-04-10T09:59:27Z","timestamp":1775815167006,"version":"3.50.1"},"reference-count":55,"publisher":"Association for Computing Machinery (ACM)","issue":"2","content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["Proc. VLDB Endow."],"published-print":{"date-parts":[[2024,10]]},"abstract":"<jats:p>The expanding instrumentation of processes throughout society with sensors yields a proliferation of time series data that may in turn enable important applications, e.g., related to transportation infrastructures or power grids. Machine-learning based methods are increasingly being used to extract value from such data. We provide means of reducing the resulting considerable computational and data storage costs. We achieve this by providing means of condensing large time series datasets such that models trained on the condensed data achieve performance comparable to those trained on the original, large data. Specifically, we propose a time series dataset condensation framework, TimeDC, that employs two-fold modal matching, encompassing frequency matching and training trajectory matching. Thus, TimeDC performs time series feature extraction and decomposition-driven frequency matching to preserve complex temporal dependencies in the reduced time series. Further, TimeDC employs curriculum training trajectory matching to ensure effective and generalized time series dataset condensation. To avoid memory overflow and to reduce the cost of dataset condensation, the framework includes an expert buffer storing pre-computed expert trajectories. 
Extensive experiments on real data offer insight into the effectiveness and efficiency of the proposed solutions.<\/jats:p>","DOI":"10.14778\/3705829.3705841","type":"journal-article","created":{"date-parts":[[2025,2,28]],"date-time":"2025-02-28T23:21:06Z","timestamp":1740784866000},"page":"226-238","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":14,"title":["Less is More: Efficient Time Series Dataset Condensation via Two-Fold Modal Matching"],"prefix":"10.14778","volume":"18","author":[{"given":"Hao","family":"Miao","sequence":"first","affiliation":[{"name":"Aalborg University, Denmark"}]},{"given":"Ziqiao","family":"Liu","sequence":"additional","affiliation":[{"name":"University of Electronic Science and Technology of China, China"}]},{"given":"Yan","family":"Zhao","sequence":"additional","affiliation":[{"name":"Aalborg University, Denmark"}]},{"given":"Chenjuan","family":"Guo","sequence":"additional","affiliation":[{"name":"East China Normal University, China"}]},{"given":"Bin","family":"Yang","sequence":"additional","affiliation":[{"name":"East China Normal University, China"}]},{"given":"Kai","family":"Zheng","sequence":"additional","affiliation":[{"name":"University of Electronic Science and Technology of China, China"}]},{"given":"Christian S.","family":"Jensen","sequence":"additional","affiliation":[{"name":"Aalborg University, Denmark"}]}],"member":"320","published-online":{"date-parts":[[2025,2,28]]},"reference":[{"key":"e_1_2_1_1_1","doi-asserted-by":"publisher","DOI":"10.1145\/1008731.1008736"},{"key":"e_1_2_1_2_1","doi-asserted-by":"publisher","DOI":"10.1145\/1331904.1331906"},{"key":"e_1_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.1109\/TKDE.2007.190645"},{"key":"e_1_2_1_4_1","first-page":"193","article-title":"Time2Feat: learning interpretable representations for multivariate time series clustering","volume":"16","author":"Bonifati Angela","year":"2022","unstructured":"Angela Bonifati, 
Francesco Del Buono, Francesco Guerra, and Donato Tiano. 2022. Time2Feat: learning interpretable representations for multivariate time series clustering. PVLDB 16, 2 (2022), 193--201.","journal-title":"PVLDB"},{"key":"e_1_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.14778\/3681954.3681957"},{"key":"e_1_2_1_6_1","doi-asserted-by":"crossref","unstructured":"George Cazenavette Tongzhou Wang Antonio Torralba Alexei A Efros and Jun-Yan Zhu. 2022. Dataset distillation by matching training trajectories. In CVPR. 4750--4759.","DOI":"10.1109\/CVPR52688.2022.01045"},{"key":"e_1_2_1_7_1","first-page":"1","article-title":"GoodCore: Data-effective and Data-efficient Machine Learning through Coreset Selection over Incomplete Data","volume":"1","author":"Chai Chengliang","year":"2023","unstructured":"Chengliang Chai, Jiabin Liu, Nan Tang, Ju Fan, Dongjing Miao, Jiayi Wang, Yuyu Luo, and Guoliang Li. 2023. GoodCore: Data-effective and Data-efficient Machine Learning through Coreset Selection over Incomplete Data. SIGMOD 1, 2 (2023), 1--27.","journal-title":"SIGMOD"},{"key":"e_1_2_1_8_1","volume-title":"Pathformer: Multi-scale Transformers with Adaptive Pathways for Time Series Forecasting. In ICLR.","author":"Chen Peng","year":"2024","unstructured":"Peng Chen, Yingying Zhang, Yunyao Cheng, Yang Shu, Yihang Wang, Qingsong Wen, Bin Yang, and Chenjuan Guo. 2024. Pathformer: Multi-scale Transformers with Adaptive Pathways for Time Series Forecasting. In ICLR."},{"key":"e_1_2_1_9_1","first-page":"766","article-title":"Weakly guided adaptation for robust time series forecasting","volume":"17","author":"Cheng Yunyao","year":"2024","unstructured":"Yunyao Cheng, Peng Chen, Chenjuan Guo, Kai Zhao, Qingsong Wen, Bin Yang, and Christian S Jensen. 2024. Weakly guided adaptation for robust time series forecasting. 
PVLDB 17, 4 (2024), 766--779.","journal-title":"PVLDB"},{"key":"e_1_2_1_10_1","volume-title":"A Memory Guided Transformer for Time Series Forecasting. PVLDB 18","author":"Cheng Yunyao","year":"2024","unstructured":"Yunyao Cheng, Chenjuan Guo, Bin Yang, Haomin Yu, Kai Zhao, and Christian S. Jensen. 2024. A Memory Guided Transformer for Time Series Forecasting. PVLDB 18 (2024)."},{"key":"e_1_2_1_11_1","doi-asserted-by":"crossref","unstructured":"Zhixuan Chu Ruopeng Li Stephen Rathbun and Sheng Li. 2023. Continual causal inference with incremental observational data. In ICDE. 3430--3439.","DOI":"10.1109\/ICDE55515.2023.00263"},{"key":"e_1_2_1_12_1","first-page":"3","article-title":"STL: A seasonal-trend decomposition","volume":"6","author":"Cleveland Robert B","year":"1990","unstructured":"Robert B Cleveland, William S Cleveland, Jean E McRae, and Irma Terpenning. 1990. STL: A seasonal-trend decomposition. J. Off. Stat 6, 1 (1990), 3--73.","journal-title":"J. Off. Stat"},{"key":"e_1_2_1_13_1","doi-asserted-by":"crossref","unstructured":"Yuchen Fang Jiandong Xie Yan Zhao Lu Chen Yunjun Gao and Kai Zheng. 2024. Temporal-Frequency Masked Autoencoders for Time Series Anomaly Detection. In ICDE. 1228--1241.","DOI":"10.1109\/ICDE60146.2024.00099"},{"key":"e_1_2_1_14_1","volume-title":"Facility location: concepts, models, algorithms and case studies","author":"Farahani Reza Zanjirani","unstructured":"Reza Zanjirani Farahani and Masoud Hekmatfar. 2009. Facility location: concepts, models, algorithms and case studies. Springer Science & Business Media."},{"key":"e_1_2_1_15_1","doi-asserted-by":"crossref","unstructured":"Dan Feldman and Michael Langberg. 2011. A unified framework for approximating and clustering data. In STOC. 
569--578.","DOI":"10.1145\/1993636.1993712"},{"key":"e_1_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.1137\/18M1209854"},{"key":"e_1_2_1_17_1","first-page":"1468","article-title":"Efficient construction of approximate ad-hoc ML models through materialization and reuse","volume":"11","author":"Hasani Sona","year":"2018","unstructured":"Sona Hasani, Saravanan Thirumuruganathan, Abolfazl Asudeh, Nick Koudas, and Gautam Das. 2018. Efficient construction of approximate ad-hoc ML models through materialization and reuse. PVLDB 11, 11 (2018), 1468--1481.","journal-title":"PVLDB"},{"key":"e_1_2_1_18_1","first-page":"1688","article-title":"Modelardb: Modular model-based time series management with spark and cassandra","volume":"11","author":"Jensen S\u00f8ren Kejser","year":"2018","unstructured":"S\u00f8ren Kejser Jensen, Torben Bach Pedersen, and Christian Thomsen. 2018. Modelardb: Modular model-based time series management with spark and cassandra. PVLDB 11, 11 (2018), 1688--1701.","journal-title":"PVLDB"},{"key":"e_1_2_1_19_1","volume-title":"TEAM: Topological Evolution-aware Framework for Traffic Forecasting. PVLDB 18","author":"Kieu Duc","year":"2024","unstructured":"Duc Kieu, Tung Kieu, Peng Han, Bin Yang, Christian S. Jensen, and Bac Le. 2024. TEAM: Topological Evolution-aware Framework for Traffic Forecasting. PVLDB 18 (2024)."},{"key":"e_1_2_1_20_1","doi-asserted-by":"crossref","unstructured":"Tung Kieu Bin Yang Chenjuan Guo Christian S. Jensen Yan Zhao Feiteng Huang and Kai Zheng. 2022. Robust and Explainable Autoencoders for Unsupervised Time Series Outlier Detection. In ICDE. 3038--3050.","DOI":"10.1109\/ICDE53745.2022.00273"},{"key":"e_1_2_1_21_1","first-page":"1","article-title":"LightCTS: A Lightweight Framework for Correlated Time Series Forecasting","volume":"1","author":"Lai Zhichen","year":"2023","unstructured":"Zhichen Lai, Dalin Zhang, Huan Li, Christian S Jensen, Hua Lu, and Yan Zhao. 2023. 
LightCTS: A Lightweight Framework for Correlated Time Series Forecasting. SIGMOD 1, 2 (2023), 1--26.","journal-title":"SIGMOD"},{"key":"e_1_2_1_22_1","volume-title":"Camel: Managing Data for Efficient Stream Learning. In SIGMOD. 1271--1285.","author":"Li Yiming","year":"2022","unstructured":"Yiming Li, Yanyan Shen, and Lei Chen. 2022. Camel: Managing Data for Efficient Stream Learning. In SIGMOD. 1271--1285."},{"key":"e_1_2_1_23_1","doi-asserted-by":"crossref","unstructured":"Chi Harold Liu Chengzhe Piao Xiaoxin Ma Ye Yuan Jian Tang Guoren Wang and Kin K Leung. 2021. Modeling citywide crowd flows using attentive convolutional LSTM. In ICDE. 217--228.","DOI":"10.1109\/ICDE51399.2021.00026"},{"key":"e_1_2_1_24_1","doi-asserted-by":"crossref","unstructured":"Ziqiao Liu Hao Miao Yan Zhao Chenxi Liu Kai Zheng and Huan Li. 2024. LightTR: A Lightweight Framework for Federated Trajectory Recovery. In ICDE. 4422--4434.","DOI":"10.1109\/ICDE60146.2024.00337"},{"key":"e_1_2_1_25_1","first-page":"1","article-title":"Training gaussian mixture models at scale via coresets","volume":"18","author":"Lucic Mario","year":"2018","unstructured":"Mario Lucic, Matthew Faulkner, Andreas Krause, and Dan Feldman. 2018. Training gaussian mixture models at scale via coresets. J. Mach. Learn. Res 18, 160 (2018), 1--25.","journal-title":"J. Mach. Learn. Res"},{"key":"e_1_2_1_26_1","doi-asserted-by":"publisher","DOI":"10.1109\/TKDE.2022.3179781"},{"key":"e_1_2_1_27_1","volume-title":"A Unified Replay-Based Continuous Learning Framework for Spatio-Temporal Prediction on Streaming Data. In ICDE. 1050--1062.","author":"Miao Hao","year":"2024","unstructured":"Hao Miao, Yan Zhao, Chenjuan Guo, Bin Yang, Kai Zheng, Feiteng Huang, Jiandong Xie, and Christian S. Jensen. 2024. A Unified Replay-Based Continuous Learning Framework for Spatio-Temporal Prediction on Streaming Data. In ICDE. 1050--1062."},{"key":"e_1_2_1_28_1","unstructured":"Timothy Nguyen Zhourong Chen and Jaehoon Lee. 2020. Dataset Meta-Learning from Kernel Ridge-Regression. In ICLR."},{"key":"e_1_2_1_29_1","unstructured":"Yuqi Nie Nam H. 
Nguyen Phanwadee Sinthong and Jayant Kalagnanam. 2023. A Time Series is Worth 64 Words: Long-term Forecasting with Transformers. In ICLR."},{"key":"e_1_2_1_30_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.ijforecast.2019.07.001"},{"key":"e_1_2_1_31_1","first-page":"116","article-title":"Adaptive seasonal time series models for forecasting short-term traffic flow","volume":"2024","author":"Shekhar Shashank","year":"2007","unstructured":"Shashank Shekhar and Billy M Williams. 2007. Adaptive seasonal time series models for forecasting short-term traffic flow. TRR 2024, 1 (2007), 116--125.","journal-title":"TRR"},{"key":"e_1_2_1_32_1","doi-asserted-by":"publisher","DOI":"10.14778\/3611479.3611536"},{"key":"e_1_2_1_33_1","doi-asserted-by":"crossref","unstructured":"Kai Sheng Tai Vatsal Sharan Peter Bailis and Gregory Valiant. 2018. Sketching linear classifiers over data streams. In SIGMOD. 757--772.","DOI":"10.1145\/3183713.3196930"},{"key":"e_1_2_1_34_1","article-title":"Visualizing data using t-SNE","volume":"9","author":"der Maaten Laurens Van","year":"2008","unstructured":"Laurens Van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. J Mach Learn Res 9, 11 (2008).","journal-title":"J Mach Learn Res"},{"key":"e_1_2_1_35_1","volume-title":"Attention is all you need. NeurIPS 30","author":"Vaswani Ashish","year":"2017","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. NeurIPS 30 (2017)."},{"key":"e_1_2_1_36_1","doi-asserted-by":"publisher","DOI":"10.14778\/3415478.3415504"},{"key":"e_1_2_1_37_1","first-page":"480","article-title":"iEDeaL: A Deep Learning Framework for Detecting Highly Imbalanced Interictal Epileptiform Discharges","volume":"16","author":"Wang Qitong","year":"2022","unstructured":"Qitong Wang, Stephen Whitmarsh, Vincent Navarro, and Themis Palpanas. 2022. 
iEDeaL: A Deep Learning Framework for Detecting Highly Imbalanced Interictal Epileptiform Discharges. PVLDB 16, 3 (2022), 480--490.","journal-title":"PVLDB"},{"key":"e_1_2_1_38_1","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3469087","article-title":"Multivariate correlation-aware spatio-temporal graph convolutional networks for multi-scale traffic prediction","volume":"13","author":"Wang Senzhang","year":"2022","unstructured":"Senzhang Wang, Meiyue Zhang, Hao Miao, Zhaohui Peng, and Philip S Yu. 2022. Multivariate correlation-aware spatio-temporal graph convolutional networks for multi-scale traffic prediction. TIST 13, 3 (2022), 1--22.","journal-title":"TIST"},{"key":"e_1_2_1_39_1","doi-asserted-by":"crossref","unstructured":"Max Welling. 2009. Herding dynamical weights to learn. In ICML. 1121--1128.","DOI":"10.1145\/1553374.1553517"},{"key":"e_1_2_1_40_1","doi-asserted-by":"crossref","unstructured":"Qingsong Wen Kai He Liang Sun Yingying Zhang Min Ke and Huan Xu. 2021. RobustPeriod: Robust time-frequency mining for multiple periodicity detection. In SIGMOD. 2328--2337.","DOI":"10.1145\/3448016.3452779"},{"key":"e_1_2_1_41_1","first-page":"22419","article-title":"Autoformer: Decomposition transformers with auto-correlation for long-term series forecasting","volume":"34","author":"Wu Haixu","year":"2021","unstructured":"Haixu Wu, Jiehui Xu, Jianmin Wang, and Mingsheng Long. 2021. Autoformer: Decomposition transformers with auto-correlation for long-term series forecasting. NeurIPS 34 (2021), 22419--22430.","journal-title":"NeurIPS"},{"key":"e_1_2_1_42_1","doi-asserted-by":"publisher","DOI":"10.1007\/s00778-024-00872-x"},{"key":"e_1_2_1_43_1","volume-title":"Fully Automated Correlated Time Series Forecasting in Minutes. PVLDB 18","author":"Wu Xinle","year":"2024","unstructured":"Xinle Wu, Xingjian Wu, Dalin Zhang, Miao Zhang, Chenjuan Guo, Bin Yang, and Christian S. Jensen. 2024. Fully Automated Correlated Time Series Forecasting in Minutes. PVLDB 
18 (2024)."},{"key":"e_1_2_1_44_1","first-page":"1","article-title":"AutoCTS+: Joint Neural Architecture and Hyperparameter Search for Correlated Time Series Forecasting","volume":"1","author":"Wu Xinle","year":"2023","unstructured":"Xinle Wu, Dalin Zhang, Miao Zhang, Chenjuan Guo, Bin Yang, and Christian S Jensen. 2023. AutoCTS+: Joint Neural Architecture and Hyperparameter Search for Correlated Time Series Forecasting. SIGMOD 1, 1 (2023), 1--26.","journal-title":"SIGMOD"},{"key":"e_1_2_1_45_1","first-page":"2148","article-title":"Time series data encoding for efficient storage: A comparative analysis in apache IoTDB","volume":"15","author":"Xiao Jinzhao","year":"2022","unstructured":"Jinzhao Xiao, Yuxiang Huang, Changyu Hu, Shaoxu Song, Xiangdong Huang, and Jianmin Wang. 2022. Time series data encoding for efficient storage: A comparative analysis in apache IoTDB. PVLDB 15, 10 (2022), 2148--2160.","journal-title":"PVLDB"},{"key":"e_1_2_1_46_1","doi-asserted-by":"crossref","unstructured":"Ronghui Xu Hao Miao Senzhang Wang Philip S Yu and Jianxin Wang. 2024. PeFAD: A Parameter-Efficient Federated Framework for Time Series Anomaly Detection. In SIGKDD. 3621--3632.","DOI":"10.1145\/3637528.3671753"},{"key":"e_1_2_1_47_1","doi-asserted-by":"crossref","unstructured":"Xinyang Yu Yanqing Peng Feifei Li Sheng Wang Xiaowei Shen Huijun Mai and Yue Xie. 2020. Two-level data compression using machine learning in time series database. In ICDE. 1333--1344.","DOI":"10.1109\/ICDE48307.2020.00119"},{"key":"e_1_2_1_48_1","volume-title":"Dataset Condensation with Gradient Matching. In ICLR.","author":"Zhao Bo","year":"2021","unstructured":"Bo Zhao, Konda Reddy Mopuri, and Hakan Bilen. 2021. Dataset Condensation with Gradient Matching. In ICLR."},{"key":"e_1_2_1_49_1","doi-asserted-by":"crossref","unstructured":"Ganlong Zhao Guanbin Li Yipeng Qin and Yizhou Yu. 2023. Improved distribution matching for dataset condensation. In CVPR. 
7856--7865.","DOI":"10.1109\/CVPR52729.2023.00759"},{"key":"e_1_2_1_50_1","doi-asserted-by":"publisher","DOI":"10.14778\/3636218.3636230"},{"key":"e_1_2_1_51_1","volume-title":"Outlier Detection for Streaming Task Assignment in Crowdsourcing. In WWW. 1933--1943.","author":"Zhao Yan","year":"2022","unstructured":"Yan Zhao, Xuanhao Chen, Liwei Deng, Tung Kieu, Chenjuan Guo, Bin Yang, Kai Zheng, and Christian S. Jensen. 2022. Outlier Detection for Streaming Task Assignment in Crowdsourcing. In WWW. 1933--1943."},{"key":"e_1_2_1_52_1","volume-title":"PACE: learning effective task decomposition for human-in-the-loop healthcare delivery. In SIGMOD. 2156--2168.","author":"Zheng Kaiping","year":"2021","unstructured":"Kaiping Zheng, Gang Chen, Melanie Herschel, Kee Yuan Ngiam, Beng Chin Ooi, and Jinyang Gao. 2021. PACE: learning effective task decomposition for human-in-the-loop healthcare delivery. In SIGMOD. 2156--2168."},{"key":"e_1_2_1_53_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v35i12.17325"},{"key":"e_1_2_1_54_1","volume-title":"Fedformer: Frequency enhanced decomposed transformer for long-term series forecasting. In ICML. 27268--27286.","author":"Zhou Tian","year":"2022","unstructured":"Tian Zhou, Ziqing Ma, Qingsong Wen, Xue Wang, Liang Sun, and Rong Jin. 2022. Fedformer: Frequency enhanced decomposed transformer for long-term series forecasting. In ICML. 27268--27286."},{"key":"e_1_2_1_55_1","first-page":"9813","article-title":"Dataset distillation using neural feature regression","volume":"35","author":"Zhou Yongchao","year":"2022","unstructured":"Yongchao Zhou, Ehsan Nezhadarya, and Jimmy Ba. 2022. Dataset distillation using neural feature regression. 
NeurIPS 35 (2022), 9813--9827.","journal-title":"NeurIPS"}],"container-title":["Proceedings of the VLDB Endowment"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.14778\/3705829.3705841","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,2,28]],"date-time":"2025-02-28T23:22:57Z","timestamp":1740784977000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.14778\/3705829.3705841"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,10]]},"references-count":55,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2024,10]]}},"alternative-id":["10.14778\/3705829.3705841"],"URL":"https:\/\/doi.org\/10.14778\/3705829.3705841","relation":{},"ISSN":["2150-8097"],"issn-type":[{"value":"2150-8097","type":"print"}],"subject":[],"published":{"date-parts":[[2024,10]]},"assertion":[{"value":"2025-02-28","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}