{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,14]],"date-time":"2026-04-14T23:08:17Z","timestamp":1776208097250,"version":"3.50.1"},"publisher-location":"New York, NY, USA","reference-count":42,"publisher":"ACM","license":[{"start":{"date-parts":[[2023,8,4]],"date-time":"2023-08-04T00:00:00Z","timestamp":1691107200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by-nc-sa\/4.0\/"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2023,8,6]]},"DOI":"10.1145\/3580305.3599533","type":"proceedings-article","created":{"date-parts":[[2023,8,4]],"date-time":"2023-08-04T18:13:58Z","timestamp":1691172838000},"page":"459-469","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":194,"title":["TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting"],"prefix":"10.1145","author":[{"ORCID":"https:\/\/orcid.org\/0009-0004-5824-9826","authenticated-orcid":false,"given":"Vijay","family":"Ekambaram","sequence":"first","affiliation":[{"name":"IBM Research, Bangalore, India"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9498-8536","authenticated-orcid":false,"given":"Arindam","family":"Jati","sequence":"additional","affiliation":[{"name":"IBM Research, Bangalore, India"}]},{"ORCID":"https:\/\/orcid.org\/0009-0008-6981-4463","authenticated-orcid":false,"given":"Nam","family":"Nguyen","sequence":"additional","affiliation":[{"name":"IBM Research, Yorktown Heights, USA"}]},{"ORCID":"https:\/\/orcid.org\/0009-0006-4423-3860","authenticated-orcid":false,"given":"Phanwadee","family":"Sinthong","sequence":"additional","affiliation":[{"name":"IBM Research, Yorktown Heights, USA"}]},{"ORCID":"https:\/\/orcid.org\/0009-0009-5051-2606","authenticated-orcid":false,"given":"Jayant","family":"Kalagnanam","sequence":"additional","affiliation":[{"name":"IBM Research, Yorktown Heights, USA"}]}],"member":"320","published-online":{"date-parts":[[2023,8,4]]},"reference":[{"key":"e_1_3_2_2_1_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.aei.2017.11.002"},{"key":"e_1_3_2_2_2_1","volume-title":"Crossformer: Transformer Utilizing Cross-Dimension Dependency for Multivariate Time Series Forecasting.","author":"Anonymous","year":"2023","unstructured":"Anonymous . 2023 . Crossformer: Transformer Utilizing Cross-Dimension Dependency for Multivariate Time Series Forecasting. (2023). https:\/\/openreview.net\/forum?id=vSVLM2j9eie under review. Anonymous. 2023. Crossformer: Transformer Utilizing Cross-Dimension Dependency for Multivariate Time Series Forecasting. (2023). https:\/\/openreview.net\/forum?id=vSVLM2j9eie under review."},{"key":"e_1_3_2_2_3_1","volume-title":"Jamie Ryan Kiros, and Geoffrey E Hinton","author":"Ba Jimmy Lei","year":"2016","unstructured":"Jimmy Lei Ba , Jamie Ryan Kiros, and Geoffrey E Hinton . 2016 . Layer normalization. arXiv preprint arXiv:1607.06450 (2016). Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450 (2016)."},{"key":"e_1_3_2_2_4_1","volume-title":"Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473","author":"Bahdanau Dzmitry","year":"2014","unstructured":"Dzmitry Bahdanau , Kyunghyun Cho , and Yoshua Bengio . 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 ( 2014 ). 
{"key":"e_1_3_2_2_5_1","volume-title":"BEiT: BERT Pre-Training of Image Transformers. In International Conference on Learning Representations. https:\/\/openreview.net\/forum?id=p-BhZSz59o4","author":"Bao Hangbo","year":"2022","unstructured":"Hangbo Bao, Li Dong, Songhao Piao, and Furu Wei. 2022. BEiT: BERT Pre-Training of Image Transformers. In International Conference on Learning Representations. https:\/\/openreview.net\/forum?id=p-BhZSz59o4"},{"key":"e_1_3_2_2_6_1","unstructured":"Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. 2021. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258 (2021)."},{"key":"e_1_3_2_2_7_1","unstructured":"Shoufa Chen, Enze Xie, Chongjian Ge, Runjian Chen, Ding Liang, and Ping Luo. 2021. CycleMLP: A MLP-like Architecture for Dense Prediction. (2021). https:\/\/doi.org\/10.48550\/ARXIV.2107.10224"},{"key":"e_1_3_2_2_8_1","volume-title":"BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. CoRR","author":"Devlin Jacob","year":"2018","unstructured":"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. CoRR, Vol. abs\/1810.04805 (2018). arXiv:1810.04805 http:\/\/arxiv.org\/abs\/1810.04805"},{"key":"e_1_3_2_2_9_1","doi-asserted-by":"publisher","DOI":"10.24963\/ijcai.2021\/324"},{"key":"e_1_3_2_2_10_1","unstructured":"Albert Gu, Karan Goel, and Christopher R\u00e9. 2021. Efficiently Modeling Long Sequences with Structured State Spaces. (2021). https:\/\/arxiv.org\/abs\/2111.00396"},{"key":"e_1_3_2_2_11_1","unstructured":"Jianyuan Guo, Yehui Tang, Kai Han, Xinghao Chen, Han Wu, Chao Xu, Chang Xu, and Yunhe Wang. 2021. Hire-MLP: Vision MLP via Hierarchical Rearrangement. (2021). https:\/\/doi.org\/10.48550\/ARXIV.2108.13341"},
{"key":"e_1_3_2_2_12_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.90"},{"key":"e_1_3_2_2_13_1","unstructured":"R.J. Hyndman and G. Athanasopoulos (Eds.). 2021. Forecasting: principles and practice. OTexts: Melbourne, Australia. OTexts.com\/fpp3."},{"key":"e_1_3_2_2_14_1","unstructured":"Arindam Jati, Vijay Ekambaram, Shaonli Pal, Brian Quanz, Wesley M. Gifford, Pavithra Harsha, Stuart Siegel, Sumanta Mukherjee, and Chandra Narayanaswami. 2022. Hierarchy-guided Model Selection for Time Series Forecasting. https:\/\/doi.org\/10.48550\/ARXIV.2211.15092"},{"key":"e_1_3_2_2_15_1","volume-title":"International Conference on Learning Representations. https:\/\/openreview.net\/forum?id=cGDAkQo1C0p","author":"Kim Taesung","year":"2022","unstructured":"Taesung Kim, Jinhee Kim, Yunwon Tae, Cheonbok Park, Jang-Ho Choi, and Jaegul Choo. 2022. Reversible Instance Normalization for Accurate Time-Series Forecasting against Distribution Shift. In International Conference on Learning Representations. https:\/\/openreview.net\/forum?id=cGDAkQo1C0p"},{"key":"e_1_3_2_2_16_1","doi-asserted-by":"publisher","DOI":"10.24963\/ijcai.2022\/297"},{"key":"e_1_3_2_2_17_1","first-page":"9204","article-title":"Pay attention to MLPs","volume":"34","author":"Liu Hanxiao","year":"2021","unstructured":"Hanxiao Liu, Zihang Dai, David So, and Quoc V Le. 2021. Pay attention to MLPs. Advances in Neural Information Processing Systems, Vol. 34 (2021), 9204--9215.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_2_18_1","volume-title":"Pyraformer: Low-Complexity Pyramidal Attention for Long-Range Time Series Modeling and Forecasting. In International Conference on Learning Representations.","author":"Liu Shizhan","year":"2022","unstructured":"Shizhan Liu, Hang Yu, Cong Liao, Jianguo Li, Weiyao Lin, Alex X Liu, and Schahram Dustdar. 2022. Pyraformer: Low-Complexity Pyramidal Attention for Long-Range Time Series Modeling and Forecasting. In International Conference on Learning Representations."},{"key":"e_1_3_2_2_19_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.ijforecast.2021.11.013"},{"key":"e_1_3_2_2_20_1","unstructured":"Yuqi Nie, Nam H. Nguyen, Phanwadee Sinthong, and Jayant Kalagnanam. 2022. A Time Series is Worth 64 Words: Long-term Forecasting with Transformers. https:\/\/doi.org\/10.48550\/ARXIV.2211.14730"},
{"key":"e_1_3_2_2_21_1","volume-title":"Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates","author":"Smith Leslie N.","year":"2017","unstructured":"Leslie N. Smith and Nicholay Topin. 2017. Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates. (2017). https:\/\/doi.org\/10.48550\/ARXIV.1708.07120"},{"key":"e_1_3_2_2_22_1","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR52688.2022.01066","unstructured":"Yehui Tang, Kai Han, Jianyuan Guo, Chang Xu, Yanxi Li, Chao Xu, and Yunhe Wang. 2021. An Image Patch is a Wave: Phase-Aware Vision MLP. (2021). https:\/\/doi.org\/10.48550\/ARXIV.2111.12294"},{"key":"e_1_3_2_2_23_1","first-page":"24261","article-title":"MLP-Mixer: An all-MLP architecture for vision","volume":"34","author":"Tolstikhin Ilya O","year":"2021","unstructured":"Ilya O Tolstikhin, Neil Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Thomas Unterthiner, Jessica Yung, Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, et al. 2021. MLP-Mixer: An all-MLP architecture for vision. Advances in Neural Information Processing Systems, Vol. 34 (2021), 24261--24272.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_2_24_1","volume-title":"Unsupervised Representation Learning for Time Series with Temporal Neighborhood Coding. In International Conference on Learning Representations. https:\/\/openreview.net\/forum?id=8qDwejCuCN","author":"Tonekaboni Sana","year":"2021","unstructured":"Sana Tonekaboni, Danny Eytan, and Anna Goldenberg. 2021. Unsupervised Representation Learning for Time Series with Temporal Neighborhood Coding. In International Conference on Learning Representations. https:\/\/openreview.net\/forum?id=8qDwejCuCN"},{"key":"e_1_3_2_2_25_1","volume-title":"ResMLP: Feedforward networks for image classification with data-efficient training","author":"Touvron Hugo","year":"2022","unstructured":"Hugo Touvron, Piotr Bojanowski, Mathilde Caron, Matthieu Cord, Alaaeldin El-Nouby, Edouard Grave, Gautier Izacard, Armand Joulin, Gabriel Synnaeve, Jakob Verbeek, et al. 2022. ResMLP: Feedforward networks for image classification with data-efficient training. IEEE Transactions on Pattern Analysis and Machine Intelligence (2022)."},
{"key":"e_1_3_2_2_26_1","volume-title":"Representation Learning with Contrastive Predictive Coding. CoRR","author":"van den Oord A\u00e4ron","year":"2018","unstructured":"A\u00e4ron van den Oord, Yazhe Li, and Oriol Vinyals. 2018. Representation Learning with Contrastive Predictive Coding. CoRR, Vol. abs\/1807.03748 (2018). arXiv:1807.03748 http:\/\/arxiv.org\/abs\/1807.03748"},{"key":"e_1_3_2_2_27_1","volume-title":"Advances in Neural Information Processing Systems","volume":"30","author":"Vaswani Ashish","year":"2017","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In Advances in Neural Information Processing Systems, Vol. 30. https:\/\/proceedings.neurips.cc\/paper\/2017\/file\/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf"},{"key":"e_1_3_2_2_28_1","volume-title":"Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting. In Advances in Neural Information Processing Systems.","author":"Wu Haixu","year":"2021","unstructured":"Haixu Wu, Jiehui Xu, Jianmin Wang, and Mingsheng Long. 2021. Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting. In Advances in Neural Information Processing Systems."},{"key":"e_1_3_2_2_29_1","unstructured":"Ling Yang and Shenda Hong. 2022. Unsupervised Time-Series Representation Learning with Iterative Bilinear Temporal-Spectral Fusion. In ICML."},{"key":"e_1_3_2_2_30_1","volume-title":"SMLP: Spatial-Shift MLP Architecture for Vision.","author":"Yu Tan","year":"2021","unstructured":"Tan Yu, Xu Li, Yunfeng Cai, Mingming Sun, and Ping Li. 2021. SMLP: Spatial-Shift MLP Architecture for Vision. (2021). https:\/\/doi.org\/10.48550\/ARXIV.2106.07477"},
{"key":"e_1_3_2_2_31_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v36i8.20881"},{"key":"e_1_3_2_2_32_1","volume-title":"Are Transformers Effective for Time Series Forecasting? arXiv preprint arXiv:2205.13504","author":"Zeng Ailing","year":"2022","unstructured":"Ailing Zeng, Muxi Chen, Lei Zhang, and Qiang Xu. 2022a. Are Transformers Effective for Time Series Forecasting? arXiv preprint arXiv:2205.13504 (2022). https:\/\/arxiv.org\/pdf\/2205.13504.pdf"},{"key":"e_1_3_2_2_33_1","volume-title":"Github Repo: Are Transformers Effective for Time Series Forecasting? arXiv preprint arXiv:2205.13504","author":"Zeng Ailing","year":"2022","unstructured":"Ailing Zeng, Muxi Chen, Lei Zhang, and Qiang Xu. 2022b. Github Repo: Are Transformers Effective for Time Series Forecasting? arXiv preprint arXiv:2205.13504 (2022). https:\/\/github.com\/cure-lab\/LTSF-Linear"},{"key":"e_1_3_2_2_34_1","doi-asserted-by":"crossref","DOI":"10.1155\/2022\/5596676","unstructured":"Tianping Zhang, Yizhuo Zhang, Wei Cao, Jiang Bian, Xiaohan Yi, Shun Zheng, and Jian Li. 2022. Less Is More: Fast Multivariate Time Series Forecasting with Light Sampling-oriented MLP Structures. https:\/\/doi.org\/10.48550\/ARXIV.2207.01186"},{"key":"e_1_3_2_2_35_1","volume-title":"Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting. In The Thirty-Fifth AAAI Conference on Artificial Intelligence","volume":"35","author":"Zhou Haoyi","year":"2021","unstructured":"Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, and Wancai Zhang. 2021. Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting. In The Thirty-Fifth AAAI Conference on Artificial Intelligence, Vol. 35. 11106--11115."},{"key":"e_1_3_2_2_36_1","volume-title":"Proc. 39th International Conference on Machine Learning","author":"Zhou Tian","year":"2022","unstructured":"Tian Zhou, Ziqing Ma, Qingsong Wen, Xue Wang, Liang Sun, and Rong Jin. 2022. FEDformer: Frequency enhanced decomposed transformer for long-term series forecasting. In Proc. 39th International Conference on Machine Learning (Baltimore, Maryland)."}
],"event":{"name":"KDD '23: The 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining","location":"Long Beach CA USA","acronym":"KDD '23","sponsor":["SIGMOD ACM Special Interest Group on Management of Data","SIGKDD ACM Special Interest Group on Knowledge Discovery in Data"]},"container-title":["Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3580305.3599533","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3580305.3599533","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T16:37:52Z","timestamp":1750178272000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3580305.3599533"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,8,4]]},"references-count":36,"alternative-id":["10.1145\/3580305.3599533","10.1145\/3580305"],"URL":"https:\/\/doi.org\/10.1145\/3580305.3599533","relation":{},"subject":[],"published":{"date-parts":[[2023,8,4]]},"assertion":[{"value":"2023-08-04","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}