{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,7,7]],"date-time":"2025-07-07T19:40:09Z","timestamp":1751917209946,"version":"3.41.2"},"reference-count":44,"publisher":"Association for Computing Machinery (ACM)","issue":"1","content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["SIGKDD Explor. Newsl."],"published-print":{"date-parts":[[2025,7,7]]},"abstract":"<jats:p>Large Language Models (LLMs) exhibit exceptional proficiency in comprehending human language. Despite their significant success across a wide array of tasks, understanding tabular data remains a challenging task. Especially, tabular data lacks an intrinsic order of the different features (table fields), whereas LLMs take only sequential inputs. Consequently, an artificial order is imposed, the impact of which on the performance of LLMs has not yet been thoroughly investigated. Surprisingly, as discovered in this work, this artificially induced order bias dramatically influences the performance of LLMs on tasks related to tabular data. Mitigating the order bias presents a significant challenge. To address this, we propose a simple and cost-effective method, Re-Ordering Tabular feATures fOR LLM (ROTATOR-LLM), to conduct test-time compute without fine-tuning the base LLM. Aiming at optimizing the feature order of tabular data and boosting LLMs' capability to better understand the data semantics, ROTATOR-LLM re-frames the ordering problem as a feature trajectory generation task. A dynamic programming based meta-controller is trained to auto-regressively generate an individualized feature trajectory for each data instance via accumulative value estimation of the serialized feature input through the LLM's final performance metrics. Model performance is maximized by iteratively selecting features across different steps. Experimental results on multiple datasets and LLMs show close to or over 20% performance boosts via features reordered by ROTATOR-LLM against the un-ordered counterpart. 
Meanwhile, it outperforms state-of-the-art tabular LLM methods by a significant margin.<\/jats:p>","DOI":"10.1145\/3748239.3748248","type":"journal-article","created":{"date-parts":[[2025,7,7]],"date-time":"2025-07-07T19:19:42Z","timestamp":1751915982000},"page":"112-123","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":0,"title":["Advancing Table Understanding of Large Language Models via Feature Re-ordering"],"prefix":"10.1145","volume":"27","author":[{"given":"Guanchu","family":"Wang","sequence":"first","affiliation":[{"name":"Rice University"}]},{"given":"Yuzhong","family":"Chen","sequence":"additional","affiliation":[{"name":"Visa Research"}]},{"given":"Huiyuan","family":"Chen","sequence":"additional","affiliation":[{"name":"Visa Research"}]},{"given":"Xiran","family":"Fan","sequence":"additional","affiliation":[{"name":"Visa Research"}]},{"given":"Junpeng","family":"Wang","sequence":"additional","affiliation":[{"name":"Visa Research"}]},{"given":"Xiaoting","family":"Li","sequence":"additional","affiliation":[{"name":"Visa Research"}]},{"given":"Mingzhi","family":"Hu","sequence":"additional","affiliation":[{"name":"Worcester Polytechnic Institute"}]},{"given":"Chia-Yuan","family":"Chang","sequence":"additional","affiliation":[{"name":"Texas A&amp;M University"}]},{"given":"Xia","family":"Hu","sequence":"additional","affiliation":[{"name":"Rice University"}]}],"member":"320","published-online":{"date-parts":[[2025,7,7]]},"reference":[{"key":"e_1_2_1_1_1","volume-title":"UCI machine learning repository","author":"Asuncion Arthur","year":"2007","unstructured":"Arthur Asuncion, David Newman, et al. UCI machine learning repository, 2007."},{"key":"e_1_2_1_2_1","volume-title":"Guanchu Wang, Mingzhi Hu, Zhichao Xu, Yan Zheng, Mahashweta Das, et al. Main-rag: Multi-agent filtering retrieval-augmented generation. arXiv preprint arXiv:2501.00332","author":"Chang Chia-Yuan","year":"2024","unstructured":"Chia-Yuan Chang, Zhimeng Jiang, Vineeth Rakesh, Menghai Pan, Chin-Chia Michael Yeh, Guanchu Wang, Mingzhi Hu, Zhichao Xu, Yan Zheng, Mahashweta Das, et al. Main-rag: Multi-agent filtering retrieval-augmented generation. arXiv preprint arXiv:2501.00332, 2024."},{"key":"e_1_2_1_3_1","volume-title":"Large language models are few (1)-shot table reasoners. arXiv preprint arXiv:2210.06710","author":"Chen Wenhu","year":"2022","unstructured":"Wenhu Chen. Large language models are few (1)-shot table reasoners. arXiv preprint arXiv:2210.06710, 2022."},{"key":"e_1_2_1_4_1","volume-title":"Premise order matters in reasoning with large language models. arXiv preprint arXiv:2402.08939","author":"Chen Xinyun","year":"2024","unstructured":"Xinyun Chen, Ryan A Chi, Xuezhi Wang, and Denny Zhou. Premise order matters in reasoning with large language models. arXiv preprint arXiv:2402.08939, 2024."},{"key":"e_1_2_1_5_1","volume-title":"Tianle Li, Dacheng Li, Hao Zhang, Banghua Zhu, Michael Jordan, Joseph E Gonzalez, et al. Chatbot arena: An open platform for evaluating llms by human preference. arXiv preprint arXiv:2403.04132","author":"Chiang Wei-Lin","year":"2024","unstructured":"Wei-Lin Chiang, Lianmin Zheng, Ying Sheng, Anastasios Nikolas Angelopoulos, Tianle Li, Dacheng Li, Hao Zhang, Banghua Zhu, Michael Jordan, Joseph E Gonzalez, et al. Chatbot arena: An open platform for evaluating llms by human preference. arXiv preprint arXiv:2403.04132, 2024."},{"key":"e_1_2_1_6_1","volume-title":"Faithlm: Towards faithful explanations for large language models. 
arXiv preprint arXiv:2402.04678","author":"Chuang Yu-Neng","year":"2024","unstructured":"Yu-Neng Chuang, Guanchu Wang, Chia-Yuan Chang, Ruixiang Tang, Shaochen Zhong, Fan Yang, Mengnan Du, Xuanting Cai, and Xia Hu. Faithlm: Towards faithful explanations for large language models. arXiv preprint arXiv:2402.04678, 2024."},{"key":"e_1_2_1_7_1","volume-title":"Confident or seek stronger: Exploring uncertainty-based on-device llm routing from benchmarking to generalization. arXiv preprint arXiv:2502.04428","author":"Chuang Yu-Neng","year":"2025","unstructured":"Yu-Neng Chuang, Leisheng Yu, Guanchu Wang, Lizhe Zhang, Zirui Liu, Xuanting Cai, Yang Sui, Vladimir Braverman, and Xia Hu. Confident or seek stronger: Exploring uncertainty-based on-device llm routing from benchmarking to generalization. arXiv preprint arXiv:2502.04428, 2025."},{"key":"e_1_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.1145\/3626772.3661384"},{"key":"e_1_2_1_9_1","unstructured":"Xi Fang, Weijie Xu, Fiona Anting Tan, Jiani Zhang, Ziqing Hu, Yanjun Jane Qi, Scott Nickleach, Diego Socolinsky, Srinivasan Sengamedu, Christos Faloutsos, et al. Large language models (llms) on tabular data: Prediction, generation, and understanding - a survey. 2024."},{"key":"e_1_2_1_10_1","first-page":"5549","volume-title":"International Conference on Artificial Intelligence and Statistics","author":"Hegselmann Stefan","year":"2023","unstructured":"Stefan Hegselmann, Alejandro Buendia, Hunter Lang, Monica Agrawal, Xiaoyi Jiang, and David Sontag. Tabllm: Few-shot classification of tabular data with large language models. In International Conference on Artificial Intelligence and Statistics, pages 5549--5581. PMLR, 2023."},{"key":"e_1_2_1_11_1","volume-title":"Annotatedtables: A large tabular dataset with language model annotations. arXiv preprint arXiv:2406.16349","author":"Hu Yaojie","year":"2024","unstructured":"Yaojie Hu, Ilias Fountalis, Jin Tian, and Nikolaos Vasiloglou. Annotatedtables: A large tabular dataset with language model annotations. arXiv preprint arXiv:2406.16349, 2024."},{"key":"e_1_2_1_12_1","volume-title":"Towards better serialization of tabular data for few-shot classification. arXiv preprint arXiv:2312.12464","author":"Jaitly Sukriti","year":"2023","unstructured":"Sukriti Jaitly, Tanay Shah, Ashish Shugani, and Razik Singh Grewal. Towards better serialization of tabular data for few-shot classification. arXiv preprint arXiv:2312.12464, 2023."},{"key":"e_1_2_1_13_1","volume-title":"Diego de las Casas, Emma Bou Hanna, Florian Bressand, et al. Mixtral of experts. arXiv preprint arXiv:2401.04088","author":"Jiang Albert Q","year":"2024","unstructured":"Albert Q Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, et al. Mixtral of experts. arXiv preprint arXiv:2401.04088, 2024."},{"key":"e_1_2_1_14_1","volume-title":"Llm maybe longlm: Self-extend llm context window without tuning. arXiv preprint arXiv:2401.01325","author":"Jin Hongye","year":"2024","unstructured":"Hongye Jin, Xiaotian Han, Jingfeng Yang, Zhimeng Jiang, Zirui Liu, Chia-Yuan Chang, Huiyuan Chen, and Xia Hu. Llm maybe longlm: Self-extend llm context window without tuning. 
arXiv preprint arXiv:2401.01325, 2024."},{"key":"e_1_2_1_15_1","first-page":"5637","volume-title":"International conference on machine learning","author":"Koh Pang Wei","year":"2021","unstructured":"Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Irena Gao, et al. Wilds: A benchmark of in-the-wild distribution shifts. In International conference on machine learning, pages 5637--5664. PMLR, 2021."},{"issue":"3","key":"e_1_2_1_16_1","first-page":"1","article-title":"Table finetuned gpt for diverse table tasks","volume":"2","author":"Li Peng","year":"2024","unstructured":"Peng Li, Yeye He, Dror Yashar, Weiwei Cui, Song Ge, Haidong Zhang, Danielle Rifinski Fainman, Dongmei Zhang, and Surajit Chaudhuri. Table-gpt: Table finetuned gpt for diverse table tasks. Proceedings of the ACM on Management of Data, 2(3):1--28, 2024.","journal-title":"Proceedings of the ACM on Management of Data"},{"key":"e_1_2_1_17_1","volume-title":"Suriya Gunasekar, and Yin Tat Lee. Textbooks are all you need ii: phi-1.5 technical report. arXiv preprint arXiv:2309.05463","author":"Li Yuanzhi","year":"2023","unstructured":"Yuanzhi Li, S\u00e9bastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, and Yin Tat Lee. Textbooks are all you need ii: phi-1.5 technical report. arXiv preprint arXiv:2309.05463, 2023."},{"key":"e_1_2_1_18_1","volume-title":"Rethinking tabular data understanding with large language models. arXiv preprint arXiv:2312.16702","author":"Liu Tianyang","year":"2023","unstructured":"Tianyang Liu, Fei Wang, and Muhao Chen. Rethinking tabular data understanding with large language models. arXiv preprint arXiv:2312.16702, 2023."},{"key":"e_1_2_1_19_1","first-page":"3402","article-title":"Winner-take-all column row sampling for memory efficient adaptation of language model","volume":"36","author":"Liu Zirui","year":"2023","unstructured":"Zirui Liu, Guanchu Wang, Shaochen Henry Zhong, Zhaozhuo Xu, Daochen Zha, Ruixiang Ryan Tang, Zhimeng Stephen Jiang, Kaixiong Zhou, Vipin Chaudhary, Shuai Xu, et al. Winner-take-all column row sampling for memory efficient adaptation of language model. Advances in Neural Information Processing Systems, 36:3402--3424, 2023.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_2_1_20_1","volume-title":"Shaochen Zhong, Hongyi Liu, Jiayi Yuan, Yang Sui, Vladimir Braverman, Vipin Chaudhary, et al. Autol2s: Auto long-short reasoning for efficient large language models. arXiv preprint arXiv:2505.22662","author":"Luo Feng","year":"2025","unstructured":"Feng Luo, Yu-Neng Chuang, Guanchu Wang, Hoang Anh Duy Le, Shaochen Zhong, Hongyi Liu, Jiayi Yuan, Yang Sui, Vladimir Braverman, Vipin Chaudhary, et al. Autol2s: Auto long-short reasoning for efficient large language models. arXiv preprint arXiv:2505.22662, 2025."},{"key":"e_1_2_1_21_1","volume-title":"Can foundation models wrangle your data? arXiv preprint arXiv:2205.09911","author":"Narayan Avanika","year":"2022","unstructured":"Avanika Narayan, Ines Chami, Laurel Orr, Simran Arora, and Christopher R\u00e9. Can foundation models wrangle your data? arXiv preprint arXiv:2205.09911, 2022."},{"key":"e_1_2_1_22_1","volume-title":"Tatsunori B Hashimoto, and Percy Liang. Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization. 
arXiv preprint arXiv:1911.08731","author":"Sagawa Shiori","year":"2019","unstructured":"Shiori Sagawa, Pang Wei Koh, Tatsunori B Hashimoto, and Percy Liang. Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization. arXiv preprint arXiv:1911.08731, 2019."},{"key":"e_1_2_1_23_1","volume-title":"Tabular representation, noisy operators, and impacts on table structure understanding tasks in llms. arXiv preprint arXiv:2310.10358","author":"Singha Ananya","year":"2023","unstructured":"Ananya Singha, Jos\u00e9 Cambronero, Sumit Gulwani, Vu Le, and Chris Parnin. Tabular representation, noisy operators, and impacts on table structure understanding tasks in llms. arXiv preprint arXiv:2310.10358, 2023."},{"key":"e_1_2_1_24_1","volume-title":"Scaling llm test-time compute optimally can be more effective than scaling model parameters. arXiv preprint arXiv:2408.03314","author":"Snell Charlie","year":"2024","unstructured":"Charlie Snell, Jaehoon Lee, Kelvin Xu, and Aviral Kumar. Scaling llm test-time compute optimally can be more effective than scaling model parameters. arXiv preprint arXiv:2408.03314, 2024."},{"key":"e_1_2_1_25_1","volume-title":"Stop overthinking: A survey on efficient reasoning for large language models. arXiv preprint arXiv:2503.16419","author":"Sui Yang","year":"2025","unstructured":"Yang Sui, Yu-Neng Chuang, Guanchu Wang, Jiamu Zhang, Tianyi Zhang, Jiayi Yuan, Hongyi Liu, Andrew Wen, Shaochen Zhong, Hanjie Chen, et al. Stop overthinking: A survey on efficient reasoning for large language models. arXiv preprint arXiv:2503.16419, 2025."},{"key":"e_1_2_1_26_1","doi-asserted-by":"publisher","DOI":"10.1145\/3616855.3635752"},{"key":"e_1_2_1_27_1","volume-title":"Tap4llm: Table provider on sampling, augmenting, and packing semi-structured data for large language model reasoning. arXiv preprint arXiv:2312.09039","author":"Sui Yuan","year":"2023","unstructured":"Yuan Sui, Jiaru Zou, Mengyu Zhou, Xinyi He, Lun Du, Shi Han, and Dongmei Zhang. Tap4llm: Table provider on sampling, augmenting, and packing semi-structured data for large language model reasoning. arXiv preprint arXiv:2312.09039, 2023."},{"key":"e_1_2_1_28_1","volume-title":"Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288","author":"Touvron Hugo","year":"2023","unstructured":"Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023."},{"key":"e_1_2_1_29_1","volume-title":"Taylor unswift: Secured weight release for large language models via taylor expansion. arXiv preprint arXiv:2410.05331","author":"Wang Guanchu","year":"2024","unstructured":"Guanchu Wang, Yu-Neng Chuang, Ruixiang Tang, Shaochen Zhong, Jiayi Yuan, Hongye Jin, Zirui Liu, Vipin Chaudhary, Shuai Xu, James Caverlee, et al. Taylor unswift: Secured weight release for large language models via taylor expansion. arXiv preprint arXiv:2410.05331, 2024."},{"key":"e_1_2_1_30_1","volume-title":"Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171","author":"Wang Xuezhi","year":"2022","unstructured":"Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. 
arXiv preprint arXiv:2203.11171, 2022."},{"key":"e_1_2_1_31_1","volume-title":"Meditab: Scaling medical tabular data predictors via data consolidation, enrichment, and refinement. arXiv preprint arXiv:2305.12081","author":"Wang Zifeng","year":"2023","unstructured":"Zifeng Wang, Chufan Gao, Cao Xiao, and Jimeng Sun. Meditab: Scaling medical tabular data predictors via data consolidation, enrichment, and refinement. arXiv preprint arXiv:2305.12081, 2023."},{"key":"e_1_2_1_32_1","volume-title":"Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:24824--24837","author":"Wei Jason","year":"2022","unstructured":"Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:24824--24837, 2022."},{"key":"e_1_2_1_33_1","volume-title":"Huggingface's transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771","author":"Wolf Thomas","year":"2019","unstructured":"Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, et al. Huggingface's transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771, 2019."},{"key":"e_1_2_1_34_1","volume-title":"In-context example ordering guided by label distributions. arXiv preprint arXiv:2402.11447","author":"Xu Zhichao","year":"2024","unstructured":"Zhichao Xu, Daniel Cohen, Bei Wang, and Vivek Srikumar. In-context example ordering guided by label distributions. arXiv preprint arXiv:2402.11447, 2024."},{"key":"e_1_2_1_35_1","volume-title":"Self-ensemble: Mitigating confidence distortion for large language models. arXiv preprint arXiv:2506.01951","author":"Xu Zicheng","year":"2025","unstructured":"Zicheng Xu, Guanchu Wang, Guangyao Zheng, Yu-Neng Chuang, Alexander Szalay, Xia Hu, and Vladimir Braverman. Self-ensemble: Mitigating confidence distortion for large language models. arXiv preprint arXiv:2506.01951, 2025."},{"key":"e_1_2_1_36_1","volume-title":"Harnessing the power of llms in practice: A survey on chatgpt and beyond. ACM Transactions on Knowledge Discovery from Data, 18(6):1--32","author":"Yang Jingfeng","year":"2024","unstructured":"Jingfeng Yang, Hongye Jin, Ruixiang Tang, Xiaotian Han, Qizhang Feng, Haoming Jiang, Shaochen Zhong, Bing Yin, and Xia Hu. Harnessing the power of llms in practice: A survey on chatgpt and beyond. ACM Transactions on Knowledge Discovery from Data, 18(6):1--32, 2024."},{"key":"e_1_2_1_37_1","first-page":"36","article-title":"Tree of thoughts: Deliberate problem solving with large language models","author":"Yao Shunyu","year":"2024","unstructured":"Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. Advances in Neural Information Processing Systems, 36, 2024.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_2_1_38_1","volume-title":"Unified language representation for question answering over text, tables, and images. arXiv preprint arXiv:2306.16762","author":"Yu Bowen","year":"2023","unstructured":"Bowen Yu, Cheng Fu, Haiyang Yu, Fei Huang, and Yongbin Li. Unified language representation for question answering over text, tables, and images. 
arXiv preprint arXiv:2306.16762, 2023."},{"key":"e_1_2_1_39_1","volume-title":"Kv cache compression, but what must we give in return? a comprehensive benchmark of long context capable approaches. arXiv preprint arXiv:2407.01527","author":"Yuan Jiayi","year":"2024","unstructured":"Jiayi Yuan, Hongyi Liu, Yu-Neng Chuang, Songchen Li, Guanchu Wang, Duy Le, Hongye Jin, Vipin Chaudhary, Zhaozhuo Xu, Zirui Liu, et al. Kv cache compression, but what must we give in return? a comprehensive benchmark of long context capable approaches. arXiv preprint arXiv:2407.01527, 2024."},{"key":"e_1_2_1_40_1","volume-title":"Kwei-Herng Lai, Fan Yang, Zhimeng Jiang, Shaochen Zhong, and Xia Hu. Data-centric artificial intelligence: A survey. arXiv preprint arXiv:2303.10158","author":"Zha Daochen","year":"2023","unstructured":"Daochen Zha, Zaid Pervaiz Bhat, Kwei-Herng Lai, Fan Yang, Zhimeng Jiang, Shaochen Zhong, and Xia Hu. Data-centric artificial intelligence: A survey. arXiv preprint arXiv:2303.10158, 2023."},{"key":"e_1_2_1_41_1","volume-title":"et al. Tablegpt: Towards unifying tables, nature language and commands into one gpt. arXiv preprint arXiv:2307.08674","author":"Zha Liangyu","year":"2023","unstructured":"Liangyu Zha, Junlin Zhou, Liyao Li, Rui Wang, Qingyi Huang, Saisai Yang, Jing Yuan, Changbao Su, Xiang Li, Aofeng Su, et al. Tablegpt: Towards unifying tables, nature language and commands into one gpt. arXiv preprint arXiv:2307.08674, 2023."},{"key":"e_1_2_1_42_1","volume-title":"I3s: Importance sampling subspace selection for low-rank optimization in llm pretraining. arXiv preprint arXiv:2502.05790","author":"Zhang Haochen","year":"2025","unstructured":"Haochen Zhang, Junze Yin, Guanchu Wang, Zirui Liu, Tianyi Zhang, Anshumali Shrivastava, Lin Yang, and Vladimir Braverman. I3s: Importance sampling subspace selection for low-rank optimization in llm pretraining. arXiv preprint arXiv:2502.05790, 2025."},{"key":"e_1_2_1_43_1","volume-title":"Tablellama: Towards open large generalist models for tables. arXiv preprint arXiv:2311.09206","author":"Zhang Tianshu","year":"2023","unstructured":"Tianshu Zhang, Xiang Yue, Yifei Li, and Huan Sun. Tablellama: Towards open large generalist models for tables. arXiv preprint arXiv:2311.09206, 2023."},{"key":"e_1_2_1_44_1","volume-title":"et al. Tablellm: Enabling tabular data manipulation by llms in real office usage scenarios. arXiv preprint arXiv:2403.19318","author":"Zhang Xiaokang","year":"2024","unstructured":"Xiaokang Zhang, Jing Zhang, Zeyao Ma, Yang Li, Bohan Zhang, Guanlin Li, Zijun Yao, Kangli Xu, Jinchang Zhou, Daniel Zhang-Li, et al. Tablellm: Enabling tabular data manipulation by llms in real office usage scenarios. 
arXiv preprint arXiv:2403.19318, 2024."}],"container-title":["ACM SIGKDD Explorations Newsletter"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3748239.3748248","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,7,7]],"date-time":"2025-07-07T19:21:09Z","timestamp":1751916069000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3748239.3748248"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,7,7]]},"references-count":44,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2025,7,7]]}},"alternative-id":["10.1145\/3748239.3748248"],"URL":"https:\/\/doi.org\/10.1145\/3748239.3748248","relation":{},"ISSN":["1931-0145","1931-0153"],"issn-type":[{"value":"1931-0145","type":"print"},{"value":"1931-0153","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,7,7]]},"assertion":[{"value":"2025-07-07","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}