{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,14]],"date-time":"2026-04-14T16:06:10Z","timestamp":1776182770606,"version":"3.50.1"},"reference-count":127,"publisher":"Association for Computing Machinery (ACM)","issue":"2","license":[{"start":{"date-parts":[[2025,3,13]],"date-time":"2025-03-13T00:00:00Z","timestamp":1741824000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by-nc\/4.0\/"}],"funder":[{"DOI":"10.13039\/\"https:\/\/doi.org\/10.13039\/100000001","name":"National Science Foundation","doi-asserted-by":"publisher","award":["2240347"],"award-info":[{"award-number":["2240347"]}],"id":[{"id":"10.13039\/\"https:\/\/doi.org\/10.13039\/100000001","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Manage. Inf. Syst."],"published-print":{"date-parts":[[2025,6,30]]},"abstract":"<jats:p>The use of machine learning (ML) to detect depression in online settings has emerged as an important health and wellness use case. In particular, the use of deep learning methods for depression detection from textual content posted on social media has garnered considerable attention. Conversely, there has been relatively limited evaluation of depression detection in clinical environments involving text generated from remote interviews. In this research, we review state-of-the-art feature-based ML, deep learning, and large language models for depression detection. We use a multidimensional analysis framework to benchmark various language models on a novel testbed comprising speech-to-text transcriptions of remote interviews. Our framework considers the impact of different transcription types and interview segments on depression detection performance. 
Finally, we summarize the key trends and takeaways from the review and benchmark evaluation and provide suggestions to guide the design of future detection methods.<\/jats:p>","DOI":"10.1145\/3673906","type":"journal-article","created":{"date-parts":[[2024,8,13]],"date-time":"2024-08-13T11:10:46Z","timestamp":1723547446000},"page":"1-35","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":14,"title":["Language Models for Online Depression Detection: A Review and Benchmark Analysis on Remote Interviews"],"prefix":"10.1145","volume":"16","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-0827-2257","authenticated-orcid":false,"given":"Ruiyang","family":"Qin","sequence":"first","affiliation":[{"name":"Computer Science, University of Notre Dame, Notre Dame, United States"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6371-7741","authenticated-orcid":false,"given":"Kai","family":"Yang","sequence":"additional","affiliation":[{"name":"College of Economics, Shenzhen University, Shenzhen, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7698-7794","authenticated-orcid":false,"given":"Ahmed","family":"Abbasi","sequence":"additional","affiliation":[{"name":"IT, Analytics, and Operations, University of Notre Dame, Notre Dame, United States"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9493-3447","authenticated-orcid":false,"given":"David","family":"Dobolyi","sequence":"additional","affiliation":[{"name":"University of Colorado Boulder, Boulder, United States"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6017-7049","authenticated-orcid":false,"given":"Salman","family":"Seyedi","sequence":"additional","affiliation":[{"name":"Emory University, Atlanta, United States"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9161-8874","authenticated-orcid":false,"given":"Emily","family":"Griner","sequence":"additional","affiliation":[{"name":"Emory University, Atlanta, United States"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5693-3278","authenticated-orcid":false,"given":"Hyeokhyen","family":"Kwon","sequence":"additional","affiliation":[{"name":"Emory University, Atlanta, United States"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9903-8807","authenticated-orcid":false,"given":"Robert","family":"Cotes","sequence":"additional","affiliation":[{"name":"Emory University, Atlanta, United States"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3570-9461","authenticated-orcid":false,"given":"Zifan","family":"Jiang","sequence":"additional","affiliation":[{"name":"Biomedical Engineering, Georgia Institute of Technology, Atlanta, United States"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5709-201X","authenticated-orcid":false,"given":"Gari","family":"Clifford","sequence":"additional","affiliation":[{"name":"Biomedical Engineering, Georgia Institute of Technology, Atlanta, United States"},{"name":"Biomedical Informatics, Emory University, Atlanta, United States"}]},{"ORCID":"https:\/\/orcid.org\/0009-0006-2624-1867","authenticated-orcid":false,"given":"Ryan A.","family":"Cook","sequence":"additional","affiliation":[{"name":"IT, Analytics, and Operations, University of Notre Dame, Notre Dame, United States"}]}],"member":"320","published-online":{"date-parts":[[2025,3,13]]},"reference":[{"issue":"6","key":"e_1_3_1_2_2","first-page":"1439","article-title":"Data science for social good","volume":"24","author":"Abbasi Ahmed","year":"2023","unstructured":"Ahmed Abbasi, Roger H. L. Chiang, and Jennifer Xu. 2023. Data science for social good. J. Assoc. Inf. Syst. 
24, 6 (2023), 1439.","journal-title":"J. Assoc. Inf. Syst."},{"key":"e_1_3_1_3_2","doi-asserted-by":"crossref","unstructured":"Ahmed Abbasi Jeffrey Parsons Gautam Pant Olivia R. Liu Sheng and Suprateek Sarker. 2024. Pathways for design research on artificial intelligence. Information Systems Research 35 2 (2024).","DOI":"10.1287\/isre.2024.editorial.v35.n2"},{"key":"e_1_3_1_4_2","doi-asserted-by":"publisher","DOI":"10.1007\/s40596-022-01690-5"},{"key":"e_1_3_1_5_2","first-page":"1716","volume-title":"Proceedings of the Interspeech Conference","author":"Hanai Tuka Al","year":"2018","unstructured":"Tuka Al Hanai, Mohammad M. Ghassemi, and James R. Glass. 2018. Detecting depression with audio\/text sequence modeling of interviews. In Proceedings of the Interspeech Conference. 1716\u20131720."},{"key":"e_1_3_1_6_2","article-title":"Ensemble hybrid learning methods for automated depression detection","author":"Ansari Luna","year":"2022","unstructured":"Luna Ansari, Shaoxiong Ji, Qian Chen, and Erik Cambria. 2022. Ensemble hybrid learning methods for automated depression detection. IEEE Trans. Comput. Soc. Syst. (2022).","journal-title":"IEEE Trans. Comput. Soc. Syst."},{"key":"e_1_3_1_7_2","doi-asserted-by":"publisher","DOI":"10.1370\/afm.1139"},{"key":"e_1_3_1_8_2","article-title":"When automated assessment meets automated content generation: Examining text quality in the era of GPTs","author":"Bevilacqua Marialena","year":"2023","unstructured":"Marialena Bevilacqua, Kezia Oketch, Ruiyang Qin, Will Stamey, Xinyuan Zhang, Yi Gan, Kai Yang, and Ahmed Abbasi. 2023. When automated assessment meets automated content generation: Examining text quality in the era of GPTs. arXiv preprint arXiv:2309.14488 (2023).","journal-title":"arXiv preprint arXiv:2309.14488"},{"key":"e_1_3_1_9_2","first-page":"2397","article-title":"Pythia: A suite for analyzing large language models across training and scaling","year":"2023","unstructured":"Stella Biderman, Hailey Schoelkopf, Quentin Anthony, Herbie Bradley, Kyle O\u2019Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, Sai Prashanth, Edward Raff, Aviya Skowron, Lintang Sutawika, and Oskar van der Wal. 2023. Pythia: A suite for analyzing large language models across training and scaling. In Proceedings of the International Conference on Machine Learning. PMLR, 2397\u20132430.","journal-title":"Proceedings of the International Conference on Machine Learning"},{"key":"e_1_3_1_10_2","first-page":"1877","article-title":"Language models are few-shot learners","volume":"33","year":"2020","unstructured":"Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. Advan. Neural Inf. Process. Syst. 33 (2020), 1877\u20131901.","journal-title":"Advan. Neural Inf. Process. 
Syst."},{"key":"e_1_3_1_11_2","doi-asserted-by":"publisher","DOI":"10.25300\/MISQ\/2020\/14110"},{"key":"e_1_3_1_12_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10844-020-00599-5"},{"key":"e_1_3_1_13_2","article-title":"SMHD: A large-scale resource for exploring online language usage for multiple mental health conditions","author":"Cohan Arman","year":"2018","unstructured":"Arman Cohan, Bart Desmet, Andrew Yates, Luca Soldaini, Sean MacAvaney, and Nazli Goharian. 2018. SMHD: A large-scale resource for exploring online language usage for multiple mental health conditions. arXiv preprint arXiv:1806.05258 (2018).","journal-title":"arXiv preprint arXiv:1806.05258"},{"key":"e_1_3_1_14_2","doi-asserted-by":"publisher","DOI":"10.1016\/S2215-0366(21)00395-3"},{"key":"e_1_3_1_15_2","doi-asserted-by":"publisher","DOI":"10.3115\/v1\/W14-3207"},{"key":"e_1_3_1_16_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.jadr.2023.100645"},{"key":"e_1_3_1_17_2","doi-asserted-by":"publisher","DOI":"10.2196\/36417"},{"key":"e_1_3_1_18_2","doi-asserted-by":"publisher","DOI":"10.1038\/s42256-021-00296-0"},{"key":"e_1_3_1_19_2","unstructured":"Harm de Vries. 2023. Go smol or go home. (2023). https:\/\/www.harmdevries.com\/post\/model-size-vs-compute-overhead\/"},{"key":"e_1_3_1_20_2","doi-asserted-by":"publisher","DOI":"10.3390\/s23229225"},{"key":"e_1_3_1_21_2","article-title":"BERT: Pre-training of deep bidirectional transformers for language understanding","author":"Devlin Jacob","year":"2018","unstructured":"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018).","journal-title":"arXiv preprint arXiv:1810.04805"},{"key":"e_1_3_1_22_2","article-title":"Text-based depression detection on sparse data","author":"Dinkel Heinrich","year":"2019","unstructured":"Heinrich Dinkel, Mengyue Wu, and Kai Yu. 2019. Text-based depression detection on sparse data. arXiv preprint arXiv:1904.05154 (2019).","journal-title":"arXiv preprint arXiv:1904.05154"},{"key":"e_1_3_1_23_2","doi-asserted-by":"publisher","DOI":"10.1145\/3347320.3357695"},{"key":"e_1_3_1_24_2","first-page":"14","volume-title":"Proceedings of the International AAAI Conference on Web and Social Media","author":"Farnadi Golnoosh","year":"2013","unstructured":"Golnoosh Farnadi, Susana Zoghbi, Marie-Francine Moens, and Martine De Cock. 2013. Recognising personality traits using Facebook status updates. In Proceedings of the International AAAI Conference on Web and Social Media. 14\u201318."},{"key":"e_1_3_1_25_2","article-title":"GPTQ: Accurate post-training quantization for generative pre-trained transformers","author":"Frantar Elias","year":"2022","unstructured":"Elias Frantar, Saleh Ashkboos, Torsten Hoefler, and Dan Alistarh. 2022. GPTQ: Accurate post-training quantization for generative pre-trained transformers. arXiv preprint arXiv:2210.17323 (2022).","journal-title":"arXiv preprint arXiv:2210.17323"},{"key":"e_1_3_1_26_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/D19-6208"},{"key":"e_1_3_1_27_2","doi-asserted-by":"publisher","DOI":"10.1145\/3308558.3313698"},{"key":"e_1_3_1_28_2","volume-title":"OpenLLaMA: An Open Reproduction of LLaMA","author":"Geng Xinyang","year":"2023","unstructured":"Xinyang Geng and Hao Liu. 2023. OpenLLaMA: An Open Reproduction of LLaMA. 
Retrieved from https:\/\/github.com\/openlm-research\/open_llama"},{"key":"e_1_3_1_29_2","article-title":"Self-verification improves few-shot clinical information extraction","author":"Gero Zelalem","year":"2023","unstructured":"Zelalem Gero, Chandan Singh, Hao Cheng, Tristan Naumann, Michel Galley, Jianfeng Gao, and Hoifung Poon. 2023. Self-verification improves few-shot clinical information extraction. arXiv preprint arXiv:2306.00024 (2023).","journal-title":"arXiv preprint arXiv:2306.00024"},{"key":"e_1_3_1_30_2","doi-asserted-by":"publisher","DOI":"10.1145\/2663204.2663274"},{"key":"e_1_3_1_31_2","first-page":"3123","article-title":"The distress analysis interview corpus of human and computer interviews.","year":"2014","unstructured":"Jonathan Gratch, Ron Artstein, Gale Lucas, Giota Stratou, Stefan Scherer, Angela Nazarian, Rachel Wood, Jill Boberg, David DeVault, Stacy Marsella, David Traum, Skip Rizzo, and Louis-Philippe Morency. 2014. The distress analysis interview corpus of human and computer interviews. In Proceedings of the International Conference on Language Resources and Evaluation (LREC\u201914). 3123\u20133128.","journal-title":"Proceedings of the International Conference on Language Resources and Evaluation (LREC\u201914)"},{"key":"e_1_3_1_32_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2022.acl-long.72"},{"key":"e_1_3_1_33_2","doi-asserted-by":"publisher","DOI":"10.1145\/2661806.2661810"},{"key":"e_1_3_1_34_2","article-title":"Measuring depression symptom severity from spoken language and 3D facial expressions","author":"Haque Albert","year":"2018","unstructured":"Albert Haque, Michelle Guo, Adam S. Miner, and Li Fei-Fei. 2018. Measuring depression symptom severity from spoken language and 3D facial expressions. arXiv preprint arXiv:1811.08592 (2018).","journal-title":"arXiv preprint arXiv:1811.08592"},{"key":"e_1_3_1_35_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-86383-8_35"},{"key":"e_1_3_1_36_2","unstructured":"Jordan Hoffmann Sebastian Borgeaud Arthur Mensch Elena Buchatskaya Trevor Cai Eliza Rutherford Diego de Las Casas Lisa Anne Hendricks Johannes Welbl Aidan Clark Tom Hennigan Eric Noland Katie Millican George van den Driessche Bogdan Damoc Aurelia Guy Simon Osindero Karen Simonyan Erich Elsen Jack W. Rae Oriol Vinyals and Laurent Sifre. 2022. Training compute-optimal large language models. arXiv preprint arXiv:2203.15556 (2022)."},{"key":"e_1_3_1_37_2","article-title":"LoRA: Low-rank adaptation of large language models","author":"Hu Edward J.","year":"2021","unstructured":"Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. LoRA: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685 (2021).","journal-title":"arXiv preprint arXiv:2106.09685"},{"key":"e_1_3_1_38_2","article-title":"Large language models in mental health care: A scoping review","author":"Hua Yining","year":"2024","unstructured":"Yining Hua, Fenglin Liu, Kailai Yang, Zehan Li, Yi-Han Sheu, Peilin Zhou, Lauren V. Moran, Sophia Ananiadou, and Andrew Beam. 2024. Large language models in mental health care: A scoping review. arXiv preprint arXiv:2401.02984 (2024).","journal-title":"arXiv preprint arXiv:2401.02984"},{"key":"e_1_3_1_39_2","article-title":"MentalBERT: Publicly available pretrained language models for mental healthcare","author":"Ji Shaoxiong","year":"2021","unstructured":"Shaoxiong Ji, Tianlin Zhang, Luna Ansari, Jie Fu, Prayag Tiwari, and Erik Cambria. 2021. 
MentalBERT: Publicly available pretrained language models for mental healthcare. arXiv preprint arXiv:2110.15621 (2021).","journal-title":"arXiv preprint arXiv:2110.15621"},{"key":"e_1_3_1_40_2","unstructured":"Albert Q. Jiang Alexandre Sablayrolles Arthur Mensch Chris Bamford Devendra Singh Chaplot Diego de las Casas Florian Bressand Gianna Lengyel Guillaume Lample Lucile Saulnier L\u00e9lio Renard Lavaud Marie-Anne Lachaux Pierre Stock Teven Le Scao Thibaut Lavril Thomas Wang Timoth\u00e9e Lacroix and William El Sayed. 2023. Mistral 7B. arXiv preprint arXiv:2310.06825 (2023)."},{"key":"e_1_3_1_41_2","doi-asserted-by":"crossref","unstructured":"Zifan Jiang Salman Seyedi Emily Griner Ahmed Abbasi Ali Bahrami Rad Hyeokhyen Kwon Robert O. Cotes and Gari D. Clifford. 2024. Multimodal mental health digital biomarker analysis from remote interviews using facial vocal linguistic and cardiovascular patterns. IEEE Journal of Biomedical and Health Informatics 28 3 (2024).","DOI":"10.1109\/JBHI.2024.3352075"},{"key":"e_1_3_1_42_2","first-page":"2023","article-title":"Evaluating and mitigating unfairness in multimodal remote mental health assessments","author":"Jiang Zifan","year":"2024","unstructured":"Zifan Jiang, Salman Seyedi, Emily Griner, Ahmed Abbasi, Ali Bahrami Rad, Hyeokhyen Kwon, Robert O. Cotes, and Gari D. Clifford. 2024. Evaluating and mitigating unfairness in multimodal remote mental health assessments. PLOS Digit. Health 3 (2024), 2023\u201311.","journal-title":"PLOS Digit. Health"},{"key":"e_1_3_1_43_2","article-title":"Large language models on graphs: A comprehensive survey","author":"Jin Bowen","year":"2023","unstructured":"Bowen Jin, Gang Liu, Chi Han, Meng Jiang, Heng Ji, and Jiawei Han. 2023. Large language models on graphs: A comprehensive survey. arXiv preprint arXiv:2312.02783 (2023).","journal-title":"arXiv preprint arXiv:2312.02783"},{"key":"e_1_3_1_44_2","doi-asserted-by":"publisher","DOI":"10.25300\/MISQ\/2023\/17381"},{"key":"e_1_3_1_45_2","doi-asserted-by":"crossref","unstructured":"Harnain Kour and Manoj K. Gupta. 2022. An hybrid deep learning approach for depression prediction from user tweets using feature-rich CNN and bi-directional LSTM. Multimedia Tools and Applications 81 17 (2022) 1\u201337.","DOI":"10.1007\/s11042-022-12648-y"},{"key":"e_1_3_1_46_2","article-title":"Race: Large-scale reading comprehension dataset from examinations","author":"Lai Guokun","year":"2017","unstructured":"Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. Race: Large-scale reading comprehension dataset from examinations. arXiv preprint arXiv:1704.04683 (2017).","journal-title":"arXiv preprint arXiv:1704.04683"},{"key":"e_1_3_1_47_2","doi-asserted-by":"publisher","DOI":"10.1145\/3641276"},{"key":"e_1_3_1_48_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2022.naacl-main.263"},{"key":"e_1_3_1_49_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP.2019.8683027"},{"key":"e_1_3_1_50_2","article-title":"Evaluation of ChatGPT for NLP-based mental health applications","author":"Lamichhane Bishal","year":"2023","unstructured":"Bishal Lamichhane. 2023. Evaluation of ChatGPT for NLP-based mental health applications. 
arXiv preprint arXiv:2303.15727 (2023).","journal-title":"arXiv preprint arXiv:2303.15727"},{"key":"e_1_3_1_51_2","doi-asserted-by":"publisher","DOI":"10.1109\/IEEECONF53345.2021.9723273"},{"key":"e_1_3_1_52_2","doi-asserted-by":"publisher","DOI":"10.3389\/fpsyt.2023.1160291"},{"key":"e_1_3_1_53_2","article-title":"Micromodels for efficient, explainable, and reusable systems: A case study on mental health","author":"Lee Andrew","year":"2021","unstructured":"Andrew Lee, Jonathan K. Kummerfeld, Lawrence C. An, and Rada Mihalcea. 2021. Micromodels for efficient, explainable, and reusable systems: A case study on mental health. arXiv preprint arXiv:2109.13770 (2021).","journal-title":"arXiv preprint arXiv:2109.13770"},{"key":"e_1_3_1_54_2","article-title":"LoRA fine-tuning efficiently undoes safety training in Llama 2-Chat 70B","author":"Lermen Simon","year":"2023","unstructured":"Simon Lermen, Charlie Rogers-Smith, and Jeffrey Ladish. 2023. LoRA fine-tuning efficiently undoes safety training in Llama 2-Chat 70B. arXiv preprint arXiv:2310.20624 (2023).","journal-title":"arXiv preprint arXiv:2310.20624"},{"key":"e_1_3_1_55_2","article-title":"Prefix-tuning: Optimizing continuous prompts for generation","author":"Li Xiang Lisa","year":"2021","unstructured":"Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. arXiv preprint arXiv:2101.00190 (2021).","journal-title":"arXiv preprint arXiv:2101.00190"},{"key":"e_1_3_1_56_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-01159-8_17"},{"key":"e_1_3_1_57_2","doi-asserted-by":"publisher","DOI":"10.1145\/3372278.3391932"},{"key":"e_1_3_1_58_2","doi-asserted-by":"publisher","DOI":"10.2196\/27244"},{"key":"e_1_3_1_59_2","article-title":"RoBERTa: A robustly optimized BERT pretraining approach","author":"Liu Yinhan","year":"2019","unstructured":"Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 (2019).","journal-title":"arXiv preprint arXiv:1907.11692"},{"key":"e_1_3_1_60_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-65813-1_30"},{"key":"e_1_3_1_61_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-98932-7_30"},{"key":"e_1_3_1_62_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.chb.2014.04.043"},{"key":"e_1_3_1_63_2","article-title":"Open-ended multi-modal relational reason for video question answering","author":"Luo Haozheng","year":"2020","unstructured":"Haozheng Luo and Ruiyang Qin. 2020. Open-ended multi-modal relational reason for video question answering. arXiv preprint arXiv:2012.00822 (2020).","journal-title":"arXiv preprint arXiv:2012.00822"},{"key":"e_1_3_1_64_2","doi-asserted-by":"publisher","DOI":"10.1145\/2988257.2988267"},{"key":"e_1_3_1_65_2","doi-asserted-by":"publisher","DOI":"10.1109\/MIS.2017.23"},{"key":"e_1_3_1_66_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.asoc.2022.109713"},{"key":"e_1_3_1_67_2","doi-asserted-by":"crossref","unstructured":"Adria Mallol-Ragolta Ziping Zhao Lukas Stappen Nicholas Cummins and Bj\u00f6rn Schuller. 2019. A hierarchical attention network-based approach for depression detection from transcribed clinical interviews. 
https:\/\/opus.bibliothek.uni-augsburg.de\/opus4\/frontdoor\/deliver\/index\/docId\/65784\/file\/2036.pdf","DOI":"10.21437\/Interspeech.2019-2036"},{"key":"e_1_3_1_68_2","article-title":"Distributed representations of words and phrases and their compositionality","volume":"26","author":"Mikolov Tomas","year":"2013","unstructured":"Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. Advan. Neural Inf. Process. Syst. 26 (2013).","journal-title":"Advan. Neural Inf. Process. Syst."},{"key":"e_1_3_1_69_2","doi-asserted-by":"publisher","DOI":"10.1145\/3605943"},{"key":"e_1_3_1_70_2","first-page":"182","volume-title":"Proceedings of the Workshop on Computational Modeling of People\u2019s Opinions, Personality, and Emotions in Social Media (PEOPLES\u201916)","author":"Mowery Danielle L.","year":"2016","unstructured":"Danielle L. Mowery, Y. Albert Park, Craig Bryan, and Mike Conway. 2016. Towards automatically classifying depressive symptoms from Twitter data for population health. In Proceedings of the Workshop on Computational Modeling of People\u2019s Opinions, Personality, and Emotions in Social Media (PEOPLES\u201916). 182\u2013191."},{"key":"e_1_3_1_71_2","article-title":"Scaling data-constrained language models","author":"Muennighoff Niklas","year":"2023","unstructured":"Niklas Muennighoff, Alexander M. Rush, Boaz Barak, Teven Le Scao, Aleksandra Piktus, Nouamane Tazi, Sampo Pyysalo, Thomas Wolf, and Colin Raffel. 2023. Scaling data-constrained language models. arXiv preprint arXiv:2305.16264 (2023).","journal-title":"arXiv preprint arXiv:2305.16264"},{"key":"e_1_3_1_72_2","first-page":"59","volume-title":"Proceedings of the 12th International Workshop on Health Text Mining and Information Analysis","author":"Murarka Ankit","year":"2021","unstructured":"Ankit Murarka, Balaji Radhakrishnan, and Sushma Ravichandran. 2021. Classification of mental illnesses on social media using RoBERTa. In Proceedings of the 12th International Workshop on Health Text Mining and Information Analysis. 59\u201368."},{"key":"e_1_3_1_73_2","doi-asserted-by":"publisher","DOI":"10.1145\/3485447.3512128"},{"key":"e_1_3_1_74_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP39728.2021.9413486"},{"key":"e_1_3_1_75_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/W18-0609"},{"key":"e_1_3_1_76_2","article-title":"Instruction tuning with GPT-4","author":"Peng Baolin","year":"2023","unstructured":"Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, and Jianfeng Gao. 2023. Instruction tuning with GPT-4. arXiv preprint arXiv:2304.03277 (2023).","journal-title":"arXiv preprint arXiv:2304.03277"},{"key":"e_1_3_1_77_2","unstructured":"J. W. Pennebaker R. L. Boyd K. Jordan and K. Blackburn. 2015. The development and psychometric properties of LIWC2015. Austin TX: University of Texas at Austin. https:\/\/repositories.lib.utexas.edu\/server\/api\/core\/bitstreams\/b0d26dcf-2391-4701-88d0-3cf50ebee697\/content"},{"key":"e_1_3_1_78_2","doi-asserted-by":"publisher","DOI":"10.3115\/v1\/D14-1162"},{"key":"e_1_3_1_79_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.artmed.2022.102380"},{"key":"e_1_3_1_80_2","doi-asserted-by":"publisher","DOI":"10.1191\/1478088705qp045oa"},{"key":"e_1_3_1_81_2","doi-asserted-by":"publisher","DOI":"10.1016\/S0140-6736(07)61238-0"},{"key":"e_1_3_1_82_2","unstructured":"Ruiyang Qin Yuting Hu Zheyu Yan Jinjun Xiong Ahmed Abbasi and Yiyu Shi. 2024. 
FL-NAS: Towards fairness of NAS for resource constrained devices via large language models. https:\/\/ieeexplore.ieee.org\/stamp\/stamp.jsp?arnumber=10473847"},{"key":"e_1_3_1_83_2","first-page":"arXiv\u20132406","article-title":"Empirical guidelines for deploying LLMs onto resource-constrained edge devices","author":"Qin Ruiyang","year":"2024","unstructured":"Ruiyang Qin, Dancheng Liu, Zheyu Yan, Zhaoxuan Tan, Zixuan Pan, Zhenge Jia, Meng Jiang, Ahmed Abbasi, Jinjun Xiong, and Yiyu Shi. 2024. Empirical guidelines for deploying LLMs onto resource-constrained edge devices. arXiv e-prints (2024), arXiv\u20132406.","journal-title":"arXiv e-prints"},{"key":"e_1_3_1_84_2","article-title":"IBERT: Idiom cloze-style reading comprehension with attention","author":"Qin Ruiyang","year":"2021","unstructured":"Ruiyang Qin, Haozheng Luo, Zheheng Fan, and Ziang Ren. 2021. IBERT: Idiom cloze-style reading comprehension with attention. arXiv preprint arXiv:2112.02994 (2021).","journal-title":"arXiv preprint arXiv:2112.02994"},{"key":"e_1_3_1_85_2","article-title":"Enabling on-device large language model personalization with self-supervised data selection and synthesis","author":"Qin Ruiyang","year":"2023","unstructured":"Ruiyang Qin, Jun Xia, Zhenge Jia, Meng Jiang, Ahmed Abbasi, Peipei Zhou, Jingtong Hu, and Yiyu Shi. 2023. Enabling on-device large language model personalization with self-supervised data selection and synthesis. arXiv preprint arXiv:2311.12275 (2023).","journal-title":"arXiv preprint arXiv:2311.12275"},{"key":"e_1_3_1_86_2","unstructured":"Ruiyang Qin Zheyu Yan Dewen Zeng Zhenge Jia Dancheng Liu Jianbo Liu Ahmed Abbasi Zhi Zheng Ningyuan Cao Kai Ni Jinjun Xiong and Yiyu Shi. 2024. Robust implementation of retrieval-augmented generation on edge-based computing-in-memory architectures. arXiv preprint arXiv:2405.04700 (2024)."},{"key":"e_1_3_1_87_2","unstructured":"Jack W. Rae Sebastian Borgeaud Trevor Cai Katie Millican Jordan Hoffmann Francis Song John Aslanides Sarah Henderson Roman Ring Susannah Young Eliza Rutherford Tom Hennigan Jacob Menick Albin Cassirer Richard Powell George van den Driessche Lisa Anne Hendricks Maribeth Rauh Po-Sen Huang Amelia Glaese Johannes Welbl Sumanth Dathathri Saffron Huang Jonathan Uesato John Mellor Irina Higgins Antonia Creswell Nat McAleese Amy Wu Erich Elsen Siddhant Jayakumar Elena Buchatskaya David Budden Esme Sutherland Karen Simonyan Michela Paganini Laurent Sifre Lena Martens Xiang Lorraine Li Adhiguna Kuncoro Aida Nematzadeh Elena Gribovskaya Domenic Donato Angeliki Lazaridou Arthur Mensch Jean-Baptiste Lespiau Maria Tsimpoukelli Nikolai Grigorev Doug Fritz Thibault Sottiaux Mantas Pajarskas Toby Pohlen Zhitao Gong Daniel Toyama Cyprien de Masson d\u2019Autume Yujia Li Tayfun Terzi Vladimir Mikulik Igor Babuschkin Aidan Clark Diego de Las Casas Aurelia Guy Chris Jones James Bradbury Matthew Johnson Blake Hechtman Laura Weidinger Iason Gabriel William Isaac Ed Lockhart Simon Osindero Laura Rimell Chris Dyer Oriol Vinyals Kareem Ayoub Jeff Stanway Lorrayne Bennett Demis Hassabis Koray Kavukcuoglu and Geoffrey Irving. 2021. Scaling language models: Methods analysis & insights from training Gopher. 
arXiv preprint arXiv:2112.11446 (2021)."},{"key":"e_1_3_1_88_2","doi-asserted-by":"publisher","DOI":"10.2196\/28754"},{"key":"e_1_3_1_89_2","first-page":"3","article-title":"AVEC 2019 Workshop and challenge: State-of-mind, detecting depression with AI, and cross-cultural affect recognition","year":"2019","unstructured":"Fabien Ringeval, Bj\u00f6rn Schuller, Michel Valstar, Nicholas Cummins, Roddy Cowie, Leili Tavabi, Maximilian Schmitt, Sina Alisamir, Shahin Amiriparian, Eva-Maria Messner, Siyang Song, Shuo Liu, Ziping Zhao, Adria Mallol-Ragolta, Zhao Ren, Mohammad Soleymani, and Maja Pantic. 2019. AVEC 2019 Workshop and challenge: State-of-mind, detecting depression with AI, and cross-cultural affect recognition. In Proceedings of the 9th International on Audio\/visual Emotion Challenge and Workshop. 3\u201312.","journal-title":"Proceedings of the 9th International on Audio\/visual Emotion Challenge and Workshop"},{"key":"e_1_3_1_90_2","doi-asserted-by":"publisher","DOI":"10.1080\/02699930441000030"},{"key":"e_1_3_1_91_2","article-title":"Testing the general deductive reasoning capacity of large language models using OOD examples","author":"Saparov Abulhair","year":"2023","unstructured":"Abulhair Saparov, Richard Yuanzhe Pang, Vishakh Padmakumar, Nitish Joshi, Seyed Mehran Kazemi, Najoung Kim, and He He. 2023. Testing the general deductive reasoning capacity of large language models using OOD examples. arXiv preprint arXiv:2305.15269 (2023).","journal-title":"arXiv preprint arXiv:2305.15269"},{"key":"e_1_3_1_92_2","article-title":"Adapting deep learning methods for mental health prediction on social media","author":"Sekuli\u0107 Ivan","year":"2020","unstructured":"Ivan Sekuli\u0107 and Michael Strube. 2020. Adapting deep learning methods for mental health prediction on social media. arXiv preprint arXiv:2003.07634 (2020).","journal-title":"arXiv preprint arXiv:2003.07634"},{"key":"e_1_3_1_93_2","doi-asserted-by":"publisher","DOI":"10.2196\/48517"},{"issue":"20","key":"e_1_3_1_94_2","first-page":"22","article-title":"The Mini-International Neuropsychiatric Interview (MINI): The development and validation of a structured diagnostic psychiatric interview for DSM-IV and ICD-10","volume":"59","year":"1998","unstructured":"David V. Sheehan, Yves Lecrubier, K. Harnett Sheehan, Patricia Amorim, Juris Janavs, Emmanuelle Weiller, Thierry Hergueta, Roxy Baker, and Geoffrey C. Dunbar. 1998. The Mini-International Neuropsychiatric Interview (MINI): The development and validation of a structured diagnostic psychiatric interview for DSM-IV and ICD-10. J. Clin. Psychiat. 59, 20 (1998), 22\u201333.","journal-title":"J. Clin. Psychiat."},{"key":"e_1_3_1_95_2","doi-asserted-by":"publisher","DOI":"10.1145\/3578931"},{"key":"e_1_3_1_96_2","article-title":"Text classification via large language models","author":"Sun Xiaofei","year":"2023","unstructured":"Xiaofei Sun, Xiaoya Li, Jiwei Li, Fei Wu, Shangwei Guo, Tianwei Zhang, and Guoyin Wang. 2023. Text classification via large language models. arXiv preprint arXiv:2305.08377 (2023).","journal-title":"arXiv preprint arXiv:2305.08377"},{"key":"e_1_3_1_97_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2018.2876502"},{"key":"e_1_3_1_98_2","unstructured":"Rohan Taori Ishaan Gulrajani Tianyi Zhang Yann Dubois Xuechen Li Carlos Guestrin Percy Liang and Tatsunori B. Hashimoto. 2023. Stanford alpaca: An instruction-following llama model. 
https:\/\/crfm.stanford.edu\/2023\/03\/13\/alpaca.html"},{"issue":"1","key":"e_1_3_1_99_2","first-page":"bbad493","article-title":"Opportunities and challenges for ChatGPT and large language models in biomedicine and health","volume":"25","year":"2024","unstructured":"Shubo Tian, Qiao Jin, Lana Yeganova, Po-Ting Lai, Qingqing Zhu, Xiuying Chen, Yifan Yang, Qingyu Chen, Won Kim, Donald C. Comeau, Rezarta Islamaj, Aadit Kapoor, Xin Gao, and Zhiyong Lu. 2024. Opportunities and challenges for ChatGPT and large language models in biomedicine and health. Brief. Bioinform. 25, 1 (2024), bbad493.","journal-title":"Brief. Bioinform."},{"key":"e_1_3_1_100_2","article-title":"Clinical camel: An open-source expert-level medical language model with dialogue-based knowledge encoding","author":"Toma Augustin","year":"2023","unstructured":"Augustin Toma, Patrick R. Lawler, Jimmy Ba, Rahul G. Krishnan, Barry B. Rubin, and Bo Wang. 2023. Clinical camel: An open-source expert-level medical language model with dialogue-based knowledge encoding. arXiv preprint arXiv:2305.12031 (2023).","journal-title":"arXiv preprint arXiv:2305.12031"},{"key":"e_1_3_1_101_2","unstructured":"Hugo Touvron Thibaut Lavril Gautier Izacard Xavier Martinet Marie-Anne Lachaux Timothee Lacroix Baptiste Rozi\u00e8re Naman Goyal Eric Hambro Faisal Azhar Aurelien Rodriguez Armand Joplin Edouard Grave and Guillaume Lample. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023)."},{"key":"e_1_3_1_102_2","unstructured":"Jonathan Tow Marco Bellagente Dakota Mahan and Carlos Riquelme Ruiz. 2023. Technical report for StableLM-3B-4E1T. (2023). https:\/\/stability.wandb.io\/stability-llm\/stable-lm\/reports\/StableLM-3B-4E1T--VmlldzoyMjU4?accessToken=u3zujipenkx5g7rtcj9qojjgxpconyjktjkli2po09nffrffdhhchq045vp0wyfo"},{"key":"e_1_3_1_103_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-45442-5_50"},{"key":"e_1_3_1_104_2","article-title":"Dreaddit: A Reddit dataset for stress analysis in social media","author":"Turcan Elsbeth","year":"2019","unstructured":"Elsbeth Turcan and Kathleen McKeown. 2019. Dreaddit: A Reddit dataset for stress analysis in social media. arXiv preprint arXiv:1911.00133 (2019).","journal-title":"arXiv preprint arXiv:1911.00133"},{"key":"e_1_3_1_105_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.clpsych-1.24"},{"key":"e_1_3_1_106_2","doi-asserted-by":"publisher","DOI":"10.1145\/2661806.2661807"},{"key":"e_1_3_1_107_2","doi-asserted-by":"publisher","DOI":"10.1038\/s41746-021-00432-5"},{"key":"e_1_3_1_108_2","doi-asserted-by":"crossref","unstructured":"Dave Van Veen Cara Van Uden Louis Blankemeier Jean-Benoit Delbrouck Asad Aali Christian Bluethgen Anuj Pareek Malgorzata Polacin Eduardo Pontes Reis Anna Seehofnerov\u00e1 Nidhi Rohatgi Poonam Hosamani William Collins Neera Ahuja Curtis P. Langlotz Jason Hom Sergios Gatidis John Pauly and Akshay S. Chaudhari. 2023. Clinical text summarization: Adapting large language models can outperform human experts. Nature Medicine 30 (2023).","DOI":"10.21203\/rs.3.rs-3483777\/v1"},{"key":"e_1_3_1_109_2","article-title":"Attention is all you need","volume":"30","author":"Vaswani Ashish","year":"2017","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Adv. Neural Inf. Process. Syst. 30 (2017).","journal-title":"Adv. Neural Inf. Process. 
Syst."},{"key":"e_1_3_1_110_2","unstructured":"Zekun Wang Ge Zhang Kexin Yang Ning Shi Wangchunshu Zhou Shaochun Hao Guangzheng Xiong Yizhi Li Mong Yuan Sim Xiuying Chen Qingqing Zhu Zhenzhu Yang Adam Nik Qi Liu Chenghua Lin Shi Wang Ruibo Liu Wenhu Chen Ke Xu Dayiheng Liu Yike Guo and Jie Fu. 2023. Interactive natural language processing. arXiv preprint arXiv:2305.13246 (2023)."},{"key":"e_1_3_1_111_2","doi-asserted-by":"publisher","DOI":"10.1145\/2988257.2988263"},{"key":"e_1_3_1_112_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10844-018-0533-4"},{"key":"e_1_3_1_113_2","article-title":"Leveraging large language models for mental health prediction via online text data","author":"Xu Xuhai","year":"2023","unstructured":"Xuhai Xu, Bingshen Yao, Yuanzhe Dong, Hong Yu, James Hendler, Anind K. Dey, and Dakuo Wang. 2023. Leveraging large language models for mental health prediction via online text data. arXiv preprint arXiv:2307.14385 (2023).","journal-title":"arXiv preprint arXiv:2307.14385"},{"key":"e_1_3_1_114_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2023.emnlp-main.370"},{"key":"e_1_3_1_115_2","doi-asserted-by":"publisher","DOI":"10.1287\/isre.2022.1111"},{"key":"e_1_3_1_116_2","article-title":"MentaLllama: Interpretable mental health analysis on social media with large language models","author":"Yang Kailai","year":"2023","unstructured":"Kailai Yang, Tianlin Zhang, Ziyan Kuang, Qianqian Xie, and Sophia Ananiadou. 2023. MentaLllama: Interpretable mental health analysis on social media with large language models. arXiv preprint arXiv:2309.13567 (2023).","journal-title":"arXiv preprint arXiv:2309.13567"},{"key":"e_1_3_1_117_2","doi-asserted-by":"publisher","DOI":"10.1145\/2988257.2988269"},{"key":"e_1_3_1_118_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/N16-1174"},{"key":"e_1_3_1_119_2","article-title":"Depression and self-harm risk assessment in online forums","author":"Yates Andrew","year":"2017","unstructured":"Andrew Yates, Arman Cohan, and Nazli Goharian. 2017. Depression and self-harm risk assessment in online forums. arXiv preprint arXiv:1709.01848 (2017).","journal-title":"arXiv preprint arXiv:1709.01848"},{"key":"e_1_3_1_120_2","article-title":"Benchmarking LLMs via uncertainty quantification","author":"Ye Fanghua","year":"2024","unstructured":"Fanghua Ye, Mingming Yang, Jianhui Pang, Longyue Wang, Derek F. Wong, Emine Yilmaz, Shuming Shi, and Zhaopeng Tu. 2024. Benchmarking LLMs via uncertainty quantification. arXiv preprint arXiv:2401.12794 (2024).","journal-title":"arXiv preprint arXiv:2401.12794"},{"key":"e_1_3_1_121_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.jad.2021.08.090"},{"key":"e_1_3_1_122_2","doi-asserted-by":"publisher","DOI":"10.1145\/3347320.3357696"},{"key":"e_1_3_1_123_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICAwST.2017.8256484"},{"key":"e_1_3_1_124_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP40776.2020.9053207"},{"key":"e_1_3_1_125_2","doi-asserted-by":"publisher","DOI":"10.1002\/cpp.2006"},{"key":"e_1_3_1_126_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2022.clpsych-1.3"},{"key":"e_1_3_1_127_2","article-title":"DepressionNet: A novel summarization boosted deep framework for depression detection on social media","author":"Zogan Hamad","year":"2021","unstructured":"Hamad Zogan, Imran Razzak, Shoaib Jameel, and Guandong Xu. 2021. DepressionNet: A novel summarization boosted deep framework for depression detection on social media. 
arXiv preprint arXiv:2105.10878 (2021).","journal-title":"arXiv preprint arXiv:2105.10878"},{"key":"e_1_3_1_128_2","doi-asserted-by":"publisher","DOI":"10.1007\/s11280-021-00992-2"}],"container-title":["ACM Transactions on Management Information Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3673906","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3673906","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T00:58:23Z","timestamp":1750294703000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3673906"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,3,13]]},"references-count":127,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2025,6,30]]}},"alternative-id":["10.1145\/3673906"],"URL":"https:\/\/doi.org\/10.1145\/3673906","relation":{},"ISSN":["2158-656X","2158-6578"],"issn-type":[{"value":"2158-656X","type":"print"},{"value":"2158-6578","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,3,13]]},"assertion":[{"value":"2024-03-01","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-06-10","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-03-13","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}
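
For reference, the object above is the "message" payload returned by the public Crossref REST API for this article's DOI (10.1145/3673906). The following is a minimal sketch, assuming Python with the third-party requests package installed, of how such a record can be retrieved and a few of the fields visible above (title, container-title, author, published date-parts) can be extracted; it is an illustration of the record's structure, not part of the record itself.

    # Sketch: fetch the Crossref work record shown above and read a few fields.
    # Assumes the `requests` package is available; the endpoint pattern
    # https://api.crossref.org/works/<DOI> is the public Crossref REST API.
    import requests

    DOI = "10.1145/3673906"  # DOI of the record shown above

    resp = requests.get(f"https://api.crossref.org/works/{DOI}", timeout=30)
    resp.raise_for_status()
    work = resp.json()["message"]  # mirrors the "message" object in the record

    title = work["title"][0]              # "title" is a list of strings
    journal = work["container-title"][0]  # likewise a list
    volume = work.get("volume")
    issue = work.get("issue")

    # "author" is a list of objects with "given"/"family" (often also "ORCID").
    authors = ", ".join(
        f'{a.get("given", "")} {a.get("family", "")}'.strip()
        for a in work.get("author", [])
    )

    # Dates arrive as nested "date-parts" arrays, e.g. [[2025, 3, 13]].
    year = work["published"]["date-parts"][0][0]

    print(f"{authors} ({year}). {title}. {journal} {volume}({issue}).")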