{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,13]],"date-time":"2026-04-13T21:43:39Z","timestamp":1776116619515,"version":"3.50.1"},"reference-count":82,"publisher":"Association for Computing Machinery (ACM)","issue":"2","license":[{"start":{"date-parts":[[2024,5,13]],"date-time":"2024-05-13T00:00:00Z","timestamp":1715558400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100006374","name":"Amazon Web Services","doi-asserted-by":"publisher","id":[{"id":"10.13039\/501100006374","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100006374","name":"Google","doi-asserted-by":"publisher","id":[{"id":"10.13039\/501100006374","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["Proc. ACM Interact. Mob. Wearable Ubiquitous Technol."],"published-print":{"date-parts":[[2024,5,13]]},"abstract":"<jats:p>Passively collected behavioral health data from ubiquitous sensors could provide mental health professionals valuable insights into patients' daily lives, but such efforts are impeded by disparate metrics, lack of interoperability, and unclear correlations between the measured signals and an individual's mental health. To address these challenges, we pioneer the exploration of large language models (LLMs) to synthesize clinically relevant insights from multi-sensor data. We develop chain-of-thought prompting methods to generate LLM reasoning on how data pertaining to activity, sleep and social interaction relate to conditions such as depression and anxiety. We then prompt the LLM to perform binary classification, achieving accuracies of 61.1%, exceeding the state of the art. 
We find models like GPT-4 correctly reference numerical data 75% of the time.<\/jats:p>\n          <jats:p>While we began our investigation by developing methods to use LLMs to output binary classifications for conditions like depression, we find instead that their greatest potential value to clinicians lies not in diagnostic classification, but rather in rigorous analysis of diverse self-tracking data to generate natural language summaries that synthesize multiple data streams and identify potential concerns. Clinicians envisioned using these insights in a variety of ways, principally for fostering collaborative investigation with patients to strengthen the therapeutic alliance and guide treatment. We describe this collaborative engagement, additional envisioned uses, and associated concerns that must be addressed before adoption in real-world contexts.<\/jats:p>","DOI":"10.1145\/3659604","type":"journal-article","created":{"date-parts":[[2024,5,15]],"date-time":"2024-05-15T12:20:41Z","timestamp":1715775641000},"page":"1-25","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":44,"title":["From Classification to Clinical Insights"],"prefix":"10.1145","volume":"8","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-6646-6466","authenticated-orcid":false,"given":"Zachary","family":"Englhardt","sequence":"first","affiliation":[{"name":"University of Washington, Seattle, Washington, USA"}]},{"ORCID":"https:\/\/orcid.org\/0009-0002-3841-8966","authenticated-orcid":false,"given":"Chengqian","family":"Ma","sequence":"additional","affiliation":[{"name":"University of Washington, Seattle, Washington, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8925-9718","authenticated-orcid":false,"given":"Margaret E.","family":"Morris","sequence":"additional","affiliation":[{"name":"University of Washington, Seattle, Washington, 
USA"}]},{"ORCID":"https:\/\/orcid.org\/0009-0000-5472-4897","authenticated-orcid":false,"given":"Chun-Cheng","family":"Chang","sequence":"additional","affiliation":[{"name":"University of Washington, Seattle, Washington, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5930-3899","authenticated-orcid":false,"given":"Xuhai \"Orson\"","family":"Xu","sequence":"additional","affiliation":[{"name":"Massachusetts Institute of Technology, Cambridge, Massachusetts, USA"}]},{"ORCID":"https:\/\/orcid.org\/0009-0002-4757-8136","authenticated-orcid":false,"given":"Lianhui","family":"Qin","sequence":"additional","affiliation":[{"name":"University of California, San Diego, La Jolla, California, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7313-0082","authenticated-orcid":false,"given":"Daniel","family":"McDuff","sequence":"additional","affiliation":[{"name":"University of Washington, Seattle, Washington, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9279-5386","authenticated-orcid":false,"given":"Xin","family":"Liu","sequence":"additional","affiliation":[{"name":"University of Washington, Seattle, Washington, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6300-4389","authenticated-orcid":false,"given":"Shwetak","family":"Patel","sequence":"additional","affiliation":[{"name":"University of Washington, Seattle, Washington, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3025-7953","authenticated-orcid":false,"given":"Vikram","family":"Iyer","sequence":"additional","affiliation":[{"name":"University of Washington, Seattle, Washington, USA"}]}],"member":"320","published-online":{"date-parts":[[2024,5,15]]},"reference":[{"key":"e_1_2_1_1_1","volume-title":"Diagnostic and statistical manual of mental disorders: DSM-5","unstructured":"2013. Diagnostic and statistical manual of mental disorders: DSM-5 (fifth edition. ed.). 
American Psychiatric Association, Arlington, VA."},{"key":"e_1_2_1_2_1","volume-title":"Schuller","author":"Amin Mostafa M.","year":"2023","unstructured":"Mostafa M. Amin, Erik Cambria, and Bj\u00f6rn W. Schuller. 2023. Will Affective Computing Emerge from Foundation Models and General AI? A First Evaluation on ChatGPT. http:\/\/arxiv.org\/abs\/2303.03186"},{"key":"e_1_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.1186\/1471-2296-9-1"},{"key":"e_1_2_1_4_1","doi-asserted-by":"publisher","DOI":"10.48550\/arXiv.2310.10631"},{"key":"e_1_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1145\/3090051"},{"key":"e_1_2_1_6_1","doi-asserted-by":"publisher","unstructured":"Rachit Bansal Bidisha Samanta Siddharth Dalmia Nitish Gupta Shikhar Vashishth Sriram Ganapathy Abhishek Bapna Prateek Jain and Partha Talukdar. 2024. LLM Augmented LLMs: Expanding Capabilities through Composition. https:\/\/doi.org\/10.48550\/arXiv.2401.02412 arXiv:2401.02412 [cs].","DOI":"10.48550\/arXiv.2401.02412"},{"key":"e_1_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.1037\/prj0000130"},{"key":"e_1_2_1_8_1","unstructured":"Thorsten Brants Ashok C Popat Peng Xu Franz J Och and Jeffrey Dean. 2007. Large language models in machine translation. (2007)."},{"key":"e_1_2_1_9_1","volume-title":"Random forests. Machine learning 45","author":"Breiman Leo","year":"2001","unstructured":"Leo Breiman. 2001. Random forests. 
Machine learning 45 (2001), 5--32."},{"key":"e_1_2_1_10_1","volume-title":"Advances in Neural Information Processing Systems","volume":"33","author":"Brown Tom","year":"2020","unstructured":"Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language Models are Few-Shot Learners. In Advances in Neural Information Processing Systems, Vol. 33. Curran Associates, Inc., 1877--1901. https:\/\/proceedings.neurips.cc\/paper\/2020\/hash\/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html"},{"key":"e_1_2_1_11_1","volume-title":"Yuanzhi Li, Scott Lundberg, Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro, and Yi Zhang.","author":"Bubeck S\u00e9bastien","year":"2023","unstructured":"S\u00e9bastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro, and Yi Zhang. 2023. Sparks of Artificial General Intelligence: Early experiments with GPT-4. http:\/\/arxiv.org\/abs\/2303.12712"},{"key":"e_1_2_1_12_1","volume-title":"The relationship between the therapeutic alliance and clinical outcomes in cognitive behaviour therapy for adults with depression: A meta-analytic review. Clinical psychology & psychotherapy 25, 3","author":"Cameron Sarah Kate","year":"2018","unstructured":"Sarah Kate Cameron, Jacqui Rodgers, and Dave Dagnan. 2018. The relationship between the therapeutic alliance and clinical outcomes in cognitive behaviour therapy for adults with depression: A meta-analytic review. 
Clinical psychology & psychotherapy 25, 3 (2018), 446--456."},{"key":"e_1_2_1_13_1","volume-title":"30th USENIX Security Symposium (USENIX Security 21)","author":"Carlini Nicholas","year":"2021","unstructured":"Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, et al. 2021. Extracting training data from large language models. In 30th USENIX Security Symposium (USENIX Security 21). 2633--2650."},{"key":"e_1_2_1_14_1","doi-asserted-by":"publisher","DOI":"10.1038\/s41746-020-0233-7"},{"key":"e_1_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.1145\/3422821"},{"key":"e_1_2_1_16_1","volume-title":"Charles Sutton, Sebastian Gehrmann, et al.","author":"Chowdhery Aakanksha","year":"2022","unstructured":"Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311 (2022)."},{"key":"e_1_2_1_17_1","unstructured":"Aakanksha Chowdhery Sharan Narang Jacob Devlin Maarten Bosma Gaurav Mishra Adam Roberts Paul Barham Hyung Won Chung Charles Sutton Sebastian Gehrmann Parker Schuh Kensen Shi Sasha Tsvyashchenko Joshua Maynez Abhishek Rao Parker Barnes Yi Tay Noam Shazeer Vinodkumar Prabhakaran Emily Reif Nan Du Ben Hutchinson Reiner Pope James Bradbury Jacob Austin Michael Isard Guy Gur-Ari Pengcheng Yin Toju Duke Anselm Levskaya Sanjay Ghemawat Sunipa Dev Henryk Michalewski Xavier Garcia Vedant Misra Kevin Robinson Liam Fedus Denny Zhou Daphne Ippolito David Luan Hyeontaek Lim Barret Zoph Alexander Spiridonov Ryan Sepassi David Dohan Shivani Agrawal Mark Omernick Andrew M. 
Dai Thanumalayan Sankaranarayana Pillai Marie Pellat Aitor Lewkowycz Erica Moreira Rewon Child Oleksandr Polozov Katherine Lee Zongwei Zhou Xuezhi Wang Brennan Saeta Mark Diaz Orhan Firat Michele Catasta Jason Wei Kathy Meier-Hellstern Douglas Eck Jeff Dean Slav Petrov and Noah Fiedel. 2022. PaLM: Scaling Language Modeling with Pathways. http:\/\/arxiv.org\/abs\/2204.02311 arXiv:2204.02311 [cs]."},{"key":"e_1_2_1_18_1","unstructured":"Hyung Won Chung Le Hou Shayne Longpre Barret Zoph Yi Tay William Fedus Yunxuan Li Xuezhi Wang Mostafa Dehghani Siddhartha Brahma Albert Webson Shixiang Shane Gu Zhuyun Dai Mirac Suzgun Xinyun Chen Aakanksha Chowdhery Alex Castro-Ros Marie Pellat Kevin Robinson Dasha Valter Sharan Narang Gaurav Mishra Adams Yu Vincent Zhao Yanping Huang Andrew Dai Hongkun Yu Slav Petrov Ed H. Chi Jeff Dean Jacob Devlin Adam Roberts Denny Zhou Quoc V. Le and Jason Wei. 2022. Scaling Instruction-Finetuned Language Models. http:\/\/arxiv.org\/abs\/2210.11416 arXiv:2210.11416 [cs]."},{"key":"e_1_2_1_19_1","volume-title":"BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv:1810.04805 [cs] (May","author":"Devlin Jacob","year":"2019","unstructured":"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv:1810.04805 [cs] (May 2019). http:\/\/arxiv.org\/abs\/1810.04805"},{"key":"e_1_2_1_20_1","doi-asserted-by":"publisher","unstructured":"Xiangjue Dong Yibo Wang Philip S. Yu and James Caverlee. 2023. Probing Explicit and Implicit Gender Bias through LLM Conditional Text Generation. https:\/\/doi.org\/10.48550\/arXiv.2311.00306 arXiv:2311.00306 [cs].","DOI":"10.48550\/arXiv.2311.00306"},{"key":"e_1_2_1_21_1","volume-title":"Proceedings of the 40th International Conference on Machine Learning. 
PMLR, 10764--10799","author":"Gao Luyu","year":"2023","unstructured":"Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. 2023. PAL: Program-aided Language Models. In Proceedings of the 40th International Conference on Machine Learning. PMLR, 10764--10799. https:\/\/proceedings.mlr.press\/v202\/gao23f.html"},{"key":"e_1_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.csl.2017.01.014"},{"key":"e_1_2_1_23_1","volume-title":"Digital health","author":"Hickey Aodh\u00e1n","unstructured":"Aodh\u00e1n Hickey. 2021. The rise of wearables: From innovation to implementation. In Digital health. Elsevier, 357--365."},{"key":"e_1_2_1_24_1","volume-title":"Distilling step-by-step! outperforming larger language models with less training data and smaller model sizes. arXiv preprint arXiv:2305.02301","author":"Hsieh Cheng-Yu","year":"2023","unstructured":"Cheng-Yu Hsieh, Chun-Liang Li, Chih-Kuan Yeh, Hootan Nakhost, Yasuhisa Fujii, Alexander Ratner, Ranjay Krishna, Chen-Yu Lee, and Tomas Pfister. 2023. Distilling step-by-step! outperforming larger language models with less training data and smaller model sizes. arXiv preprint arXiv:2305.02301 (2023)."},{"key":"e_1_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.2196\/16684"},{"key":"e_1_2_1_26_1","doi-asserted-by":"publisher","DOI":"10.2196\/20185"},{"key":"e_1_2_1_27_1","volume-title":"Emergence of Pharmaceutical Industry Growth with Industrial IoT Approach","author":"Indrakumari R","unstructured":"R Indrakumari, T Poongodi, P Suresh, and B Balamurugan. 2020. The growing role of Internet of Things in healthcare wearables. In Emergence of Pharmaceutical Industry Growth with Industrial IoT Approach. 
Elsevier, 163--194."},{"key":"e_1_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.tcm.2019.10.010"},{"key":"e_1_2_1_29_1","doi-asserted-by":"publisher","DOI":"10.3390\/s20123572"},{"key":"e_1_2_1_30_1","doi-asserted-by":"publisher","unstructured":"Lavender Yao Jiang Xujin Chris Liu Nima Pour Nejatian Mustafa Nasir-Moin Duo Wang Anas Abidin Kevin Eaton Howard Antony Riina Ilya Laufer Paawan Punjabi Madeline Miceli Nora C. Kim Cordelia Orillac Zane Schnurman Christopher Livia Hannah Weiss David Kurland Sean Neifert Yosef Dastagirzada Douglas Kondziolka Alexander T. M. Cheung Grace Yang Ming Cao Mona Flores Anthony B. Costa Yindalon Aphinyanaphongs Kyunghyun Cho and Eric Karl Oermann. 2023. Health system-scale language models are all-purpose prediction engines. Nature (June 2023). https:\/\/doi.org\/10.1038\/s41586-023-06160-y","DOI":"10.1038\/s41586-023-06160-y"},{"key":"e_1_2_1_31_1","unstructured":"Yubin Kim Xuhai Xu Daniel McDuff Cynthia Breazeal and Hae Won Park. 2024. Health-LLM: Large Language Models for Health Prediction via Wearable Sensor Data. https:\/\/arxiv.org\/abs\/2401.06866v1"},{"key":"e_1_2_1_32_1","doi-asserted-by":"crossref","unstructured":"Jan Koco\u0144 Igor Cichecki Oliwier Kaszyca Mateusz Kochanek Dominika Szyd\u0142o Joanna Baran Julita Bielaniewicz Marcin Gruza Arkadiusz Janz Kamil Kanclerz et al. 2023. ChatGPT: Jack of all trades master of none. Information Fusion (2023) 101861.","DOI":"10.1016\/j.inffus.2023.101861"},{"key":"e_1_2_1_33_1","doi-asserted-by":"publisher","DOI":"10.1176\/appi.psy.50.6.613"},{"key":"e_1_2_1_34_1","unstructured":"Bishal Lamichhane. 2023. Evaluation of ChatGPT for NLP-based Mental Health Applications. http:\/\/arxiv.org\/abs\/2303.15727"},{"key":"e_1_2_1_35_1","doi-asserted-by":"publisher","DOI":"10.1017\/S0033291722002847"},{"key":"e_1_2_1_36_1","unstructured":"Yunxiang Li Zihan Li Kai Zhang Ruilong Dan Steve Jiang and You Zhang. 2023. 
ChatDoctor: A Medical Chat Model Fine-Tuned on a Large Language Model Meta-AI (LLaMA) Using Medical Domain Knowledge. http:\/\/arxiv.org\/abs\/2303.14070 arXiv:2303.14070 [cs]."},{"key":"e_1_2_1_37_1","doi-asserted-by":"publisher","unstructured":"Nelson F. Liu Kevin Lin John Hewitt Ashwin Paranjape Michele Bevilacqua Fabio Petroni and Percy Liang. 2023. Lost in the Middle: How Language Models Use Long Contexts. https:\/\/doi.org\/10.48550\/arXiv.2307.03172 arXiv:2307.03172 [cs].","DOI":"10.48550\/arXiv.2307.03172"},{"key":"e_1_2_1_38_1","volume-title":"Paolo Di Achille, and Shwetak Patel","author":"Liu Xin","year":"2023","unstructured":"Xin Liu, Daniel McDuff, Geza Kovacs, Isaac Galatzer-Levy, Jacob Sunshine, Jiening Zhan, Ming-Zher Poh, Shun Liao, Paolo Di Achille, and Shwetak Patel. 2023. Large Language Models are Few-Shot Health Learners. In arXiv."},{"key":"e_1_2_1_39_1","volume-title":"Paolo Di Achille, and Shwetak Patel","author":"Liu Xin","year":"2023","unstructured":"Xin Liu, Daniel McDuff, Geza Kovacs, Isaac Galatzer-Levy, Jacob Sunshine, Jiening Zhan, Ming-Zher Poh, Shun Liao, Paolo Di Achille, and Shwetak Patel. 2023. Large Language Models are Few-Shot Health Learners. arXiv preprint arXiv:2305.15525 (2023)."},{"key":"e_1_2_1_40_1","doi-asserted-by":"publisher","DOI":"10.1037\/0022-006X.68.3.438"},{"key":"e_1_2_1_41_1","doi-asserted-by":"publisher","DOI":"10.1145\/3290607.3299041"},{"key":"e_1_2_1_42_1","doi-asserted-by":"publisher","DOI":"10.1145\/2556288.2557220"},{"key":"e_1_2_1_43_1","doi-asserted-by":"publisher","DOI":"10.1145\/3328908"},{"key":"e_1_2_1_44_1","doi-asserted-by":"crossref","unstructured":"Stefanie Nickels Matthew D Edwards Sarah F Poole Dale Winter Jessica Gronsbell Bella Rozenkrants David P Miller Mathias Fleck Alan McLean Bret Peterson et al. 2021. Toward a mobile platform for real-world digital measurement of depression: User-centered design data quality and behavioral and clinical modeling. 
JMIR mental health 8 8 (2021) e27589.","DOI":"10.2196\/27589"},{"key":"e_1_2_1_45_1","volume-title":"Dean Carignan, and Eric Horvitz.","author":"Nori Harsha","year":"2023","unstructured":"Harsha Nori, Nicholas King, Scott Mayer McKinney, Dean Carignan, and Eric Horvitz. 2023. Capabilities of GPT-4 on Medical Challenge Problems. http:\/\/arxiv.org\/abs\/2303.13375 arXiv:2303.13375 [cs]."},{"key":"e_1_2_1_46_1","volume-title":"Chatgpt versus traditional question answering for knowledge graphs: Current status and future directions towards knowledge graph chatbots. arXiv preprint arXiv:2302.06466","author":"Omar Reham","year":"2023","unstructured":"Reham Omar, Omij Mangukiya, Panos Kalnis, and Essam Mansour. 2023. Chatgpt versus traditional question answering for knowledge graphs: Current status and future directions towards knowledge graph chatbots. arXiv preprint arXiv:2302.06466 (2023)."},{"key":"e_1_2_1_47_1","unstructured":"OpenAI. 2023. GPT-4 Technical Report. arXiv:2303.08774 [cs.CL]"},{"key":"e_1_2_1_48_1","doi-asserted-by":"publisher","DOI":"10.48550\/arXiv.2110.08193"},{"key":"e_1_2_1_49_1","volume-title":"Is ChatGPT a general-purpose natural language processing task solver? arXiv preprint arXiv:2302.06476","author":"Qin Chengwei","year":"2023","unstructured":"Chengwei Qin, Aston Zhang, Zhuosheng Zhang, Jiaao Chen, Michihiro Yasunaga, and Diyi Yang. 2023. Is ChatGPT a general-purpose natural language processing task solver? arXiv preprint arXiv:2302.06476 (2023)."},{"key":"e_1_2_1_50_1","unstructured":"Alec Radford Karthik Narasimhan Tim Salimans and Ilya Sutskever. 2018. Improving Language Understanding by Generative Pre-Training."},{"key":"e_1_2_1_51_1","volume-title":"Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. 
Journal of Machine Learning Research","author":"Raffel Colin","year":"2020","unstructured":"Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. Journal of Machine Learning Research (2020)."},{"key":"e_1_2_1_52_1","volume-title":"Leveraging Large Language Models for Multiple Choice Question Answering. In The Eleventh International Conference on Learning Representations. https:\/\/openreview.net\/forum?id=yKbprarjc5B","author":"Robinson Joshua","year":"2023","unstructured":"Joshua Robinson and David Wingate. 2023. Leveraging Large Language Models for Multiple Choice Question Answering. In The Eleventh International Conference on Learning Representations. https:\/\/openreview.net\/forum?id=yKbprarjc5B"},{"key":"e_1_2_1_53_1","doi-asserted-by":"publisher","DOI":"10.2196\/mhealth.9691"},{"key":"e_1_2_1_54_1","doi-asserted-by":"publisher","DOI":"10.2196\/jmir.4273"},{"key":"e_1_2_1_55_1","doi-asserted-by":"publisher","DOI":"10.1145\/3214284"},{"key":"e_1_2_1_56_1","doi-asserted-by":"publisher","DOI":"10.1145\/3359216"},{"key":"e_1_2_1_57_1","doi-asserted-by":"publisher","unstructured":"Omar Shaikh Hongxin Zhang William Held Michael Bernstein and Diyi Yang. 2023. On Second Thought Let's Not Think Step by Step! Bias and Toxicity in Zero-Shot Reasoning. https:\/\/doi.org\/10.48550\/arXiv.2212.08061 arXiv:2212.08061 [cs].","DOI":"10.48550\/arXiv.2212.08061"},{"key":"e_1_2_1_58_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2023.findings-acl.441"},{"key":"e_1_2_1_59_1","volume-title":"Nenad Tomasev, Yun Liu, Renee Wong, Christopher Semturs, S. Sara Mahdavi, Joelle Barral, Dale Webster, Greg S. 
Corrado, Yossi Matias, Shekoofeh Azizi, Alan Karthikesalingam, and Vivek Natarajan.","author":"Singhal Karan","year":"2023","unstructured":"Karan Singhal, Tao Tu, Juraj Gottweis, Rory Sayres, Ellery Wulczyn, Le Hou, Kevin Clark, Stephen Pfohl, Heather Cole-Lewis, Darlene Neal, Mike Schaekermann, Amy Wang, Mohamed Amin, Sami Lachgar, Philip Mansfield, Sushant Prakash, Bradley Green, Ewa Dominowska, Blaise Aguera y Arcas, Nenad Tomasev, Yun Liu, Renee Wong, Christopher Semturs, S. Sara Mahdavi, Joelle Barral, Dale Webster, Greg S. Corrado, Yossi Matias, Shekoofeh Azizi, Alan Karthikesalingam, and Vivek Natarajan. 2023. Towards Expert-Level Medical Question Answering with Large Language Models. http:\/\/arxiv.org\/abs\/2305.09617 arXiv:2305.09617 [cs]."},{"key":"e_1_2_1_60_1","volume-title":"Hashimoto","author":"Taori Rohan","year":"2023","unstructured":"Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. Stanford alpaca: An instruction-following llama model."},{"key":"e_1_2_1_61_1","unstructured":"Hugo Touvron Thibaut Lavril Gautier Izacard Xavier Martinet Marie-Anne Lachaux Timoth\u00e9e Lacroix Baptiste Rozi\u00e8re Naman Goyal Eric Hambro Faisal Azhar Aurelien Rodriguez Armand Joulin Edouard Grave and Guillaume Lample. 2023. LLaMA: Open and Efficient Foundation Language Models. http:\/\/arxiv.org\/abs\/2302.13971 arXiv:2302.13971 [cs]."},{"key":"e_1_2_1_62_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.ijmedinf.2019.103984"},{"key":"e_1_2_1_63_1","doi-asserted-by":"publisher","DOI":"10.2196\/mhealth.5960"},{"key":"e_1_2_1_64_1","doi-asserted-by":"publisher","DOI":"10.1145\/2971648.2971740"},{"key":"e_1_2_1_65_1","volume-title":"Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing. 
ACM, 3--14","author":"Wang Rui","unstructured":"Rui Wang, Fanglin Chen, Zhenyu Chen, Tianxing Li, Gabriella Harari, Stefanie Tignor, Xia Zhou, Dror Ben-Zeev, and Andrew T. Campbell. 2014. StudentLife: Assessing mental health, academic performance and behavioral trends of college students using smartphones. In Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing. ACM, 3--14."},{"key":"e_1_2_1_66_1","doi-asserted-by":"publisher","DOI":"10.1145\/2750858.2804251"},{"key":"e_1_2_1_67_1","doi-asserted-by":"publisher","DOI":"10.1145\/3191775"},{"key":"e_1_2_1_68_1","volume-title":"Aakanksha Chowdhery, and Denny Zhou.","author":"Wang Xuezhi","year":"2022","unstructured":"Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2022. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171 (2022)."},{"key":"e_1_2_1_69_1","volume-title":"Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le.","author":"Wei Jason","year":"2022","unstructured":"Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. 2022. Finetuned Language Models Are Zero-Shot Learners. http:\/\/arxiv.org\/abs\/2109.01652 arXiv:2109.01652 [cs]."},{"key":"e_1_2_1_70_1","volume-title":"Chi, Quoc Le, and Denny Zhou","author":"Wei Jason","year":"2023","unstructured":"Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. 2023. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. http:\/\/arxiv.org\/abs\/2201.11903 arXiv:2201.11903 [cs]."},{"key":"e_1_2_1_71_1","unstructured":"Chaoyi Wu Xiaoman Zhang Ya Zhang Yanfeng Wang and Weidi Xie. 2023. PMC-LLaMA: Further Finetuning LLaMA on Medical Papers. 
http:\/\/arxiv.org\/abs\/2304.14454 arXiv:2304.14454 [cs]."},{"key":"e_1_2_1_72_1","doi-asserted-by":"publisher","DOI":"10.1145\/3448107"},{"key":"e_1_2_1_73_1","doi-asserted-by":"publisher","DOI":"10.1145\/3569485"},{"key":"e_1_2_1_74_1","doi-asserted-by":"publisher","DOI":"10.1145\/3569485"},{"key":"e_1_2_1_75_1","doi-asserted-by":"publisher","DOI":"10.1145\/3448124"},{"key":"e_1_2_1_76_1","volume-title":"GLOBEM Dataset: Multi-Year Datasets for Longitudinal Human Behavior Modeling Generalization. In Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track. 18","author":"Xu Xuhai","year":"2022","unstructured":"Xuhai Xu, Han Zhang, Yasaman Sefidgar, Yiyi Ren, Xin Liu, Woosuk Seo, Jennifer Brown, Kevin Kuehn, Mike Merrill, Paula Nurius, Shwetak Patel, Tim Althoff, Margaret E Morris, Eve Riskin, Jennifer Mankoff, and Anind K Dey. 2022. GLOBEM Dataset: Multi-Year Datasets for Longitudinal Human Behavior Modeling Generalization. In Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track. 18."},{"key":"e_1_2_1_77_1","volume-title":"Effective distillation of table-based reasoning ability from llms. arXiv preprint arXiv:2309.13182","author":"Yang Bohao","year":"2023","unstructured":"Bohao Yang, Chen Tang, Kun Zhao, Chenghao Xiao, and Chenghua Lin. 2023. Effective distillation of table-based reasoning ability from llms. arXiv preprint arXiv:2309.13182 (2023)."},{"key":"e_1_2_1_78_1","unstructured":"Kailai Yang Shaoxiong Ji Tianlin Zhang Qianqian Xie and Sophia Ananiadou. 2023. On the Evaluations of ChatGPT and Emotion-enhanced Prompting for Mental Health Analysis. http:\/\/arxiv.org\/abs\/2304.03347"},{"key":"e_1_2_1_79_1","volume-title":"Proceedings of the 35th Conference on Computational Linguistics and Speech Processing (ROCLING 2023","author":"Yeh Kai-Ching","year":"2023","unstructured":"Kai-Ching Yeh, Jou-An Chi, Da-Chen Lian, and Shu-Kai Hsieh. 2023. Evaluating Interfaced LLM Bias. 
In Proceedings of the 35th Conference on Computational Linguistics and Speech Processing (ROCLING 2023), Jheng-Long Wu and Ming-Hsiang Su (Eds.). The Association for Computational Linguistics and Chinese Language Processing (ACLCLP), Taipei City, Taiwan, 292--299. https:\/\/aclanthology.org\/2023.rocling-1.37"},{"key":"e_1_2_1_80_1","doi-asserted-by":"publisher","DOI":"10.1145\/3538514"},{"key":"e_1_2_1_81_1","volume-title":"Can chatgpt understand too? a comparative study on chatgpt and fine-tuned bert. arXiv preprint arXiv:2302.10198","author":"Zhong Qihuang","year":"2023","unstructured":"Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, and Dacheng Tao. 2023. Can chatgpt understand too? a comparative study on chatgpt and fine-tuned bert. arXiv preprint arXiv:2302.10198 (2023)."},{"key":"e_1_2_1_82_1","volume-title":"Chi","author":"Zhou Denny","year":"2023","unstructured":"Denny Zhou, Nathanael Sch\u00e4rli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc Le, and Ed Chi. 2023. Least-to-Most Prompting Enables Complex Reasoning in Large Language Models. 
http:\/\/arxiv.org\/abs\/2205.10625 arXiv:2205.10625 [cs]."}],"container-title":["Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3659604","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3659604","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,8,22]],"date-time":"2025-08-22T17:01:05Z","timestamp":1755882065000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3659604"}},"subtitle":["Towards Analyzing and Reasoning About Mobile and Behavioral Health Data With Large Language Models"],"short-title":[],"issued":{"date-parts":[[2024,5,13]]},"references-count":82,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2024,5,13]]}},"alternative-id":["10.1145\/3659604"],"URL":"https:\/\/doi.org\/10.1145\/3659604","relation":{},"ISSN":["2474-9567"],"issn-type":[{"value":"2474-9567","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,5,13]]},"assertion":[{"value":"2024-05-15","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}