{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,13]],"date-time":"2026-04-13T22:52:54Z","timestamp":1776120774796,"version":"3.50.1"},"reference-count":35,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2022,1,27]],"date-time":"2022-01-27T00:00:00Z","timestamp":1643241600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2022,1,27]],"date-time":"2022-01-27T00:00:00Z","timestamp":1643241600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100002790","name":"Canadian Network for Research and Innovation in Machining Technology, Natural Sciences and Engineering Research Council of Canada","doi-asserted-by":"publisher","id":[{"id":"10.13039\/501100002790","id-type":"DOI","asserted-by":"publisher"}]},{"name":"RBC fellowship"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["npj Digit. Med."],"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Current clinical note-taking approaches cannot capture the entirety of information available from patient encounters and detract from patient-clinician interactions. By surveying healthcare providers\u2019 current note-taking practices and attitudes toward new clinical technologies, we developed a patient-centered paradigm for clinical note-taking that makes use of hybrid tablet\/keyboard devices and artificial intelligence (AI) technologies. PhenoPad is an intelligent clinical note-taking interface that captures free-form notes and standard phenotypic information via a variety of modalities, including speech and natural language processing techniques, handwriting recognition, and more. 
The output is unobtrusively presented on mobile devices to clinicians for real-time validation and can be automatically transformed into digital formats that would be compatible with integration into electronic health record systems. Semi-structured interviews and trials in clinical settings rendered positive feedback from both clinicians and patients, demonstrating that AI-enabled clinical note-taking under our design improves ease and breadth of information captured during clinical visits without compromising patient-clinician interactions. We open source a proof-of-concept implementation that can lay the foundation for broader clinical use cases.<\/jats:p>","DOI":"10.1038\/s41746-021-00555-9","type":"journal-article","created":{"date-parts":[[2022,1,27]],"date-time":"2022-01-27T11:03:15Z","timestamp":1643281395000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":22,"title":["PhenoPad: Building AI enabled note-taking interfaces for patient 
encounters"],"prefix":"10.1038","volume":"5","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-2684-6496","authenticated-orcid":false,"given":"Jixuan","family":"Wang","sequence":"first","affiliation":[]},{"given":"Jingbo","family":"Yang","sequence":"additional","affiliation":[]},{"given":"Haochi","family":"Zhang","sequence":"additional","affiliation":[]},{"given":"Helen","family":"Lu","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0001-6831-8398","authenticated-orcid":false,"given":"Marta","family":"Skreta","sequence":"additional","affiliation":[]},{"given":"Mia","family":"Husi\u0107","sequence":"additional","affiliation":[]},{"given":"Aryan","family":"Arbabi","sequence":"additional","affiliation":[]},{"given":"Nicole","family":"Sultanum","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7947-2243","authenticated-orcid":false,"given":"Michael","family":"Brudno","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2022,1,27]]},"reference":[{"key":"555_CR1","doi-asserted-by":"publisher","first-page":"753","DOI":"10.7326\/M16-0961","volume":"165","author":"C Sinsky","year":"2016","unstructured":"Sinsky, C. et al. Allocation of physician time in ambulatory practice: A time and motion study in 4 specialties. Ann. Intern. Med. 165, 753\u2013760 (2016).","journal-title":"Ann. Intern. Med."},{"key":"555_CR2","doi-asserted-by":"publisher","first-page":"505","DOI":"10.1197\/jamia.M1700","volume":"12","author":"L Poissant","year":"2005","unstructured":"Poissant, L., Pereira, J., Tamblyn, R. & Kawasumi, Y. The impact of electronic health records on time efficiency of physicians and nurses: A systematic review. J. Am. Med. Inform. Assoc. 12, 505\u2013516 (2005).","journal-title":"J. Am. Med. Inform. Assoc."},{"key":"555_CR3","first-page":"499","volume":"2006","author":"JA Linder","year":"2006","unstructured":"Linder, J. A. et al. 
Barriers to electronic health record use during patient visits. AMIA Annu. Symp. Proc. 2006, 499 (2006).","journal-title":"AMIA Annu. Symp. Proc."},{"key":"555_CR4","doi-asserted-by":"publisher","first-page":"1617","DOI":"10.1016\/S0140-6736(98)08309-3","volume":"352","author":"SM Powsner","year":"1998","unstructured":"Powsner, S. M., Wyatt, J. C. & Wright, P. Opportunities for and challenges of computerisation. Lancet 352, 1617\u20131622 (1998).","journal-title":"Lancet"},{"key":"555_CR5","doi-asserted-by":"publisher","unstructured":"Schaefbauer, C. L. & Siek, K. A. Cautious, but optimistic: An ethnographic study on location and content of primary care providers using electronic medical records. In 2011 5th International Conference on Pervasive Computing Technologies for Healthcare (PervasiveHealth) and Workshops 63\u201370 (IEEE, 2011). https:\/\/doi.org\/10.4108\/icst.pervasivehealth.2011.246024.","DOI":"10.4108\/icst.pervasivehealth.2011.246024"},{"key":"555_CR6","doi-asserted-by":"publisher","first-page":"419","DOI":"10.1370\/afm.2121","volume":"15","author":"BG Arndt","year":"2017","unstructured":"Arndt, B. G. et al. Tethered to the EHR: Primary care physician workload assessment using EHR event log data and time-motion observations. Ann. Fam. Med. 15, 419\u2013426 (2017).","journal-title":"Ann. Fam. Med."},{"key":"555_CR7","doi-asserted-by":"publisher","first-page":"1377","DOI":"10.1001\/archinternmed.2012.3199","volume":"172","author":"TD Shanafelt","year":"2012","unstructured":"Shanafelt, T. D. et al. Burnout and satisfaction with work-life balance among US physicians relative to the general US population. Arch. Intern. Med. 172, 1377\u20131385 (2012).","journal-title":"Arch. Intern. Med."},{"key":"555_CR8","doi-asserted-by":"crossref","unstructured":"Babbott, S. et al. Electronic medical records and physician stress in primary care: Results from the MEMO Study. J. Am. Med. Inform. Assoc. 
21, e100\u20136 (2014).","DOI":"10.1136\/amiajnl-2013-001875"},{"key":"555_CR9","doi-asserted-by":"publisher","first-page":"836","DOI":"10.1016\/j.mayocp.2016.05.007","volume":"91","author":"TD Shanafelt","year":"2016","unstructured":"Shanafelt, T. D. et al. Relationship between clerical burden and characteristics of the electronic environment with physician burnout and professional satisfaction. Mayo Clin. Proc. 91, 836\u2013848 (2016).","journal-title":"Mayo Clin. Proc."},{"key":"555_CR10","doi-asserted-by":"publisher","first-page":"1600","DOI":"10.1016\/j.mayocp.2015.08.023","volume":"90","author":"TD Shanafelt","year":"2015","unstructured":"Shanafelt, T. D. et al. Changes in burnout and satisfaction with work-life balance in physicians and the general US working population between 2011 and 2014. Mayo Clin. Proc. 90, 1600\u20131613 (2015).","journal-title":"Mayo Clin. Proc."},{"key":"555_CR11","doi-asserted-by":"publisher","first-page":"1789","DOI":"10.1056\/NEJMp1809698","volume":"379","author":"M Ashton","year":"2018","unstructured":"Ashton, M. Getting rid of stupid stuff. N. Engl. J. Med. 379, 1789\u20131791 (2018).","journal-title":"N. Engl. J. Med."},{"key":"555_CR12","doi-asserted-by":"publisher","first-page":"82","DOI":"10.1109\/MSP.2012.2205597","volume":"29","author":"G Hinton","year":"2012","unstructured":"Hinton, G. et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Process. Mag. 29, 82\u201397 (2012).","journal-title":"IEEE Signal Process. Mag."},{"key":"555_CR13","unstructured":"Amodei, D. et al. Deep Speech 2: End-to-End Speech Recognition in English and Mandarin. In Proceedings of The 33rd International Conference on Machine Learning (eds. Balcan, M. F. & Weinberger, K. Q.) vol. 48, 173\u2013182 (PMLR, 2016)."},{"key":"555_CR14","doi-asserted-by":"crossref","unstructured":"Graves, A., Mohamed, A. & Hinton, G. Speech recognition with deep recurrent neural networks. 
In 2013 IEEE International Conference on Acoustics, Speech, and Signal Processing 6645\u20136649 (IEEE, 2013).","DOI":"10.1109\/ICASSP.2013.6638947"},{"key":"555_CR15","unstructured":"Chorowski, J., Bahdanau, D., Serdyuk, D., Cho, K. & Bengio, Y. Attention-Based Models for Speech Recognition. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1, 577\u2013585 (MIT Press, 2015)."},{"key":"555_CR16","first-page":"2972","volume":"2018","author":"C-C Chiu","year":"2018","unstructured":"Chiu, C.-C. et al. Speech recognition for medical conversations. Interspeech 2018, 2972\u20132976 (2018).","journal-title":"Interspeech"},{"key":"555_CR17","doi-asserted-by":"publisher","first-page":"462","DOI":"10.1136\/jamia.2000.0070462","volume":"7","author":"EG Devine","year":"2000","unstructured":"Devine, E. G., Gaehde, S. A. & Curtis, A. C. Comparative evaluation of three continuous speech recognition software packages in the generation of medical reports. J. Am. Med. Inform. Assoc. 7, 462\u2013468 (2000).","journal-title":"J. Am. Med. Inform. Assoc."},{"key":"555_CR18","doi-asserted-by":"publisher","first-page":"617","DOI":"10.2214\/ajr.174.3.1740617","volume":"174","author":"MR Ramaswamy","year":"2000","unstructured":"Ramaswamy, M. R., Chaljub, G., Esch, O., Fanning, D. D. & vanSonnenberg, E. Continuous speech recognition in MR imaging reporting: Advantages, disadvantages, and impact. Am. J. Roentgenol. 174, 617\u2013622 (2000).","journal-title":"Am. J. Roentgenol."},{"key":"555_CR19","doi-asserted-by":"publisher","first-page":"101","DOI":"10.1136\/jamia.2001.0080101","volume":"8","author":"SM Borowitz","year":"2001","unstructured":"Borowitz, S. M. Computer-based speech recognition as an alternative to medical transcription. J. Am. Med. Inform. Assoc. 8, 101\u2013102 (2001).","journal-title":"J. Am. Med. Inform. 
Assoc."},{"key":"555_CR20","doi-asserted-by":"publisher","DOI":"10.1186\/1472-6947-9-S1-S3","volume":"9","author":"JG Klann","year":"2009","unstructured":"Klann, J. G. & Szolovits, P. An intelligent listening framework for capturing encounter notes from a doctor-patient dialog. BMC Med. Inform. Decis. Mak. 9, S3 (2009).","journal-title":"BMC Med. Inform. Decis. Mak."},{"key":"555_CR21","unstructured":"Lin, C.-Y. ROUGE: A Package for Automatic Evaluation of Summaries. In Text Summarization Branches Out 74\u201381 (Association for Computational Linguistics, 2004)."},{"key":"555_CR22","doi-asserted-by":"publisher","unstructured":"Enarvi, S. et al. Generating Medical Reports from Patient-Doctor Conversations Using Sequence-to-Sequence Models. In Proceedings of the First Workshop on Natural Language Processing for Medical Conversations 22\u201330 (Association for Computational Linguistics, 2020). https:\/\/doi.org\/10.18653\/v1\/2020.nlpmc-1.4.","DOI":"10.18653\/v1\/2020.nlpmc-1.4"},{"key":"555_CR23","doi-asserted-by":"publisher","unstructured":"Krishna, K., Khosla, S., Bigham, J. & Lipton, Z. Generating SOAP Notes from Doctor-Patient Conversations Using Modular Summarization Techniques. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers) (Association for Computational Linguistics, 2021). https:\/\/doi.org\/10.18653\/v1\/2021.acl-long.384.","DOI":"10.18653\/v1\/2021.acl-long.384"},{"key":"555_CR24","unstructured":"The Topol Review. Preparing the healthcare workforce to deliver the digital future. An independent report on behalf of the Secretary of State for Health and Social Care. https:\/\/topol.hee.nhs.uk\/ (2019)."},{"key":"555_CR25","doi-asserted-by":"publisher","first-page":"210","DOI":"10.1080\/07448481.1985.9939607","volume":"33","author":"FW Peabody","year":"1985","unstructured":"Peabody, F. W. The care of the patient. J. Am. Coll. 
Health 33, 210\u2013216 (1985).","journal-title":"J. Am. Coll. Health"},{"key":"555_CR26","unstructured":"Povey, D. et al. The Kaldi Speech Recognition Toolkit. In IEEE 2011 Workshop on Automatic Speech Recognition and Understanding (IEEE Signal Processing Society, 2011)."},{"key":"555_CR27","first-page":"289","volume":"2020","author":"J Wang","year":"2020","unstructured":"Wang, J. et al. Speaker attribution with voice profiles by graph-based semi-supervised learning. Interspeech 2020, 289\u2013293 (2020).","journal-title":"Interspeech"},{"key":"555_CR28","doi-asserted-by":"publisher","unstructured":"Wang, J. et al. Speaker Diarization with Session-Level Speaker Embedding Refinement Using Graph Neural Networks. In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 7109\u20137113 (2020). https:\/\/doi.org\/10.1109\/ICASSP40776.2020.9054176.","DOI":"10.1109\/ICASSP40776.2020.9054176"},{"key":"555_CR29","doi-asserted-by":"publisher","unstructured":"Wang, J., Wang, K.-C., Law, M. T., Rudzicz, F. & Brudno, M. Centroid-based Deep Metric Learning for Speaker Recognition. In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 3652\u20133656 (2019). https:\/\/doi.org\/10.1109\/ICASSP.2019.8683393.","DOI":"10.1109\/ICASSP.2019.8683393"},{"key":"555_CR30","doi-asserted-by":"crossref","unstructured":"Arbabi, A., Adams, D. R., Fidler, S. & Brudno, M. Identifying clinical terms in medical text using ontology-guided machine learning. JMIR Med. Inform. 7, e12596 (2019).","DOI":"10.2196\/12596"},{"key":"555_CR31","unstructured":"Project EmpowerMD. https:\/\/www.microsoft.com\/en-us\/research\/project\/empowermd\/."},{"key":"555_CR32","unstructured":"ASpIRE Chain Model. http:\/\/kaldi-asr.org\/models\/m1."},{"key":"555_CR33","first-page":"1086","volume":"2018","author":"JS Chung","year":"2018","unstructured":"Chung, J. S., Nagrani, A. & Zisserman, A. 
VoxCeleb2: Deep speaker recognition. Interspeech 2018, 1086\u20131090 (2018).","journal-title":"Interspeech"},{"key":"555_CR34","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1038\/s41467-021-25578-4","volume":"12","author":"M Skreta","year":"2021","unstructured":"Skreta, M. et al. Automatically disambiguating medical acronyms with ontology-aware deep learning. Nat. Commun. 12, 1\u201310 (2021).","journal-title":"Nat. Commun."},{"key":"555_CR35","doi-asserted-by":"publisher","first-page":"1057","DOI":"10.1002\/humu.22347","volume":"34","author":"M Girdea","year":"2013","unstructured":"Girdea, M. et al. PhenoTips: Patient phenotyping software for clinical and research use. Hum. Mutat. 34, 1057\u20131065 (2013).","journal-title":"Hum. Mutat."}],"container-title":["npj Digital Medicine"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.nature.com\/articles\/s41746-021-00555-9.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/www.nature.com\/articles\/s41746-021-00555-9","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/www.nature.com\/articles\/s41746-021-00555-9.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2022,11,25]],"date-time":"2022-11-25T08:21:12Z","timestamp":1669364472000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.nature.com\/articles\/s41746-021-00555-9"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,1,27]]},"references-count":35,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2022,12]]}},"alternative-id":["555"],"URL":"https:\/\/doi.org\/10.1038\/s41746-021-00555-9","relation":{},"ISSN":["2398-6352"],"issn-type":[{"value":"2398-6352","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,1,27]]},"assertion":[{"value":"1 June 
2021","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"22 December 2021","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"27 January 2022","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"The authors declare no competing interests.","order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"12"}}