{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,26]],"date-time":"2026-02-26T10:24:14Z","timestamp":1772101454113,"version":"3.50.1"},"reference-count":53,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2022,7,30]],"date-time":"2022-07-30T00:00:00Z","timestamp":1659139200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2022,7,30]],"date-time":"2022-07-30T00:00:00Z","timestamp":1659139200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["61931013"],"award-info":[{"award-number":["61931013"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["82171886"],"award-info":[{"award-number":["82171886"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["61931013"],"award-info":[{"award-number":["61931013"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["82171886"],"award-info":[{"award-number":["82171886"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of 
China","doi-asserted-by":"crossref","award":["61931013"],"award-info":[{"award-number":["61931013"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["62171297"],"award-info":[{"award-number":["62171297"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["61931013"],"award-info":[{"award-number":["61931013"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"name":"Beijing Hospitals Authority Clinical Medicine Development of Special Funding Support","award":["ZYLX202101"],"award-info":[{"award-number":["ZYLX202101"]}]},{"name":"Beijing Hospitals Authority Clinical Medicine Development of Special Funding Support","award":["ZYLX202101"],"award-info":[{"award-number":["ZYLX202101"]}]},{"name":"Beijing Hospitals Authority Clinical Medicine Development of Special Funding Support","award":["ZYLX202101"],"award-info":[{"award-number":["ZYLX202101"]}]},{"DOI":"10.13039\/501100009592","name":"Beijing Municipal Science and Technology Commission","doi-asserted-by":"publisher","award":["Z201100005620009"],"award-info":[{"award-number":["Z201100005620009"]}],"id":[{"id":"10.13039\/501100009592","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100009592","name":"Beijing Municipal Science and Technology Commission","doi-asserted-by":"publisher","award":["Z201100005620009"],"award-info":[{"award-number":["Z201100005620009"]}],"id":[{"id":"10.13039\/501100009592","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["BMC Med Inform Decis 
Mak"],"published-print":{"date-parts":[[2022,12]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:sec><jats:title>Background<\/jats:title><jats:p>Given the increasing number of people suffering from tinnitus, the accurate categorization of patients with actionable reports is attractive in assisting clinical decision making. However, this process requires experienced physicians and significant human labor. Natural language processing (NLP) has shown great potential in big data analytics of medical texts; yet, its application to domain-specific analysis of radiology reports is limited.<\/jats:p><\/jats:sec><jats:sec><jats:title>Objective<\/jats:title><jats:p>The aim of this study is to propose a novel approach in classifying actionable radiology reports of tinnitus patients using bidirectional encoder representations from transformer BERT-based models and evaluate the benefits of in domain pre-training (IDPT) along with a sequence adaptation strategy.<\/jats:p><\/jats:sec><jats:sec><jats:title>Methods<\/jats:title><jats:p>A total of 5864 temporal bone computed tomography(CT) reports are labeled by two experienced radiologists as follows: (1) normal findings without notable lesions; (2) notable lesions but uncorrelated to tinnitus; and (3) at least one lesion considered as potential cause of tinnitus. We then constructed a framework consisting of deep learning (DL) neural networks and self-supervised BERT models. A tinnitus domain-specific corpus is used to pre-train the BERT model to further improve its embedding weights. In addition, we conducted an experiment to evaluate multiple groups of max sequence length settings in BERT to reduce the excessive quantity of calculations. 
We determined the most promising approach through a comprehensive comparison of F1-scores and AUC values.<\/jats:p><\/jats:sec><jats:sec><jats:title>Results<\/jats:title><jats:p>In the first experiment, the BERT fine-tuning model achieved a more promising result (AUC-0.868, F1-0.760) than the Word2Vec-based models (AUC-0.767, F1-0.733) on validation data. In the second experiment, the BERT in-domain pre-training model (AUC-0.948, F1-0.841) performed significantly better than the BERT-based model (AUC-0.868, F1-0.760). Additionally, among the variants of BERT fine-tuning models, Mengzi achieved the highest AUC of 0.878 (F1-0.764). Finally, we found that the BERT max-sequence-length of 128 tokens achieved an AUC of 0.866 (F1-0.736), which is almost equal to that of the BERT max-sequence-length of 512 tokens (AUC-0.868, F1-0.760).<\/jats:p><\/jats:sec><jats:sec><jats:title>Conclusion<\/jats:title><jats:p>We developed a reliable BERT-based framework for tinnitus diagnosis from Chinese radiology reports, along with a sequence adaptation strategy to reduce computational cost while maintaining accuracy. 
The findings could provide a reference for NLP development in Chinese radiology reports.<\/jats:p><\/jats:sec>","DOI":"10.1186\/s12911-022-01946-y","type":"journal-article","created":{"date-parts":[[2022,7,30]],"date-time":"2022-07-30T05:03:53Z","timestamp":1659157433000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":21,"title":["Automatic text classification of actionable radiology reports of tinnitus patients using bidirectional encoder representations from transformer (BERT) and in-domain pre-training (IDPT)"],"prefix":"10.1186","volume":"22","author":[{"given":"Jia","family":"Li","sequence":"first","affiliation":[]},{"given":"Yucong","family":"Lin","sequence":"additional","affiliation":[]},{"given":"Pengfei","family":"Zhao","sequence":"additional","affiliation":[]},{"given":"Wenjuan","family":"Liu","sequence":"additional","affiliation":[]},{"given":"Linkun","family":"Cai","sequence":"additional","affiliation":[]},{"given":"Jing","family":"Sun","sequence":"additional","affiliation":[]},{"given":"Lei","family":"Zhao","sequence":"additional","affiliation":[]},{"given":"Zhenghan","family":"Yang","sequence":"additional","affiliation":[]},{"given":"Hong","family":"Song","sequence":"additional","affiliation":[]},{"given":"Han","family":"Lv","sequence":"additional","affiliation":[]},{"given":"Zhenchang","family":"Wang","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2022,7,30]]},"reference":[{"issue":"11S","key":"1946_CR1","doi-asserted-by":"publisher","first-page":"S584","DOI":"10.1016\/j.jacr.2017.08.052","volume":"14","author":"MM Kessler","year":"2017","unstructured":"Kessler MM, Moussa M, Bykowski J, et al. ACR Appropriateness criteria((R)) tinnitus. J Am Coll Radiol. 2017;14(11S):S584\u201391. 
https:\/\/doi.org\/10.1016\/j.jacr.2017.08.052.","journal-title":"J Am Coll Radiol"},{"issue":"9","key":"1946_CR2","doi-asserted-by":"publisher","first-page":"578","DOI":"10.3766\/jaaa.22.9.3","volume":"22","author":"X Xu","year":"2011","unstructured":"Xu X, Bu X, Zhou L, et al. An epidemiologic study of tinnitus in a population in Jiangsu Province, China. J Am Acad Audiol. 2011;22(9):578\u201385. https:\/\/doi.org\/10.3766\/jaaa.22.9.3.","journal-title":"J Am Acad Audiol"},{"key":"1946_CR3","unstructured":"American Tinnitus Association (ATA) [EB\/OL]. Accessed February 1, 2022. https:\/\/www.ata.org\/understanding-facts\/demographics."},{"issue":"2","key":"1946_CR4","doi-asserted-by":"publisher","first-page":"S1","DOI":"10.1177\/0194599814545325","volume":"151","author":"DE Tunkel","year":"2014","unstructured":"Tunkel DE, Bauer CA, Sun GH, et al. Clinical practice guideline: tinnitus. Otolaryngol Head Neck Surg. 2014;151(2):S1\u201340. https:\/\/doi.org\/10.1177\/0194599814545325.","journal-title":"Otolaryngol Head Neck Surg"},{"issue":"2","key":"1946_CR5","doi-asserted-by":"publisher","first-page":"7","DOI":"10.1590\/0100-3984.2019.52.2e2","volume":"52","author":"RLE Gomes","year":"2019","unstructured":"Gomes RLE. Review and update of temporal bone imaging. Radiol Brasil. 2019;52(2):7\u20138. https:\/\/doi.org\/10.1590\/0100-3984.2019.52.2e2.","journal-title":"Radiol Brasil"},{"issue":"5","key":"1946_CR6","doi-asserted-by":"publisher","first-page":"1446","DOI":"10.1148\/rg.2021200113","volume":"41","author":"A Mozayan","year":"2021","unstructured":"Mozayan A, Fabbri AR, Maneevese M, et al. Practical guide to natural language processing for radiology. Radiographics. 2021;41(5):1446\u201353. https:\/\/doi.org\/10.1148\/rg.2021200113.","journal-title":"Radiographics"},{"issue":"1","key":"1946_CR7","doi-asserted-by":"publisher","first-page":"171","DOI":"10.1007\/s13244-016-0534-1","volume":"8","author":"AP Brady","year":"2017","unstructured":"Brady AP. 
Error and discrepancy in radiology: inevitable or avoidable? Insights Imag. 2017;8(1):171\u201382. https:\/\/doi.org\/10.1007\/s13244-016-0534-1.","journal-title":"Insights Imag"},{"issue":"4","key":"1946_CR8","doi-asserted-by":"publisher","first-page":"458","DOI":"10.1016\/j.jacr.2018.09.052","volume":"16","author":"AB Shinagare","year":"2019","unstructured":"Shinagare AB, Lacson R, Boland GW, et al. Radiologist preferences, agreement, and variability in phrases used to convey diagnostic certainty in radiology reports. J Am Coll Radiol. 2019;16(4):458\u201364. https:\/\/doi.org\/10.1016\/j.jacr.2018.09.052.","journal-title":"J Am Coll Radiol"},{"issue":"6","key":"1946_CR9","doi-asserted-by":"publisher","first-page":"1845","DOI":"10.1148\/rg.2018180021","volume":"38","author":"JN Itri","year":"2018","unstructured":"Itri JN, Tappouni RR, McEachern RO, et al. Fundamentals of diagnostic error in imaging. Radiographics. 2018;38(6):1845\u201365. https:\/\/doi.org\/10.1148\/rg.2018180021.","journal-title":"Radiographics"},{"issue":"1","key":"1946_CR10","doi-asserted-by":"publisher","first-page":"248","DOI":"10.1186\/s12891-020-03200-w","volume":"21","author":"SH Kim","year":"2020","unstructured":"Kim SH, Sobez LM, Spiro JE, et al. Structured reporting has the potential to reduce reporting times of dual-energy x-ray absorptiometry exams. BMC Musculoskelet Disord. 2020;21(1):248. https:\/\/doi.org\/10.1186\/s12891-020-03200-w.","journal-title":"BMC Musculoskelet Disord"},{"issue":"2","key":"1946_CR11","doi-asserted-by":"publisher","first-page":"329","DOI":"10.1148\/radiol.16142770","volume":"279","author":"E Pons","year":"2016","unstructured":"Pons E, Braun LM, Hunink MG, et al. Natural language processing in radiology: a systematic review. Radiology. 2016;279(2):329\u201343. 
https:\/\/doi.org\/10.1148\/radiol.16142770.","journal-title":"Radiology"},{"key":"1946_CR12","doi-asserted-by":"publisher","DOI":"10.1016\/j.jbi.2020.103665","volume":"113","author":"TL Chen","year":"2021","unstructured":"Chen TL, Emerling M, Chaudhari GR, et al. Domain specific word embeddings for natural language processing in radiology. J Biomed Inform. 2021;113: 103665. https:\/\/doi.org\/10.1016\/j.jbi.2020.103665.","journal-title":"J Biomed Inform"},{"issue":"6","key":"1946_CR13","doi-asserted-by":"publisher","first-page":"919","DOI":"10.1016\/j.rcl.2021.06.003","volume":"59","author":"J Steinkamp","year":"2021","unstructured":"Steinkamp J, Cook TS. Basic artificial intelligence techniques: natural language processing of radiology reports. Radiol Clin North Am. 2021;59(6):919\u201331. https:\/\/doi.org\/10.1016\/j.rcl.2021.06.003.","journal-title":"Radiol Clin North Am"},{"issue":"6","key":"1946_CR14","doi-asserted-by":"publisher","DOI":"10.1371\/journal.pone.0234908","volume":"15","author":"CJ Ong","year":"2020","unstructured":"Ong CJ, Orfanoudaki A, Zhang R, et al. Machine learning and natural language processing methods to identify ischemic stroke, acuity and location from radiology reports. PLoS ONE. 2020;15(6): e234908. https:\/\/doi.org\/10.1371\/journal.pone.0234908.","journal-title":"PLoS ONE"},{"issue":"1","key":"1946_CR15","doi-asserted-by":"publisher","DOI":"10.2196\/19689","volume":"23","author":"H Liu","year":"2021","unstructured":"Liu H, Zhang Z, Xu Y, et al. Use of BERT (bidirectional encoder representations from transformers)-based deep learning method for extracting evidences in chinese radiology reports: development of a computer-aided liver cancer diagnosis framework. J Med Internet Res. 2021;23(1): e19689. 
https:\/\/doi.org\/10.2196\/19689.","journal-title":"J Med Internet Res"},{"issue":"1","key":"1946_CR16","doi-asserted-by":"publisher","first-page":"262","DOI":"10.1186\/s12911-021-01623-6","volume":"21","author":"Y Nakamura","year":"2021","unstructured":"Nakamura Y, Hanaoka S, Nomura Y, et al. Automatic detection of actionable radiology reports using bidirectional encoder representations from transformers. BMC Med Inform Decis Mak. 2021;21(1):262. https:\/\/doi.org\/10.1186\/s12911-021-01623-6.","journal-title":"BMC Med Inform Decis Mak"},{"key":"1946_CR17","first-page":"2251","volume":"2020","author":"S Datta","year":"2020","unstructured":"Datta S, Ulinski M, Godfrey-Stovall J, et al. Rad-spatialnet: a frame-based resource for fine-grained spatial relations in radiology reports. LREC Int Conf Lang Resour Eval. 2020;2020:2251\u201360.","journal-title":"LREC Int Conf Lang Resour Eval"},{"key":"1946_CR18","doi-asserted-by":"publisher","first-page":"79","DOI":"10.1016\/j.artmed.2018.11.004","volume":"97","author":"I Banerjee","year":"2019","unstructured":"Banerjee I, Ling Y, Chen MC, et al. Comparative effectiveness of convolutional neural network (CNN) and recurrent neural network (RNN) architectures for radiology text report classification. Artif Intell Med. 2019;97:79\u201388. https:\/\/doi.org\/10.1016\/j.artmed.2018.11.004.","journal-title":"Artif Intell Med"},{"key":"1946_CR19","unstructured":"Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need. Adv Neural Inf Process Systems, 2017,30."},{"key":"1946_CR20","doi-asserted-by":"publisher","DOI":"10.1016\/j.artmed.2019.101726","volume":"101","author":"S Gao","year":"2019","unstructured":"Gao S, Qiu JX, Alawad M, et al. Classifying cancer pathology reports with hierarchical self-attention networks. Artif Intell Med. 2019;101: 101726. https:\/\/doi.org\/10.1016\/j.artmed.2019.101726.","journal-title":"Artif Intell Med"},{"key":"1946_CR21","unstructured":"Devlin J, Chang M, Lee K, et al. 
Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018."},{"key":"1946_CR22","doi-asserted-by":"publisher","first-page":"225","DOI":"10.1016\/j.aiopen.2021.08.002","volume":"2","author":"X Han","year":"2021","unstructured":"Han X, Zhang Z, Ding N, et al. Pre-trained models: past, present and future. AI Open. 2021;2:225\u201350. https:\/\/doi.org\/10.1016\/j.aiopen.2021.08.002.","journal-title":"AI Open"},{"issue":"10","key":"1946_CR23","doi-asserted-by":"publisher","first-page":"1872","DOI":"10.1007\/s11431-020-1647-3","volume":"63","author":"X Qiu","year":"2020","unstructured":"Qiu X, Sun T, Xu Y, et al. Pre-trained models for natural language processing: a survey. Science China Technol Sci. 2020;63(10):1872\u201397.","journal-title":"Science China Technol Sci"},{"key":"1946_CR24","unstructured":"Liu Y, Ott M, Goyal N, et al. Roberta: a robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019."},{"key":"1946_CR25","unstructured":"Lan Z, Chen M, Goodman S, et al. Albert: A lite bert for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942, 2019."},{"key":"1946_CR26","unstructured":"Sun Y, Wang S, Li Y, et al. Ernie: Enhanced representation through knowledge integration. arXiv preprint arXiv:1904.09223, 2019."},{"key":"1946_CR27","unstructured":"Huang K, Altosaar J, Ranganath R. Clinicalbert: Modeling clinical notes and predicting hospital readmission. arXiv preprint arXiv:1904.05342, 2019."},{"issue":"4","key":"1946_CR28","doi-asserted-by":"crossref","first-page":"1234","DOI":"10.1093\/bioinformatics\/btz682","volume":"36","author":"J Lee","year":"2020","unstructured":"Lee J, Yoon W, Kim S, et al. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics. 
2020;36(4):1234\u201340.","journal-title":"Bioinformatics"},{"key":"1946_CR29","doi-asserted-by":"publisher","first-page":"3504","DOI":"10.1109\/TASLP.2021.3124365","volume":"29","author":"Y Cui","year":"2021","unstructured":"Cui Y, Che W, Liu T, et al. Pre-training with whole word masking for Chinese Bert. IEEE\/ACM Trans Audio Speech Lang Process. 2021;29:3504\u201314. https:\/\/doi.org\/10.1109\/TASLP.2021.3124365.","journal-title":"IEEE\/ACM Trans Audio Speech Lang Process"},{"key":"1946_CR30","doi-asserted-by":"crossref","unstructured":"Xiang B, Yang C, Li Y, et al. CLiMP: a benchmark for Chinese language model evaluation. arXiv preprint arXiv:2101.11131, 2021.","DOI":"10.18653\/v1\/2021.eacl-main.242"},{"key":"1946_CR31","unstructured":"Wang B, Pan B, Li X, et al. Towards evaluating the robustness of chinese bert classifiers. arXiv preprint arXiv:2004.03742, 2020."},{"issue":"4","key":"1946_CR32","doi-asserted-by":"publisher","first-page":"634","DOI":"10.1016\/j.acra.2021.03.036","volume":"29","author":"S Soffer","year":"2022","unstructured":"Soffer S, Glicksberg BS, Zimlichman E, et al. BERT for the processing of radiological reports: an attention-based natural language processing algorithm. Acad Radiol. 2022;29(4):634\u20135.","journal-title":"Acad Radiol"},{"issue":"3","key":"1946_CR33","doi-asserted-by":"publisher","first-page":"336","DOI":"10.1016\/j.jacr.2018.10.020","volume":"16","author":"E Carrodeguas","year":"2019","unstructured":"Carrodeguas E, Lacson R, Swanson W, et al. Use of Machine learning to identify follow-up recommendations in radiology reports. J Am Coll Radiol JACR. 2019;16(3):336\u201343. https:\/\/doi.org\/10.1016\/j.jacr.2018.10.020.","journal-title":"J Am Coll Radiol JACR"},{"issue":"9","key":"1946_CR34","doi-asserted-by":"publisher","first-page":"1299","DOI":"10.1016\/j.jacr.2019.05.038","volume":"16","author":"ME Heilbrun","year":"2019","unstructured":"Heilbrun ME, Chapman BE, Narasimhan E, et al. 
Feasibility of natural language processing-assisted auditing of critical findings in chest radiology. J Am Coll Radiol JACR. 2019;16(9):1299\u2013304. https:\/\/doi.org\/10.1016\/j.jacr.2019.05.038.","journal-title":"J Am Coll Radiol JACR"},{"issue":"1","key":"1946_CR35","doi-asserted-by":"publisher","first-page":"131","DOI":"10.1007\/s10278-019-00271-7","volume":"33","author":"R Lou","year":"2020","unstructured":"Lou R, Lalevic D, Chambers C, et al. Automated detection of radiology reports that require follow-up imaging using natural language processing feature engineering and machine learning classification. J Digit Imaging. 2020;33(1):131\u20136. https:\/\/doi.org\/10.1007\/s10278-019-00271-7.","journal-title":"J Digit Imaging"},{"key":"1946_CR36","first-page":"465","volume":"2011","author":"EF Gershanik","year":"2011","unstructured":"Gershanik EF, Lacson R, Khorasani R. Critical finding capture in the impression section of radiology reports. AMIA Symp. 2011;2011:465\u20139.","journal-title":"AMIA Symp"},{"issue":"6","key":"1946_CR37","doi-asserted-by":"publisher","first-page":"742","DOI":"10.1007\/s10278-016-9889-6","volume":"29","author":"C Morioka","year":"2016","unstructured":"Morioka C, Meng F, Taira R, et al. Automatic classification of ultrasound screening examinations of the abdominal aorta. J Digital Imaging. 2016;29(6):742\u20138.","journal-title":"J Digital Imaging"},{"issue":"2","key":"1946_CR38","doi-asserted-by":"publisher","first-page":"e12109","DOI":"10.2196\/12109","volume":"7","author":"S Fu","year":"2019","unstructured":"Fu S, Leung LY, Wang Y, et al. Natural language processing for the identification of silent brain infarcts from neuroimaging reports. JMIR Med Inform. 2019;7(2):e12109. 
https:\/\/doi.org\/10.2196\/12109.","journal-title":"JMIR Med Inform"},{"issue":"1","key":"1946_CR39","doi-asserted-by":"publisher","first-page":"262","DOI":"10.1186\/s12911-021-01623-6","volume":"21","author":"Y Nakamura","year":"2021","unstructured":"Nakamura Y, Hanaoka S, Nomura Y, et al. Automatic detection of actionable radiology reports using bidirectional encoder representations from transformers. BMC Med Inform Decision Mak. 2021;21(1):262. https:\/\/doi.org\/10.1186\/s12911-021-01623-6.","journal-title":"BMC Med Inform Decision Mak"},{"issue":"3","key":"1946_CR40","doi-asserted-by":"publisher","first-page":"S188","DOI":"10.1016\/j.acra.2021.09.005","volume":"29","author":"C Jujjavarapu","year":"2022","unstructured":"Jujjavarapu C, Pejaver V, Cohen TA, et al. A Comparison of natural language processing methods for the classification of lumbar spine imaging findings related to lower back pain. Acad Radiol. 2022;29(3):S188\u2013200. https:\/\/doi.org\/10.1016\/j.acra.2021.09.005.","journal-title":"Acad Radiol"},{"issue":"Suppl 2","key":"1946_CR41","doi-asserted-by":"publisher","first-page":"214","DOI":"10.1186\/s12911-021-01575-x","volume":"21","author":"H Zhang","year":"2021","unstructured":"Zhang H, Hu D, Duan H, et al. A novel deep learning approach to extract Chinese clinical entities for lung cancer screening and staging. BMC Med Inform Decision Making. 2021;21(Suppl 2):214. https:\/\/doi.org\/10.1186\/s12911-021-01575-x.","journal-title":"BMC Med Inform Decision Making"},{"issue":"1","key":"1946_CR42","doi-asserted-by":"publisher","first-page":"e210085","DOI":"10.1148\/ryai.210085","volume":"4","author":"S Zaman","year":"2022","unstructured":"Zaman S, Petri C, Vimalesvaran K, et al. Automatic diagnosis labeling of cardiovascular mri by using semisupervised natural language processing of text reports. Radiol Artif Intell. 2022;4(1):e210085. 
https:\/\/doi.org\/10.1148\/ryai.210085.","journal-title":"Radiol Artif Intell"},{"issue":"10","key":"1946_CR43","doi-asserted-by":"publisher","first-page":"1755","DOI":"10.3174\/ajnr.A7241","volume":"42","author":"F Liu","year":"2021","unstructured":"Liu F, Zhou P, Baccei SJ, et al. Qualifying certainty in radiology reports through deep learning-based natural language processing. AJNR Am J Neuroradiol. 2021;42(10):1755\u201361. https:\/\/doi.org\/10.3174\/ajnr.A7241.","journal-title":"AJNR Am J Neuroradiol"},{"issue":"Suppl 1","key":"1946_CR44","doi-asserted-by":"publisher","first-page":"10","DOI":"10.1007\/s00106-019-0633-7","volume":"67","author":"R Cima","year":"2019","unstructured":"Cima R, Mazurek B, Haider H, et al. A multidisciplinary European guideline for tinnitus: diagnostics, assessment, and treatment. HNO. 2019;67(Suppl 1):10\u201342. https:\/\/doi.org\/10.1007\/s00106-019-0633-7.","journal-title":"HNO"},{"key":"1946_CR45","unstructured":"Mosbach M, Andriushchenko M, Klakow D. On the stability of fine-tuning bert: Misconceptions, explanations, and strong baselines. arXiv preprint arXiv:2006.04884, 2020."},{"key":"1946_CR46","doi-asserted-by":"crossref","unstructured":"Cui Y, Che W, Liu T, et al. Revisiting pre-trained models for Chinese natural language processing. arXiv preprint arXiv:2004.13922, 2020.","DOI":"10.18653\/v1\/2020.findings-emnlp.58"},{"key":"1946_CR47","unstructured":"Zhang Z, Zhang H, Chen K, et al. Mengzi: towards lightweight yet ingenious pre-trained models for Chinese. arXiv preprint arXiv:2110.06696, 2021."},{"key":"1946_CR48","doi-asserted-by":"crossref","unstructured":"Sun C, Qiu X, Xu Y, et al. How to fine-tune bert for text classification? In: China national conference on Chinese computational linguistics, 2019. 
Springer.","DOI":"10.1007\/978-3-030-32381-3_16"},{"issue":"2","key":"1946_CR49","doi-asserted-by":"publisher","first-page":"237","DOI":"10.1007\/s13244-018-0596-3","volume":"9","author":"AP Brady","year":"2018","unstructured":"Brady AP. Radiology reporting-from Hemingway to HAL? Insights Imaging. 2018;9(2):237\u201346. https:\/\/doi.org\/10.1007\/s13244-018-0596-3.","journal-title":"Insights Imaging"},{"key":"1946_CR50","doi-asserted-by":"crossref","unstructured":"Lu W, Jiao J, Zhang R. Twinbert: Distilling knowledge to twin-structured compressed bert models for large-scale retrieval. In: Proceedings of the 29th ACM International Conference on Information & Knowledge Management, 2020.","DOI":"10.1145\/3340531.3412747"},{"issue":"2","key":"1946_CR51","doi-asserted-by":"publisher","first-page":"129","DOI":"10.1183\/20734735.006616","volume":"13","author":"G Hardavella","year":"2017","unstructured":"Hardavella G, Aamli-Gaagnat A, Frille A, et al. Top tips to deal with challenging situations: doctor-patient interactions. Breathe (Sheff). 2017;13(2):129\u201335. https:\/\/doi.org\/10.1183\/20734735.006616.","journal-title":"Breathe (Sheff)"},{"issue":"11","key":"1946_CR52","first-page":"1036","volume":"56","author":"W Gregory","year":"2016","unstructured":"Gregory W. Rutecki. Tinnitus recommendations: what to do when there is ringing in the Ears. Consultant. 2016;56(11):1036.","journal-title":"Consultant"},{"key":"1946_CR53","doi-asserted-by":"publisher","first-page":"65","DOI":"10.1186\/s12911-016-0306-3","volume":"16","author":"AJ Masino","year":"2016","unstructured":"Masino AJ, Grundmeier RW, Pennington JW, et al. Temporal bone radiology report classification using open source machine learning and natural langue processing libraries. BMC Med Inform Decis Mak. 2016;16:65. 
https:\/\/doi.org\/10.1186\/s12911-016-0306-3.","journal-title":"BMC Med Inform Decis Mak"}],"container-title":["BMC Medical Informatics and Decision Making"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s12911-022-01946-y.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1186\/s12911-022-01946-y\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s12911-022-01946-y.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,2,13]],"date-time":"2023-02-13T01:18:41Z","timestamp":1676251121000},"score":1,"resource":{"primary":{"URL":"https:\/\/bmcmedinformdecismak.biomedcentral.com\/articles\/10.1186\/s12911-022-01946-y"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,7,30]]},"references-count":53,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2022,12]]}},"alternative-id":["1946"],"URL":"https:\/\/doi.org\/10.1186\/s12911-022-01946-y","relation":{},"ISSN":["1472-6947"],"issn-type":[{"value":"1472-6947","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,7,30]]},"assertion":[{"value":"14 May 2022","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"18 July 2022","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"30 July 2022","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"Ethical approval and a waiver of informed consent were obtained from the Beijing Friendship Hospital ethics 
committee, Capital Medical University (Research Application System number 2021-P2-142-01) according to the \u300aDeclaration of Helsinki\u300b and the \u300aEthical review of biomedical research involving people\u300b issued by the Ministry of Public Health of China.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethics approval and consent to participate"}},{"value":"Not applicable, as no identifiable participant data, pictures or illustrations that require consent for publishing are included in this manuscript.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Consent for publication"}},{"value":"The authors declare that they have no competing interests.","order":4,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}},{"value":"All methods were performed in accordance with the relevant guidelines and regulations.","order":5,"name":"Ethics","group":{"name":"EthicsHeading","label":"Accordance Statement"}}],"article-number":"200"}}