{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,12]],"date-time":"2026-03-12T15:31:01Z","timestamp":1773329461117,"version":"3.50.1"},"reference-count":37,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2022,4,21]],"date-time":"2022-04-21T00:00:00Z","timestamp":1650499200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2022,4,21]],"date-time":"2022-04-21T00:00:00Z","timestamp":1650499200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["BMC Bioinformatics"],"published-print":{"date-parts":[[2022,12]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:sec><jats:title>Background<\/jats:title><jats:p>The abundance of biomedical text data coupled with advances in natural language processing (NLP) is resulting in novel biomedical NLP (BioNLP) applications. These NLP applications, or tasks, are reliant on the availability of domain-specific language models (LMs) that are trained on a massive amount of data. Most of the existing domain-specific LMs adopted bidirectional encoder representations from transformers (BERT) architecture which has limitations, and their generalizability is unproven as there is an absence of baseline results among common BioNLP tasks.<\/jats:p><\/jats:sec><jats:sec><jats:title>Results<\/jats:title><jats:p>We present 8 variants of BioALBERT, a domain-specific adaptation of a lite bidirectional encoder representations from transformers (ALBERT), trained on biomedical (PubMed and PubMed Central) and clinical (MIMIC-III) corpora and fine-tuned for 6 different tasks across 20 benchmark datasets. 
Experiments show that a large variant of BioALBERT trained on PubMed outperforms the state-of-the-art on named-entity recognition (+\u00a011.09% BLURB score improvement), relation extraction (+\u00a00.80% BLURB score), sentence similarity (+\u00a01.05% BLURB score), document classification (+\u00a00.62% F1-score), and question answering (+\u00a02.83% BLURB score). It represents a new state-of-the-art in 5 out of 6 benchmark BioNLP tasks.<\/jats:p><\/jats:sec><jats:sec><jats:title>Conclusions<\/jats:title><jats:p>The large variant of BioALBERT trained on PubMed achieved a higher BLURB score than previous state-of-the-art models on 5 of the 6 benchmark BioNLP tasks. Depending on the task, 5 different variants of BioALBERT outperformed previous state-of-the-art models on 17 of the 20 benchmark datasets, showing that our model is robust and generalizable in the common BioNLP tasks. We have made BioALBERT freely available which will help the BioNLP community avoid computational cost of training and establish a new set of baselines for future efforts across a broad range of BioNLP tasks.<\/jats:p><\/jats:sec>","DOI":"10.1186\/s12859-022-04688-w","type":"journal-article","created":{"date-parts":[[2022,4,21]],"date-time":"2022-04-21T09:06:04Z","timestamp":1650531964000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":55,"title":["Benchmarking for biomedical natural language processing tasks with a domain specific ALBERT"],"prefix":"10.1186","volume":"23","author":[{"given":"Usman","family":"Naseem","sequence":"first","affiliation":[]},{"given":"Adam 
G.","family":"Dunn","sequence":"additional","affiliation":[]},{"given":"Matloob","family":"Khushi","sequence":"additional","affiliation":[]},{"given":"Jinman","family":"Kim","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2022,4,21]]},"reference":[{"issue":"1","key":"4688_CR1","doi-asserted-by":"publisher","first-page":"151","DOI":"10.1111\/j.1471-6712.2011.00900.x","volume":"26","author":"L M\u00e5rtensson","year":"2012","unstructured":"M\u00e5rtensson L, Hensing G. Health literacy-a heterogeneous phenomenon: a literature review. Scand J Caring Sci. 2012;26(1):151\u201360.","journal-title":"Scand J Caring Sci"},{"issue":"01","key":"4688_CR2","doi-asserted-by":"publisher","first-page":"128","DOI":"10.1055\/s-0038-1638592","volume":"17","author":"SM Meystre","year":"2008","unstructured":"Meystre SM, Savova GK, Kipper-Schuler KC, Hurdle JF. Extracting information from textual documents in the electronic health record: a review of recent research. Yearb Med Inform. 2008;17(01):128\u201344.","journal-title":"Yearb Med Inform"},{"key":"4688_CR3","unstructured":"Storks S, Gao Q, Chai JY. Recent advances in natural language inference: a survey of benchmarks, resources, and approaches. 2019. arXiv:1904.01172."},{"key":"4688_CR4","doi-asserted-by":"publisher","unstructured":"Peters M, Neumann M, Iyyer M, Gardner M, Clark C, Lee K, Zettlemoyer L. Deep contextualized word representations. In: Proceedings of the 2018 conference of the North American chapter of the association for computational linguistics: human language technologies, vol 1 (Long Papers). Association for Computational Linguistics; 2018, pp. 2227\u20132237. https:\/\/doi.org\/10.18653\/v1\/N18-1202. http:\/\/aclweb.org\/anthology\/N18-1202.","DOI":"10.18653\/v1\/N18-1202"},{"key":"4688_CR5","unstructured":"Devlin J, Chang M-W, Lee K, Toutanova K. Bert: pre-training of deep bidirectional transformers for language understanding. 
In: Proceedings of the 2019 conference of the North American chapter of the association for computational linguistics: human language technologies, vol 1 (long and short papers). 2019, pp. 4171\u20134186."},{"key":"4688_CR6","unstructured":"Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: a lite BERT for self-supervised learning of language representations. 2019. arXiv:1909.11942."},{"key":"4688_CR7","unstructured":"Krallinger M, Rabal O, Akhondi SA, P\u00e9rez MP, Santamar\u00eda J, Rodr\u00edguez GP, et\u00a0al. Overview of the biocreative vi chemical\u2013protein interaction track. In: Proceedings of the sixth BioCreative challenge evaluation workshop, vol 1. 2017, pp. 141\u2013146."},{"key":"4688_CR8","unstructured":"Pyysalo S, Ginter F, Moen H, Salakoski T, Ananiadou S. Distributional semantics resources for biomedical text processing. 2013."},{"key":"4688_CR9","doi-asserted-by":"crossref","unstructured":"Jin Q, Dhingra B, Cohen WW, Lu X. Probing biomedical embeddings from language models. 2019. arXiv:1904.02181.","DOI":"10.18653\/v1\/W19-2011"},{"issue":"11","key":"4688_CR10","doi-asserted-by":"publisher","first-page":"1297","DOI":"10.1093\/jamia\/ocz096","volume":"26","author":"Y Si","year":"2019","unstructured":"Si Y, Wang J, Xu H, Roberts K. Enhancing clinical concept extraction with contextual embeddings. J Am Med Inform Assoc. 2019;26(11):1297\u2013304. https:\/\/doi.org\/10.1093\/jamia\/ocz096.","journal-title":"J Am Med Inform Assoc"},{"key":"4688_CR11","doi-asserted-by":"crossref","unstructured":"Beltagy I, Lo K, Cohan A. SciBERT: a pretrained language model for scientific text. 2019. arXiv:1903.10676.","DOI":"10.18653\/v1\/D19-1371"},{"key":"4688_CR12","doi-asserted-by":"crossref","unstructured":"Peng Y, Yan S, Lu Z. Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. 2019. 
arXiv:1906.05474.","DOI":"10.18653\/v1\/W19-5006"},{"key":"4688_CR13","doi-asserted-by":"crossref","unstructured":"Lee J, Yoon W, Kim S, Kim D, Kim S, So CH, Kang J. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. 2019. arXiv:1901.08746.","DOI":"10.1093\/bioinformatics\/btz682"},{"key":"4688_CR14","doi-asserted-by":"crossref","unstructured":"Gu Y, Tinn R, Cheng H, Lucas M, Usuyama N, Liu X, Naumann T, Gao J, Poon H. Domain-specific language model pretraining for biomedical natural language processing. 2020. arXiv preprint arXiv:2007.15779.","DOI":"10.1145\/3458754"},{"key":"4688_CR15","doi-asserted-by":"crossref","unstructured":"Yuan Z, Liu Y, Tan C, Huang S, Huang F. Improving biomedical pretrained language models with knowledge. 2021. arXiv preprint arXiv:2104.10344.","DOI":"10.18653\/v1\/2021.bionlp-1.20"},{"key":"4688_CR16","doi-asserted-by":"crossref","unstructured":"Naseem U, Khushi M, Reddy V, Rajendran S, Razzak I, Kim J. Bioalbert: a simple and effective pre-trained language model for biomedical named entity recognition. 2020. arXiv preprint arXiv:2009.09223.","DOI":"10.21203\/rs.3.rs-90025\/v1"},{"key":"4688_CR17","doi-asserted-by":"crossref","unstructured":"Suominen H, Salanter\u00e4 S, Velupillai S, Chapman WW, Savova G, Elhadad N, Pradhan S, South BR, Mowery DL, Jones GJ, et\u00a0al. Overview of the share\/clef ehealth evaluation lab 2013. In: International conference of the cross-language evaluation forum for European languages. Springer; 2013, pp. 212\u2013231.","DOI":"10.1007\/978-3-642-40802-1_24"},{"key":"4688_CR18","first-page":"baw068","volume":"2016","author":"J Li","year":"2016","unstructured":"Li J, Sun Y, Johnson RJ, Sciaky D, Wei C-H, Leaman R, Davis AP, Mattingly CJ, Wiegers TC, Lu Z. Biocreative V CDR task corpus: a resource for chemical disease relation extraction. Database J Biol Databases Curation. 
2016;2016:baw068.","journal-title":"Database J Biol Databases Curation"},{"key":"4688_CR19","doi-asserted-by":"crossref","unstructured":"Kim J-D, Ohta T, Tsuruoka Y, Tateisi Y, Collier N. Introduction to the bio-entity recognition task at JNLPBA. In: Proceedings of the international joint workshop on natural language processing in biomedicine and its applications. JNLPBA \u201904. Association for Computational Linguistics, USA; 2004, pp. 70\u201375.","DOI":"10.3115\/1567594.1567610"},{"issue":"1","key":"4688_CR20","doi-asserted-by":"publisher","first-page":"85","DOI":"10.1186\/1471-2105-11-85","volume":"11","author":"M Gerner","year":"2010","unstructured":"Gerner M, Nenadic G, Bergman CM. Linnaeus: a species name identification system for biomedical literature. BMC Bioinform. 2010;11(1):85.","journal-title":"BMC Bioinform"},{"issue":"C","key":"4688_CR21","first-page":"1","volume":"47","author":"RI Do\u011fan","year":"2014","unstructured":"Do\u011fan RI, Leaman R, Lu Z. NCBI disease corpus. J Biomed Inform. 2014;47(C):1\u201310.","journal-title":"J Biomed Inform"},{"issue":"6","key":"4688_CR22","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1371\/journal.pone.0065390","volume":"8","author":"E Pafilis","year":"2013","unstructured":"Pafilis E, Frankild SP, Fanini L, Faulwetter S, Pavloudi C, Vasileiadou A, Arvanitidis C, Jensen LJ. The species and organisms resources for fast and accurate identification of taxonomic names in text. PLoS ONE. 2013;8(6):1\u20136. https:\/\/doi.org\/10.1371\/journal.pone.0065390.","journal-title":"PLoS ONE"},{"key":"4688_CR23","unstructured":"Ando RK. Biocreative II gene mention tagging system at IBM WATSON. 2007."},{"issue":"5","key":"4688_CR24","doi-asserted-by":"publisher","first-page":"914","DOI":"10.1016\/j.jbi.2013.07.011","volume":"46","author":"M Herrero-Zazo","year":"2013","unstructured":"Herrero-Zazo M, Segura-Bedmar I, Mart\u00ednez P, Declerck T. 
The DDI corpus: an annotated corpus with pharmacological substances and drug\u2013drug interactions. J Biomed Inform. 2013;46(5):914\u201320.","journal-title":"J Biomed Inform"},{"issue":"5","key":"4688_CR25","doi-asserted-by":"publisher","first-page":"552","DOI":"10.1136\/amiajnl-2011-000203","volume":"18","author":"\u00d6 Uzuner","year":"2011","unstructured":"Uzuner \u00d6, South BR, Shen S, DuVall SL. 2010 i2b2\/VA challenge on concepts, assertions, and relations in clinical text. J Am Med Inform Assoc. 2011;18(5):552\u20136.","journal-title":"J Am Med Inform Assoc"},{"issue":"5","key":"4688_CR26","doi-asserted-by":"publisher","first-page":"879","DOI":"10.1016\/j.jbi.2012.04.004","volume":"45","author":"EM Van Mulligen","year":"2012","unstructured":"Van Mulligen EM, Fourrier-Reglat A, Gurwitz D, Molokhia M, Nieto A, Trifiro G, Kors JA, Furlong LI. The EU-ADR corpus: annotated drugs, diseases, targets, and their relationships. J Biomed Inform. 2012;45(5):879\u201384.","journal-title":"J Biomed Inform"},{"issue":"1","key":"4688_CR27","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1186\/s12859-015-0472-9","volume":"16","author":"\u00c0 Bravo","year":"2015","unstructured":"Bravo \u00c0, Pi\u00f1ero J, Queralt-Rosinach N, Rautschka M, Furlong LI. Extraction of relations between genes and diseases from text and large-scale data analysis: implications for translational research. BMC Bioinform. 2015;16(1):1\u201317.","journal-title":"BMC Bioinform"},{"issue":"14","key":"4688_CR28","doi-asserted-by":"publisher","first-page":"49","DOI":"10.1093\/bioinformatics\/btx238","volume":"33","author":"G So\u011fanc\u0131o\u011flu","year":"2017","unstructured":"So\u011fanc\u0131o\u011flu G, \u00d6zt\u00fcrk H, \u00d6zg\u00fcr A. Biosses: a semantic sentence similarity estimation system for the biomedical domain. Bioinformatics. 
2017;33(14):49\u201358.","journal-title":"Bioinformatics"},{"issue":"1","key":"4688_CR29","doi-asserted-by":"publisher","first-page":"57","DOI":"10.1007\/s10579-018-9431-1","volume":"54","author":"Y Wang","year":"2020","unstructured":"Wang Y, Afzal N, Fu S, Wang L, Shen F, Rastegar-Mojarad M, Liu H. Medsts: a resource for clinical semantic textual similarity. Lang Resour Eval. 2020;54(1):57\u201372.","journal-title":"Lang Resour Eval"},{"key":"4688_CR30","doi-asserted-by":"crossref","unstructured":"Romanov A, Shivade C. Lessons from natural language inference in the clinical domain. In: Proceedings of the 2018 conference on empirical methods in natural language processing. 2018, pp. 1586\u20131596.","DOI":"10.18653\/v1\/D18-1187"},{"issue":"3","key":"4688_CR31","doi-asserted-by":"publisher","first-page":"432","DOI":"10.1093\/bioinformatics\/btv585","volume":"32","author":"S Baker","year":"2016","unstructured":"Baker S, Silins I, Guo Y, Ali I, H\u00f6gberg J, Stenius U, Korhonen A. Automatic semantic classification of scientific literature according to the hallmarks of cancer. Bioinformatics. 2016;32(3):432\u201340.","journal-title":"Bioinformatics"},{"issue":"1","key":"4688_CR32","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1186\/s12859-015-0564-6","volume":"16","author":"G Tsatsaronis","year":"2015","unstructured":"Tsatsaronis G, Balikas G, Malakasiotis P, Partalas I, Zschunke M, Alvers MR, Weissenborn D, Krithara A, Petridis S, Polychronopoulos D, et al. An overview of the BIOASQ large-scale biomedical semantic indexing and question answering competition. BMC Bioinform. 2015;16(1):1\u201328.","journal-title":"BMC Bioinform"},{"issue":"23","key":"4688_CR33","doi-asserted-by":"publisher","first-page":"4087","DOI":"10.1093\/bioinformatics\/bty449","volume":"34","author":"JM Giorgi","year":"2018","unstructured":"Giorgi JM, Bader GD. Transfer learning for biomedical named entity recognition with neural networks. Bioinformatics. 
2018;34(23):4087\u201394.","journal-title":"Bioinformatics"},{"key":"4688_CR34","doi-asserted-by":"crossref","unstructured":"Poerner N, Waltinger U, Sch\u00fctze H. Inexpensive domain adaptation of pretrained language models: case studies on biomedical NER and covid-19 QA. 2020. arXiv preprint arXiv:2004.03354.","DOI":"10.18653\/v1\/2020.findings-emnlp.134"},{"key":"4688_CR35","unstructured":"Devlin J, Chang M-W, Lee K, Toutanova K. Bert: pre-training of deep bidirectional transformers for language understanding. 2018. arXiv preprint arXiv:1810.04805."},{"key":"4688_CR36","doi-asserted-by":"crossref","unstructured":"Chao W-L, Changpinyo S, Gong B, Sha F. An empirical study and analysis of generalized zero-shot learning for object recognition in the wild. In: European conference on computer vision. Springer; 2016, pp. 52\u201368","DOI":"10.1007\/978-3-319-46475-6_4"},{"key":"4688_CR37","doi-asserted-by":"publisher","first-page":"103982","DOI":"10.1016\/j.jbi.2021.103982","volume":"126","author":"KS Kalyan","year":"2021","unstructured":"Kalyan KS, Rajasekharan A, Sangeetha S. AMMU: a survey of transformer-based biomedical pretrained language models. J Biomed Inform. 
2021;126:103982.","journal-title":"J Biomed Inform"}],"container-title":["BMC Bioinformatics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s12859-022-04688-w.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1186\/s12859-022-04688-w\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s12859-022-04688-w.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,2,2]],"date-time":"2023-02-02T21:25:49Z","timestamp":1675373149000},"score":1,"resource":{"primary":{"URL":"https:\/\/bmcbioinformatics.biomedcentral.com\/articles\/10.1186\/s12859-022-04688-w"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,4,21]]},"references-count":37,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2022,12]]}},"alternative-id":["4688"],"URL":"https:\/\/doi.org\/10.1186\/s12859-022-04688-w","relation":{},"ISSN":["1471-2105"],"issn-type":[{"value":"1471-2105","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,4,21]]},"assertion":[{"value":"12 November 2021","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"31 March 2022","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"21 April 2022","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"Not applicable.","order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethics approval and consent to participate"}},{"value":"Not applicable.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Consent for 
publication"}},{"value":"The authors declare that they have no competing interests.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"144"}}