{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,9]],"date-time":"2025-12-09T06:14:07Z","timestamp":1765260847612,"version":"3.46.0"},"reference-count":37,"publisher":"Springer Science and Business Media LLC","issue":"4","license":[{"start":{"date-parts":[[2025,7,31]],"date-time":"2025-07-31T00:00:00Z","timestamp":1753920000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/www.springernature.com\/gp\/researchers\/text-and-data-mining"},{"start":{"date-parts":[[2025,7,31]],"date-time":"2025-07-31T00:00:00Z","timestamp":1753920000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.springernature.com\/gp\/researchers\/text-and-data-mining"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Lang Resources &amp; Evaluation"],"published-print":{"date-parts":[[2025,12]]},"DOI":"10.1007\/s10579-025-09853-0","type":"journal-article","created":{"date-parts":[[2025,7,31]],"date-time":"2025-07-31T07:34:20Z","timestamp":1753947260000},"page":"3659-3697","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Prompting encoder models for zero-shot classification: a cross-domain study in Italian"],"prefix":"10.1007","volume":"59","author":[{"given":"Serena","family":"Auriemma","sequence":"first","affiliation":[]},{"given":"Martina","family":"Miliani","sequence":"additional","affiliation":[]},{"given":"Mauro","family":"Madeddu","sequence":"additional","affiliation":[]},{"given":"Alessandro","family":"Bondielli","sequence":"additional","affiliation":[]},{"given":"Lucia","family":"Passaro","sequence":"additional","affiliation":[]},{"given":"Alessandro","family":"Lenci","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,7,31]]},"reference":[{"key":"9853_CR1","doi-asserted-by":"publisher","unstructured":"Alsentzer, E., Murphy, J.R., Boag, W., Weng, W.-H., Jin, D., Naumann, T., & McDermott, M. (2019). Publicly available clinical bert embeddings. Retrieved from, https:\/\/doi.org\/10.48550\/arXiv.1904.03323","DOI":"10.48550\/arXiv.1904.03323"},{"key":"9853_CR2","unstructured":"Auriemma, S., Madeddu, M., Miliani, M., Bondielli, A., Passaro, L. C., & Lenci, A. (2023). BureauBERTo: adapting UmBERTo to the Italian bureaucratic language. In F. Falchi, F. Giannotti, A. Monreale, C. Boldrini, S. Rinzivillo, & S. Colantonio, (Eds.), Proceedings of the Italia Intelligenza Artificiale\u2014Thematic Workshops Co-located with the 3rd CINI National Lab AIIS Conference on Artificial Intelligence (Ital IA 2023). CEUR Workshop Proceedings, (vol. 3486, pp. 240\u2013248). Pisa, Italy: CEUR-WS.org. Retrieved from, https:\/\/ceur-ws.org\/Vol-3486\/42.pdf"},{"key":"9853_CR3","unstructured":"Auriemma, S., Miliani, M., Bondielli, A., Passaro, L. C., & Lenci, A. (2022). Evaluating pre-trained transformers on italian administrative texts. In: P. Lops, P. Basile, L. Siciliani, V. Taccardi, M. Di\u00a0Ciano, & N. Lopane (Eds.), Proceedings of the 1st Workshop on AI for Public Administration, AIxPA 2022. CEUR Workshop Proceedings, (vol. 3285, pp. 54\u201370). Retrieved from, https:\/\/ceur-ws.org\/Vol-3285\/paper4.pdf"},{"key":"9853_CR4","doi-asserted-by":"crossref","unstructured":"Beltagy, I., Lo, K., & Cohan, A. (2019). SciBERT: A pretrained language model for scientific text. In K. Inui, J. Jiang, V. Ng, & X. 
Wan (Eds.), Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), (pp. 3615\u20133620). Hong Kong, China: Association for Computational Linguistics. Retrieved from, https:\/\/aclanthology.org\/D19-1371\/","DOI":"10.18653\/v1\/D19-1371"},{"key":"9853_CR5","unstructured":"Bosco, C., Dell\u2019Orletta, F., Montemagni, S., Sanguinetti, M., & Simi, M. (2014). The evalita 2014 dependency parsing task. In Proceedings of the First Italian Conference on Computational Linguistics CLiC-it 2014 & and of the Fourth International Workshop EVALITA 2014: 9-11 December 2014, Pisa, (pp. 1\u20138). Pisa, Italy: Pisa University Press. Retrieved from, http:\/\/digital.casalini.it\/3044378"},{"key":"9853_CR6","unstructured":"Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J.D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D., Wu, J., Winter, C., Hesse, C., Chen, M., Sigler, E., Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C., McCandlish, S., Radford, A., Sutskever, I., & Amodei, D. (2020). Language models are few-shot learners. In H. Larochelle, M. Ranzato, , R. Hadsell, M. F. Balcan, & H. Lin (Eds.), Advances in Neural Information Processing Systems, (Vol. 33, pp. 1877\u20131901) (2020). Retrieved from, https:\/\/proceedings.neurips.cc\/paper_files\/paper\/2020\/file\/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf"},{"key":"9853_CR7","doi-asserted-by":"crossref","unstructured":"Chalkidis, I., Fergadiotis, M., Malakasiotis, P., Aletras, N., & Androutsopoulos, I. (2020). LEGAL-BERT: The muppets straight out of law school. In T. Cohn, Y. He, Y. Liu (Eds.), Findings of the Association for Computational Linguistics: EMNLP 2020, (pp. 2898\u20132904). Association for Computational Linguistics, Online. Retrieved from, https:\/\/aclanthology.org\/2020.findings-emnlp.261\/","DOI":"10.18653\/v1\/2020.findings-emnlp.261"},{"key":"9853_CR8","unstructured":"Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. In J. Burstein, C. Doran, & T. Solorio, (Eds.), Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), (pp. 4171\u20134186). Minneapolis, Minnesota: Association for Computational Linguistics. Retrieved from, https:\/\/aclanthology.org\/N19-1423\/"},{"key":"9853_CR9","doi-asserted-by":"crossref","unstructured":"Fei, Y., Hou, Y., Chen, Z., & Bosselut, A. (2023). Mitigating label biases for in-context learning. In A. Rogers, J. Boyd-Graber, & N. Okazaki, (Eds.), Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), (pp. 14014\u201314031). Toronto, Canada: Association for Computational Linguistics. Retrieved from, https:\/\/aclanthology.org\/2023.acl-long.783\/","DOI":"10.18653\/v1\/2023.acl-long.783"},{"key":"9853_CR10","doi-asserted-by":"crossref","unstructured":"Gao, T., Fisch, A., & Chen, D. (2021). Making pre-trained language models better few-shot learners. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), (pp. 3816\u20133830). 
Association for Computational Linguistics, Online. Retrieved from, https:\/\/aclanthology.org\/2021.acl-long.295","DOI":"10.18653\/v1\/2021.acl-long.295"},{"issue":"1","key":"9853_CR11","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3458754","volume":"3","author":"Y Gu","year":"2021","unstructured":"Gu, Y., Tinn, R., Cheng, H., Lucas, M., Usuyama, N., Liu, X., Naumann, T., Gao, J., & Poon, H. (2021). Domain-specific language model pretraining for biomedical natural language processing. ACM Transactions on Computing for Healthcare, 3(1), 1\u201323. https:\/\/doi.org\/10.1145\/3458754","journal-title":"ACM Transactions on Computing for Healthcare"},{"key":"9853_CR12","doi-asserted-by":"crossref","unstructured":"Howard, J., & Ruder, S. (2018). Universal language model fine-tuning for text classification. In I. Gurevych, & Y. Miyao, (Eds.), Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), (pp. 328\u2013339). Melbourne, Australia: Association for Computational Linguistics. Retrieved from, https:\/\/aclanthology.org\/P18-1031\/","DOI":"10.18653\/v1\/P18-1031"},{"key":"9853_CR13","doi-asserted-by":"crossref","unstructured":"Hu, S., Ding, N., Wang, H., Liu, Z., Wang, J., Li, J., Wu, W., & Sun, M. (2022). Knowledgeable prompt-tuning: Incorporating knowledge into prompt verbalizer for text classification. In S. Muresan, P. Nakov, & A. Villavicencio (Eds.), Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), (pp. 2225\u20132240). Dublin, Ireland: Association for Computational Linguistics. Retrieved from, https:\/\/aclanthology.org\/2022.acl-long.158\/","DOI":"10.18653\/v1\/2022.acl-long.158"},{"key":"9853_CR14","unstructured":"Hu, E. J., Shen, Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., Wang, L., & Chen, W. (2022). LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations. Retrieved from, https:\/\/openreview.net\/forum?id=nZeVKeeFYf9"},{"issue":"4","key":"9853_CR15","doi-asserted-by":"publisher","first-page":"1234","DOI":"10.1093\/BIOINFORMATICS\/BTZ682","volume":"36","author":"J Lee","year":"2020","unstructured":"Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C. H., & Kang, J. (2020). Biobert: A pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4), 1234\u20131240. https:\/\/doi.org\/10.1093\/BIOINFORMATICS\/BTZ682","journal-title":"Bioinformatics"},{"key":"9853_CR16","doi-asserted-by":"crossref","unstructured":"Lester, B., Al-Rfou, R., & Constant, N. (2021). The power of scale for parameter-efficient prompt tuning. In M.-F. Moens, X. Huang, L. Specia, & S.W.-T. Yih (Eds.), Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, (pp. 3045\u20133059). Punta Cana, Dominican Republic: Association for Computational Linguistics. Retrieved from, https:\/\/aclanthology.org\/2021.emnlp-main.243\/","DOI":"10.18653\/v1\/2021.emnlp-main.243"},{"key":"9853_CR17","unstructured":"Licari, D., & Comand\u00e8, G. (2022). Italian-legal-bert: A pre-trained transformer language model for italian law. In CEUR Workshop Proceedings (Ed.), The Knowledge Management for Law Workshop (KM4LAW), (Vol. 3265). Retrieved from, https:\/\/hdl.handle.net\/11382\/549631"},{"key":"9853_CR18","doi-asserted-by":"crossref","unstructured":"Lu, Y., Bartolo, M., Moore, A., Riedel, S., & Stenetorp, P. (2022). 
Fantastically ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity. In S. Muresan, P. Nakov, & A. Villavicencio (Eds.), Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), (pp. 8086\u20138098). Dublin, Ireland: Association for Computational Linguistics. Retrieved from, https:\/\/aclanthology.org\/2022.acl-long.556\/","DOI":"10.18653\/v1\/2022.acl-long.556"},{"key":"9853_CR19","doi-asserted-by":"crossref","unstructured":"Meng, Y., Zhang, Y., Huang, J., Xiong, C., Ji, H., Zhang, C., & Han, J. (2020). Text classification using label names only: A language model self-training approach. In: B. Webber, T. Cohn, Y. He, & Y. Liu, (Eds.), Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), (pp. 9006\u20139017). Association for Computational Linguistics, Online. Retrieved from, https:\/\/aclanthology.org\/2020.emnlp-main.724\/","DOI":"10.18653\/v1\/2020.emnlp-main.724"},{"key":"9853_CR20","doi-asserted-by":"publisher","unstructured":"Moradi, M., Blagec, K., Haberl, F., & Samwald, M. (2021). Gpt-3 models are poor few-shot learners in the biomedical domain. Retrieved from, https:\/\/doi.org\/10.48550\/arXiv.2109.02555","DOI":"10.48550\/arXiv.2109.02555"},{"key":"9853_CR21","doi-asserted-by":"crossref","unstructured":"Passaro, L. C., Lenci, A., & Gabbolini, A. (2017). Informed PA: A NER for the italian public administration domain. In R. Basili, M. Nissim, & G. Satta (Eds.), Proceedings of the Fourth Italian Conference on Computational Linguistics (CLiC-it 2017), Rome, Italy, December 11-13, 2017. CEUR Workshop Proceedings, (Vol. 2006, pp. 246\u2013251). Retrieved from, https:\/\/ceur-ws.org\/Vol-2006\/paper048.pdf","DOI":"10.4000\/books.aaccademia.2440"},{"key":"9853_CR22","doi-asserted-by":"crossref","unstructured":"Pedersen, T., Patwardhan, S., & Michelizzi, J. (2004). WordNet::Similarity\u2014Measuring the relatedness of concepts. In Demonstration Papers at HLT-NAACL 2004, (pp. 38\u201341). Boston, Massachusetts: Association for Computational Linguistics. Retrieved from, https:\/\/aclanthology.org\/N04-3012\/","DOI":"10.3115\/1614025.1614037"},{"key":"9853_CR23","unstructured":"Puri, R., & Catanzaro, B. (2019). Zero-shot text classification with generative language models. Computing Research Repository (CoRR). Retrieved from, arXiv:1912.10165"},{"issue":"8","key":"9853_CR24","first-page":"9","volume":"1","author":"A Radford","year":"2019","unstructured":"Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al. (2019). Language models are unsupervised multitask learners. OpenAI Blog, 1(8), 9.","journal-title":"OpenAI Blog"},{"key":"9853_CR25","doi-asserted-by":"crossref","unstructured":"Salazar, J., Liang, D., Nguyen, T. Q., & Kirchhoff, K. (2020). Masked language model scoring. In D. Jurafsky, J. Chai, N. Schluter, & J. Tetreault, (Eds.), Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, (pp. 2699\u20132712). Association for Computational Linguistics, Online. Retrieved from, https:\/\/aclanthology.org\/2020.acl-main.240\/","DOI":"10.18653\/v1\/2020.acl-main.240"},{"key":"9853_CR26","doi-asserted-by":"crossref","unstructured":"Schick, T., & Sch\u00fctze, H. (2021a). Exploiting cloze-questions for few-shot text classification and natural language inference. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, (pp. 255\u2013269). 
Association for Computational Linguistics, Online. Retrieved from, https:\/\/aclanthology.org\/2021.eacl-main.20","DOI":"10.18653\/v1\/2021.eacl-main.20"},{"key":"9853_CR27","doi-asserted-by":"crossref","unstructured":"Schick, T., & Sch\u00fctze, H. (2021b). It\u2019s not just size that matters: Small language models are also few-shot learners. In K. Toutanova, A. Rumshisky, L. Zettlemoyer, D. Hakkani-Tur, I. Beltagy, S. Bethard, R. Cotterell, T. Chakraborty, & Y. Zhou (Eds.), Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, (pp. 2339\u20132352). Association for Computational Linguistics, Online. Retrieved from, https:\/\/aclanthology.org\/2021.naacl-main.185\/","DOI":"10.18653\/v1\/2021.naacl-main.185"},{"key":"9853_CR28","doi-asserted-by":"crossref","unstructured":"Schick, T., Schmid, H., & Sch\u00fctze, H. (2020). Automatically identifying words that can serve as labels for few-shot text classification. In D. Scott, N. Bel, & C. Zong (Eds.), Proceedings of the 28th International Conference on Computational Linguistics, (pp. 5569\u20135578). Barcelona, Spain: International Committee on Computational Linguistics. Retrieved from, https:\/\/aclanthology.org\/2020.coling-main.488\/","DOI":"10.18653\/v1\/2020.coling-main.488"},{"key":"9853_CR29","doi-asserted-by":"publisher","first-page":"972","DOI":"10.48550\/arXiv.2203.05061","volume":"2022","author":"S Sivarajkumar","year":"2023","unstructured":"Sivarajkumar, S., & Wang, Y. (2023). Healthprompt: A zero-shot learning paradigm for clinical natural language processing. AMIA Annual Symposium Proceedings, 2022, 972. https:\/\/doi.org\/10.48550\/arXiv.2203.05061","journal-title":"AMIA Annual Symposium Proceedings"},{"key":"9853_CR30","doi-asserted-by":"publisher","unstructured":"Speer, R., Chin, J., & Havasi, C. (2017). Conceptnet 5.5: An open multilingual graph of general knowledge. In Proceedings of the AAAI Conference on Artificial Intelligence, (Vol. 31). Retrieved from, https:\/\/doi.org\/10.1609\/aaai.v31i1.11164","DOI":"10.1609\/aaai.v31i1.11164"},{"key":"9853_CR31","doi-asserted-by":"publisher","unstructured":"Trautmann, D., Petrova, A., & Schilder, F. (2022). Legal prompt engineering for multilingual legal judgement prediction. Retrieved from, https:\/\/doi.org\/10.48550\/arXiv.2212.02199","DOI":"10.48550\/arXiv.2212.02199"},{"key":"9853_CR32","unstructured":"Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L. u., & Polosukhin, I. (2017). Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, & R. Garnett, (Eds.), Advances in Neural Information Processing Systems, (Vol. 30). Retrieved from, https:\/\/proceedings.neurips.cc\/paper_files\/paper\/2017\/file\/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf"},{"key":"9853_CR33","unstructured":"Wang, A., Pruksachatkun, Y., Nangia, N., Singh, A., Michael, J., Hill, F., Levy, O., & Bowman, S. (2019). Superglue: A stickier benchmark for general-purpose language understanding systems. In H. Wallach, H. Larochelle, A. Beygelzimer, F. Alch\u00e9-Buc, E. Fox, & R. Garnett (Eds.), Advances in Neural Information Processing Systems, (Vol. 32). Retrieved from, https:\/\/proceedings.neurips.cc\/paper_files\/paper\/2019\/file\/4496bf24afe7fab6f046bf4923da8de6-Paper.pdf"},{"key":"9853_CR34","doi-asserted-by":"publisher","unstructured":"Yu, F., Quartey, L., & Schilder, F. (2022). 
Legal prompting: Teaching a language model to think like a lawyer. Retrieved from, https:\/\/doi.org\/10.48550\/arXiv.2212.01326","DOI":"10.48550\/arXiv.2212.01326"},{"key":"9853_CR35","unstructured":"Zhao, Z., Wallace, E., Feng, S., Klein, D., & Singh, S.. (2021). Calibrate before use: Improving few-shot performance of language models. In: Meila, M., Zhang, T. (Eds.), Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, (Vol. 139, pp. 12697\u201312706). Retrieved from, https:\/\/proceedings.mlr.press\/v139\/zhao21c.html"},{"key":"9853_CR36","unstructured":"Zhou, H., Wan, X., Proleev, L., Mincu, D., Chen, J., Heller, K. A., & Roy, S. (2024). Batch calibration: Rethinking calibration for in-context learning and prompt engineering. In The Twelfth International Conference on Learning Representations. Retrieved from, https:\/\/openreview.net\/forum?id=L3FHMoKZcS"},{"key":"9853_CR37","unstructured":"Zhuang, L., Wayne, L., Ya, S., & Jun, Z. (2021). A robustly optimized BERT pre-training approach with post-training. In S. Li, M. Sun, Y. Liu, H. Wu, K. Liu, W. Che, S. He, & G. Rao, (Eds.), Proceedings of the 20th Chinese National Conference on Computational Linguistics, (pp. 1218\u20131227). Huhhot, China: Chinese Information Processing Society of China. Retrieved from, https:\/\/aclanthology.org\/2021.ccl-1.108\/"}],"container-title":["Language Resources and Evaluation"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10579-025-09853-0.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10579-025-09853-0\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10579-025-09853-0.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,12,9]],"date-time":"2025-12-09T05:16:12Z","timestamp":1765257372000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10579-025-09853-0"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,7,31]]},"references-count":37,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2025,12]]}},"alternative-id":["9853"],"URL":"https:\/\/doi.org\/10.1007\/s10579-025-09853-0","relation":{},"ISSN":["1574-020X","1574-0218"],"issn-type":[{"type":"print","value":"1574-020X"},{"type":"electronic","value":"1574-0218"}],"subject":[],"published":{"date-parts":[[2025,7,31]]},"assertion":[{"value":"30 May 2025","order":1,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"31 July 2025","order":2,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare no conflicts of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflicts of interest"}}]}}
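The record above is a standard work response from the public Crossref REST API. Below is a minimal Python sketch of how such a record can be retrieved and read; the endpoint and field names match the record, while the third-party "requests" dependency and the particular fields printed are illustrative choices, not part of the record itself.

import requests

# Fetch the same Crossref work record shown above (public REST API, no key needed).
DOI = "10.1007/s10579-025-09853-0"
resp = requests.get(f"https://api.crossref.org/works/{DOI}", timeout=30)
resp.raise_for_status()
work = resp.json()["message"]  # payload has the same shape as the "message" object above

print(work["title"][0])            # article title (Crossref stores titles as lists)
print(work["container-title"][0])  # journal: Language Resources and Evaluation
print(work["volume"], work["issue"], work["page"])

# Authors are listed under "author" with "given"/"family" name parts.
authors = ", ".join(f'{a["given"]} {a["family"]}' for a in work["author"])
print(authors)

# Each entry of "reference" is a dict; "unstructured" holds the citation text,
# and "DOI" is present only when the publisher deposited one.
for ref in work.get("reference", [])[:3]:
    print(ref.get("DOI", "(no DOI)"), "-", ref.get("unstructured", "")[:60])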