{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,7]],"date-time":"2026-04-07T16:19:19Z","timestamp":1775578759673,"version":"3.50.1"},"reference-count":61,"publisher":"Springer Science and Business Media LLC","issue":"2","license":[{"start":{"date-parts":[[2026,1,9]],"date-time":"2026-01-09T00:00:00Z","timestamp":1767916800000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2026,1,9]],"date-time":"2026-01-09T00:00:00Z","timestamp":1767916800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100016386","name":"Conselleria de Innovaci\u00f3n, Universidades, Ciencia y Sociedad Digital, Generalitat Valenciana","doi-asserted-by":"publisher","award":["ACIF\/2021\/356"],"award-info":[{"award-number":["ACIF\/2021\/356"]}],"id":[{"id":"10.13039\/501100016386","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100016386","name":"Conselleria de Innovaci\u00f3n, Universidades, Ciencia y Sociedad Digital, Generalitat Valenciana","doi-asserted-by":"publisher","award":["CIBEFP\/2022\/19"],"award-info":[{"award-number":["CIBEFP\/2022\/19"]}],"id":[{"id":"10.13039\/501100016386","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100016386","name":"Conselleria de Innovaci\u00f3n, Universidades, Ciencia y Sociedad Digital, Generalitat Valenciana","doi-asserted-by":"publisher","award":["CISEJI\/2023\/9"],"award-info":[{"award-number":["CISEJI\/2023\/9"]}],"id":[{"id":"10.13039\/501100016386","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Int J Comput Vis"],"published-print":{"date-parts":[[2026,2]]},"abstract":"<jats:title>Abstract<\/jats:title>\n                  <jats:p>\n                    Optical Music Recognition (OMR) has made 
significant progress since its inception, with various approaches now capable of accurately transcribing music scores into digital formats. Despite these advancements, most so-called\n                    <jats:italic>end-to-end<\/jats:italic>\n                    OMR approaches still rely on multi-stage processing pipelines for transcribing full-page score images, which entails challenges such as the need for dedicated layout analysis and specific annotated data, thereby limiting the general applicability of such methods. In this paper, we present the first truly end-to-end approach for page-level OMR in complex layouts. Our system, which combines convolutional layers with autoregressive Transformers, processes an entire music score page and outputs a complete transcription in a music encoding format. This is made possible by both the architecture and the training procedure, which utilizes curriculum learning through incremental synthetic data generation. We evaluate the proposed system using pianoform corpora, which are among the most complex sources in the OMR literature. This evaluation is conducted first in a controlled scenario with synthetic data, and subsequently against two real-world corpora of varying conditions. Our approach is compared with leading commercial OMR software. 
The results demonstrate that our system not only successfully transcribes full-page music scores but also outperforms the commercial tool in both zero-shot settings and after fine-tuning with the target domain, representing a significant contribution to the field of OMR.\n                  <\/jats:p>","DOI":"10.1007\/s11263-025-02654-6","type":"journal-article","created":{"date-parts":[[2026,1,9]],"date-time":"2026-01-09T12:58:28Z","timestamp":1767963508000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":1,"title":["End-to-End Full-Page Optical Music Recognition for Pianoform Sheet Music"],"prefix":"10.1007","volume":"134","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-7770-9726","authenticated-orcid":false,"given":"Antonio","family":"R\u00edos-Vila","sequence":"first","affiliation":[]},{"given":"Jorge","family":"Calvo-Zaragoza","sequence":"additional","affiliation":[]},{"given":"David","family":"Rizo","sequence":"additional","affiliation":[]},{"given":"Thierry","family":"Paquet","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2026,1,9]]},"reference":[{"key":"2654_CR1","doi-asserted-by":"publisher","first-page":"157","DOI":"10.1016\/j.patrec.2022.04.032","volume":"158","author":"M Alfaro-Contreras","year":"2022","unstructured":"Alfaro-Contreras, M., R\u00edos-Vila, A., Valero-Mas, J. J., I\u00f1esta, J. M., & Calvo-Zaragoza, J. (2022). Decoupling music notation to improve end-to-end optical music recognition. Pattern Recognition Letters,158, 157\u2013163.","journal-title":"Pattern Recognition Letters"},{"issue":"1","key":"2654_CR2","doi-asserted-by":"publisher","first-page":"12","DOI":"10.1007\/s13735-023-00278-5","volume":"12","author":"M Alfaro-Contreras","year":"2023","unstructured":"Alfaro-Contreras, M., I\u00f1esta, J. M., & Calvo-Zaragoza, J. (2023). 
Optical music recognition for homophonic scores with neural networks and synthetic music generation. International Journal of Multimedia Information Retrieval,12(1), 12.","journal-title":"International Journal of Multimedia Information Retrieval"},{"key":"2654_CR3","doi-asserted-by":"publisher","first-page":"95","DOI":"10.1023\/A:1002485918032","volume":"35","author":"D Bainbridge","year":"2001","unstructured":"Bainbridge, D., & Bell, T. (2001). The challenge of optical music recognition. Computers and the Humanities,35, 95\u2013121.","journal-title":"Computers and the Humanities"},{"key":"2654_CR4","doi-asserted-by":"crossref","unstructured":"Bar\u00f3, A., Riba, P., Calvo-Zaragoza, J., & Forn\u00e9s, A. (2018). Optical music recognition by long short-term memory networks. Graphics recognition. current trends and evolutions: 12th iapr international workshop, grec 2017, kyoto, japan, november 9-10, 2017, revised selected papers 12 (pp. 81\u201395).","DOI":"10.1007\/978-3-030-02284-6_7"},{"key":"2654_CR5","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1016\/j.patrec.2019.02.029","volume":"123","author":"A Bar\u00f3","year":"2019","unstructured":"Bar\u00f3, A., Riba, P., Calvo-Zaragoza, J., & Forn\u00e9s, A. (2019). From optical music recognition to handwritten music recognition: a baseline. Pattern Recognition Letters,123, 1\u20138.","journal-title":"Pattern Recognition Letters"},{"key":"2654_CR6","doi-asserted-by":"crossref","unstructured":"Bar\u00f3, A., Riba, P., & Forn\u00e9s, A. (2022). Musigraph: Optical music recognition through object detection and graph neural network. International conference on frontiers in handwriting recognition (pp. 171\u2013184).","DOI":"10.1007\/978-3-031-21648-0_12"},{"key":"2654_CR7","doi-asserted-by":"crossref","unstructured":"Bellini, P., Bruno, I., & Nesi, P. (2007). Assessing optical music recognition tools. 
Computer Music Journal, 68\u201393.","DOI":"10.1162\/comj.2007.31.1.68"},{"key":"2654_CR8","unstructured":"Blecher, L., Cucurull, G., Scialom, T., & Stojnic, R. (2023). Nougat: Neural optical understanding for academic documents."},{"key":"2654_CR9","unstructured":"Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J.D., Dhariwal, P., & Amodei, D. (2020). Language models are few-shot learners. H.\u00a0Larochelle, M.\u00a0Ranzato, R.\u00a0Hadsell, M.\u00a0Balcan, and H.\u00a0Lin (Eds.), Advances in neural information processing systems (Vol.\u00a033, pp. 1877\u20131901). Curran Associates, Inc."},{"issue":"3","key":"2654_CR10","doi-asserted-by":"publisher","first-page":"169","DOI":"10.1080\/09298215.2015.1045424","volume":"44","author":"D Byrd","year":"2015","unstructured":"Byrd, D., & Simonsen, J. G. (2015). Towards a standard testbed for optical music recognition: Definitions, metrics, and page images. Journal of New Music Research,44(3), 169\u2013195.","journal-title":"Journal of New Music Research"},{"key":"2654_CR11","doi-asserted-by":"crossref","unstructured":"Calvo-Zaragoza, J., Haji\u010d\u00a0Jr., J., & Pacha, A. (2020). Understanding optical music recognition. ACM Comput. Surv., 53(4).","DOI":"10.1145\/3397499"},{"key":"2654_CR12","doi-asserted-by":"crossref","unstructured":"Calvo-Zaragoza, J., & Rizo, D. (2018a). Camera-PrIMuS: Neural End-to-End Optical Music Recognition on Realistic Monophonic Scores. Proceedings of the 19th International Society for Music Information Retrieval Conference (pp. 248\u2013255). ISMIR.","DOI":"10.3390\/app8040606"},{"issue":"4","key":"2654_CR13","doi-asserted-by":"publisher","first-page":"606","DOI":"10.3390\/app8040606","volume":"8","author":"J Calvo-Zaragoza","year":"2018","unstructured":"Calvo-Zaragoza, J., & Rizo, D. (2018). End-to-end neural optical music recognition of monophonic scores. 
Applied Sciences,8(4), 606.","journal-title":"Applied Sciences"},{"key":"2654_CR14","doi-asserted-by":"publisher","first-page":"115","DOI":"10.1016\/j.patrec.2019.08.021","volume":"128","author":"J Calvo-Zaragoza","year":"2019","unstructured":"Calvo-Zaragoza, J., Toselli, A. H., & Vidal, E. (2019). Handwritten Music Recognition for Mensural notation with convolutional recurrent neural networks. Pattern Recognition Letters,128, 115\u2013121.","journal-title":"Pattern Recognition Letters"},{"key":"2654_CR15","doi-asserted-by":"crossref","unstructured":"Campos, V.B., Calvo-Zaragoza, J., Toselli, A.H., & Ruiz, E.V. (2016). Sheet music statistical layout analysis. 2016 15th international conference on frontiers in handwriting recognition (icfhr) (pp. 313\u2013318).","DOI":"10.1109\/ICFHR.2016.0066"},{"key":"2654_CR16","unstructured":"Castellanos, F.J., Calvo-Zaragoza, J., & I\u00f1esta, J.M. (2020). A neural approach for full-page optical music recognition of mensural documents. Proc. of the 21st int. society for music information retrieval conference (pp. 12\u201316)."},{"key":"2654_CR17","doi-asserted-by":"crossref","unstructured":"Castellanos, F. J., Garrido-Mu\u00f1oz, C., R\u00edos-Vila, A., & Calvo-Zaragoza, J. (2022). Region-based layout analysis of music score images. Expert Systems with Applications,209, Article 118211.","DOI":"10.1016\/j.eswa.2022.118211"},{"issue":"7","key":"2654_CR18","doi-asserted-by":"publisher","first-page":"8227","DOI":"10.1109\/TPAMI.2023.3235826","volume":"45","author":"D Coquenet","year":"2023","unstructured":"Coquenet, D., Chatelain, C., & Paquet, T. (2023). Dan: a segmentation-free document attention network for handwritten document recognition. 
IEEE Transactions on Pattern Analysis and Machine Intelligence,45(7), 8227\u20138243.","journal-title":"IEEE Transactions on Pattern Analysis and Machine Intelligence"},{"key":"2654_CR19","unstructured":"Degroot-Maggetti, J., de Reuse, T., Feisthauer, L., Howes, S., Ju, Y., Kokubu, S., & Upham, F. (2020). Data quality matters: Iterative corrections on a corpus of mendelssohn string quartets and implications for mir analysis. International society for music information retrieval conference (ismir 2020)."},{"key":"2654_CR20","doi-asserted-by":"crossref","unstructured":"De Vega, F.F., Alvarado, J., & Cortez, J.V. (2022). Optical Music recognition and Deep Learning: An application to 4-part harmony. 2022 ieee congress on evolutionary computation (cec) (pp. 01\u201307).","DOI":"10.1109\/CEC55065.2022.9870357"},{"key":"2654_CR21","unstructured":"Devlin, J., Chang, M., Lee, K., & Toutanova, K. (2019). BERT: pre-training of deep bidirectional transformers for language understanding. J.\u00a0Burstein, C.\u00a0Doran, and T.\u00a0Solorio (Eds.), Proceedings of the 2019 conference of the north american chapter of the association for computational linguistics: Human language technologies, NAACL-HLT 2019, minneapolis, mn, usa, june 2-7, 2019, volume 1 (long and short papers) (pp. 4171\u20134186). Association for Computational Linguistics."},{"key":"2654_CR22","doi-asserted-by":"publisher","first-page":"28","DOI":"10.1016\/j.patrec.2023.03.020","volume":"169","author":"M Dhiaf","year":"2023","unstructured":"Dhiaf, M., Rouhou, A. C., Kessentini, Y., & Salem, S. B. (2023). Msdoctr-lite: A lite transformer for full page multi-script handwriting recognition. Pattern Recognition Letters,169, 28\u201334.","journal-title":"Pattern Recognition Letters"},{"key":"2654_CR23","unstructured":"Dvor\u00e1k, V., Hajic\u00a0jr, J., & Mayer, J. (2024). Staff layout analysis using the yolo platform. 
6th international workshop on reading music systems (p. 18)."},{"key":"2654_CR24","doi-asserted-by":"crossref","unstructured":"Foscarin, F., Jacquemard, F., & Fournier-S\u2019niehotta, R. (2019). A diff procedure for music score files. 6th international conference on digital libraries for musicology (pp. 58\u201364).","DOI":"10.1145\/3358664.3358671"},{"key":"2654_CR25","unstructured":"Good, M., et al. (2001). Musicxml: An internet-friendly format for sheet music. Xml conference and expo (pp. 03\u201304)."},{"key":"2654_CR26","unstructured":"Haji\u010d, J., Novotn\u00fd, J., Pecina, P., & Pokorn\u00fd, J. (2016). Further steps towards a standard testbed for optical music recognition. ISMIR (pp. 157\u2013163)."},{"key":"2654_CR27","doi-asserted-by":"crossref","unstructured":"Hartelt, A., & Puppe, F. (2022). Optical medieval music recognition using background knowledge. Algorithms, 15(7).","DOI":"10.3390\/a15070221"},{"key":"2654_CR28","doi-asserted-by":"publisher","DOI":"10.1016\/j.softx.2023.101365","volume":"22","author":"C Hernandez-Olivan","year":"2023","unstructured":"Hernandez-Olivan, C., & Beltran, J. R. (2023). Musicaiz: A python library for symbolic music generation, analysis and visualization. SoftwareX,22, Article 101365.","journal-title":"SoftwareX"},{"key":"2654_CR29","unstructured":"Huron, D. (1997). Humdrum and kern: Selective feature encoding. Beyond MIDI."},{"key":"2654_CR30","doi-asserted-by":"crossref","unstructured":"Haji\u010d, J., & Pecina, P. (2017). The MUSCIMA++ Dataset for Handwritten Optical Music Recognition. 14th international conference on document analysis and recognition, ICDAR 2017, kyoto, japan, november 13\u201315, 2017 (pp. 39\u201346). New York, USA: IEEE Computer Society.","DOI":"10.1109\/ICDAR.2017.16"},{"key":"2654_CR31","doi-asserted-by":"crossref","unstructured":"Kim, G., Hong, T., Yim, M., Nam, J., Park, J., Yim, J. & Park, S. (2022). Ocr-free document understanding transformer. 
European conference on computer vision (eccv).","DOI":"10.1007\/978-3-031-19815-1_29"},{"key":"2654_CR32","unstructured":"Li, M., Lv, T., Cui, L., Lu, Y., Florencio, D., Zhang, C. & Wei, F. (2021). Trocr: Transformer-based optical character recognition with pre-trained models."},{"key":"2654_CR33","doi-asserted-by":"crossref","unstructured":"Li, Y., Liu, H., Jin, Q., Cai, M., & Li, P. (2023). TROMR: Transformer-based polyphonic optical music recognition. ICASSP 2023 - 2023 ieee international conference on acoustics, speech and signal processing (icassp) (pp. 1\u20135).","DOI":"10.1109\/ICASSP49357.2023.10096055"},{"key":"2654_CR34","doi-asserted-by":"crossref","unstructured":"Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z. & Guo, B. (2021). Swin transformer: Hierarchical vision transformer using shifted windows. Proceedings of the ieee\/cvf international conference on computer vision (iccv).","DOI":"10.1109\/ICCV48922.2021.00986"},{"key":"2654_CR35","doi-asserted-by":"crossref","unstructured":"Liu, Z., Mao, H., Wu, C-Y., Feichtenhofer, C., Darrell, T., & Xie, S. (2022). A convnet for the 2020s. Proceedings of the ieee\/cvf conference on computer vision and pattern recognition (cvpr) (pp. 11976\u201311986).","DOI":"10.1109\/CVPR52688.2022.01167"},{"key":"2654_CR36","doi-asserted-by":"crossref","unstructured":"Mart\u00ednez-Sevilla, J.C., R\u00edos-Vila, A., Castellanos, F.J., & Calvo-Zaragoza, J. (2023). A holistic approach for aligned music and lyrics transcription. Document analysis and recognition - ICDAR 2023 - 17th international conference, san jos\u00e9, ca, usa, august 21-26, 2023, proceedings, part I (Vol. 14187, pp. 185\u2013201). Springer.","DOI":"10.1007\/978-3-031-41676-7_11"},{"key":"2654_CR37","doi-asserted-by":"crossref","unstructured":"Mayer, J., & Pecina, P. (2021). Synthesizing training data for handwritten music recognition. International conference on document analysis and recognition (pp. 
626\u2013641).","DOI":"10.1007\/978-3-030-86334-0_41"},{"key":"2654_CR38","doi-asserted-by":"crossref","unstructured":"Mayer, J., Straka, M., Pecina, P., et al. (2024). Practical end-to-end optical music recognition for pianoform music. arXiv preprint arXiv:2403.13763.","DOI":"10.1007\/978-3-031-70552-6_4"},{"key":"2654_CR39","doi-asserted-by":"publisher","first-page":"6383","DOI":"10.1007\/s11042-019-08200-0","volume":"79","author":"L Mengarelli","year":"2020","unstructured":"Mengarelli, L., Kostiuk, B., Vit\u00f3rio, J. G., Tibola, M. A., Wolff, W., & Silla, C. N. (2020). Omr metrics and evaluation: a systematic review. Multimedia Tools and Applications,79, 6383\u20136408.","journal-title":"Multimedia Tools and Applications"},{"key":"2654_CR40","doi-asserted-by":"crossref","unstructured":"Pacha, A., & Eidenberger, H. (2017). Towards a universal music symbol classifier. 14th international conference on document analysis and recognition (pp. 35\u201336). Kyoto, Japan: IEEE Computer Society.","DOI":"10.1109\/ICDAR.2017.265"},{"key":"2654_CR41","unstructured":"Pe\u00f1arrubia, C., Garrido-Mu\u00f1oz, C., Valero-Mas, J.J., & Calvo-Zaragoza, J. (2023). Efficient Notation Assembly in Optical Music Recognition. Proceedings of the 24th International Society for Music Information Retrieval Conference (pp. 182\u2013189). ISMIR."},{"key":"2654_CR42","unstructured":"Pugin, L., Zitellini, R., & Roland, P. (2014). Verovio - A library for Engraving MEI Music Notation into SVG. International society for music information retrieval."},{"issue":"3","key":"2654_CR43","doi-asserted-by":"publisher","first-page":"173","DOI":"10.1007\/s13735-012-0004-6","volume":"1","author":"A Rebelo","year":"2012","unstructured":"Rebelo, A., Fujinaga, I., Paszkiewicz, F., Marcal, A. R., Guedes, C., & Cardoso, J. S. (2012). Optical music recognition: state-of-the-art and open issues. 
International Journal of Multimedia Information Retrieval,1(3), 173\u2013190.","journal-title":"International Journal of Multimedia Information Retrieval"},{"issue":"1","key":"2654_CR44","doi-asserted-by":"publisher","first-page":"51","DOI":"10.1017\/S1479409819000673","volume":"18","author":"J Rink","year":"2021","unstructured":"Rink, J. (2021). Digital editions and the creative work of the performer. Nineteenth-Century Music Review,18(1), 51\u201381.","journal-title":"Nineteenth-Century Music Review"},{"key":"2654_CR45","doi-asserted-by":"crossref","unstructured":"R\u00edos-Vila, A., Calvo-Zaragoza, J., & Rizo, D. (2020). Evaluating simultaneous recognition and encoding for optical music recognition. Dlfm \u201920: 7th international conference on digital libraries for musicology, montr\u00e9al, qc, canada, october 16, 2020 (pp. 10\u201317). ACM.","DOI":"10.1145\/3424911.3425512"},{"key":"2654_CR46","doi-asserted-by":"crossref","unstructured":"R\u00edos-Vila, A., Rizo, D., & Calvo-Zaragoza, J. (2021). Complete optical music recognition via agnostic transcription and machine translation. 16th international conference on document analysis and recognition, ICDAR 2021, lausanne, switzerland, september 5-10, 2021, proceedings, part III (Vol. 12823, pp. 661\u2013675). Springer.","DOI":"10.1007\/978-3-030-86334-0_43"},{"issue":"3","key":"2654_CR47","doi-asserted-by":"publisher","first-page":"347","DOI":"10.1007\/s10032-023-00432-z","volume":"26","author":"A R\u00edos-Vila","year":"2023","unstructured":"R\u00edos-Vila, A., Rizo, D., I\u00f1esta, J. M., & Calvo-Zaragoza, J. (2023). End-to-end optical music recognition for pianoform sheet music. Int. J. Document Anal. Recognit.,26(3), 347\u2013362.","journal-title":"Int. J. Document Anal. Recognit."},{"key":"2654_CR48","doi-asserted-by":"crossref","unstructured":"R\u00edos-Vila, A., Calvo-Zaragoza, J., & Paquet, T. (2024). Sheet music transformer: End-to-end optical music recognition beyond monophonic transcription. 
Document analysis and recognition - icdar 2024 (pp. 20\u201337). Cham: Springer Nature Switzerland.","DOI":"10.1007\/978-3-031-70552-6_2"},{"issue":"1","key":"2654_CR49","doi-asserted-by":"publisher","DOI":"10.1155\/2007\/81541","volume":"2007","author":"F Rossant","year":"2006","unstructured":"Rossant, F., & Bloch, I. (2006). Robust and adaptive omr system including fuzzy modeling, fusion of musical rules, and possible error detection. EURASIP Journal on Advances in Signal Processing,2007(1), Article 081541.","journal-title":"EURASIP Journal on Advances in Signal Processing"},{"key":"2654_CR50","doi-asserted-by":"crossref","unstructured":"R\u00edos-Vila, A., Espl\u00e0-Gomis, M., Rizo, D., Ponce\u00a0de Le\u00f3n, P.J., & I\u00f1esta, J.M. (2021). Applying automatic translation for optical music recognition\u2019s encoding step. Applied Sciences, Special Issue: Advances in Music Reading Systems, 11(9).","DOI":"10.3390\/app11093890"},{"key":"2654_CR51","unstructured":"R\u00edos-Vila, A., I\u00f1esta, J.M., & Calvo-Zaragoza, J. (2022). End-to-End Full-Page Optical Music Recognition for Mensural Notation. Proceedings of the 23rd International Society for Music Information Retrieval Conference (pp. 226\u2013232). Bengaluru, India: ISMIR."},{"issue":"5","key":"2654_CR52","doi-asserted-by":"publisher","first-page":"580","DOI":"10.1080\/07494467.2020.1852802","volume":"39","author":"F Schuiling","year":"2020","unstructured":"Schuiling, F. (2020). (re-) assembling notations in the performance of early music. Contemporary Music Review,39(5), 580\u2013601.","journal-title":"Contemporary Music Review"},{"key":"2654_CR53","doi-asserted-by":"crossref","unstructured":"Singh, S.S., & Karayev, S. (2021). Full page handwriting recognition via image to sequence extraction. 
J.\u00a0Llad\u00f3s, D.\u00a0Lopresti, and S.\u00a0Uchida (Eds.), 16th international conference on document analysis and recognition, ICDAR 2021, lausanne, switzerland, september 5-10, 2021, proceedings, part III (Vol. 12823, pp. 55\u201369). Springer.","DOI":"10.1007\/978-3-030-86334-0_4"},{"key":"2654_CR54","doi-asserted-by":"crossref","unstructured":"Song, Y., Shen, Y., Ding, P., Zhang, X., Shi, X., & Xue, Y. (2022). Optical music recognition based deep neural networks. Signal and information processing, networking and computers (pp. 1051\u20131059). Singapore: Springer Nature Singapore.","DOI":"10.1007\/978-981-19-4775-9_136"},{"key":"2654_CR55","doi-asserted-by":"crossref","unstructured":"Tanha, J., Does, J., Depuydt, K., & S\u00e1nchez, J-A. (2015). Crossing the lines: making optimal use of context in line-based handwritten text recognition. 2015 13th international conference on document analysis and recognition (icdar) (pp. 956\u2013960).","DOI":"10.1109\/ICDAR.2015.7333903"},{"key":"2654_CR56","unstructured":"Torras, P., Bar\u00f3, A., Kang, L., & Forn\u00e9s, A. (2021). On the Integration of Language Models into Sequence to Sequence Architectures for Handwritten Music Recognition. Proceedings of the 22nd International Society for Music Information Retrieval Conference (pp. 690\u2013696). ISMIR."},{"key":"2654_CR57","unstructured":"Torras, P., Biswas, S., & Forn\u00e9s, A. (2023). The common optical music recognition evaluation framework."},{"key":"2654_CR58","unstructured":"Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N. & Polosukhin, I. (2017). Attention is all you need. I.\u00a0Guyon et\u00a0al. (Eds.), Advances in neural information processing systems (Vol.\u00a030). Curran Associates, Inc."},{"key":"2654_CR59","doi-asserted-by":"crossref","unstructured":"Villarreal, M., & S\u00e1nchez, J.A. (2024). Enhancing recognition of historical musical pieces with synthetic and composed images. 
Document analysis and recognition - icdar 2024 (pp. 74\u201390). Cham: Springer Nature Switzerland.","DOI":"10.1007\/978-3-031-70543-4_5"},{"key":"2654_CR60","doi-asserted-by":"crossref","unstructured":"Yesilkanat, A., Soullard, Y., Co\u00fcasnon, B., & Girard, N. (2024). Full-page music symbols recognition: State-of-the-art deep model comparison for handwritten and printed music scores. Document analysis systems (pp. 327\u2013343). Cham: Springer Nature Switzerland.","DOI":"10.1007\/978-3-031-70442-0_20"},{"key":"2654_CR61","unstructured":"Yoyo, Liebhardt, C., & Samuel, S. (2023). BreezeWhite\/oemer: v0.1.7. Zenodo."}],"container-title":["International Journal of Computer Vision"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11263-025-02654-6.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s11263-025-02654-6","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11263-025-02654-6.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,2,17]],"date-time":"2026-02-17T15:19:30Z","timestamp":1771341570000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s11263-025-02654-6"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2026,1,9]]},"references-count":61,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2026,2]]}},"alternative-id":["2654"],"URL":"https:\/\/doi.org\/10.1007\/s11263-025-02654-6","relation":{},"ISSN":["0920-5691","1573-1405"],"issn-type":[{"value":"0920-5691","type":"print"},{"value":"1573-1405","type":"electronic"}],"subject":[],"published":{"date-parts":[[2026,1,9]]},"assertion":[{"value":"21 September 
2024","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"10 October 2025","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"9 January 2026","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The first author is supported by grants ACIF\/2021\/356 and CIBEFP\/2022\/19 from the \u201cPrograma I+D+i de la Generalitat Valenciana\u201d. Second author is supported by grant CISEJI\/2023\/9 from \u201cPrograma para el apoyo a personas investigadoras con talento (Plan GenT) de la Generalitat Valenciana\u201d.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Funding"}},{"value":"The authors have no competing interests to declare that are relevant to the content of this article.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest \/ Competing interests"}},{"value":"Not applicable.","order":4,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethics approval and consent to participate"}},{"value":"Not applicable.","order":5,"name":"Ethics","group":{"name":"EthicsHeading","label":"Consent for publication"}},{"value":"Not applicable.","order":6,"name":"Ethics","group":{"name":"EthicsHeading","label":"Materials availability"}},{"value":"The source code of the model is open-source. It is available at\n                      \n                      while the model and the weights are published in the HuggingFace Transformers platform\n                      \n                      .","order":7,"name":"Ethics","group":{"name":"EthicsHeading","label":"Code availability"}}],"article-number":"49"}}