{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,30]],"date-time":"2025-12-30T23:39:37Z","timestamp":1767137977705,"version":"build-2238731810"},"publisher-location":"Cham","reference-count":27,"publisher":"Springer Nature Switzerland","isbn-type":[{"value":"9783031705519","type":"print"},{"value":"9783031705526","type":"electronic"}],"license":[{"start":{"date-parts":[[2024,1,1]],"date-time":"2024-01-01T00:00:00Z","timestamp":1704067200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/www.springernature.com\/gp\/researchers\/text-and-data-mining"},{"start":{"date-parts":[[2024,1,1]],"date-time":"2024-01-01T00:00:00Z","timestamp":1704067200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.springernature.com\/gp\/researchers\/text-and-data-mining"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2024]]},"abstract":"<jats:title>Abstract<\/jats:title>\n                  <jats:p>\n                    In recent years, the development of Optical Music Recognition (OMR) has progressed significantly. However, music cultures with smaller communities have only recently been considered in this process. This results in a lack of adequate ground truth datasets needed for the development and benchmarking of OMR systems. In this work, the KuiSCIMA (Jiang Kui Score Images for Musicological Analysis) dataset is introduced. KuiSCIMA is the first machine-readable dataset of the\n                    <jats:italic>suzipu<\/jats:italic>\n                    notations in Jiang Kui\u2019s collection\n                    <jats:italic>Baishidaoren Gequ<\/jats:italic>\n                    from 1202. 
Collected from five different woodblock print editions, the dataset contains 21797 manually annotated instances on 153 pages in total, of which 14500 are text character annotations and 7297 are\n                    <jats:italic>suzipu<\/jats:italic>\n                    notation symbols. The dataset comes with an open-source tool that allows editing, visualizing, and exporting the contents of the dataset files. Overall, this contribution promotes the preservation and understanding of cultural heritage through digitization.\n                  <\/jats:p>","DOI":"10.1007\/978-3-031-70552-6_3","type":"book-chapter","created":{"date-parts":[[2024,9,10]],"date-time":"2024-09-10T00:02:14Z","timestamp":1725926534000},"page":"38-54","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":2,"title":["The KuiSCIMA Dataset for\u00a0Optical Music Recognition of\u00a0Ancient Chinese Suzipu Notation"],"prefix":"10.1007","author":[{"ORCID":"https:\/\/orcid.org\/0009-0009-8435-1185","authenticated-orcid":false,"given":"Tristan","family":"Repolusk","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0356-4034","authenticated-orcid":false,"given":"Eduardo","family":"Veas","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,9,11]]},"reference":[{"key":"3_CR1","unstructured":"Berten, O.: GregoBase: A database of Gregorian scores (2013). https:\/\/gregobase.selapa.net"},{"key":"3_CR2","unstructured":"Bradski, G.: The OpenCV library. Dr. Dobb\u2019s J. Softw. Tools (2000)"},{"key":"3_CR3","doi-asserted-by":"publisher","unstructured":"Calvo-Zaragoza, J., Toselli, A.H., Vidal, E.: Handwritten music recognition for mensural notation with convolutional recurrent neural networks. Pattern Recogn. Lett. 128 (2019). 
https:\/\/doi.org\/10.1016\/j.patrec.2019.08.021","DOI":"10.1016\/j.patrec.2019.08.021"},{"key":"3_CR4","doi-asserted-by":"publisher","unstructured":"Chen, G.-F., Sheu, J.-S.: An optical music recognition system for traditional Chinese Kunqu Opera scores written in Gong-Che Notation. EURASIP J. Audio Speech Music Process. pp. 7\u201317. https:\/\/doi.org\/10.1186\/1687-4722-2014-7","DOI":"10.1186\/1687-4722-2014-7"},{"key":"3_CR5","doi-asserted-by":"publisher","unstructured":"Cheng, H., et al.: SCUT-CAB: a new benchmark dataset of ancient Chinese books with complex layouts for document layout analysis, November 2022, pp. 436\u2013451. ISBN 978-3-031-21647-3. https:\/\/doi.org\/10.1007\/978-3-031-21648-0_30","DOI":"10.1007\/978-3-031-21648-0_30"},{"key":"3_CR6","unstructured":"Cheng, Y.: Xi\u2019an Guyue: Xi\u2019an old music in new China. \u2018Living fossil\u2019 or \u2018flowing river\u2019? Dissertation. School of Oriental and African Studies, University of London (2005). https:\/\/eprints.soas.ac.uk\/29336\/1\/10731431.pdf. Accessed 03 Aug 2023"},{"key":"3_CR7","doi-asserted-by":"publisher","unstructured":"Forn\u00e9s, A., et al.: CVC-MUSCIMA: a ground-truth of handwritten music score images for writer identification and staff removal. Int. J. Doc. Anal. Recogn. 15(3), 243\u2013251 (2012). https:\/\/doi.org\/10.1007\/s10032-011-0168-2","DOI":"10.1007\/s10032-011-0168-2"},{"key":"3_CR8","doi-asserted-by":"crossref","unstructured":"Haji\u010d jr., J., Pecina, P.: The MUSCIMA++ dataset for handwritten optical music recognition. In: 14th International Conference on Document Analysis and Recognition. ICDAR 2017, Kyoto, Japan, pp. 39\u201346 (2017)","DOI":"10.1109\/ICDAR.2017.16"},{"key":"3_CR9","unstructured":"Joshi, P.: Fashion MNIST with Pytorch (93% accuracy) (2019). https:\/\/www.kaggle.com\/code\/pankajj\/fashion-mnist-with-pytorch-93-accuracy. 
Accessed 10 Feb 2024"},{"key":"3_CR10","doi-asserted-by":"crossref","unstructured":"Lam, J.S.C.: Ci songs from the Song dynasty: a M\u00e9nage \u00e0 Trois of lyrics, music, and performance. New Liter. Hist. 46(4), 623\u2013646 (2015). ISSN 0028-6087, 1080-661X. http:\/\/www.jstor.org\/stable\/24772762. Accessed 2 Aug 2023","DOI":"10.1353\/nlh.2015.0040"},{"key":"3_CR11","doi-asserted-by":"publisher","unstructured":"Ma, W., et al.: Joint layout analysis, character detection and recognition for historical document digitization. In: 2020 17th International Conference on Frontiers in Handwriting Recognition (ICFHR), pp. 31\u201336 (2020). https:\/\/doi.org\/10.1109\/ICFHR2020.2020.00017","DOI":"10.1109\/ICFHR2020.2020.00017"},{"key":"3_CR12","unstructured":"Martinez-Sevilla, J.C., et al.: On the performance of optical music recognition in the absence of specific training data. In: Proceedings of the 24th International Society for Music Information Retrieval Conference (Milan, Italy). ISMIR, November 2023, pp. 319\u2013326 (2023). https:\/\/doi.org\/10.5281\/zenodo.10265289"},{"key":"3_CR13","doi-asserted-by":"publisher","unstructured":"Repolusk, T., Veas, E.: The Suzipu musical annotation tool for the creation of machine-readable datasets of ancient Chinese music. In: Calvo-Zaragoza, J., Pacha, A., Shatri, E. (eds.) Proceedings of the 5th International Workshop on Reading Music Systems, Milan, Italy, pp. 7\u201311 (2023). https:\/\/doi.org\/10.48550\/arXiv.2311.04091. https:\/\/sites.google.com\/view\/worms2023\/proceedings","DOI":"10.48550\/arXiv.2311.04091"},{"key":"3_CR14","doi-asserted-by":"publisher","unstructured":"Saini, R., et al.: ICDAR 2019 historical document reading challenge on large structured Chinese family records. In: 2019 International Conference on Document Analysis and Recognition (ICDAR), pp. 1499\u20131504 (2019). 
https:\/\/doi.org\/10.1109\/ICDAR.2019.00241","DOI":"10.1109\/ICDAR.2019.00241"},{"key":"3_CR15","doi-asserted-by":"publisher","unstructured":"Shen, T., et al.: Semantic recognition of common musical notes in Guqin score based on optimal statistical features. In: 4th International Conference on Advances in Computer Technology, Information Science and Communications (CTISC), pp. 1\u20134 (2022). https:\/\/doi.org\/10.1109\/CTISC54888.2022.9849792","DOI":"10.1109\/CTISC54888.2022.9849792"},{"key":"3_CR16","unstructured":"Sturgeon, D.: Chinese Text Project (2011). https:\/\/ctext.org\/library.pl. Accessed 30 June 2023"},{"key":"3_CR17","doi-asserted-by":"crossref","unstructured":"Sturgeon, D.: Large-scale optical character recognition of pre-modern Chinese texts. Int. J. Buddhist Thought Cult. 28(2), 11\u201344 (2018)","DOI":"10.16893\/IJBTC.2018.12.28.2.11"},{"key":"3_CR18","doi-asserted-by":"crossref","unstructured":"Tang, C.-W., Liu, C.-L., Chiu, P.-S.: HRCenterNet: an anchorless approach to Chinese character segmentation in historical documents. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 1924\u20131930 (2020)","DOI":"10.1109\/BigData50022.2020.9378051"},{"key":"3_CR19","unstructured":"West, A.C.: Musical notation for flute in Tangut manuscripts. In: Popova, I. (ed.) Tanguty v Central\u2019noj Azii, pp. 443\u2013454. Vosto\u010dnaja literatura, Moskva (2012)"},{"key":"3_CR20","doi-asserted-by":"publisher","first-page":"30174","DOI":"10.1109\/ACCESS.2018.2840218","volume":"6","author":"H Yang","year":"2018","unstructured":"Yang, H., et al.: Dense and tight detection of Chinese characters in historical documents: datasets and a recognition guided detector. IEEE Access 6, 30174\u201330183 (2018). https:\/\/doi.org\/10.1109\/ACCESS.2018.2840218","journal-title":"IEEE Access"},{"key":"3_CR21","unstructured":"Yang, Y.: Plum blossom on the far side of the stream. 
The renaissance of Jiang Kui\u2019s Lyric Oeuvre with facsimiles and a new critical edition of the songs of the Whitestone Daoist. Hong Kong University Press, Hong Kong (2019)"},{"key":"3_CR22","unstructured":"Wu, S.: Songci Yinyue Zhuanti Yanjiu. Dissertation. Yangzhou University (2013)"},{"key":"3_CR23","unstructured":"Jiang, K.: Baishidaoren Gequ. (Ed. by Zhu, Z.). Guian: Zhushi (1913)"},{"key":"3_CR24","unstructured":"Jiang, K.: Baishidaoren Gequ. (Ed. by Lu, Z.). reprinted in [16], [1743] (2011). https:\/\/ctext.org\/library.pl?if=en&res=775747. Accessed 30 June 2023"},{"key":"3_CR25","unstructured":"Jiang, K.: Baishidaoren Gequ. (Ed. by Zhang, Y.). reprinted in [21], pp. 259\u2013323, [1749] (2019)"},{"key":"3_CR26","unstructured":"Jiang, K.: Baishidaoren Gequ. (Ed. by Lu, Z., Min, H., Wang, Z.). reprinted in [21], pp. 193\u2013254, [c.1736] (2019)"},{"key":"3_CR27","unstructured":"Jiang, K.: Baishidaoren Gequ. In: Siku Quanshu, vol. 1. reprinted in [16]. https:\/\/ctext.org\/library.pl?res=106386. 
Accessed 30 June 2023"}],"updated-by":[{"DOI":"10.1007\/978-3-031-70552-6_27","type":"correction","label":"Correction","source":"publisher","updated":{"date-parts":[[2024,9,11]],"date-time":"2024-09-11T00:00:00Z","timestamp":1726012800000}}],"container-title":["Lecture Notes in Computer Science","Document Analysis and Recognition - ICDAR 2024"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/978-3-031-70552-6_3","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,15]],"date-time":"2025-06-15T10:07:51Z","timestamp":1749982071000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/978-3-031-70552-6_3"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024]]},"ISBN":["9783031705519","9783031705526"],"references-count":27,"URL":"https:\/\/doi.org\/10.1007\/978-3-031-70552-6_3","relation":{},"ISSN":["0302-9743","1611-3349"],"issn-type":[{"value":"0302-9743","type":"print"},{"value":"1611-3349","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024]]},"assertion":[{"value":"11 September 2024","order":1,"name":"first_online","label":"First Online","group":{"name":"ChapterHistory","label":"Chapter History"}},{"value":"11 September 2024","order":2,"name":"change_date","label":"Change Date","group":{"name":"ChapterHistory","label":"Chapter History"}},{"value":"Correction","order":3,"name":"change_type","label":"Change Type","group":{"name":"ChapterHistory","label":"Chapter History"}},{"value":"A correction has been published.","order":4,"name":"change_details","label":"Change Details","group":{"name":"ChapterHistory","label":"Chapter History"}},{"value":"The authors have no competing interests to declare that are relevant to the content of this article.","order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Disclosure of 
Interests"}},{"value":"ICDAR","order":1,"name":"conference_acronym","label":"Conference Acronym","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"International Conference on Document Analysis and Recognition","order":2,"name":"conference_name","label":"Conference Name","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"Athens","order":3,"name":"conference_city","label":"Conference City","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"Greece","order":4,"name":"conference_country","label":"Conference Country","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"2024","order":5,"name":"conference_year","label":"Conference Year","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"30 August 2024","order":7,"name":"conference_start_date","label":"Conference Start Date","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"4 September 2024","order":8,"name":"conference_end_date","label":"Conference End Date","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"icdar2024","order":10,"name":"conference_id","label":"Conference ID","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"https:\/\/icdar2024.net\/","order":11,"name":"conference_url","label":"Conference URL","group":{"name":"ConferenceInfo","label":"Conference Information"}}]}}