{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,30]],"date-time":"2025-12-30T03:38:57Z","timestamp":1767065937945,"version":"3.41.0"},"reference-count":42,"publisher":"Association for Computing Machinery (ACM)","issue":"3","license":[{"start":{"date-parts":[[2022,9,16]],"date-time":"2022-09-16T00:00:00Z","timestamp":1663286400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"German Science Foundation"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["J. Comput. Cult. Herit."],"published-print":{"date-parts":[[2022,9,30]]},"abstract":"<jats:p>Automated content-based search for arbitrary cuneiform signs in photographic reproductions is a challenging task in the analysis of ancient documents, a central component of which is a reliable cuneiform sign classification. We present an illumination-based approach to generate synthetic training data for cuneiform sign classification via deep neural networks to overcome common issues with the transferability of machine learning training results. Starting from an analysis of the negative impact of illumination variations in the processed cuneiform data, we employ an illumination augmentation to two-dimensional (2D) training data generated from annotated 3D datasets. We demonstrate that our method is able to overcome the high visual variance of most digitized 2D cuneiform reproductions and achieve an illumination invariant generalization. The effectiveness of our approach is evaluated by its successful application to several subsets of a cuneiform script dataset with an originally poor transferability of mutual training results. Furthermore, we show that a sufficient sampling of the illumination space mostly removes the necessity to match the training data to specific target illumination conditions. 
The practical applicability of our approach is validated by applying it to a larger dataset, raising the overall classification accuracy by 4 percentage points to 90%, resulting in a classification error reduction of 28.5% when compared to results without the proposed data augmentation.<\/jats:p>","DOI":"10.1145\/3495263","type":"journal-article","created":{"date-parts":[[2022,2,18]],"date-time":"2022-02-18T20:26:37Z","timestamp":1645215997000},"page":"1-20","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":3,"title":["Illumination-based Augmentation for Cuneiform Deep Neural Sign Classification"],"prefix":"10.1145","volume":"15","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-0551-517X","authenticated-orcid":false,"given":"Christopher","family":"Rest","sequence":"first","affiliation":[{"name":"TU Dortmund, Chair of Computer Graphics, Dortmund, Germany"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0796-9442","authenticated-orcid":false,"given":"Denis","family":"Fisseler","sequence":"additional","affiliation":[{"name":"TU Dortmund, Chair of Computer Graphics, Dortmund, Germany"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2530-8197","authenticated-orcid":false,"given":"Frank","family":"Weichert","sequence":"additional","affiliation":[{"name":"TU Dortmund, Chair of Computer Graphics, Dortmund, Germany"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2084-4967","authenticated-orcid":false,"given":"Turna","family":"Somel","sequence":"additional","affiliation":[{"name":"Akademie der Wissenschaften und der Literatur, Mainz, Germany"}]},{"given":"Gerfrid G. W.","family":"M\u00fcller","sequence":"additional","affiliation":[{"name":"Akademie der Wissenschaften und der Literatur, Mainz, Germany"}]}],"member":"320","published-online":{"date-parts":[[2022,9,16]]},"reference":[{"key":"e_1_3_2_2_2","article-title":"Applying data augmentation to handwritten arabic numeral recognition using deep learning neural networks","author":"Ashiquzzaman Akm","year":"2017","unstructured":"Akm Ashiquzzaman, Abdul Kawsar Tushar, and Ashiqur Rahman. 2017. Applying data augmentation to handwritten arabic numeral recognition using deep learning neural networks. arXiv:1708.05969. Retrieved from https:\/\/arxiv.org\/abs\/1708.05969.","journal-title":"arXiv:1708.05969"},{"key":"e_1_3_2_3_2","article-title":"Handwritten text recognition using deep learning","author":"Balci Batuhan","year":"2017","unstructured":"Batuhan Balci, Dan Saadati, and Dan Shiferaw. 2017. Handwritten text recognition using deep learning. CS231n: Convolutional Neural Networks for Visual Recognition, Stanford University, Course Project Report, Spring (2017).","journal-title":"CS231n: Convolutional Neural Networks for Visual Recognition, Stanford University, Course Project Report, Spring"},{"key":"e_1_3_2_4_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICDAR.2015.7333777"},{"key":"e_1_3_2_5_2","first-page":"105","volume-title":"Proceedings of the 20th Computer Vision Winter Workshop","author":"Bogacz Bartosz","year":"2015","unstructured":"Bartosz Bogacz, Michael Gertz, and Hubert Mara. 2015b. Cuneiform character similarity using graph representations. In Proceedings of the 20th Computer Vision Winter Workshop. 
105\u2013112."},{"key":"e_1_3_2_6_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICFHR.2016.0064"},{"key":"e_1_3_2_7_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICDAR.2017.106"},{"key":"e_1_3_2_8_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICFHR2020.2020.00053"},{"key":"e_1_3_2_9_2","volume-title":"Current Research in Cuneiform Paleography: Proceedings of a Workshop held at the 60th Rencontre Assyriologique Internationale","author":"Cammarosano Michele","year":"2014","unstructured":"Michele Cammarosano. 2014a. 3D-Joins und schriftmetrologie: A quantitative approach to cuneiform palaeography. In Current Research in Cuneiform Paleography: Proceedings of a Workshop held at the 60th Rencontre Assyriologique Internationale. University of Warsaw."},{"key":"e_1_3_2_10_2","unstructured":"Michele Cammarosano. 2014b. The cuneiform stylus. Mesopotamia 49 (2014) 53\u201390."},{"key":"e_1_3_2_11_2","unstructured":"London Department of the Middle East of the British Museum and Cuneiform Digital Library Initiative (CDLI). British Museum Collection. Retrieved June 15 2021 from cdli.ucla.edu."},{"key":"e_1_3_2_12_2","doi-asserted-by":"publisher","DOI":"10.1145\/3197517.3201378"},{"key":"e_1_3_2_13_2","article-title":"On the importance of visual context for data augmentation in scene understanding","author":"Dvornik Nikita","year":"2019","unstructured":"Nikita Dvornik, Julien Mairal, and Cordelia Schmid. 2019. On the importance of visual context for data augmentation in scene understanding. IEEE Trans. Pattern Anal. Mach. Intell. 43, 6 (2019), 2014\u20132028.","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"e_1_3_2_14_2","doi-asserted-by":"publisher","DOI":"10.1109\/83.568931"},{"key":"e_1_3_2_15_2","unstructured":"Matthias Fey Jan Eric Lenssen Frank Weichert and Heinrich M\u00fcller. 2017. SplineCNN: Fast geometric deep learning with continuous B-Spline kernels. arxiv:1711.08920. Retrieved from https:\/\/arxiv.org\/abs\/1711.08920."},{"key":"e_1_3_2_16_2","volume-title":"Proceedings of the Scientific Computing and Cultural Heritage Conference (SCCH\u201913)","author":"Fisseler Denis","year":"2013","unstructured":"Denis Fisseler, Frank Weichert, Michele Cammarosano, and Gerfrid G. W. M\u00fcller. 2013. Towards an interactive and automated script feature analysis of 3D scanned cuneiform tablets. In Proceedings of the Scientific Computing and Cultural Heritage Conference (SCCH\u201913), Vol. 4."},{"key":"e_1_3_2_17_2","doi-asserted-by":"publisher","DOI":"10.5555\/2854922.2854945"},{"key":"e_1_3_2_18_2","doi-asserted-by":"publisher","DOI":"10.5555\/2854922.2854945"},{"key":"e_1_3_2_19_2","doi-asserted-by":"publisher","DOI":"10.5555\/3086952"},{"issue":"2","key":"e_1_3_2_20_2","first-page":"163","article-title":"New visualization techniques for cuneiform texts and sealings","volume":"132","author":"Hameeuw Hendrik","year":"2011","unstructured":"Hendrik Hameeuw and Geert Willems. 2011. New visualization techniques for cuneiform texts and sealings. Akkadica 132, 2 (2011), 163\u2013178.","journal-title":"Akkadica"},{"key":"e_1_3_2_21_2","unstructured":"Alex Hern\u00e1ndez-Garc\u00eda and Peter K\u00f6nig. 2019. Further advantages of data augmentation on convolutional neural networks. arxiv:1906.11052. Retrieved from https:\/\/arxiv.org\/abs\/1906.11052."},{"key":"e_1_3_2_22_2","unstructured":"Gao Huang Zhuang Liu Laurens van der Maaten and Kilian Q. Weinberger. 2016. Densely connected convolutional networks. arxiv:cs.CV\/1608.06993. 
Retrieved from https:\/\/arxiv.org\/abs\/1608.06993."},{"key":"e_1_3_2_23_2","unstructured":"Nils M. Kriege Matthias Fey Denis Fisseler Petra Mutzel and Frank Weichert. 2018. Recognizing cuneiform signs using graph based methods. arxiv:1802.05908. Retrieved from https:\/\/arxiv.org\/abs\/1802.05908."},{"key":"e_1_3_2_24_2","first-page":"1097","volume-title":"Advances in Neural Information Processing Systems","author":"Krizhevsky Alex","year":"2012","unstructured":"Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems. 1097\u20131105."},{"key":"e_1_3_2_25_2","unstructured":"Niall O\u2019 Mahony Sean Campbell Anderson Carvalho Suman Harapanahalli Gustavo Adolfo Velasco-Hern\u00e1ndez Lenka Krpalkova Daniel Riordan and Joseph Walsh. 2019. Deep learning vs. traditional computer vision. arxiv:1910.13796. Retrieved from https:\/\/arxiv.org\/abs\/1910.13796."},{"key":"e_1_3_2_26_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICDAR.2013.21"},{"key":"e_1_3_2_27_2","doi-asserted-by":"publisher","DOI":"10.11588\/heidok.00020244"},{"key":"e_1_3_2_28_2","unstructured":"Gerfrid G. W. M\u00fcller. 2000. Hethitologie Portal Mainz. Retrieved May 5 2018 from www.hethiter.net."},{"key":"e_1_3_2_29_2","doi-asserted-by":"publisher","DOI":"10.3390\/sym10110648"},{"key":"e_1_3_2_30_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICFHR.2014.142"},{"key":"e_1_3_2_31_2","doi-asserted-by":"crossref","unstructured":"Leonard Rothacker Denis Fisseler Gerfrid M\u00fcller Frank Weichert and Gernot A. Fink. 2015. Retrieving Cuneiform Structures in a Segmentation-free Word Spotting Framework. In Proceedings of the 3rd International Workshop on Historical Document Imaging and Processing . 129\u2013136.","DOI":"10.1145\/2809544.2809562"},{"key":"e_1_3_2_32_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICDAR.2013.264"},{"key":"e_1_3_2_33_2","doi-asserted-by":"publisher","DOI":"10.1145\/3352631.3352632"},{"key":"e_1_3_2_34_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICFHR2020.2020.00019"},{"key":"e_1_3_2_35_2","doi-asserted-by":"publisher","DOI":"10.1186\/s40537-019-0197-0"},{"key":"e_1_3_2_36_2","article-title":"Very deep convolutional networks for large-scale image recognition","author":"Simonyan Karen","year":"2014","unstructured":"Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556. Retrieved from https:\/\/arxiv.org\/abs\/1409.1556.","journal-title":"arXiv:1409.1556"},{"key":"e_1_3_2_37_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2017.97"},{"key":"e_1_3_2_38_2","article-title":"Fixing the train-test resolution discrepancy: FixEfficientNet","author":"Touvron Hugo","year":"2020","unstructured":"Hugo Touvron, Andrea Vedaldi, Matthijs Douze, and Herv\u00e9 J\u00e9gou. 2020. Fixing the train-test resolution discrepancy: FixEfficientNet. arXiv:2003.08237. Retrieved from https:\/\/arxiv.org\/abs\/2003.08237.","journal-title":"arXiv:2003.08237"},{"key":"e_1_3_2_39_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICDAR.2017.110"},{"key":"e_1_3_2_40_2","first-page":"73","volume-title":"Proceedings of the 6th International Symposium on Virtual Reality, Archaeology and Cultural Heritage (VAST\u201905)","author":"Willems Geert","year":"2005","unstructured":"Geert Willems, Frank Verbiest, Wim Moreau, Hendrik Hameeuw, Karel van Lerberghe, and Luc van Gool. 2005. Easy and cost-effective cuneiform digitizing. 
In Proceedings of the 6th International Symposium on Virtual Reality, Archaeology and Cultural Heritage (VAST\u201905). 73\u201380."},{"key":"e_1_3_2_41_2","unstructured":"Saining Xie Ross B. Girshick Piotr Doll\u00e1r Zhuowen Tu and Kaiming He. 2016. Aggregated residual transformations for deep neural networks. arxiv:1611.05431. Retrieved from https:\/\/arxiv.org\/abs\/1611.05431."},{"key":"e_1_3_2_42_2","article-title":"Random erasing data augmentation","author":"Zhong Zhun","year":"2017","unstructured":"Zhun Zhong, Liang Zheng, Guoliang Kang, Shaozi Li, and Yi Yang. 2017. Random erasing data augmentation. arXiv:1708.04896. Retrieved from https:\/\/arxiv.org\/abs\/1708.04896.","journal-title":"arXiv:1708.04896"},{"key":"e_1_3_2_43_2","doi-asserted-by":"publisher","DOI":"10.4159\/harvard.9780674434929"}],"container-title":["Journal on Computing and Cultural Heritage"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3495263","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3495263","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T17:49:23Z","timestamp":1750182563000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3495263"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,9,16]]},"references-count":42,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2022,9,30]]}},"alternative-id":["10.1145\/3495263"],"URL":"https:\/\/doi.org\/10.1145\/3495263","relation":{},"ISSN":["1556-4673","1556-4711"],"issn-type":[{"type":"print","value":"1556-4673"},{"type":"electronic","value":"1556-4711"}],"subject":[],"published":{"date-parts":[[2022,9,16]]},"assertion":[{"value":"2021-06-18","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2021-11-02","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2022-09-16","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}
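
A record like the one above can be fetched live from the public Crossref REST API (the record's own "source":"Crossref" and DOI identify the endpoint and path). A minimal sketch in Python follows; the script name and mailto address are placeholders, and the comment on the error-reduction figure simply reworks the arithmetic already stated in the abstract:

```python
import requests

# Retrieve the Crossref work record for this article by its DOI.
# api.crossref.org/works/{doi} returns the same {"status": ..., "message": {...}}
# envelope shown above. The mailto in the User-Agent is a placeholder; Crossref
# asks polite clients to identify themselves this way.
resp = requests.get(
    "https://api.crossref.org/works/10.1145/3495263",
    headers={"User-Agent": "example-fetcher/0.1 (mailto:you@example.org)"},
    timeout=30,
)
resp.raise_for_status()
work = resp.json()["message"]

print(work["title"][0])            # Illumination-based Augmentation for ...
print(work["container-title"][0])  # Journal on Computing and Cultural Heritage
print(work["reference-count"])     # 42

# Sanity-check the abstract's figures: a 4-percentage-point gain to 90%
# accuracy means the error rate fell from 14% to 10%, a relative error
# reduction of 4/14, which rounds to the 28.5% reported in the abstract.
acc_before, acc_after = 0.86, 0.90
print(f"{((1 - acc_before) - (1 - acc_after)) / (1 - acc_before):.1%}")
```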