{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,2]],"date-time":"2026-01-02T07:37:06Z","timestamp":1767339426894,"version":"3.44.0"},"reference-count":99,"publisher":"Springer Science and Business Media LLC","issue":"32","license":[{"start":{"date-parts":[[2024,12,26]],"date-time":"2024-12-26T00:00:00Z","timestamp":1735171200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2024,12,26]],"date-time":"2024-12-26T00:00:00Z","timestamp":1735171200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/100010661","name":"Horizon 2020 Framework Programme","doi-asserted-by":"publisher","award":["101004469"],"award-info":[{"award-number":["101004469"]}],"id":[{"id":"10.13039\/100010661","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Multimed Tools Appl"],"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>The automatic recognition of sensory gestures in artworks provides the opportunity to open up methods of computational humanities to modern paradigms like sensory studies or everyday history. We introduce SensoryArt, a dataset of multisensory gestures in historical artworks, annotated with person boxes, pose estimation key points and gesture labels. We analyze algorithms for each label type and explore their combination for gesture recognition without intermediate supervision. These combined algorithms are evaluated for their ability to recognize and localize depicted persons performing sensory gestures. Our experiments show that direct detection of smell gestures is the most effective method for both detecting and localizing gestures. 
After applying post-processing, this method outperforms even image-level classification algorithms in image-level classification metrics, despite not being the primary training objective. This work aims to open up the field of sensory history to the computational humanities and provide humanities-based scholars with a solid foundation to complement their methodological toolbox with quantitative methods.<\/jats:p>","DOI":"10.1007\/s11042-024-20502-6","type":"journal-article","created":{"date-parts":[[2024,12,25]],"date-time":"2024-12-25T22:29:19Z","timestamp":1735165759000},"page":"39055-39083","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":2,"title":["Recognizing sensory gestures in historical artworks"],"prefix":"10.1007","volume":"84","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-4366-5216","authenticated-orcid":false,"given":"Mathias","family":"Zinnen","sequence":"first","affiliation":[]},{"given":"Azhar","family":"Hussian","sequence":"additional","affiliation":[]},{"given":"Andreas","family":"Maier","sequence":"additional","affiliation":[]},{"given":"Vincent","family":"Christlein","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,12,26]]},"reference":[{"key":"20502_CR1","volume-title":"Sensing the past: seeing, hearing, smelling, tasting, and touching in history","author":"MM Smith","year":"2007","unstructured":"Smith MM (2007) Sensing the past: seeing, hearing, smelling, tasting, and touching in history. University of California Press, Berkeley"},{"issue":"373","key":"20502_CR2","doi-asserted-by":"publisher","first-page":"804","DOI":"10.1111\/1468-229X.13246","volume":"106","author":"W Tullett","year":"2021","unstructured":"Tullett W (2021) State of the field: sensory history. 
History 106(373):804\u2013820","journal-title":"History"},{"key":"20502_CR3","doi-asserted-by":"publisher","unstructured":"Murray Parker DHRS, Bond J (2023) Sensory perception in cultural studies-a review of sensorial and multisensorial heritage. The Sens Soc 1\u201331. https:\/\/doi.org\/10.1080\/17458927.2023.2284532","DOI":"10.1080\/17458927.2023.2284532"},{"key":"20502_CR4","doi-asserted-by":"publisher","first-page":"17","DOI":"10.1146\/annurev-anthro-102218-011324","volume":"48","author":"D Howes","year":"2019","unstructured":"Howes D (2019) Multisensory anthropology. Annu Rev Anthropol 48:17\u201328","journal-title":"Annu Rev Anthropol"},{"key":"20502_CR5","doi-asserted-by":"publisher","unstructured":"Tullett W, Leemans I, Hsu H, Weismann S, Bembibre C, Kiechle MA, Jethro D, Chen A, Huang X, Otero-Pailos J, Bradley M (2022) Smell, history, and heritage. Am Hist Rev 127(1):261\u2013309. https:\/\/doi.org\/10.1093\/ahr\/rhac147, https:\/\/academic.oup.com\/ahr\/article-pdf\/127\/1\/261\/43463916\/rhac147.pdf","DOI":"10.1093\/ahr\/rhac147"},{"key":"20502_CR6","doi-asserted-by":"publisher","DOI":"10.5040\/9781474252454","volume-title":"The museum of the senses: experiencing art and collections","author":"C Classen","year":"2017","unstructured":"Classen C (2017) The museum of the senses: experiencing art and collections. Bloomsbury Publishing, London"},{"key":"20502_CR7","doi-asserted-by":"crossref","unstructured":"Zinnen M (2021) How to see smells: Extracting olfactory references from artworks. In: Companion proceedings of the Web conference 2021. pp 725\u2013726","DOI":"10.1145\/3442442.3453710"},{"key":"20502_CR8","doi-asserted-by":"crossref","unstructured":"Menini S, Paccosi T, Tonelli S, Van\u00a0Erp M, Leemans I, Lisena P, Troncy R, Tullett W, H\u00fcrriyeto\u011flu A, Dijkstra G et al (2022) A multilingual benchmark to capture olfactory situations over time. In: Proceedings of the 3rd workshop on computational approaches to historical language change. 
pp 1\u201310","DOI":"10.18653\/v1\/2022.lchange-1.1"},{"key":"20502_CR9","doi-asserted-by":"crossref","unstructured":"Lisena P, Schwabe D, Erp M, Troncy R, Tullett W, Leemans I, Marx L, Ehrich SC (2022) Capturing the semantics of smell: the odeuropa data model for olfactory heritage information. In: European semantic web conference. Springer, pp 387\u2013405","DOI":"10.1007\/978-3-031-06981-9_23"},{"issue":"1","key":"20502_CR10","doi-asserted-by":"publisher","first-page":"261","DOI":"10.1093\/ahr\/rhac147","volume":"127","author":"W Tullett","year":"2022","unstructured":"Tullett W, Leemans I, Hsu H, Weismann S, Bembibre C, Kiechle MA, Jethro D, Chen A, Huang X, Otero-Pailos J et al (2022) Smell, history, and heritage. Am Hist Rev 127(1):261\u2013309","journal-title":"Am Hist Rev"},{"key":"20502_CR11","doi-asserted-by":"crossref","unstructured":"Zinnen M, Hussian A, Tran H, Madhu P, Maier A, Christlein V (2023) Sniffyart: the dataset of smelling persons. In: Proceedings of the 5th workshop on analysis, understanding and promotion of heritage contents. pp 49\u201358","DOI":"10.1145\/3607542.3617357"},{"key":"20502_CR12","doi-asserted-by":"publisher","first-page":"211","DOI":"10.1007\/s11263-015-0816-y","volume":"115","author":"O Russakovsky","year":"2015","unstructured":"Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, Huang Z, Karpathy A, Khosla A, Bernstein M et al (2015) Imagenet large scale visual recognition challenge. Int J Comput Vision 115:211\u2013252","journal-title":"Int J Comput Vision"},{"key":"20502_CR13","doi-asserted-by":"crossref","unstructured":"Lin T-Y, Maire M, Belongie S, Hays J, Perona P, Ramanan D, Doll\u00e1r P, Zitnick CL (2014) Microsoft coco: Common objects in context. In: Computer Vision\u2013ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13. 
Springer, pp 740\u2013755","DOI":"10.1007\/978-3-319-10602-1_48"},{"issue":"7","key":"20502_CR14","doi-asserted-by":"publisher","first-page":"1956","DOI":"10.1007\/s11263-020-01316-z","volume":"128","author":"A Kuznetsova","year":"2020","unstructured":"Kuznetsova A, Rom H, Alldrin N, Uijlings J, Krasin I, Pont-Tuset J, Kamali S, Popov S, Malloci M, Kolesnikov A et al (2020) The open images dataset v4: Unified image classification, object detection, and visual relationship detection at scale. Int J Comput Vision 128(7):1956\u20131981","journal-title":"Int J Comput Vision"},{"key":"20502_CR15","doi-asserted-by":"crossref","unstructured":"Shao S, Li Z, Zhang T, Peng C, Yu G, Zhang X, Li J, Sun J (2019) Objects365: A large-scale, high-quality dataset for object detection. In: Proceedings of the IEEE\/CVF International Conference on Computer Vision. pp 8430\u20138439","DOI":"10.1109\/ICCV.2019.00852"},{"key":"20502_CR16","unstructured":"Bell P, Ommer B (2018) Computer vision und kunstgeschichte\u2013dialog zweier bildwissenschaften"},{"key":"20502_CR17","doi-asserted-by":"crossref","unstructured":"Arnold T, Tilton L (2019) Distant viewing: analyzing large visual corpora. Digital Scholarship in the Humanities. 34(Supplement_1):3\u201316","DOI":"10.1093\/llc\/fqz013"},{"key":"20502_CR18","doi-asserted-by":"publisher","first-page":"91","DOI":"10.1007\/s41095-015-0017-1","volume":"1","author":"P Hall","year":"2015","unstructured":"Hall P, Cai H, Wu Q, Corradi T (2015) Cross-depiction problem: Recognition and synthesis of photographs and artwork. Comput Visual Media 1:91\u2013103","journal-title":"Comput Visual Media"},{"key":"20502_CR19","doi-asserted-by":"crossref","unstructured":"Cai H, Wu Q, Hall P (2015) Beyond photo-domain object recognition: Benchmarks for the cross-depiction problem. In: Proceedings of the IEEE international conference on computer vision workshops. 
pp 1\u20136","DOI":"10.1109\/ICCVW.2015.19"},{"key":"20502_CR20","doi-asserted-by":"crossref","unstructured":"Farahani A, Voghoei S, Rasheed K, Arabnia HR (2021) A brief review of domain adaptation. In: Advances in data science and information engineering: proceedings from ICDATA 2020 and IKE 2020. pp 877\u2013894","DOI":"10.1007\/978-3-030-71704-9_65"},{"key":"20502_CR21","doi-asserted-by":"crossref","unstructured":"Zhu J-Y, Park T, Isola P, Efros AA (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. In: Proceedings of the IEEE international conference on computer vision. pp 2223\u20132232","DOI":"10.1109\/ICCV.2017.244"},{"key":"20502_CR22","doi-asserted-by":"crossref","unstructured":"Huang X, Belongie S (2017) Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE international conference on computer vision. pp 1501\u20131510","DOI":"10.1109\/ICCV.2017.167"},{"key":"20502_CR23","first-page":"26561","volume":"34","author":"H Chen","year":"2021","unstructured":"Chen H, Wang Z, Zhang H, Zuo Z, Li A, Xing W, Lu D et al (2021) Artistic style transfer with internal-external learning and contrastive learning. Adv Neural Inf Process Syst 34:26561\u201326573","journal-title":"Adv Neural Inf Process Syst"},{"key":"20502_CR24","doi-asserted-by":"crossref","unstructured":"Rombach R, Blattmann A, Lorenz D, Esser P, Ommer B (2022) High-resolution image synthesis with latent diffusion models. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition. pp 10684\u201310695","DOI":"10.1109\/CVPR52688.2022.01042"},{"key":"20502_CR25","unstructured":"Patoliya V, Zinnen M, Maier A, Christlein V (2024) Smell and emotion: Recognising emotions in smell-related artworks. arXiv:2407.04592"},{"key":"20502_CR26","doi-asserted-by":"crossref","unstructured":"Huang H, Zinnen M, Liu S, Maier A, Christlein V (2024) Scene classification on fine arts with style transfer. 
In: Proceedings of the 6th Workshop on the analySis, understanding and promotion of heritage contents. pp 18\u201327","DOI":"10.1145\/3689094.3689468"},{"key":"20502_CR27","doi-asserted-by":"crossref","unstructured":"Madhu P, Kosti R, M\u00fchrenberg L, Bell P, Maier A, Christlein V (2019) Recognizing characters in art history using deep learning. In: Proceedings of the 1st workshop on structuring and understanding of multimedia heritage contents. pp 15\u201322","DOI":"10.1145\/3347317.3357242"},{"key":"20502_CR28","doi-asserted-by":"crossref","unstructured":"Kadish D, Risi S, L\u00f8vlie AS (2021) Improving object detection in art images using only style transfer. In: 2021 International Joint Conference on Neural Networks (IJCNN). IEEE, pp 1\u20138","DOI":"10.1109\/IJCNN52387.2021.9534264"},{"key":"20502_CR29","doi-asserted-by":"publisher","first-page":"163","DOI":"10.1016\/j.neucom.2022.01.068","volume":"490","author":"Y Lu","year":"2022","unstructured":"Lu Y, Guo C, Dai X, Wang F-Y (2022) Data-efficient image captioning of fine art paintings via virtual-real semantic alignment training. Neurocomput 490:163\u2013180","journal-title":"Neurocomput"},{"issue":"1","key":"20502_CR30","first-page":"1","volume":"16","author":"P Madhu","year":"2022","unstructured":"Madhu P, Villar-Corrales A, Kosti R, Bendschus T, Reinhardt C, Bell P, Maier A, Christlein V (2022) Enhancing human pose estimation in ancient vase paintings via perceptually-grounded style transfer learning. ACM J Comput Cultural Heritage 16(1):1\u201317","journal-title":"ACM J Comput Cultural Heritage"},{"key":"20502_CR31","doi-asserted-by":"publisher","first-page":"631","DOI":"10.1007\/978-3-030-11012-3_48","volume-title":"Computer Vision - ECCV 2018 Workshops","author":"M Sabatelli","year":"2019","unstructured":"Sabatelli M, Kestemont M, Daelemans W, Geurts P (2019) Deep transfer learning for art classification problems. In: Leal-Taix\u00e9 L, Roth S (eds) Computer Vision - ECCV 2018 Workshops. 
Springer, Cham, pp 631\u2013646"},{"key":"20502_CR32","doi-asserted-by":"crossref","unstructured":"Gonthier N, Gousseau Y, Ladjal S (2021) An analysis of the transfer learning of convolutional neural networks for artistic images. In: Pattern Recognition. ICPR International Workshops and Challenges: Virtual Event, January 10\u201315, 2021, Proceedings, Part III. Springer, pp 546\u2013561","DOI":"10.1007\/978-3-030-68796-0_39"},{"key":"20502_CR33","unstructured":"Zinnen M, Madhu P, Bell P, Maier A, Christlein V (2022) Transfer learning for olfactory object detection. In: Digital humanities conference, 2022. Alliance of Digital Humanities Organizations, pp 409\u2013413. arXiv:2301.09906"},{"key":"20502_CR34","doi-asserted-by":"crossref","unstructured":"Zhao W, Jiang W, Qiu X (2022) Big transfer learning for fine art classification. Comput Intell Neurosci 2022","DOI":"10.1155\/2022\/1764606"},{"key":"20502_CR35","doi-asserted-by":"crossref","unstructured":"Liu S, Huang H, Zinnen M, Maier A, Christlein V (2024) Novel artistic scene-centric datasets for effective transfer learning in fragrant spaces. arXiv:2407.11701","DOI":"10.1007\/978-3-031-91572-7_10"},{"key":"20502_CR36","doi-asserted-by":"publisher","first-page":"336","DOI":"10.1007\/s11263-019-01228-7","volume":"128","author":"RR Selvaraju","year":"2020","unstructured":"Selvaraju RR, Cogswell M, Das A, Vedantam R, Parikh D, Batra D (2020) Grad-cam: visual explanations from deep networks via gradient-based localization. Int J Comput Vision 128:336\u2013359","journal-title":"Int J Comput Vision"},{"key":"20502_CR37","doi-asserted-by":"crossref","unstructured":"Nguyen A, Yosinski J, Clune J (2019) Understanding neural networks via feature visualization: A survey. 
Explainable AI: interpreting, explaining and visualizing deep learning pp 55\u201376","DOI":"10.1007\/978-3-030-28954-6_4"},{"issue":"8","key":"20502_CR38","doi-asserted-by":"publisher","first-page":"3846","DOI":"10.3390\/app12083846","volume":"12","author":"J An","year":"2022","unstructured":"An J, Joe I (2022) Attention map-guided visual explanations for deep neural networks. Appl Sci 12(8):3846","journal-title":"Appl Sci"},{"key":"20502_CR39","doi-asserted-by":"publisher","first-page":"4","DOI":"10.61356\/SMIJ.2024.8290","volume":"8","author":"W Abdullah","year":"2024","unstructured":"Abdullah W, Tolba A, Elmasry A, Mostafa NN (2024) Visioncam: A comprehensive xai toolkit for interpreting image-based deep learning models. Sustain Mach Intell J 8:4\u201346","journal-title":"Sustain Mach Intell J"},{"key":"20502_CR40","doi-asserted-by":"crossref","unstructured":"Garcia N, Vogiatzis G (2018) How to read paintings: semantic art understanding with multi-modal retrieval. In: Proceedings of the European Conference on Computer Vision (ECCV) Workshops. pp 0\u20130","DOI":"10.1007\/978-3-030-11012-3_52"},{"key":"20502_CR41","unstructured":"Gupta J, Madhu P, Kosti R, Bell P, Maier A, Christlein V Towards image caption generation for art historical data"},{"key":"20502_CR42","unstructured":"Radford A, Kim JW, Hallacy C, Ramesh A, Goh G, Agarwal S, Sastry G, Askell A, Mishkin P, Clark J et al. (2021) Learning transferable visual models from natural language supervision. In: International conference on machine learning. PMLR, pp 8748\u20138763"},{"key":"20502_CR43","unstructured":"Ali H, Paccosi T, Menini S, Mathias Z, Pasquale L, Kiymet A, Rapha\u00ebl T, Erp M (2022) Musti-multimodal understanding of smells in texts and images at mediaeval 2022. 
In: Proceedings of MediaEval 2022 CEUR workshop"},{"key":"20502_CR44","unstructured":"Kiymet A, Ali H, Rapha\u00ebl T, Paccosi T, Menini S, Mathias Z, Vincent C (2022) Multimodal and multilingual understanding of smells using vilbert and muniter. In: Proceedings of MediaEval 2022 CEUR Workshop"},{"key":"20502_CR45","doi-asserted-by":"publisher","first-page":"128837","DOI":"10.1109\/ACCESS.2019.2939201","volume":"7","author":"L Jiao","year":"2019","unstructured":"Jiao L, Zhang F, Liu F, Yang S, Li L, Feng Z, Qu R (2019) A survey of deep learning-based object detection. IEEE Access 7:128837\u2013128868","journal-title":"IEEE Access"},{"key":"20502_CR46","doi-asserted-by":"crossref","unstructured":"Girshick R, Donahue J, Darrell T, Malik J (2014) Rich feature hierarchies for accurate object detection and semantic segmentation. In: Proceedings of the IEEE Conference on computer vision and pattern recognition. pp 580\u2013587","DOI":"10.1109\/CVPR.2014.81"},{"key":"20502_CR47","doi-asserted-by":"crossref","unstructured":"Girshick R (2015) Fast r-cnn. In: Proceedings of the IEEE international conference on computer vision. pp 1440\u20131448","DOI":"10.1109\/ICCV.2015.169"},{"key":"20502_CR48","unstructured":"Ren S, He K, Girshick R, Sun J (2015) Faster r-cnn: Towards real-time object detection with region proposal networks. Adv Neural Info Process Syst 28"},{"key":"20502_CR49","doi-asserted-by":"crossref","unstructured":"Lin T-Y, Doll\u00e1r P, Girshick R, He K, Hariharan B, Belongie S (2017) Feature pyramid networks for object detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp 2117\u20132125","DOI":"10.1109\/CVPR.2017.106"},{"key":"20502_CR50","doi-asserted-by":"crossref","unstructured":"He K, Gkioxari G, Doll\u00e1r P, Girshick R (2017) Mask r-cnn. In: Proceedings of the IEEE international conference on computer vision. 
pp 2961\u20132969","DOI":"10.1109\/ICCV.2017.322"},{"key":"20502_CR51","doi-asserted-by":"crossref","unstructured":"Cai Z, Vasconcelos N (2018) Cascade r-cnn: delving into high quality object detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp 6154\u20136162","DOI":"10.1109\/CVPR.2018.00644"},{"key":"20502_CR52","doi-asserted-by":"crossref","unstructured":"Redmon J, Divvala S, Girshick R, Farhadi A (2016) You only look once: unified, real-time object detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp 779\u2013788","DOI":"10.1109\/CVPR.2016.91"},{"key":"20502_CR53","unstructured":"Redmon J, Farhadi A (2018) Yolov3: an incremental improvement. arXiv:1804.02767"},{"key":"20502_CR54","doi-asserted-by":"crossref","unstructured":"Redmon J, Farhadi A (2017) Yolo9000: better, faster, stronger. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp 7263\u20137271","DOI":"10.1109\/CVPR.2017.690"},{"key":"20502_CR55","unstructured":"Jocher G, Stoken A, Borovec J, Changyu L, Hogan A, Diaconu L, Poznanski J, Yu L, Rai P, Ferriday R et al (2020) ultralytics\/yolov5: v3. 0. Zenodo"},{"key":"20502_CR56","unstructured":"Long X, Deng K, Wang G, Zhang Y, Dang Q, Gao Y, Shen H, Ren J, Han S, Ding E et al (2020) Pp-yolo: an effective and efficient implementation of object detector. arXiv:2007.12099"},{"key":"20502_CR57","unstructured":"Jocher G, Chaurasia A, Qiu J (2023) YOLO by Ultralytics. https:\/\/github.com\/ultralytics\/ultralytics"},{"key":"20502_CR58","doi-asserted-by":"crossref","unstructured":"Wang C-Y, Yeh I-H, Liao H-YM (2024) Yolov9: learning what you want to learn using programmable gradient information. arXiv:2402.13616","DOI":"10.1007\/978-3-031-72751-1_1"},{"key":"20502_CR59","doi-asserted-by":"crossref","unstructured":"Carion N, Massa F, Synnaeve G, Usunier N, Kirillov A, Zagoruyko S (2020) End-to-end object detection with transformers. 
In: European conference on computer vision. Springer, pp 213\u2013229","DOI":"10.1007\/978-3-030-58452-8_13"},{"key":"20502_CR60","unstructured":"Zhu X, Su W, Lu L, Li B, Wang X, Dai J (2020) Deformable detr: deformable transformers for end-to-end object detection. In: International conference on learning representations"},{"key":"20502_CR61","unstructured":"Liu S, Li F, Zhang H, Yang X, Qi X, Su H, Zhu J, Zhang L (2021) Dab-detr: Dynamic anchor boxes are better queries for detr. In: International conference on learning representations"},{"key":"20502_CR62","doi-asserted-by":"crossref","unstructured":"Li F, Zhang H, Liu S, Guo J, Ni LM, Zhang L (2022) Dn-detr: accelerate detr training by introducing query denoising. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition. pp 13619\u201313627","DOI":"10.1109\/CVPR52688.2022.01325"},{"key":"20502_CR64","doi-asserted-by":"crossref","unstructured":"Crowley E, Zisserman A (2014) The state of the art: Object retrieval in paintings using discriminative regions. In: Proceedings of the British machine vision conference. BMVA Press","DOI":"10.5244\/C.28.38"},{"key":"20502_CR65","doi-asserted-by":"crossref","unstructured":"Crowley EJ, Zisserman A (2015) In search of art. In: Computer Vision-ECCV 2014 Workshops: Zurich, Switzerland, September 6-7 and 12, 2014, Proceedings, Part I 13, pp. 54\u201370. Springer","DOI":"10.1007\/978-3-319-16178-5_4"},{"key":"20502_CR66","doi-asserted-by":"crossref","unstructured":"Crowley EJ, Zisserman A (2016) The art of detection. 
In: Computer Vision\u2013ECCV 2016 Workshops: Amsterdam, The Netherlands, October 8-10 and 15-16, 2016, Proceedings, Part I 14. Springer, pp 721\u2013737","DOI":"10.1007\/978-3-319-46604-0_50"},{"key":"20502_CR67","doi-asserted-by":"publisher","first-page":"692","DOI":"10.1007\/978-3-030-11012-3_53","volume-title":"Computer Vision - ECCV 2018 Workshops","author":"N Gonthier","year":"2019","unstructured":"Gonthier N, Gousseau Y, Ladjal S, Bonfait O (2019) Weakly supervised object detection in artworks. In: Leal-Taix\u00e9 L, Roth S (eds) Computer Vision - ECCV 2018 Workshops. Springer, Cham, pp 692\u2013709"},{"key":"20502_CR68","doi-asserted-by":"crossref","unstructured":"Madhu P, Meyer A, Zinnen M, M\u00fchrenberg L, Suckow D, Bendschus T, Reinhardt C, Bell P, Verstegen U, Kosti R et al. (2022) One-shot object detection in heterogeneous artwork datasets. In: 2022 Eleventh International Conference on Image Processing Theory, Tools and Applications (IPTA). IEEE, pp 1\u20136","DOI":"10.1109\/IPTA54936.2022.9784141"},{"key":"20502_CR69","doi-asserted-by":"crossref","unstructured":"Westlake N, Cai H, Hall P (2016) Detecting people in artwork with cnns. In: Computer Vision\u2013ECCV 2016 Workshops: Amsterdam, The Netherlands, October 8-10 and 15-16, 2016, Proceedings, Part I 14. Springer, pp 825\u2013841","DOI":"10.1007\/978-3-319-46604-0_57"},{"key":"20502_CR70","doi-asserted-by":"crossref","unstructured":"Zinnen M, Madhu P, Kosti R, Bell P, Maier A, Christlein V (2022) Odor: The icpr2022 odeuropa challenge on olfactory object recognition. In: 2022 26th International Conference on Pattern Recognition (ICPR). 
IEEE, pp 4989\u20134994","DOI":"10.1109\/ICPR56361.2022.9956542"},{"key":"20502_CR71","doi-asserted-by":"publisher","first-page":"124576","DOI":"10.1016\/j.eswa.2024.124576","volume":"255","author":"M Zinnen","year":"2024","unstructured":"Zinnen M, Madhu P, Leemans I, Bell P, Hussian A, Tran H, H\u00fcrriyeto\u011flu A, Maier A, Christlein V (2024) Smelly, dense, and spreaded: The Object Detection for Olfactory References (ODOR) dataset. Expert Syst Appl 255:124576","journal-title":"Expert Syst Appl"},{"key":"20502_CR72","doi-asserted-by":"crossref","unstructured":"Kim S, Park J, Bang J, Lee H (2018) Seeing is smelling: localizing odor-related objects in images. In: Proceedings of the 9th augmented human international conference. pp 1\u20139","DOI":"10.1145\/3174910.3174922"},{"key":"20502_CR73","doi-asserted-by":"crossref","unstructured":"Reshetnikov A, Marinescu M-C, Lopez JM (2022) Deart: dataset of european art. In: European conference on computer vision. Springer, pp 218\u2013233","DOI":"10.1007\/978-3-031-25056-9_15"},{"issue":"1","key":"20502_CR74","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3603618","volume":"56","author":"C Zheng","year":"2023","unstructured":"Zheng C, Wu W, Chen C, Yang T, Zhu S, Shen J, Kehtarnavaz N, Shah M (2023) Deep learning-based human pose estimation: A survey. ACM Comput Surv 56(1):1\u201337","journal-title":"ACM Comput Surv"},{"key":"20502_CR75","doi-asserted-by":"crossref","unstructured":"Cao Z, Simon T, Wei S-E, Sheikh Y (2017) Realtime multi-person 2d pose estimation using part affinity fields. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp 7291\u20137299","DOI":"10.1109\/CVPR.2017.143"},{"key":"20502_CR76","doi-asserted-by":"crossref","unstructured":"Cheng B, Xiao B, Wang J, Shi H, Huang TS, Zhang L (2020) Higherhrnet: scale-aware representation learning for bottom-up human pose estimation. 
In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition. pp 5386\u20135395","DOI":"10.1109\/CVPR42600.2020.00543"},{"key":"20502_CR77","doi-asserted-by":"crossref","unstructured":"Geng Z, Sun K, Xiao B, Zhang Z, Wang J (2021) Bottom-up human pose estimation via disentangled keypoint regression. In: CVPR","DOI":"10.1109\/CVPR46437.2021.01444"},{"key":"20502_CR78","doi-asserted-by":"crossref","unstructured":"Kreiss S, Bertoni L, Alahi A (2019) Pifpaf: composite fields for human pose estimation. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition. pp 11977\u201311986","DOI":"10.1109\/CVPR.2019.01225"},{"key":"20502_CR79","doi-asserted-by":"crossref","unstructured":"Xiao B, Wu H, Wei Y (2018) Simple baselines for human pose estimation and tracking. In: ECCV. pp 466\u2013481","DOI":"10.1007\/978-3-030-01231-1_29"},{"key":"20502_CR80","doi-asserted-by":"crossref","unstructured":"Sun K, Xiao B, Liu D, Wang J (2019) Deep high-resolution representation learning for human pose estimation. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition. pp 5693\u20135703","DOI":"10.1109\/CVPR.2019.00584"},{"key":"20502_CR81","doi-asserted-by":"crossref","unstructured":"Cai Y, Wang Z, Luo Z, Yin B, Du A, Wang H, Zhang X, Zhou X, Zhou E, Sun J (2020) Learning delicate local representations for multi-person pose estimation. In: Computer Vision\u2013ECCV 2020: 16th European Conference, Glasgow, UK, August 23\u201328, 2020, Proceedings, Part III 16. Springer, pp 455\u2013472","DOI":"10.1007\/978-3-030-58580-8_27"},{"key":"20502_CR82","first-page":"38571","volume":"35","author":"Y Xu","year":"2022","unstructured":"Xu Y, Zhang J, Zhang Q, Tao D (2022) Vitpose: Simple vision transformer baselines for human pose estimation. 
Adv Neural Inf Process Syst 35:38571\u201338584","journal-title":"Adv Neural Inf Process Syst"},{"key":"20502_CR83","unstructured":"Yang J, Zeng A, Liu S, Li F, Zhang R, Zhang L (2023) Explicit box detection unifies end-to-end multi-person pose estimation. arXiv:2302.01593"},{"key":"20502_CR84","doi-asserted-by":"crossref","unstructured":"Impett L, Moretti F (2017) Totentanz. operationalizing aby warburg\u2019s pathosformeln","DOI":"10.64590\/nx9"},{"key":"20502_CR85","doi-asserted-by":"crossref","unstructured":"Impett L, S\u00fcsstrunk S (2016) Pose and pathosformel in aby warburg\u2019s bilderatlas. In: Computer Vision\u2013ECCV 2016 Workshops: Amsterdam, The Netherlands, October 8-10 and 15-16, 2016, Proceedings, Part I 14. Springer, pp 888\u2013902","DOI":"10.1007\/978-3-319-46604-0_61"},{"key":"20502_CR86","unstructured":"Warburg A et al (1925) Mnemosyne atlas. Die Beredsamkeit des Leibes. Zur K\u00f6rpersprache in der Kunst 156\u2013173"},{"key":"20502_CR87","doi-asserted-by":"crossref","unstructured":"Bell P, Impett L (2019) Ikonographie und interaktion. computergest\u00fctzte analyse von posen in bildern der heilsgeschichte. Das Mittelalter. 24(1):31\u201353","DOI":"10.1515\/mial-2019-0004"},{"key":"20502_CR88","doi-asserted-by":"crossref","unstructured":"Springstein M, Schneider S, Althaus C, Ewerth R (2022) Semi-supervised human pose estimation in art-historical images. arXiv:2207.02976","DOI":"10.1145\/3503161.3548371"},{"key":"20502_CR89","doi-asserted-by":"crossref","unstructured":"Li K, Wang S, Zhang X, Xu Y, Xu W, Tu Z (2021) Pose recognition with cascade transformers. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition. pp 1944\u20131953","DOI":"10.1109\/CVPR46437.2021.00198"},{"key":"20502_CR90","doi-asserted-by":"crossref","unstructured":"Bernasconi V (2022) Gab-gestures for artworks browsing. In: 27th International conference on intelligent user interfaces. 
pp 50\u201353","DOI":"10.1145\/3490100.3516470"},{"issue":"6","key":"20502_CR91","doi-asserted-by":"publisher","first-page":"120","DOI":"10.3390\/jimaging9060120","volume":"9","author":"V Bernasconi","year":"2023","unstructured":"Bernasconi V, Cetini\u0107 E, Impett L (2023) A computational approach to hand pose recognition in early modern paintings. J Imaging 9(6):120","journal-title":"J Imaging"},{"key":"20502_CR92","doi-asserted-by":"crossref","unstructured":"Dimova T (2023) Chiroscript: transcription system for studying hand gestures in early modern painting. In: Arts, vol. 12. MDPI, p 179","DOI":"10.3390\/arts12040179"},{"key":"20502_CR93","doi-asserted-by":"crossref","unstructured":"Li, J., Wang, C., Zhu, H., Mao, Y., Fang, H.-S., Lu, C.: Crowdpose: Efficient crowded scenes pose estimation and a new benchmark. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 10863\u201310872 (2019)","DOI":"10.1109\/CVPR.2019.01112"},{"key":"20502_CR94","doi-asserted-by":"crossref","unstructured":"Luvizon DC, Picard D, Tabia H (2018) 2d\/3d pose estimation and action recognition using multitask deep learning. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp 5137\u20135146","DOI":"10.1109\/CVPR.2018.00539"},{"key":"20502_CR95","doi-asserted-by":"crossref","unstructured":"Schneider S, Vollmer R (2023) Poses of people in art: a data set for human pose estimation in digital art history. arXiv:2301.05124","DOI":"10.1145\/3696455"},{"key":"20502_CR96","doi-asserted-by":"crossref","unstructured":"Ju X, Zeng A, Wang J, Xu Q, Zhang L (2023) Human-art: A versatile human-centric dataset bridging natural and artificial scenes. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition. 
pp 618\u2013629","DOI":"10.1109\/CVPR52729.2023.00067"},{"key":"20502_CR97","doi-asserted-by":"crossref","unstructured":"He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp 770\u2013778","DOI":"10.1109\/CVPR.2016.90"},{"key":"20502_CR98","doi-asserted-by":"crossref","unstructured":"Liu Z, Lin Y, Cao Y, Hu H, Wei Y, Zhang Z, Lin S, Guo B (2021) Swin transformer: Hierarchical vision transformer using shifted windows. In: Proceedings of the IEEE\/CVF international conference on computer vision. pp 10012\u201310022","DOI":"10.1109\/ICCV48922.2021.00986"},{"key":"20502_CR99","doi-asserted-by":"crossref","unstructured":"Zhu K, Wu J (2021) Residual attention: a simple but effective method for multi-label recognition. In: Proceedings of the IEEE\/CVF international conference on computer vision. pp 184\u2013193","DOI":"10.1109\/ICCV48922.2021.00025"}],"container-title":["Multimedia Tools and 
Applications"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11042-024-20502-6.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s11042-024-20502-6\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11042-024-20502-6.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,9,26]],"date-time":"2025-09-26T15:13:56Z","timestamp":1758899636000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s11042-024-20502-6"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,12,26]]},"references-count":99,"journal-issue":{"issue":"32","published-online":{"date-parts":[[2025,9]]}},"alternative-id":["20502"],"URL":"https:\/\/doi.org\/10.1007\/s11042-024-20502-6","relation":{},"ISSN":["1573-7721"],"issn-type":[{"type":"electronic","value":"1573-7721"}],"subject":[],"published":{"date-parts":[[2024,12,26]]},"assertion":[{"value":"15 April 2024","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"15 November 2024","order":2,"name":"revised","label":"Revised","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"4 December 2024","order":3,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"26 December 2024","order":4,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}}]}}