{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,7]],"date-time":"2026-01-07T07:47:29Z","timestamp":1767772049777,"version":"3.41.2"},"reference-count":52,"publisher":"Emerald","issue":"4","license":[{"start":{"date-parts":[[2021,2,16]],"date-time":"2021-02-16T00:00:00Z","timestamp":1613433600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/www.emerald.com\/insight\/site-policies"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["JD"],"published-print":{"date-parts":[[2021,6,24]]},"abstract":"<jats:sec><jats:title content-type=\"abstract-subheading\">Purpose<\/jats:title><jats:p>Based on the highlights of The Metropolitan Museum of Art's collection, the purpose of this paper is to examine the similarities and differences between the subject keywords tags assigned by the museum and those produced by three computer vision systems.<\/jats:p><\/jats:sec><jats:sec><jats:title content-type=\"abstract-subheading\">Design\/methodology\/approach<\/jats:title><jats:p>This paper uses computer vision tools to generate the data and the Getty Research Institute's Art and Architecture Thesaurus (AAT) to compare the subject keyword tags.<\/jats:p><\/jats:sec><jats:sec><jats:title content-type=\"abstract-subheading\">Findings<\/jats:title><jats:p>This paper finds that there are clear opportunities to use computer vision technologies to automatically generate tags that expand the terms used by the museum. This brings a new perspective to the collection that is different from the traditional art historical one. However, the study also surfaces challenges about the accuracy and lack of context within the computer vision results.<\/jats:p><\/jats:sec><jats:sec><jats:title content-type=\"abstract-subheading\">Practical implications<\/jats:title><jats:p>This finding has important implications on how these machine-generated tags complement the current taxonomies and vocabularies inputted in the collection database. In consequence, the museum needs to consider the selection process for choosing which computer vision system to apply to their collection. Furthermore, they also need to think critically about the kind of tags they wish to use, such as colors, materials or objects.<\/jats:p><\/jats:sec><jats:sec><jats:title content-type=\"abstract-subheading\">Originality\/value<\/jats:title><jats:p>The study results add to the rapidly evolving field of computer vision within the art information context and provide recommendations of aspects to consider before selecting and implementing these technologies.<\/jats:p><\/jats:sec>","DOI":"10.1108\/jd-04-2020-0060","type":"journal-article","created":{"date-parts":[[2021,2,16]],"date-time":"2021-02-16T14:20:26Z","timestamp":1613485226000},"page":"946-964","source":"Crossref","is-referenced-by-count":8,"title":["A critical comparison analysis between human and machine-generated tags for the Metropolitan Museum of Art's collection"],"prefix":"10.1108","volume":"77","author":[{"given":"Elena","family":"Villaespesa","sequence":"first","affiliation":[]},{"given":"Seth","family":"Crider","sequence":"additional","affiliation":[]}],"member":"140","published-online":{"date-parts":[[2021,2,16]]},"reference":[{"year":"2019","key":"key2022041311150948400_ref001","article-title":"Countering inconsistent labelling by Google's vision API for rotated images"},{"issue":"4","key":"key2022041311150948400_ref002","doi-asserted-by":"publisher","first-page":"1188","DOI":"10.1111\/lsi.12353","article-title":"Computer vision and machine learning for human rights video analysis: case studies, possibilities, concerns, and limitations","volume":"43","year":"2018","journal-title":"Law and Social Inquiry"},{"year":"2008","key":"key2022041311150948400_ref003","article-title":"Tag! You're it!"},{"year":"2014","key":"key2022041311150948400_ref004","article-title":"Clear choices in tagging"},{"year":"2017","key":"key2022041311150948400_ref005","article-title":"Computer vision so good"},{"first-page":"77","article-title":"Gender shades: intersectional accuracy disparities in commercial gender classification","year":"2018","key":"key2022041311150948400_ref006"},{"issue":"1","key":"key2022041311150948400_ref007","doi-asserted-by":"publisher","first-page":"107","DOI":"10.1111\/cura.12011","article-title":"Mutualizing museum knowledge: folksonomies and the changing shape of expertise","volume":"56","year":"2013","journal-title":"Curator: The Museum Journal"},{"year":"2007","key":"key2022041311150948400_ref008","article-title":"Tagging and Searching - serendipity and museum collection databases"},{"year":"2019","key":"key2022041311150948400_ref009","article-title":"Exploring art with open access and AI: what's next?"},{"issue":"5","key":"key2022041311150948400_ref010","doi-asserted-by":"publisher","first-page":"695","DOI":"10.1016\/S0306-4573(01)00059-0","article-title":"Users' relevance criteria in image retrieval in American history","volume":"38","year":"2002","journal-title":"Information Processing and Management"},{"year":"2017","key":"key2022041311150948400_ref011"},{"volume-title":"Europeana: What Users Search for and Why","year":"2017","key":"key2022041311150948400_ref012","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-67008-9_17"},{"issue":"3\u20134","key":"key2022041311150948400_ref013","doi-asserted-by":"publisher","first-page":"142","DOI":"10.1080\/19386389.2015.1103081","article-title":"The journey to linked open data: the getty vocabularies","volume":"15","year":"2015","journal-title":"Journal of Library Metadata"},{"journal-title":"CapTech Ventures","article-title":"Image recognition services: searching for value amid hype","year":"2017","key":"key2022041311150948400_ref014"},{"journal-title":"The AI Now Institute","article-title":"Excavating AI: the politics of training sets for machine learning","year":"2019","key":"key2022041311150948400_ref015"},{"journal-title":"Elsevier Science and Technology","article-title":"Computer and machine vision: theory, algorithms, practicalities","year":"2012","key":"key2022041311150948400_ref016"},{"issue":"1","key":"key2022041311150948400_ref017","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1002\/aris.2008.1440420108","article-title":"Visual image retrieval","volume":"42","year":"2008","journal-title":"Annual Review of Information Science and Technology"},{"year":"2020","key":"key2022041311150948400_ref501","article-title":"AAT: frequently asked questions"},{"year":"2020","key":"key2022041311150948400_ref502","article-title":"How to use the AAT online"},{"issue":"1","key":"key2022041311150948400_ref018","first-page":"67","article-title":"Development of the Getty vocabularies: AAT, TGN, ULAN, and CONA","volume":"29","year":"2010","journal-title":"Art Documentation: Journal of the Art Libraries Society of North America"},{"key":"key2022041311150948400_ref503","first-page":"425","article-title":"Art vocabulary: categorizing works of art","year":"2016","journal-title":"Handbuch Sprache in der Kunstkommunikation"},{"year":"2019","key":"key2022041311150948400_ref019","article-title":"Categories for the description of works of art (CDWA): introduction"},{"key":"key2022041311150948400_ref020","doi-asserted-by":"publisher","first-page":"101","DOI":"10.1109\/ICMLA.2017.0-172","article-title":"Google's cloud vision API is not robust to noise","year":"2017"},{"key":"key2022041311150948400_ref021","doi-asserted-by":"publisher","first-page":"21","DOI":"10.5170\/CERN-1996-008.21","article-title":"Computer vision: evolution and promise","volume-title":"CERN School of Computing","year":"1996"},{"year":"2000","key":"key2022041311150948400_ref022","article-title":"A conceptual framework for indexing visual information at multiple levels"},{"volume-title":"Image Retrieval: Theory and Research","year":"2003","key":"key2022041311150948400_ref023"},{"issue":"11","key":"key2022041311150948400_ref024","doi-asserted-by":"publisher","first-page":"938","DOI":"10.1002\/asi.1161","article-title":"A conceptual framework and empirical research for classifying visual descriptors","volume":"52","year":"2001","journal-title":"Journal of the American Society for Information Science and Technology"},{"issue":"1","key":"key2022041311150948400_ref025","doi-asserted-by":"publisher","first-page":"3","DOI":"10.1002\/asi.22950","article-title":"Subject matter categorization of tags applied to digital images from art museums","volume":"65","year":"2014","journal-title":"Journal of the Association for Information Science and Technology"},{"journal-title":"Goldsmiths, University of London","article-title":"AI: a museum planning toolkit","year":"2020","key":"key2022041311150948400_ref026"},{"year":"2018","key":"key2022041311150948400_ref027","article-title":"Assessing the usefulness of online image annotation services for destination image measurement"},{"volume-title":"Studies in Iconology: Humanistic Themes in the Art of the Renaissance","year":"1972","key":"key2022041311150948400_ref028"},{"year":"2018","key":"key2022041311150948400_ref029","article-title":"A new kind of image search"},{"key":"key2022041311150948400_ref030","doi-asserted-by":"crossref","unstructured":"Poole, A. (2019), \u201cSocial tagging and commenting in participatory archives: a critical literature review\u201d, in BenoitIII, E. and Eveleigh, A. (Eds), Participatory Archives: Theory and Practice, Facet Publishing, London, pp. 15-31, doi: 10.29085\/9781783303588.002.","DOI":"10.29085\/9781783303588.002"},{"volume-title":"Art-attack! On Style Transfers with Textures, Label Categories and Adversarial Examples","year":"2018","key":"key2022041311150948400_ref031"},{"issue":"3","key":"key2022041311150948400_ref032","doi-asserted-by":"publisher","first-page":"39","DOI":"10.1300\/J104v06n03_04","article-title":"Analyzing the subject of a picture: a theoretical approach","volume":"6","year":"1986","journal-title":"Cataloging and Classification Quarterly"},{"key":"key2022041311150948400_ref033","unstructured":"Shatford Layne, S. (2002), \u201cSubject access to art images\u201d, in Baca, M. (Ed.), Introduction to Art Image Access. Issues, Tools, Standards, Strategies, Getty Research Institute, available at: https:\/\/www.getty.edu\/publications\/resources\/virtuallibrary\/0892366664.pdf."},{"key":"key2022041311150948400_ref034","doi-asserted-by":"publisher","first-page":"110","DOI":"10.1145\/1414694.1414719","article-title":"Exploring information seeking behaviour in a digital museum context","year":"2008"},{"issue":"1","key":"key2022041311150948400_ref035","doi-asserted-by":"publisher","first-page":"11","DOI":"10.7152\/acro.v17i1.12492","article-title":"Viewer tagging in art museums: comparisons to concepts and vocabularies of art museum visitors","volume":"17","year":"2006","journal-title":"Advances in Classification Research Online"},{"year":"2020","key":"key2022041311150948400_ref036","article-title":"Computer vision and the science museum group collection"},{"key":"key2022041311150948400_ref037","unstructured":"Sundt, C.L. (2002), \u201cThe image user and the search for images\u201d, in Baca, M. (Ed.), Introduction to Art Image Access. Issues, Tools, Standards, Strategies, Getty Research Institute, available at: https:\/\/www.getty.edu\/publications\/resources\/virtuallibrary\/0892366664.pdf."},{"key":"key2022041311150948400_ref038","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1109\/SIEDS49339.2020.9106656","article-title":"Exploring themes and bias in art using machine learning image analysis","year":"2020"},{"year":"2018","key":"key2022041311150948400_ref039","article-title":"Detecting saliency by combining speech and object detection in indoor environments"},{"key":"key2022041311150948400_ref040","unstructured":"The Metropolitan Museum of Art (2019), \u201cThe tagging initiative\u201d, available at: https:\/\/www.metmuseum.org\/about-the-met\/policies-and-documents\/open-access\/tagging-initiative (accessed 2 April 2020)."},{"issue":"1","key":"key2022041311150948400_ref041","doi-asserted-by":"publisher","first-page":"83","DOI":"10.1080\/13614560600802940","article-title":"Exploring the potential for social tagging and folksonomy in art museums: proof of concept","volume":"12","year":"2006","journal-title":"New Review of Hypermedia and Multimedia"},{"issue":"1","key":"key2022041311150948400_ref042","doi-asserted-by":"publisher","first-page":"11","DOI":"10.7152\/acro.v17i1.12495","article-title":"Social classification and folksonomy in art museums: early data from the steve.museum tagger prototype","volume":"17","year":"2006","journal-title":"Advances in Classification Research Online"},{"year":"2006","key":"key2022041311150948400_ref043","article-title":"Investigating social tagging and folksonomy in art museums with steve.museum"},{"issue":"2","key":"key2022041311150948400_ref044","doi-asserted-by":"publisher","first-page":"233","DOI":"10.1080\/10645578.2019.1668679","article-title":"Museum collections and online users: development of a segmentation model for the Metropolitan Museum of Art","volume":"22","year":"2019","journal-title":"Visitor Studies"},{"year":"2015","key":"key2022041311150948400_ref045","article-title":"Finding the motivation behind a click: definition and implementation of a website audience segmentation | MW2015: museums and the Web 2015"},{"issue":"5","key":"key2022041311150948400_ref046","doi-asserted-by":"publisher","DOI":"10.5210\/fm.v17i5.3922","article-title":"Enhancing user involvement with digital cultural heritage: the usage of social tagging and storytelling","volume":"17","year":"2012","journal-title":"First Monday"},{"issue":"1","key":"key2022041311150948400_ref047","doi-asserted-by":"publisher","first-page":"75","DOI":"10.1007\/s00799-018-0248-8","article-title":"Characterising online museum users: a study of the National Museums Liverpool museum website","volume":"21","year":"2020","journal-title":"International Journal on Digital Libraries"},{"journal-title":"Cornell University","article-title":"The iMet collection 2019 challenge dataset","year":"2019","key":"key2022041311150948400_ref048"},{"issue":"20","key":"key2022041311150948400_ref049","doi-asserted-by":"publisher","DOI":"10.3390\/su11205673","article-title":"An improved style transfer algorithm using feedforward neural network for real-time image conversion","volume":"11","year":"2019","journal-title":"Sustainability"}],"container-title":["Journal of Documentation"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.emerald.com\/insight\/content\/doi\/10.1108\/JD-04-2020-0060\/full\/xml","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/www.emerald.com\/insight\/content\/doi\/10.1108\/JD-04-2020-0060\/full\/html","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,7,24]],"date-time":"2025-07-24T22:33:51Z","timestamp":1753396431000},"score":1,"resource":{"primary":{"URL":"http:\/\/www.emerald.com\/jd\/article\/77\/4\/946-964\/207164"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,2,16]]},"references-count":52,"journal-issue":{"issue":"4","published-online":{"date-parts":[[2021,2,16]]},"published-print":{"date-parts":[[2021,6,24]]}},"alternative-id":["10.1108\/JD-04-2020-0060"],"URL":"https:\/\/doi.org\/10.1108\/jd-04-2020-0060","relation":{},"ISSN":["0022-0418"],"issn-type":[{"type":"print","value":"0022-0418"}],"subject":[],"published":{"date-parts":[[2021,2,16]]}}}