{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,21]],"date-time":"2025-10-21T00:37:36Z","timestamp":1761007056209,"version":"build-2065373602"},"reference-count":18,"publisher":"Wiley","issue":"1","license":[{"start":{"date-parts":[[2013,1,24]],"date-time":"2013-01-24T00:00:00Z","timestamp":1358985600000},"content-version":"vor","delay-in-days":389,"URL":"http:\/\/onlinelibrary.wiley.com\/termsAndConditions#vor"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Proc of Assoc for Info"],"published-print":{"date-parts":[[2012,1]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Gathering annotations from non\u2010expert online raters is an attractive method for quickly completing large\u2010scale annotation tasks, but the increased possibility of unreliable annotators and diminished work quality remains a cause for concern. In the context of information retrieval, where human\u2010encoded relevance judgments underlie the evaluation of new systems and methods, the ability to quickly and reliably collect trustworthy annotations allows for quicker development and iteration of research.<\/jats:p><jats:p>In the context of paid online workers, this study evaluates indicators of non\u2010expert performance along three lines: temporality, experience, and agreement. It is found that user performance is a key indicator for future performance. 
Additionally, the time spent by raters familiarizing themselves with a new set of tasks is important for rater quality, as is long\u2010term familiarity with a topic being rated.<\/jats:p><jats:p>These findings may inform large\u2010scale digital collections' use of non\u2010expert raters for performing more purposive and affordable online annotation activities.<\/jats:p>","DOI":"10.1002\/meet.14504901166","type":"journal-article","created":{"date-parts":[[2013,1,24]],"date-time":"2013-01-24T10:49:23Z","timestamp":1359024563000},"page":"1-10","source":"Crossref","is-referenced-by-count":0,"title":["Evaluating rater quality and rating difficulty in online annotation activities"],"prefix":"10.1002","volume":"49","author":[{"given":"Peter","family":"Organisciak","sequence":"first","affiliation":[]},{"given":"Miles","family":"Efron","sequence":"additional","affiliation":[]},{"given":"Katrina","family":"Fenlon","sequence":"additional","affiliation":[]},{"given":"Megan","family":"Senseney","sequence":"additional","affiliation":[]}],"member":"311","published-online":{"date-parts":[[2013,1,24]]},"reference":[{"key":"e_1_2_12_2_1","unstructured":"Dekel O. &Shamir O.(2009).Vox Populi: Collecting High\u2010Quality Labels from a Crowd. COLT 2009."},{"key":"e_1_2_12_3_1","doi-asserted-by":"crossref","unstructured":"Donmez P. Carbonell J. &Schneider J.(2010).A probabilistic framework to learn from multiple annotators with time\u2010varying accuracy.SIAM International Conference on Data Mining (SDM)(pp.826\u2013837).","DOI":"10.1137\/1.9781611972801.72"},{"key":"e_1_2_12_4_1","doi-asserted-by":"crossref","unstructured":"Efron M. Organisciak P. &Fenlon K.(2011).Building Topic Models in a Federated Digital Library Through Selective Document Exclusion. Presented at the ASIS&T Annual Meeting New Orleans USA.","DOI":"10.1002\/meet.2011.14504801048"},{"key":"e_1_2_12_5_1","doi-asserted-by":"crossref","unstructured":"Efron M. Organisciak P. 
&Fenlon K.(2012).Improving Retrieval of Short Texts Through Document Expansion. Presented at the ACM SIGIR 2012 Portland USA.","DOI":"10.1145\/2348283.2348405"},{"key":"e_1_2_12_6_1","doi-asserted-by":"crossref","unstructured":"Eickhoff C. &Vries A. P.(2012).Increasing cheat robustness of crowdsourcing tasks. Information Retrieval.","DOI":"10.1007\/s10791-011-9181-9"},{"key":"e_1_2_12_7_1","doi-asserted-by":"crossref","unstructured":"Golovchinsky G. &Pickens J.(2010).Interactive information seeking via selective application of contextual knowledge.Proceedings of the third symposium on Information interaction in context IIiX '10 (pp.145\u2013154). New York NY USA: ACM. doi:10.1145\/1840784.1840806","DOI":"10.1145\/1840784.1840806"},{"key":"e_1_2_12_8_1","doi-asserted-by":"crossref","unstructured":"Hsueh P.\u2010Y. Melville P. &Sindhwani V.(2009).Data quality from crowdsourcing: a study of annotation selection criteria.Proceedings of the NAACL HLT 2009 Workshop on Active Learning for Natural Language Processing HLT '09 (pp.27\u201335). Stroudsburg PA USA: Association for Computational Linguistics.","DOI":"10.3115\/1564131.1564137"},{"key":"e_1_2_12_9_1","unstructured":"Wang J. Ipeirotis P. G. &Provost F.(2011).Managing Crowdsourcing Workers. Presented at the Winter Conference on Business Intelligence Utah."},{"key":"e_1_2_12_10_1","unstructured":"Lease M. &Kazai G.(2011).Overview of the TREC 2011 Crowdsourcing Track (Conference Notebook). Text Retrieval Conference Notebook."},{"key":"e_1_2_12_11_1","first-page":"25","volume-title":"Personalizing web search using long term browsing history","author":"Matthijs N.","year":"2011"},{"key":"e_1_2_12_12_1","unstructured":"Novotney S. &Callison\u2010Burch C.(2010).Cheap fast and good enough: Automatic speech recognition with non\u2010expert transcription.Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics(pp.207\u2013215). 
Presented at the HLT '10 Stroudsburg USA."},{"key":"e_1_2_12_13_1","doi-asserted-by":"crossref","unstructured":"Organisciak P.(2012).An Iterative Reliability Measure for Semi\u2010anonymous Annotators. Presented at the Joint Conference on Digital Libraries Washington DC USA.","DOI":"10.1145\/2232817.2232885"},{"key":"e_1_2_12_14_1","doi-asserted-by":"crossref","unstructured":"Sheng V. S. Provost F. &Ipeirotis P. G.(2008).Get another label? improving data quality and data mining using multiple noisy labelers.Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining KDD '08 (pp.614\u2013622). New York NY USA: ACM.","DOI":"10.1145\/1401890.1401965"},{"key":"e_1_2_12_15_1","doi-asserted-by":"crossref","unstructured":"Snow R. O'Connor B. Jurafsky D. &Ng A. Y.(2008).Cheap and fast\u2014but is it good?: evaluating non\u2010expert annotations for natural language tasks.Proceedings of the Conference on Empirical Methods in Natural Language Processing EMNLP '08 (pp.254\u2013263). Stroudsburg PA USA: Association for Computational Linguistics.","DOI":"10.3115\/1613715.1613751"},{"key":"e_1_2_12_16_1","doi-asserted-by":"crossref","unstructured":"Urbano J. Marrero M. Mart\u00edn D. Morato J. Robles K. &Llor\u00e9ns J.(2011).The University Carlos III of Madrid at TREC 2011 Crowdsourcing Track. Retrieved fromhttp:\/\/julian\u2010urbano.info\/wp\u2010content\/uploads\/035\u2010university\u2010carlos\u2010iii\u2010madrid\u2010trec\u20102011\u2010crowdsourcing\u2010track.pdf","DOI":"10.6028\/NIST.SP.500-296.crowd-uc3m"},{"key":"e_1_2_12_17_1","doi-asserted-by":"crossref","unstructured":"Wallace B. Small K. Brodley C. &Trikalinos T.(2011).Who should label what? Instance allocation in multiple expert active learning. Proceedings of the SIAM International Conference on Data Mining (SDM).","DOI":"10.1137\/1.9781611972818.16"},{"key":"e_1_2_12_18_1","doi-asserted-by":"crossref","unstructured":"Welinder P. 
&Perona P.(2010).Online crowdsourcing: Rating annotators and obtaining cost\u2010effective labels.2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)(pp.25\u201332). Presented at the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) IEEE.","DOI":"10.1109\/CVPRW.2010.5543189"},{"key":"e_1_2_12_19_1","first-page":"2035","article-title":"Whose vote should count more: Optimal integration of labels from labelers of unknown expertise","volume":"22","author":"Whitehill J.","year":"2009","journal-title":"Advances in Neural Information Processing Systems"}],"container-title":["Proceedings of the American Society for Information Science and Technology"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/api.wiley.com\/onlinelibrary\/tdm\/v1\/articles\/10.1002%2Fmeet.14504901166","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/api.wiley.com\/onlinelibrary\/tdm\/v1\/articles\/10.1002%2Fmeet.14504901166","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/asistdl.onlinelibrary.wiley.com\/doi\/pdf\/10.1002\/meet.14504901166","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,20]],"date-time":"2025-10-20T11:31:23Z","timestamp":1760959883000},"score":1,"resource":{"primary":{"URL":"https:\/\/asistdl.onlinelibrary.wiley.com\/doi\/10.1002\/meet.14504901166"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2012,1]]},"references-count":18,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2012,1]]}},"alternative-id":["10.1002\/meet.14504901166"],"URL":"https:\/\/doi.org\/10.1002\/meet.14504901166","archive":["Portico"],"relation":{},"ISSN":["0044-7870","1550-8390"],"issn-type":[{"type":"print","value":"0044-7870"},{"type":"electronic","value":"1550-8390"}]
,"subject":[],"published":{"date-parts":[[2012,1]]}}}