{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,13]],"date-time":"2026-04-13T15:58:05Z","timestamp":1776095885773,"version":"3.50.1"},"publisher-location":"New York, NY, USA","reference-count":76,"publisher":"ACM","license":[{"start":{"date-parts":[[2021,7,21]],"date-time":"2021-07-21T00:00:00Z","timestamp":1626825600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2021,7,21]]},"DOI":"10.1145\/3461702.3462610","type":"proceedings-article","created":{"date-parts":[[2021,7,31]],"date-time":"2021-07-31T01:21:32Z","timestamp":1627694492000},"page":"368-378","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":52,"title":["Designing Disaggregated Evaluations of AI Systems: Choices, Considerations, and Tradeoffs"],"prefix":"10.1145",
"author":[{"given":"Solon","family":"Barocas","sequence":"first","affiliation":[{"name":"Microsoft, New York City, NY, USA"}]},{"given":"Anhong","family":"Guo","sequence":"additional","affiliation":[{"name":"University of Michigan, Ann Arbor, MI, USA"}]},{"given":"Ece","family":"Kamar","sequence":"additional","affiliation":[{"name":"Microsoft, Redmond, WA, USA"}]},{"given":"Jacquelyn","family":"Krones","sequence":"additional","affiliation":[{"name":"Microsoft, Redmond, WA, USA"}]},{"given":"Meredith Ringel","family":"Morris","sequence":"additional","affiliation":[{"name":"Microsoft, Redmond, WA, USA"}]},{"given":"Jennifer Wortman","family":"Vaughan","sequence":"additional","affiliation":[{"name":"Microsoft, New York City, NY, USA"}]},{"given":"W. Duncan","family":"Wadsworth","sequence":"additional","affiliation":[{"name":"Microsoft, Redmond, WA, USA"}]},{"given":"Hanna","family":"Wallach","sequence":"additional","affiliation":[{"name":"Microsoft, New York City, NY, USA"}]}],"member":"320","published-online":{"date-parts":[[2021,7,30]]},
"reference":[{"key":"e_1_3_2_1_1_1","unstructured":"Algorithmic Justice League and the Center on Privacy and Technology at Georgetown Law. 2018. Safe Face Pledge. https:\/\/www.safefacepledge.org\/."},
{"key":"e_1_3_2_1_2_1","doi-asserted-by":"publisher","DOI":"10.1145\/3442188.3445888"},
{"key":"e_1_3_2_1_3_1","volume-title":"Machine Bias: There's software used across the country to predict future criminals. And it's biased against blacks. In ProPublica.","author":"Angwin Julia","year":"2016","unstructured":"Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner. 2016. Machine Bias: There's software used across the country to predict future criminals. And it's biased against blacks. In ProPublica."},
{"key":"e_1_3_2_1_4_1","unstructured":"Apple Inc. 2019. About Face ID advanced technology. https:\/\/support.apple.com\/en-us\/HT208108."},
{"key":"e_1_3_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1257\/pandp.20181003"},
{"key":"e_1_3_2_1_6_1","unstructured":"Australian Border Force. 2019. Smart Gates. https:\/\/www.abf.gov.au\/entering-and-leaving-australia\/smartgates."},
{"key":"e_1_3_2_1_7_1","volume-title":"The problem with bias: from allocative to representational harms in machine learning","author":"Barocas Solon","year":"2017","unstructured":"Solon Barocas, Kate Crawford, Aaron Shapiro, and Hanna Wallach. 2017. The problem with bias: from allocative to representational harms in machine learning. Special Interest Group for Computing, Information and Society (SIGCIS) 2 (2017)."},
{"key":"e_1_3_2_1_8_1","volume-title":"Proc. of the Conference on Fairness, Accountability, and Transparency (FAT*). 289--298","author":"Benthall Sebastian","unstructured":"Sebastian Benthall and Bruce D. Haynes. 2019. Racial categories in machine learning. In Proc. of the Conference on Fairness, Accountability, and Transparency (FAT*). 289--298."},
{"key":"e_1_3_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2017.2652466"},
{"key":"e_1_3_2_1_10_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.cviu.2008.12.007"},
{"key":"e_1_3_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.imavis.2009.09.005"},
{"key":"e_1_3_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1145\/3351095.3372877"},
{"key":"e_1_3_2_1_13_1","unstructured":"Tolga Bolukbasi, Kai-Wei Chang, James Y. Zou, Venkatesh Saligrama, and Adam T. Kalai. 2016. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In Advances in Neural Information Processing Systems. 4349--4357."},
{"key":"e_1_3_2_1_14_1","unstructured":"Joy Buolamwini. 2018. When the Robot Doesn't See Dark Skin. The New York Times (2018)."},
{"key":"e_1_3_2_1_15_1","volume-title":"Proc. of the Conference on Fairness, Accountability, and Transparency (FAT*). 77--91","author":"Buolamwini Joy","year":"2018","unstructured":"Joy Buolamwini and Timnit Gebru. 2018. Gender shades: Intersectional accuracy disparities in commercial gender classification. In Proc. of the Conference on Fairness, Accountability, and Transparency (FAT*). 77--91."},
{"key":"e_1_3_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.1089\/big.2016.0047"},
{"key":"e_1_3_2_1_17_1","volume-title":"Proc. of the Conference on Fairness, Accountability, and Transparency (FAT*).","author":"Chouldechova Alexandra","year":"2018","unstructured":"Alexandra Chouldechova, Emily Putnam-Hornstein, Diana Benavides Prado, Oleksandr Fialko, and Rhema Vaithianathan. 2018. A case study of algorithm-assisted decision making in child maltreatment hotline screening decisions. In Proc. of the Conference on Fairness, Accountability, and Transparency (FAT*)."},
{"key":"e_1_3_2_1_18_1","volume-title":"Demographic effects in facial recognition and their dependence on image acquisition: An evaluation of eleven commercial systems","author":"Cook Cynthia","year":"2019","unstructured":"Cynthia Cook, John Howard, Yevgeniy Sirotin, Jerry Tipton, and Arun Vemury. 2019. Demographic effects in facial recognition and their dependence on image acquisition: An evaluation of eleven commercial systems. IEEE Transactions on Biometrics, Behavior, and Identity Science (2019)."},
{"key":"e_1_3_2_1_19_1","volume-title":"A computer program used for bail and sentencing decisions was labeled biased against blacks. It's actually not that clear. Washington Post(October","author":"Corbett-Davies Sam","year":"2016","unstructured":"Sam Corbett-Davies, Emma Pierson, Avi Feller, and Sharad Goel. 2016. A computer program used for bail and sentencing decisions was labeled biased against blacks. It's actually not that clear. Washington Post (October 2016)."},
{"key":"e_1_3_2_1_20_1","volume-title":"Demarginalizing the Intersection of Race and Sex: A Black Feminist Critique of Antidiscrimination Doctrine, Feminist Theory and Antiracist Politics","author":"Crenshaw Kimberl\u00e9","unstructured":"Kimberl\u00e9 Crenshaw. 1989. Demarginalizing the Intersection of Race and Sex: A Black Feminist Critique of Antidiscrimination Doctrine, Feminist Theory and Antiracist Politics. University of Chicago Legal Forum (1989)."},
{"key":"e_1_3_2_1_21_1","unstructured":"Terrance DeVries, Ishan Misra, Changhan Wang, and Laurens van der Maaten. 2019. Does object recognition work for everyone? https:\/\/research.fb.com\/wp-content\/uploads\/2019\/06\/Does-Object-Recognition-Work-for-Everyone.pdf."},
{"key":"e_1_3_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.1109\/TIFS.2014.2359646"},
{"key":"e_1_3_2_1_23_1","unstructured":"Federal Bureau of Investigation. 2019. Next Generation Identification (NGI). https:\/\/www.fbi.gov\/services\/cjis\/fingerprints-and-other-biometrics\/ngi."},
{"key":"e_1_3_2_1_24_1","unstructured":"Electronic Frontier Foundation. 2019. Bans, Bills and Moratoria. https:\/\/www.eff.org\/aboutface\/bans-bills-and-moratoria."},
{"key":"e_1_3_2_1_25_1","unstructured":"Sorelle A. Friedler, Carlos Scheidegger, and Suresh Venkatasubramanian. 2016. On the (im)possibility of fairness. CoRR arXiv:1609.07236."},
{"key":"e_1_3_2_1_26_1","volume-title":"Hanna Wallach, Hal Daum\u00e9 III, and Kate Crawford.","author":"Gebru Timnit","year":"2018","unstructured":"Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daum\u00e9 III, and Kate Crawford. 2018. Datasheets for datasets. CoRR arXiv:1803.09010."},
{"key":"e_1_3_2_1_27_1","doi-asserted-by":"publisher","DOI":"10.1198\/004017005000000661"},
{"key":"e_1_3_2_1_28_1","unstructured":"Eden Gillespie. 2019. Are you being scanned? How facial recognition technology follows you even as you shop. https:\/\/www.theguardian.com\/technology\/2019\/feb\/24\/are-you-being-scanned-how-facial-recognition-technology-follows-you-even-as-you-shop."},
{"key":"e_1_3_2_1_29_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.csda.2013.05.025"},
{"key":"e_1_3_2_1_30_1","unstructured":"Google. 2019. Google Photos Help -- Search by people, things & places in your photos. https:\/\/support.google.com\/photos\/answer\/6128838."},
{"key":"e_1_3_2_1_31_1","unstructured":"Google. 2020. Google Cloud Model Cards: Face Detection Model Card v0. https:\/\/modelcards.withgoogle.com\/face-detection."},
{"key":"e_1_3_2_1_32_1","unstructured":"Jay Greene. 2020. Microsoft won't sell police its facial-recognition technology, following similar moves by Amazon and IBM. The Washington Post (2020). https:\/\/www.washingtonpost.com\/technology\/2020\/06\/11\/microsoft-facial-recognition\/"},
{"key":"e_1_3_2_1_33_1","volume-title":"Face Recognition Vendor Test (FRVT) Part 1: Verification","author":"Grother Patrick","unstructured":"Patrick Grother, Mei Ngan, and Kayee Hanaoka. 2019. Face Recognition Vendor Test (FRVT) Part 1: Verification. Interagency Report DRAFT. National Institute of Standards and Technology (NIST). https:\/\/nist.gov\/programs-projects\/frvt-11-verification"},
{"key":"e_1_3_2_1_35_1","doi-asserted-by":"crossref","unstructured":"Patrick Grother, Mei Ngan, and Kayee Hanaoka. 2019. Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects. Retrieved at https:\/\/doi.org\/10.6028\/NIST.IR.8280.","DOI":"10.6028\/NIST.IR.8280"},
{"key":"e_1_3_2_1_36_1","doi-asserted-by":"publisher","DOI":"10.1145\/3351095.3372826"},
{"key":"e_1_3_2_1_37_1","unstructured":"Adam Harvey and Jules LaPlace. 2021. Exposing.ai. https:\/\/exposing.ai"},
{"key":"e_1_3_2_1_38_1","unstructured":"Nabil Hassein. 2017. Against Black Inclusion in Facial Recognition. https:\/\/digitaltalkingdrum.com\/2017\/08\/15\/against-black-inclusion-in-facial-recognition\/."},
{"key":"e_1_3_2_1_39_1","doi-asserted-by":"crossref","unstructured":"Caner Hazirbas, Joanna Bitton, Brian Dolhansky, Jacqueline Pan, Albert Gordo, and Cristian Canton Ferrer. 2021. Towards measuring fairness in AI: the Casual Conversations dataset. https:\/\/ai.facebook.com\/research\/publications\/towards-measuring-fairness-in-ai-the-casual-conversations-dataset.","DOI":"10.1109\/TBIOM.2021.3132237"},
{"key":"e_1_3_2_1_40_1","unstructured":"Deborah Hellman. 2018. Indirect Discrimination and the Duty to Avoid Compounding Injustice. In Foundations of Indirect Discrimination Law, Hugh Collins and Tarunabh Khaitan (Eds.). Hart."},
{"key":"e_1_3_2_1_41_1","unstructured":"HireVue. 2019. HireVue - Hiring Intelligence | Assessment & Video Interview Software. https:\/\/www.hirevue.com."},
{"key":"e_1_3_2_1_42_1","unstructured":"IBM. 2020. IBM CEO's Letter to Congress on Racial Justice Reform. https:\/\/www.ibm.com\/blogs\/policy\/facial-recognition-sunset-racial-justice-reforms\/."},
{"key":"e_1_3_2_1_43_1","volume-title":"Facial Recognition Technology: A Survey of Policy and Implementation Issues","author":"Introna Lucas D","year":"2009","unstructured":"Lucas D. Introna and Helen Nissenbaum. 2009. Facial Recognition Technology: A Survey of Policy and Implementation Issues. Center for Catastrophe Preparedness and Response, New York University (2009)."},
{"key":"e_1_3_2_1_44_1","volume-title":"Proc. of the Conf. on Fairness, Accountability, and Transparency (FAccT).","author":"Jacobs Abigail Z.","unstructured":"Abigail Z. Jacobs and Hanna Wallach. 2021. Measurement and Fairness. In Proc. of the Conf. on Fairness, Accountability, and Transparency (FAccT)."},
{"key":"e_1_3_2_1_45_1","unstructured":"Kimmo K\u00e4rkk\u00e4inen and Jungseock Joo. 2019. FairFace: Face Attribute Dataset for Balanced Race, Gender and Age. arXiv preprint arXiv:1908.04913 (2019)."},
{"key":"e_1_3_2_1_46_1","first-page":"1","article-title":"Methodological issues in measuring health disparities. Vital and Health Statistics","volume":"2","author":"Keppel Kenneth","year":"2005","unstructured":"Kenneth Keppel, Elsie Pamuk, John Lynch, Olivia Carter-Pokras, Insun Kim, Vickie Mays, Jeffrey Pearcy, Victor Schoenbach, and Joel S. Weissman. 2005. Methodological issues in measuring health disparities. Vital and Health Statistics, Series 2: Data Evaluation and Methods Research 2, 141 (2005), 1--16.","journal-title":"Series 2: Data Evaluation and Methods Research"},
{"key":"e_1_3_2_1_47_1","doi-asserted-by":"publisher","DOI":"10.1109\/TIFS.2012.2214212"},
{"key":"e_1_3_2_1_48_1","volume-title":"Inherent Trade-Offs in the Fair Determination of Risk Scores. In 8th Innovations in Theoretical Computer Science Conf. (ITCS","author":"Kleinberg Jon","year":"2017","unstructured":"Jon Kleinberg, Sendhil Mullainathan, and Manish Raghavan. 2017. Inherent Trade-Offs in the Fair Determination of Risk Scores. In 8th Innovations in Theoretical Computer Science Conf. (ITCS 2017)."},
{"key":"e_1_3_2_1_49_1","volume-title":"Racial disparities in automated speech recognition. Proc. of the National Academy of Sciences 117, 14","author":"Koenecke Allison","year":"2020","unstructured":"Allison Koenecke, Andrew Nam, Emily Lake, Joe Nudell, Minnie Quartey, Zion Mengesha, Connor Toups, John R. Rickford, Dan Jurafsky, and Sharad Goel. 2020. Racial disparities in automated speech recognition. Proc. of the National Academy of Sciences 117, 14 (2020), 7684--7689."},
{"key":"e_1_3_2_1_50_1","volume-title":"Proc. of the IEEE\/CVF Conf. on Computer Vision and Pattern Recognition (CVPR) Workshops.","author":"Krishnapriya K. S.","year":"2019","unstructured":"K. S. Krishnapriya, Kushal Vangara, Michael C. King, Vitor Albiero, and Kevin Bowyer. 2019. Characterizing the variability in face recognition accuracy relative to race. In Proc. of the IEEE\/CVF Conf. on Computer Vision and Pattern Recognition (CVPR) Workshops."},
{"key":"e_1_3_2_1_51_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v31i1.10821"},
{"key":"e_1_3_2_1_52_1","unstructured":"Jeff Larson, Surya Mattu, Lauren Kirchner, and Julia Angwin. 2016. How We Analyzed the COMPAS Recidivism Algorithm. ProPublica (May 2016). https:\/\/www.propublica.org\/article\/how-we-analyzed-the-compas-recidivism-algorithm?token=SV45W9VHgigYbUE-m7o9xnvExqobnjcg"},
{"key":"e_1_3_2_1_53_1","volume-title":"Illinois Cases To Watch In","author":"Law","year":"2021","unstructured":"Law 360. 2021. Illinois Cases To Watch In 2021. https:\/\/www.law360.com\/articles\/1336065\/illinois-cases-to-watch-in-2021."},
{"key":"e_1_3_2_1_54_1","volume-title":"Kriegman","author":"Lee Kuang Chih","year":"2005","unstructured":"Kuang Chih Lee, Jeffrey Ho, and David J. Kriegman. 2005. Acquiring linear subspaces for face recognition under variable lighting. IEEE Transactions on Pattern Analysis and Machine Intelligence 27, 5 (2005), 684--698. https:\/\/doi.org\/10.1109\/TPAMI.2005.92"},
{"key":"e_1_3_2_1_55_1","unstructured":"Steve Lohr. 2018. Facial Recognition Is Accurate, if You're a White Guy. The New York Times (2018)."},
{"key":"e_1_3_2_1_56_1","volume-title":"Transparency Note: Azure Cognitive Services Face API. https:\/\/azure.microsoft.com\/en-us\/resources\/transparency-note-azure-cognitive-services-face-api\/.","year":"2019","unstructured":"Microsoft. 2019. Transparency Note: Azure Cognitive Services Face API. https:\/\/azure.microsoft.com\/en-us\/resources\/transparency-note-azure-cognitive-services-face-api\/."},
{"key":"e_1_3_2_1_57_1","volume-title":"Windows Hello: Discover facial recognition on Windows 10. https:\/\/www.microsoft.com\/en-us\/windows\/windows-hello.","year":"2019","unstructured":"Microsoft. 2019. Windows Hello: Discover facial recognition on Windows 10. https:\/\/www.microsoft.com\/en-us\/windows\/windows-hello."},
{"key":"e_1_3_2_1_58_1","doi-asserted-by":"publisher","DOI":"10.1145\/3287560.3287596"},
{"key":"e_1_3_2_1_59_1","volume-title":"Varshney","author":"Muthukumar Vidya","year":"2019","unstructured":"Vidya Muthukumar, Tejaswini Pedapati, Nalini Ratha, Prasanna Sattigeri, Chai-Wah Wu, Brian Kingsbury, Abhishek Kumar, Samuel Thomas, Aleksandra Mojsilovic, and Kush R. Varshney. 2019. Understanding Unequal Gender Classification Accuracy from Face Images. CoRR arXiv:1812.00099."},
{"key":"e_1_3_2_1_60_1","unstructured":"Antony Nicol, Chris Casey, and Stuart MacFarlane. 2002. Children are ready for speech technology - but is the technology ready for them? Interaction Design and Children, Eindhoven, The Netherlands (2002)."},
{"key":"e_1_3_2_1_61_1","volume-title":"Proc. of the AAAI Conf. on Human Computation and Crowdsourcing (HCOMP).","author":"Nushi Besmira","year":"2018","unstructured":"Besmira Nushi, Ece Kamar, and Eric Horvitz. 2018. Towards accountable AI: Hybrid human-machine analyses for characterizing system failure. In Proc. of the AAAI Conf. on Human Computation and Crowdsourcing (HCOMP)."},
{"key":"e_1_3_2_1_62_1","doi-asserted-by":"publisher","DOI":"10.1126\/science.aax2342"},
{"key":"e_1_3_2_1_63_1","doi-asserted-by":"publisher","DOI":"10.1145\/3442188.3445870"},
{"key":"e_1_3_2_1_64_1","volume-title":"CHI Workshop on Human-Centered Approaches to Fair and Responsible AI.","author":"Poursabzi-Sangdeh Forough","year":"2020","unstructured":"Forough Poursabzi-Sangdeh, Samira Samadi, Jennifer Wortman Vaughan, and Hanna Wallach. 2020. A Human in the Loop is Not Enough: The Need for Human-Subject Experiments in Facial Recognition. In CHI Workshop on Human-Centered Approaches to Fair and Responsible AI."},
{"key":"e_1_3_2_1_65_1","unstructured":"Proctorio. 2021. A Comprehensive Learning Integrity Platform - Proctorio. https:\/\/proctorio.com\/."},
{"key":"e_1_3_2_1_66_1","unstructured":"ProPublica. 2016. Data and analysis for 'Machine Bias'. https:\/\/github.com\/propublica\/compas-analysis."},
{"key":"e_1_3_2_1_67_1","doi-asserted-by":"publisher","DOI":"10.1145\/3306618.3314244"},
{"key":"e_1_3_2_1_68_1","doi-asserted-by":"publisher","DOI":"10.1145\/3375627.3375820"},
{"key":"e_1_3_2_1_69_1","doi-asserted-by":"publisher","DOI":"10.1145\/3351095.3372873"},
{"key":"e_1_3_2_1_70_1","volume-title":"A field study of the impact of gender and user's technical experience on the performance of voice-activated medical tracking application. International Journal of Human-Computer Studies 60, 5--6","author":"Rodger James A","year":"2004","unstructured":"James A. Rodger and Parag C. Pendharkar. 2004. A field study of the impact of gender and user's technical experience on the performance of voice-activated medical tracking application. International Journal of Human-Computer Studies 60, 5--6 (2004), 529--544."},
{"key":"e_1_3_2_1_71_1","volume-title":"Brubaker","author":"Scheuerman Morgan Klaus","year":"2019","unstructured":"Morgan Klaus Scheuerman, Jacob M. Paul, and Jed R. Brubaker. 2019. How Computers See Gender: An Evaluation of Gender Classification in Commercial Facial Analysis and Image Labeling Services. Proc. of the ACM on Human--Computer Interaction (2019)."},
{"key":"e_1_3_2_1_72_1","doi-asserted-by":"publisher","DOI":"10.1145\/3287560.3287598"},
{"key":"e_1_3_2_1_73_1","unstructured":"Jacob Snow. 2018. Amazon's Face Recognition Falsely Matched 28 Members of Congress With Mugshots. https:\/\/www.aclu.org\/blog\/privacy-technology\/surveillance-technologies\/amazons-face-recognition-falsely-matched-28."},
{"key":"e_1_3_2_1_74_1","doi-asserted-by":"crossref","unstructured":"Luke Stark. 2019. Facial Recognition is the Plutonium of AI. ACM XRDS 25, 3 (2019), 50--55.","DOI":"10.1145\/3313129"},
{"key":"e_1_3_2_1_75_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/W17-1606"},
{"key":"e_1_3_2_1_76_1","unstructured":"Sepsis Watch. 2021. Sepsis Watch: the implementation of a Duke-specific early warning system for sepsis - Duke Institute for Health Innovation. https:\/\/dihi.org\/project\/sepsiswatch\/."},
{"key":"e_1_3_2_1_77_1","volume-title":"IARPA Janus Benchmark-B Face Dataset. IEEE Computer Society Conf. on Computer Vision and Pattern Recognition Workshops 2017-July (2017","author":"Whitelam Cameron","year":"2017","unstructured":"Cameron Whitelam, Emma Taborsky, Austin Blanton, Brianna Maze, Jocelyn Adams, Tim Miller, Nathan Kalka, Anil K. Jain, James A. Duncan, Kristen Allen, Jordan Cheney, and Patrick Grother. 2017. IARPA Janus Benchmark-B Face Dataset. IEEE Computer Society Conf. on Computer Vision and Pattern Recognition Workshops 2017-July (2017), 592--600. https:\/\/doi.org\/10.1109\/CVPRW.2017.87"}],
"event":{"name":"AIES '21: AAAI\/ACM Conference on AI, Ethics, and Society","location":"Virtual Event USA","acronym":"AIES '21","sponsor":["SIGAI ACM Special Interest Group on Artificial Intelligence","AAAI"]},"container-title":["Proceedings of the 2021 AAAI\/ACM Conference on AI, Ethics, and Society"],"original-title":[],
"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3461702.3462610","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3461702.3462610","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],
"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T20:17:06Z","timestamp":1750191426000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3461702.3462610"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,7,21]]},"references-count":76,"alternative-id":["10.1145\/3461702.3462610","10.1145\/3461702"],"URL":"https:\/\/doi.org\/10.1145\/3461702.3462610","relation":{},"subject":[],"published":{"date-parts":[[2021,7,21]]},"assertion":[{"value":"2021-07-30","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}