{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,27]],"date-time":"2026-04-27T13:38:48Z","timestamp":1777297128588,"version":"3.51.4"},"publisher-location":"New York, NY, USA","reference-count":118,"publisher":"ACM","license":[{"start":{"date-parts":[[2023,6,12]],"date-time":"2023-06-12T00:00:00Z","timestamp":1686528000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2023,6,12]]},"DOI":"10.1145\/3593013.3593981","type":"proceedings-article","created":{"date-parts":[[2023,6,12]],"date-time":"2023-06-12T14:40:46Z","timestamp":1686580846000},"page":"111-122","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":51,"title":["The Gradient of Generative AI Release: Methods and Considerations"],"prefix":"10.1145","author":[{"ORCID":"https:\/\/orcid.org\/0009-0007-5845-1938","authenticated-orcid":false,"given":"Irene","family":"Solaiman","sequence":"first","affiliation":[{"name":"Hugging Face, USA"}]}],"member":"320","published-online":{"date-parts":[[2023,6,12]]},"reference":[{"key":"e_1_3_2_2_1_1","volume-title":"Zou","author":"Abid Abubakar","year":"2019","unstructured":"Abubakar Abid , Ali Abdalla , Ali Abid , Dawood Khan , Abdulrahman Alfozan , and James Y . Zou . 2019 . Gradio : Hassle-Free Sharing and Testing of ML Models in the Wild. CoRR abs\/1906.02569 (2019). arXiv:1906.02569http:\/\/arxiv.org\/abs\/1906.02569 Abubakar Abid, Ali Abdalla, Ali Abid, Dawood Khan, Abdulrahman Alfozan, and James Y. Zou. 2019. Gradio: Hassle-Free Sharing and Testing of ML Models in the Wild. CoRR abs\/1906.02569 (2019). 
arXiv:1906.02569http:\/\/arxiv.org\/abs\/1906.02569"},{"key":"e_1_3_2_2_2_1","article-title":"NN-Lock: A Lightweight Authorization to Prevent IP Threats of Deep Learning Models","volume":"18","author":"Alam Manaar","year":"2022","unstructured":"Manaar Alam , Sayandeep Saha , Debdeep Mukhopadhyay , and Sandip Kundu . 2022 . NN-Lock: A Lightweight Authorization to Prevent IP Threats of Deep Learning Models . J. Emerg. Technol. Comput. Syst. 18 , 3, Article 51 (apr 2022), 19 pages. https:\/\/doi.org\/10.1145\/3505634 10.1145\/3505634 Manaar Alam, Sayandeep Saha, Debdeep Mukhopadhyay, and Sandip Kundu. 2022. NN-Lock: A Lightweight Authorization to Prevent IP Threats of Deep Learning Models. J. Emerg. Technol. Comput. Syst. 18, 3, Article 51 (apr 2022), 19 pages. https:\/\/doi.org\/10.1145\/3505634","journal-title":"J. Emerg. Technol. Comput. Syst."},{"key":"e_1_3_2_2_3_1","doi-asserted-by":"crossref","unstructured":"E. Awad S. Dsouza R. Kim J. Schulz J. Henrich A. Shariff J. F. Bonnefon and I. Rahwan. 2018. The Moral Machine Experiment. Nature 563 7729 (2018) 59\u201364. https:\/\/www.nature.com\/articles\/s41586-018-0637-6  E. Awad S. Dsouza R. Kim J. Schulz J. Henrich A. Shariff J. F. Bonnefon and I. Rahwan. 2018. The Moral Machine Experiment. Nature 563 7729 (2018) 59\u201364. https:\/\/www.nature.com\/articles\/s41586-018-0637-6","DOI":"10.1038\/s41586-018-0637-6"},{"key":"e_1_3_2_2_4_1","unstructured":"[\n  4\n  ]  Nathan Benaich and Ian Hogarth. 2022. https:\/\/www.stateof.ai\/2022-report-launch.html  [4] Nathan Benaich and Ian Hogarth. 2022. https:\/\/www.stateof.ai\/2022-report-launch.html"},{"key":"e_1_3_2_2_5_1","doi-asserted-by":"publisher","DOI":"10.1162\/tacl_a_00041"},{"key":"e_1_3_2_2_6_1","doi-asserted-by":"publisher","DOI":"10.1145\/3442188.3445922"},{"key":"e_1_3_2_2_7_1","volume-title":"Race after technology: Abolitionist Tools for the new jim code","author":"Benjamin Ruha","unstructured":"Ruha Benjamin . 2020. 
Race after technology: Abolitionist Tools for the new jim code . Polity . Ruha Benjamin. 2020. Race after technology: Abolitionist Tools for the new jim code. Polity."},{"key":"e_1_3_2_2_8_1","volume-title":"Don\u2019t let industry write the rules for AI. Nature News (May","author":"Benkler Yochai","year":"2019","unstructured":"Yochai Benkler . 2019. Don\u2019t let industry write the rules for AI. Nature News (May 2019 ). https:\/\/www.nature.com\/articles\/d41586-019-01413-1 Yochai Benkler. 2019. Don\u2019t let industry write the rules for AI. Nature News (May 2019). https:\/\/www.nature.com\/articles\/d41586-019-01413-1"},{"key":"e_1_3_2_2_9_1","volume-title":"Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue. Association for Computational Linguistics","author":"Bergman A. Stevie","year":"2022","unstructured":"A. Stevie Bergman , Gavin Abercrombie , Shannon Spruit , Dirk Hovy , Emily Dinan , Y- Lan Boureau , and Verena Rieser . 2022 . Guiding the Release of Safer E2E Conversational AI through Value Sensitive Design . In Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue. Association for Computational Linguistics , Edinburgh, UK, 39\u201352. https:\/\/aclanthology.org\/ 2022.sigdial-1.4 A. Stevie Bergman, Gavin Abercrombie, Shannon Spruit, Dirk Hovy, Emily Dinan, Y-Lan Boureau, and Verena Rieser. 2022. Guiding the Release of Safer E2E Conversational AI through Value Sensitive Design. In Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue. Association for Computational Linguistics, Edinburgh, UK, 39\u201352. https:\/\/aclanthology.org\/2022.sigdial-1.4"},{"key":"e_1_3_2_2_10_1","unstructured":"BigScience. 2022. BigScience RAIL License v1.0. https:\/\/huggingface.co\/spaces\/bigscience\/license  BigScience. 2022. BigScience RAIL License v1.0. 
https:\/\/huggingface.co\/spaces\/bigscience\/license"},{"key":"e_1_3_2_2_11_1","volume-title":"The Values Encoded in Machine Learning Research. In 2022 ACM Conference on Fairness, Accountability, and Transparency","author":"Birhane Abeba","year":"2022","unstructured":"Abeba Birhane , Pratyusha Kalluri , Dallas Card , William Agnew , Ravit Dotan , and Michelle Bao . 2022 . The Values Encoded in Machine Learning Research. In 2022 ACM Conference on Fairness, Accountability, and Transparency ( Seoul, Republic of Korea) (FAccT \u201922). Association for Computing Machinery, New York, NY, USA, 173\u2013184. https:\/\/doi.org\/10.1145\/3531146.3533083 10.1145\/3531146.3533083 Abeba Birhane, Pratyusha Kalluri, Dallas Card, William Agnew, Ravit Dotan, and Michelle Bao. 2022. The Values Encoded in Machine Learning Research. In 2022 ACM Conference on Fairness, Accountability, and Transparency (Seoul, Republic of Korea) (FAccT \u201922). Association for Computing Machinery, New York, NY, USA, 173\u2013184. https:\/\/doi.org\/10.1145\/3531146.3533083"},{"key":"e_1_3_2_2_12_1","volume-title":"Proceedings of the ACL Workshop on Challenges & Perspectives in Creating Large Language Models. https:\/\/arxiv.org\/abs\/2204","author":"Black Sid","year":"2022","unstructured":"Sid Black , Stella Biderman , Eric Hallahan , Quentin Anthony , Leo Gao , Laurence Golding , Horace He , Connor Leahy , Kyle McDonell , Jason Phang , Michael Pieler , USVSN Sai Prashanth , Shivanshu Purohit , Laria Reynolds , Jonathan Tow , Ben Wang , and Samuel Weinbach . 2022 . GPT-NeoX-20B: An Open-Source Autoregressive Language Model . In Proceedings of the ACL Workshop on Challenges & Perspectives in Creating Large Language Models. 
https:\/\/arxiv.org\/abs\/2204 .06745 Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, and Samuel Weinbach. 2022. GPT-NeoX-20B: An Open-Source Autoregressive Language Model. In Proceedings of the ACL Workshop on Challenges & Perspectives in Creating Large Language Models. https:\/\/arxiv.org\/abs\/2204.06745"},{"key":"e_1_3_2_2_13_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.acl-main.485"},{"key":"e_1_3_2_2_14_1","volume-title":"A Systematic Review on Model Watermarking for Neural Networks. Frontiers in Big Data 4","author":"Boenisch Franziska","year":"2021","unstructured":"Franziska Boenisch . 2021. A Systematic Review on Model Watermarking for Neural Networks. Frontiers in Big Data 4 ( 2021 ). https:\/\/doi.org\/10.3389\/fdata.2021.729663 10.3389\/fdata.2021.729663 Franziska Boenisch. 2021. A Systematic Review on Model Watermarking for Neural Networks. Frontiers in Big Data 4 (2021). https:\/\/doi.org\/10.3389\/fdata.2021.729663"},{"key":"e_1_3_2_2_15_1","volume-title":"2021. On the Opportunities and Risks of Foundation Models. CoRR abs\/2108.07258","author":"Bommasani Rishi","year":"2021","unstructured":"Rishi Bommasani , Drew A. Hudson , Ehsan Adeli , Russ Altman , 2021. On the Opportunities and Risks of Foundation Models. CoRR abs\/2108.07258 ( 2021 ). arXiv:2108.07258https:\/\/arxiv.org\/abs\/2108.07258 Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, et al.2021. On the Opportunities and Risks of Foundation Models. CoRR abs\/2108.07258 (2021). arXiv:2108.07258https:\/\/arxiv.org\/abs\/2108.07258"},{"key":"e_1_3_2_2_16_1","unstructured":"Greg Brockman Mira Murati Peter Welinder and OpenAI. 2020. OpenAI API. https:\/\/openai.com\/blog\/openai-api\/  Greg Brockman Mira Murati Peter Welinder and OpenAI. 2020. OpenAI API. 
https:\/\/openai.com\/blog\/openai-api\/"},{"key":"e_1_3_2_2_17_1","volume-title":"Simon Beard, Haydn Belfield, Sebastian Farquhar, Clare Lyle, Rebecca Crootof, Owain Evans, Michael Page, Joanna Bryson, Roman Yampolskiy, and Dario Amodei.","author":"Brundage Miles","year":"2018","unstructured":"Miles Brundage , Shahar Avin , Jack Clark , Helen Toner , Peter Eckersley , Ben Garfinkel , Allan Dafoe , Paul Scharre , Thomas Zeitzoff , Bobby Filar , Hyrum S. Anderson , Heather Roff , Gregory C. Allen , Jacob Steinhardt , Carrick Flynn , Se\u00e1n \u00d3 h\u00c9igeartaigh , Simon Beard, Haydn Belfield, Sebastian Farquhar, Clare Lyle, Rebecca Crootof, Owain Evans, Michael Page, Joanna Bryson, Roman Yampolskiy, and Dario Amodei. 2018 . The Malicious Use of Artificial Intelligence: Forecasting, Prevention , and Mitigation. CoRR abs\/1802.07228 (2018). arXiv:1802.07228http:\/\/arxiv.org\/abs\/1802.07228 Miles Brundage, Shahar Avin, Jack Clark, Helen Toner, Peter Eckersley, Ben Garfinkel, Allan Dafoe, Paul Scharre, Thomas Zeitzoff, Bobby Filar, Hyrum S. Anderson, Heather Roff, Gregory C. Allen, Jacob Steinhardt, Carrick Flynn, Se\u00e1n \u00d3 h\u00c9igeartaigh, Simon Beard, Haydn Belfield, Sebastian Farquhar, Clare Lyle, Rebecca Crootof, Owain Evans, Michael Page, Joanna Bryson, Roman Yampolskiy, and Dario Amodei. 2018. The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. CoRR abs\/1802.07228 (2018). arXiv:1802.07228http:\/\/arxiv.org\/abs\/1802.07228"},{"key":"e_1_3_2_2_18_1","volume-title":"et. al","author":"Brundage Miles","year":"2020","unstructured":"Miles Brundage , Shahar Avin , Jasmine Wang , Haydn Belfield , et. al .. 2020 . Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims . https:\/\/doi.org\/10.48550\/ARXIV.2004.07213 10.48550\/ARXIV.2004.07213 Miles Brundage, Shahar Avin, Jasmine Wang, Haydn Belfield, et. al.. 2020. Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims. 
https:\/\/doi.org\/10.48550\/ARXIV.2004.07213"},{"key":"e_1_3_2_2_19_1","volume-title":"The Challenges of Data Quality and Data Quality Assessment in the Big Data Era. Data Sci. J. 14","author":"Cai Li","year":"2015","unstructured":"Li Cai and Yangyong Zhu . 2015. The Challenges of Data Quality and Data Quality Assessment in the Big Data Era. Data Sci. J. 14 ( 2015 ), 2. Li Cai and Yangyong Zhu. 2015. The Challenges of Data Quality and Data Quality Assessment in the Big Data Era. Data Sci. J. 14 (2015), 2."},{"key":"e_1_3_2_2_20_1","volume-title":"Semantics derived automatically from language corpora contain human-like biases. Science 356, 6334","author":"Caliskan Aylin","year":"2017","unstructured":"Aylin Caliskan , Joanna J. Bryson , and Arvind Narayanan . 2017. Semantics derived automatically from language corpora contain human-like biases. Science 356, 6334 ( 2017 ), 183\u2013186. https:\/\/doi.org\/10.1126\/science.aal4230 arXiv:https:\/\/www.science.org\/doi\/pdf\/10.1126\/science.aal4230 10.1126\/science.aal4230 Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science 356, 6334 (2017), 183\u2013186. https:\/\/doi.org\/10.1126\/science.aal4230 arXiv:https:\/\/www.science.org\/doi\/pdf\/10.1126\/science.aal4230"},{"key":"e_1_3_2_2_21_1","volume-title":"Extracting Training Data from Large Language Models. CoRR abs\/2012.07805","author":"Carlini Nicholas","year":"2020","unstructured":"Nicholas Carlini , Florian Tram\u00e8r , Eric Wallace , Matthew Jagielski , Ariel Herbert-Voss , Katherine Lee , Adam Roberts , Tom B. Brown , Dawn Song , \u00dalfar Erlingsson , Alina Oprea , and Colin Raffel . 2020. Extracting Training Data from Large Language Models. CoRR abs\/2012.07805 ( 2020 ). arXiv:2012.07805https:\/\/arxiv.org\/abs\/2012.07805 Nicholas Carlini, Florian Tram\u00e8r, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom B. 
Brown, Dawn Song, \u00dalfar Erlingsson, Alina Oprea, and Colin Raffel. 2020. Extracting Training Data from Large Language Models. CoRR abs\/2012.07805 (2020). arXiv:2012.07805https:\/\/arxiv.org\/abs\/2012.07805"},{"key":"e_1_3_2_2_22_1","unstructured":"CarperAI. 2022. Carperai an ELEUTHERAI lab announces plans for the first open-source \"instruction-tuned\" language model.https:\/\/carper.ai\/instruct-gpt-announcement\/  CarperAI. 2022. Carperai an ELEUTHERAI lab announces plans for the first open-source \"instruction-tuned\" language model.https:\/\/carper.ai\/instruct-gpt-announcement\/"},{"key":"e_1_3_2_2_23_1","doi-asserted-by":"publisher","DOI":"10.5555\/3437539.3437711"},{"key":"e_1_3_2_2_24_1","volume-title":"et. al","author":"Chowdhery Aakanksha","year":"2022","unstructured":"Aakanksha Chowdhery , Sharan Narang , Jacob Devlin , et. al . 2022 . PaLM: Scaling Language Modeling with Pathways . https:\/\/doi.org\/10.48550\/ARXIV.2204.02311 10.48550\/ARXIV.2204.02311 Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, et. al. 2022. PaLM: Scaling Language Modeling with Pathways. https:\/\/doi.org\/10.48550\/ARXIV.2204.02311"},{"key":"e_1_3_2_2_25_1","volume-title":"Jenny Lee, Christopher Hines, and Brent J. Hecht.","author":"Contractor Danish","year":"2020","unstructured":"Danish Contractor , Daniel McDuff , Julia Katherine Haines , Jenny Lee, Christopher Hines, and Brent J. Hecht. 2020 . Behavioral Use Licensing for Responsible AI. CoRR abs\/2011.03116 (2020). arXiv:2011.03116https:\/\/arxiv.org\/abs\/2011.03116 Danish Contractor, Daniel McDuff, Julia Katherine Haines, Jenny Lee, Christopher Hines, and Brent J. Hecht. 2020. Behavioral Use Licensing for Responsible AI. CoRR abs\/2011.03116 (2020). 
arXiv:2011.03116https:\/\/arxiv.org\/abs\/2011.03116"},{"key":"e_1_3_2_2_26_1","doi-asserted-by":"publisher","DOI":"10.1145\/3531146.3533108"},{"key":"e_1_3_2_2_27_1","doi-asserted-by":"publisher","DOI":"10.1145\/2145204.2145396"},{"key":"e_1_3_2_2_28_1","volume-title":"AI research - documents","author":"Dave Paresh","year":"2020","unstructured":"Paresh Dave and Jeffrey Dastin . 2020. Google told its scientists to \u2019strike a positive tone \u2019 in AI research - documents . Reuters ( 2020 ). https:\/\/www.reuters.com\/article\/us-alphabet-google-research-focus\/google-told-its-scientists-to-strike-a-positive-tone-in-ai-research-documents-idUSKBN28X1CB Paresh Dave and Jeffrey Dastin. 2020. Google told its scientists to \u2019strike a positive tone\u2019 in AI research - documents. Reuters (2020). https:\/\/www.reuters.com\/article\/us-alphabet-google-research-focus\/google-told-its-scientists-to-strike-a-positive-tone-in-ai-research-documents-idUSKBN28X1CB"},{"key":"e_1_3_2_2_29_1","volume-title":"Scarecrow: A Framework for Scrutinizing Machine Text. CoRR abs\/2107.01294","author":"Dou Yao","year":"2021","unstructured":"Yao Dou , Maxwell Forbes , Rik Koncel-Kedziorski , Noah A. Smith , and Yejin Choi . 2021 . Scarecrow: A Framework for Scrutinizing Machine Text. CoRR abs\/2107.01294 (2021). arXiv:2107.01294https:\/\/arxiv.org\/abs\/2107.01294 Yao Dou, Maxwell Forbes, Rik Koncel-Kedziorski, Noah A. Smith, and Yejin Choi. 2021. Scarecrow: A Framework for Scrutinizing Machine Text. CoRR abs\/2107.01294 (2021). arXiv:2107.01294https:\/\/arxiv.org\/abs\/2107.01294"},{"key":"e_1_3_2_2_30_1","volume-title":"Towards intellectual freedom in an AI Ethics Global Community. 
AI and Ethics 1 (04","author":"Ebell Christoph","year":"2021","unstructured":"Christoph Ebell , Ricardo Baeza-Yates , Richard Benjamins , Hengjin Cai , Mark Coeckelbergh , Tania Duarte , Merve Hickok , Aurelie Jacquet , Angela Kim , Joris Krijger , John Macintyre , Piyush Madhamshettiwar , Lauren Maffeo , Jeanna Matthews , Larry Medsker , Peter Smith , and Savannah Thais . 2021. Towards intellectual freedom in an AI Ethics Global Community. AI and Ethics 1 (04 2021 ). https:\/\/doi.org\/10.1007\/s43681-021-00052-5 10.1007\/s43681-021-00052-5 Christoph Ebell, Ricardo Baeza-Yates, Richard Benjamins, Hengjin Cai, Mark Coeckelbergh, Tania Duarte, Merve Hickok, Aurelie Jacquet, Angela Kim, Joris Krijger, John Macintyre, Piyush Madhamshettiwar, Lauren Maffeo, Jeanna Matthews, Larry Medsker, Peter Smith, and Savannah Thais. 2021. Towards intellectual freedom in an AI Ethics Global Community. AI and Ethics 1 (04 2021). https:\/\/doi.org\/10.1007\/s43681-021-00052-5"},{"key":"e_1_3_2_2_31_1","volume-title":"Proceedings of BigScience Episode #5 \u2013 Workshop on Challenges & Perspectives in Creating Large Language Models. Association for Computational Linguistics, virtual+Dublin. https:\/\/aclanthology.org\/2022","author":"Fan Angela","year":"2022","unstructured":"Angela Fan , Suzana Ilic , Thomas Wolf , and Matthias Gall\u00e9 ( Eds .). 2022 . Proceedings of BigScience Episode #5 \u2013 Workshop on Challenges & Perspectives in Creating Large Language Models. Association for Computational Linguistics, virtual+Dublin. https:\/\/aclanthology.org\/2022 .bigscience-1.0 Angela Fan, Suzana Ilic, Thomas Wolf, and Matthias Gall\u00e9 (Eds.). 2022. Proceedings of BigScience Episode #5 \u2013 Workshop on Challenges & Perspectives in Creating Large Language Models. Association for Computational Linguistics, virtual+Dublin. https:\/\/aclanthology.org\/2022.bigscience-1.0"},{"key":"e_1_3_2_2_32_1","unstructured":"Leo Gao. 2021. On the sizes of openai API models. 
https:\/\/blog.eleuther.ai\/gpt3-model-sizes\/  Leo Gao. 2021. On the sizes of openai API models. https:\/\/blog.eleuther.ai\/gpt3-model-sizes\/"},{"key":"e_1_3_2_2_33_1","volume-title":"The Pile: An 800GB Dataset of Diverse Text for Language Modeling. CoRR abs\/2101.00027","author":"Gao Leo","year":"2021","unstructured":"Leo Gao , Stella Biderman , Sid Black , Laurence Golding , Travis Hoppe , Charles Foster , Jason Phang , Horace He , Anish Thite , Noa Nabeshima , Shawn Presser , and Connor Leahy . 2021 . The Pile: An 800GB Dataset of Diverse Text for Language Modeling. CoRR abs\/2101.00027 (2021). arXiv:2101.00027https:\/\/arxiv.org\/abs\/2101.00027 Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. 2021. The Pile: An 800GB Dataset of Diverse Text for Language Modeling. CoRR abs\/2101.00027 (2021). arXiv:2101.00027https:\/\/arxiv.org\/abs\/2101.00027"},{"key":"e_1_3_2_2_34_1","volume-title":"Hanna M. Wallach, Hal Daum\u00e9 III, and Kate Crawford.","author":"Gebru Timnit","year":"2018","unstructured":"Timnit Gebru , Jamie Morgenstern , Briana Vecchione , Jennifer Wortman Vaughan , Hanna M. Wallach, Hal Daum\u00e9 III, and Kate Crawford. 2018 . Datasheets for Datasets. CoRR abs\/1803.09010 (2018). arXiv:1803.09010http:\/\/arxiv.org\/abs\/1803.09010 Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna M. Wallach, Hal Daum\u00e9 III, and Kate Crawford. 2018. Datasheets for Datasets. CoRR abs\/1803.09010 (2018). arXiv:1803.09010http:\/\/arxiv.org\/abs\/1803.09010"},{"key":"e_1_3_2_2_35_1","first-page":"19","volume-title":"Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations. Association for Computational Linguistics","author":"Gehrmann Sebastian","year":"2019","unstructured":"Sebastian Gehrmann , Hendrik Strobelt , and Alexander Rush . 2019 . 
GLTR: Statistical Detection and Visualization of Generated Text . In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations. Association for Computational Linguistics , Florence, Italy, 111\u2013116. https:\/\/doi.org\/10. 18653\/v1\/P 19 - 3019 10.18653\/v1 Sebastian Gehrmann, Hendrik Strobelt, and Alexander Rush. 2019. GLTR: Statistical Detection and Visualization of Generated Text. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations. Association for Computational Linguistics, Florence, Italy, 111\u2013116. https:\/\/doi.org\/10.18653\/v1\/P19-3019"},{"key":"e_1_3_2_2_36_1","volume-title":"Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media","author":"Gillespie Tarleton","unstructured":"Tarleton Gillespie . 2021. Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media . Yale University Press . Tarleton Gillespie. 2021. Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press."},{"key":"#cr-split#-e_1_3_2_2_37_1.1","unstructured":"Josh A. Goldstein Girish Sastry Micah Musser Renee DiResta Matthew Gentzel and Katerina Sedova. 2023. Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations. https:\/\/doi.org\/10.48550\/ARXIV.2301.04246 10.48550\/ARXIV.2301.04246"},{"key":"#cr-split#-e_1_3_2_2_37_1.2","unstructured":"Josh A. Goldstein Girish Sastry Micah Musser Renee DiResta Matthew Gentzel and Katerina Sedova. 2023. Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations. 
https:\/\/doi.org\/10.48550\/ARXIV.2301.04246"},{"key":"e_1_3_2_2_38_1","volume-title":"Detection of AI-Generated Synthetic Faces","author":"Gragnaniello Diego","unstructured":"Diego Gragnaniello , Francesco Marra , and Luisa Verdoliva . 2022. Detection of AI-Generated Synthetic Faces . Springer International Publishing , Cham , 191\u2013212. https:\/\/doi.org\/10.1007\/978-3-030-87664-7_9 10.1007\/978-3-030-87664-7_9 Diego Gragnaniello, Francesco Marra, and Luisa Verdoliva. 2022. Detection of AI-Generated Synthetic Faces. Springer International Publishing, Cham, 191\u2013212. https:\/\/doi.org\/10.1007\/978-3-030-87664-7_9"},{"key":"e_1_3_2_2_39_1","doi-asserted-by":"publisher","DOI":"10.1145\/3479610"},{"key":"e_1_3_2_2_40_1","unstructured":"Will Douglas Heaven. 2022. Why Meta\u2019s latest large language model survived only three days online. https:\/\/www.technologyreview.com\/2022\/11\/18\/1063487\/meta-large-language-model-ai-only-survived-three-days-gpt-3-science\/  Will Douglas Heaven. 2022. Why Meta\u2019s latest large language model survived only three days online. https:\/\/www.technologyreview.com\/2022\/11\/18\/1063487\/meta-large-language-model-ai-only-survived-three-days-gpt-3-science\/"},{"key":"e_1_3_2_2_41_1","first-page":"16","volume-title":"Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics","author":"Hovy Dirk","year":"1865","unstructured":"Dirk Hovy and Shannon L. Spruit . 2016. The Social Impact of Natural Language Processing . In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics , Berlin, Germany, 591\u2013598. https:\/\/doi.org\/10. 1865 3\/v1\/P 16 - 2096 10.18653\/v1 Dirk Hovy and Shannon L. Spruit. 2016. The Social Impact of Natural Language Processing. 
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, Berlin, Germany, 591\u2013598. https:\/\/doi.org\/10.18653\/v1\/P16-2096"},{"key":"e_1_3_2_2_42_1","volume-title":"Lakshmanan","author":"Jawahar Ganesh","year":"2020","unstructured":"Ganesh Jawahar , Muhammad Abdul-Mageed , and Laks V. S . Lakshmanan . 2020 . Automatic Detection of Machine Generated Text: A Critical Survey. CoRR abs\/2011.01314 (2020). arXiv:2011.01314https:\/\/arxiv.org\/abs\/2011.01314 Ganesh Jawahar, Muhammad Abdul-Mageed, and Laks V. S. Lakshmanan. 2020. Automatic Detection of Machine Generated Text: A Critical Survey. CoRR abs\/2011.01314 (2020). arXiv:2011.01314https:\/\/arxiv.org\/abs\/2011.01314"},{"key":"e_1_3_2_2_43_1","volume-title":"Don\u2019t ask if artificial intelligence is good or fair, ask how it shifts power. Nature News (July","author":"Kalluri Pratyusha","year":"2020","unstructured":"Pratyusha Kalluri . 2020. Don\u2019t ask if artificial intelligence is good or fair, ask how it shifts power. Nature News (July 2020 ). https:\/\/www.nature.com\/articles\/d41586-020-02003-2 Pratyusha Kalluri. 2020. Don\u2019t ask if artificial intelligence is good or fair, ask how it shifts power. Nature News (July 2020). https:\/\/www.nature.com\/articles\/d41586-020-02003-2"},{"key":"e_1_3_2_2_44_1","volume-title":"Article 20 (feb","author":"Kaloudi Nektaria","year":"2020","unstructured":"Nektaria Kaloudi and Jingyue Li. 2020. The AI-Based Cyber Threat Landscape: A Survey. ACM Comput. Surv. 53, 1 , Article 20 (feb 2020 ), 34 pages. https:\/\/doi.org\/10.1145\/3372823 10.1145\/3372823 Nektaria Kaloudi and Jingyue Li. 2020. The AI-Based Cyber Threat Landscape: A Survey. ACM Comput. Surv. 53, 1, Article 20 (feb 2020), 34 pages. 
https:\/\/doi.org\/10.1145\/3372823"},{"key":"#cr-split#-e_1_3_2_2_45_1.1","unstructured":"Heidy Khlaaf Pamela Mishkin Joshua Achiam Gretchen Krueger and Miles Brundage. 2022. A Hazard Analysis Framework for Code Synthesis Large Language Models. https:\/\/doi.org\/10.48550\/ARXIV.2207.14157 10.48550\/ARXIV.2207.14157"},{"key":"#cr-split#-e_1_3_2_2_45_1.2","unstructured":"Heidy Khlaaf Pamela Mishkin Joshua Achiam Gretchen Krueger and Miles Brundage. 2022. A Hazard Analysis Framework for Code Synthesis Large Language Models. https:\/\/doi.org\/10.48550\/ARXIV.2207.14157"},{"key":"#cr-split#-e_1_3_2_2_46_1.1","unstructured":"John Kirchenbauer Jonas Geiping Yuxin Wen Jonathan Katz Ian Miers and Tom Goldstein. 2023. A Watermark for Large Language Models. https:\/\/doi.org\/10.48550\/ARXIV.2301.10226 10.48550\/ARXIV.2301.10226"},{"key":"#cr-split#-e_1_3_2_2_46_1.2","unstructured":"John Kirchenbauer Jonas Geiping Yuxin Wen Jonathan Katz Ian Miers and Tom Goldstein. 2023. A Watermark for Large Language Models. https:\/\/doi.org\/10.48550\/ARXIV.2301.10226"},{"key":"e_1_3_2_2_47_1","doi-asserted-by":"publisher","DOI":"10.1017\/XPS.2020.37"},{"key":"e_1_3_2_2_48_1","unstructured":"Percy Liang Rishi Bommasani Kathleen Creel and Rob Reich. 2022. The time is now to develop community norms for the release of Foundation Models. https:\/\/hai.stanford.edu\/news\/time-now-develop-community-norms-release-foundation-models  Percy Liang Rishi Bommasani Kathleen Creel and Rob Reich. 2022. The time is now to develop community norms for the release of Foundation Models. 
https:\/\/hai.stanford.edu\/news\/time-now-develop-community-norms-release-foundation-models"},{"key":"e_1_3_2_2_49_1","volume-title":"Shibani Santurkar, Surya Ganguli, Tatsunori Hashimoto, Thomas Icard, Tianyi Zhang, Vishrav Chaudhary, William Wang, Xuechen Li, Yifan Mai, Yuhui Zhang, and Yuta Koreeda.","author":"Liang Percy","year":"2022","unstructured":"Percy Liang , Rishi Bommasani , Tony Lee , Dimitris Tsipras , Dilara Soylu , Michihiro Yasunaga , Yian Zhang , Deepak Narayanan , Yuhuai Wu , Ananya Kumar , Benjamin Newman , Binhang Yuan , Bobby Yan , Ce Zhang , Christian Cosgrove , Christopher D. Manning , Christopher R\u00e9 , Diana Acosta-Navas , Drew A. Hudson , Eric Zelikman , Esin Durmus , Faisal Ladhak , Frieda Rong , Hongyu Ren , Huaxiu Yao , Jue Wang , Keshav Santhanam , Laurel Orr , Lucia Zheng , Mert Yuksekgonul , Mirac Suzgun , Nathan Kim , Neel Guha , Niladri Chatterji , Omar Khattab , Peter Henderson , Qian Huang , Ryan Chi , Sang Michael Xie , Shibani Santurkar, Surya Ganguli, Tatsunori Hashimoto, Thomas Icard, Tianyi Zhang, Vishrav Chaudhary, William Wang, Xuechen Li, Yifan Mai, Yuhui Zhang, and Yuta Koreeda. 2022 . Holistic Evaluation of Language Models . https:\/\/doi.org\/10.48550\/ARXIV.2211.09110 10.48550\/ARXIV.2211.09110 Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, Benjamin Newman, Binhang Yuan, Bobby Yan, Ce Zhang, Christian Cosgrove, Christopher D. Manning, Christopher R\u00e9, Diana Acosta-Navas, Drew A. 
Hudson, Eric Zelikman, Esin Durmus, Faisal Ladhak, Frieda Rong, Hongyu Ren, Huaxiu Yao, Jue Wang, Keshav Santhanam, Laurel Orr, Lucia Zheng, Mert Yuksekgonul, Mirac Suzgun, Nathan Kim, Neel Guha, Niladri Chatterji, Omar Khattab, Peter Henderson, Qian Huang, Ryan Chi, Sang Michael Xie, Shibani Santurkar, Surya Ganguli, Tatsunori Hashimoto, Thomas Icard, Tianyi Zhang, Vishrav Chaudhary, William Wang, Xuechen Li, Yifan Mai, Yuhui Zhang, and Yuta Koreeda. 2022. Holistic Evaluation of Language Models. https:\/\/doi.org\/10.48550\/ARXIV.2211.09110"},{"key":"e_1_3_2_2_50_1","unstructured":"Thomas Liao. 2022. Foundation Model Tracker. https:\/\/foundationmodeltracker.com.  Thomas Liao. 2022. Foundation Model Tracker. https:\/\/foundationmodeltracker.com."},{"key":"e_1_3_2_2_51_1","unstructured":"Zachary C. Lipton. 2019. OpenAI trains language model mass hysteria ensues. https:\/\/www.approximatelycorrect.com\/2019\/02\/17\/openai-trains-language-model-mass-hysteria-ensues\/  Zachary C. Lipton. 2019. OpenAI trains language model mass hysteria ensues. https:\/\/www.approximatelycorrect.com\/2019\/02\/17\/openai-trains-language-model-mass-hysteria-ensues\/"},{"key":"#cr-split#-e_1_3_2_2_52_1.1","unstructured":"Travis Mandel Jahnu Best Randall H. Tanaka Hiram Temple Chansen Haili Kayla Schlectinger and Roy Szeto. 2019. Let's Keep It Safe: Designing User Interfaces that Allow Everyone to Contribute to AI Safety. https:\/\/doi.org\/10.48550\/ARXIV.1907.04446 10.48550\/ARXIV.1907.04446"},{"key":"#cr-split#-e_1_3_2_2_52_1.2","unstructured":"Travis Mandel Jahnu Best Randall H. Tanaka Hiram Temple Chansen Haili Kayla Schlectinger and Roy Szeto. 2019. Let's Keep It Safe: Designing User Interfaces that Allow Everyone to Contribute to AI Safety. https:\/\/doi.org\/10.48550\/ARXIV.1907.04446"},{"key":"e_1_3_2_2_53_1","volume-title":"The Radicalization Risks of GPT-3 and Advanced Neural Language Models. 
CoRR abs\/2009.06807","author":"McGuffie Kris","year":"2020","unstructured":"Kris McGuffie and Alex Newhouse . 2020. The Radicalization Risks of GPT-3 and Advanced Neural Language Models. CoRR abs\/2009.06807 ( 2020 ). arXiv:2009.06807https:\/\/arxiv.org\/abs\/2009.06807 Kris McGuffie and Alex Newhouse. 2020. The Radicalization Risks of GPT-3 and Advanced Neural Language Models. CoRR abs\/2009.06807 (2020). arXiv:2009.06807https:\/\/arxiv.org\/abs\/2009.06807"},{"key":"e_1_3_2_2_54_1","volume-title":"Jason Phang, and Samuel R. Bowman.","author":"Michael Julian","year":"2022","unstructured":"Julian Michael , Ari Holtzman , Alicia Parrish , Aaron Mueller , Alex Wang , Angelica Chen , Divyam Madaan , Nikita Nangia , Richard Yuanzhe Pang , Jason Phang, and Samuel R. Bowman. 2022 . What Do NLP Researchers Believe? Results of the NLP Community Metasurvey . https:\/\/doi.org\/10.48550\/ARXIV.2208.12852 10.48550\/ARXIV.2208.12852 Julian Michael, Ari Holtzman, Alicia Parrish, Aaron Mueller, Alex Wang, Angelica Chen, Divyam Madaan, Nikita Nangia, Richard Yuanzhe Pang, Jason Phang, and Samuel R. Bowman. 2022. What Do NLP Researchers Believe? Results of the NLP Community Metasurvey. https:\/\/doi.org\/10.48550\/ARXIV.2208.12852"},{"key":"e_1_3_2_2_55_1","unstructured":"Microsoft. 2022. Microsoft Turing Academic Program (MS-TAP). https:\/\/www.microsoft.com\/en-us\/research\/collaboration\/microsoft-turing-academic-program\/  Microsoft. 2022. Microsoft Turing Academic Program (MS-TAP). https:\/\/www.microsoft.com\/en-us\/research\/collaboration\/microsoft-turing-academic-program\/"},{"key":"e_1_3_2_2_56_1","unstructured":"Midjourney. 2022. Quick start guide. https:\/\/midjourney.gitbook.io\/docs\/  Midjourney. 2022. Quick start guide. https:\/\/midjourney.gitbook.io\/docs\/"},{"key":"e_1_3_2_2_57_1","unstructured":"Pamela Mishkin Lama Ahmad Miles Brundage Gretchen Krueger and Girish Sastry. 2022. DALL\u00b7E 2 Preview - Risks and Limitations. (2022). 
https:\/\/github.com\/openai\/dalle-2-preview\/blob\/main\/system-card.md"},{"key":"e_1_3_2_2_58_1","volume-title":"Inioluwa Deborah Raji, and Timnit Gebru","author":"Mitchell Margaret","year":"2018","unstructured":"Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. 2018. Model Cards for Model Reporting. CoRR abs\/1810.03993 (2018). arXiv:1810.03993 http:\/\/arxiv.org\/abs\/1810.03993"},{"key":"e_1_3_2_2_59_1","volume-title":"Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence. CoRR abs\/2007.04068","author":"Mohamed Shakir","year":"2020","unstructured":"Shakir Mohamed, Marie-Therese Png, and William Isaac. 2020. Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence. CoRR abs\/2007.04068 (2020). arXiv:2007.04068 https:\/\/arxiv.org\/abs\/2007.04068"},{"key":"e_1_3_2_2_60_1","unstructured":"Emad Mostaque. 2022. Stable diffusion public release. https:\/\/stability.ai\/blog\/stable-diffusion-public-release"},{"key":"e_1_3_2_2_61_1","unstructured":"NAIRRTF. 2021. The National Artificial Intelligence Research Resource Task Force (NAIRRTF). https:\/\/www.ai.gov\/nairrtf\/"},{"key":"e_1_3_2_2_62_1","volume-title":"Bowman","author":"Nangia Nikita","year":"2020","unstructured":"Nikita Nangia, Clara Vania, Rasika Bhalerao, and Samuel R. Bowman. 2020. CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models. CoRR abs\/2010.00133 (2020). arXiv:2010.00133 https:\/\/arxiv.org\/abs\/2010.00133"},{"key":"e_1_3_2_2_63_1","volume-title":"Duc Thanh Nguyen, and Saeid Nahavandi.","author":"Nguyen Thanh Thi","year":"2019","unstructured":"Thanh Thi Nguyen, Cuong M. Nguyen, Dung Tien Nguyen, Duc Thanh Nguyen, and Saeid Nahavandi. 2019. Deep Learning for Deepfakes Creation and Detection. CoRR abs\/1909.11573 (2019). arXiv:1909.11573 http:\/\/arxiv.org\/abs\/1909.11573"},{"key":"e_1_3_2_2_64_1","volume-title":"Algorithms of oppression how search engines reinforce racism","author":"Noble Safiya Umoja","unstructured":"Safiya Umoja Noble. 2018. Algorithms of oppression how search engines reinforce racism. New York University Press."},{"key":"e_1_3_2_2_65_1","unstructured":"OpenAI. 2019. GPT-2 Model Card. https:\/\/github.com\/openai\/gpt-2\/blob\/master\/model_card.md"},{"key":"e_1_3_2_2_66_1","unstructured":"OpenAI. 2020. GPT-3 Model Card. https:\/\/github.com\/openai\/gpt-3\/blob\/master\/model-card.md"},{"key":"e_1_3_2_2_67_1","unstructured":"OpenAI. 2022. ChatGPT: Optimizing language models for dialogue. https:\/\/openai.com\/blog\/chatgpt\/"},{"key":"e_1_3_2_2_68_1","unstructured":"OpenAI. 2022. Dall\u00b7E API now available in public beta. https:\/\/openai.com\/blog\/dall-e-api-now-available-in-public-beta\/"},{"key":"e_1_3_2_2_69_1","unstructured":"OpenAI. 2022. What\u2019s the rate limit for the dall\u00b7e API? How can I request an increase? https:\/\/help.openai.com\/en\/articles\/6696591-what-s-the-rate-limit-for-the-dall-e-api-how-can-i-request-an-increase"},{"key":"#cr-split#-e_1_3_2_2_70_1.1","unstructured":"Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. 
https:\/\/doi.org\/10.48550\/ARXIV.2203.02155"},{"key":"#cr-split#-e_1_3_2_2_70_1.2","unstructured":"Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. https:\/\/doi.org\/10.48550\/ARXIV.2203.02155"},{"key":"e_1_3_2_2_71_1","volume-title":"Reducing malicious use of synthetic media research: Considerations and potential release practices for machine learning. CoRR abs\/1907.11274","author":"Ovadya Aviv","year":"2019","unstructured":"Aviv Ovadya and Jess Whittlestone. 2019. Reducing malicious use of synthetic media research: Considerations and potential release practices for machine learning. CoRR abs\/1907.11274 (2019). arXiv:1907.11274 http:\/\/arxiv.org\/abs\/1907.11274"},{"key":"e_1_3_2_2_72_1","unstructured":"Chavez Procope, Adeel Cheema, David Adkins, Bilal Alsallakh, Nekesha Green, Emily McReynolds, Grace Pehl, Erin Wang, and Polina Zvyagina. 2022. System-Level Transparency of Machine Learning. (2022). https:\/\/ai.facebook.com\/research\/publications\/system-level-transparency-of-machine-learning\/"},{"key":"e_1_3_2_2_73_1","volume-title":"et. al","author":"Rae Jack W.","year":"2021","unstructured":"Jack W. 
Rae, Sebastian Borgeaud, Trevor Cai, et al. 2021. Scaling Language Models: Methods, Analysis & Insights from Training Gopher. CoRR abs\/2112.11446 (2021). arXiv:2112.11446 https:\/\/arxiv.org\/abs\/2112.11446"},{"key":"e_1_3_2_2_74_1","volume-title":"AI and the Everything in the Whole Wide World Benchmark. CoRR abs\/2111.15366","author":"Raji Inioluwa Deborah","year":"2021","unstructured":"Inioluwa Deborah Raji, Emily M. Bender, Amandalynne Paullada, Emily Denton, and Alex Hanna. 2021. AI and the Everything in the Whole Wide World Benchmark. CoRR abs\/2111.15366 (2021). arXiv:2111.15366 https:\/\/arxiv.org\/abs\/2111.15366"},{"key":"e_1_3_2_2_75_1","doi-asserted-by":"publisher","DOI":"10.1145\/3351095.3372873"},{"key":"e_1_3_2_2_76_1","doi-asserted-by":"publisher","DOI":"10.1145\/3514094.3534181"},{"key":"#cr-split#-e_1_3_2_2_77_1.1","unstructured":"Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. 2022. Hierarchical Text-Conditional Image Generation with CLIP Latents. https:\/\/doi.org\/10.48550\/ARXIV.2204.06125"},{"key":"#cr-split#-e_1_3_2_2_77_1.2","unstructured":"Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. 2022. Hierarchical Text-Conditional Image Generation with CLIP Latents. https:\/\/doi.org\/10.48550\/ARXIV.2204.06125"},{"key":"#cr-split#-e_1_3_2_2_78_1.1","unstructured":"Javier Rando, Daniel Paleka, David Lindner, Lennart Heim, and Florian Tram\u00e8r. 2022. Red-Teaming the Stable Diffusion Safety Filter. 
https:\/\/doi.org\/10.48550\/ARXIV.2210.04610"},{"key":"#cr-split#-e_1_3_2_2_78_1.2","unstructured":"Javier Rando, Daniel Paleka, David Lindner, Lennart Heim, and Florian Tram\u00e8r. 2022. Red-Teaming the Stable Diffusion Safety Filter. https:\/\/doi.org\/10.48550\/ARXIV.2210.04610"},{"key":"e_1_3_2_2_79_1","volume-title":"Now AI can write students","author":"Reich Rob","year":"2022","unstructured":"Rob Reich. 2022. Now AI can write students\u2019 essays for them, will everyone become a cheat? https:\/\/www.theguardian.com\/commentisfree\/2022\/nov\/28\/ai-students-essays-cheat-teachers-plagiarism-tech"},{"key":"e_1_3_2_2_80_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52688.2022.01042"},{"key":"e_1_3_2_2_81_1","unstructured":"Robin Rombach and Patrick Esser. 2022. Stable-diffusion License. https:\/\/github.com\/CompVis\/stable-diffusion\/blob\/main\/LICENSE"},{"key":"e_1_3_2_2_82_1","unstructured":"JB Rubinovitz. 2018. Bias bounty programs as a method of combatting bias in AI. https:\/\/rubinovitz.com\/2018\/08\/01\/bias-bounty-programs-as-a-method-of-combatting\/"},{"key":"e_1_3_2_2_83_1","volume-title":"Burcu Karagol Ayan, S. Sara Mahdavi, Rapha Gontijo Lopes, Tim Salimans, Jonathan Ho, David J Fleet, and Mohammad Norouzi.","author":"Saharia Chitwan","year":"2022","unstructured":"Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S. 
Sara Mahdavi, Rapha Gontijo Lopes, Tim Salimans, Jonathan Ho, David J Fleet, and Mohammad Norouzi. 2022. Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding. https:\/\/doi.org\/10.48550\/ARXIV.2205.11487"},{"key":"e_1_3_2_2_84_1","volume-title":"\u201cRelease","author":"Sastry Girish","year":"2021","unstructured":"Girish Sastry. 2021. Beyond \u201cRelease\u201d vs. \u201cNot Release\u201d. https:\/\/crfm.stanford.edu\/commentary\/2021\/10\/18\/sastry.html"},{"key":"e_1_3_2_2_85_1","volume-title":"Emilio Garcia, and Gurleen Virk.","author":"Shelby Renee","year":"2022","unstructured":"Renee Shelby, Shalaleh Rismani, Kathryn Henne, AJung Moon, Negar Rostamzadeh, Paul Nicholas, N\u2019Mah Yilla, Jess Gallegos, Andrew Smart, Emilio Garcia, and Gurleen Virk. 2022. Sociotechnical Harms: Scoping a Taxonomy for Harm Reduction. https:\/\/doi.org\/10.48550\/ARXIV.2210.05791"},{"key":"#cr-split#-e_1_3_2_2_86_1.1","doi-asserted-by":"crossref","unstructured":"Toby Shevlane. 2022. Structured access: an emerging paradigm for safe AI deployment. 
https:\/\/doi.org\/10.48550\/ARXIV.2201.05159","DOI":"10.1093\/oxfordhb\/9780197579329.013.39"},{"key":"#cr-split#-e_1_3_2_2_86_1.2","doi-asserted-by":"crossref","unstructured":"Toby Shevlane. 2022. Structured access: an emerging paradigm for safe AI deployment. https:\/\/doi.org\/10.48550\/ARXIV.2201.05159","DOI":"10.1093\/oxfordhb\/9780197579329.013.39"},{"key":"e_1_3_2_2_87_1","volume-title":"The Offense-Defense Balance of Scientific Knowledge: Does Publishing AI Research Reduce Misuse? CoRR abs\/2001.00463","author":"Shevlane Toby","year":"2020","unstructured":"Toby Shevlane and Allan Dafoe. 2020. The Offense-Defense Balance of Scientific Knowledge: Does Publishing AI Research Reduce Misuse? CoRR abs\/2001.00463 (2020). arXiv:2001.00463 http:\/\/arxiv.org\/abs\/2001.00463"},{"key":"e_1_3_2_2_88_1","unstructured":"Tom Simonite. 2021. It began as an AI-fueled dungeon game. It got much darker. https:\/\/www.wired.com\/story\/ai-fueled-dungeon-game-got-much-darker\/"},{"key":"#cr-split#-e_1_3_2_2_89_1.1","unstructured":"Uriel Singer, Adam Polyak, Thomas Hayes, Xi Yin, Jie An, Songyang Zhang, Qiyuan Hu, Harry Yang, Oron Ashual, Oran Gafni, Devi Parikh, Sonal Gupta, and Yaniv Taigman. 2022. Make-A-Video: Text-to-Video Generation without Text-Video Data. https:\/\/doi.org\/10.48550\/ARXIV.2209.14792"},{"key":"#cr-split#-e_1_3_2_2_89_1.2","unstructured":"Uriel Singer, Adam Polyak, Thomas Hayes, Xi Yin, Jie An, Songyang Zhang, Qiyuan Hu, Harry Yang, Oron Ashual, Oran Gafni, Devi Parikh, Sonal Gupta, and Yaniv Taigman. 2022. 
Make-A-Video: Text-to-Video Generation without Text-Video Data. https:\/\/doi.org\/10.48550\/ARXIV.2209.14792"},{"key":"e_1_3_2_2_90_1","volume-title":"Release Strategies and the Social Impacts of Language Models. CoRR abs\/1908.09203","author":"Solaiman Irene","year":"2019","unstructured":"Irene Solaiman, Miles Brundage, Jack Clark, Amanda Askell, Ariel Herbert-Voss, Jeff Wu, Alec Radford, and Jasmine Wang. 2019. Release Strategies and the Social Impacts of Language Models. CoRR abs\/1908.09203 (2019). arXiv:1908.09203 http:\/\/arxiv.org\/abs\/1908.09203"},{"key":"e_1_3_2_2_91_1","unstructured":"Irene Solaiman and Christy Dennison. 2021. Process for Adapting Language Models to Society (PALMS) with Values-Targeted Datasets. arXiv:2106.10328 [cs.CL]"},{"key":"e_1_3_2_2_92_1","unstructured":"PAI Staff. 2022. Publication norms for responsible AI. https:\/\/partnershiponai.org\/workstream\/publication-norms-for-responsible-ai\/"},{"key":"#cr-split#-e_1_3_2_2_93_1.1","doi-asserted-by":"crossref","unstructured":"Luke Stark, Daniel Greene, and Anna Hoffmann. 2021. Critical Perspectives on Governance Mechanisms for AI\/ML Systems. 257-280. https:\/\/doi.org\/10.1007\/978-3-030-56286-1_9","DOI":"10.1007\/978-3-030-56286-1_9"},{"key":"#cr-split#-e_1_3_2_2_93_1.2","doi-asserted-by":"crossref","unstructured":"Luke Stark, Daniel Greene, and Anna Hoffmann. 2021. 
Critical Perspectives on Governance Mechanisms for AI\/ML Systems. 257-280. https:\/\/doi.org\/10.1007\/978-3-030-56286-1_9","DOI":"10.1007\/978-3-030-56286-1_9"},{"key":"e_1_3_2_2_94_1","volume-title":"Proceedings of BigScience Episode #5 \u2013 Workshop on Challenges & Perspectives in Creating Large Language Models. Association for Computational Linguistics, virtual+Dublin, 26\u201341","author":"Talat Zeerak","year":"2022","unstructured":"Zeerak Talat, Aur\u00e9lie N\u00e9v\u00e9ol, Stella Biderman, Miruna Clinciu, Manan Dey, Shayne Longpre, Sasha Luccioni, Maraim Masoud, Margaret Mitchell, Dragomir Radev, Shanya Sharma, Arjun Subramonian, Jaesung Tae, Samson Tan, Deepak Tunuguntla, and Oskar Van Der Wal. 2022. You reap what you sow: On the Challenges of Bias Evaluation Under Multilingual Settings. In Proceedings of BigScience Episode #5 \u2013 Workshop on Challenges & Perspectives in Creating Large Language Models. Association for Computational Linguistics, virtual+Dublin, 26\u201341. https:\/\/doi.org\/10.18653\/v1\/2022.bigscience-1.3"},{"key":"e_1_3_2_2_95_1","volume-title":"Jamie Hall, et. al..","author":"Thoppilan Romal","year":"2022","unstructured":"Romal Thoppilan, Daniel De Freitas, Jamie Hall, et al. 2022. LaMDA: Language Models for Dialog Applications. 
https:\/\/doi.org\/10.48550\/ARXIV.2201.08239"},{"key":"e_1_3_2_2_96_1","volume-title":"AI as the next GPT: a Political-Economy Perspective. Working Paper 24245","author":"Trajtenberg Manuel","unstructured":"Manuel Trajtenberg. 2018. AI as the next GPT: a Political-Economy Perspective. Working Paper 24245. National Bureau of Economic Research. https:\/\/doi.org\/10.3386\/w24245"},{"key":"e_1_3_2_2_97_1","volume-title":"Nitish Shirish Keskar, and Richard Socher","author":"Varshney Lav R.","year":"2020","unstructured":"Lav R. Varshney, Nitish Shirish Keskar, and Richard Socher. 2020. Limits of Detecting Text Generated by Large-Scale Language Models. CoRR abs\/2002.03438 (2020). arXiv:2002.03438 https:\/\/arxiv.org\/abs\/2002.03438"},{"key":"e_1_3_2_2_98_1","article-title":"Automated Trolling: The Case of GPT-4Chan When Artificial Intelligence is as Easy as Writing. Interfaces","volume":"3","author":"Vee Annette","year":"2022","unstructured":"Annette Vee. 2022. Automated Trolling: The Case of GPT-4Chan When Artificial Intelligence is as Easy as Writing. Interfaces: Essays and Reviews in Computing and Culture Vol. 3, Charles Babbage Institute, University of Minnesota (2022), 102\u2013111.","journal-title":"Essays and Reviews in Computing and Culture"},{"key":"e_1_3_2_2_99_1","unstructured":"Ben Wang. 2021. Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX. https:\/\/github.com\/kingoflolz\/mesh-transformer-jax."},{"key":"e_1_3_2_2_100_1","volume-title":"William Isaac, Sean Legassick, Geoffrey Irving, and Iason Gabriel.","author":"Weidinger Laura","year":"2021","unstructured":"Laura Weidinger, John Mellor, Maribeth Rauh, Conor Griffin, Jonathan Uesato, Po-Sen Huang, Myra Cheng, Mia Glaese, Borja Balle, Atoosa Kasirzadeh, Zac Kenton, Sasha Brown, Will Hawkins, Tom Stepleton, Courtney Biles, Abeba Birhane, Julia Haas, Laura Rimell, Lisa Anne Hendricks, William Isaac, Sean Legassick, Geoffrey Irving, and Iason Gabriel. 2021. Ethical and social risks of harm from Language Models. https:\/\/doi.org\/10.48550\/ARXIV.2112.04359"},{"key":"e_1_3_2_2_101_1","doi-asserted-by":"publisher","DOI":"10.1145\/3488666"},{"key":"e_1_3_2_2_102_1","volume-title":"The tension between openness and prudence in AI research. CoRR abs\/1910.01170","author":"Whittlestone Jess","year":"2019","unstructured":"Jess Whittlestone and Aviv Ovadya. 2019. The tension between openness and prudence in AI research. 
CoRR abs\/1910.01170 (2019). arXiv:1910.01170 http:\/\/arxiv.org\/abs\/1910.01170"},{"key":"e_1_3_2_2_103_1","doi-asserted-by":"publisher","DOI":"10.1145\/3531146.3533779"},{"key":"e_1_3_2_2_104_1","volume-title":"Workshop, :, Teven Le Scao, Angela Fan, Christopher Akiki, et. al.","year":"2022","unstructured":"BigScience Workshop, Teven Le Scao, Angela Fan, Christopher Akiki, et al. 2022. BLOOM: A 176B-Parameter Open-Access Multilingual Language Model. https:\/\/doi.org\/10.48550\/ARXIV.2211.05100"},{"key":"e_1_3_2_2_105_1","volume-title":"Methods, Attack Resistance, and Evaluations. CoRR abs\/2011.13564","author":"Xue Mingfu","year":"2020","unstructured":"Mingfu Xue, Can He, Jian Wang, and Weiqiang Liu. 2020. DNN Intellectual Property Protection: Taxonomy, Methods, Attack Resistance, and Evaluations. CoRR abs\/2011.13564 (2020). arXiv:2011.13564 https:\/\/arxiv.org\/abs\/2011.13564"},{"key":"e_1_3_2_2_106_1","volume-title":"Proceedings of the 2018 on Asia Conference on Computer and Communications Security (Incheon, Republic of Korea) (ASIACCS \u201918)","author":"Zhang Jialong","year":"2018","unstructured":"Jialong Zhang, Zhongshu Gu, Jiyong Jang, Hui Wu, Marc Ph. Stoecklin, Heqing Huang, and Ian Molloy. 2018. Protecting Intellectual Property of Deep Neural Networks with Watermarking. 
In Proceedings of the 2018 on Asia Conference on Computer and Communications Security (Incheon, Republic of Korea) (ASIACCS \u201918). Association for Computing Machinery, New York, NY, USA, 159\u2013172. https:\/\/doi.org\/10.1145\/3196494.3196550"},{"key":"e_1_3_2_2_107_1","volume-title":"Robust Invisible Video Watermarking with Attention. CoRR abs\/1909.01285","author":"Zhang Kevin Alex","year":"2019","unstructured":"Kevin Alex Zhang, Lei Xu, Alfredo Cuesta-Infante, and Kalyan Veeramachaneni. 2019. Robust Invisible Video Watermarking with Attention. CoRR abs\/1909.01285 (2019). arXiv:1909.01285 http:\/\/arxiv.org\/abs\/1909.01285"},{"key":"e_1_3_2_2_108_1","volume-title":"Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, and Luke Zettlemoyer.","author":"Zhang Susan","year":"2022","unstructured":"Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, and Luke Zettlemoyer. 2022. OPT: Open Pre-trained Transformer Language Models. 
https:\/\/doi.org\/10.48550\/ARXIV.2205.01068"}],"event":{"name":"FAccT '23: the 2023 ACM Conference on Fairness, Accountability, and Transparency","location":"Chicago IL USA","acronym":"FAccT '23"},"container-title":["2023 ACM Conference on Fairness, Accountability, and Transparency"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3593013.3593981","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3593013.3593981","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T16:48:02Z","timestamp":1750178882000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3593013.3593981"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,6,12]]},"references-count":118,"alternative-id":["10.1145\/3593013.3593981","10.1145\/3593013"],"URL":"https:\/\/doi.org\/10.1145\/3593013.3593981","relation":{},"subject":[],"published":{"date-parts":[[2023,6,12]]},"assertion":[{"value":"2023-06-12","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}