{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,15]],"date-time":"2026-01-15T15:02:22Z","timestamp":1768489342336,"version":"3.49.0"},"reference-count":226,"publisher":"Springer Science and Business Media LLC","issue":"2","license":[{"start":{"date-parts":[[2025,3,22]],"date-time":"2025-03-22T00:00:00Z","timestamp":1742601600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,3,22]],"date-time":"2025-03-22T00:00:00Z","timestamp":1742601600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Ethics Inf Technol"],"published-print":{"date-parts":[[2025,6]]},"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>Authorship Attribution (AA) approaches in Natural Language Processing (NLP) are important in various domains, including forensic analysis and cybercrime. However, they pose Ethical, Legal, and Societal Implications\/Aspects (ELSI\/ELSA) challenges that remain underexplored. Inspired by foundational AI ethics guidelines and frameworks, this research introduces a comprehensive framework of responsible guidelines that focuses on AA tasks in NLP, which are tailored to different stakeholders and development phases. These guidelines are structured around four core principles: privacy and data protection, fairness and non-discrimination, transparency and explainability, and societal impact. Furthermore, to illustrate a practical application of our guidelines, we apply them to a recent AA study that targets identifying and linking potential human trafficking vendors. 
We believe the proposed guidelines can assist researchers and practitioners in justifying their decisions, assisting ethical committees in promoting responsible practices, and identifying ethical concerns related to NLP-based AA approaches. Our study aims to contribute to ensuring the responsible development and deployment of AA tools.<\/jats:p>","DOI":"10.1007\/s10676-025-09821-w","type":"journal-article","created":{"date-parts":[[2025,3,22]],"date-time":"2025-03-22T23:16:36Z","timestamp":1742685396000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":3,"title":["Responsible guidelines for authorship attribution tasks in NLP"],"prefix":"10.1007","volume":"27","author":[{"ORCID":"https:\/\/orcid.org\/0009-0001-3282-7737","authenticated-orcid":false,"given":"Vageesh","family":"Saxena","sequence":"first","affiliation":[]},{"given":"Aurelia","family":"Tam\u00f2-Larrieux","sequence":"additional","affiliation":[]},{"given":"Gijs","family":"Van Dijck","sequence":"additional","affiliation":[]},{"given":"Gerasimos","family":"Spanakis","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,3,22]]},"reference":[{"key":"9821_CR1","doi-asserted-by":"publisher","first-page":"98415","DOI":"10.1109\/ACCESS.2023.3310813","volume":"11","author":"S Abbas","year":"2023","unstructured":"Abbas, S., Alsubai, S., Sampedro, G. A., et al. (2023). Active learning for news article\u2019s authorship identification. IEEE Access, 11, 98415\u201398426. https:\/\/doi.org\/10.1109\/ACCESS.2023.3310813","journal-title":"IEEE Access"},{"key":"9821_CR2","unstructured":"Agarwal, S. (2020). Trade-offs between fairness, interpretability, and privacy in machine learning. https:\/\/api.semanticscholar.org\/CorpusID:229087464"},{"key":"9821_CR3","doi-asserted-by":"crossref","unstructured":"Ai, B., Wang, Y., & Tan, Y., et\u00a0al. (2022). Whodunit? learning to contrast for authorship attribution. 
In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, Online only, pp 1142\u20131157, https:\/\/aclanthology.org\/2022.aacl-main.84","DOI":"10.18653\/v1\/2022.aacl-main.84"},{"key":"9821_CR4","doi-asserted-by":"publisher","DOI":"10.1080\/10447318.2023.2225931","author":"AJG Sison","year":"2023","unstructured":"Sison, A. J. G., Daza, M. T., Gozalo-Brizuela, R., & Garrido-Merch\u00e1n, E. C. (2023). Chatgpt: More than a \u201cweapon of mass deception\u201d: Ethical challenges and responses from the human-centered artificial intelligence (hcai) perspective. International Journal of Human-Computer Interaction. https:\/\/doi.org\/10.1080\/10447318.2023.2225931","journal-title":"International Journal of Human-Computer Interaction"},{"key":"9821_CR5","unstructured":"Ali, I., Mughal, N., & Khand, Z.H., et\u00a0al. (2022). Mehran University Research Journal Of Engineering & Technology 41(1):65\u201379. https:\/\/search.informit.org\/doi\/10.3316\/informit.263278216314684"},{"key":"9821_CR6","unstructured":"Alikhademi, K., Richardson, B., & Drobina, E., et\u00a0al. (2021). Can explainable ai explain unfairness? a framework for evaluating explainable ai. arXiv:2106.07483"},{"issue":"4","key":"9821_CR7","doi-asserted-by":"publisher","first-page":"473","DOI":"10.1016\/j.jksuci.2014.06.006","volume":"26","author":"AS Altheneyan","year":"2014","unstructured":"Altheneyan, A. S., & Menai, M. E. B. (2014). Na\u00efve bayes classifiers for authorship attribution of arabic texts. Journal of King Saud University-Computer and Information Sciences, 26(4), 473\u2013484.","journal-title":"Journal of King Saud University-Computer and Information Sciences"},{"key":"9821_CR8","doi-asserted-by":"publisher","unstructured":"Angelov, P., Soares, E. V., Jiang, R., et al. (2021). 
Explainable artificial intelligence: An analytical review. WIREs Data Mining Knowl Discov, 11,. https:\/\/doi.org\/10.1002\/widm.1424","DOI":"10.1002\/widm.1424"},{"issue":"2","key":"9821_CR9","doi-asserted-by":"publisher","first-page":"556","DOI":"10.3390\/make4020026","volume":"4","author":"A Angerschmid","year":"2022","unstructured":"Angerschmid, A., Zhou, J., Theuermann, K., et al. (2022). Fairness and explanation in ai-informed decision making. Machine Learning and Knowledge Extraction, 4(2), 556\u2013579.","journal-title":"Machine Learning and Knowledge Extraction"},{"key":"9821_CR10","unstructured":"Anthony, L.F.W., Kanding, B., & Selvan, R. (2020). Carbontracker: Tracking and predicting the carbon footprint of training deep learning models. arXiv:2007.03051"},{"issue":"3","key":"9821_CR11","first-page":"283","volume":"10","author":"J Ausloos","year":"2019","unstructured":"Ausloos, J., Mahieu, R., & Veale, M. (2019). Getting data subject rights right: A submission to the european data protection board from international data rights academics, to inform regulatory guidance. JIPITEC-Journal of Intellectual Property, Information Technology and E-Commerce Law, 10(3), 283\u2013309.","journal-title":"JIPITEC-Journal of Intellectual Property, Information Technology and E-Commerce Law"},{"key":"9821_CR12","doi-asserted-by":"crossref","unstructured":"Ausloos, J., et\u00a0al. (2019b). Gdpr transparency as a research method. SSRN Electronic Journal, May pp 1\u201323","DOI":"10.2139\/ssrn.3465680"},{"key":"9821_CR13","doi-asserted-by":"publisher","unstructured":"Balkir, E., Kiritchenko, S., & Nejadgholi, I., et\u00a0al. (2022a). Challenges in applying explainability methods to improve the fairness of NLP models. In Proceedings of the 2nd Workshop on Trustworthy Natural Language Processing (TrustNLP 2022). 
Association for Computational Linguistics, Seattle, U.S.A., pp 80\u201392, https:\/\/doi.org\/10.18653\/v1\/2022.trustnlp-1.8, https:\/\/aclanthology.org\/2022.trustnlp-1.8","DOI":"10.18653\/v1\/2022.trustnlp-1.8"},{"key":"9821_CR14","doi-asserted-by":"crossref","unstructured":"Balkir, E., Kiritchenko, S., & Nejadgholi, I., et\u00a0al. (2022b). Challenges in applying explainability methods to improve the fairness of nlp models. arXiv preprint arXiv:2206.03945","DOI":"10.18653\/v1\/2022.trustnlp-1.8"},{"key":"9821_CR15","doi-asserted-by":"crossref","unstructured":"Balkir, E., Kiritchenko, S., & Nejadgholi, I., et\u00a0al. (2022c). Challenges in applying explainability methods to improve the fairness of nlp models. ArXiv abs\/2206.03945","DOI":"10.18653\/v1\/2022.trustnlp-1.8"},{"key":"9821_CR16","doi-asserted-by":"publisher","unstructured":"Banko, M., MacKeen, B., & Ray, L. (2020). A unified taxonomy of harmful content. In: Proceedings of the Fourth Workshop on Online Abuse and Harms. Association for Computational Linguistics, Online, pp 125\u2013137, https:\/\/doi.org\/10.18653\/v1\/2020.alw-1.16, https:\/\/aclanthology.org\/2020.alw-1.16","DOI":"10.18653\/v1\/2020.alw-1.16"},{"key":"9821_CR17","doi-asserted-by":"publisher","unstructured":"Bannour, N., Ghannay, S., & N\u00e9v\u00e9ol, A., et\u00a0al. (2021). Evaluating the carbon footprint of NLP methods: a survey and analysis of existing tools. In Proceedings of the Second Workshop on Simple and Efficient Natural Language Processing. Association for Computational Linguistics, Virtual, pp 11\u201321, https:\/\/doi.org\/10.18653\/v1\/2021.sustainlp-1.2, https:\/\/aclanthology.org\/2021.sustainlp-1.2","DOI":"10.18653\/v1\/2021.sustainlp-1.2"},{"issue":"3","key":"9821_CR18","doi-asserted-by":"publisher","first-page":"3213","DOI":"10.1007\/s11042-016-3899-8","volume":"76","author":"S Barbon","year":"2017","unstructured":"Barbon, S., Igawa, R. A., & Bogaz Zarpel\u00e3o, B. (2017). 
Authorship verification applied to detection of compromised accounts on online social networks. Multimedia Tools and Applications, 76(3), 3213\u20133233. https:\/\/doi.org\/10.1007\/s11042-016-3899-8","journal-title":"Multimedia Tools and Applications"},{"issue":"3","key":"9821_CR19","doi-asserted-by":"publisher","first-page":"625","DOI":"10.1007\/s12530-021-09377-2","volume":"12","author":"G Barlas","year":"2021","unstructured":"Barlas, G., & Stamatatos, E. (2021). A transfer learning approach to cross-domain authorship attribution. Evolving Systems, 12(3), 625\u2013643.","journal-title":"Evolving Systems"},{"key":"9821_CR20","doi-asserted-by":"publisher","DOI":"10.1016\/j.inffus.2019.12.012","author":"A Barredo Arrieta","year":"2019","unstructured":"Barredo Arrieta, A. (2019). Explainable artificial intelligence (xai): Concepts, taxonomies, opportunities and challenges toward responsible ai. Information Fusion. https:\/\/doi.org\/10.1016\/j.inffus.2019.12.012","journal-title":"Information Fusion"},{"issue":"32","key":"9821_CR21","doi-asserted-by":"publisher","first-page":"15849","DOI":"10.1073\/pnas.1903070116","volume":"116","author":"M Belkin","year":"2019","unstructured":"Belkin, M., Hsu, D., Ma, S., et al. (2019). Reconciling modern machine-learning practice and the classical bias-variance trade-off. Proceedings of the National Academy of Sciences, 116(32), 15849\u201315854.","journal-title":"Proceedings of the National Academy of Sciences"},{"key":"9821_CR22","doi-asserted-by":"publisher","first-page":"587","DOI":"10.1162\/tacl_a_00041","volume":"6","author":"EM Bender","year":"2018","unstructured":"Bender, E. M., & Friedman, B. (2018). Data statements for natural language processing: Toward mitigating system bias and enabling better science. 
Transactions of the Association for Computational Linguistics, 6, 587\u2013604.","journal-title":"Transactions of the Association for Computational Linguistics"},{"key":"9821_CR23","doi-asserted-by":"publisher","unstructured":"Bender, E.M., Gebru, T., & McMillan-Major, A., et\u00a0al. (2021). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. Association for Computing Machinery, New York, NY, USA, FAccT \u201921, p 610\u2013623, https:\/\/doi.org\/10.1145\/3442188.3445922","DOI":"10.1145\/3442188.3445922"},{"key":"9821_CR24","first-page":"1","volume-title":"Bias, privacy and mistrust: Considering the ethical challenges of artificial intelligence","author":"A Benzie","year":"2023","unstructured":"Benzie, A., & Montasari, R. (2023). Bias, privacy and mistrust: Considering the ethical challenges of artificial intelligence (pp. 1\u201314). Springer Nature Switzerland."},{"key":"9821_CR25","doi-asserted-by":"publisher","unstructured":"Bevendorff, J., Hagen, M., & Stein, B., et\u00a0al. (2019). Bias analysis and mitigation in the evaluation of authorship verification. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Florence, Italy, pp 6301\u20136306, https:\/\/doi.org\/10.18653\/v1\/P19-1634, https:\/\/aclanthology.org\/P19-1634","DOI":"10.18653\/v1\/P19-1634"},{"key":"9821_CR26","doi-asserted-by":"publisher","unstructured":"Bevendorff, J., Chulvi, B., & Fersini, E., et\u00a0al. (2022). Overview of pan 2022: Authorship verification, profiling irony, stereotype spreaders, style change detection. In Experimental IR Meets Multilinguality, Multimodality, and Interaction: 13th International Conference of the CLEF Association, CLEF 2022, Bologna, Italy, September 5\u20138, 2022, Proceedings. 
Springer-Verlag, Berlin, Heidelberg, p 382\u2013394, https:\/\/doi.org\/10.1007\/978-3-031-13643-6_24,","DOI":"10.1007\/978-3-031-13643-6_24"},{"key":"9821_CR27","doi-asserted-by":"crossref","unstructured":"Beyer, L., Zhai, X., & Royer, A., et\u00a0al. (2022). Knowledge distillation: A good teacher is patient and consistent. arXiv:2106.05237","DOI":"10.1109\/CVPR52688.2022.01065"},{"key":"9821_CR28","doi-asserted-by":"crossref","unstructured":"Bhatt, S., Dev, S., & Talukdar, P.P., et\u00a0al. (2022). Re-contextualizing fairness in nlp: The case of india. In: AACL","DOI":"10.18653\/v1\/2022.aacl-main.55"},{"key":"9821_CR29","doi-asserted-by":"crossref","unstructured":"Bieker, F., Friedewald, M., & Hansen, M., et\u00a0al. (2016). A process for data protection impact assessment under the european general data protection regulation. In: Privacy Technologies and Policy: 4th Annual Privacy Forum, APF 2016, Frankfurt\/Main, Germany, September 7-8, 2016, Proceedings 4, Springer, pp 21\u201337","DOI":"10.1007\/978-3-319-44760-5_2"},{"key":"9821_CR30","doi-asserted-by":"publisher","first-page":"574","DOI":"10.21552\/edpl\/2020\/4\/14","volume":"6","author":"G Bincoletto","year":"2020","unstructured":"Bincoletto, G. (2020). Edpb guidelines 4\/2019 on data protection by design and by default. Eur Data Prot L Rev, 6, 574.","journal-title":"Eur Data Prot L Rev"},{"issue":"1\u20132","key":"9821_CR31","doi-asserted-by":"publisher","first-page":"1654","DOI":"10.1177\/08862605221090571","volume":"38","author":"A Birze","year":"2023","unstructured":"Birze, A., Regehr, K., & Regehr, C. (2023). Workplace trauma in a digital age: The impact of video evidence of violent crime on criminal justice professionals. Journal of interpersonal violence, 38(1\u20132), 1654\u20131689.","journal-title":"Journal of interpersonal violence"},{"key":"9821_CR32","unstructured":"Bischoff, S., Deckers, N., & Schliebs, M., et\u00a0al. (2020). 
The importance of suppressing domain style in authorship analysis. ArXiv abs\/2005.14714"},{"key":"9821_CR33","doi-asserted-by":"crossref","unstructured":"Blodgett, S.L., Barocas, S., & au2, H.D.I., et\u00a0al. (2020). Language (technology) is power: A critical survey of \"bias\" in nlp. arXiv:2005.14050","DOI":"10.18653\/v1\/2020.acl-main.485"},{"key":"9821_CR34","doi-asserted-by":"crossref","unstructured":"Boenninghoff, B., Hessler, S., & Kolossa, D., et\u00a0al. (2019). Explainable authorship verification in social media via attention-based similarity learning. In: 2019 IEEE International Conference on Big Data (Big Data), IEEE, pp 36\u201345","DOI":"10.1109\/BigData47090.2019.9005650"},{"issue":"1","key":"9821_CR35","doi-asserted-by":"publisher","first-page":"012011","DOI":"10.1088\/1742-6596\/2134\/1\/012011","volume":"2134","author":"A Bogdanova","year":"2021","unstructured":"Bogdanova, A., & Romanov, V. (2021). Explainable source code authorship attribution algorithm. Journal of Physics: Conference Series, 2134(1), 012011. https:\/\/doi.org\/10.1088\/1742-6596\/2134\/1\/012011","journal-title":"Journal of Physics: Conference Series"},{"key":"9821_CR36","first-page":"1","volume":"1","author":"V Bogina","year":"2021","unstructured":"Bogina, V., Hartman, A., Kuflik, T., et al. (2021). Educating software and ai stakeholders about algorithmic fairness, accountability, transparency and ethics. International Journal of Artificial Intelligence in Education., 1, 1\u201326.","journal-title":"International Journal of Artificial Intelligence in Education."},{"key":"9821_CR37","unstructured":"Bolukbasi, T., Chang, K.W., & Zou, J., et\u00a0al. (2016). Man is to computer programmer as woman is to homemaker? debiasing word embeddings. 
arXiv:1607.06520"},{"key":"9821_CR38","doi-asserted-by":"publisher","first-page":"211","DOI":"10.1007\/978-3-319-11397-5_16","volume-title":"Statistical Language and Speech Processing","author":"MA Boukhaled","year":"2014","unstructured":"Boukhaled, M. A., & Ganascia, J. G. (2014). Probabilistic anomaly detection method for authorship verification. In L. Besacier, A. H. Dediu, & C. Mart\u00edn-Vide (Eds.), Statistical Language and Speech Processing (pp. 211\u2013219). Springer International Publishing."},{"key":"9821_CR39","doi-asserted-by":"crossref","unstructured":"Brad, F., Manolache, A., & Burceanu, E., et\u00a0al. (2021). Rethinking the authorship verification experimental setups. In: Conference on Empirical Methods in Natural Language Processing","DOI":"10.18653\/v1\/2022.emnlp-main.380"},{"key":"9821_CR40","doi-asserted-by":"publisher","unstructured":"Caliskan, A., Bryson, J. J., & Narayanan, A. (2017). Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334), 183\u2013186. https:\/\/doi.org\/10.1126\/science.aal4230, https:\/\/www.science.org\/doi\/abs\/10.1126\/science.aal4230","DOI":"10.1126\/science.aal4230"},{"key":"9821_CR41","first-page":"2079","volume":"11","author":"GC Cawley","year":"2010","unstructured":"Cawley, G. C., & Talbot, N. L. (2010). On over-fitting in model selection and subsequent selection bias in performance evaluation. The Journal of Machine Learning Research, 11, 2079\u20132107.","journal-title":"The Journal of Machine Learning Research"},{"key":"9821_CR42","unstructured":"Chang, K.W., Prabhakaran, V., & Ordonez, V. (2019). Bias and fairness in natural language processing. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): Tutorial Abstracts. 
Association for Computational Linguistics, Hong Kong, China, https:\/\/aclanthology.org\/D19-2004"},{"key":"9821_CR43","doi-asserted-by":"crossref","unstructured":"Radclyffe, C., Ribeiro, M., & Wortham, R. H. (2023). The assessment list for trustworthy artificial intelligence: A review and recommendations. Frontiers in Artificial Intelligence. https:\/\/www.frontiersin.org\/articles\/10.3389\/frai.2023.1020592\/full, [Accessed 19-Jul-2023]","DOI":"10.3389\/frai.2023.1020592"},{"issue":"233","key":"9821_CR44","first-page":"15","volume":"233","author":"C Chaski","year":"1997","unstructured":"Chaski, C. (1997). Who wrote it? steps toward a science of authorship identification. National Institute of Justice Journal, 233(233), 15\u201322.","journal-title":"National Institute of Justice Journal"},{"key":"9821_CR45","unstructured":"Chaski, C.E. (2005a). Who\u2019s at the keyboard? authorship attribution in digital evidence investigations. Int J Digit Evid 4. https:\/\/api.semanticscholar.org\/CorpusID:12767441"},{"issue":"1","key":"9821_CR46","first-page":"1","volume":"4","author":"CE Chaski","year":"2005","unstructured":"Chaski, C. E. (2005). Who\u2019s at the keyboard? authorship attribution in digital evidence investigations. International Journal of Digital Evidence, 4(1), 1\u201313.","journal-title":"International Journal of Digital Evidence"},{"key":"9821_CR47","doi-asserted-by":"crossref","unstructured":"Chen, J., Berlot-Attwell, I., & Hossain, S., et\u00a0al. (2020). Exploring text specific and blackbox fairness algorithms in multimodal clinical nlp. ArXiv abs\/2011.09625","DOI":"10.18653\/v1\/2020.clinicalnlp-1.33"},{"key":"9821_CR48","unstructured":"Chiticariu, L., Li, Y., & Reiss, F. (2015). Transparent machine learning for information extraction: state-of-the-art and the future. EMNLP (tutorial)"},{"key":"9821_CR49","doi-asserted-by":"crossref","unstructured":"Crook, B., Schl\u00fcter, M., & Speith, T. (2023). 
Revisiting the performance-explainability trade-off in explainable artificial intelligence (xai). arXiv:2307.14239","DOI":"10.1109\/REW57809.2023.00060"},{"key":"9821_CR50","first-page":"28","volume":"92","author":"K Cukier","year":"2013","unstructured":"Cukier, K., & Mayer-Schoenberger, V. (2013). The rise of big data: How it\u2019s changing the way we think about the world. Foreign Aff, 92, 28.","journal-title":"Foreign Aff"},{"key":"9821_CR51","doi-asserted-by":"publisher","unstructured":"Czarnowska, P., Vyas, Y., & Shah, K. (2021). Quantifying social biases in NLP: A generalization and empirical comparison of extrinsic fairness metrics. Transactions of the Association for Computational Linguistics, 9, 1249\u20131267. https:\/\/doi.org\/10.1162\/tacl_a_00425, https:\/\/aclanthology.org\/2021.tacl-1.74","DOI":"10.1162\/tacl_a_00425"},{"key":"9821_CR52","first-page":"49","volume":"6","author":"J De Cooman","year":"2022","unstructured":"De Cooman, J. (2022). Humpty dumpty and high-risk ai systems: The ratione materiae dimension of the proposal for an eu artificial intelligence act. Mkt & Competition L Rev, 6, 49.","journal-title":"Mkt & Competition L Rev"},{"key":"9821_CR53","doi-asserted-by":"publisher","first-page":"33","DOI":"10.1007\/978-94-007-2543-0_2","volume-title":"Privacy impact assessment","author":"P De Hert","year":"2012","unstructured":"De Hert, P. (2012). A human rights perspective on privacy and data protection impact assessments. Privacy impact assessment (pp. 33\u201376). Springer."},{"key":"9821_CR54","first-page":"309","volume-title":"Principles relating to processing of personal data","author":"C De Terwangne","year":"2020","unstructured":"De Terwangne, C. (2020). Principles relating to processing of personal data (pp. 309\u2013320). 
Oxford University Press."},{"key":"9821_CR55","doi-asserted-by":"publisher","first-page":"309","DOI":"10.1093\/oso\/9780198826491.003.0034","volume-title":"The EU general data protection (GDPR): a commentary","author":"C De Terwangne","year":"2020","unstructured":"De Terwangne, C. (2020). Principles relating to processing of personal data. The EU general data protection (GDPR): a commentary (pp. 309\u2013320). Oxford University Press."},{"key":"9821_CR56","doi-asserted-by":"publisher","unstructured":"Delobelle, P., Tokpo, E., & Calders, T., et\u00a0al. (2022). Measuring fairness with biased rulers: A comparative study on bias metrics for pre-trained language models. In: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Seattle, United States, pp 1693\u20131706, https:\/\/doi.org\/10.18653\/v1\/2022.naacl-main.122, https:\/\/aclanthology.org\/2022.naacl-main.122","DOI":"10.18653\/v1\/2022.naacl-main.122"},{"key":"9821_CR57","doi-asserted-by":"publisher","unstructured":"Demetzou, K. (2019). Data protection impact assessment: A tool for accountability and the unclarified concept of \u2018high risk\u2019 in the general data protection regulation. Computer Law & Security Review, 35(6), 105342. https:\/\/doi.org\/10.1016\/j.clsr.2019.105342, https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0267364918304357","DOI":"10.1016\/j.clsr.2019.105342"},{"issue":"6","key":"9821_CR58","doi-asserted-by":"publisher","first-page":"105342","DOI":"10.1016\/j.clsr.2019.105342","volume":"35","author":"K Demetzou","year":"2019","unstructured":"Demetzou, K. (2019). Data protection impact assessment: A tool for accountability and the unclarified concept of \u2018high risk\u2019 in the general data protection regulation. 
Computer Law & Security Review, 35(6), 105342.","journal-title":"Computer Law & Security Review"},{"key":"9821_CR59","doi-asserted-by":"crossref","unstructured":"Dev, S., Sheng, E., & Zhao, J., et\u00a0al. (2022). On measures of biases and harms in NLP. In: Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022. Association for Computational Linguistics, Online only, pp 246\u2013267, https:\/\/aclanthology.org\/2022.findings-aacl.24","DOI":"10.18653\/v1\/2022.findings-aacl.24"},{"key":"9821_CR60","unstructured":"Devlin, J., Chang, M.W., & Lee, K., et\u00a0al. (2019). Bert: Pre-training of deep bidirectional transformers for language understanding. https:\/\/arxiv.org\/abs\/1810.04805, arXiv:1810.04805"},{"key":"9821_CR61","doi-asserted-by":"publisher","first-page":"109","DOI":"10.1023\/A:1023824908771","volume":"19","author":"J Diederich","year":"2003","unstructured":"Diederich, J., Kindermann, J., Leopold, E., et al. (2003). Authorship attribution with support vector machines. Applied Intelligence, 19, 109\u2013123.","journal-title":"Applied Intelligence"},{"key":"9821_CR62","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-30371-6","volume-title":"Responsible artificial intelligence: how to develop and use AI in a responsible way","author":"V Dignum","year":"2019","unstructured":"Dignum, V. (2019). Responsible artificial intelligence: how to develop and use AI in a responsible way. Springer."},{"key":"9821_CR63","doi-asserted-by":"publisher","unstructured":"Ding, S.H.H., Fung, B.C.M., & Debbabi, M. (2015). A visualizable evidence-driven approach for authorship attribution. ACM Trans Inf Syst Secur, 17(3). https:\/\/doi.org\/10.1145\/2699910","DOI":"10.1145\/2699910"},{"key":"9821_CR64","doi-asserted-by":"publisher","unstructured":"Dixon, L., Li, J., & Sorensen, J., et\u00a0al. (2018). Measuring and mitigating unintended bias in text classification. In Proceedings of the 2018 AAAI\/ACM Conference on AI, Ethics, and Society. 
Association for Computing Machinery, AIES \u201918, p 67\u201373, https:\/\/doi.org\/10.1145\/3278721.3278729,","DOI":"10.1145\/3278721.3278729"},{"key":"9821_CR65","first-page":"1","volume-title":"Making secondary trauma a primary issue: A study of eyewitness media and vicarious trauma on the digital frontline","author":"S Dubberley","year":"2015","unstructured":"Dubberley, S., Griffin, E., & Bal, H. M. (2015). Making secondary trauma a primary issue: A study of eyewitness media and vicarious trauma on the digital frontline (pp. 1\u201369). Eyewitness Media Hub."},{"issue":"4","key":"9821_CR66","doi-asserted-by":"publisher","first-page":"904","DOI":"10.1007\/s11896-022-09532-8","volume":"37","author":"F Duran","year":"2022","unstructured":"Duran, F., & Woodhams, J. (2022). Impact of traumatic material on professionals in analytical and secondary investigative roles working in criminal justice settings: a qualitative approach. Journal of Police and Criminal Psychology, 37(4), 904\u2013917.","journal-title":"Journal of Police and Criminal Psychology"},{"key":"9821_CR67","unstructured":"Edwards, L. (2021). The eu ai act: a summary of its significance and scope. Artificial Intelligence (the EU AI Act) 1"},{"key":"9821_CR68","doi-asserted-by":"publisher","unstructured":"Enriquez, D., Christensen, G., & Donovan, H., et\u00a0al. (2023). Authorship verification for hired plagiarism detection. In: Proceedings of the 9th International Conference on Applied Computing & Information Technology. Association for Computing Machinery, New York, NY, USA, ACIT \u201922, p 19\u201324, https:\/\/doi.org\/10.1145\/3543895.3543928,","DOI":"10.1145\/3543895.3543928"},{"key":"9821_CR69","unstructured":"Escart\u2019in, C.P., Lynn, T., & Moorkens, J., et\u00a0al. (2021). Towards transparency in nlp shared tasks. ArXiv abs\/2105.05020"},{"key":"9821_CR70","doi-asserted-by":"crossref","unstructured":"Ethayarajh, K., & Jurafsky, D. (2021). 
Utility is in the eye of the user: A critique of nlp leaderboards. arXiv:2009.13888","DOI":"10.18653\/v1\/2020.emnlp-main.393"},{"issue":"1","key":"9821_CR71","doi-asserted-by":"publisher","first-page":"205395171986054","DOI":"10.1177\/2053951719860542","volume":"6","author":"H Felzmann","year":"2019","unstructured":"Felzmann, H., Villaronga, E. F., Lutz, C., et al. (2019). Transparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns. Big Data & Society, 6(1), 2053951719860542.","journal-title":"Big Data & Society"},{"issue":"6","key":"9821_CR72","doi-asserted-by":"publisher","first-page":"3333","DOI":"10.1007\/s11948-020-00276-4","volume":"26","author":"H Felzmann","year":"2020","unstructured":"Felzmann, H., Fosch-Villaronga, E., Lutz, C., et al. (2020). Towards transparency by design for artificial intelligence. Science and Engineering Ethics, 26(6), 3333\u20133361. https:\/\/doi.org\/10.1007\/s11948-020-00276-4","journal-title":"Science and Engineering Ethics"},{"key":"9821_CR73","doi-asserted-by":"crossref","unstructured":"Fjeld, J., Achten, N., & Hilligoss, H., et\u00a0al. (2020). Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for ai. Berkman Klein Center Research Publication (2020)","DOI":"10.2139\/ssrn.3518482"},{"key":"9821_CR74","doi-asserted-by":"crossref","unstructured":"Floridi, L., & Cowls, J. (2019). A Unified Framework of Five Principles for AI in Society. Harvard Data Science Review 1(1). Https:\/\/hdsr.mitpress.mit.edu\/pub\/l0jsh9d1","DOI":"10.1162\/99608f92.8cd550d1"},{"key":"9821_CR75","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-81907-1","volume-title":"Ethics, governance, and policies in artificial intelligence","author":"L Floridi","year":"2021","unstructured":"Floridi, L., et al. (2021). Ethics, governance, and policies in artificial intelligence. 
Springer."},{"key":"9821_CR76","doi-asserted-by":"crossref","unstructured":"Fobbe, E. (2021). Text-linguistic analysis in forensic authorship attribution","DOI":"10.1007\/978-3-030-84330-4_7"},{"key":"9821_CR77","unstructured":"Frye, R.H., & Wilson, D.C. (2018). Defining forensic authorship attribution for limited samples from social media. In: The Thirty-First International Flairs Conference"},{"key":"9821_CR78","first-page":"61","volume":"130","author":"U Gasser","year":"2016","unstructured":"Gasser, U. (2016). Recoding privacy law: Reflections on the future relationship among law, technology, and privacy. Harv L Rev F, 130, 61.","journal-title":"Harv L Rev F"},{"key":"9821_CR79","doi-asserted-by":"crossref","unstructured":"Gebru, T., Morgenstern, J., & Vecchione, B., et\u00a0al. (2021). Datasheets for datasets. arXiv:1803.09010","DOI":"10.1145\/3458723"},{"key":"9821_CR80","unstructured":"Gellman, R. (2022). Fair information practices: A basic history-version 2.22. Available at SSRN"},{"key":"9821_CR81","unstructured":"Gerards, J., Sch\u00e4fer, M.T., & Muis, I., et\u00a0al. (2022). Fundamental rights and algorithms impact assessment (fraia)"},{"issue":"24","key":"9821_CR82","doi-asserted-by":"publisher","first-page":"2964","DOI":"10.1001\/jama.1993.03510240076036","volume":"270","author":"JA Gold","year":"1993","unstructured":"Gold, J. A., Zaremski, M. J., Lev, E. R., et al. (1993). Daubert v merrell dow: The supreme court tackles scientific evidence in the courtroom. JAMA, 270(24), 2964\u20132967.","journal-title":"JAMA"},{"key":"9821_CR83","doi-asserted-by":"crossref","unstructured":"Gr\u00fcnewald, E., & Pallas, F. (2021). Tilt: A gdpr-aligned transparency information language and toolkit for practical privacy engineering. 
In: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pp 636\u2013646","DOI":"10.1145\/3442188.3445925"},{"key":"9821_CR84","doi-asserted-by":"publisher","unstructured":"Gupta, A., Thadani, K., & O\u2019Hare, N. (2020). Effective few-shot classification with transfer learning. In: Proceedings of the 28th International Conference on Computational Linguistics. International Committee on Computational Linguistics, Barcelona, Spain (Online), pp 1061\u20131066, https:\/\/doi.org\/10.18653\/v1\/2020.coling-main.92, https:\/\/aclanthology.org\/2020.coling-main.92","DOI":"10.18653\/v1\/2020.coling-main.92"},{"key":"9821_CR85","unstructured":"Guthrie, D., Guthrie, L., & Allison, B., et\u00a0al. (2007). Unsupervised anomaly detection. pp 1624\u20131628"},{"key":"9821_CR86","doi-asserted-by":"crossref","unstructured":"Habernal, I., Mireshghallah, F., & Thaine, P., et\u00a0al. (2023). Privacy-preserving natural language processing. In: Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics: Tutorial Abstracts. Association for Computational Linguistics, Dubrovnik, Croatia, pp 27\u201330, https:\/\/aclanthology.org\/2023.eacl-tutorials.6","DOI":"10.18653\/v1\/2023.eacl-tutorials.6"},{"key":"9821_CR87","doi-asserted-by":"crossref","unstructured":"Hacker, P. (2018). Teaching fairness to artificial intelligence: Existing and novel strategies against algorithmic discrimination under eu law. Common Market Law Review 55(4)","DOI":"10.54648\/COLA2018095"},{"key":"9821_CR88","doi-asserted-by":"crossref","unstructured":"Hacker, P., & Passoth, J.H. (2020). Varieties of ai explanations under the law. from the gdpr to the aia, and beyond. In: International Workshop on Extending Explainable AI Beyond Deep Models and Classifiers, Springer, pp 343\u2013373","DOI":"10.1007\/978-3-031-04083-2_17"},{"key":"9821_CR89","doi-asserted-by":"crossref","unstructured":"Hacker, P., Cordes, J., & Rochon, J. (2022). 
Regulating gatekeeper ai and data: Transparency, access, and fairness under the dma, the gdpr, and beyond. arXiv preprint arXiv:2212.04997","DOI":"10.2139\/ssrn.4316944"},{"key":"9821_CR90","doi-asserted-by":"crossref","unstructured":"Haduong, N., Gao, A., & Smith, N.A. (2023). Risks and NLP design: A case study on procedural document QA. In: Findings of the Association for Computational Linguistics: ACL 2023. Association for Computational Linguistics, Toronto, Canada, pp 1248\u20131269, https:\/\/aclanthology.org\/2023.findings-acl.81","DOI":"10.18653\/v1\/2023.findings-acl.81"},{"key":"9821_CR91","doi-asserted-by":"publisher","unstructured":"Halvani, O., & Graner, L. (2021). Posnoise: An effective countermeasure against topic biases in authorship analysis. In: Proceedings of the 16th International Conference on Availability, Reliability and Security. Association for Computing Machinery, New York, NY, USA, ARES 21, https:\/\/doi.org\/10.1145\/3465481.3470050,","DOI":"10.1145\/3465481.3470050"},{"key":"9821_CR92","unstructured":"H\u00e4m\u00e4l\u00e4inen, M., & Alnajjar, K. (2021). The great misalignment problem in human evaluation of NLP methods. In: Proceedings of the Workshop on Human Evaluation of NLP Systems (HumEval). Association for Computational Linguistics, Online, pp 69\u201374, https:\/\/aclanthology.org\/2021.humeval-1.8"},{"key":"9821_CR93","doi-asserted-by":"crossref","unstructured":"Hessenthaler, M., Strubell, E., & Hovy, D., et\u00a0al. (2022). Bridging fairness and environmental sustainability in natural language processing. arXiv:2211.04256","DOI":"10.18653\/v1\/2022.emnlp-main.533"},{"issue":"8","key":"9821_CR94","doi-asserted-by":"publisher","first-page":"e12432","DOI":"10.1111\/lnc3.12432","volume":"15","author":"D Hovy","year":"2021","unstructured":"Hovy, D., & Prabhumoye, S. (2021). Five sources of bias in natural language processing. 
Language and Linguistics Compass, 15(8), e12432.","journal-title":"Language and Linguistics Compass"},{"key":"9821_CR95","doi-asserted-by":"publisher","unstructured":"Hovy, D., & Spruit, S.L. (2016). The social impact of natural language processing. In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, Berlin, Germany, pp 591\u2013598, https:\/\/doi.org\/10.18653\/v1\/P16-2096, https:\/\/aclanthology.org\/P16-2096","DOI":"10.18653\/v1\/P16-2096"},{"key":"9821_CR96","doi-asserted-by":"crossref","unstructured":"Howard, B.S. (2008). Authorship attribution under the rules of evidence: empirical approaches\u2013a layperson\u2019s legal system. International Journal of Speech, Language & the Law 15(2)","DOI":"10.1558\/ijsll.v15i2.219"},{"key":"9821_CR97","doi-asserted-by":"crossref","unstructured":"Huertas-Tato, J., Mart\u00edn, A., & Huertas-Garc\u00eda, \u00c1., et\u00a0al. (2022). Generating authorship embeddings with transformers. 2022 International Joint Conference on Neural Networks (IJCNN) pp 1\u20138. https:\/\/api.semanticscholar.org\/CorpusID:252626603","DOI":"10.1109\/IJCNN55064.2022.9892173"},{"issue":"5","key":"9821_CR98","doi-asserted-by":"publisher","first-page":"18","DOI":"10.1109\/MC.2023.3235712","volume":"56","author":"I Hupont","year":"2023","unstructured":"Hupont, I., Micheli, M., Delipetrev, B., et al. (2023). Documenting high-risk ai: a european regulatory perspective. Computer, 56(5), 18\u201327.","journal-title":"Computer"},{"key":"9821_CR99","doi-asserted-by":"publisher","unstructured":"Iqbal, F., Hadjidj, R., Fung, B. C., et al. (2008). A novel approach of mining write-prints for authorship attribution in e-mail forensics. Digital Investigation, 5, S42\u2013S51. 
https:\/\/doi.org\/10.1016\/j.diin.2008.05.001, https:\/\/www.sciencedirect.com\/science\/article\/pii\/S1742287608000315, the Proceedings of the Eighth Annual DFRWS Conference","DOI":"10.1016\/j.diin.2008.05.001"},{"key":"9821_CR100","first-page":"978","volume":"70","author":"H Jabbar","year":"2015","unstructured":"Jabbar, H., & Khan, R. Z. (2015). Methods to avoid over-fitting and under-fitting in supervised machine learning (comparative study). Computer Science, Communication and Instrumentation Devices, 70, 978\u2013981.","journal-title":"Computer Science, Communication and Instrumentation Devices"},{"key":"9821_CR101","unstructured":"Jafariakinabad, F., Tarnpradab, S., & Hua, K.A. (2019). Syntactic recurrent neural network for authorship attribution. arXiv preprint arXiv:1902.09723"},{"issue":"9","key":"9821_CR102","doi-asserted-by":"publisher","first-page":"389","DOI":"10.1038\/s42256-019-0088-2","volume":"1","author":"A Jobin","year":"2019","unstructured":"Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of ai ethics guidelines. Nature Machine Intelligence, 1(9), 389\u2013399.","journal-title":"Nature Machine Intelligence"},{"key":"9821_CR103","unstructured":"Albert, J., Michot, S., & M\u00fcller, A. (2022). Policy Brief: Our recommendations for strengthening data access for public interest research - AlgorithmWatch. https:\/\/algorithmwatch.org\/en\/policy-brief-platforms-data-access\/, [Accessed 17-Jul-2023]"},{"issue":"4","key":"9821_CR104","doi-asserted-by":"publisher","first-page":"945","DOI":"10.1007\/s10551-022-05055-8","volume":"178","author":"JM John-Mathews","year":"2022","unstructured":"John-Mathews, J. M., Cardon, D., & Balagu\u00e9, C. (2022). From reality to world. A critical perspective on ai fairness. Journal of Business Ethics, 178(4), 945\u201395.
https:\/\/doi.org\/10.1007\/s10551-022-05055-8","journal-title":"Journal of Business Ethics"},{"key":"9821_CR105","doi-asserted-by":"publisher","unstructured":"Hitschler, J., van den Berg, E., & Rehbein, I. (2017). Authorship attribution with convolutional neural networks and POS-eliding. In: Proceedings of the Workshop on Stylistic Variation. Association for Computational Linguistics, Copenhagen, Denmark, pp 53\u201358, https:\/\/doi.org\/10.18653\/v1\/W17-4907, https:\/\/aclanthology.org\/W17-4907","DOI":"10.18653\/v1\/W17-4907"},{"key":"9821_CR106","doi-asserted-by":"publisher","first-page":"156","DOI":"10.3897\/jucs.2020.009","volume":"26","author":"P Juola","year":"2020","unstructured":"Juola, P. (2020). Authorship studies and the dark side of social media analytics. Journal of Universal Computer Science, 26, 156\u2013170. https:\/\/doi.org\/10.3897\/jucs.2020.009","journal-title":"Journal of Universal Computer Science"},{"key":"9821_CR107","doi-asserted-by":"publisher","first-page":"81","DOI":"10.4018\/IJRSDA.2017040106","volume":"4","author":"SD Kale","year":"2017","unstructured":"Kale, S. D., & Prasad, R. S. (2017). A systematic review on author identification methods. Int J Rough Sets Data Anal, 4, 81\u201391.","journal-title":"Int J Rough Sets Data Anal"},{"key":"9821_CR108","doi-asserted-by":"crossref","unstructured":"Jones, K., Nurse, J.R.C., & Li, S. (2022). Are you robert or roberta? deceiving online authorship attribution models using neural text generators. arXiv:2203.09813","DOI":"10.1609\/icwsm.v16i1.19304"},{"key":"9821_CR109","doi-asserted-by":"publisher","unstructured":"Kennedy, E., & Millard, C. (2016). Data security and multi-factor authentication: Analysis of requirements under eu law and in selected eu member states. Computer Law & Security Review, 32(1), 91\u2013110.
https:\/\/doi.org\/10.1016\/j.clsr.2015.12.004, https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0267364915001697","DOI":"10.1016\/j.clsr.2015.12.004"},{"key":"9821_CR110","doi-asserted-by":"crossref","unstructured":"Khonji, M., Iraqi, Y., & Jones, A. (2015). An evaluation of authorship attribution using random forests. In: 2015 international conference on information and communication technology research (ictrc), IEEE, pp 68\u201371","DOI":"10.1109\/ICTRC.2015.7156423"},{"key":"9821_CR111","doi-asserted-by":"publisher","DOI":"10.1016\/j.dss.2020.113302","volume":"134","author":"B Kim","year":"2020","unstructured":"Kim, B., Park, J., & Suh, J. (2020). Transparency and accountability in ai decision support: Explaining and visualizing convolutional neural networks for text information. Decis Support Syst, 134, 113302.","journal-title":"Decis Support Syst"},{"key":"9821_CR112","doi-asserted-by":"crossref","unstructured":"Kirk, H., Birhane, A., & Vidgen, B., et\u00a0al. (2022). Handling and presenting harmful text in NLP research. In: Findings of the Association for Computational Linguistics: EMNLP 2022. Association for Computational Linguistics, Abu Dhabi, United Arab Emirates, pp 497\u2013510, https:\/\/aclanthology.org\/2022.findings-emnlp.35","DOI":"10.18653\/v1\/2022.findings-emnlp.35"},{"key":"9821_CR113","doi-asserted-by":"publisher","unstructured":"Klymenko, O., Meisenbacher, S., & Matthes, F. (2022). Differential privacy in natural language processing the story so far. In: Proceedings of the Fourth Workshop on Privacy in Natural Language Processing. Association for Computational Linguistics, Seattle, United States, pp 1\u201311, https:\/\/doi.org\/10.18653\/v1\/2022.privatenlp-1.1, https:\/\/aclanthology.org\/2022.privatenlp-1.1","DOI":"10.18653\/v1\/2022.privatenlp-1.1"},{"key":"9821_CR114","unstructured":"Kondyurin, I. (2022). Explainability of transformers for authorship attribution. 
Master\u2019s thesis"},{"key":"9821_CR115","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1080\/17579961.2021.1898299","volume":"13","author":"BJ Koops","year":"2021","unstructured":"Koops, B. J. (2021). The concept of function creep. Law, Innovation and Technology, 13, 1\u201328. https:\/\/doi.org\/10.1080\/17579961.2021.1898299","journal-title":"Law, Innovation and Technology"},{"key":"9821_CR116","doi-asserted-by":"crossref","unstructured":"Koppel, M., & Schler, J. (2004). Authorship verification as a one-class classification problem. Proceedings of the twenty-first international conference on Machine learning","DOI":"10.1145\/1015330.1015448"},{"key":"9821_CR117","doi-asserted-by":"crossref","unstructured":"Koppel, M., Argamon, S.E., & Shimoni, A.R. (2002). Automatically categorizing written texts by author gender. Lit Linguistic Comput 17:401\u2013412. https:\/\/api.semanticscholar.org\/CorpusID:1057413","DOI":"10.1093\/llc\/17.4.401"},{"key":"9821_CR118","doi-asserted-by":"publisher","unstructured":"Kumar, R., Yadav, S., & Daniulaityte, R., et\u00a0al. (2020). Edarkfind: Unsupervised multi-view learning for sybil account detection. In: Proceedings of The Web Conference 2020. Association for Computing Machinery, New York, NY, USA, WWW \u201920, p 1955\u20131965, https:\/\/doi.org\/10.1145\/3366423.3380263,","DOI":"10.1145\/3366423.3380263"},{"key":"9821_CR119","unstructured":"Lacoste, A., Luccioni, A., & Schmidt, V., et\u00a0al. (2019). Quantifying the carbon emissions of machine learning. arXiv preprint arXiv:1910.09700"},{"key":"9821_CR120","doi-asserted-by":"publisher","unstructured":"Lalor, J., Yang, Y., & Smith, K., et\u00a0al. (2022). Benchmarking intersectional biases in NLP. In: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 
Association for Computational Linguistics, Seattle, United States, pp 3598\u20133609, https:\/\/doi.org\/10.18653\/v1\/2022.naacl-main.263, https:\/\/aclanthology.org\/2022.naacl-main.263","DOI":"10.18653\/v1\/2022.naacl-main.263"},{"key":"9821_CR121","doi-asserted-by":"publisher","unstructured":"Laufer, B., Jain, S., & Cooper, A.F., et\u00a0al. (2022). Four years of facct: A reflexive, mixed-methods analysis of research contributions, shortcomings, and future prospects. In: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency. Association for Computing Machinery, New York, NY, USA, FAccT \u201922, p 401\u2013426, https:\/\/doi.org\/10.1145\/3531146.3533107,","DOI":"10.1145\/3531146.3533107"},{"key":"9821_CR122","doi-asserted-by":"crossref","unstructured":"Lawrence, S., & Giles, C.L. (2000). Overfitting and neural networks: conjugate gradient and backpropagation. In: Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks. IJCNN 2000. Neural Computing: New Challenges and Perspectives for the New Millennium, IEEE, pp 114\u2013119","DOI":"10.1109\/IJCNN.2000.857823"},{"key":"9821_CR123","unstructured":"Lei, Z., Qi, H., & Han, Y., et\u00a0al. (2022). Application of bert in author verification task. In: Conference and Labs of the Evaluation Forum"},{"key":"9821_CR124","unstructured":"Locatelli, M., Tagliabue, L.C., & Di\u00a0Giuda, G.M., et\u00a0al. (2022). Archiberto: a hierarchization quality objectives nlp tool in the italian architecture, engineering and construction sector. In: CEUR WORKSHOP PROCEEDINGS, Lops, Pasquale; Basile, Pierpaolo; Siciliani, Lucia; Taccardi, Vincenzo; Di ..., pp 8\u201325"},{"key":"9821_CR125","doi-asserted-by":"crossref","unstructured":"Loi, M., & Spielkamp, M. (2021). Towards accountability in the use of artificial intelligence for public administrations. 
In: Proceedings of the 2021 AAAI\/ACM Conference on AI, Ethics, and Society, pp 757\u2013766","DOI":"10.1145\/3461702.3462631"},{"key":"9821_CR126","volume-title":"Automated decision-making systems in the public sector an impact assessment tool for public authorities","author":"M Loi","year":"2021","unstructured":"Loi, M., M\u00e4tzener, A., M\u00fcller, A., et al. (2021). Automated decision-making systems in the public sector an impact assessment tool for public authorities. AW AlgorithmWatch gGmbH."},{"issue":"5","key":"9821_CR127","doi-asserted-by":"publisher","first-page":"570","DOI":"10.1002\/asi.24750","volume":"74","author":"BD Lund","year":"2023","unstructured":"Lund, B. D., Wang, T., Mannuru, N. R., et al. (2023). chatGPT and a new academic reality: Artificial Intelligence-written research papers and the ethics of the large language models in scholarly publishing. Journal of the Association for Information Science and Technology, 74(5), 570\u2013581. https:\/\/doi.org\/10.1002\/asi.24750","journal-title":"Journal of the Association for Information Science and Technology"},{"key":"9821_CR128","doi-asserted-by":"publisher","first-page":"252","DOI":"10.1017\/cel.2016.15","volume":"19","author":"O Lynskey","year":"2017","unstructured":"Lynskey, O. (2017). The \u2018europeanisation\u2019 of data protection law. Cambridge Yearbook of European Legal Studies, 19, 252\u2013286.","journal-title":"Cambridge Yearbook of European Legal Studies"},{"key":"9821_CR129","doi-asserted-by":"crossref","unstructured":"Ma, P., Wang, S., & Liu, J. (2020). Metamorphic testing and certified mitigation of fairness violations in nlp models. In: International Joint Conference on Artificial Intelligence","DOI":"10.24963\/ijcai.2020\/64"},{"key":"9821_CR130","volume-title":"Artificial intelligence act","author":"T Madiega","year":"2021","unstructured":"Madiega, T. (2021). Artificial intelligence act. 
European Parliament: European Parliamentary Research Service."},{"key":"9821_CR131","unstructured":"Manolache, A., Brad, F., & Burceanu, E., et\u00a0al. (2021). Transferring bert-like transformers\u2019 knowledge for authorship verification. arXiv preprint arXiv:2112.05125"},{"key":"9821_CR132","unstructured":"Manolache, A., Brad, F., & Barbalau, A., et\u00a0al. (2022). Veridark: A large-scale benchmark for authorship verification on the dark web. arXiv:2207.03477"},{"issue":"4","key":"9821_CR133","doi-asserted-by":"publisher","first-page":"754","DOI":"10.1016\/j.clsr.2018.05.017","volume":"34","author":"A Mantelero","year":"2018","unstructured":"Mantelero, A. (2018). Ai and big data: A blueprint for a human rights, social and ethical impact assessment. Computer Law & Security Review, 34(4), 754\u2013772.","journal-title":"Computer Law & Security Review"},{"key":"9821_CR134","doi-asserted-by":"publisher","unstructured":"Martin, N., Friedewald, M., & Schiering, I., et\u00a0al. (2020a). The Data Protection Impact Assessment According to Article 35 GDPR. Fraunhofer Verlag, https:\/\/doi.org\/10.24406\/publica-fhg-300244, https:\/\/publica.fraunhofer.de\/handle\/publica\/300244","DOI":"10.24406\/publica-fhg-300244"},{"key":"9821_CR135","unstructured":"Martin, N., Friedewald, M., & Schiering, I., et\u00a0al. (2020b). The data protection impact assessment according to article 35 gdpr"},{"key":"9821_CR136","doi-asserted-by":"publisher","first-page":"49","DOI":"10.1016\/j.future.2020.10.020","volume":"116","author":"R Mateless","year":"2021","unstructured":"Mateless, R., Tsur, O., & Moskovitch, R. (2021). Pkg2vec: Hierarchical package embedding for code authorship attribution. Future Generation Computer Systems, 116, 49\u201360.","journal-title":"Future Generation Computer Systems"},{"key":"9821_CR137","doi-asserted-by":"crossref","unstructured":"Mehrabi, N., Morstatter, F., & Saxena, N., et\u00a0al. (2022). A survey on bias and fairness in machine learning. 
arXiv:1908.09635","DOI":"10.1145\/3457607"},{"issue":"11","key":"9821_CR138","doi-asserted-by":"publisher","first-page":"501","DOI":"10.1038\/s42256-019-0114-4","volume":"1","author":"B Mittelstadt","year":"2019","unstructured":"Mittelstadt, B. (2019). Principles alone cannot guarantee ethical ai. Nature Machine Intelligence, 1(11), 501\u2013507. https:\/\/doi.org\/10.1038\/s42256-019-0114-4","journal-title":"Nature Machine Intelligence"},{"key":"9821_CR139","doi-asserted-by":"crossref","unstructured":"Mohsen, A.M., El-Makky, N.M., & Ghanem, N.M. (2016). Author identification using deep learning. 2016 15th IEEE International Conference on Machine Learning and Applications (ICMLA) pp 898\u2013903","DOI":"10.1109\/ICMLA.2016.0161"},{"key":"9821_CR140","unstructured":"Mollen, A. (2023). New study highlights crucial role of trade unions for algorithmic transparency and accountability in the world of work - AlgorithmWatch. https:\/\/algorithmwatch.org\/en\/study-trade-unions-algorithmic-transparency\/, [Accessed 17-Jul-2023]"},{"key":"9821_CR141","doi-asserted-by":"publisher","unstructured":"Mondschein, C. F., & Monda, C. (2019). The EU\u2019s General Data Protection Regulation (GDPR) in a Research Context. Springer International Publishing, Cham, pp 55\u201371. https:\/\/doi.org\/10.1007\/978-3-319-99713-1_5","DOI":"10.1007\/978-3-319-99713-1_5"},{"issue":"2","key":"9821_CR142","doi-asserted-by":"publisher","first-page":"159","DOI":"10.1007\/s43681-020-00014-3","volume":"1","author":"TG Moraes","year":"2021","unstructured":"Moraes, T. G., Almeida, E. C., & de Pereira, J. R. L. (2021). Smile, you are being identified! Risks and measures for the use of facial recognition in (semi-)public spaces. AI and Ethics, 1(2), 159\u2013172. https:\/\/doi.org\/10.1007\/s43681-020-00014-3","journal-title":"AI and Ethics"},{"key":"9821_CR143","doi-asserted-by":"publisher","unstructured":"Murauer, B., & Specht, G. (2021a). 
Developing a benchmark for reducing data bias in authorship attribution. In: Proceedings of the 2nd Workshop on Evaluation and Comparison of NLP Systems. Association for Computational Linguistics, Punta Cana, Dominican Republic, pp 179\u2013188, https:\/\/doi.org\/10.18653\/v1\/2021.eval4nlp-1.18, https:\/\/aclanthology.org\/2021.eval4nlp-1.18","DOI":"10.18653\/v1\/2021.eval4nlp-1.18"},{"key":"9821_CR144","doi-asserted-by":"crossref","unstructured":"Murauer, B., & Specht, G. (2021b). Developing a benchmark for reducing data bias in authorship attribution. In: Proceedings of the 2nd Workshop on Evaluation and Comparison of NLP Systems, pp 179\u2013188","DOI":"10.18653\/v1\/2021.eval4nlp-1.18"},{"key":"9821_CR145","first-page":"225","volume":"8","author":"A Neme","year":"2011","unstructured":"Neme, A., Lugo, B., & Cervera, A. (2011). Authorship attribution as a case of anomaly detection: A neural network model. Int J Hybrid Intell Syst, 8, 225\u2013235.","journal-title":"Int J Hybrid Intell Syst"},{"key":"9821_CR146","doi-asserted-by":"publisher","DOI":"10.1016\/j.clsr.2023.105798","volume":"48","author":"RJ Neuwirth","year":"2023","unstructured":"Neuwirth, R. J. (2023). Prohibited artificial intelligence practices in the proposed eu artificial intelligence act (aia). Computer Law & Security Review, 48, 105798.","journal-title":"Computer Law & Security Review"},{"key":"9821_CR147","doi-asserted-by":"publisher","unstructured":"Nirkhi, S., & Dharaskar, Dr.R.V. (2013). Comparative study of authorship identification techniques for cyber forensics analysis. International Journal of Advanced Computer Science and Applications 4(5). https:\/\/doi.org\/10.14569\/IJACSA.2013.040505,","DOI":"10.14569\/IJACSA.2013.040505"},{"issue":"2128","key":"9821_CR148","doi-asserted-by":"publisher","first-page":"20170358","DOI":"10.1098\/rsta.2017.0358","volume":"376","author":"K Nissim","year":"2018","unstructured":"Nissim, K., & Wood, A. (2018). Is privacy privacy? 
Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2128), 20170358.","journal-title":"Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences"},{"key":"9821_CR149","unstructured":"OECD (2013) OECD Legal Instruments \u2014 legalinstruments.oecd.org. https:\/\/legalinstruments.oecd.org\/en\/instruments\/OECD-LEGAL-0188, [Accessed 20-Jul-2023]"},{"key":"9821_CR150","doi-asserted-by":"publisher","unstructured":"Panov, V., Kovalchuk, M., Filatova, A., et al. (2022). Mucaat: Multilingual contextualized authorship anonymization of texts from social networks. Procedia Computer Science, 212, 322\u2013329. https:\/\/doi.org\/10.1016\/j.procs.2022.11.016, https:\/\/www.sciencedirect.com\/science\/article\/pii\/S1877050922017070, 11th International Young Scientist Conference on Computational Science","DOI":"10.1016\/j.procs.2022.11.016"},{"key":"9821_CR151","unstructured":"Plank, B., Hovy, D., & McDonald, R., et\u00a0al. (2014). Adapting taggers to Twitter with not-so-distant supervision. In: Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers. Dublin City University and Association for Computational Linguistics, Dublin, Ireland, pp 1783\u20131792, https:\/\/aclanthology.org\/C14-1168"},{"key":"9821_CR152","unstructured":"Potthast, M., Hagen, M., Stein, B. (2016). Author obfuscation: Attacking the state of the art in authorship verification. In: Conference and Labs of the Evaluation Forum"},{"key":"9821_CR153","unstructured":"Powers, D.M. (2008). Evaluation evaluation. In: ECAI 2008. IOS Press, p 843\u2013844"},{"key":"9821_CR154","doi-asserted-by":"publisher","unstructured":"Prabhu, A., Dognin, C., & Singh, M. (2019). Sampling bias in deep active classification: An empirical study. 
In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Association for Computational Linguistics, Hong Kong, China, pp 4058\u20134068, https:\/\/doi.org\/10.18653\/v1\/D19-1417, https:\/\/aclanthology.org\/D19-1417","DOI":"10.18653\/v1\/D19-1417"},{"key":"9821_CR155","doi-asserted-by":"publisher","unstructured":"Prasad, S.N., Narsimha, V., & Reddy, P.V., et\u00a0al. (2015). Influence of lexical, syntactic and structural features and their combination on authorship attribution for telugu text. Procedia Computer Science 48:58\u201364. https:\/\/doi.org\/10.1016\/j.procs.2015.04.110, https:\/\/www.sciencedirect.com\/science\/article\/pii\/S1877050915006195, international Conference on Computer, Communication and Convergence (ICCC 2015)","DOI":"10.1016\/j.procs.2015.04.110"},{"issue":"3","key":"9821_CR156","doi-asserted-by":"publisher","first-page":"699","DOI":"10.1007\/s43681-023-00258-9","volume":"3","author":"E Prem","year":"2023","unstructured":"Prem, E. (2023). From ethical ai frameworks to tools: a review of approaches. AI and Ethics, 3(3), 699\u2013716. https:\/\/doi.org\/10.1007\/s43681-023-00258-9","journal-title":"AI and Ethics"},{"key":"9821_CR157","unstructured":"Procter, R.N., Rouncefield, M., & Tolmie, P. (2020). Accounts, accountability and agency for safe and ethical ai. ArXiv abs\/2010.01316"},{"issue":"4","key":"9821_CR158","doi-asserted-by":"publisher","first-page":"325","DOI":"10.1023\/A:1024405716529","volume":"16","author":"CM Pyevich","year":"2003","unstructured":"Pyevich, C. M., Newman, E., & Daleiden, E. (2003). The relationship among cognitive schemas, job-related traumatic exposure, and posttraumatic stress disorder in journalists. 
Journal of Traumatic Stress: Official Publication of the International Society for Traumatic Stress Studies, 16(4), 325\u2013328.","journal-title":"Journal of Traumatic Stress: Official Publication of the International Society for Traumatic Stress Studies"},{"key":"9821_CR159","doi-asserted-by":"publisher","unstructured":"Qian, K., Danilevsky, M., & Katsis, Y., et\u00a0al. (2021). Xnlp: A living survey for xai research in natural language processing. In: 26th International Conference on Intelligent User Interfaces - Companion. Association for Computing Machinery, New York, NY, USA, IUI \u201921 Companion, p 78\u201380, https:\/\/doi.org\/10.1145\/3397482.3450728,","DOI":"10.1145\/3397482.3450728"},{"issue":"8","key":"9821_CR160","first-page":"9","volume":"1","author":"A Radford","year":"2019","unstructured":"Radford, A., Wu, J., Child, R., et al. (2019). Language models are unsupervised multitask learners. OpenAI blog, 1(8), 9.","journal-title":"OpenAI blog"},{"key":"9821_CR161","doi-asserted-by":"crossref","unstructured":"Rawal, A., McCoy, J., & Rawat, D.B., et\u00a0al. (2021). Recent advances in trustworthy explainable artificial intelligence: Status, challenges and perspectives. IEEE Transactions on Artificial Intelligence PP:1\u20131","DOI":"10.36227\/techrxiv.17054396"},{"key":"9821_CR162","first-page":"1","volume":"25","author":"P Regulation","year":"2018","unstructured":"Regulation, P. (2018). General data protection regulation. Intouch, 25, 1\u20135.","journal-title":"General data protection regulation. Intouch"},{"key":"9821_CR163","volume-title":"Study to support an impact assessment of regulatory requirements for artificial intelligence in europe","author":"A Renda","year":"2021","unstructured":"Renda, A., Arroyo, J., Fanni, R., et al. (2021). Study to support an impact assessment of regulatory requirements for artificial intelligence in europe. Brussels."},{"key":"9821_CR164","doi-asserted-by":"crossref","unstructured":"Rudin, C.(2019). 
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. https:\/\/arxiv.org\/abs\/1811.10154, arXiv:1811.10154","DOI":"10.1038\/s42256-019-0048-x"},{"issue":"3","key":"9821_CR165","doi-asserted-by":"publisher","first-page":"8","DOI":"10.1145\/1764810.1764814","volume":"35","author":"NB Ruparelia","year":"2010","unstructured":"Ruparelia, N. B. (2010). Software development lifecycle models. SIGSOFT Softw Eng Notes, 35(3), 8\u201313. https:\/\/doi.org\/10.1145\/1764810.1764814","journal-title":"SIGSOFT Softw Eng Notes"},{"key":"9821_CR166","doi-asserted-by":"publisher","unstructured":"Saeed, W., & Omlin, C. (2023). Explainable ai (xai): A systematic meta-survey of current challenges and future opportunities. Knowledge-Based Systems, 263, 110273. https:\/\/doi.org\/10.1016\/j.knosys.2023.110273, https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0950705123000230","DOI":"10.1016\/j.knosys.2023.110273"},{"key":"9821_CR167","doi-asserted-by":"publisher","first-page":"58080","DOI":"10.1109\/ACCESS.2020.2982538","volume":"8","author":"MU Salur","year":"2020","unstructured":"Salur, M. U., & Aydin, I. (2020). A novel hybrid deep learning model for sentiment classification. IEEE Access, 8, 58080\u201358093.","journal-title":"IEEE Access"},{"key":"9821_CR168","doi-asserted-by":"publisher","unstructured":"Sapkota, U., Bethard, S., & Montes, M., et\u00a0al. (2015). Not all character n-grams are created equal: A study in authorship attribution. In: Mihalcea R, Chai J, Sarkar A (eds) Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 
Association for Computational Linguistics, Denver, Colorado, pp 93\u2013102, https:\/\/doi.org\/10.3115\/v1\/N15-1010, https:\/\/aclanthology.org\/N15-1010","DOI":"10.3115\/v1\/N15-1010"},{"key":"9821_CR169","doi-asserted-by":"crossref","unstructured":"Saxena, V., Ashpole, B., & van Dijck, G., et al. (2023a). IDTraffickers: An authorship attribution dataset to link and connect potential human-trafficking operations on text escort advertisements. arXiv:2310.05484","DOI":"10.18653\/v1\/2023.emnlp-main.524"},{"key":"9821_CR170","doi-asserted-by":"crossref","unstructured":"Saxena, V., Rethmeier, N., & van Dijck, G., et al. (2023b). VendorLink: An NLP approach for identifying & linking vendor migrants & potential aliases on Darknet markets. In: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Toronto, Canada, pp 8619\u20138639, https:\/\/aclanthology.org\/2023.acl-long.481","DOI":"10.18653\/v1\/2023.acl-long.481"},{"key":"9821_CR171","doi-asserted-by":"crossref","unstructured":"Saxon, M.S., Levy, S., & Wang, X., et al. (2021). Modeling disclosive transparency in nlp application descriptions. In: Conference on Empirical Methods in Natural Language Processing","DOI":"10.18653\/v1\/2021.emnlp-main.153"},{"key":"9821_CR172","doi-asserted-by":"publisher","first-page":"153","DOI":"10.1023\/A:1022653209073","volume":"10","author":"C Schaffer","year":"1993","unstructured":"Schaffer, C. (1993). Overfitting avoidance as bias. Machine Learning, 10, 153\u2013178.","journal-title":"Machine Learning"},{"key":"9821_CR173","unstructured":"Sennewald, B., Herpers, R., & H\u00fclsmann, M., et al. (2020). Voting for authorship attribution applied to dark web data. 
In: Proceedings of the 30th Annual International Conference on Computer Science and Software Engineering, pp 217\u2013226"},{"key":"9821_CR174","doi-asserted-by":"crossref","unstructured":"Shah, D., Schwartz, H.A., & Hovy, D. (2019). Predictive biases in natural language processing models: A conceptual framework and overview. arXiv preprint arXiv:1912.11078","DOI":"10.18653\/v1\/2020.acl-main.468"},{"key":"9821_CR175","doi-asserted-by":"publisher","unstructured":"Shah, D.S., Schwartz, H.A., & Hovy, D. (2020). Predictive biases in natural language processing models: A conceptual framework and overview. In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Online, pp 5248\u20135264, https:\/\/doi.org\/10.18653\/v1\/2020.acl-main.468, https:\/\/aclanthology.org\/2020.acl-main.468","DOI":"10.18653\/v1\/2020.acl-main.468"},{"issue":"15","key":"9821_CR176","doi-asserted-by":"publisher","first-page":"2886","DOI":"10.1002\/sec.1485","volume":"9","author":"JA Shamsi","year":"2016","unstructured":"Shamsi, J. A., Zeadally, S., Sheikh, F., et al. (2016). Attribution in cyberspace: Techniques and legal implications. Security and Communication Networks, 9(15), 2886\u20132900.","journal-title":"Security and Communication Networks"},{"key":"9821_CR177","doi-asserted-by":"crossref","unstructured":"Shmueli, B., Fell, J., & Ray, S., et\u00a0al. (2021). Beyond fair pay: Ethical implications of nlp crowdsourcing. arXiv:2104.10097","DOI":"10.18653\/v1\/2021.naacl-main.295"},{"key":"9821_CR178","doi-asserted-by":"publisher","first-page":"443","DOI":"10.37419\/JPL.V4.I5.2","volume":"4","author":"J Shook","year":"2017","unstructured":"Shook, J., Smith, R., & Antonio, A. (2017). Transparency and fairness in machine learning applications. 
Tex A &M J Prop L, 4, 443.","journal-title":"Tex A &M J Prop L"},{"key":"9821_CR179","doi-asserted-by":"crossref","unstructured":"Shrestha, P., Sierra, S., & Gonz\u00e1lez, F., et\u00a0al. (2017). Convolutional neural networks for authorship attribution of short texts. In: Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers. Association for Computational Linguistics, Valencia, Spain, pp 669\u2013674, https:\/\/aclanthology.org\/E17-2106","DOI":"10.18653\/v1\/E17-2106"},{"key":"9821_CR180","doi-asserted-by":"publisher","unstructured":"Silva, K., Can, B., & Blain, F., et\u00a0al. (2023). Authorship attribution of late 19th century novels using GAN-BERT. In: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop). Association for Computational Linguistics, Toronto, Canada, pp 310\u2013320, https:\/\/doi.org\/10.18653\/v1\/2023.acl-srw.44, https:\/\/aclanthology.org\/2023.acl-srw.44","DOI":"10.18653\/v1\/2023.acl-srw.44"},{"key":"9821_CR181","doi-asserted-by":"publisher","unstructured":"Simbeck, K. (2022). Facct-check on ai regulation: Systematic evaluation of ai regulation on the example of the legislation on the use of ai in the public sector in the german federal state of schleswig-holstein. In: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency. Association for Computing Machinery, New York, NY, USA, FAccT \u201922, p 89\u201396, https:\/\/doi.org\/10.1145\/3531146.3533076,","DOI":"10.1145\/3531146.3533076"},{"key":"9821_CR182","doi-asserted-by":"crossref","unstructured":"Sion, L., Van\u00a0Landuyt, D., & Joosen, W. (2021). An overview of runtime data protection enforcement approaches. 
In: 2021 IEEE European Symposium on Security and Privacy Workshops (EuroS &PW), IEEE, pp 351\u2013358","DOI":"10.1109\/EuroSPW54576.2021.00044"},{"key":"9821_CR183","doi-asserted-by":"crossref","unstructured":"Sj\u00f6berg, C.M. (2021). Legal ai from a privacy point of view: Data protection and transparency in focus. Digital Human Sciences p 181","DOI":"10.16993\/bbk.h"},{"key":"9821_CR184","unstructured":"S\u00f8gaard, A., Plank, B., & Hovy, D. (2014). Selection bias, label bias, and bias in ground truth. In: Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Tutorial Abstracts, pp 11\u201313"},{"key":"9821_CR185","doi-asserted-by":"publisher","unstructured":"Solanke, A. A. (2022). Explainable digital forensics ai: Towards mitigating distrust in ai-based digital forensics analysis using interpretable models. Forensic Science International: Digital Investigation, 42, 301403. https:\/\/doi.org\/10.1016\/j.fsidi.2022.301403, https:\/\/www.sciencedirect.com\/science\/article\/pii\/S2666281722000841, proceedings of the Twenty-Second Annual DFRWS USA","DOI":"10.1016\/j.fsidi.2022.301403"},{"key":"9821_CR186","doi-asserted-by":"publisher","first-page":"477","DOI":"10.2307\/40041279","volume":"154","author":"DJ Solove","year":"2005","unstructured":"Solove, D. J. (2005). A taxonomy of privacy. U Pa l Rev, 154, 477.","journal-title":"U Pa l Rev"},{"issue":"2","key":"9821_CR187","doi-asserted-by":"publisher","first-page":"1427","DOI":"10.1007\/s10462-022-10204-6","volume":"56","author":"S Sousa","year":"2023","unstructured":"Sousa, S., & Kern, R. (2023). How to keep text private? a systematic review of deep learning methods for privacy-preserving natural language processing. Artificial Intelligence Review, 56(2), 1427\u20131492. 
https:\/\/doi.org\/10.1007\/s10462-022-10204-6","journal-title":"Artificial Intelligence Review"},{"issue":"2","key":"9821_CR188","doi-asserted-by":"publisher","first-page":"45","DOI":"10.1145\/3466132.3466134","volume":"19","author":"R Srinivasan","year":"2021","unstructured":"Srinivasan, R., & Chander, A. (2021). Biases in ai systems: A survey for practitioners. Queue, 19(2), 45\u201364. https:\/\/doi.org\/10.1145\/3466132.3466134","journal-title":"Queue"},{"key":"9821_CR189","doi-asserted-by":"publisher","unstructured":"Stamatatos, E. (2009). A survey of modern authorship attribution methods. Journal of the American Society for Information Science and Technology, 60(3), 538\u2013556. https:\/\/doi.org\/10.1002\/asi.21001, https:\/\/onlinelibrary.wiley.com\/doi\/abs\/10.1002\/asi.21001","DOI":"10.1002\/asi.21001"},{"key":"9821_CR190","unstructured":"Stamatatos, E., et\u00a0al. (2006). Ensemble-based author identification using character n-grams. In: Proceedings of the 3rd International Workshop on Text-based Information Retrieval, pp 41\u201346"},{"issue":"8","key":"9821_CR191","doi-asserted-by":"publisher","first-page":"1159","DOI":"10.1038\/s41431-019-0386-5","volume":"27","author":"C Staunton","year":"2019","unstructured":"Staunton, C., Slokenberga, S., & Mascalzoni, D. (2019). The gdpr and the research exemption: Considerations on the necessary safeguards for research biobanks. European Journal of Human Genetics, 27(8), 1159\u20131167.","journal-title":"European Journal of Human Genetics"},{"key":"9821_CR192","doi-asserted-by":"publisher","first-page":"11974","DOI":"10.1109\/ACCESS.2021.3051315","volume":"9","author":"I Stepin","year":"2021","unstructured":"Stepin, I., Alonso, J. M., Catala, A., et al. (2021). A survey of contrastive and counterfactual explanation generation methods for explainable artificial intelligence. 
IEEE Access, 9, 11974\u201312001.","journal-title":"IEEE Access"},{"key":"9821_CR193","doi-asserted-by":"publisher","unstructured":"Stevens, A., Deruyck, P., & Veldhoven, Z.V., et\u00a0al. (2020). Explainability and fairness in machine learning: Improve fair end-to-end lending for kiva. In: 2020 IEEE Symposium Series on Computational Intelligence (SSCI), pp 1241\u20131248, https:\/\/doi.org\/10.1109\/SSCI47803.2020.9308371","DOI":"10.1109\/SSCI47803.2020.9308371"},{"key":"9821_CR194","doi-asserted-by":"crossref","unstructured":"Strubell, E., Ganesh, A., & McCallum, A. (2019). Energy and policy considerations for deep learning in nlp. https:\/\/arxiv.org\/abs\/1906.02243, arXiv:1906.02243","DOI":"10.18653\/v1\/P19-1355"},{"key":"9821_CR195","doi-asserted-by":"publisher","unstructured":"Suresh, H., & Guttag, J. (2021). A framework for understanding sources of harm throughout the machine learning life cycle. In: Equity and Access in Algorithms, Mechanisms, and Optimization. ACM, https:\/\/doi.org\/10.1145\/3465416.3483305,","DOI":"10.1145\/3465416.3483305"},{"key":"9821_CR196","unstructured":"Sweeney, L., Crosas, M., & Bar-Sinai, M. (2015). Sharing sensitive data with confidence: The datatags system. Technology Science"},{"key":"9821_CR197","doi-asserted-by":"crossref","unstructured":"Tabassi, E. (2023). Artificial intelligence risk management framework (ai rmf 1.0)","DOI":"10.6028\/NIST.AI.100-1"},{"key":"9821_CR198","doi-asserted-by":"publisher","DOI":"10.1016\/j.clsr.2021.105541","volume":"41","author":"A Tamo-Larrieux","year":"2021","unstructured":"Tamo-Larrieux, A. (2021). Decision-making by machines: Is the \u2018law of everything\u2019 enough? Computer Law & Security Review, 41, 105541.","journal-title":"Computer Law & Security Review"},{"key":"9821_CR199","doi-asserted-by":"crossref","unstructured":"Tam\u00f2-Larrieux, A. (2018). Designing for Privacy and its Legal Framework \u2014 link.springer.com. 
https:\/\/link.springer.com\/book\/10.1007\/978-3-319-98624-1, [Accessed 18-Jul-2023]","DOI":"10.21257\/sg.89"},{"key":"9821_CR200","doi-asserted-by":"publisher","unstructured":"Theophilo, A., Padilha, R., & Andal\u00f3, F.A., et\u00a0al. (2022). Explainable artificial intelligence for authorship attribution on social media. In: ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp 2909\u20132913, https:\/\/doi.org\/10.1109\/ICASSP43922.2022.9746262","DOI":"10.1109\/ICASSP43922.2022.9746262"},{"key":"9821_CR201","unstructured":"Tubella, A.A., Theodorou, A., & Dignum, V., et\u00a0al. (2019). Governance by glass-box: Implementing transparent moral bounds for ai behaviour. arXiv preprint arXiv:1905.04994"},{"issue":"1","key":"9821_CR202","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3606274.3606276","volume":"25","author":"A Uchendu","year":"2023","unstructured":"Uchendu, A., Le, T., & Lee, D. (2023). Attribution and obfuscation of neural text authorship: A data mining perspective. ACM SIGKDD Explorations Newsletter, 25(1), 1\u201318.","journal-title":"ACM SIGKDD Explorations Newsletter"},{"key":"9821_CR203","volume-title":"The datafied society","author":"K Van Es","year":"2017","unstructured":"Van Es, K., & Sch\u00e4fer, M. T. (2017). The datafied society. Amsterdam University Press."},{"issue":"3","key":"9821_CR204","doi-asserted-by":"publisher","first-page":"213","DOI":"10.1007\/s43681-021-00043-6","volume":"1","author":"A Van Wynsberghe","year":"2021","unstructured":"Van Wynsberghe, A. (2021). Sustainable ai: Ai for sustainability and the sustainability of ai. AI and Ethics, 1(3), 213\u2013218.","journal-title":"AI and Ethics"},{"key":"9821_CR205","doi-asserted-by":"publisher","first-page":"10","DOI":"10.1007\/978-3-319-57959-7","volume-title":"The eu general data protection regulation (gdpr). A practical guide","author":"P Voigt","year":"2017","unstructured":"Voigt, P., & Von dem Bussche, A. 
(2017). The eu general data protection regulation (gdpr). A practical guide (1st ed., pp. 10\u20135555). Springer International Publishing.","edition":"1"},{"key":"9821_CR206","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-57959-7","volume-title":"The EU General Data Protection Regulation (GDPR): A Practical Guide","author":"P Voigt","year":"2017","unstructured":"Voigt, P., & Avd, Bussche. (2017). The EU General Data Protection Regulation (GDPR): A Practical Guide (1st ed.). Springer Publishing Company.","edition":"1"},{"key":"9821_CR207","unstructured":"Voisin, G., Boardman, R., & Assion, S., et\u00a0al.(2020). Ico, cnil, german and spanish dpa revised cookies guidelines: Convergence and divergence. Recuperado de https:\/\/iapporg\/media\/pdf\/resource_center\/CNIL_ICO_chartpdf"},{"key":"9821_CR208","doi-asserted-by":"publisher","unstructured":"Wegmann, A., Schraagen, M., & Nguyen, D. (2022). Same author or just same topic? towards content-independent style representations. In: Proceedings of the 7th Workshop on Representation Learning for NLP. Association for Computational Linguistics, Dublin, Ireland, pp 249\u2013268, https:\/\/doi.org\/10.18653\/v1\/2022.repl4nlp-1.26, https:\/\/aclanthology.org\/2022.repl4nlp-1.26","DOI":"10.18653\/v1\/2022.repl4nlp-1.26"},{"key":"9821_CR209","unstructured":"Weidinger, L., Mellor, J., & Rauh, M., et\u00a0al. (2021). Ethical and social risks of harm from language models. arXiv:2112.04359"},{"key":"9821_CR210","doi-asserted-by":"publisher","first-page":"75","DOI":"10.1613\/jair.1.13196","volume":"74","author":"L Weinberg","year":"2022","unstructured":"Weinberg, L. (2022). Rethinking fairness: An interdisciplinary survey of critiques of hegemonic ml fairness approaches. Journal of Artificial Intelligence Research, 74, 75\u2013109.","journal-title":"Journal of Artificial Intelligence Research"},{"key":"9821_CR211","doi-asserted-by":"crossref","unstructured":"Wolfe, R., & Caliskan, A. (2021). 
Low frequency names exhibit bias and overfitting in contextualizing language models. arXiv preprint arXiv:2110.00672","DOI":"10.18653\/v1\/2021.emnlp-main.41"},{"key":"9821_CR212","first-page":"37","volume":"1","author":"D Wright","year":"2014","unstructured":"Wright, D., & May, A. (2014). Identifying idiolect in forensic authorship attribution: an n-gram textbite approach. Language and Law (Linguagem e Direito), 1, 37\u201369.","journal-title":"Language and Law (Linguagem e Direito)"},{"key":"9821_CR213","first-page":"795","volume":"4","author":"CJ Wu","year":"2022","unstructured":"Wu, C. J., Raghavendra, R., Gupta, U., et al. (2022). Sustainable ai: Environmental implications, challenges and opportunities. Proceedings of Machine Learning and Systems, 4, 795\u2013813.","journal-title":"Proceedings of Machine Learning and Systems"},{"key":"9821_CR214","doi-asserted-by":"publisher","first-page":"339","DOI":"10.1007\/978-3-642-55415-5_28","volume-title":"ICT Systems Security and Privacy Protection","author":"M Yang","year":"2014","unstructured":"Yang, M., & Chow, K. P., et al. (2014). Authorship attribution for forensic investigation with thousands of authors. In N. Cuppens-Boulahia, F. Cuppens, & S. Jajodia (Eds.), ICT Systems Security and Privacy Protection (pp. 339\u2013350). Berlin Heidelberg, Berlin, Heidelberg: Springer."},{"key":"9821_CR215","doi-asserted-by":"crossref","unstructured":"Yang, M., & Chow, K.P. (2014b). Authorship attribution for forensic investigation with thousands of authors. In: ICT Systems Security and Privacy Protection: 29th IFIP TC 11 International Conference, SEC 2014, Marrakech, Morocco, June 2-4, 2014. Proceedings 29, Springer, pp 339\u2013350","DOI":"10.1007\/978-3-642-55415-5_28"},{"key":"9821_CR216","doi-asserted-by":"crossref","unstructured":"Yenduri, G., M R, G CS, & et\u00a0al. (2023). 
Generative pre-trained transformer: A comprehensive review on enabling technologies, potential applications, emerging challenges, and future directions. arXiv:2305.10435","DOI":"10.1109\/ACCESS.2024.3389497"},{"key":"9821_CR217","doi-asserted-by":"publisher","unstructured":"Young, M., Katell, M., & Krafft, P. (2022). Confronting power and corporate capture at the facct conference. In: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency. Association for Computing Machinery, New York, NY, USA, FAccT \u201922, p 1375\u20131386, https:\/\/doi.org\/10.1145\/3531146.3533194,","DOI":"10.1145\/3531146.3533194"},{"key":"9821_CR218","doi-asserted-by":"publisher","unstructured":"Yuluce, I., & Dalk\u0131\u00e7, F. (2022). Author identification with machine learning algorithms. International Journal of Multidisciplinary Studies and Innovative Technologies 6:45. https:\/\/doi.org\/10.36287\/ijmsit.6.1.45","DOI":"10.36287\/ijmsit.6.1.45"},{"key":"9821_CR219","doi-asserted-by":"publisher","unstructured":"Zafar, M.B., Valera, I., & Rodriguez, M.G., et\u00a0al. (2017). Fairness beyond disparate treatment & disparate impact: Learning classification without disparate mistreatment. In: Proceedings of the 26th International Conference on World Wide Web. International World Wide Web Conferences Steering Committee, https:\/\/doi.org\/10.1145\/3038912.3052660,","DOI":"10.1145\/3038912.3052660"},{"key":"9821_CR220","doi-asserted-by":"publisher","unstructured":"Zhai, W., Rusert, J., & Shafiq, Z., et\u00a0al. (2022). Adversarial authorship attribution for deobfuscation. In: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 
Association for Computational Linguistics, Dublin, Ireland, pp 7372\u20137384, https:\/\/doi.org\/10.18653\/v1\/2022.acl-long.509, https:\/\/aclanthology.org\/2022.acl-long.509","DOI":"10.18653\/v1\/2022.acl-long.509"},{"issue":"1","key":"9821_CR221","doi-asserted-by":"publisher","first-page":"32","DOI":"10.26599\/IJCS.2022.9100033","volume":"7","author":"J Zhang","year":"2023","unstructured":"Zhang, J., Shu, Y., & Yu, H. (2023). Fairness in design: A framework for facilitating ethical artificial intelligence designs. International Journal of Crowd Science, 7(1), 32\u201339. https:\/\/doi.org\/10.26599\/IJCS.2022.9100033","journal-title":"International Journal of Crowd Science"},{"issue":"3","key":"9821_CR222","first-page":"1","volume":"11","author":"WE Zhang","year":"2020","unstructured":"Zhang, W. E., Sheng, Q. Z., Alhazmi, A., et al. (2020). Adversarial attacks on deep-learning models in natural language processing: A survey. ACM Transactions on Intelligent Systems and Technology (TIST), 11(3), 1\u201341.","journal-title":"ACM Transactions on Intelligent Systems and Technology (TIST)"},{"key":"9821_CR223","unstructured":"Zhang, X., Zhao, J., & LeCun, Y. (2015). Character-level convolutional networks for text classification. Advances in neural information processing systems 28"},{"key":"9821_CR224","doi-asserted-by":"publisher","unstructured":"Zhang, Y., Fan, Y., & Song, W., et\u00a0al. (2019). Your style your identity: Leveraging writing and photography styles for drug trafficker identification in darknet markets over attributed heterogeneous information network. In: The World Wide Web Conference. Association for Computing Machinery, New York, NY, USA, WWW \u201919, p 3448\u20133454, https:\/\/doi.org\/10.1145\/3308558.3313537,","DOI":"10.1145\/3308558.3313537"},{"key":"9821_CR225","doi-asserted-by":"crossref","unstructured":"Zhang, Z., Strubell, E., & Hovy, E. (2022). A survey of active learning for natural language processing. 
In: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Abu Dhabi, United Arab Emirates, pp 6166\u20136190, https:\/\/aclanthology.org\/2022.emnlp-main.414","DOI":"10.18653\/v1\/2022.emnlp-main.414"},{"key":"9821_CR226","doi-asserted-by":"publisher","first-page":"59","DOI":"10.1007\/3-540-44853-5_5","volume-title":"Intelligence and Security Informatics","author":"R Zheng","year":"2003","unstructured":"Zheng, R., Qin, Y., Huang, Z., et al. (2003). Authorship analysis in cybercrime investigation. In H. Chen, R. Miranda, D. D. Zeng, et al. (Eds.), Intelligence and Security Informatics (pp. 59\u201373). Springer."}],"container-title":["Ethics and Information Technology"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10676-025-09821-w.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10676-025-09821-w\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10676-025-09821-w.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,9,6]],"date-time":"2025-09-06T10:03:04Z","timestamp":1757152984000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10676-025-09821-w"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,3,22]]},"references-count":226,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2025,6]]}},"alternative-id":["9821"],"URL":"https:\/\/doi.org\/10.1007\/s10676-025-09821-w","relation":{},"ISSN":["1388-1957","1572-8439"],"issn-type":[{"value":"1388-1957","type":"print"},{"value":"1572-8439","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,3,22]]},"assertion
":[{"value":"22 March 2025","order":1,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors have no Conflict of interest to declare relevant to this article\u2019s content.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}],"article-number":"16"}}