{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,5,7]],"date-time":"2026-05-07T13:16:04Z","timestamp":1778159764362,"version":"3.51.4"},"reference-count":117,"publisher":"Springer Science and Business Media LLC","issue":"4","license":[{"start":{"date-parts":[[2019,12,11]],"date-time":"2019-12-11T00:00:00Z","timestamp":1576022400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2019,12,11]],"date-time":"2019-12-11T00:00:00Z","timestamp":1576022400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"name":"Digital Catapult"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Sci Eng Ethics"],"published-print":{"date-parts":[[2020,8]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>The debate about the ethical implications of Artificial Intelligence dates from the 1960s (Samuel in Science, 132(3429):741\u2013742, 1960. <jats:ext-link xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" ext-link-type=\"doi\" xlink:href=\"https:\/\/doi.org\/10.1126\/science.132.3429.741\">10.1126\/science.132.3429.741<\/jats:ext-link>; Wiener in Cybernetics: or control and communication in the animal and the machine, MIT Press, New York, 1961). However, in recent years symbolic AI has been complemented and sometimes replaced by (Deep) Neural Networks and Machine Learning (ML) techniques. This has vastly increased its potential utility and impact on society, with the consequence that the ethical debate has gone mainstream. 
Such a debate has primarily focused on principles\u2014the \u2018what\u2019 of AI ethics (beneficence, non-maleficence, autonomy, justice and explicability)\u2014rather than on practices, the \u2018how.\u2019 Awareness of the potential issues is increasing at a fast rate, but the AI community\u2019s ability to take action to mitigate the associated risks is still at its infancy. Our intention in presenting this research is to contribute to closing the gap between principles and practices by constructing a typology that may help practically-minded developers apply ethics at each stage of the Machine Learning development pipeline, and to signal to researchers where further work is needed. The focus is exclusively on Machine Learning, but it is hoped that the results of this research may be easily applicable to other branches of AI. The article outlines the research method for creating this typology, the initial findings, and provides a summary of future research needs.<\/jats:p>","DOI":"10.1007\/s11948-019-00165-5","type":"journal-article","created":{"date-parts":[[2019,12,16]],"date-time":"2019-12-16T10:12:10Z","timestamp":1576491130000},"page":"2141-2168","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":656,"title":["From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into 
Practices"],"prefix":"10.1007","volume":"26","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-5221-4770","authenticated-orcid":false,"given":"Jessica","family":"Morley","sequence":"first","affiliation":[]},{"given":"Luciano","family":"Floridi","sequence":"additional","affiliation":[]},{"given":"Libby","family":"Kinsey","sequence":"additional","affiliation":[]},{"given":"Anat","family":"Elhalal","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2019,12,11]]},"reference":[{"key":"165_CR1","doi-asserted-by":"publisher","unstructured":"Abdul, A., Vermeulen, J., Wang, D., Lim, B. Y., & Kankanhalli, M. (2018). Trends and trajectories for explainable, accountable and intelligible systems: An HCI research agenda. In Proceedings of the 2018 CHI conference on human factors in computing systems\u2014CHI\u201918 (pp. 1\u201318). https:\/\/doi.org\/10.1145\/3173574.3174156.","DOI":"10.1145\/3173574.3174156"},{"issue":"3","key":"165_CR2","doi-asserted-by":"publisher","first-page":"518","DOI":"10.1109\/JPROC.2018.2884923","volume":"107","author":"G Adamson","year":"2019","unstructured":"Adamson, G., Havens, J. C., & Chatila, R. (2019). Designing a value-driven future for ethical autonomous and intelligent systems. Proceedings of the IEEE, 107(3), 518\u2013525. https:\/\/doi.org\/10.1109\/JPROC.2018.2884923.","journal-title":"Proceedings of the IEEE"},{"key":"165_CR3","unstructured":"AI Now Institute Algorithmic Accountability Policy Toolkit. (2018). Retrieved from https:\/\/ainowinstitute.org\/aap-toolkit.pdf."},{"issue":"3","key":"165_CR4","doi-asserted-by":"publisher","first-page":"251","DOI":"10.1080\/09528130050111428","volume":"12","author":"C Allen","year":"2000","unstructured":"Allen, C., Varner, G., & Zinser, J. (2000). Prolegomena to any future artificial moral agent. Journal of Experimental & Theoretical Artificial Intelligence, 12(3), 251\u2013261. 
https:\/\/doi.org\/10.1080\/09528130050111428.","journal-title":"Journal of Experimental & Theoretical Artificial Intelligence"},{"key":"165_CR5","doi-asserted-by":"publisher","first-page":"161","DOI":"10.1007\/978-3-319-67280-9_9","volume-title":"Privacy technologies and policy","author":"M Alshammari","year":"2017","unstructured":"Alshammari, M., & Simpson, A. (2017). Towards a principled approach for engineering privacy by design. In E. Schweighofer, H. Leitold, A. Mitrakas, & K. Rannenberg (Eds.), Privacy technologies and policy (Vol. 10518, pp. 161\u2013177). Cham: Springer. https:\/\/doi.org\/10.1007\/978-3-319-67280-9_9."},{"issue":"2","key":"165_CR6","doi-asserted-by":"publisher","first-page":"137","DOI":"10.1007\/s10676-018-9495-z","volume":"21","author":"IF Anabo","year":"2019","unstructured":"Anabo, I. F., Elexpuru-Albizuri, I., & Villard\u00f3n-Gallego, L. (2019). Revisiting the Belmont report\u2019s ethical principles in internet-mediated research: Perspectives from disciplinary associations in the social sciences. Ethics and Information Technology, 21(2), 137\u2013149. https:\/\/doi.org\/10.1007\/s10676-018-9495-z.","journal-title":"Ethics and Information Technology"},{"issue":"3","key":"165_CR7","doi-asserted-by":"publisher","first-page":"973","DOI":"10.1177\/1461444816676645","volume":"20","author":"M Ananny","year":"2018","unstructured":"Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973\u2013989. https:\/\/doi.org\/10.1177\/1461444816676645.","journal-title":"New Media & Society"},{"issue":"1","key":"165_CR117","doi-asserted-by":"publisher","first-page":"337","DOI":"10.1515\/pjbr-2018-0024","volume":"9","author":"M Anderson","year":"2018","unstructured":"Anderson, M., & Anderson, S. L. (2018). GenEth: A general ethical dilemma analyzer. Paladyn, Journal of Behavioral Robotics, 9(1), 337\u2013357. 
https:\/\/doi.org\/10.1515\/pjbr-2018-0024.","journal-title":"Paladyn, Journal of Behavioral Robotics"},{"key":"165_CR8","doi-asserted-by":"crossref","unstructured":"Antignac, T., Sands, D., & Schneider, G. (2016). Data minimisation: A language-based approach (long version). arXiv:1611.05642 [Cs].","DOI":"10.1007\/978-3-319-58469-0_30"},{"issue":"1","key":"165_CR9","doi-asserted-by":"publisher","first-page":"59","DOI":"10.1007\/s10676-018-9447-7","volume":"20","author":"T Arnold","year":"2018","unstructured":"Arnold, T., & Scheutz, M. (2018). The \u201cbig red button\u201d is too late: An alternative model for the ethical evaluation of AI systems. Ethics and Information Technology, 20(1), 59\u201369. https:\/\/doi.org\/10.1007\/s10676-018-9447-7.","journal-title":"Ethics and Information Technology"},{"issue":"1","key":"165_CR10","doi-asserted-by":"publisher","first-page":"17","DOI":"10.1111\/phil.12025","volume":"45","author":"M Arvan","year":"2014","unstructured":"Arvan, M. (2014). A better, dual theory of human rights: A better, dual theory of human rights. The Philosophical Forum, 45(1), 17\u201347. https:\/\/doi.org\/10.1111\/phil.12025.","journal-title":"The Philosophical Forum"},{"key":"165_CR11","doi-asserted-by":"publisher","DOI":"10.1007\/s00146-018-0848-2","author":"M Arvan","year":"2018","unstructured":"Arvan, M. (2018). Mental time-travel, semantic flexibility, and A.I. ethics. AI & Society. https:\/\/doi.org\/10.1007\/s00146-018-0848-2.","journal-title":"AI & Society"},{"key":"165_CR12","unstructured":"Beijing AI Principles. (2019). Retrieved from Beijing Academy of Artificial Intelligence website. https:\/\/www.baai.ac.cn\/blog\/beijing-ai-principles."},{"key":"165_CR13","unstructured":"Bibal, A., & Fr\u00e9nay, B. (2016). 
Interpretability of machine learning models and representations: An introduction."},{"issue":"4","key":"165_CR14","doi-asserted-by":"publisher","first-page":"543","DOI":"10.1007\/s13347-017-0263-5","volume":"31","author":"R Binns","year":"2018","unstructured":"Binns, R. (2018a). Algorithmic accountability and public reason. Philosophy & Technology, 31(4), 543\u2013556. https:\/\/doi.org\/10.1007\/s13347-017-0263-5.","journal-title":"Philosophy & Technology"},{"issue":"3","key":"165_CR15","doi-asserted-by":"publisher","first-page":"73","DOI":"10.1109\/MSP.2018.2701147","volume":"16","author":"R Binns","year":"2018","unstructured":"Binns, R. (2018b). What can political philosophy teach us about algorithmic fairness? IEEE Security and Privacy, 16(3), 73\u201380. https:\/\/doi.org\/10.1109\/MSP.2018.2701147.","journal-title":"IEEE Security and Privacy"},{"key":"165_CR16","doi-asserted-by":"publisher","unstructured":"Binns, R., Van Kleek, M., Veale, M., Lyngs, U., Zhao, J., & Shadbolt, N. (2018). \u2018It\u2019s reducing a human being to a percentage\u2019: Perceptions of justice in algorithmic decisions. In Proceedings of the 2018 CHI conference on human factors in computing systems\u2014CHI\u201918 (pp. 1\u201314). https:\/\/doi.org\/10.1145\/3173574.3173951.","DOI":"10.1145\/3173574.3173951"},{"key":"165_CR17","doi-asserted-by":"publisher","DOI":"10.1007\/s10551-019-04226-4","author":"A Buhmann","year":"2019","unstructured":"Buhmann, A., Pa\u00dfmann, J., & Fieseler, C. (2019). Managing algorithmic accountability: Balancing reputational concerns, engagement strategies, and the potential of rational discourse. Journal of Business Ethics. https:\/\/doi.org\/10.1007\/s10551-019-04226-4.","journal-title":"Journal of Business Ethics"},{"issue":"1","key":"165_CR18","doi-asserted-by":"publisher","first-page":"205395171562251","DOI":"10.1177\/2053951715622512","volume":"3","author":"J Burrell","year":"2016","unstructured":"Burrell, J. (2016). 
How the machine \u2018thinks\u2019: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 205395171562251. https:\/\/doi.org\/10.1177\/2053951715622512.","journal-title":"Big Data & Society"},{"issue":"2133","key":"165_CR19","doi-asserted-by":"publisher","first-page":"20180080","DOI":"10.1098\/rsta.2018.0080","volume":"376","author":"C Cath","year":"2018","unstructured":"Cath, C. (2018). Governing Artificial Intelligence: Ethical, legal and technical opportunities and challenges. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180080. https:\/\/doi.org\/10.1098\/rsta.2018.0080.","journal-title":"Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences"},{"key":"165_CR20","doi-asserted-by":"publisher","DOI":"10.1007\/s11948-017-9901-7","author":"C Cath","year":"2017","unstructured":"Cath, C., Wachter, S., Mittelstadt, B., Taddeo, M., & Floridi, L. (2017). Artificial Intelligence and the \u2018Good Society\u2019: The US, EU, and UK approach. Science and Engineering Ethics. https:\/\/doi.org\/10.1007\/s11948-017-9901-7.","journal-title":"Science and Engineering Ethics"},{"issue":"1","key":"165_CR21","doi-asserted-by":"publisher","first-page":"155","DOI":"10.1007\/s13347-018-0304-8","volume":"31","author":"C Cath","year":"2018","unstructured":"Cath, C., Zimmer, M., Lomborg, S., & Zevenbergen, B. (2018). Association of internet researchers (AoIR) roundtable summary: Artificial Intelligence and the good society workshop proceedings. Philosophy & Technology, 31(1), 155\u2013162. https:\/\/doi.org\/10.1007\/s13347-018-0304-8.","journal-title":"Philosophy & Technology"},{"issue":"2","key":"165_CR22","doi-asserted-by":"publisher","first-page":"405","DOI":"10.1007\/s12394-010-0053-z","volume":"3","author":"A Cavoukian","year":"2010","unstructured":"Cavoukian, A., Taylor, S., & Abrams, M. E. (2010). 
Privacy by design: Essential for organizational accountability and strong business practices. Identity in the Information Society, 3(2), 405\u2013413. https:\/\/doi.org\/10.1007\/s12394-010-0053-z.","journal-title":"Identity in the Information Society"},{"key":"165_CR23","doi-asserted-by":"publisher","DOI":"10.1016\/j.clsr.2019.04.007","author":"R Clarke","year":"2019","unstructured":"Clarke, R. (2019). Principles and business processes for responsible AI. Computer Law and Security Review. https:\/\/doi.org\/10.1016\/j.clsr.2019.04.007.","journal-title":"Computer Law and Security Review"},{"issue":"1","key":"165_CR24","doi-asserted-by":"publisher","first-page":"35","DOI":"10.1007\/s11948-010-9233-3","volume":"18","author":"M Coeckelbergh","year":"2012","unstructured":"Coeckelbergh, M. (2012). Moral responsibility, technology, and experiences of the tragic: From Kierkegaard to offshore engineering. Science and Engineering Ethics, 18(1), 35\u201348. https:\/\/doi.org\/10.1007\/s11948-010-9233-3.","journal-title":"Science and Engineering Ethics"},{"key":"165_CR25","unstructured":"Cookson, C. (2018, September 6). Artificial Intelligence faces public backlash, warns scientist. Financial Times. Retrieved from https:\/\/www.ft.com\/content\/0b301152-b0f8-11e8-99ca-68cf89602132."},{"key":"165_CR26","doi-asserted-by":"crossref","unstructured":"Cowls, J., King, T., Taddeo, M., & Floridi, L. (2019). Designing AI for social good: Seven essential factors (May 15, 2019). Available at SSRN: https:\/\/ssrn.com\/abstract=.","DOI":"10.2139\/ssrn.3388669"},{"issue":"7625","key":"165_CR27","doi-asserted-by":"publisher","first-page":"311","DOI":"10.1038\/538311a","volume":"538","author":"K Crawford","year":"2016","unstructured":"Crawford, K., & Calo, R. (2016). There is a blind spot in AI research. Nature, 538(7625), 311\u2013313. 
https:\/\/doi.org\/10.1038\/538311a.","journal-title":"Nature"},{"issue":"4","key":"165_CR28","doi-asserted-by":"publisher","first-page":"499","DOI":"10.1007\/s13347-018-0337-z","volume":"31","author":"M D\u2019Agostino","year":"2018","unstructured":"D\u2019Agostino, M., & Durante, M. (2018). Introduction: The governance of algorithms. Philosophy & Technology, 31(4), 499\u2013505. https:\/\/doi.org\/10.1007\/s13347-018-0337-z.","journal-title":"Philosophy & Technology"},{"issue":"3","key":"165_CR29","doi-asserted-by":"publisher","first-page":"305","DOI":"10.1007\/s10515-014-0168-9","volume":"23","author":"LA Dennis","year":"2016","unstructured":"Dennis, L. A., Fisher, M., Lincoln, N. K., Lisitsa, A., & Veres, S. M. (2016). Practical verification of decision-making in agent-based autonomous systems. Automated Software Engineering, 23(3), 305\u2013359. https:\/\/doi.org\/10.1007\/s10515-014-0168-9.","journal-title":"Automated Software Engineering"},{"issue":"3","key":"165_CR30","doi-asserted-by":"publisher","first-page":"398","DOI":"10.1080\/21670811.2014.976411","volume":"3","author":"N Diakopoulos","year":"2015","unstructured":"Diakopoulos, N. (2015). Algorithmic accountability: Journalistic investigation of computational power structures. Digital Journalism, 3(3), 398\u2013415. https:\/\/doi.org\/10.1080\/21670811.2014.976411.","journal-title":"Digital Journalism"},{"key":"165_CR31","unstructured":"Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv:1702.08608 [Cs, Stat]."},{"key":"165_CR32","unstructured":"DotEveryone. (2019). The DotEveryone consequence scanning agile event. Retrieved from https:\/\/doteveryone.org.uk\/project\/consequence-scanning\/."},{"issue":"1","key":"165_CR33","doi-asserted-by":"publisher","first-page":"eaao5580","DOI":"10.1126\/sciadv.aao5580","volume":"4","author":"J Dressel","year":"2018","unstructured":"Dressel, J., & Farid, H. (2018). 
The accuracy, fairness, and limits of predicting recidivism. Science Advances, 4(1), eaao5580. https:\/\/doi.org\/10.1126\/sciadv.aao5580.","journal-title":"Science Advances"},{"issue":"3\u20134","key":"165_CR34","doi-asserted-by":"publisher","first-page":"347","DOI":"10.1007\/s12130-010-9118-4","volume":"23","author":"M Durante","year":"2010","unstructured":"Durante, M. (2010). What is the model of trust for multi-agent systems? Whether or not e-trust applies to autonomous agents. Knowledge, Technology & Policy, 23(3\u20134), 347\u2013366. https:\/\/doi.org\/10.1007\/s12130-010-9118-4.","journal-title":"Knowledge, Technology & Policy"},{"issue":"3","key":"165_CR35","doi-asserted-by":"publisher","first-page":"46","DOI":"10.1109\/MSP.2018.2701152","volume":"16","author":"L Edwards","year":"2018","unstructured":"Edwards, L., & Veale, M. (2018). Enslaving the algorithm: From a \u201cright to an explanation\u201d to a \u201cright to better decisions\u201d? IEEE Security and Privacy, 16(3), 46\u201354. https:\/\/doi.org\/10.1109\/MSP.2018.2701152.","journal-title":"IEEE Security and Privacy"},{"key":"165_CR36","unstructured":"European Commission. (2019). Ethics guidelines for trustworthy AI. Retrieved from https:\/\/ec.europa.eu\/futurium\/en\/ai-alliance-consultation."},{"issue":"2083","key":"165_CR37","doi-asserted-by":"publisher","first-page":"20160112","DOI":"10.1098\/rsta.2016.0112","volume":"374","author":"L Floridi","year":"2016","unstructured":"Floridi, L. (2016a). Faultless responsibility: On the nature and allocation of moral responsibility for distributed moral actions. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2083), 20160112. 
https:\/\/doi.org\/10.1098\/rsta.2016.0112.","journal-title":"Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences"},{"issue":"6","key":"165_CR38","doi-asserted-by":"publisher","first-page":"1669","DOI":"10.1007\/s11948-015-9733-2","volume":"22","author":"L Floridi","year":"2016","unstructured":"Floridi, L. (2016b). Tolerant paternalism: Pro-ethical design as a resolution of the dilemma of toleration. Science and Engineering Ethics, 22(6), 1669\u20131688. https:\/\/doi.org\/10.1007\/s11948-015-9733-2.","journal-title":"Science and Engineering Ethics"},{"issue":"3","key":"165_CR39","doi-asserted-by":"publisher","first-page":"495","DOI":"10.1007\/s11023-017-9438-1","volume":"27","author":"L Floridi","year":"2017","unstructured":"Floridi, L. (2017). The logic of design as a conceptual logic of information. Minds and Machines, 27(3), 495\u2013519. https:\/\/doi.org\/10.1007\/s11023-017-9438-1.","journal-title":"Minds and Machines"},{"issue":"2133","key":"165_CR40","doi-asserted-by":"publisher","first-page":"20180081","DOI":"10.1098\/rsta.2018.0081","volume":"376","author":"L Floridi","year":"2018","unstructured":"Floridi, L. (2018). Soft ethics, the governance of the digital and the general data protection regulation. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180081. https:\/\/doi.org\/10.1098\/rsta.2018.0081.","journal-title":"Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences"},{"key":"165_CR41","doi-asserted-by":"publisher","DOI":"10.1038\/s42256-019-0055-y","author":"L Floridi","year":"2019","unstructured":"Floridi, L. (2019a). Establishing the rules for building trustworthy AI. Nature Machine Intelligence. 
https:\/\/doi.org\/10.1038\/s42256-019-0055-y.","journal-title":"Nature Machine Intelligence"},{"key":"165_CR42","doi-asserted-by":"publisher","DOI":"10.1093\/oso\/9780198833635.001.0001","volume-title":"The logic of information: A theory of philosophy as conceptual design","author":"L Floridi","year":"2019","unstructured":"Floridi, L. (2019b). The logic of information: A theory of philosophy as conceptual design (1st ed.). New York, NY: Oxford University Press.","edition":"1"},{"key":"165_CR43","doi-asserted-by":"publisher","DOI":"10.1007\/s13347-019-00354-x","author":"L Floridi","year":"2019","unstructured":"Floridi, L. (2019c). Translating principles into practices of digital ethics: Five risks of being unethical. Philosophy & Technology. https:\/\/doi.org\/10.1007\/s13347-019-00354-x.","journal-title":"Philosophy & Technology"},{"key":"165_CR44","unstructured":"Floridi, L, & Clement-Jones, T. (2019, March 20). The five principles key to any ethical framework for AI. Tech New Statesman. Retrieved from https:\/\/tech.newstatesman.com\/policy\/ai-ethics-framework."},{"key":"165_CR45","doi-asserted-by":"publisher","DOI":"10.1162\/99608f92.8cd550d1","author":"L Floridi","year":"2019","unstructured":"Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review. https:\/\/doi.org\/10.1162\/99608f92.8cd550d1.","journal-title":"Harvard Data Science Review"},{"issue":"4","key":"165_CR46","doi-asserted-by":"publisher","first-page":"689","DOI":"10.1007\/s11023-018-9482-5","volume":"28","author":"L Floridi","year":"2018","unstructured":"Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., et al. (2018). AI4People\u2014an ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689\u2013707. 
https:\/\/doi.org\/10.1007\/s11023-018-9482-5.","journal-title":"Minds and Machines"},{"key":"165_CR47","unstructured":"Floridi, L., & Strait, A. (Forthcoming). Ethical foresight analysis: What it is and why it is needed."},{"issue":"2083","key":"165_CR48","doi-asserted-by":"publisher","first-page":"20160360","DOI":"10.1098\/rsta.2016.0360","volume":"374","author":"L Floridi","year":"2016","unstructured":"Floridi, L., & Taddeo, M. (2016). What is data ethics? Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2083), 20160360. https:\/\/doi.org\/10.1098\/rsta.2016.0360.","journal-title":"Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences"},{"key":"165_CR49","unstructured":"Friedler, S. A., Scheidegger, C., & Venkatasubramanian, S. (2016). On the (im)possibility of fairness. arXiv:1609.07236 [Cs, Stat]."},{"issue":"3","key":"165_CR50","doi-asserted-by":"publisher","first-page":"50","DOI":"10.1609\/aimag.v38i3.2741","volume":"38","author":"B Goodman","year":"2017","unstructured":"Goodman, B., & Flaxman, S. (2017). European Union regulations on algorithmic decision-making and a \u2018right to explanation\u2019. AI Magazine, 38(3), 50. https:\/\/doi.org\/10.1609\/aimag.v38i3.2741.","journal-title":"AI Magazine"},{"issue":"2","key":"165_CR51","doi-asserted-by":"publisher","first-page":"9","DOI":"10.12775\/setf.2018.015","volume":"6","author":"BP Green","year":"2018","unstructured":"Green, B. P. (2018). Ethical reflections on Artificial Intelligence. Scientia et Fides, 6(2), 9. https:\/\/doi.org\/10.12775\/setf.2018.015.","journal-title":"Scientia et Fides"},{"issue":"5","key":"165_CR52","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3236009","volume":"51","author":"R Guidotti","year":"2018","unstructured":"Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., & Pedreschi, D. (2018). A survey of methods for explaining black box models. 
ACM Computing Surveys, 51(5), 1\u201342. https:\/\/doi.org\/10.1145\/3236009.","journal-title":"ACM Computing Surveys"},{"key":"165_CR115","unstructured":"Habermas, J. (1983). Moralbewu\u00dftsein und kommunikatives Handeln. Frankfurt am Main: Suhrkamp. [English, 1990a]"},{"key":"165_CR116","volume-title":"The structural transformation of the public sphere: An inquiry into a category of bourgeois society","author":"J Habermas","year":"1991","unstructured":"Habermas, J. (1991). The structural transformation of the public sphere: An inquiry into a category of bourgeois society. Cambridge, Mass: MIT Press."},{"key":"165_CR53","unstructured":"Hagendorff, T. (2019). The ethics of AI ethics\u2014an evaluation of guidelines. arXiv:1903.03425 [Cs, Stat]."},{"issue":"9","key":"165_CR54","doi-asserted-by":"publisher","first-page":"829","DOI":"10.1177\/0191453714545340","volume":"40","author":"J Heath","year":"2014","unstructured":"Heath, J. (2014). Rebooting discourse ethics. Philosophy and Social Criticism, 40(9), 829\u2013866. https:\/\/doi.org\/10.1177\/0191453714545340.","journal-title":"Philosophy and Social Criticism"},{"issue":"3","key":"165_CR55","doi-asserted-by":"publisher","first-page":"619","DOI":"10.1007\/s11948-014-9565-5","volume":"21","author":"A Hevelke","year":"2015","unstructured":"Hevelke, A., & Nida-R\u00fcmelin, J. (2015). Responsibility for crashes of autonomous vehicles: An ethical analysis. Science and Engineering Ethics, 21(3), 619\u2013630. https:\/\/doi.org\/10.1007\/s11948-014-9565-5.","journal-title":"Science and Engineering Ethics"},{"key":"165_CR56","unstructured":"Holland, S., Hosny, A., Newman, S., Joseph, J., & Chmielinski, K. (2018). The dataset nutrition label: A framework to drive higher data quality standards. arXiv:1805.03677 [Cs]."},{"issue":"6435","key":"165_CR57","doi-asserted-by":"publisher","first-page":"26","DOI":"10.1126\/science.aax0162","volume":"364","author":"EA Holm","year":"2019","unstructured":"Holm, E. A. (2019). 
In defense of the black box. Science, 364(6435), 26\u201327. https:\/\/doi.org\/10.1126\/science.aax0162.","journal-title":"Science"},{"key":"165_CR58","doi-asserted-by":"publisher","first-page":"55","DOI":"10.1109\/DISA.2018.8490530","volume":"2018","author":"A Holzinger","year":"2018","unstructured":"Holzinger, A. (2018). From machine learning to explainable AI. World Symposium on Digital Intelligence for Systems and Machines (DISA), 2018, 55\u201366. https:\/\/doi.org\/10.1109\/DISA.2018.8490530.","journal-title":"World Symposium on Digital Intelligence for Systems and Machines (DISA)"},{"key":"165_CR59","unstructured":"ideo.org. (2015). The field guide to human-centered design. Retrieved from http:\/\/www.designkit.org\/resources\/1."},{"key":"165_CR60","unstructured":"Involve, & DeepMind. (2019). How to stimulate effective public engagement on the ethics of Artificial Intelligence. Retrieved from https:\/\/www.involve.org.uk\/sites\/default\/files\/field\/attachemnt\/How%20to%20stimulate%20effective%20public%20debate%20on%20the%20ethics%20of%20artificial%20intelligence%20.pdf."},{"key":"165_CR61","doi-asserted-by":"publisher","DOI":"10.1007\/s10676-018-9467-3","author":"N Jacobs","year":"2018","unstructured":"Jacobs, N., & Huldtgren, A. (2018). Why value sensitive design needs ethical commitments. Ethics and Information Technology. https:\/\/doi.org\/10.1007\/s10676-018-9467-3.","journal-title":"Ethics and Information Technology"},{"key":"165_CR62","doi-asserted-by":"crossref","unstructured":"Jobin, A., Ienca, M., & Vayena, E. (2019). Artificial Intelligence: The global landscape of ethics guidelines. arXiv:1906.11668 [Cs].","DOI":"10.1038\/s42256-019-0088-2"},{"key":"165_CR63","unstructured":"Johansson, F. D., Shalit, U., & Sontag, D. (2016). Learning representations for counterfactual inference. 
arXiv:1605.03661 [Cs, Stat]."},{"key":"165_CR64","doi-asserted-by":"publisher","DOI":"10.1080\/1369118X.2018.1477967","author":"J Kemper","year":"2018","unstructured":"Kemper, J., & Kolkman, D. (2018). Transparent to whom? No algorithmic accountability without a critical audience. Information, Communication & Society. https:\/\/doi.org\/10.1080\/1369118X.2018.1477967.","journal-title":"Information, Communication & Society"},{"key":"165_CR65","doi-asserted-by":"publisher","DOI":"10.1093\/qje\/qjx032","author":"J Kleinberg","year":"2017","unstructured":"Kleinberg, J., Lakkaraju, H., Leskovec, J., Ludwig, J., & Mullainathan, S. (2017). Human decisions and machine predictions. The Quarterly Journal of Economics. https:\/\/doi.org\/10.1093\/qje\/qjx032.","journal-title":"The Quarterly Journal of Economics"},{"key":"165_CR66","unstructured":"Kleinberg, J., Mullainathan, S., & Raghavan, M. (2016). Inherent trade-offs in the fair determination of risk scores. arXiv:1609.05807 [Cs, Stat]. Retrieved from http:\/\/arxiv.org\/abs\/1609.05807."},{"key":"165_CR67","unstructured":"Knight, W. (2019). Why does Beijing suddenly care about AI ethics? MIT Technology Review. Retrieved from https:\/\/www.technologyreview.com\/s\/613610\/why-does-china-suddenly-care-about-ai-ethics-and-privacy\/."},{"key":"165_CR68","doi-asserted-by":"publisher","first-page":"53","DOI":"10.1016\/j.coisb.2017.07.001","volume":"4","author":"BM Knoppers","year":"2017","unstructured":"Knoppers, B. M., & Thorogood, A. M. (2017). Ethics and big data in health. Current Opinion in Systems Biology, 4, 53\u201357. https:\/\/doi.org\/10.1016\/j.coisb.2017.07.001.","journal-title":"Current Opinion in Systems Biology"},{"key":"165_CR69","unstructured":"Kolter, Z., & Madry, A. (2018). Materials for tutorial adversarial robustness: Theory and practice. 
Retrieved from https:\/\/adversarial-ml-tutorial.org\/."},{"issue":"2133","key":"165_CR70","doi-asserted-by":"publisher","first-page":"20180084","DOI":"10.1098\/rsta.2018.0084","volume":"376","author":"JA Kroll","year":"2018","unstructured":"Kroll, J. A. (2018). The fallacy of inscrutability. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180084. https:\/\/doi.org\/10.1098\/rsta.2018.0084.","journal-title":"Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences"},{"key":"165_CR71","doi-asserted-by":"publisher","DOI":"10.1007\/s10676-019-09503-4","author":"K La Fors","year":"2019","unstructured":"La Fors, K., Custers, B., & Keymolen, E. (2019). Reassessing values for emerging big data technologies: Integrating design-based and application-based approaches. Ethics and Information Technology. https:\/\/doi.org\/10.1007\/s10676-019-09503-4.","journal-title":"Ethics and Information Technology"},{"key":"165_CR72","doi-asserted-by":"publisher","unstructured":"Lakkaraju, H., Kleinberg, J., Leskovec, J., Ludwig, J., & Mullainathan, S. (2017). The selective labels problem: evaluating algorithmic predictions in the presence of unobservables. In Proceedings of the 23rd ACM SIGKDD international conference on knowledge discovery and data mining\u2014KDD\u201917 (pp. 275\u2013284). https:\/\/doi.org\/10.1145\/3097983.3098066.","DOI":"10.1145\/3097983.3098066"},{"issue":"4","key":"165_CR73","doi-asserted-by":"publisher","first-page":"611","DOI":"10.1007\/s13347-017-0279-x","volume":"31","author":"B Lepri","year":"2018","unstructured":"Lepri, B., Oliver, N., Letouz\u00e9, E., Pentland, A., & Vinck, P. (2018). Fair, transparent, and accountable algorithmic decision-making processes: The premise, the proposed solutions, and the open challenges. Philosophy & Technology, 31(4), 611\u2013627. 
https:\/\/doi.org\/10.1007\/s13347-017-0279-x.","journal-title":"Philosophy & Technology"},{"key":"165_CR74","volume-title":"Code (Version 2.0)","author":"L Lessig","year":"2006","unstructured":"Lessig, L. (2006). Code (Version 2.0). New York: Basic Books."},{"key":"165_CR75","unstructured":"Lighthill, J. (1973). \u2018Artificial Intelligence: A general survey\u2019 in Artificial Intelligence: A paper symposium. Retrieved from UK Science Research Council website: http:\/\/www.chilton-computing.org.uk\/inf\/literature\/reports\/lighthill_report\/p001.htm."},{"key":"165_CR76","unstructured":"Lipton, Z. C. (2016). The mythos of model interpretability. arXiv:1606.03490 [Cs, Stat]."},{"key":"165_CR77","unstructured":"Lundberg, S. M., & Lee, S.-I. (2017). A unified approach to interpreting model predictions. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, & R. Garnett (Eds.), Advances in neural information processing systems 30 (pp. 4765\u20134774). Retrieved from http:\/\/papers.nips.cc\/paper\/7062-a-unified-approach-to-interpreting-model-predictions.pdf."},{"key":"165_CR78","doi-asserted-by":"publisher","first-page":"219","DOI":"10.1007\/978-3-319-22906-5_17","volume-title":"Trust, privacy and security in digital business","author":"E-L Makri","year":"2015","unstructured":"Makri, E.-L., & Lambrinoudakis, C. (2015). Privacy principles: Towards a common privacy audit methodology. In S. Fischer-H\u00fcbner, C. Lambrinoudakis, & J. L\u00f3pez (Eds.), Trust, privacy and security in digital business (Vol. 9264, pp. 219\u2013234). Cham: Springer."},{"issue":"2","key":"165_CR79","doi-asserted-by":"publisher","first-page":"93","DOI":"10.1108\/JICES-08-2013-0030","volume":"12","author":"T Matzner","year":"2014","unstructured":"Matzner, T. (2014). Why privacy is not enough privacy in the context of \u201cubiquitous computing\u201d and \u201cbig data\u201d. 
Journal of Information, Communication and Ethics in Society, 12(2), 93\u2013106. https:\/\/doi.org\/10.1108\/JICES-08-2013-0030.","journal-title":"Journal of Information, Communication and Ethics in Society"},{"key":"165_CR80","unstructured":"Mikhailov, D. (2019). A new method for ethical data science. Retrieved from Medium website: https:\/\/medium.com\/wellcome-data-labs\/a-new-method-for-ethical-data-science-edb59e400ae9."},{"key":"165_CR81","unstructured":"Miller, C., & Coldicott, R. (2019). People, power and technology: The tech workers\u2019 view. Retrieved from Doteveryone website: https:\/\/doteveryone.org.uk\/report\/workersview\/."},{"issue":"1","key":"165_CR82","doi-asserted-by":"publisher","first-page":"114","DOI":"10.1016\/j.ejor.2010.11.003","volume":"210","author":"J Mingers","year":"2011","unstructured":"Mingers, J. (2011). Ethics and OR: Operationalising discourse ethics. European Journal of Operational Research, 210(1), 114\u2013124. https:\/\/doi.org\/10.1016\/j.ejor.2010.11.003.","journal-title":"European Journal of Operational Research"},{"issue":"4","key":"165_CR83","doi-asserted-by":"publisher","first-page":"855","DOI":"10.2307\/25750707","volume":"34","author":"J Mingers","year":"2010","unstructured":"Mingers, J., & Walsham, G. (2010). Toward ethical information systems: The contribution of discourse ethics. MIS Quarterly: Management Information Systems, 34(4), 855\u2013870.","journal-title":"MIS Quarterly: Management Information Systems"},{"key":"165_CR84","doi-asserted-by":"publisher","unstructured":"Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., Spitzer, E., Raji, I. D., & Gebru, T. (2019). Model cards for model reporting. In Proceedings of the conference on fairness, accountability, and transparency\u2014FAT*\u201919 (pp. 220\u2013229). 
https:\/\/doi.org\/10.1145\/3287560.3287596.","DOI":"10.1145\/3287560.3287596"},{"issue":"2","key":"165_CR85","doi-asserted-by":"publisher","first-page":"205395171667967","DOI":"10.1177\/2053951716679679","volume":"3","author":"BD Mittelstadt","year":"2016","unstructured":"Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 205395171667967. https:\/\/doi.org\/10.1177\/2053951716679679.","journal-title":"Big Data & Society"},{"key":"165_CR86","first-page":"119","volume":"79","author":"H Nissenbaum","year":"2004","unstructured":"Nissenbaum, H. (2004). Privacy as contextual integrity. Washington Law Review, 79, 119.","journal-title":"Washington Law Review"},{"key":"165_CR87","unstructured":"OECD. (2019a). Forty-two countries adopt new OECD principles on Artificial Intelligence. Retrieved from https:\/\/www.oecd.org\/science\/forty-two-countries-adopt-new-oecd-principles-on-artificial-intelligence.htm."},{"key":"165_CR88","unstructured":"OECD. (2019b). Recommendation of the Council on Artificial Intelligence. Retrieved from https:\/\/legalinstruments.oecd.org\/en\/instruments\/OECD-LEGAL-0449."},{"issue":"2","key":"165_CR89","doi-asserted-by":"publisher","first-page":"126","DOI":"10.1057\/ejis.2013.18","volume":"23","author":"MC Oetzel","year":"2014","unstructured":"Oetzel, M. C., & Spiekermann, S. (2014). A systematic methodology for privacy impact assessments: A design science approach. European Journal of Information Systems, 23(2), 126\u2013150. https:\/\/doi.org\/10.1057\/ejis.2013.18.","journal-title":"European Journal of Information Systems"},{"key":"165_CR90","unstructured":"Overdorf, R., Kulynych, B., Balsa, E., Troncoso, C., & G\u00fcrses, S. (2018). Questioning the assumptions behind fairness solutions. arXiv:1811.11293 [Cs]."},{"key":"165_CR91","unstructured":"Oxborough, C., Cameron, E., Rao, A., Birchall, A., Townsend, A., & Westermann, C. (2018). 
Explainable AI: Driving business value through greater understanding. Retrieved from PWC website: https:\/\/www.pwc.co.uk\/audit-assurance\/assets\/explainable-ai.pdf."},{"key":"165_CR92","unstructured":"Peters, D., & Calvo, R. A. (2019, May 2). Beyond principles: A process for responsible tech. Retrieved from Medium website: https:\/\/medium.com\/ethics-of-digital-experience\/beyond-principles-a-process-for-responsible-tech-aefc921f7317."},{"issue":"2","key":"165_CR93","doi-asserted-by":"publisher","first-page":"89","DOI":"10.1108\/DPRG-11-2018-0068","volume":"21","author":"SE Polykalas","year":"2019","unstructured":"Polykalas, S. E., & Prezerakos, G. N. (2019). When the mobile app is free, the product is your personal data. Digital Policy, Regulation and Governance, 21(2), 89\u2013101. https:\/\/doi.org\/10.1108\/DPRG-11-2018-0068.","journal-title":"Digital Policy, Regulation and Governance"},{"key":"165_CR94","unstructured":"Poursabzi-Sangdeh, F., Goldstein, D. G., Hofman, J. M., Vaughan, J. W., & Wallach, H. (2018). Manipulating and measuring model interpretability. arXiv:1802.07810 [Cs]."},{"key":"165_CR95","unstructured":"PWC. (2019). The PwC responsible AI framework. Retrieved from https:\/\/www.pwc.co.uk\/services\/audit-assurance\/risk-assurance\/services\/technology-risk\/technology-risk-insights\/accelerating-innovation-through-responsible-ai.html."},{"key":"165_CR96","unstructured":"Reisman, D., Schultz, J., Crawford, K., & Whittaker, M. (2018). Algorithmic impact assessments: A practical framework for public agency accountability. Retrieved from AINow website: https:\/\/ainowinstitute.org\/aiareport2018.pdf."},{"key":"165_CR97","unstructured":"Ribeiro, M. T., Singh, S., & Guestrin, C. (2016, August 12). Local interpretable model-agnostic explanations (LIME): An introduction. A technique to explain the predictions of any machine learning classifier.
Retrieved from https:\/\/www.oreilly.com\/learning\/introduction-to-local-interpretable-model-agnostic-explanations-lime."},{"issue":"2","key":"165_CR98","doi-asserted-by":"publisher","first-page":"127","DOI":"10.1007\/s10676-018-9452-x","volume":"20","author":"L Royakkers","year":"2018","unstructured":"Royakkers, L., Timmer, J., Kool, L., & van Est, R. (2018). Societal and ethical issues of digitization. Ethics and Information Technology, 20(2), 127\u2013142. https:\/\/doi.org\/10.1007\/s10676-018-9452-x.","journal-title":"Ethics and Information Technology"},{"key":"165_CR99","unstructured":"Russell, C., Kusner, M. J., Loftus, J., & Silva, R. (2017). When Worlds Collide: Integrating Different Counterfactual Assumptions in Fairness. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, & R. Garnett (Eds.), Advances in neural information processing systems 30 (pp. 6414\u20136423). Retrieved from http:\/\/papers.nips.cc\/paper\/7220-when-worlds-collide-integrating-different-counterfactual-assumptions-in-fairness.pdf."},{"key":"165_CR100","doi-asserted-by":"publisher","DOI":"10.1007\/s10676-019-09502-5","author":"JS Saltz","year":"2019","unstructured":"Saltz, J. S., & Dewar, N. (2019). Data science ethical considerations: A systematic literature review and proposed project framework. Ethics and Information Technology. https:\/\/doi.org\/10.1007\/s10676-019-09502-5.","journal-title":"Ethics and Information Technology"},{"issue":"3429","key":"165_CR101","doi-asserted-by":"publisher","first-page":"741","DOI":"10.1126\/science.132.3429.741","volume":"132","author":"AL Samuel","year":"1960","unstructured":"Samuel, A. L. (1960). Some moral and technical consequences of automation\u2014a refutation. Science, 132(3429), 741\u2013742. https:\/\/doi.org\/10.1126\/science.132.3429.741.","journal-title":"Science"},{"issue":"1","key":"165_CR102","first-page":"109","volume":"52","author":"AD Selbst","year":"2017","unstructured":"Selbst, A. D. (2017). 
Disparate impact in big data policing. Georgia Law Review, 52(1), 109\u2013196.","journal-title":"Georgia Law Review"},{"key":"165_CR103","unstructured":"Spielkamp, M., Matzat, L., Penner, K., Thummler, M., Thiel, V., Gie\u00dfler, S., & Eisenhauer, A. (2019). Algorithm Watch 2019: The AI Ethics Guidelines Global Inventory. Retrieved from https:\/\/algorithmwatch.org\/en\/project\/ai-ethics-guidelines-global-inventory\/."},{"issue":"3","key":"165_CR104","doi-asserted-by":"publisher","first-page":"26","DOI":"10.1109\/MSP.2018.2701164","volume":"16","author":"BC Stahl","year":"2018","unstructured":"Stahl, B. C., & Wright, D. (2018). Ethics and privacy in AI and big data: Implementing responsible research and innovation. IEEE Security and Privacy, 16(3), 26\u201333. https:\/\/doi.org\/10.1109\/MSP.2018.2701164.","journal-title":"IEEE Security and Privacy"},{"issue":"6404","key":"165_CR105","doi-asserted-by":"publisher","first-page":"751","DOI":"10.1126\/science.aat5991","volume":"361","author":"M Taddeo","year":"2018","unstructured":"Taddeo, M., & Floridi, L. (2018). How AI can be a force for good. Science, 361(6404), 751\u2013752. https:\/\/doi.org\/10.1126\/science.aat5991.","journal-title":"Science"},{"issue":"1","key":"165_CR106","doi-asserted-by":"publisher","first-page":"49","DOI":"10.1007\/s10676-006-9128-9","volume":"9","author":"M Turilli","year":"2007","unstructured":"Turilli, M. (2007). Ethical protocols design. Ethics and Information Technology, 9(1), 49\u201362. https:\/\/doi.org\/10.1007\/s10676-006-9128-9.","journal-title":"Ethics and Information Technology"},{"key":"165_CR107","volume-title":"Current issues in computing and philosophy","author":"M Turilli","year":"2008","unstructured":"Turilli, M. (2008). Ethics and the practice of software design. In A. Briggle, P. Brey, & K. Waelbers (Eds.), Current issues in computing and philosophy. 
Amsterdam: IOS Press."},{"issue":"2","key":"165_CR108","doi-asserted-by":"publisher","first-page":"105","DOI":"10.1007\/s10676-009-9187-9","volume":"11","author":"M Turilli","year":"2009","unstructured":"Turilli, M., & Floridi, L. (2009). The ethics of information transparency. Ethics and Information Technology, 11(2), 105\u2013112. https:\/\/doi.org\/10.1007\/s10676-009-9187-9.","journal-title":"Ethics and Information Technology"},{"key":"165_CR109","unstructured":"Vakkuri, V., Kemell, K.-K., Kultanen, J., Siponen, M., & Abrahamsson, P. (2019). Ethically aligned design of autonomous systems: Industry viewpoint and an empirical study. arXiv:1906.07946 [Cs]."},{"key":"165_CR110","unstructured":"Vaughan, J., & Wallach, H. (2016). The inescapability of uncertainty: AI, uncertainty, and why you should vote no matter what predictions say. Retrieved 4 July 2019, from Points. Data Society website: https:\/\/points.datasociety.net\/uncertainty-edd5caf8981b."},{"issue":"2","key":"165_CR111","doi-asserted-by":"publisher","first-page":"76","DOI":"10.1093\/idpl\/ipx005","volume":"7","author":"S Wachter","year":"2017","unstructured":"Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Why a right to explanation of automated decision-making does not exist in the general data protection regulation. International Data Privacy Law, 7(2), 76\u201399. https:\/\/doi.org\/10.1093\/idpl\/ipx005.","journal-title":"International Data Privacy Law"},{"key":"165_CR112","volume-title":"Cybernetics: Or control and communication in the animal and the machine","author":"N Wiener","year":"1961","unstructured":"Wiener, N. (1961). Cybernetics: Or control and communication in the animal and the machine (2d ed.). New York: MIT Press.","edition":"2d"},{"key":"165_CR113","unstructured":"Winfield, A. (2019, April 18). An updated round up of ethical principles of robotics and AI. 
Retrieved from http:\/\/alanwinfield.blogspot.com\/2019\/04\/an-updated-round-up-of-ethical.html."},{"key":"165_CR114","doi-asserted-by":"publisher","first-page":"43","DOI":"10.1007\/978-3-030-17287-9_4","volume-title":"Persuasive technology: Development of persuasive and behavior change support systems","author":"F Yetim","year":"2019","unstructured":"Yetim, F. (2019). Supporting and understanding reflection on persuasive technology through a reflection schema. In H. Oinas-Kukkonen, K. T. Win, E. Karapanos, P. Karppinen, & E. Kyza (Eds.), Persuasive technology: Development of persuasive and behavior change support systems (pp. 43\u201351). Cham: Springer."}],"container-title":["Science and Engineering Ethics"],"original-title":[],"language":"en","link":[{"URL":"http:\/\/link.springer.com\/content\/pdf\/10.1007\/s11948-019-00165-5.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"http:\/\/link.springer.com\/article\/10.1007\/s11948-019-00165-5\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"http:\/\/link.springer.com\/content\/pdf\/10.1007\/s11948-019-00165-5.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2020,12,10]],"date-time":"2020-12-10T01:00:32Z","timestamp":1607562032000},"score":1,"resource":{"primary":{"URL":"http:\/\/link.springer.com\/10.1007\/s11948-019-00165-5"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2019,12,11]]},"references-count":117,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2020,8]]}},"alternative-id":["165"],"URL":"https:\/\/doi.org\/10.1007\/s11948-019-00165-5","relation":{},"ISSN":["1353-3452","1471-5546"],"issn-type":[{"value":"1353-3452","type":"print"},{"value":"1471-5546","type":"electronic"}],"subject":[],"published":{"date-parts":[[2019,12,11]]},"assertion":[{"value":"16 May 
2019","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"29 November 2019","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"11 December 2019","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}}]}}