{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,1]],"date-time":"2026-04-01T00:16:16Z","timestamp":1775002576858,"version":"3.50.1"},"reference-count":69,"publisher":"MDPI AG","issue":"2","license":[{"start":{"date-parts":[[2022,6,16]],"date-time":"2022-06-16T00:00:00Z","timestamp":1655337600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"Austrian Science Fund (FWF)","award":["P-32554"],"award-info":[{"award-number":["P-32554"]}]},{"name":"Australian UTS STEM-HASS Strategic Research Fund 2021","award":["P-32554"],"award-info":[{"award-number":["P-32554"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["MAKE"],"abstract":"<jats:p>AI-assisted decision-making that impacts individuals raises critical questions about transparency and fairness in artificial intelligence (AI). Much research has highlighted the reciprocal relationships between the transparency\/explanation and fairness in AI-assisted decision-making. Thus, considering their impact on user trust or perceived fairness simultaneously benefits responsible use of socio-technical AI systems, but currently receives little attention. In this paper, we investigate the effects of AI explanations and fairness on human-AI trust and perceived fairness, respectively, in specific AI-based decision-making scenarios. A user study simulating AI-assisted decision-making in two health insurance and medical treatment decision-making scenarios provided important insights. Due to the global pandemic and restrictions thereof, the user studies were conducted as online surveys. From the participant\u2019s trust perspective, fairness was found to affect user trust only under the condition of a low fairness level, with the low fairness level reducing user trust. However, adding explanations helped users increase their trust in AI-assisted decision-making. 
From the perspective of perceived fairness, our work found that low levels of introduced fairness decreased users\u2019 perceptions of fairness, while high levels of introduced fairness increased users\u2019 perceptions of fairness. The addition of explanations definitely increased the perception of fairness. Furthermore, we found that application scenarios influenced trust and perceptions of fairness. The results show that the use of AI explanations and fairness statements in AI applications is complex: we need to consider not only the type of explanations and the degree of fairness introduced, but also the scenarios in which AI-assisted decision-making is used.<\/jats:p>","DOI":"10.3390\/make4020026","type":"journal-article","created":{"date-parts":[[2022,6,17]],"date-time":"2022-06-17T05:25:11Z","timestamp":1655443511000},"page":"556-579","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":136,"title":["Fairness and Explanation in AI-Informed Decision Making"],"prefix":"10.3390","volume":"4","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-9209-6676","authenticated-orcid":false,"given":"Alessa","family":"Angerschmid","sequence":"first","affiliation":[{"name":"Medical Informatics, Statistics and Documentation, Medical University Graz, 8036 Graz, Austria"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-6034-644X","authenticated-orcid":false,"given":"Jianlong","family":"Zhou","sequence":"additional","affiliation":[{"name":"Human-Centered AI Lab, University of Natural Resources and Life Sciences, 1190 Vienna, Austria"},{"name":"Human-Centered AI Lab, University of Technology Sydney, Sydney, NSW 2007, Australia"}]},{"given":"Kevin","family":"Theuermann","sequence":"additional","affiliation":[{"name":"Doctoral School of Computer Science, Graz University of Technology, 8010 Graz, 
Austria"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4971-8729","authenticated-orcid":false,"given":"Fang","family":"Chen","sequence":"additional","affiliation":[{"name":"Human-Centered AI Lab, University of Technology Sydney, Sydney, NSW 2007, Australia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6786-5194","authenticated-orcid":false,"given":"Andreas","family":"Holzinger","sequence":"additional","affiliation":[{"name":"Medical Informatics, Statistics and Documentation, Medical University Graz, 8036 Graz, Austria"},{"name":"Human-Centered AI Lab, University of Natural Resources and Life Sciences, 1190 Vienna, Austria"},{"name":"Doctoral School of Computer Science, Graz University of Technology, 8010 Graz, Austria"},{"name":"xAI Lab, Alberta Machine Intelligence Institute, University of Alberta, Edmonton, AB T5J 3B1, Canada"}]}],"member":"1968","published-online":{"date-parts":[[2022,6,16]]},"reference":[{"key":"ref_1","unstructured":"(2022, May 31). White Paper on Artificial Intelligence\u2014A European Approach to Excellence and Trust. Available online: https:\/\/eur-lex.europa.eu\/legal-content\/EN\/TXT\/?uri=CELEX:52020DC0065."},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Bernhaupt, R., Dalvi, G., Joshi, A.K., Balkrishan, D., O\u2019Neill, J., and Winckler, M. (2017). Effects of Uncertainty and Cognitive Load on User Trust in Predictive Decision Making. Human-Computer Interaction\u2014INTERACT 2017, Springer.","DOI":"10.1007\/978-3-319-67744-6"},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Zhou, J., Verma, S., Mittal, M., and Chen, F. (2021, January 29\u201331). Understanding Relations between Perception of Fairness and Trust in Algorithmic Decision Making. 
Proceedings of the International Conference on Behavioral and Social Computing (BESC 2021), Doha, Qatar.","DOI":"10.1109\/BESC53957.2021.9635182"},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"20","DOI":"10.1038\/538020a","article-title":"Can we open the black box of AI?","volume":"538","author":"Castelvecchi","year":"2016","journal-title":"Nat. News"},{"key":"ref_5","first-page":"378","article-title":"Making Machine Learning Useable by Revealing Internal States Update\u2014A Transparent Approach","volume":"13","author":"Zhou","year":"2016","journal-title":"Int. J. Comput. Sci. Eng."},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Zhou, J., Gandomi, A.H., Chen, F., and Holzinger, A. (2021). Evaluating the Quality of Machine Learning Explanations: A Survey on Methods and Metrics. Electronics, 10.","DOI":"10.3390\/electronics10050593"},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Zhou, J., and Chen, F. (2018). 2D Transparency Space\u2014Bring Domain Users and Machine Learning Experts Together. Human and Machine Learning: Visible, Explainable, Trustworthy and Transparent, Springer International Publishing.","DOI":"10.1007\/978-3-319-90403-0_1"},{"key":"ref_8","unstructured":"Zhou, J., and Chen, F. (2018). Human and Machine Learning: Visible, Explainable, Trustworthy and Transparent, Springer."},{"key":"ref_9","first-page":"42","article-title":"Can we Trust Machine Learning Results? Artificial Intelligence in Safety-Critical Decision Support","volume":"112","author":"Holzinger","year":"2018","journal-title":"ERCIM News"},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"105587","DOI":"10.1016\/j.clsr.2021.105587","article-title":"Legal aspects of data cleansing in medical AI","volume":"42","author":"Stoeger","year":"2021","journal-title":"Comput. Law Secur. 
Rev."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"34","DOI":"10.1145\/3458652","article-title":"Medical Artificial Intelligence: The European Legal Perspective","volume":"64","author":"Stoeger","year":"2021","journal-title":"Commun. ACM"},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"53","DOI":"10.1007\/s10676-010-9253-3","article-title":"Explanation and trust: What to tell the user in security and AI?","volume":"13","author":"Pieters","year":"2011","journal-title":"Ethics Inf. Technol."},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Zhou, J., Hu, H., Li, Z., Yu, K., and Chen, F. (2019). Physiological Indicators for User Trust in Machine Learning with Influence Enhanced Fact-Checking. Machine Learning and Knowledge Extraction, Springer.","DOI":"10.1007\/978-3-030-29726-8_7"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Alam, L., and Mueller, S. (2021). Examining the effect of explanation on satisfaction and trust in AI diagnostic systems. BMC Med Inform. Decis. Mak., 21.","DOI":"10.1186\/s12911-021-01542-6"},{"key":"ref_15","first-page":"91","article-title":"Making machine learning useable","volume":"14","author":"Zhou","year":"2015","journal-title":"Int. J. Intell. Syst. Technol. Appl."},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"101994","DOI":"10.1016\/j.ijinfomgt.2019.08.002","article-title":"Artificial Intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy","volume":"57","author":"Dwivedi","year":"2021","journal-title":"Int. J. Inf. Manag."},{"key":"ref_17","first-page":"0049124118782533","article-title":"Fairness in criminal justice risk assessments: The state of the art","volume":"50","author":"Berk","year":"2018","journal-title":"Sociol. Methods Res."},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Feldman, M., Friedler, S.A., Moeller, J., Scheidegger, C., and Venkatasubramanian, S. 
(2015, January 10\u201313). Certifying and removing disparate impact. Proceedings of the KDD2015, Sydney, NSW, Australia.","DOI":"10.1145\/2783258.2783311"},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Starke, C., Baleis, J., Keller, B., and Marcinkowski, F. (2021). Fairness Perceptions of Algorithmic Decision-Making: A Systematic Review of the Empirical Literature. arXiv.","DOI":"10.1177\/20539517221115189"},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"63","DOI":"10.1016\/j.ijinfomgt.2019.01.021","article-title":"Artificial intelligence for decision-making in the era of Big Data\u2014Evolution, challenges and research agenda","volume":"48","author":"Duan","year":"2019","journal-title":"Int. J. Inf. Manag."},{"key":"ref_21","first-page":"109","article-title":"Cognitive Technologies and Artificial Intelligence in Social Perception","volume":"30","author":"Kuzior","year":"2022","journal-title":"Manag. Syst. Prod. Eng."},{"key":"ref_22","first-page":"35","article-title":"Employees\u2019 Perceptions of Trust, Fairness, and the Management of Change in Three Private Universities in Cyprus","volume":"2","author":"Komodromos","year":"2014","journal-title":"J. Hum. Resour. Manag. Labor Stud."},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"996","DOI":"10.1080\/0267257X.2015.1036101","article-title":"The impact of fairness on trustworthiness and trust in banking","volume":"31","author":"Roy","year":"2015","journal-title":"J. Mark. Manag."},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Dodge, J., Liao, Q.V., Zhang, Y., Bellamy, R.K.E., and Dugan, C. (2019, January 17\u201320). Explaining Models: An Empirical Study of How Explanations Impact Fairness Judgment. 
Proceedings of the 24th International Conference on Intelligent User Interfaces (IUI\u201919), Marina del Rey, CA, USA.","DOI":"10.1145\/3301275.3302310"},{"key":"ref_25","unstructured":"Kilbertus, N., Carulla, M.R., Parascandolo, G., Hardt, M., Janzing, D., and Sch\u00f6lkopf, B. (2017, January 4\u20139). Avoiding discrimination through causal reasoning. Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA."},{"key":"ref_26","unstructured":"Bellamy, R.K.E., Dey, K., Hind, M., Hoffman, S.C., Houde, S., Kannan, K., Lohia, P., Martino, J., Mehta, S., and Mojsilovic, A. (2018). AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias. arXiv."},{"key":"ref_27","doi-asserted-by":"crossref","first-page":"541","DOI":"10.1080\/08838151.2020.1843357","article-title":"User Perceptions of Algorithmic Decisions in the Personalized AI System: Perceptual Evaluation of Fairness, Accountability, Transparency, and Explainability","volume":"64","author":"Shin","year":"2020","journal-title":"J. Broadcast. Electron. Media"},{"key":"ref_28","unstructured":"Corbett-Davies, S., and Goel, S. (2018). The measure and mismeasure of fairness: A critical review of fair machine learning. arXiv."},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Nabi, R., and Shpitser, I. (2018, January 2\u20137). Fair inference on outcomes. Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence. AAAI Conference on Artificial Intelligence, New Orleans, LA, USA.","DOI":"10.1609\/aaai.v32i1.11553"},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Glymour, B., and Herington, J. (2019, January 29\u201331). Measuring the biases that matter: The ethical and casual foundations for measures of fairness in algorithms. 
Proceedings of the Conference on Fairness, Accountability, and Transparency, Atlanta, GA, USA.","DOI":"10.1145\/3287560.3287573"},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Lee, M.K., and Baykal, S. (March, January 25). Algorithmic Mediation in Group Decisions: Fairness Perceptions of Algorithmically Mediated vs. Discussion-Based Social Division. Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing, Portland, OR, USA.","DOI":"10.1145\/2998181.2998230"},{"key":"ref_32","first-page":"1","article-title":"Procedural Justice in Algorithmic Fairness: Leveraging Transparency and Outcome Control for Fair Algorithmic Mediation","volume":"3","author":"Lee","year":"2019","journal-title":"Proc. ACM Hum. Comput. Interact."},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"105456","DOI":"10.1016\/j.clsr.2020.105456","article-title":"Who is the fairest of them all? Public attitudes and expectations regarding automated decision-making","volume":"39","author":"Helberger","year":"2020","journal-title":"Comput. Law Secur. Rev."},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Harrison, G., Hanson, J., Jacinto, C., Ramirez, J., and Ur, B. (2020, January 27\u201330). An Empirical Study on the Perceived Fairness of Realistic, Imperfect Machine Learning Models. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, FAT* \u201920, Barcelona, Spain.","DOI":"10.1145\/3351095.3372831"},{"key":"ref_35","doi-asserted-by":"crossref","first-page":"277","DOI":"10.1016\/j.chb.2019.04.019","article-title":"Role of fairness, accountability, and transparency in algorithmic affordance","volume":"98","author":"Shin","year":"2019","journal-title":"Comput. Hum. 
Behav."},{"key":"ref_36","doi-asserted-by":"crossref","first-page":"102061","DOI":"10.1016\/j.ijinfomgt.2019.102061","article-title":"Beyond user experience: What constitutes algorithmic experiences?","volume":"52","author":"Shin","year":"2020","journal-title":"Int. J. Inf. Manag."},{"key":"ref_37","doi-asserted-by":"crossref","first-page":"102551","DOI":"10.1016\/j.ijhcs.2020.102551","article-title":"The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI","volume":"146","author":"Shin","year":"2021","journal-title":"Int. J. Hum. Comput. Stud."},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Binns, R., Van Kleek, M., Veale, M., Lyngs, U., Zhao, J., and Shadbolt, N. (2018, January 21\u201326). \u2018It\u2019s Reducing a Human Being to a Percentage\u2019: Perceptions of Justice in Algorithmic Decisions. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Association for Computing Machinery, Montreal, QC, Canada.","DOI":"10.1145\/3173574.3173951"},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Zhou, J., Bridon, C., Chen, F., Khawaji, A., and Wang, Y. (2015, January 18\u201323). Be Informed and Be Involved: Effects of Uncertainty and Correlation on User\u2019s Confidence in Decision Making. Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems, Association for Computing Machinery, CHI EA \u201915, Seoul, Korea.","DOI":"10.1145\/2702613.2732769"},{"key":"ref_40","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/2687924","article-title":"Measurable Decision Making with GSR and Pupillary Analysis for Intelligent User Interface","volume":"21","author":"Zhou","year":"2015","journal-title":"ACM Trans. Comput.-Hum. Interact."},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Kizilcec, R.F. (2016, January 7\u201312). How Much Information? 
Effects of Transparency on Trust in an Algorithmic Interface. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, Association for Computing Machinery, CHI \u201916, San Jose, CA, USA.","DOI":"10.1145\/2858036.2858402"},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Zhang, Y., Liao, Q.V., and Bellamy, R.K.E. (2020, January 27\u201330). Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, FAT* \u201920, Barcelona, Spain.","DOI":"10.1145\/3351095.3372852"},{"key":"ref_43","unstructured":"Yin, M., Vaughan, J.W., and Wallach, H. (2018, January 14). Does Stated Accuracy Affect Trust in Machine Learning Algorithms?. Proceedings of the ICML2018 Workshop on Human Interpretability in Machine Learning (WHI 2018), Stockholm, Sweden."},{"key":"ref_44","doi-asserted-by":"crossref","first-page":"1395","DOI":"10.1111\/j.1539-6924.2008.01091.x","article-title":"On the Relation Between Trust and Fairness in Environmental Risk Management","volume":"28","author":"Earle","year":"2008","journal-title":"Risk Anal."},{"key":"ref_45","first-page":"58","article-title":"The effects of perceived service fairness on satisfaction, trust, and behavioural intentions","volume":"33","author":"Nikbin","year":"2011","journal-title":"Singap. Manag. Rev."},{"key":"ref_46","doi-asserted-by":"crossref","unstructured":"Kasinidou, M., Kleanthous, S., Barlas, P., and Otterbacher, J. (2021, January 3\u201310). I Agree with the Decision, but They Didn\u2019t Deserve This: Future Developers\u2019 Perception of Fairness in Algorithmic Decisions. 
Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT \u201921, Virtual Event.","DOI":"10.1145\/3442188.3445931"},{"key":"ref_47","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1002\/widm.1312","article-title":"Causability and Explainability of Artificial Intelligence in Medicine","volume":"9","author":"Holzinger","year":"2019","journal-title":"Wiley Interdiscip. Rev. Data Min. Knowl. Discov."},{"key":"ref_48","doi-asserted-by":"crossref","first-page":"28","DOI":"10.1016\/j.inffus.2021.01.008","article-title":"Towards Multi-Modal Causability with Graph Neural Networks enabling Information Fusion for explainable AI","volume":"71","author":"Holzinger","year":"2021","journal-title":"Inf. Fusion"},{"key":"ref_49","doi-asserted-by":"crossref","first-page":"106916","DOI":"10.1016\/j.knosys.2021.106916","article-title":"Classification by ordinal sums of conjunctive and disjunctive functions for explainable AI and interpretable machine learning solutions","volume":"220","author":"Hudec","year":"2021","journal-title":"Knowl. Based Syst."},{"key":"ref_50","doi-asserted-by":"crossref","first-page":"193","DOI":"10.1007\/s13218-020-00636-z","article-title":"Measuring the Quality of Explanations: The System Causability Scale (SCS). Comparing Human and Machine Explanations","volume":"34","author":"Holzinger","year":"2020","journal-title":"KI - Kuenstliche Intell."},{"key":"ref_51","first-page":"1885","article-title":"Understanding Black-box Predictions via Influence Functions","volume":"70","author":"Koh","year":"2017","journal-title":"Proc. ICML"},{"key":"ref_52","unstructured":"Papenmeier, A., Englebienne, G., and Seifert, C. (2019). How model accuracy and explanation fidelity influence user trust. arXiv."},{"key":"ref_53","unstructured":"Larasati, R., Liddo, A.D., and Motta, E. (2020, January 17). The Effect of Explanation Styles on User\u2019s Trust. 
Proceedings of the Workshop on Explainable Smart Systems for Algorithmic Transparency in Emerging Technologies co-located with IUI 2020, Cagliari, Italy."},{"key":"ref_54","doi-asserted-by":"crossref","unstructured":"Wang, X., and Yin, M. (2021, January 14\u201317). Are Explanations Helpful? A Comparative Study of the Effects of Explanations in AI-Assisted Decision-Making. Proceedings of the 26th International Conference on Intelligent User Interfaces, College Station, TX, USA.","DOI":"10.1145\/3397481.3450650"},{"key":"ref_55","doi-asserted-by":"crossref","first-page":"373","DOI":"10.1111\/rmir.12111","article-title":"Artificial Intelligence: Implications for Social Inflation and Insurance","volume":"21","author":"Kelley","year":"2018","journal-title":"Risk Manag. Insur. Rev."},{"key":"ref_56","unstructured":"Article 29 Working Party (2022, January 19). Guidelines on Automated Individual Decision-Making and Profiling for the Purposes of Regulation 2016\/679. Available online: https:\/\/ec.europa.eu\/newsroom\/article29\/items\/612053\/en."},{"key":"ref_57","unstructured":"(2022, January 19). Regulation (EU) 2016\/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95\/46\/EC (General Data Protection Regulation). Available online: https:\/\/eur-lex.europa.eu\/legal-content\/EN\/TXT\/PDF\/?uri=CELEX:02016R0679-20160504."},{"key":"ref_58","unstructured":"(2022, January 19). European Parliament Resolution of 20 October 2020 with Recommendations to the Commission on a Framework of Ethical Aspects of Artificial Intelligence, Robotics and Related Technologies, 2020\/2012(INL). Available online: https:\/\/eur-lex.europa.eu\/legal-content\/EN\/TXT\/?uri=CELEX:52020IP0275."},{"key":"ref_59","unstructured":"High-Level Expert Group on Artificial Intelligence (2022, January 19). Ethics Guidelines for Trustworthy AI. 
Available online: https:\/\/digital-strategy.ec.europa.eu\/en\/library\/ethics-guidelines-trustworthy-ai."},{"key":"ref_60","doi-asserted-by":"crossref","first-page":"651","DOI":"10.1016\/S0277-9536(99)00145-8","article-title":"Decision-making in the physician\u2013patient encounter: Revisiting the shared treatment decision-making model","volume":"49","author":"Charles","year":"1999","journal-title":"Soc. Sci. Med."},{"key":"ref_61","doi-asserted-by":"crossref","first-page":"i2139","DOI":"10.1136\/bmj.i2139","article-title":"Medical error\u2014The third leading cause of death in the US","volume":"353","author":"Makary","year":"2016","journal-title":"BMJ"},{"key":"ref_62","doi-asserted-by":"crossref","first-page":"447","DOI":"10.1126\/science.aax2342","article-title":"Dissecting racial bias in an algorithm used to manage the health of populations","volume":"366","author":"Obermeyer","year":"2019","journal-title":"Science"},{"key":"ref_63","doi-asserted-by":"crossref","unstructured":"Pourhomayoun, M., and Shakibi, M. (2020). Predicting mortality risk in patients with COVID-19 using artificial intelligence to help medical decision-making. MedRxiv.","DOI":"10.1101\/2020.03.30.20047308"},{"key":"ref_64","doi-asserted-by":"crossref","first-page":"67","DOI":"10.1007\/s10648-008-9093-4","article-title":"Example-Based Learning in Heuristic Domains: A Cognitive Load Theory Account","volume":"21","author":"Renkl","year":"2009","journal-title":"Educ. Psychol. Rev."},{"key":"ref_65","doi-asserted-by":"crossref","unstructured":"Cai, C.J., Jongejan, J., and Holbrook, J. (2019, January 17\u201320). The Effects of Example-Based Explanations in a Machine Learning Interface. 
Proceedings of the 24th International Conference on Intelligent User Interfaces (IUI \u201919), Marina del Rey, CA, USA.","DOI":"10.1145\/3301275.3302289"},{"key":"ref_66","doi-asserted-by":"crossref","first-page":"520","DOI":"10.1177\/0018720812465081","article-title":"I Trust It, but I Don\u2019t Know Why: Effects of Implicit Attitudes Toward Automation on Trust in an Automated System","volume":"55","author":"Merritt","year":"2013","journal-title":"Hum. Factors"},{"key":"ref_67","doi-asserted-by":"crossref","unstructured":"Cropanzano, R.S., and Ambrose, M.L. (2015). Measuring Justice and Fairness. The Oxford Handbook of Justice in the Workplace, Oxford University Press.","DOI":"10.1093\/oxfordhb\/9780199981410.013.8"},{"key":"ref_68","doi-asserted-by":"crossref","unstructured":"Schoeffer, J., Machowski, Y., and Kuehl, N. (2021). Perceptions of Fairness and Trustworthiness Based on Explanations in Human vs. Automated Decision-Making. arXiv.","DOI":"10.24251\/HICSS.2022.134"},{"key":"ref_69","doi-asserted-by":"crossref","unstructured":"Zhou, J., Chen, F., Berry, A., Reed, M., Zhang, S., and Savage, S. (2020, January 1\u20134). A Survey on Ethical Principles of AI and Implementations. 
Proceedings of the 2020 IEEE Symposium Series on Computational Intelligence (SSCI), Canberra, ACT, Australia.","DOI":"10.1109\/SSCI47803.2020.9308437"}],"container-title":["Machine Learning and Knowledge Extraction"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2504-4990\/4\/2\/26\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T23:33:25Z","timestamp":1760139205000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2504-4990\/4\/2\/26"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,6,16]]},"references-count":69,"journal-issue":{"issue":"2","published-online":{"date-parts":[[2022,6]]}},"alternative-id":["make4020026"],"URL":"https:\/\/doi.org\/10.3390\/make4020026","relation":{},"ISSN":["2504-4990"],"issn-type":[{"value":"2504-4990","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,6,16]]}}}