{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,4]],"date-time":"2026-04-04T03:45:51Z","timestamp":1775274351024,"version":"3.50.1"},"reference-count":17,"publisher":"Frontiers Media SA","license":[{"start":{"date-parts":[[2024,8,21]],"date-time":"2024-08-21T00:00:00Z","timestamp":1724198400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":["frontiersin.org"],"crossmark-restriction":true},"short-container-title":["Front. Artif. Intell."],"abstract":"<jats:p>Over the last decade, investment in artificial intelligence (AI) has grown significantly, driven by technology companies and the demand for PhDs in AI. However, new challenges have emerged, such as the \u2018black box\u2019 and bias in AI models. Several approaches have been developed to reduce these problems. Responsible AI focuses on the ethical development of AI systems, considering social impact. Fair AI seeks to identify and correct algorithm biases, promoting equitable decisions. Explainable AI aims to create transparent models that allow users to interpret results. Finally, Causal AI emphasizes identifying cause-and-effect relationships and plays a crucial role in creating more robust and reliable systems, thereby promoting fairness and transparency in AI development. Responsible, Fair, and Explainable AI each have several weaknesses. However, Causal AI is the approach that attracts the least criticism, offering reassurance about the ethical development of AI.<\/jats:p>","DOI":"10.3389\/frai.2024.1439702","type":"journal-article","created":{"date-parts":[[2024,8,22]],"date-time":"2024-08-22T10:19:21Z","timestamp":1724321961000},"update-policy":"https:\/\/doi.org\/10.3389\/crossmark-policy","source":"Crossref","is-referenced-by-count":12,"title":["Implications of causality in artificial intelligence"],"prefix":"10.3389","volume":"7","author":[{"given":"Lu\u00eds","family":"Cavique","sequence":"first","affiliation":[]}],"member":"1965","published-online":{"date-parts":[[2024,8,21]]},"reference":[{"key":"ref1","author":"Angwin","year":"2016"},{"key":"ref2","doi-asserted-by":"publisher","first-page":"7353","DOI":"10.1073\/pnas.1510489113","article-title":"Recursive partitioning for heterogeneous causal effects","volume":"113","author":"Athey","year":"2016","journal-title":"Proc. Natl. Acad. Sci. U. S. A."},{"key":"ref3","first-page":"688969","article-title":"Principles and practice of explainable machine learning","volume-title":"Front. Big Data","author":"Belle","year":"2021"},{"key":"ref4","article-title":"The Brussels effect","volume":"107","author":"Bradford","year":"2012","journal-title":"Northwest. Univ. Law Rev."},{"key":"ref5","doi-asserted-by":"crossref","first-page":"1","DOI":"10.4018\/978-1-6684-9591-9.ch001","article-title":"Causality: the next step in artificial intelligence","volume-title":"Philosophy of artificial intelligence and its place in society","author":"Cavique","year":"2023"},{"key":"ref6","article-title":"The responsibility is ours","author":"Dignum","year":"2019"},{"key":"ref7","year":"2020"},{"key":"ref8","year":"2024"},{"key":"ref9","article-title":"Counterfactual fairness","author":"Kusner","year":"2017"},{"key":"ref10","doi-asserted-by":"publisher","first-page":"20","DOI":"10.1145\/358831","article-title":"Why is the current xAI not meeting the expectations?","volume":"66","author":"Malizia","year":"2023","journal-title":"Commun. ACM"},{"key":"ref11","volume-title":"Interpretable machine learning with Python: learn to build interpretable high-performance models with hands-on real-world examples","author":"Mas\u00eds","year":"2021"},{"key":"ref12","year":"2022"},{"key":"ref13","author":"Molnar","year":"2024"},{"key":"ref14","volume-title":"The book of why: the new science of cause and effect","author":"Pearl","year":"2018"},{"key":"ref15","doi-asserted-by":"crossref","DOI":"10.1007\/978-3-031-16474-3_51","article-title":"Uplift modeling using the transformed outcome approach","volume-title":"Progress in artificial intelligence, EPIA 2022","author":"Pinheiro","year":"2022"},{"key":"ref16","doi-asserted-by":"publisher","first-page":"88","DOI":"10.1145\/3587930","article-title":"What should we do when our ideas of fairness conflict","volume":"67","author":"Raghavan","year":"2023","journal-title":"Commun. ACM"},{"key":"ref17","volume-title":"Responsible data science, special topics in DataScience, DS-GA 3001.009","author":"Stoyanovich","year":"2020"}],"container-title":["Frontiers in Artificial Intelligence"],"original-title":[],"link":[{"URL":"https:\/\/www.frontiersin.org\/articles\/10.3389\/frai.2024.1439702\/full","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,8,22]],"date-time":"2024-08-22T17:54:12Z","timestamp":1724349252000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.frontiersin.org\/articles\/10.3389\/frai.2024.1439702\/full"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,8,21]]},"references-count":17,"alternative-id":["10.3389\/frai.2024.1439702"],"URL":"https:\/\/doi.org\/10.3389\/frai.2024.1439702","relation":{},"ISSN":["2624-8212"],"issn-type":[{"value":"2624-8212","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,8,21]]},"article-number":"1439702"}}