{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,30]],"date-time":"2026-01-30T10:00:29Z","timestamp":1769767229000,"version":"3.49.0"},"reference-count":0,"publisher":"Advances in Artificial Intelligence and Machine Learning","issue":"03","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["AAIML"],"published-print":{"date-parts":[[2025]]},"abstract":"<jats:p>This study re-examines the role of Explainable AI (XAI) within human-AI collaborative environments and proposes a design and evaluation framework for a human-AI collaboration system that integrates Large Language Models (LLMs) and state-of-the-art AI agent technology. The proposed methodology, which consists of an AI model, an explanation generation module, and a human-AI interface, enhances the adaptability and reliability of explanations. A key contribution of this research is the introduction of an LLM-XAI collaborative architecture that integrates personalized, adaptive explanations with a feedback-driven improvement mechanism. Notably, the system presents a novel paradigm for explanations that distinguishes it from conventional XAI methods by utilizing Chain-of-Thought reasoning traces, natural language explanations, and a multi-stage verification mechanism provided by Deep Research and LLM-based agents. The system defines core quality metrics such as explainability, transparency, reliability, interactivity, and adaptability, and concurrently develops a multi-dimensional evaluation framework to assess these metrics using both quantitative and qualitative data. This system is structured with a feedback loop that enables continuous learning and improvement while transparently explaining the AI\u2019s decision-making process. The quality of explanations is also assessed with quantitative metrics, and the system improves continuously through user feedback.
This study also presents quantitative and qualitative evaluation metrics and user research methodologies to validate the system\u2019s effectiveness, which is expected to contribute to achieving trust-based human-AI collaboration. Furthermore, to demonstrate its practical applicability, a pilot implementation in a medical diagnosis support scenario is presented, offering an ideal model where humans and AI collaborate complementarily, thereby playing a crucial role in promoting the ethical use and social acceptance of AI systems.<\/jats:p>","DOI":"10.54364\/aaiml.2025.53240","type":"journal-article","created":{"date-parts":[[2025,9,23]],"date-time":"2025-09-23T09:24:21Z","timestamp":1758619461000},"page":"4308-4341","source":"Crossref","is-referenced-by-count":3,"title":["Design and Evaluation Methods for LLM-Based Explainable AI (XAI)-Based Human-AI Collaboration Systems"],"prefix":"10.54364","volume":"05","author":[{"given":"Cheonsu","family":"Jeong","sequence":"first","affiliation":[]}],"member":"32807","published-online":{"date-parts":[[2025]]},"container-title":["Advances in Artificial Intelligence and Machine Learning"],"original-title":[],"link":[{"URL":"https:\/\/www.oajaiml.com\/uploads\/archivepdf\/430552340.pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,9,23]],"date-time":"2025-09-23T09:24:21Z","timestamp":1758619461000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.oajaiml.com\/uploads\/archivepdf\/430552340.pdf"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025]]},"references-count":0,"journal-issue":{"issue":"03","published-online":{"date-parts":[[2025]]},"published-print":{"date-parts":[[2025]]}},"URL":"https:\/\/doi.org\/10.54364\/aaiml.2025.53240","relation":{},"ISSN":["2582-9793"],"issn-type":[{"value":"2582-9793","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025]]}}}