PERSONALIZED EXPLAINABILITY REQUIREMENTS ANALYSIS FRAMEWORK FOR AI-ENABLED SYSTEMS
Abstract
AI-enabled systems increasingly support decision-making through predictive analysis and personalized recommendations in numerous sectors. However, complex machine learning (ML) models become less transparent and may recommend incorrect decisions, which leads to a loss of confidence and trust. Consequently, explainability is considered a key requirement of AI-enabled systems. Recent studies focus on implementing explainable AI (XAI) techniques to improve the transparency and trustworthiness of ML models. However, analyzing the explainability requirements of different stakeholders, especially non-technical stakeholders, of AI-enabled systems remains challenging. Existing approaches lack a comprehensive and personalized requirements analysis process that investigates the risk impact of outcomes produced by ML models and analyzes stakeholders' diverse needs for explanations. This research proposes a framework whose requirements analysis process comprises four key stages: (1) domain analysis, (2) stakeholder analysis, (3) explainability analysis, and (4) translation and prioritization, to analyze the personalized explainability needs of four types of stakeholders (i.e., the development team, subject matter experts, decision makers, and affected users) of AI-enabled systems. As demonstrated by the case study, the proposed framework can be applied to analyze diverse stakeholders' needs and to define personalized explainability requirements for AI-enabled systems effectively.
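As a purely illustrative sketch (not part of the paper), the four-stage process and four stakeholder types named above could be modeled in Python roughly as follows; all identifiers (Stakeholder, ExplainabilityRequirement, analyze_requirements) and the stage logic are hypothetical assumptions, not the framework's actual implementation.

    from dataclasses import dataclass
    from enum import Enum
    from typing import List


    class Stakeholder(Enum):
        """The four stakeholder types considered by the framework."""
        DEVELOPMENT_TEAM = "development team"
        SUBJECT_MATTER_EXPERT = "subject matter expert"
        DECISION_MAKER = "decision maker"
        AFFECTED_USER = "affected user"


    @dataclass
    class ExplainabilityRequirement:
        """A personalized explainability requirement for one stakeholder."""
        stakeholder: Stakeholder
        need: str                     # explanation need elicited from the stakeholder
        risk_impact: str = "unknown"  # risk impact of the ML model's outcomes
        priority: int = 0             # assigned during translation and prioritization


    def analyze_requirements(domain: str,
                             stakeholders: List[Stakeholder]) -> List[ExplainabilityRequirement]:
        """Illustrative pipeline for the four stages of the requirements analysis."""
        # Stage 1: domain analysis -- characterize the domain and the risk impact of outcomes.
        risk_impact = "high" if domain in {"healthcare", "finance"} else "medium"

        # Stage 2: stakeholder analysis -- identify who needs explanations.
        requirements = [
            ExplainabilityRequirement(stakeholder=s,
                                      need=f"explanation need for {s.value}",
                                      risk_impact=risk_impact)
            for s in stakeholders
        ]

        # Stage 3: explainability analysis -- refine each stakeholder's explanation need.
        for req in requirements:
            if req.stakeholder is Stakeholder.AFFECTED_USER:
                req.need = "plain-language justification of the individual outcome"

        # Stage 4: translation and prioritization -- order requirements; here,
        # affected users are ranked first as a placeholder prioritization rule.
        ordered = sorted(requirements,
                         key=lambda r: r.stakeholder is not Stakeholder.AFFECTED_USER)
        for rank, req in enumerate(ordered, start=1):
            req.priority = rank
        return requirements


    if __name__ == "__main__":
        for r in analyze_requirements("healthcare", list(Stakeholder)):
            print(r)

Running this hypothetical sketch prints one prioritized requirement per stakeholder type for a high-risk domain; the real framework elicits needs from stakeholders rather than generating them programmatically.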