PERSONALIZED EXPLAINABILITY REQUIREMENTS ANALYSIS FRAMEWORK FOR AI-ENABLED SYSTEMS

Authors

  • Jia Kai Quah Department of Software Engineering, Faculty of Computer Science and Information Technology, Universiti Malaya, 50603 Kuala Lumpur, Malaysia
  • Yin Kia Chiam Department of Software Engineering, Faculty of Computer Science and Information Technology, Universiti Malaya, 50603 Kuala Lumpur, Malaysia
  • Nor Ashikin Md Sari Department of Medicine, Faculty of Medicine, Universiti Malaya, 50603 Kuala Lumpur, Malaysia

DOI:

https://doi.org/10.22452/mjcs.vol38no1.3

Keywords:

Requirements analysis, Explainable AI, User-centric, User story, Personalization

Abstract

Artificial Intelligence (AI) has evolved into an indispensable technology that assists humans in making better decisions through predictive analysis and personalized recommendations across numerous sectors. However, complex machine learning (ML) models are less transparent and may recommend incorrect decisions, leading to a loss of confidence and trust. Consequently, explainability is considered a key requirement of AI-enabled systems. Recent studies focus on implementing explainable AI (XAI) techniques to improve the transparency and trustworthiness of ML models. However, analyzing the explainability requirements of different stakeholders, especially non-technical stakeholders of AI-enabled systems, remains challenging: existing work lacks a comprehensive, personalized requirements analysis process that investigates the risk impact of outcomes produced by ML models and analyzes diverse stakeholder needs for explanations. This research proposes a requirements analysis framework with four key stages: (1) domain analysis, (2) stakeholder analysis, (3) explainability analysis, and (4) translation and prioritization, to analyze the personalized explainability needs of four types of stakeholders (i.e., the development team, subject matter experts, decision makers, and affected users) in AI-enabled systems. As the case study demonstrates, the proposed framework can be applied effectively to analyze diverse stakeholders' needs and define personalized explainability requirements for AI-enabled systems.


Published

2025-03-30

How to Cite

Quah, J. K., Chiam, Y. K., & Sari, N. A. M. (2025). PERSONALIZED EXPLAINABILITY REQUIREMENTS ANALYSIS FRAMEWORK FOR AI-ENABLED SYSTEMS. Malaysian Journal of Computer Science, 38(1), 55–80. https://doi.org/10.22452/mjcs.vol38no1.3