Publication

A multi-module explainable artificial intelligence framework for project risk management: Enhancing transparency in decision-making

Badhon, Bodrunnessa
Chakrabortty, Ripon K.
Anavatti, Sreenatha G.
Publication Type
Journal article with impact factor
Publication Year
2025
Journal
Engineering Applications of Artificial Intelligence
Publication Volume
148
Publication Issue
May
Abstract
The remarkable advancements in machine learning (ML) have led to its extensive adoption in Project Risk Management (PRM), leveraging its powerful predictive capabilities and data-driven insights that support proactive decision-making. Nevertheless, the “black-box” nature of ML models obscures the reasoning behind predictions, undermining transparency and trust. To address this, existing explainable artificial intelligence (XAI) techniques, such as Local Interpretable Model-agnostic Explanations (LIME), Global Priors-based LIME (G-LIME), and SHapley Additive exPlanations (SHAP), have been applied to interpret black-box models. Yet, they face considerable limitations in PRM, including an inability to model cascading effects and multi-level dependencies among risk factors, inconsistencies arising from random sampling, and a failure to capture non-linear interactions in high-dimensional risk data. In response to these shortcomings, this paper proposes the Multi-Module eXplainable Artificial Intelligence framework for Project Risk Management (MMXAI-PRM), a novel approach designed to address the unique demands of PRM. The framework consists of three modules: the Risk Relationship Insight Module (RRIM), which models risk dependencies using a Knowledge Graph (KG); the Risk Factor Influence Analysis Module (RFIAM), which introduces a Conditional Tabular Generative Adversarial Network-aided Local Interpretable Model-agnostic Explanations using Kernel Ridge Regression (CTGAN-LIME-KR) to ensure explanation consistency and handle non-linearity; and the Visualization and Interpretation Module (VIM), which synthesizes these insights into an interpretable, chain-based representation. Extensive experiments demonstrate that MMXAI-PRM delivers more consistent, stable, and accurate explanations than existing XAI methods. By improving interpretability, it enhances trust in AI-driven risk predictions and equips project managers with actionable insights, advancing decision-making in PRM.
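
The second module (RFIAM) pairs CTGAN-generated neighbourhood samples with a kernel ridge surrogate in place of LIME's independent random perturbations and linear model. The sketch below illustrates that general idea only; it is not the authors' implementation. The toy risk data, the stand-in black-box model, the kernel width sigma, and the linear-kernel trick used to recover per-feature attributions are all assumptions made for illustration (Python, using the open-source ctgan and scikit-learn packages).

# Minimal, hypothetical sketch of the CTGAN-LIME-KR idea: sample a
# synthetic neighbourhood with a CTGAN instead of LIME's random
# perturbations, then fit a locally weighted kernel ridge surrogate.
import numpy as np
import pandas as pd
from ctgan import CTGAN                      # pip install ctgan
from sklearn.ensemble import RandomForestRegressor
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)

# Toy tabular risk data: 5 risk factors -> risk score, with a stand-in
# black-box predictor (the paper's actual model is not specified here).
risk_df = pd.DataFrame(rng.random((500, 5)),
                       columns=[f"risk_factor_{i}" for i in range(5)])
y = risk_df.sum(axis=1) + 0.5 * risk_df["risk_factor_0"] * risk_df["risk_factor_1"]
black_box = RandomForestRegressor(n_estimators=100, random_state=0).fit(risk_df, y)

# 1) Learn the joint data distribution with CTGAN and draw a synthetic
#    neighbourhood, replacing LIME's unstable independent perturbations.
ctgan = CTGAN(epochs=50, verbose=False)
ctgan.fit(risk_df)
synthetic = ctgan.sample(300)

# 2) Weight synthetic samples by proximity to the instance being explained
#    (RBF proximity kernel, as in LIME); sigma is an assumed kernel width.
instance = risk_df.iloc[[0]]
sigma = 0.75
dist2 = ((synthetic - instance.values) ** 2).sum(axis=1)
weights = np.exp(-dist2 / (2 * sigma ** 2))

# 3) Fit a kernel ridge surrogate on the black-box predictions. With a
#    linear kernel, the dual coefficients map back to one weight per
#    feature, giving a LIME-style local attribution for each risk factor.
surrogate = KernelRidge(alpha=1.0, kernel="linear")
surrogate.fit(synthetic, black_box.predict(synthetic), sample_weight=weights)
attributions = synthetic.values.T @ surrogate.dual_coef_
for name, w in zip(risk_df.columns, attributions):
    print(f"{name}: {w:+.3f}")

Because CTGAN samples respect the joint distribution of the training data, repeated explanations of the same instance should vary less than with LIME's independent perturbations, which is the consistency benefit the abstract claims.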
Keywords
Explainable Artificial Intelligence, Knowledge Graph, Conditional Tabular Generative Adversarial Networks, Local Interpretable Model-Agnostic Explanations, Project Risk Management