Authors: Badhon, Bodrunnessa; Chakrabortty, Ripon K.; Anavatti, Sreenatha G.; Vanhoucke, Mario
Dates: 2025-03-10; 2025-03-10; 2025
ISSN: 0952-1976
DOI: 10.1016/j.engappai.2025.110427
URI: https://repository.vlerick.com/handle/20.500.12127/7654

Abstract: The remarkable advancements in machine learning (ML) have led to its extensive adoption in Project Risk Management (PRM), leveraging its powerful predictive capabilities and data-driven insights to support proactive decision-making. Nevertheless, the "black-box" nature of ML models obscures the reasoning behind predictions, undermining transparency and trust. To address this, existing explainable artificial intelligence (XAI) techniques, such as Local Interpretable Model-agnostic Explanations (LIME), Global Priors-based LIME (G-LIME), and SHapley Additive exPlanations (SHAP), have been applied to interpret black-box models. Yet these techniques face considerable limitations in PRM: they cannot model cascading effects and multi-level dependencies among risk factors, they suffer from inconsistencies due to random sampling, and they fail to capture non-linear interactions in high-dimensional risk data. In response to these shortcomings, this paper proposes the Multi-Module eXplainable Artificial Intelligence framework for Project Risk Management (MMXAI-PRM), a novel approach designed to address the unique demands of PRM. The framework consists of three modules: the Risk Relationship Insight Module (RRIM), which models risk dependencies using a Knowledge Graph (KG); the Risk Factor Influence Analysis Module (RFIAM), which introduces a Conditional Tabular Generative Adversarial Network-aided Local Interpretable Model-agnostic Explanations using Kernel Ridge Regression (CTGAN-LIME-KR) to ensure explanation consistency and handle non-linearity; and the Visualization and Interpretation Module (VIM), which synthesizes these insights into an interpretable, chain-based representation.
Extensive experiments demonstrate that MMXAI-PRM delivers more consistent, stable, and accurate explanations than existing XAI methods. By improving interpretability, it enhances trust in AI-driven risk predictions and equips project managers with actionable insights, advancing decision-making in PRM.

Language: en
Keywords: Explainable Artificial Intelligence; Knowledge Graph; Conditional Tabular Generative Adversarial Networks; Local Interpretable Model-Agnostic Explanations; Project Risk Management
Title: A multi-module explainable artificial intelligence framework for project risk management: Enhancing transparency in decision-making
Journal: Engineering Applications of Artificial Intelligence
eISSN: 1873-6769
58614
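To make the RRIM idea concrete, the following is a minimal sketch of modeling risk dependencies as a directed graph and enumerating cascading risk chains. The risk names, edge structure, and the depth-first traversal are illustrative assumptions, not the paper's actual knowledge graph or algorithm.

```python
# Hypothetical risk-dependency graph: each edge points from a risk factor to
# the risks it can cascade into (names are illustrative, not from the paper).
risk_graph = {
    "scope creep": ["schedule delay", "budget overrun"],
    "schedule delay": ["budget overrun", "resource conflict"],
    "budget overrun": ["quality reduction"],
    "resource conflict": ["quality reduction"],
    "quality reduction": [],
}

def cascade_chains(graph, start):
    """Enumerate all cascading risk chains reachable from a starting risk."""
    chains = []

    def dfs(node, path):
        successors = graph.get(node, [])
        if not successors:
            chains.append(path)  # chain ends at a terminal risk
            return
        for nxt in successors:
            if nxt in path:  # guard against cycles in the dependency graph
                chains.append(path)
                continue
            dfs(nxt, path + [nxt])

    dfs(start, [start])
    return chains

chains = cascade_chains(risk_graph, "scope creep")
# Each chain is one cascading path, e.g.
# ["scope creep", "schedule delay", "budget overrun", "quality reduction"]
```

Such chains are one plausible input for a chain-based visualization like the VIM described above.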
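The CTGAN-LIME-KR idea — a LIME-style local explanation with a kernel ridge regression surrogate — can be sketched as follows. This is a simplified illustration, not the paper's implementation: a random forest stands in for the black-box risk model, plain Gaussian perturbation replaces the CTGAN sampler, the data is synthetic, and feature attributions are taken as finite-difference gradients of the surrogate at the explained instance.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)

# Synthetic stand-in for a project-risk dataset and a black-box risk model.
X = rng.normal(size=(500, 4))
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2] + rng.normal(scale=0.1, size=500)
black_box = RandomForestRegressor(random_state=0).fit(X, y)

def explain_instance(x, n_samples=1000, sigma=0.5):
    """LIME-style local explanation with a kernel-ridge surrogate.

    Gaussian perturbation around x stands in for the paper's CTGAN sampler.
    """
    # Perturb the instance and query the black box.
    Z = x + rng.normal(scale=sigma, size=(n_samples, len(x)))
    preds = black_box.predict(Z)
    # Proximity weights: perturbations closer to x count more.
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * sigma**2))
    # Non-linear local surrogate: kernel ridge regression with an RBF kernel.
    surrogate = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.5)
    surrogate.fit(Z, preds, sample_weight=w)
    # Feature attributions: finite-difference gradient of the surrogate at x.
    eps = 1e-3
    base = surrogate.predict(x[None, :])[0]
    return np.array([
        (surrogate.predict((x + eps * np.eye(len(x))[i])[None, :])[0] - base) / eps
        for i in range(len(x))
    ])

attributions = explain_instance(np.zeros(4))
```

The kernel surrogate lets the local model bend with non-linear interactions that a linear LIME surrogate would average away, which is the motivation the abstract gives for CTGAN-LIME-KR.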