Title: Explainable AI for Operational Research: A defining framework, methods, applications, and a research agenda

Authors: Coussement, Kristof; Caigny, Arno De; Słowiński, Roman; Baesens, Bart; Boute, Robert; Choi, Tsan-Ming; Delen, Dursun; Kraus, Mathias; Lessmann, Stefan; Maldonado, Sebastián; Martens, David; Óskarsdóttir, María; Vairetti, Carla; Verbeke, Wouter; Weber, Richard

Journal: European Journal of Operational Research
Date: 2023-10-10
Year: 2024
ISSN: 0377-2217
DOI: 10.1016/j.ejor.2023.09.026
PII: S0377-2217(23)00729-4
Identifier: 1-s2.0-S0377221723007294
URL: https://www.sciencedirect.com/science/article/pii/S0377221723007294
Handle: http://hdl.handle.net/20.500.12127/7266
Language: en
Rights: © 2023 Elsevier B.V. All rights reserved.
Type: Other

Keywords: Decision analysis; XAI; Explainable artificial intelligence; Interpretable machine learning; XAIOR

Abstract: The ability to understand and explain the outcomes of data analysis methods, with regard to aiding decision-making, has become a critical requirement for many applications. For example, in operational research domains, data analytics have long been promoted as a way to enhance decision-making. This study proposes a comprehensive, normative framework to define explainable artificial intelligence (XAI) for operational research (XAIOR) as a reconciliation of the three subdimensions that constitute its requirements: performance, attributable, and responsible analytics. In turn, this article offers in-depth overviews of how XAIOR can be deployed through various methods with respect to distinct domains and applications. Finally, an agenda for future XAIOR research is defined.

102358