Publication

In AI we Trust: Determinants of continuous trust in the user/system interaction

Decroix, Ignace
Publication Type
Conference Presentation
Publication Year
2023
Abstract
The main objective of this study is to examine which factors influence users’ continuous trust in automated systems – and, more specifically, AI-based systems – and, subsequently, to develop an empirical model representing those factors. Influenced by the fourth wave of industrialisation, society and business are undergoing significant changes (Hancock, 2017; Brynjolfsson & McAfee, 2017). This wave comes with a vast array of new digital technologies (Schwab, 2016a; Schwab, 2016b) – consider artificial intelligence (Hansen & Bogh, 2020) – that are positioned as assets to leverage digital transformation efforts (Besson & Rowe, 2012). People and digital technology (Kane, 2015) interact in this context (Christ-Brendemühl & Schaarschmidt, 2019; Glikson & Woolley, 2020), and automation frequently acts on behalf of humans (Russell & Norvig, 2009; Xu, Mak, & Brintrup, 2021). However, less-rational factors (such as fear) come into play, even more so when higher levels of automation appear (Sarter, Woods, & Billings, 1997). Organisations risk employees developing technology perceptions that breed resistance (Venkatesh, 2006), reluctance (Kane et al., 2019), and disappointment. These perceptions, in turn, impact people’s interactions with technology (Bardakci & Ünver, 2019), leading users to neglect beneficial decision aids (Davis & Kottemann, 1995) and discount advice from algorithms (Prahl & Van Swol, 2017). Faulty interactions like these could cause a decrease in trust and subsequent disuse or sabotage of technology (Parasuraman & Riley, 1997).

Various theories have emerged in the technology acceptance literature (e.g., the Technology Acceptance Model (TAM3; Venkatesh & Bala, 2008)). Trust, considered the cornerstone of social interaction (Blau, 1964), has also been found to mediate human-technology relationships (Taddeo, 2017) and is seen as the degree to which a user can rely on the technology to achieve their goals under conditions of uncertainty and vulnerability (Lee & See, 2004). Especially when a system becomes too complex to be understood completely, trust navigates complexity and enables reliance (Gsenger & Strle, 2021). Focusing on AI, we found that many concerns relating to AI usage link back to trust (e.g., the perception of AI as a black box; Logg et al., 2019; Lockey et al., 2021). The concept of trust, however, has received very little attention in the AI literature thus far (Emaminejad et al., 2015). When trust is researched, the primary focus is on aspects of the technology itself rather than also including aspects of the individual and the environment (Toreini et al., 2019). We did not find any model in the literature that explains trust in systems deploying AI, nor could we find any survey that allows for measuring such trust (Böckle et al., 2021). As it stands, the majority of the current knowledge of human-machine interaction and the trust relationship draws on research in the context of automation. Further explorative research in the realm of AI is warranted.

Given that explorative research is required, we opt for qualitative research through Grounded Theory (Strauss & Corbin, 1994) and collect our data through semi-structured interviews with practitioners who use AI-based systems in their work context. We will use an interview guide with open-ended questions and transcribe the interviews verbatim. After every interview, we incorporate our reflections into the interview guide before continuing with a new round (Charmaz, 2006). This cycle (from collecting data to coding and comparing excerpts in NVivo 12) will be repeated until theoretical saturation is reached. This will allow us to build the first empirical model of the antecedents of trust in AI-based systems.

Results have not yet been obtained, but we are confident that a final empirical model will be ready before the EAWOP conference. As this study is primarily explorative, the emerging antecedents and model will require testing to analyse their reliability and utility. While conclusions cannot yet be drawn, we aim to increase both academics’ and practitioners’ understanding of users’ continuous trust in AI-based systems. This research also lays the foundation for a follow-up study to build and validate a survey measuring trust in AI-based systems. Studying continuous trust in AI systems connects to the changing world of work, especially given that the number of human-technology interactions has increased over recent decades and is expected to continue increasing. This topic links to EAWOP’s topic 16 (i.e., technology) and, specifically, its subitems Artificial Intelligence and Human-Machine-Systems.
Keywords
Trust, Artificial Intelligence