Full Seminar Details
Retno Larasati
Knowledge Media Institute, The Open University
This event took place on Tuesday 22 November 2022 at 11:30
The way in which Artificial Intelligence (AI) systems reach conclusions is not always transparent to end-users, whether experts or non-experts. This creates serious concerns about the trust that people would place in such systems if they were adopted in real-life contexts. These concerns become even greater when individuals’ well-being is at stake, as in the case of AI technologies applied to healthcare. In addition, the issues of over-trust and under-trust need to be addressed: non-expert users have been shown to over-trust or under-trust AI systems even when they have very little knowledge of a system’s technical competence. Over-trust can have dangerous societal consequences when trust is placed in systems of low or unclear technical competence, while under-trust can hinder the adoption of AI systems in everyday life. This research studies the extent to which explanations and interactions can help non-expert users properly calibrate their trust in AI systems, specifically AI for disease detection and preliminary diagnosis: reducing trust when users tend to over-trust an unreliable system, and increasing trust when the system can be shown to work well. The research makes three fundamental contributions to knowledge. First, it informs how to construct explanations that non-expert users can make sense of (meaningful explanations). Second, it contextualises current AI explanation research in healthcare, informing how explanations should be designed for AI-assisted disease detection and preliminary diagnosis systems (Explanation Design Guidelines). Third, it provides preliminary insights into the importance of the interaction modality of explanations in influencing trust. These preliminary findings can inform and promote future research on XAI by shifting the focus from explanation content design to explanation delivery and interaction design.
This replay can only be watched on Facebook - https://fb.watch/rPK-C-1c7Q/
Maven of the Month
We also invite top experts in AI and Knowledge Technologies to discuss major socio-technological topics with an audience comprising both members of the Knowledge Media Institute and the wider staff at The Open University. Unlike our seminar series, these events follow a Q&A format.