Retno Larasati
Post-Doctoral Research Associate
Retno Larasati is a Post-doctoral Research Associate at the Knowledge Media Institute, The Open University. Her research interests focus on trust, explainable AI, and AI ethics.
Her PhD research examined the effect of explanation on user trust in AI healthcare. Before starting her PhD, her research was in computer vision, including handwriting recognition and visual-only word boundary detection. She holds a Master's degree in Advanced Software Engineering with Management from King's College London.
Keywords: AI ethics, explainable AI, human-computer trust, human-centred AI, human-centred explainable AI
Team: Venetia Brown, Tracie Farrell
Publications
Larasati, R. (2023) Trust and Explanation in Artificial Intelligence Systems: A Healthcare Application in Disease Detection and Preliminary Diagnosis
Larasati, R., De Liddo, A. and Motta, E. (2023) Human and AI Trust: Trust Attitude Measurement Instrument Development, Workshop on Trust and Reliance in AI-Assisted Tasks (TRAIT) at CHI 2023, Hamburg, Germany
Larasati, R. (2022) Explainable AI for Breast Cancer Diagnosis: Application and User's Understandability Perception, 2022 International Conference on Electrical, Computer and Energy Technologies (ICECET), Prague, Czech Republic
Larasati, R., De Liddo, A. and Motta, E. (2021) AI Healthcare System Interface: Explanation Design for Non-Expert User Trust, Workshop on Transparency and Explanations in Smart Systems (TExSS) at ACM IUI 2021
Larasati, R., De Liddo, A. and Motta, E. (2020) The effect of explanation styles on user's trust, 2020 Workshop on Explainable Smart Systems for Algorithmic Transparency in Emerging Technologies, Cagliari, Italy