Retno Larasati
Post-Doctoral Research Associate
Retno Larasati is a Post-Doctoral Research Associate at the Knowledge Media Institute at The Open University. Her research interests focus on trust, explainable AI, and AI ethics.
Her PhD research examines the effect of explanation on user trust in AI healthcare systems. Before starting her PhD, her research was in computer vision, including handwriting recognition and visual-only word boundary detection. She holds a Master's degree in Advanced Software Engineering with Management from King's College London.
Keywords: AI ethics, explainable AI, human-computer trust, human-centred AI, human-centred explainable AI
Team: Venetia Brown, Tracie Farrell
Publications
Larasati, R., De Liddo, A. and Motta, E. (2023) Meaningful Explanation Effect on User's Trust in an AI Medical System: Designing Explanations for Non-Expert Users, ACM Transactions on Interactive Intelligent Systems (Early Access)
Larasati, R. (2023) AI in Healthcare: Impacts, Risks and Regulation to Mitigate Adverse Impacts, 3rd Workshop on Adverse Impacts and Collateral Effects of Artificial Intelligence Technologies, AiOfAi 2023, Macao
Larasati, R. (2023) AI in Healthcare - Reflection on Potential Harms and Impacts (Short Paper), HHAI-WS 2023: Workshops at the Second International Conference on Hybrid Human-Artificial Intelligence (HHAI), Munich, Germany
Larasati, R. (2023) Trust and Explanation in Artificial Intelligence Systems: A Healthcare Application in Disease Detection and Preliminary Diagnosis
Larasati, R., De Liddo, A. and Motta, E. (2023) Human and AI Trust: Trust Attitude Measurement Instrument Development, Workshop on Trust and Reliance in AI-Assisted Tasks (TRAIT) at CHI 2023, Hamburg, Germany