Member
Retno Larasati
PhD Research Student
Retno Larasati is a PhD student at the Knowledge Media Institute at The Open University. Her research focuses on user interaction with explainable AI.
Before starting her PhD, she worked in computer vision, including handwriting recognition and visual-only word boundary detection. She holds a Master's degree in Advanced Software Engineering with Management from King's College London.
Keywords: explainable AI, human-computer trust, human-centred AI, human-centred explainable AI
Publications
Larasati, R., De Liddo, A. and Motta, E. (2021) AI Healthcare System Interface: Explanation Design for Non-Expert User Trust, ACM IUI 2021, Workshop on Transparency and Explanations in Smart Systems (TExSS)
Larasati, R., De Liddo, A. and Motta, E. (2020) The effect of explanation styles on user's trust, 2020 Workshop on Explainable Smart Systems for Algorithmic Transparency in Emerging Technologies, Cagliari, Italy
Larasati, R. and De Liddo, A. (2019) Building a Trustworthy Explainable AI in Healthcare, INTERACT 2019, 17th IFIP International Conference on Human-Computer Interaction, Workshop: Human(s) in the Loop - Bringing AI & HCI Together, Cyprus
Larasati, R. (2019) Interaction Between Human and Explanation in Explainable AI System for Cancer Detection and Preliminary Diagnosis, CRC Student Conference 2019, The Open University, UK