Retno Larasati

Post-Doctoral Research Associate

Telephone: +44 (0)1908 653093


Retno Larasati is a Post-Doctoral Research Associate at the Knowledge Media Institute at the Open University. Her research interests focus on trust, explainable AI, and AI ethics.

Her PhD research examined the effect of explanation on user trust in AI healthcare. Before starting her PhD, her research was in the area of computer vision, including handwritten text recognition and visual-only word boundary detection. She holds a Master's degree in Advanced Software Engineering with Management from King's College London.

Keywords: AI ethics, explainable AI, human-computer trust, human-centred AI, human-centred explainable AI

Team: Venetia Brown, Tracie Farrell

Projects

Shifting Power


Publications

Brown, V., Larasati, R., Kwarteng, J. and Farrell, T. (2025). Understanding AI and Power: Situated Perspectives from Global North and South Practitioners. AI & Society (Early Access). https://oro.open.ac.uk/107004/.

Larasati, R. (2025). Inclusivity of AI Speech in Healthcare: A Decade Look Back. In: Speech AI for All Workshop - CHI 2025, 27 Apr 2025, Yokohama, Japan. https://oro.open.ac.uk/105547/.

Brown, V., Larasati, R., Third, A. and Farrell, T. (2024). A Qualitative Study on Cultural Hegemony and the Impacts of AI. In: AAAI/ACM Conference on AI, Ethics, and Society (AIES-24), 21-23 Oct 2024, San Jose, CA, USA. https://oro.open.ac.uk/99264/.

Larasati, R. (2023). AI in Healthcare: Impacts, Risks and Regulation to Mitigate Adverse Impacts. In: 3rd Workshop on Adverse Impacts and Collateral Effects of Artificial Intelligence Technologies, AiOfAi 2023, 21 Aug 2023, Macao. https://oro.open.ac.uk/95494/.

Larasati, R., De Liddo, A. and Motta, E. (2023). Meaningful Explanation Effect on User's Trust in an AI Medical System: Designing Explanations for Non-Expert Users. ACM Transactions on Interactive Intelligent Systems, 13(4). https://oro.open.ac.uk/94168/.

Larasati, R. (2023). AI in Healthcare - Reflection on Potential Harms and Impacts (Short Paper). In: HHAI-WS 2023: Workshops at the Second International Conference on Hybrid Human-Artificial Intelligence (HHAI), 26-27 Jun 2023, Munich, Germany. https://oro.open.ac.uk/93705/.

Larasati, R., De Liddo, A. and Motta, E. (2023). Human and AI Trust: Trust Attitude Measurement Instrument Development. In: Workshop on Trust and Reliance in AI-Assisted Tasks (TRAIT) at CHI 2023, 23 Apr 2023, Hamburg, Germany. https://oro.open.ac.uk/88780/.

Larasati, R. (2023). Trust and Explanation in Artificial Intelligence Systems: A Healthcare Application in Disease Detection and Preliminary Diagnosis. [Thesis] https://oro.open.ac.uk/88778/.

Larasati, R. (2022). Explainable AI for Breast Cancer Diagnosis: Application and User's Understandability Perception. In: 2022 International Conference on Electrical, Computer and Energy Technologies (ICECET), 20-22 Jul 2022, Prague, Czech Republic. https://oro.open.ac.uk/85137/.

Larasati, R., De Liddo, A. and Motta, E. (2021). AI Healthcare System Interface: Explanation Design for Non-Expert User Trust. In: ACM IUI 2021, Workshop 7: Transparency and Explanations in Smart Systems (TExSS), 13 Apr 2021. https://oro.open.ac.uk/77436/.

Larasati, R. (2020). AI Explanation Understanding from User's Perspective in Healthcare Application. In: CRC Student Conference 2020, 2020, The Open University. https://oro.open.ac.uk/79837/.

Larasati, R., De Liddo, A. and Motta, E. (2020). The Effect of Explanation Styles on User's Trust. In: Explainable Smart Systems for Algorithmic Transparency in Emerging Technologies (ExSS-ATEC), 17 Mar 2020, Cagliari, Italy. https://oro.open.ac.uk/69307/.

Larasati, R., De Liddo, A. and Motta, E. (2020). The Effect of Explanation Styles on User's Trust. In: 2020 Workshop on Explainable Smart Systems for Algorithmic Transparency in Emerging Technologies, 17 Mar 2020, Cagliari, Italy. https://oro.open.ac.uk/70421/.

Larasati, R. (2019). Interaction Between Human and Explanation in Explainable AI System for Cancer Detection and Preliminary Diagnosis. In: CRC Student Conference 2019, 2019, The Open University, UK. https://oro.open.ac.uk/75832/.

Larasati, R. and De Liddo, A. (2019). Building a Trustworthy Explainable AI in Healthcare. In: INTERACT 2019 / 17th IFIP International Conference on Human-Computer Interaction, Workshop: Human(s) in the Loop - Bringing AI & HCI Together, 2-6 Sep 2019, Cyprus. https://oro.open.ac.uk/68245/.


CONTACT US

Knowledge Media Institute
The Open University
Walton Hall
Milton Keynes
MK7 6AA
United Kingdom

Tel: +44 (0)1908 653800

Fax: +44 (0)1908 653169

Email: KMi Support
