Building a Trustworthy Explainable AI in Healthcare
Retno Larasati, Anna DeLiddo
Chapter from the book: Loizides, F et al. 2020. Human Computer Interaction and Emerging Technologies: Adjunct Proceedings from the INTERACT 2019 Workshops.
The lack of clarity on how the most advanced AI algorithms do what they do creates serious concerns about the accountability, trust, and social acceptability of AI technologies. These concerns become even greater when people’s well-being is at stake, as in healthcare. This calls for systems that make decisions transparent, understandable, and explainable to users. This paper briefly discusses trust in AI healthcare systems, proposes a framework relating trust to the characteristics of explanations, and outlines possible future studies toward building trustworthy Explainable AI.
Larasati R. & DeLiddo A. 2020. Building a Trustworthy Explainable AI in Healthcare. In: Loizides, F et al (eds.), Human Computer Interaction and Emerging Technologies. Cardiff: Cardiff University Press. DOI: https://doi.org/10.18573/book3.ab
This is an Open Access chapter distributed under the terms of the Creative Commons Attribution 4.0 license (unless stated otherwise), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Copyright is retained by the author(s).
This book has been peer reviewed. See our Peer Review Policies for more information.
Published on May 7, 2020