You should not control what you do not understand: the risks of controllability in AI
Affiliation: PUC-Rio, BR
Chapter from the book: Loizides, F. et al. 2020. Human Computer Interaction and Emerging Technologies: Adjunct Proceedings from the INTERACT 2019 Workshops.
In this paper, we posit that giving users control over an artificial intelligence (AI) model may be dangerous without a proper understanding of how the model works. Traditionally, AI research has been more concerned with improving accuracy than with putting humans in the loop, i.e., with user interactivity. However, as AI tools become more widespread, high-quality user interfaces and interaction design become essential to consumers' adoption of such tools. As developers seek to give users more influence over AI models, we argue that this urge should be tempered by first improving users' understanding of the models' behavior.