Evaluation of Explainable Artificial Intelligence (EXPLAIN) tackles the challenge of evaluating users interacting with machine learning (ML) systems, aiming to develop a generic and integrative framework for evaluating human ML-system collaboration.

Funded by the Swedish Research Council (Vetenskapsrådet) through an Etableringsbidrag (starting grant).

PI: Maria Riveiro

Description

We live in a fascinating big-data world, full of challenges but also possibilities. Over the next 20 years, more will change in the way we carry out our daily activities than has changed in the last 2,000; we are entering an augmented age, in which our natural capabilities are augmented by AI technologies that help us think, make, and stay connected.

However, understanding how people interact with ML technologies is critical to designing and evaluating systems that people can use effectively. Unfortunately, ML is often conceived in an impersonal way, and ML algorithms are often perceived as black boxes, which hinders their use and their full exploitation in our daily activities.

EXPLAIN tackles the challenge of evaluating users interacting with ML systems. We argue that to evaluate these interactive processes, we need to include theoretical principles from cognitive science that account for human preconceptions about systems' inner workings and behavior.

Expected outcomes

We are developing a generic and integrative framework for evaluating human ML-system collaboration, combining traditional methods from ML and human-computer interaction (HCI) with principles from cognitive theories that are rarely considered in this interdisciplinary field.

The overall goal is to help explain our interactions with AI technologies, moving towards more usable AI for augmented intelligence.

Project duration and funding

EXPLAIN is funded by the Swedish Research Council (Vetenskapsrådet, VR) and runs from 2019 to 2022.

Contact information

If you would like to know more about the project, please contact Maria Riveiro, maria.riveiro@ju.se.

Publications

  • Riveiro, M. (2023). Expectations, trust, and evaluation. Dagstuhl Reports, 12(8), 109.
  • Riveiro, M. (2023). A design theory for uncertainty visualization? Dagstuhl Reports, 12(8), 12-13.
  • Riveiro, M. and Thill, S. (2022). The challenges of providing explanations of AI systems when they do not behave like users expect. In Proceedings of the 30th ACM Conference on User Modeling, Adaptation and Personalization (UMAP '22), July 4-7, 2022, Barcelona, Spain. ACM, New York, NY, USA, 11 pages. https://doi.org/10.1145/3503252.3531306
  • Beauxis-Aussalet, E., Behrisch, M., Borgo, R., Chau, D. H., Collins, C., Ebert, D., El-Assady, M., Endert, A., Keim, D. A., Kohlhammer, J., Oelke, D., Peltonen, J., Riveiro, M., Schreck, T., Strobelt, H. and van Wijk, J. J. (2021). The role of interactive visualization in fostering trust in AI. IEEE Computer Graphics and Applications, 41(6), 7-12. https://doi.org/10.1109/MCG.2021.3107875