Evaluation of Explainable Artificial Intelligence (EXPLAIN) tackles the challenge of evaluating users interacting with ML systems, with the aim of developing a generic and integrative framework for evaluating human–ML-system collaboration.
Funded by The Swedish Research Council (Etableringsbidrag, Vetenskapsrådet).
PI: Maria Riveiro
We live in a fascinating big-data world, full of challenges but also of possibilities. Over the next 20 years, more will change in the way we carry out our daily activities than has changed in the last 2,000; we are entering an augmented age, in which our natural capabilities are augmented by AI technologies that help us think, create, and connect.
However, understanding how people interact with Machine Learning (ML) technologies is critical to designing and evaluating systems that people can use effectively. Unfortunately, ML is often conceived in an impersonal way, and ML algorithms are often perceived as black boxes, which hinders their use and full exploitation in our daily activities.
EXPLAIN tackles the challenge of evaluating users interacting with ML systems. We argue that to evaluate these interactive processes, we need to include theoretical principles from Cognitive Science that account for human preconceptions about systems' inner workings and behavior.
We are developing a generic and integrative framework for evaluating human–ML-system collaboration, combining traditional methods from ML and HCI with principles from cognitive theories rarely considered in this interdisciplinary field.
The overall goal is to contribute to explaining our interactions with AI technologies, moving toward more usable AI for augmented intelligence.
If you would like to know more about the project, please contact Maria Riveiro, email@example.com.
Content updated 2019-09-09