Categories: Speculations & Theory

Sensible AI: Re-imagining Interpretability and Explainability using Sensemaking Theory

Recent work by Kaur et al. (2022) re-examines the interpretability and explainability of machine learning models through the lens of Weick's sensemaking theory.

[Abstract]

Understanding how ML models work is a prerequisite for responsibly designing, deploying, and using ML-based systems. With interpretability approaches, ML can now offer explanations for its outputs to aid human understanding. Though these approaches rely on guidelines for how humans explain things to each other, they ultimately solve for improving the artefact — an explanation. In this paper, we propose an alternate framework for interpretability grounded in Weick’s sensemaking theory, which focuses on who the explanation is intended for. Recent work has advocated for the importance of understanding stakeholders’ needs—we build on this by providing concrete properties (e.g., identity, social context, environmental cues, etc.) that shape human understanding. We use an application of sensemaking in organizations as a template for discussing design guidelines for sensible AI, AI that factors in the nuances of human cognition when trying to explain itself.

Kaur, H., Adar, E., Gilbert, E., & Lampe, C. (2022). Sensible AI: Re-imagining interpretability and explainability using sensemaking theory. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT '22).

Link to paper: https://arxiv.org/abs/2205.05057
