David Berry is currently Visiting Professor of Critical Theory and Digital Humanities in the Department of Digital Humanities. The following post from David introduces some of his current research on explainability and interpretability. He is giving a talk about this work at the Infrastructural Interventions workshop on Tuesday 22nd June.
I am very excited to be a Visiting Professor of Critical Theory and Digital Humanities in the Department of Digital Humanities at KCL in 2021. KCL not only has a great research culture, but also really exciting projects which I have been learning about. Whilst I am at King's, I have been working on a new project around the concept of explainability called “Explanatory Publics: Explainability, Automation and Critique.” Explainability is the idea that artificial intelligence systems should be able to generate a sufficient explanation of how an automated decision was made, representing or explaining, in some sense, its technical processing. With concerns over biases in algorithms, there is an idea that self-explanation by a machine learning system would engender trust in these systems. [1]
Trust is a fundamental basis of any system, but it has to be stabilised through the generation of norms and practices that create justifications for the way things are. This is required for automated decision-making, in part, because computation is increasingly a central aspect of a nation’s economy, real or imaginary. I argue that this matters under conditions of computational capitalism because, when we call for an explanation, we might be better able to understand the contradictions within this historically specific form of computation that emerges in late capitalism. I have been exploring how these contradictions are continually suppressed in computational societies and generate systemic problems, borne out of the need for the political economy of software to be obscured so that its functions and its mechanisms of value generation are hidden from public knowledge.
I argue that explainability offers a novel and critical means of intervention into, and transformation of, digital technology. By explanatory publics, I am gesturing to the need for frameworks of knowledge, whether social, political, technical, economic or cultural, to be justified through a social right to explanation. Explanations are assumed to tell us how things work, thereby giving us the power to change our environment in order to meet our own ends. Indeed, for a polity to be considered democratic, I argue that it must ensure that its citizens are able to develop a capacity for explanatory thought in relation to the digital (in addition to other capacities), and are able to question ideas, practices and institutions in a digital society. This also includes the corollary that citizens can demand explanatory accounts from the technologies, institutions and artificial intelligences they rely on.
The notion of explainability offers a critical point of intervention into these debates. When we address the problem of creating systems that can explain their automated decision-making processes, the concept of justification becomes paramount. However, many current discussions of explainability tend to be chiefly interested in creating an explanatory product, whereas I argue that an understanding of the explanatory process will have a greater impact on algorithmic legitimacy and democratic politics.
[1] Within the field of AI there is now a growing awareness of this problem of opaque systems, and a sub-discipline of “explainable AI” (XAI) has emerged and begun to address these very complex issues, although mainly through a technical approach.
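To give a concrete sense of what this technical approach can look like, the following minimal Python sketch is purely illustrative (the “loan approval” data, feature names and library choice are my own assumptions, not part of the project described above). It trains an interpretable model with scikit-learn and prints both its learned rules and the decision path behind a single automated decision:

```python
# A minimal, hypothetical sketch of machine self-explanation (not the author's
# method): an interpretable model whose automated decisions can be traced.
# The "loan approval" data, feature names and thresholds are invented for
# illustration only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy applicant data: [income (thousands), existing_debt (thousands)]
X = np.array([[20, 15], [35, 5], [60, 40], [80, 10], [45, 30], [90, 5]])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = approved, 0 = refused
feature_names = ["income", "existing_debt"]

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# A "global" explanation: the learned rules rendered as human-readable text.
print(export_text(model, feature_names=feature_names))

# A "local" explanation of one automated decision: the prediction for a
# single applicant and the tree nodes visited on the way to it.
applicant = np.array([[50, 8]])
print("decision:", model.predict(applicant)[0])
print("nodes on the decision path:", list(model.decision_path(applicant).indices))
```

Even in this toy case, the “explanation” is a representation of the model’s technical processing rather than a justification of the decision, which is precisely the gap between explanatory products and explanatory processes discussed above.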