CS Colloquium (BMAC)
 

ISTeC Distinguished Lecture, presented in conjunction with the Computer Science Department and the Electrical and Computer Engineering Department Seminar Series
Explainability and the Data Intensive University

Speaker: 
David Berry, Professor of Digital Humanities (Media and Film), School of Media, Film and Music, University of Sussex

When:
4:00 PM – 5:00 PM, Tuesday, March 26, 2019
 
Where: BSB A101
Abstract: In the UK, the Data Protection Act 2018 has come into force as the enabling legislation for the European GDPR (General Data Protection Regulation). It has been argued that this creates a new right in relation to automated algorithmic systems, requiring the "controller" of the algorithm to supply to the user (or "data subject") an explanation of how a decision was made – the social right to explanation. This right has come to be known as the problem of explainability in artificial intelligence research, or Explainable Artificial Intelligence (XAI). In this paper I want to explore the implications of this for the university, and particularly the concept of explainability it gives rise to. One of the key drivers for the attention given to explainability has been a wider public unease with the perceived bias of algorithms in everyday life, the rise of automated decision processes (ADP), and calls for accountability in these systems. Computation combined with artificial intelligence and machine learning has raised interesting questions about authorship, authenticity, post-human futures, creativity and AI-driven systems. Many of these debates foreground the question of the human, whether as post-human technologies or as challenges to the privileged status of humans as intelligent, thinking or creative beings. These implications are increasingly discussed in the media and in politics, particularly in relation to a future dominated by technologies that will have huge social consequences. This is reflected in an anxiety felt by those who fear the potential for bias to infiltrate machine decision-making systems once humans are removed from the equation. It is in this context that public disquiet has risen in relation to the perceived unfairness of these, often unaccountable, algorithmic systems. The discussion I wish to open in this paper is largely speculative. It seems to me that there are two issues worth considering. First, the GDPR might require universities themselves to be "explainable" in some sense and therefore subject to the same data protection regime as other algorithmic systems (perhaps as Explainable Universities – xU). This may mean they are required to provide descriptions of their internal processing (e.g. automated grading or plagiarism checking) under this "right to explanation". Second, the interpretation problem faced by algorithm programmers seems to me exactly the kind of interpretative question that casts light on the computerisation of the university. What exactly are universities for? How can we explain this to new generations or to the wider public? Perhaps explainability offers a critical site from which to reflect on these questions.

Bio: Dr. Berry researches the theoretical and medium-specific challenges of understanding digital and computational media, particularly algorithms, software and code. His work draws on digital humanities, critical theory, political economy, social theory, software studies, and the philosophy of technology. As Professor of Digital Humanities, he is particularly interested in how computation is being incorporated into arts, humanities, and social science practice. His new work examines the historical and philosophical genealogies of the notion of an "Idea of a University" and is funded by the British Academy.