Best Thesis Prize goes to undergraduate dissertation exploring climate disinformation on Facebook

Congratulations to Kajsa Lonrusten, a recent graduate of our Digital Culture BA, who has won the Best Thesis Prize for her dissertation on “The Circulation of Organised Climate Change Denial on Facebook”.
The dissertation drew on approaches from the digital methods and digital journalism modules she attended to explore the circulation of material associated with DeSmog Blog’s Climate Disinformation Database.
Kajsa says she plans to build on this work in her graduate studies:

“I am surprised and extremely thankful that my dissertation was given this prize! Although it took a lot of hard work and many late nights, I thoroughly enjoyed the work I did while writing the thesis and I learnt a lot from it. I took a risk doing something slightly different and using some very interesting methods, and, thanks to a lot of help from my supervisor, it worked out in the end and it made the process a lot more fun. I will take this experience and knowledge with me into my next chapter as I start my Master’s in Journalism, where I hope to put them to good use when the time comes for me to write a Masters thesis.”

Investigating infodemic – researchers, students and journalists work together to explore the online circulation of COVID-19 misinformation and conspiracies

Over the past year researchers and students at the Department of Digital Humanities, King’s College London have contributed to a series of collaborative digital investigations into the online circulation of COVID-19 misinformation and conspiracies.

As part of a pilot on “engaged research led teaching” at King’s College London, undergraduate and graduate students have contributed to projects developed with journalists, media organisations and non-governmental organisations around the world.

These were undertaken in association with the Arts and Humanities Research Council funded project Infodemic: Combatting COVID-19 Conspiracy Theories, which explores how digital methods grounded in social and cultural research may facilitate understanding of what the WHO has described as an “infodemic” of misleading, fabricated, conspiratorial and other problematic material related to the COVID-19 pandemic.

These projects contributed to a number of stories, investigations and publications, including:

Continue reading “Investigating infodemic – researchers, students and journalists work together to explore the online circulation of COVID-19 misinformation and conspiracies”

Teaching available at KCL Digital Humanities – digital skills needed

The Department of Digital Humanities at King’s College London is looking for hourly paid lecturers to teach introductory computational skills in the weekly seminars of our digital culture modules, in person at our Strand campus. We also have some modules open for which a conceptual knowledge of machine learning and data handling is sufficient.

For the modules below, we welcome colleagues with experience in the following areas: Python 3, HTML5, CSS3 and JavaScript, and the ability to teach coding at beginner and intermediate levels.

  • 7AAVDH02 Coding and the Humanities – 2 seminar groups open – coding skills required
  • 7AAVDM14 The Cultural Web: Building a Humanities Website – 3 seminar groups open – coding skills required
  • 7AAVMWEB Web Technologies – 6 seminar groups open – coding skills required

For the modules below, insight into the computational concepts of machine learning and/or data handling is sufficient.

  • 7AAVDC18 Artificial Intelligence and Society – 4 seminar groups
  • 7AAVDC20 Health Datafication – 3 seminar groups

The teaching will be in the form of weekly recurring seminars, in which students put into practice the material from the lectures given beforehand by DDH colleagues; these colleagues will also help you plan the seminar teaching.

  • Term 1 teaching dates: weekly in-person seminars begin in the week starting Monday, 27 September 2021 and run to Friday, 10 December 2021, with a reading week in between. Seminars accompany the lectures given each week by the module leader and take place weekly.
  • Payment: besides teaching the seminar(s), we also factor in preparation (2 hours per seminar) and office hours/emails (1 hour per seminar).
  • Timetabling is done centrally, so we cannot guarantee exactly which day the seminars will fall on, other than that they will be on weekdays between 8am and 7pm.

If you are interested, please contact mercedes DOT bunz AT kcl.ac.uk with a short CV outlining your expertise in the relevant modules. You will need the right to work in the UK. Please also feel free to forward this blog post if anyone comes to mind.


New article: “Journalism aggregators: an analysis of Longform.org”

An article on “Journalism aggregators: an analysis of Longform.org” co-authored by Marco Braghieri, Tobias Blanke and Jonathan Gray has just been published in Journalism Research. The article is open access and available in both English and German. Here’s the abstract:

What is the role and significance of digital long-form content aggregators in contemporary journalism? This article contends that they are an important, emerging object of study in journalism research and provides a digital methods analysis and theoretical engagement with Longform.org, one of the most prominent long-form content aggregators on the web. We propose that Longform.org can be understood as leveraging the datafication of news content in order to valorize the long tail of archived material. Drawing on scraped data from the archive, we undertake an in-depth analysis into the practices of long-form aggregators. While Longform.org exhibits a degree of curatorial diversity, legacy news media outlets tend to be featured more frequently. Accessibility of news media archives is one of the most relevant factors for being featured by Longform.org. Our analysis demonstrates the relevant role of smaller digital-only publications, which provide a unique mix of sources. Through a network analysis of scraped tags we explore the composition of themes, including personal, world-political, celebrity, technological and cultural concerns. The data and curatorial practices of such long-form aggregators may be understood as an area of contemporary news work that conditions which past perspectives are more readily available, experienceable and programmable on the web.

The article draws on Marco Braghieri’s research on long-form journalism and archives, about which you can read more in Yesterday’s News: The Future of Long-Form Journalism and Archives, recently published by Peter Lang.
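For readers curious what the tag-based network analysis mentioned in the abstract can look like in practice, here is a minimal, illustrative sketch of a tag co-occurrence network built with Python and the networkx library. The article-to-tag data below is an invented placeholder, not the dataset scraped from Longform.org for the paper.

```python
# Minimal sketch of a tag co-occurrence network, loosely in the spirit of the
# analysis described in the abstract. The data below is invented for
# illustration only.
import itertools
import networkx as nx

# Hypothetical mapping of articles to their tags (placeholder data).
articles = {
    "article-1": ["technology", "surveillance", "politics"],
    "article-2": ["celebrity", "culture"],
    "article-3": ["technology", "culture", "politics"],
}

G = nx.Graph()
for tags in articles.values():
    # Connect every pair of tags appearing on the same article,
    # incrementing an edge weight for repeated co-occurrences.
    for a, b in itertools.combinations(sorted(set(tags)), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# Rank tags by weighted degree as a rough proxy for thematic centrality.
centrality = sorted(G.degree(weight="weight"), key=lambda x: x[1], reverse=True)
print(centrality)
```

On a real corpus, a graph like this can then be handed to community-detection or visualisation tools to explore how themes cluster, which is the kind of composition of themes the article describes.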

Jobs: Lecturer in Digital Innovation Management Education

Our Department of Digital Humanities is seeking to appoint two Lecturers (Academic Education Pathway) in Digital Innovation Management to contribute to its teaching in this and related areas. The post-holder will have a sound knowledge of the key concepts, theories, debates and challenges in digital innovation management, information systems, algorithmic management and digital transformation, with a PhD in a relevant area.

We are looking for a range of expertise to support our fast-expanding programmes; this may include work experience in the sector of study. We are particularly interested in the following areas to support our teaching: any aspect of digital innovation; information systems; digital platforms; algorithmic management; and critical studies of technology. Furthermore, candidates with the potential for developing collaborations and knowledge exchange activities with commercial, industrial or third sector bodies are especially welcome.

More here: https://jobs.kcl.ac.uk/gb/en/job/029085/Lecturer-in-Digital-Innovation-Management-Education

Salary: £38,304 – £45,026 per annum, including London Weighting Allowance
Closing date: 17 August 2021
Interviews: Week starting 30 August
Starting date: 15 September 2021 (if possible)

These posts will be offered on a fixed-term contract for one year and are full-time (35 hours per week, 100% full-time equivalent). Continue reading “Jobs: Lecturer in Digital Innovation Management Education”

Creative AI: Models of Artistic Practice

How do artists use the backend of machine learning in their artworks? How is Machine Learning transforming the making of meaning? These were the questions addressed in the Creative AI Lab’s presentation during the National College of Art and Design, Dublin (NCAD) Digital Culture Webinar Series, hosted by Elaine Hoey (New Media Artist) and Dr. Rachel O’Dwyer (Lecturer in Digital Cultures, NCAD).

Titled ‘Inquiring the Backend of Machine Learning Artworks: Making Meaning by Calculation’, the Creative AI Lab’s Eva Jäger (Associate Curator of Serpentine Galleries Arts Technologies) and Mercedes Bunz (Senior Lecturer in Digital Society, KCL) presented their research (video here) into how artists use the backend of Machine Learning (ML) to develop their work. Their presentation was followed by one from Zac Ionnidis, who spoke about Forensic Architecture’s use of AI to automate parts of its human rights monitoring research.

The research of the Creative AI Lab, a collaboration between the Department of Digital Humanities, KCL and the Serpentine Gallery, surfaced particular ways in which artists make use of machine learning in order to create models for engagement with these new technologies. Mercedes Bunz outlined some of the inspirations for the research, citing Zylinska’s book ‘AI Art: Machine Visions and Warped Dreams’ and Søren Pold’s ‘The Metainterface’. Eva Jäger then explained why it was important to research the artistic usage of the back-end of ML. She elaborated on what constituted the back-end and said that this work often leads to an “alternate output which includes building or developing software and publishing/sharing research”.

[Diagram showing the back end and front end of an AI artwork]

Mercedes Bunz continued the presentation by arguing against the idea that humans are the only ones that can create meaning, referring to work by Stuart Hall, who developed a more open approach when researching the encoding and decoding of meaning in television programmes. Continue reading “Creative AI: Models of Artistic Practice”

Welcome to new Lecturers at the Department of Digital Humanities 🎊

A very warm welcome to all of our new members of staff at the Department of Digital Humanities! Joining us ahead of the next academic year we have:

  • Andrea Ballatore, Lecturer in Social and Cultural Informatics
  • Barbara McGillivray, Lecturer in Digital Humanities and Cultural Computation
  • Daniel Chavez Heras, Lecturer in Humanistic and Social Computing Education
  • Laura Gibson, Lecturer in Digital Content Management Education
  • Mike Duggan, Lecturer in Digital Culture, Society and Economy Education
  • Niki Cheong, Lecturer in Digital Culture and Society

Stuart Dunn, Reader and Head of the Department of Digital Humanities, comments:

“Our Department represents a broad range of digitally-driven teaching and research into the human record across numerous fields and disciplines; and also service, which draws on our traditions of excellence in the Digital Humanities to help society deal with the many challenges of the contemporary digital world. It is therefore my pleasure to welcome our new colleagues to DDH, who will help us build in all these areas, and consolidate and expand our strengths in the future”.

You can find out more about each of them in their bios below.

Continue reading “Welcome to new Lecturers at the Department of Digital Humanities 🎊”

Visiting Professor David Berry on Explainability and Interpretability as Critical Infrastructure Practice

David Berry is currently visiting the Department of Digital Humanities as Visiting Professor of Critical Theory and Digital Humanities. The following post from David introduces some of his current research on explainability and interpretability. He is giving a talk about this work at the Infrastructural Interventions workshop on Tuesday 22nd June.

I am very excited to be a Visiting Professor of Critical Theory and Digital Humanities at KCL in the Department of Digital Humanities in 2021, as KCL not only has a great research culture, but also really exciting projects which I have been learning about. Whilst I am at King’s, I have been working on a new project around the concept of explainability called “Explanatory Publics: Explainability, Automation and Critique.” Explainability is the idea that artificial intelligence systems should be able to generate a sufficient explanation of how an automated decision was made, representing or explaining, in some sense, its technical processing. With concerns over biases in algorithms, there is an idea that self-explanation by a machine learning system would engender trust in these systems. [1]

Trust is a fundamental basis of any system, but it has to be stabilised through the generation of norms and practices that create justifications for the way things are. This is required for automated decision making, in part, because computation is increasingly a central aspect of a nation’s economy, real or imaginary. I argue that this is important under conditions of computational capitalism because when we call for an explanation, we might be able to better understand the contradictions within this historically specific form of computation that emerges in Late Capitalism. I have been exploring how these contradictions are continually suppressed in computational societies and generate systemic problems borne out of the need for the political economy of software to be obscured so that its functions and the mechanisms of value generation are hidden from public knowledge.

I argue that explainability offers a novel and critical means of intervention into, and transformation of, digital technology. By explanatory publics, I am gesturing to the need for frameworks of knowledge, whether social, political, technical, economic or cultural, to be justified through a social right to explanation. Explanations are assumed to tell us how things work and thereby give us the power to change our environment in order to meet our own ends. Indeed, for a polity to be considered democratic, I argue that it must ensure that its citizens are able to develop a capacity for explanatory thought in relation to the digital (in addition to other capacities), and to question ideas, practices and institutions in a digital society. So this also includes the corollary that citizens can demand explanatory accounts from technologies, institutions and artificial intelligences in the digital technologies they rely on.

The notion of explainability offers a critical point of intervention into these debates. By addressing the problem of creating systems that can explain their automated decision-making processes, the concept of justification becomes paramount. However, many current discussions of explainability tend to be chiefly interested in creating an explanatory product, whereas I argue that an understanding of the explanatory process will have greater impacts for algorithmic legitimacy and democratic politics.

[1] Within the field of AI there is now a growing awareness of this problem of opaque systems and a sub-discipline of “explainable AI” (XAI) has emerged and begun to address these very complex issues – although mainly through a technical approach.
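For illustration, the sketch below shows what a typical technical XAI “explanatory product” of the kind the footnote mentions can look like: post-hoc feature attributions attached to a trained model. This is an assumed, illustrative example using scikit-learn’s permutation importance on a standard toy dataset; it is not part of the project described above and does not represent its methods.

```python
# Illustrative sketch only: one common technical XAI approach is to attach
# post-hoc feature attributions to a model's decisions. This uses permutation
# importance on a toy dataset and is not drawn from the project above.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# "Explain" the model by measuring how much shuffling each feature degrades
# its accuracy on held-out data: an explanatory product rather than a process.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda x: x[1], reverse=True)[:5]:
    print(f"{name}: {score:.3f}")
```

An output like this tells us which inputs mattered most to the model, but it says little about the explanatory process or the justifications behind a decision, which is precisely the distinction the post draws between explanatory products and explanatory publics.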

New ERC Project Exploring the Intersection between Surveillance and Morality

‘Smart cities’, ‘employee assistance programmes’, the pervasive language of ‘security’ – the implementation of surveillance technologies has consistently been framed in relation to moral ideas. This ambiguity has been observed by surveillance scholars for many years, with David Lyon once describing the alternating ends of ‘care’ and ‘control’ which these technologies serve. Yet the study of surveillance has predominantly concerned itself with the latter. This ERC-funded project, recently initiated through the Department of Digital Humanities, opens up a different range of questions. If morality is the medium through which surveillance technologies have so often been popularly legitimized, then what if there is a history of the phenomenon yet to be written, in which surveillance proliferates not as a lever in power relations but through these accepted notions of ‘the good’?

The project is anchored in the discipline of anthropology, involving four ethnographies that explore everyday relationships to digital monitoring. This shapes its epistemological approach in two important ways. The first is a conceptual collectivism. Although many of the ethnographic encounters will be with individuals, the focus is not on individuals per se, but on the role that digital monitoring plays in the mediation of their wider relationships, both intimate but also potentially very abstract. The second is a sensitivity to cultural difference. With two studies in each country, the project erects a binary contrast between Germany and Britain, as places with visibly distinct histories and attitudes towards surveillance. Aside from modest opposition to the introduction of ID cards, the response to the intensification of surveillance in Britain has been placid, even sympathetic. By contrast, Germany has witnessed widespread civic mobilizations against monitoring over the past forty years: from the census protests of the 70s and 80s, to more recent protests against the retention of data by mobile phone companies, or the boycotting of image collection by Google StreetView. Through this comparison the project aims to problematize this question of the good further. What kinds of collective experiences are being drawn on when people support or oppose surveillance?

Overall, the study pivots around the moral ambivalence of surveillance that members of highly technologized societies increasingly find themselves faced with. If surveillance enables forms of care – for the body, the friend, the family, the nation – cannot its insalubrious applications simply be overlooked? It is this moral tangle that, we offer, consistently inhibits restrictions on the growth of these technologies. By studying, through long-term ethnographic fieldwork, how and why people themselves use monitoring technologies voluntarily, we aim to establish greater clarity on those modes of monitoring that support human health, happiness and dignity – and those that are inimical to them.


Surveillance and Moral Community: Anthropologies of Monitoring in Germany and Britain, ERC Project No. 947867, is led by Dr Vita Peacock.


Day of Digital Humanities 2021

This is a collection of blog posts that members of the department contributed to the annual Day of Digital Humanities, a global online event which this year was themed around ‘multilingual digital humanities’. These were originally published on the Language Acts & Worldmaking website at:

https://languageacts.org/blog/day-of-digital-humanities-part-1/ and https://languageacts.org/blog/day-of-digital-humanities-part-2/


Introduction by Paul Spence

This year’s Day of Digital Humanities event (Day of DH 2021) presents another opportunity to engage with the wider digital humanities community through a series of interactions channelled through Twitter and Instagram using the hashtag #dayofdh2021. The focus this year is on ‘multilingual DH’, a topic which has been at the centre of my research over the last four or five years, and which has thankfully started to gain traction through initiatives such as the multilingual Open Methods initiative, multi-language versions of the Programming Historian and multilingual DH. As Quinn Dombrowski remarked recently, “Multilingual DH is suddenly blooming”.

I asked colleagues from the Department of Digital Humanities at King’s College London to describe research they are doing on multilingual digital studies. What follows are a few examples of their work.

Continue reading “Day of Digital Humanities 2021”