Speaker series with UNC Chapel Hill on “Digital Aesthetics: Critique, Creativity and Selfhood in Computational Culture”

As part of an ongoing collaboration between King’s College London and UNC Chapel Hill, Doug Stark and Carly Schnitzler are convening a series of talks with researchers at the Department of Digital Humanities.

The series will start with a workshop with Conor McKeown, followed by talks with Feng Zhu, Mercedes Bunz and Zeena Feldman. If you’re interested in joining Conor’s workshop you can RSVP below, and you can contact Doug and Carly if you’re interested in joining the talks.

  • Workshop: Dr. Conor McKeown, Tuesday, October 5, 5pm BST (“Virtual World-Building in Unity (C#)”). RSVP here!
  • Lecture: Dr. Feng Zhu, Wednesday, October 20, 5.30pm BST
  • Lecture: Dr. Mercedes Bunz, Friday, November 12, 5pm GMT (“Creative AI as a Critical Technical Practice: Inquiring the Backend of Machine Learning Artworks”)
  • Lecture: Dr. Zeena Feldman, Thursday, December 2, 3pm GMT (“Quitting Digital Culture: Rethinking Agency in a Beyond-Choice Ontology”)

Joanna Zylinska joins Department of Digital Humanities as Professor of Media Philosophy and Critical Digital Practice

We’re delighted to announce that Joanna Zylinska has been appointed as Professor of Media Philosophy and Critical Digital Practice in the Department of Digital Humanities.

Professor Zylinska is a writer, lecturer, artist and curator, working in the areas of digital technologies and new media, ethics, photography and art. Prior to joining King’s in 2021, she worked for many years at Goldsmiths, University of London, including as Co-Head of its Department of Media, Communications and Cultural Studies. She has held visiting positions as Guest Professor at Shandong University in China, Winton Chair Visiting Scholar at the University of Minnesota, US, and Beaverbrook Visiting Scholar at McGill University in Canada.

Zylinska is the author of eight books – most recently, AI Art: Machine Visions and Warped Dreams (Open Humanities Press, 2020, open access), The End of Man: A Feminist Counterapocalypse (University of Minnesota Press, 2018, open access) and Nonhuman Photography (MIT Press, 2017). Her work has been translated into Chinese, French, German, Norwegian, Polish, Portuguese, Russian, Spanish and Turkish. Zylinska combines her philosophical writings with image-based art practice and curatorial work. In 2013 she was Artistic Director of Transitio_MX05 ‘Biomediations’: Festival of New Media Art and Video in Mexico City.

Professor Marion Thain, Executive Dean of the Faculty of Arts and Humanities, comments:

The King’s powerhouse in Digital Humanities is going from strength to strength, and we are delighted to welcome Joanna Zylinska (whose expertise spans digital technologies, new media, art, ethics and photography) as Professor of Media Philosophy and Critical Digital Practice. Her appointment cements DDH as a world-leading centre for the study of the contemporary Digital Humanities.
Professor Stuart Dunn, Head of the Department of Digital Humanities, says:

I am delighted to welcome Joanna to DDH. She brings a distinguished record of scholarship and academic leadership in digital arts, AI and new media which will expand and enrich the Department across our research, our teaching programmes and our service to society and London. We are thrilled that she has joined us. 

Best Thesis Prize goes to undergraduate dissertation exploring climate disinformation on Facebook

Congratulations to Kajsa Lonrusten, a recent graduate from our Digital Culture BA, who recently won Best Thesis Prize for her dissertation on “The Circulation of Organised Climate Change Denial on Facebook”.
The dissertation drew on approaches from the digital methods and digital journalism modules she attended in order to explore the circulation of material associated with DeSmog Blog’s Climate Disinformation Database.
Kajsa says she plans to build on this work in her graduate studies:

“I am surprised and extremely thankful that my dissertation was given this prize! Although it took a lot of hard work and many late nights, I thoroughly enjoyed the work I did while writing the thesis and I learnt a lot from it. I took a risk doing something slightly different and using some very interesting methods, and, thanks to a lot of help from my supervisor, it worked out in the end and it made the process a lot more fun. I will take this experience and knowledge with me into my next chapter as I start my Master’s in Journalism, where I hope to put them to good use when the time comes for me to write a Master’s thesis.”
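The kind of digital-methods analysis such a dissertation involves, tracing how material from a list of flagged sources circulates on a platform, can be sketched in a few lines of Python. All domains and URLs below are invented for illustration; this is not the dissertation’s actual method or data:

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical list of domains drawn from a climate-disinformation
# database (illustrative entries only, not the actual DeSmog data).
flagged_domains = {"example-denial-blog.org", "climate-misinfo.example.com"}

# Hypothetical sample of URLs shared in public posts.
shared_urls = [
    "https://example-denial-blog.org/post/1",
    "https://news.example.org/science/report",
    "http://climate-misinfo.example.com/article?id=7",
    "https://example-denial-blog.org/post/2",
]

def domain(url: str) -> str:
    """Extract the host part of a URL, lowercased for matching."""
    return urlparse(url).netloc.lower()

# Count how often each flagged domain appears among the shared links.
counts = Counter(d for d in map(domain, shared_urls) if d in flagged_domains)
print(counts.most_common())
```

A real study would instead draw shared URLs from a platform data source and match them against the actual database of listed sites; the counting logic, however, stays this simple.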

Investigating infodemic – researchers, students and journalists work together to explore the online circulation of COVID-19 misinformation and conspiracies

Over the past year researchers and students at the Department of Digital Humanities, King’s College London have contributed to a series of collaborative digital investigations into the online circulation of COVID-19 misinformation and conspiracies.

As part of a pilot on “engaged research led teaching” at King’s College London, undergraduate and graduate students have contributed to projects developed with journalists, media organisations and non-governmental organisations around the world.

These were undertaken in association with the Arts and Humanities Research Council funded project Infodemic: Combatting COVID-19 Conspiracy Theories, which explores how digital methods grounded in social and cultural research may facilitate understanding of what the WHO has described as an “infodemic” of misleading, fabricated, conspiratorial and other problematic material related to the COVID-19 pandemic.

These projects contributed to a number of stories, investigations and publications, including:

Continue reading “Investigating infodemic – researchers, students and journalists work together to explore the online circulation of COVID-19 misinformation and conspiracies”

Teaching available at KCL Digital Humanities – digital skills needed

The Department of Digital Humanities at King’s College London is looking for hourly paid lecturers to teach introductory computational skills in the weekly seminars of our digital culture modules, in person at our Strand campus. In addition to computational skills, we also have some modules open for which conceptual knowledge of machine learning and data handling is sufficient.

For the modules below, we welcome colleagues with experience in the following areas: Python 3, HTML5, CSS3 and JavaScript, and the ability to teach coding at beginner and intermediate levels.

  • 7AAVDH02 Coding and the Humanities – 2 seminar groups open – coding skills required
  • 7AAVDM14 The Cultural Web: Building a Humanities Website – 3 seminar groups open – coding skills required
  • 7AAVMWEB Web Technologies – 6 seminar groups open – coding skills required

For the modules below, insight into the computational concepts of machine learning and/or data handling is sufficient.

  • 7AAVDC18 Artificial Intelligence and Society – 4 seminar groups
  • 7AAVDC20 Health Datafication – 3 seminar groups

The teaching takes the form of weekly seminars in which students put into practice the material from lectures given earlier by DDH colleagues; those colleagues will also help you plan the seminar teaching.

  • Term 1 dates: weekly in-person seminar teaching begins in the week starting Monday, 27 September 2021 and runs to Friday, 10 December 2021, with a reading week in between. Seminars accompany the lectures given each week by the module leader.
  • Payment: besides teaching the seminar(s), we also factor in preparation (2h) per seminar and office hours/emails (1h) per seminar.

  • Timetable scheduling is done centrally, so we cannot guarantee which day seminars will fall on, other than that they will be on weekdays between 8am and 7pm.

Please contact mercedes DOT bunz AT kcl.ac.uk if you are interested, attaching a short CV outlining your expertise in the relevant modules. You will need the right to work in the UK. Also feel free to forward this blog post if anyone comes to mind.



New article: “Journalism aggregators: an analysis of Longform.org”

An article on “Journalism aggregators: an analysis of Longform.org” co-authored by Marco Braghieri, Tobias Blanke and Jonathan Gray has just been published in Journalism Research. The article is open access and available in both English and German. Here’s the abstract:

What is the role and significance of digital long-form content aggregators in contemporary journalism? This article contends that they are an important, emerging object of study in journalism research and provides a digital methods analysis and theoretical engagement with Longform.org, one of the most prominent long-form content aggregators on the web. We propose that Longform.org can be understood as leveraging the datafication of news content in order to valorize the long tail of archived material. Drawing on scraped data from the archive, we undertake an in-depth analysis into the practices of long-form aggregators. While Longform.org exhibits a degree of curatorial diversity, legacy news media outlets tend to be featured more frequently. Accessibility of news media archives is one of the most relevant factors for being featured by Longform.org. Our analysis demonstrates the relevant role of smaller digital-only publications, which provide a unique mix of sources. Through a network analysis of scraped tags we explore the composition of themes, including personal, world-political, celebrity, technological and cultural concerns. The data and curatorial practices of such long-form aggregators may be understood as an area of contemporary news work that conditions which past perspectives are more readily available, experienceable and programmable on the web.
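The network analysis of scraped tags mentioned in the abstract can be illustrated with a minimal co-occurrence sketch: every pair of tags appearing on the same article becomes a weighted edge, and the heaviest edges point to thematic clusters. The tags below are invented; this is not the article’s actual data or code:

```python
from collections import Counter
from itertools import combinations

# Hypothetical tag lists, standing in for tags scraped from
# articles featured on an aggregator (illustrative values only).
article_tags = [
    ["politics", "world", "profiles"],
    ["technology", "culture"],
    ["politics", "profiles"],
    ["technology", "politics", "culture"],
]

# Build a weighted co-occurrence network: each edge links two tags
# that appear on the same article; the weight counts co-appearances.
edges = Counter()
for tags in article_tags:
    for a, b in combinations(sorted(set(tags)), 2):
        edges[(a, b)] += 1

# The heaviest edges indicate thematic clusters.
for (a, b), weight in edges.most_common(3):
    print(f"{a} -- {b}: {weight}")
```

In a published analysis, a weighted edge list like this would typically be handed to standard network-analysis tooling for clustering and visualisation.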

The article draws on Marco Braghieri’s research on long-form journalism and archives, about which you can read more in Yesterday’s News: The Future of Long-form Journalism and Archives, recently published by Peter Lang.

Jobs: Lecturer in Digital Innovation Management Education

Our Department of Digital Humanities is seeking to appoint two Lecturers (Academic Education Pathway) in Digital Innovation Management to contribute to its teaching in this and related areas. The post-holders will have a sound knowledge of the key concepts, theories, debates and challenges in digital innovation management, information systems, algorithmic management and digital transformation, with a PhD in a relevant area.

We are looking for a range of expertise to support our fast-expanding programmes, which may include work experience in the sector of study. We are particularly interested in the following areas to support our teaching: any aspect of digital innovation; information systems; digital platforms; algorithmic management; and critical studies of technology. Furthermore, candidates with the potential for developing collaborations and knowledge exchange activities with commercial, industrial or third sector bodies are especially welcome.

More here: https://jobs.kcl.ac.uk/gb/en/job/029085/Lecturer-in-Digital-Innovation-Management-Education

Salary: £38,304 – £45,026 per annum, including London Weighting Allowance
Closing date: 17-Aug-2021
Interviews: Week starting 30 August
Starting date: 15 September 2021 (if possible)

These posts will be offered on a fixed-term contract for one year and are full-time posts (35 hours per week, 100% full-time equivalent). Continue reading “Jobs: Lecturer in Digital Innovation Management Education”

Creative AI: Models of Artistic Practice

How do artists use the backend of machine learning in their artworks? How is machine learning transforming the making of meaning? These were the questions raised in the Creative AI Lab’s presentation during the National College of Art and Design, Dublin (NCAD) Digital Culture Webinar Series, hosted by Elaine Hoey (New Media Artist) and Dr. Rachel O’Dwyer (Lecturer in Digital Cultures, NCAD).

Titled ‘Inquiring the Backend of Machine Learning Artworks: Making Meaning by Calculation’, the Creative AI Lab’s Eva Jäger (Associate Curator of Serpentine Galleries Arts Technologies) and Mercedes Bunz (Senior Lecturer in Digital Society, KCL) presented their research (video here) into how artists use the backend of Machine Learning (ML) to develop their work. Their presentation was followed by one from Zac Ionnidis, who spoke about Forensic Architecture’s use of AI to automate parts of human rights monitoring research.

The research of the Creative AI Lab, a collaboration of the Department of Digital Humanities, KCL and the Serpentine Gallery, surfaced particular ways in which artists make use of machine learning in order to create models for engagement with these new technologies. Mercedes Bunz outlined some of the inspirations for the research, citing Zylinska’s book ‘AI Art: Machine Visions and Warped Dreams’ and Søren Pold’s ‘The Metainterface’. Eva Jäger then explained why it was important to research the artistic usage of the back-end of ML. She elaborated on what constituted the back-end and said that this work often leads to an “alternate output which includes building or developing software and publishing/sharing research”.

[Image: diagram showing the back end and front end of an AI artwork]

Mercedes Bunz carried the presentation forward by arguing against the idea that humans are the only ones who can create meaning, referring to work by Stuart Hall, who developed a more open approach to researching the encoding and decoding of meaning in television programmes. Continue reading “Creative AI: Models of Artistic Practice”

Welcome to new Lecturers at the Department of Digital Humanities 🎊

A very warm welcome to all of our new members of staff at the Department of Digital Humanities! Joining us ahead of the next academic year we have:

  • Andrea Ballatore, Lecturer in Social and Cultural Informatics
  • Barbara McGillivray, Lecturer in Digital Humanities and Cultural Computation
  • Daniel Chavez Heras, Lecturer in Humanistic and Social Computing Education
  • Laura Gibson, Lecturer in Digital Content Management Education
  • Mike Duggan, Lecturer in Digital Culture, Society and Economy Education
  • Niki Cheong, Lecturer in Digital Culture and Society

Stuart Dunn, Reader and Head of the Department of Digital Humanities, comments:

“Our Department represents a broad range of digitally-driven teaching and research into the human record across numerous fields and disciplines; and also service, which draws on our traditions of excellence in the Digital Humanities to help society deal with the many challenges of the contemporary digital world. It is therefore my pleasure to welcome our new colleagues to DDH, who will help us build in all these areas, and consolidate and expand our strengths in the future”.

You can find out more about each of them in their bios below.

Continue reading “Welcome to new Lecturers at the Department of Digital Humanities 🎊”

Visiting Professor David Berry on Explainability and Interpretability as Critical Infrastructure Practice

David Berry is currently visiting the Department of Digital Humanities as Visiting Professor of Critical Theory and Digital Humanities. The following post from David introduces some of his current research on explainability and interpretability. He is giving a talk about this work at the Infrastructural Interventions workshop on Tuesday 22nd June.

I am very excited to be a Visiting Professor of Critical Theory and Digital Humanities at KCL in the Department of Digital Humanities in 2021, as KCL not only has a great research culture, but also really exciting projects which I have been learning about. Whilst I am at King’s, I have been working on a new project around the concept of explainability, called “Explanatory Publics: Explainability, Automation and Critique.” Explainability is the idea that artificial intelligence systems should be able to generate a sufficient explanation of how an automated decision was made, representing or explaining, in some sense, its technical processing. With concerns over biases in algorithms, there is an idea that self-explanation by a machine learning system would engender trust in these systems. [1]

Trust is a fundamental basis of any system, but it has to be stabilised through the generation of norms and practices that create justifications for the way things are. This is required for automated decision making, in part, because computation is increasingly a central aspect of a nation’s economy, real or imaginary. I argue that this is important under conditions of computational capitalism because when we call for an explanation, we might be able to better understand the contradictions within this historically specific form of computation that emerges in Late Capitalism. I have been exploring how these contradictions are continually suppressed in computational societies and generate systemic problems borne out of the need for the political economy of software to be obscured so that its functions and the mechanisms of value generation are hidden from public knowledge.

I argue that explainability offers a novel and critical means of intervention into, and transformation of, digital technology. By explanatory publics, I am gesturing to the need for frameworks of knowledge, whether social, political, technical, economic or cultural, to be justified through a social right to explanation. Explanations are assumed to tell us how things work and thereby give us the power to change our environment in order to meet our own ends. Indeed, for a polity to be considered democratic, I argue that it must ensure that its citizens are able to develop a capacity for explanatory thought in relation to the digital (in addition to other capacities), and able to question ideas, practices and institutions in a digital society. So this also includes the corollary that citizens can demand explanatory accounts from technologies, institutions and artificial intelligences in the digital technologies they rely on.

The notion of explainability offers a critical point of intervention into these debates. By addressing the problem of creating systems that can explain their automated decision-making processes, the concept of justification becomes paramount. However, many current discussions of explainability tend to be chiefly interested in creating an explanatory product, whereas I argue that an understanding of the explanatory process will have greater impacts for algorithmic legitimacy and democratic politics.
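One minimal illustration of what an “explanatory product” can look like is additive feature attribution for a linear scoring model, where each feature’s contribution sums exactly to the score behind the automated decision. This is a generic, textbook-style sketch with invented weights and features, not Berry’s framework or any deployed system:

```python
# Hypothetical linear scoring model: weights and applicant features
# are invented for illustration.
weights = {"income": 0.4, "debt": -0.6, "tenure": 0.2}
applicant = {"income": 3.0, "debt": 2.0, "tenure": 5.0}

def score(features: dict, weights: dict) -> float:
    """Compute the automated decision's overall score."""
    return sum(weights[k] * features[k] for k in weights)

def explain(features: dict, weights: dict) -> dict:
    """Attribute the score to each feature: weight * value per feature."""
    return {k: weights[k] * features[k] for k in weights}

s = score(applicant, weights)
contributions = explain(applicant, weights)

# The attributions sum exactly to the score: a simple, auditable
# explanatory product for this class of model.
assert abs(s - sum(contributions.values())) < 1e-9
print(contributions)
```

As the discussion above suggests, such a product is only one part of the picture; the explanatory process around it is what bears on legitimacy.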

[1] Within the field of AI there is now a growing awareness of this problem of opaque systems and a sub-discipline of “explainable AI” (XAI) has emerged and begun to address these very complex issues – although mainly through a technical approach.