New article: ‘Risk consciousness and public perceptions of COVID-19 vaccine passports’

A new article on how perceptions of risk (Beck, Giddens) shape public attitudes towards vaccination passports, authored by DDH professor Btihaj Ajana with Elena Engstler, Anas Ismail and Marina Kousta.

Link to article: https://journals.sagepub.com/doi/full/10.1177/05390184231182056

Abstract:

In response to the global outbreak of COVID-19 in early 2020, many countries around the world have rushed to develop and implement various mechanisms, including vaccination passports, to contain the spread of the virus and manage its significant impact on health and society. COVID-19 passports have been promoted as a way of speeding society’s return to ‘normal’ life while protecting public health and safety. These passports, however, are not without controversy. Various concerns have been raised with regard to their social and ethical implications. Framing the discussion within the ‘risk society’ thesis and drawing on an interview-based study with members of the UK public as well as the relevant literature, this article examines perceptions of COVID-19 vaccine passports. The findings of the study indicate that participants’ attitudes toward vaccine passports are primarily driven by factors relating to perceptions of risk. While some considered vaccine passports as a positive strategy to encourage vaccine uptake and facilitate travel and daily activities, others saw this mechanism as a coercive step that might alienate further those who are already vaccine hesitant. Issues of fairness, equity, discrimination, trust, and data security were major themes in participants’ narratives and their subjective assessment of vaccine passports.

New Research Project: Art x Public AI

Art x Public AI is a new research project by the Creative AI Lab, a collaboration between the Serpentine (a public arts organisation in London) and the Department of Digital Humanities, KCL. The lab focuses on developing research and prototypes that further artistic experimentation with AI. Our aim is to expand the conversations around AI by offering a more nuanced vision and approach to the negotiation of its public value and interest. Through the lens of art-making, we are able to explore key questions with greater precision and specificity.

A data visualisation analysing the openness of two AI tools across different layers, such as energy, server, data, model, and software application

In our first workshop* on the topic of Art x Public AI last month, we sketched out an AI tech stack (see above) in order to get a more multi-dimensional view of the AI tools that artists are using across their technological infrastructures. In particular, we explored how the tech at each layer is governed and whether or not it is open source. This has been a useful preliminary exercise for shifting the conversation around foundation AI models from one purely about the IP of inputs (training data) and outputs (generated works) to a broader one about the interplay between the public and private governance and ownership models used at each layer of the stack. This shift is necessary to give us a better picture of how public interest can be positioned in relation to the influence and development of these technologies.

To this end, Alana Kushnir (Serpentine Legal Lab & Guest Work Agency) provided us with insights into the existing legal frameworks for each layer of the AI tech stack. This allowed us to identify conceptual gaps and speculate about new types of legal and supralegal approaches that might become necessary in the near future. This first attempt to create a method for examining AI tools has allowed us to articulate where new approaches need to be devised, for example for dataset governance, or for a model and its weights.

As an example, RadicalxChange’s work on data coalitions and escrow agents presents a new data governance paradigm that could sit within the ‘data’ layer of the stack. New frameworks like this emerge only when we closely interrogate the value of the data layer and understand it to be relational. This aligns well with Salome Viljoen’s work on relational data, which Photini Vrikki and Mercedes Bunz discuss as a shift from big to democratic data.
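As a purely illustrative aside (not part of the Lab’s actual tooling or data), the tech-stack exercise described above could be captured in a small data structure: each layer of an AI tech stack recorded alongside who governs it and whether it is open source. The layer names, example providers and the simple openness score below are assumptions made only for the sake of the sketch.

```python
# Illustrative sketch only: a minimal way to record, for each layer of an
# AI tech stack, who governs it and whether it is open source. The layer
# names and example entries are assumptions, not the Lab's actual data.
from dataclasses import dataclass
from typing import List


@dataclass
class StackLayer:
    name: str          # e.g. "energy", "server", "data", "model", "software application"
    provider: str      # who owns or operates this layer
    governance: str    # "public", "private", or "community"
    open_source: bool  # is the technology at this layer openly licensed?


def openness_score(stack: List[StackLayer]) -> float:
    """Fraction of layers in the stack that are open source."""
    return sum(layer.open_source for layer in stack) / len(stack)


# A hypothetical stack for one AI tool an artist might use.
example_stack = [
    StackLayer("energy", "grid operator", "public", False),
    StackLayer("server", "cloud provider", "private", False),
    StackLayer("data", "web-scraped corpus", "community", True),
    StackLayer("model", "model vendor", "private", False),
    StackLayer("software application", "open-source client", "community", True),
]

print(f"Open-source layers: {openness_score(example_stack):.0%}")
```

Even a toy representation like this makes it easy to see at a glance where a given tool’s stack is open and where it is privately held, which is the kind of multi-dimensional view the workshop set out to build.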

Why use artistic production to explore AI discourse?

We know from our work in the Creative AI Lab that artistic practices are exceptionally good at surfacing models for engagement with AI technologies – and not only engagement with the end user. More importantly, the production processes of the creative systems (including ML models) that artists build highlight concerns that resonate with those of the general public: What rights do you have over the models you build? Or over the outcomes of a model you use? What relationship do you have to the data you use to train it? Etc.

As we move closer to a world where generative image, audio, and language models can produce evocative content ad infinitum, artists will increasingly identify their ‘artwork’ with their own creative tech system, including their own AI model. So, for artists working with AI, the capacity for creative agency will be heavily correlated with the ability to manipulate, govern and verify their machine learning models. And as it happens, this negotiation will also be central to the way in which AI can become a truly public societal infrastructure.

We will be posting updates as we advance our research. You can also follow the work of Future Art Ecosystems by subscribing to their newsletter here.

Eva, Mercedes & the Creative AI Lab

*Thanks to Eva Jäger, Victoria Ivanova and Kay Watson (Serpentine), Alana Kushnir (Serpentine Legal Lab & Guest Work Agency), Reema Selhi (DACS), Oliver Smith (dmstfctn), and Mercedes Bunz, Daniel Chavez Heras, Alasdair Milne (all DDH) as well as Caroline Sinders.

Professor Stuart Dunn’s Inaugural Lecture at King’s College London

On 20th June 2023, Stuart Dunn of the Department of Digital Humanities at King’s College London delivered his Professorial Inaugural Lecture, The Spatial Humanities: A Challenge to the All-Knowing Map, which explored:

What are the Spatial Humanities, and why does King’s have a Professor dedicated to them?

In 1946 Jorge Luis Borges published a short story about a fictional kingdom fixated on perfecting the Art of Cartography. Its people construct a map so exact that it covers the whole expanse of the kingdom. But the map is abandoned by later generations and left to decay, until all that remains are its tattered ruins, inhabited only by animals and beggars.

Professor Dunn examines the present-day successors of Borges’s all-encompassing map: the platforms through which we navigate and wayfind – Google Maps, OpenStreetMap, Apple Maps and so on – which, metaphorically, cover the world’s entire surface.

Framed partly by the history of ideas, partly by cartography, and partly by digital place-making, Professor Dunn’s approach is situated at the crossroads of disciplines that make up the Spatial Humanities. Through a linked discussion of early antiquarian place-writing, the emergence of Global Positioning System (GPS) technology, and what the geographer Doreen Massey called “space-time compression”, he explores the origins of our motivation to “know” the entire world through mapping.

He also discusses how this has led to contemporary place-making becoming tattered through corporatisation and commercialisation. How can the Spatial Humanities help us fix our place, both in the sense of locating where we are and of repairing our relationship with it?

A full transcript and recording of the lecture can be found here.

Dr Kate Devlin leads £5m UKRI research project to explore responsible and trustworthy AI

King’s College London has been awarded £5m in funding from UK Research and Innovation (UKRI) to support a collaborative project led by Dr Kate Devlin from the Department of Digital Humanities and involving Dr Caitlin Bentley and Professor Sana Khareghani (Department of Informatics), and Professor Prokar Dasgupta (Peter Gorer Department of Immunobiology and the Department of Surgical & Interventional Engineering).

The grant will fund research that helps us understand what responsible and trustworthy AI is, how to develop it, how to build it into existing systems, and what impacts it will have on society:

This is a timely investment, bringing together a world-leading, diverse and multidisciplinary team from all four nations of the UK to work on cutting-edge issues. It is particularly exciting to have the King’s strand of the project based in Arts and Humanities, where the College has recently invested in the Digital Futures Institute, exploring how we can live well with technology. This is truly cross-cutting research on responsible AI with a human-centred approach at the very heart of it.

Dr Kate Devlin, who is leading King’s involvement in the UKRI Responsible Artificial Intelligence UK (RAI UK) programme

High-dimensional cinema • 6 July 2023

Join us for this panel discussion to learn how Artificial Intelligence and related technologies are reshaping the production and understanding of audiovisual culture.

6 July 2023, 6.00–7.30 pm

King’s College London, King’s Building, Nash Lecture Theatre (K2.31) 

Moving images are usually said to have two, or at most three, dimensions. If you suspect that your favourite films have many more, join us for a set of presentations and a lively panel discussion on “high-dimensional cinema,” and discover how Artificial Intelligence and related technologies are reshaping the production and understanding of audiovisual culture.

In this “meeting of the labs” event, a trio of experts in the computational analysis of visual culture come together to present their latest research and engage in conversation about recent advances at the intersection of cultural analytics, computational aesthetics, and machine learning. Join Mila Oiva, Nanne van Noord, and Daniel Chávez Heras as they explore whether and how high-dimensional cinema uncovers latent structures of meaning and pushes the boundaries of audiovisual creativity, from historical Soviet newsreels to contemporary Hollywood cinema.

Read more about the panelists and their presentations.


This is a public event, part of the workshop Sculpting Time with Computers, co-organised by the Digital Futures Institute and the Department of Digital Humanities at King’s College London. CUDAN participants are supported partially via the CUDAN ERA Chair project, funded through the Horizon 2020 research and innovation program of the European Commission (Grant no. 810961).

If you tweet or toot about this event, you can use the #kingsdh hashtag, mention @kingsdh, or mention @kingsdh@hcommons.social (on Mastodon). If you would like to get notifications about similar events, you can sign up to this mailing list.

For any queries, please email daniel.chavez@kcl.ac.uk

Seminar: Museums online: defining and evaluating success • 5 June 2023

Event organised by the Computational Humanities Research Group

5 June 2023

King’s College London, Bush House NE 2.01, 2 pm (in person only)

Ellen Charlesworth (Durham University, United Kingdom), Museums online: defining and evaluating success

Abstract

During the COVID-19 national lockdowns, there was a significant increase in the amount of content UK museums uploaded online. By publishing on social media and platforms like Google Arts and Culture, many museums hoped to reach new, younger audiences.

This seminar poses a simple question: were they successful?

Platforms’ application programming interfaces (APIs) have made more data available on museums’ digital strategies and online audiences than ever before, opening up new avenues of research. Presenting ongoing work, this talk will explore the results of a large-scale quantitative analysis of museums’ online content and detail how an initial pilot study of 315 UK museums is being expanded to 40,000 museums across Europe.

By contextualising the findings, it will investigate the underlying factors that shape social media metrics—such as ‘likes’, ‘shares’, and ‘comments’—and highlight how they complicate evaluating success online. It is questionable whether social media engagement is indicative of the type of audience engagement museums are trying to foster; is it possible, however, to use platform data to build more nuanced evaluative tools for the museum sector?

With platforms increasingly acting as mediators between audiences and museums online, this talk explores the difficulties, and future possibilities, this presents for both museums and researchers.
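Purely by way of illustration (and not material from the research described above), the kind of raw platform metric the talk problematises might look like the sketch below: interaction counts normalised by audience size. The field names and figures are hypothetical.

```python
# Illustrative sketch only: raw platform metrics (likes, shares, comments)
# normalised by audience size. The figures are invented, not study data.

def engagement_rate(likes: int, shares: int, comments: int, followers: int) -> float:
    """Total interactions per follower -- a crude proxy, not a measure of the
    deeper audience engagement museums aim to foster."""
    if followers == 0:
        return 0.0
    return (likes + shares + comments) / followers


# Two hypothetical museum posts: high absolute counts vs. high relative reach.
large_museum = engagement_rate(likes=5000, shares=800, comments=300, followers=1_000_000)
small_museum = engagement_rate(likes=120, shares=40, comments=25, followers=8_000)

print(f"large museum: {large_museum:.4f} interactions per follower")
print(f"small museum: {small_museum:.4f} interactions per follower")
```

A toy comparison like this shows why headline counts alone can mislead: the smaller institution reaches a far larger share of its audience, which is exactly the sort of nuance the talk asks whether platform data can capture.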

Bio

Ellen Charlesworth is an AHRC-funded PhD candidate at Durham University. Having studied art history at the Courtauld Institute of Art and then data science at Birkbeck, she gained experience designing and evaluating online exhibitions in collaboration with the Birkbeck Knowledge Lab, the Museum of the Home, and the Venerable English College, Rome.

Her current research asks how we can improve museums’ online content; using data from museums’ websites and social media, she aims to develop more nuanced measures of audience engagement. Her work identifies sector-wide trends in museums’ online content and explores the way this is shaped by both funding guidelines and platforms’ algorithmic interventions.

Workshop: Sculpting Time with Computers

fast motion capture in a high-speed tunnel

Interdisciplinary Approaches to Computational Moving Images

6-7 July 2023 

King’s College London, Strand Campus 

This workshop brings together a select group of researchers in the fields of digital and computational humanities, film, cultural history, informatics, computer vision, and digital art, with the purpose of jointly exploring emerging computational approaches to the study of moving images.

Participants include researchers from leading laboratories in Europe, including the Cultural Data Analytics Open Lab (CUDAN) at Tallinn University and the Cultural Analytics Lab (CANAL) at the University of Amsterdam, as well as archives and digital preservation experts from public UK institutions such as the BBC and the BFI. The workshop is hosted by the Computational Humanities Research Group in the Department of Digital Humanities at King’s College London.

Over two days, we will consider the modelling of moving images as computational artefacts, and reflect on the past, present, and future of computational moving image studies. We will then discuss and actively experiment with several ways of encoding the flows of moving images in time, from shot-length measurements to high-dimensional representations: computational techniques that might afford new perspectives on the constitution and analysis of cinematic time.
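As a small, purely illustrative sketch of the simplest of these encodings (and not material from the workshop itself), average shot length can be computed from a list of shot-boundary timestamps; the timestamps below are invented, and in practice the boundaries would come from a separate shot-detection step.

```python
# Illustrative sketch only: shot-length measurement from a list of
# shot-boundary timestamps (in seconds). The cut points are invented.

from typing import List


def shot_lengths(boundaries: List[float]) -> List[float]:
    """Durations of each shot, given cut points that include start and end times."""
    return [end - start for start, end in zip(boundaries, boundaries[1:])]


# Hypothetical cut points for a short sequence (seconds).
cuts = [0.0, 4.2, 6.8, 15.5, 18.0, 27.3]

lengths = shot_lengths(cuts)
average_shot_length = sum(lengths) / len(lengths)

print(f"shot lengths: {[round(length, 1) for length in lengths]}")
print(f"average shot length: {average_shot_length:.2f} s")
```

Higher-dimensional representations (for example, learned embeddings of frames or sequences) go well beyond this, but even this simple measure illustrates how temporal structure can be encoded as data.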

The workshop is broadly split between a day of introductions and theory and a second day of practical work and plans for future collaboration. The workshop will take place in the Embankment Room (MB -1.1.4), except for the public panel on High-dimensional cinema, which will be held in the Nash Lecture Theatre (K2.31). See the programme on the next page.

We are also organising a public event on 6 July; click here for more details.

For any queries, please contact daniel.chavez@kcl.ac.uk


This event is funded by the Digital Futures Institute at KCL.

CUDAN participants are supported partially via the CUDAN ERA Chair project, funded through the Horizon 2020 research and innovation program of the European Commission (Grant no. 810961).

DDH researchers at King’s Festival of Artificial Intelligence

DDH researchers are contributing to several public talks and events as part of The King’s Festival of Artificial Intelligence (Bringing the Human to the Artificial, King’s Institute for Artificial Intelligence).

The festival brings together speakers, exhibits, performances, demos, and screenings in an exciting programme of events from 24 to 28 May 2023. The events are open to the public and provide an opportunity to gather with academics, students, alumni and King’s cultural and industry partners to find out more about developments in artificial intelligence technologies, and the challenges and opportunities that arise from them.

AI and the Visual: Art, Science and Human Bias

Wednesday 24 May, 12.00 to 16.30

This event brings together experts from across a range of disciplines who work on visual culture and AI – from art to facial recognition systems – to explore the opportunities and challenges this raises. How do we live well with AI and the visual? And how do we address its systemic inequalities around race, gender and ethnicity? Speakers include:

Register here

AI Art: On Human and Machine Creativity

Friday 26 May, 18.00 to 18.45

Can computers be creative? Do AI image generators such as DALL·E 2 mean the end of art? Looking at different examples of computational creativity enabled by machine learning, this talk by Joanna Zylinska will aim to cut through the smoke-and-mirrors effect surrounding the current narratives about ‘creative AI’. But it will also demonstrate some practices of machinic co-creation, in which human artists and engineers draw on robotics and AI to produce work that is both visually interesting and thought-provoking. Through this, the talk will raise broader questions about the conditions of art-making and creativity today.

Register here.

AI & Art Salon

Friday 26 May, 19.00 to 20.30

How might AI art impact society and humanity’s self-conception? Attend this live discussion from the makers of the Art & AI podcast. Hear new perspectives, deep insights and crackling debate from a unique mix of scholars from King’s, the Courtauld Institute and the National Gallery.

The panel includes:

Register here.

How deep (learning) is your love?

Saturday 27 May, 19.00 to 19.45

As AI advances, our interactions with chatbots and robots are becoming increasingly common. But what are the potentials and pitfalls of fostering friendships and intimacy with computer software and hardware? This talk explores our emotional connections with AIs and robots, from their ability to provide support and companionship to fears of dehumanisation and the loss of authentic human connection. Technology has the power to bridge social and personal gaps in our lives, while also raising important ethical questions about individual and cultural impact. Join us as we explore the complex and fascinating world of human-AI relationships and consider the implications for our future interactions with technology.

Register here.

King’s Public Lecture in Digital Humanities: Shannon Mattern on “Modeling Doubt, Coding Humility: A Speculative Syllabus”

Last night we hosted Shannon Mattern for a talk on “Modeling Doubt, Coding Humility: A Speculative Syllabus”, as part of a new series of King’s Public Lectures in Digital Humanities. Here’s the video of the livestream.

Modeling Doubt, Coding Humility: A Speculative Syllabus

At a time of increasing artificial intelligence and proliferating conspiracy, faith in ubiquitous data capture and mistrust of public institutions, the ascendance of STEM and declining support for the arts and humanities, we might wonder what kind of epistemological world we’re creating. Prevalent ways of knowing have tended to weaponize uncertainty or ambiguity, as we’ve seen in relation to COVID vaccines, elections, climate, and myriad political scandals. In this talk I’ll sketch out a speculative syllabus for a future class about the place of humility and doubt in various fields of study and practice. We’ll examine how we might use a range of methods and tools — diverse writing styles, modes of visualization and sonification, ways of structuring virtual conversations, etc — to express uncertainty and invite more thoughtful, reflective engagement with our professional and public audiences and interlocutors.

Bio

Shannon Mattern is the Penn Presidential Compact Professor of Media Studies and Art History at the University of Pennsylvania. From 2004 to 2022, she served in the Department of Anthropology and the School of Media Studies at The New School in New York. Her writing and teaching focus on media architectures and infrastructures, and on spatial epistemologies. She has written books about libraries, maps, and urban intelligence, and she contributes a column about urban data and mediated spaces to Places Journal. You can find her at wordsinspace.net.

This event is co-organised by the King’s College London Department of Digital Humanities, the Centre for Digital Culture and the Digital Futures Institute.

DDH researchers contribute to “AI: Who’s Looking After Me?” exhibition and events at Science Gallery London

Department of Digital Humanities researchers Mark Coté and Kate Devlin have contributed to a new programme of public activities with Science Gallery London.

AI: Who’s Looking After Me? (presented in collaboration with FutureEverything) is a free exhibition and public events programme running from 21 June 2023 to 20 January 2024.

‘AI: Who’s Looking After Me?’ takes a questioning, surprising, playful look at the ways Artificial Intelligence (AI) is already shaping so many areas of our lives, and asks if we can really rely on these technologies for our wellbeing and happiness. Presented in collaboration with FutureEverything, the programme explores who holds the power, distributes the benefits, and bears the burden of existing AI systems.

Most of us know very little about what AI is or how it works, yet so much of how we’re cared for in different aspects of our lives – be it love, justice or health – is undergoing transformative change. ‘AI: Who’s Looking After Me?’ fractures this singular, monolithic ‘AI’ apart and looks at the range of ways it’s changing how we’re cared for.

“So many of our conversations about AI treat it as this distant, sleek, even magical thing; our attentions are daily directed towards the latest product or scandal. In all this hype and marketing, I think we’re losing sight of the human — both in how AI technologies are made, and the many ways they’re already woven into our lives. To be able to grasp and shape the course of AI’s journey, we need to grapple with its messy, multiple realities and I hope this exhibition can be an invitation to do that. It’s characteristic of what we’re trying to do as a gallery, to nurture unlikely, inventive collaborations and dialogues and be a home for the cultural work that emerges from them.”

Siddharth Khajuria, Director of Science Gallery London

Exhibited works from the programme include:

  • Cat Royale is a futurist utopia in which cats are watched over lovingly by an AI robot arm that tends to their every need. The film and installation documenting the cats’ experiences with an AI caregiver probe the future impact of new technologies on animal care… and the trade-offs involved. The work, from the internationally renowned artist collective Blast Theory, currently cultural ambassadors for the Trustworthy Autonomous Systems Hub, will be accompanied by live research from author and computer scientist Dr Kate Devlin of King’s Department of Digital Humanities.
  • Each Saturday throughout the season, Sentient Beings will invite visitors to question their relationship to security and privacy within the digital landscape of AI assistants. Featuring an immersive soundscape, the work sees artist Salomé Bazin collaborate with Dr Mark Coté from King’s Department of Digital Humanities, and with Jose Such and William Seymour from the Department of Informatics.

For further information visit Science Gallery London.