On 20th June 2023, Stuart Dunn of the Department of Digital Humanities at King’s College London delivered his Professorial Inaugural Lecture, The Spatial Humanities: A Challenge to the All-Knowing Map, which explored:
What are Spatial Humanities, and why does King’s have a Professor dedicated to them?
In 1946 Jorge Luis Borges published a short story about a fictional kingdom fixated on perfecting the Art of Cartography. Its people construct a map so exact that it covers the whole expanse of the kingdom. But later generations abandon the map and it decays, until all that is left are its tattered ruins, inhabited only by animals and beggars.
Professor Dunn examines the present-day successors of Borges’s all-encompassing map: the platforms through which we navigate and wayfind – Google Maps, OpenStreetMap, Apple Maps and so on – which, metaphorically, cover the world’s entire surface.
Framed partly by the history of ideas, partly by cartography, and partly by digital place-making, Professor Dunn’s approach is situated at the crossroads of disciplines that make up the Spatial Humanities. Through a linked discussion of early antiquarian place-writing, the emergence of Global Positioning System (GPS) technology, and what the geographer Doreen Massey called “space-time compression”, he explores the origins of our motivation to “know” the entire world through mapping.
He also discusses how this has led to contemporary place-making becoming tattered through corporatization and commercialization. How can the Spatial Humanities help us fix our place, both in the sense of locating where we are and of repairing our relationship with it?
A full transcript and recording of the lecture can be found here.
King’s College London has been awarded £5m in funding from UK Research and Innovation (UKRI) to support a collaborative project led by Dr Kate Devlin from the Department of Digital Humanities and involving Dr Caitlin Bentley and Professor Sana Khareghani (Department of Informatics), and Professor Prokar Dasgupta (Peter Gorer Department of Immunobiology and the Department of Surgical & Interventional Engineering).
The grant will fund research that helps us understand what responsible and trustworthy AI is, how to develop it, how to build it into existing systems, and what impacts it will have on society:
This is a timely investment, bringing together a world-leading, diverse and multidisciplinary team from all four nations of the UK to work on cutting-edge issues. It is particularly exciting to have the King’s strand of the project based in Arts and Humanities, where the College has recently invested in the Digital Futures Institute, exploring how we can live well with technology. This is truly cross-cutting research on responsible AI with a human-centred approach at the very heart of it.
Dr Kate Devlin, who is leading King’s involvement with the UKRI Responsible Artificial Intelligence UK (RAI UK)
Join us in this panel discussion to learn how Artificial Intelligence and related technologies are reshaping the production and understanding of audiovisual culture.
King’s College London, King’s Building, Nash Lecture Theatre (K2.31)
Moving images are usually said to have 2 or at most 3 dimensions. If you suspect that your favourite films have many more, join us for a set of presentations and lively panel discussion on “high-dimensional cinema,” and discover how Artificial Intelligence and related technologies are reshaping the production and understanding of audiovisual culture.
In this “meeting of the labs” event, a trio of experts in the computational analysis of visual culture come together to present their latest research and engage in conversation about recent advances at the intersection between cultural analytics, computational aesthetics, and machine learning. Join Mila Oiva, Nanne van Noord, and Daniel Chávez Heras, as they explore if and how high-dimensional cinema uncovers latent structures of meaning and pushes the boundaries of audiovisual creativity, from historical Soviet newsreels to contemporary Hollywood cinema.
This is a public event, part of the workshop Sculpting Time with Computers, co-organised by the Digital Futures Institute and the Department of Digital Humanities at King’s College London. CUDAN participants are partially supported via the CUDAN ERA Chair project, funded through the Horizon 2020 research and innovation programme of the European Commission (Grant no. 810961).
If you tweet or toot about this event, you can use the #kingsdh hashtag or mention @kingsdh (on Twitter) or @kingsdh@hcommons.social (on Mastodon). If you would like to get notifications about similar events, you can sign up to this mailing list.
King’s College London, Bush House NE 2.01, 2 pm (in person only)
Ellen Charlesworth (Durham University, United Kingdom), Museums online: defining and evaluating success
Abstract
During the COVID-19 national lockdowns, there was a significant increase in the amount of content UK museums uploaded online. By publishing on social media and platforms like Google Arts and Culture, many museums hoped to reach new, younger audiences.
This seminar poses a simple question: were they successful?
Platforms’ application programming interfaces (APIs) have made more data available on museums’ digital strategies and online audiences than ever before, opening up new avenues of research. Presenting ongoing work, this talk will explore the results of a large-scale quantitative analysis of museums’ online content and detail how an initial pilot study of 315 UK museums is being expanded to 40,000 museums across Europe.
By contextualising the findings, it will investigate the underlying factors that shape social media metrics—such as ‘likes’, ‘shares’, and ‘comments’—and highlight how they complicate evaluating success online. It is questionable whether social media engagement is indicative of the type of audience engagement museums are trying to foster; even so, is it possible to use platform data to build more nuanced evaluative tools for the museum sector?
With platforms increasingly acting as mediators between audiences and museums online, this talk explores the difficulties, and future possibilities, this presents for both museums and researchers.
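By way of illustration only (this is not the speaker’s pipeline), a minimal sketch of the kind of follower-normalised engagement measure such an analysis might compute from post-level platform data; the record fields and figures below are entirely hypothetical:

```python
# Hypothetical post-level records, e.g. assembled from a platform API export.
posts = [
    {"museum": "Museum A", "likes": 320, "shares": 41, "comments": 18, "followers": 15_000},
    {"museum": "Museum B", "likes": 95,  "shares": 7,  "comments": 4,  "followers": 1_200},
]

def engagement_rate(post):
    """Interactions per follower: one simple (and simplistic) measure of 'success'."""
    interactions = post["likes"] + post["shares"] + post["comments"]
    return interactions / post["followers"]

for post in posts:
    print(post["museum"], round(engagement_rate(post), 4))
# Once audience size is controlled for, a small museum can outscore a much larger one,
# which is one reason raw like/share counts complicate claims about success online.
```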
Bio
Ellen Charlesworth is an AHRC-funded PhD candidate at Durham University. Having studied art history at the Courtauld Institute of Art and then data science at Birkbeck, she gained experience designing and evaluating online exhibitions in collaboration with the Birkbeck Knowledge Lab, the Museum of the Home, and the Venerable English College, Rome.
Her current research asks how we can improve museums’ online content; using data from museums’ websites and social media, she aims to develop more nuanced measures of audience engagement. Her work identifies sector-wide trends in museums’ online content and explores the way this is shaped by both funding guidelines and platforms’ algorithmic interventions.
Interdisciplinary Approaches to Computational Moving Images
6-7 July 2023
King’s College London, Strand Campus
This workshop brings together a select group of researchers in the fields of digital and computational humanities, film, cultural history, informatics, computer vision, and digital art, with the purpose of jointly exploring emerging computational approaches to the study of moving images.
Participants include researchers from leading laboratories in Europe, including the Cultural Data Analytics Open Lab (CUDAN) at Tallinn University and the Cultural Analytics Lab (CANAL) at the University of Amsterdam, as well as archives and digital preservation experts from public UK institutions such as the BBC and the BFI. The workshop is hosted by the Computational Humanities Research Group in the Department of Digital Humanities at King’s College London.
Over two days, we will consider the modelling of moving images as computational artefacts, and reflect on the past, present, and future of computational moving image studies. We will then discuss and actively experiment with several ways of encoding the flows of moving images in time: from shot-length measurements to high-dimensional representations, computational techniques that might afford new perspectives on the constitution and analysis of cinematic time.
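As a toy illustration of the simpler end of that spectrum (shot-length measurement rather than high-dimensional representation), a minimal sketch using hypothetical cut timestamps:

```python
# Hypothetical cut timestamps (in seconds) for a short sequence,
# e.g. as produced by a shot-boundary detector.
cuts = [0.0, 4.2, 6.9, 12.5, 13.1, 20.0]

# Shot lengths are simply the gaps between consecutive cuts.
shot_lengths = [end - start for start, end in zip(cuts, cuts[1:])]

average_shot_length = sum(shot_lengths) / len(shot_lengths)
print([round(s, 2) for s in shot_lengths])  # [4.2, 2.7, 5.6, 0.6, 6.9]
print(average_shot_length)                  # 4.0 seconds
```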
The workshop is broadly split between a day of introductions and theory, and a second day of practical work and plans for future collaboration. The workshop will take place in the Embankment Room (MB -1.1.4), except for the public panel on High-dimensional cinema, which will be held in the Nash Lecture Theatre (K2.31). See the programme on the next page.
CUDAN participants are partially supported via the CUDAN ERA Chair project, funded through the Horizon 2020 research and innovation programme of the European Commission (Grant no. 810961).
DDH researchers are contributing to several public talks and events as part of The King’s Festival of Artificial Intelligence (Bringing the Human to the Artificial, King’s Institute for Artificial Intelligence).
The festival brings together speakers, exhibits, performances, demos, and screenings in an exciting programme of events from 24th-28th May 2023. The events are open to the public, and provide an opportunity to gather with academics, students, alumni and King’s cultural and industry partners to find out more about developments in artificial intelligence technologies, and the challenges and opportunities that arise from them.
This event brings together experts from across a range of disciplines who work on visual culture and AI – from art to facial recognition systems – to explore its opportunities and challenges. How do we live well with AI and the visual? And how do we address its systemic inequalities around race, gender and ethnicity? Speakers include:
Can computers be creative? Do AI image generators such as DALL·E 2 mean the end of art? Looking at different examples of computational creativity enabled by machine learning, this talk by Joanna Zylinska will aim to cut through the smoke-and-mirrors effect surrounding the current narratives about ‘creative AI’. But it will also demonstrate some practices of machinic co-creation, in which human artists and engineers draw on robotics and AI to produce work that is both visually interesting and thought-provoking. Through this, the talk will raise broader questions about the conditions of art making and creativity today.
How might AI art impact society and humanity’s self-conception? Attend this live discussion from the makers of the Art & AI podcast. Hear new perspectives, deep insights and crackling debate from a unique mix of scholars from King’s, the Courtauld Institute and the National Gallery.
As AI advances, our interactions with chatbots and robots are becoming increasingly common. But what are the potentials and pitfalls of fostering friendships and intimacy with computer software and hardware? This talk explores our emotional connections with AIs and robots, from their ability to provide support and companionship to fears of dehumanisation and the loss of authentic human connection. Technology has the power to bridge social and personal gaps in our lives, while also raising important ethical questions about individual and cultural impact. Join us as we explore the complex and fascinating world of human-AI relationships and consider the implications for our future interactions with technology.
Modeling Doubt, Coding Humility: A Speculative Syllabus
At a time of increasing artificial intelligence and proliferating conspiracy, faith in ubiquitous data capture and mistrust of public institutions, the ascendance of STEM and declining support for the arts and humanities, we might wonder what kind of epistemological world we’re creating. Prevalent ways of knowing have tended to weaponize uncertainty or ambiguity, as we’ve seen in relation to COVID vaccines, elections, climate, and myriad political scandals. In this talk I’ll sketch out a speculative syllabus for a future class about the place of humility and doubt in various fields of study and practice. We’ll examine how we might use a range of methods and tools — diverse writing styles, modes of visualization and sonification, ways of structuring virtual conversations, etc — to express uncertainty and invite more thoughtful, reflective engagement with our professional and public audiences and interlocutors.
Bio
Shannon Mattern is the Penn Presidential Compact Professor of Media Studies and Art History at the University of Pennsylvania. From 2004 to 2022, she served in the Department of Anthropology and the School of Media Studies at The New School in New York. Her writing and teaching focus on media architectures and infrastructures and on spatial epistemologies. She has written books about libraries, maps, and urban intelligence, and she contributes a column about urban data and mediated spaces to Places Journal. You can find her at wordsinspace.net.
AI: Who’s Looking After Me? (presented in collaboration with FutureEverything) is a free exhibition and public events programme, running from 21 June 2023 to 20 January 2024.
‘AI: Who’s Looking After Me?’ takes a questioning, surprising, playful look at the ways Artificial Intelligence (AI) is already shaping so many areas of our lives, and asks if we can really rely on these technologies for our wellbeing and happiness. Presented in collaboration with FutureEverything, the exhibition explores who holds the power, distributes the benefits, and bears the burden of existing AI systems.
Most of us know very little about what AI is or how it works, but so much of how we’re cared for in different aspects of our lives – be it love, justice or health – is undergoing transformative change. ‘AI: Who’s Looking After Me?’ fractures this singular, monolithic ‘AI’ apart, and looks at the range of ways it’s changing how we’re cared for.
“So many of our conversations about AI treat it as this distant, sleek, even magical thing; our attentions are daily directed towards the latest product or scandal. In all this hype and marketing, I think we’re losing sight of the human — both in how AI technologies are made, and the many ways they’re already woven into our lives. To be able to grasp and shape the course of AI’s journey, we need to grapple with its messy, multiple realities and I hope this exhibition can be an invitation to do that. It’s characteristic of what we’re trying to do as a gallery, to nurture unlikely, inventive collaborations and dialogues and be a home for the cultural work that emerges from them.”
Siddharth Khajuria, Director of Science Gallery London
Exhibited works from the programme include:
Cat Royale is a futurist utopia in which cats are watched over lovingly by an AI robot arm that tends to their every need. The film and installation documenting cats’ experiences with an AI caregiver probe the future impact of new technologies on animal care… and the trade-offs involved. The work from internationally renowned artist collective Blast Theory, currently cultural ambassadors for the Trustworthy Autonomous Systems Hub, will be accompanied by live research from author and computer scientist Dr Kate Devlin, King’s Department of Digital Humanities.
Each Saturday throughout the season, Sentient Beings will invite visitors to question their relationship to security and privacy within the digital landscape of AI assistants. Featuring an immersive soundscape, the work sees artist Salomé Bazin collaborate with Dr Mark Cote from King’s Department of Digital Humanities, and Jose Such and William Seymour from the Department of Informatics.
You can read further details and register here. An excerpt from the event blurb is copied below.
What if racism, sexism, and ableism aren’t just glitches in mostly functional machinery—what if they’re coded into our technological systems? In this talk, data scientist and journalist Meredith Broussard explores why neutrality in tech is a myth and how algorithms can be held accountable.
Broussard, one of the few Black female researchers in artificial intelligence, explores a range of examples: from facial recognition technology trained only to recognize lighter skin tones, to mortgage-approval algorithms that encourage discriminatory lending, to the dangerous feedback loops that arise when medical diagnostic algorithms are trained on insufficiently diverse data. Even when such technologies are designed with good intentions, Broussard shows, fallible humans develop programs that can result in devastating consequences.
Broussard argues that the solution isn’t to make omnipresent tech more inclusive, but to root out the algorithms that target certain demographics as “other” to begin with. She explores practical strategies to detect when technology reinforces inequality, and offers ideas for redesigning our systems to create a more equitable world.
Folgert Karsdorp (Royal Netherlands Academy of Arts & Sciences (KNAW), Amsterdam) and Mike Kestemont (University of Antwerp, Belgium), Forgotten knights, unseen sailors, and unapprehended criminals: applying unseen species models to the survival of culture
Abstract
Researchers of the past — whether historians, literary scholars or archaeologists — depend on the sources that have stood the test of time. That sample of history is usually far from complete, however. There are numerous reasons for this, such as natural causes (e.g., fires or floods), decisions at the level of archival policy (what do we preserve and what do we not?), and biases in the formation of the archives themselves. Data representing lower classes were long considered less relevant, for example, and thus socioeconomic factors likewise play a role in the survival of sources. In a series of recent experiments, we have explored how statistical methods from ecology can help us identify such gaps and biases in our knowledge. Those methods all find their basis in “Unseen Species Models,” which were originally developed to estimate the number of unique species in an environment. Just as ecologists try to estimate biodiversity from an incomplete sample, we apply the models to incomplete historical archives to measure the actual cultural diversity. In this talk, we apply unseen species models to three cases. First, we show how these methods can tell us something about the forgotten medieval chivalric literature in Western Europe. We then apply an extension of the method to the historical archives of the Dutch East India Company, to map out the size of its workforce. Finally, we explore a generalization of the unseen species model with which co-variates of loss or absence can be mapped. We apply this extension to a dataset from historical criminology: the police registers of the Amigo prison (1879-1880) in Brussels, and show how the models can give us an estimate of the “dark number” of unapprehended perpetrators as well as the demographic composition of this group.
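As a purely illustrative aside (the abstract does not specify which estimators the speakers use), the best-known unseen species estimator, Chao1, is simple enough to sketch; the witness counts below are made up:

```python
def chao1(abundances):
    """Bias-corrected Chao1 estimate of the total number of 'species'
    (here: works), i.e. observed works plus an estimate of those lost entirely.

    `abundances` holds one entry per observed work: the number of
    surviving copies (witnesses) of that work.
    """
    s_obs = len(abundances)                    # works we can still see
    f1 = sum(1 for a in abundances if a == 1)  # works attested by a single copy
    f2 = sum(1 for a in abundances if a == 2)  # works attested by exactly two copies
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

# Hypothetical toy data: surviving copy counts for eight observed works.
witness_counts = [1, 1, 1, 2, 2, 3, 5, 12]
print(chao1(witness_counts))  # 9.0: roughly one further work is estimated to be entirely lost
```

The intuition is that the proportion of works attested by only one or two copies tells us how steep the "tail" of barely surviving items is, and hence how many items are likely to have left no trace at all.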
Bios
Mike Kestemont, PhD, is a full professor in the Department of Literature at the University of Antwerp (Belgium). He specializes in computational text analysis for the Digital Humanities. While his work has a strong focus on historical literature, his previous research has covered a wide range of topics in literary history, including classical, medieval, early modern and modernist texts. Together with Folgert Karsdorp and Allen Riddell he has written a textbook on data science for the Humanities. The persistence of cultural information over long stretches of time is his key research topic at the moment. In the new framework of Cultural Ecology, empirical methods are imported from ecology and biostatistics to provide innovative quantitative models of cultural change and survival. Together with his Polish colleagues Maciej Eder and Jan Rybicki he is involved in the Computational Stylistics Group. Mike lives in Brussels (http://mikekestemont.github.io/), tweets in English (@Mike_Kestemont) and codes in Python (https://github.com/mikekestemont).
Folgert Karsdorp, PhD, is a senior researcher in Computational Humanities and Cultural Evolution at the Meertens Institute of the Royal Netherlands Academy of Arts and Sciences (Amsterdam, the Netherlands). His research focuses on modelling cultural change from an evolutionary perspective (e.g., why some cultural phenomena are adopted and persist through time, while others change or disappear). Additionally, he is interested in measuring cultural diversity and compositional complexity, and how we can account for biases in our estimations of diversity. To do that, he employs computational models from Machine Learning, Cultural Evolution, and Ecology. Besides cultural change and diversity, Karsdorp is also interested in teaching computer programming in the context of the Humanities. Together with Mike Kestemont and Allen Riddell, he published a textbook with Princeton University Press about using Python for Humanities data analysis. For more information see his website (https://www.karsdorp.io) and his projects on GitHub (https://github.com/fbkarsdorp).