Starting with a simple question – “What does responsible AI look like from the street?” – the AI-in-the-street teams, one of them hosted at the Department of Digital Humanities, are undertaking creative participatory research in five cities in the UK and Australia: London, Edinburgh, Coventry, Cambridge and Logan. These research-based interventions, funded by the AHRC’s BRAID programme (Bridging Divides in Responsible AI), take the form of diagramming workshops, sensing walks and street-based activities, and will inform the scoping of a prototype for a “street-level observatory” for everyday AI: a digital showcase and protocol for rendering the presence, role and effects of AI-based technologies visible and/or tangible for everyday publics in the street.
Image: Anne Fehres and Luke Conroy & AI4Media / Better Images of AI. Licensed under CC-BY 4.0
The research of the London Observatory was conducted in three city locations – Science Gallery (London Bridge), Martello Street Studios (London Fields) and Hermitage Community Moorings (Wapping) – hosted by Ambient Information Systems with Yasmine Boudiaf. Our primarily discursive workshops, each up to three hours long, involved 18 participants in total. The approach, which evolved over the three sessions, combined role-playing, experience-sharing, exploring counterfactuals and terminology through algorithmically-guided conversation, envisioning exercises, and collaborative drawing. Audio/video recordings, texts and images from the sessions form the basis of an artwork (in progress).
The workshops were grounded in three key considerations:
• identities in the street, noting in particular that we may inhabit multiple and fluid identities (as a parent accompanying a child, as a cyclist)
• needs and desires that the street actually or potentially fulfils, or fails to fulfil; the extent to which technologies including AI meet these needs and desires, either as currently deployed or as imagined; and unintended effects (which may have uneven impact)
• alternative, non-technological solutions to these needs and desires.
While participants had limited awareness of the extent of AI deployment in the street, they were tech-literate and understood the deeper societal and legal implications of AI systems. Most participants noted the lack of users’ voices in design and implementation processes, and several pointed to the energy and environmental costs of AI. One participant had expert-level domain knowledge, but even they described AI systems as opaque in multiple ways: from problem specification and design, to terms of engagement and access, to the origin, processing and fate of data.
There was widespread agreement that, despite their potential agility and precision, AI technologies are entangled in an ossified economic model that centralises power away from citizens and treats environmental costs as externalities. The devolution of human agency to machine systems was seen as of mixed utility. Other concerns raised included poor problem specification leading to ‘solutionism’ and function creep, and the general vulnerability of complex technological systems.
The findings of the London Observatory will be combined with those of Edinburgh, Coventry, Cambridge and Logan (Australia), and presented at the Science Gallery on Thursday, 12th September 2024 at 6.30pm.
Text by AI in the Street: Mukul Patel and Mercedes Bunz.
AI in the Street is funded under the AHRC BRAID programme (Bridging Divides in Responsible AI). BRAID is a three-year national research programme funded by the UKRI Arts and Humanities Research Council (AHRC), led by the University of Edinburgh in partnership with the Ada Lovelace Institute and the BBC.