[three_fourths]

How did automatic speech recognition lay the groundwork for contemporary computational knowledge practices? Join us for a public talk with Xiaochang Li (Max Planck Institute for the History of Science, Berlin).  
 
How Language Became Data: Speech Recognition and Computational Knowledge – Xiaochang Li (Max Planck Institute for the History of Science, Berlin)  
 
In the 1970s, a team of researchers at IBM began to reorient the field of automatic speech recognition away from the scientific study of human perception and language towards a startling new mandate: to find “the natural way for the machine to do it.” In what is recognizable today as a data-driven, “black box” approach to language processing, IBM’s Continuous Speech Recognition group set out to meticulously uncouple computational modelling from the demands of explanation and interpretability. Automatic speech recognition was refashioned as a problem of large-scale data acquisition, storage, and classification, one that was distinct from, if not antithetical to, human perception, expertise, and understanding. These efforts were pivotal in bringing language under the purview of data processing, and in doing so helped carry a narrow form of data-driven computational modelling across diverse domains and into the sphere of everyday life, spurring the development of algorithmic techniques that now appear in applications for everything from machine translation to protein sequencing. The history of automatic speech recognition offers a glimpse into how making language into data made data into an imperative, and thus shaped the conceptual and technical groundwork for what is now one of our most wide-reaching modes of computational knowledge.  
 
Bio: Xiaochang Li is currently a Postdoctoral Fellow in the Epistemes of Modern Acoustics research group at the Max Planck Institute for the History of Science in Berlin. This coming fall, she will join the faculty of Stanford University as Assistant Professor in the Department of Communication. Her current book project examines the history of predictive text and how the problem of making language computationally tractable was laid into the foundations of data-driven computational culture. It traces developments in automatic speech recognition and natural language processing through the twentieth century, highlighting their influence on the cultural, technical, and institutional practices that gave rise to so-called “big data” and machine learning as privileged and pervasive forms of knowledge work.  
 
This event is part of an ongoing seminar series on “critical inquiry with and about the digital” hosted by the Department of Digital Humanities, King’s College London. If you tweet about the event, you can use the #kingsdh hashtag or mention @kingsdh. If you’d like to receive notifications of future events, you can sign up to this mailing list.
 

[/three_fourths]

[one_fourth_last]

Date and time

Wed, 22 May 2019
16:30 – 18:00 BST

Location

BH(S)4.04, Bush House Lecture Theatre 2
Bush House, South Wing, King’s College London
30 Aldwych
London
WC2B 4BG

 

[button open_new_tab=”true” color=”accent-color” hover_text_color_override=”#fff” size=”medium” url=”https://www.eventbrite.co.uk/e/how-language-became-data-speech-recognition-and-computational-knowledge-tickets-55157807487″ text=”Register” color_override=””] [/one_fourth_last]
