
BMO LAB

Creative Lab for the Arts, Performance, Emerging Technologies and AI


Oct 23 2020

Performers-in-Residence Update – Oct 22

Today we went deeper with the AI-generated Shakespeare. We discussed many approaches to using this material, then decided to put the talk aside and jump in. We set up the lab so that the output of the AI was projected on the wall, letting the performers see the text as it was being produced and read it immediately, adopting characters on the fly as they turned up in the script. Because the text was formulated anew on the spot, the performers had no idea what was coming, and often did not know where a speech was going as they were speaking it.

This ended up being much more lovely and joyful than we had expected. The text being generated was coherent in bursts but largely nonsensical, and it was fascinating from the outside to watch these wonderful performers ride the language, letting their experience carry them through. As Ryan pointed out, the AI has a dataset: all the words of all the plays of Shakespeare. And the performers each have their own dataset, built out of their experience on stage and the roles they have played. So there are two datasets engaging here. The AI’s dataset is strictly limited to words. The performers’ datasets are much broader, incorporating not only the language of their theatre experience but the embodied experience of movement on stage, and beyond that, of course, the entirety of the experiences of their lives. Seeing the often clumsy output of this limited system and dataset filtered through and interpreted by these living actors was thrilling, and hilarious!

After trying this twice (once with Ryan, Sebastien and Maev, and once after Rick Miller joined us), we talked about how to make the most of what was exciting about this adventure. During the performance, I (David) was manually advancing the generation of the script after each speech so that it did not run ahead of the performers. We talked about using the spatial triggers we had played with yesterday to allow the performer who was speaking to initiate the generation of the next sentence (i.e. by raising their hand into a sensitive zone covering the entire stage.)
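To make that idea a little more concrete, here is a minimal sketch of a hand-raise trigger that advances the generation. It assumes a hypothetical get_hand_height() tracking reading and a next_sentence() call to the Shakespeare model; neither is the lab's actual software, and the threshold and cooldown values are placeholders.

```python
import time

TRIGGER_HEIGHT = 1.9   # metres -- a hand raised above this line fires the trigger (assumed value)
COOLDOWN = 2.0         # seconds before the zone can fire again, to avoid double triggers


def get_hand_height() -> float:
    """Placeholder for a tracking-camera reading of the speaking performer's hand."""
    return 0.0


def next_sentence() -> str:
    """Placeholder for requesting one more sentence from the Shakespeare model."""
    return "..."


def run() -> None:
    """Poll the trigger zone and advance the script when the speaker raises a hand."""
    last_fired = 0.0
    while True:
        now = time.time()
        if get_hand_height() > TRIGGER_HEIGHT and now - last_fired > COOLDOWN:
            last_fired = now
            print(next_sentence())  # in the lab this text would go to the projection
        time.sleep(1 / 30)          # poll at roughly frame rate, until interrupted
```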

Then we finished up by talking about using text in other ways, such as embedding spoken text in the space by positioning triggers for speeches throughout the performance space (rather than triggering sounds as we had done the day before). We talked about how to make this more usable and intuitive, fantasizing about a system that would allow us to point to a location, speak a line, and have that line become embedded there, ready to be triggered should a performer touch that specific spot. We talked about how writing a script for such a space is tricky, as each speech must be written to allow for many paths through the text. And we talked about how it would be possible to swap ‘maps’ on the fly, so that a scene change could replace all the trigger locations and the phrases they trigger at once.
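As a sketch of how such a swappable ‘map’ might be represented, imagine each scene as simply a dictionary from stage coordinates to the line embedded there; the coordinates, trigger radius, and sample lines below are illustrative, not anything we have actually built.

```python
from dataclasses import dataclass
from math import dist
from typing import Dict, Optional, Tuple

# One "map": stage coordinates (x, y) in metres -> the line embedded at that spot.
SceneMap = Dict[Tuple[float, float], str]

TRIGGER_RADIUS = 0.5  # how close a performer must come to fire a trigger (assumed)


@dataclass
class SpeechSpace:
    current: SceneMap

    def swap_map(self, new_map: SceneMap) -> None:
        """Scene change: replace every trigger location and phrase at once."""
        self.current = new_map

    def check(self, performer_pos: Tuple[float, float]) -> Optional[str]:
        """Return the embedded line if the performer is standing on a trigger."""
        for location, line in self.current.items():
            if dist(performer_pos, location) < TRIGGER_RADIUS:
                return line
        return None


# Illustrative usage with made-up coordinates and lines:
scene_one: SceneMap = {
    (1.0, 2.0): "To be, or not to be...",
    (3.5, 0.5): "Now is the winter of our discontent...",
}
space = SpeechSpace(current=scene_one)
print(space.check((1.2, 2.1)))  # close enough -> returns the embedded line
space.swap_map({})              # a scene change empties or replaces all triggers
```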

Written by David Rokeby · Categorized: CanStage_BMO

