
BMO LAB

Creative Lab for the Arts, Performance, Emerging Technologies and AI


Can Stage BMO Lab Residency

  • Canadian Stage debuts its short film about the BMO Lab / CanStage Residency
    For the 20.21 season, Ryan Cunningham and Sébastien Heins are participating in the BMO Lab in Creative Research in the Arts, Performance, Emerging Technologies and Artificial Intelligence. Canadian Stage and the Centre for Drama, Theatre and Performance Studies, University of Toronto have partnered to create this unique paid opportunity for two professional actors, who will immerse themselves in the Lab's technologies and research possibilities for application to live theatre performance. Video by: https://www.videocompany.ca/ The BMO Lab would like to thank the Video Company for putting together such a fine reflection on our residency in progress despite the challenges of the … Read more
  • Performers-In-Residence Update – Nov 16
    Sebastien: Voxel-based cueing of sound, video, and lights. Sebastien came in to work with David to try to convert a scene from a solo play to interactive triggering. We positioned triggers for sound cues, a video cue, and a lighting cue in the space to see whether such a system might be usable for actual performances. The session brought up some interesting limitations in the current software that David has produced. The software is an extension of work David did for a very specific project a few years ago and thus was not designed to address the kinds of needs … Read more
  • Performers-in-Residence Update – Nov 6
    A few days ago, Sebastien asked whether it would be possible to trigger voices instead of sounds through the system that places sound possibilities in space, so we decided to do an experiment to see how that might feel and what creative possibilities it might open up. First we sat down and came up with a range of words and phrases that were somewhat ambiguous and could be presented in different orders. Then Sebastien and Maev recorded these phrases, doing multiple versions of each with different expression. Then each utterance was converted into a sound file and loaded into the … Read more
  • Performers-in-Residence Update – Oct 29
    Ryan: The Challenge of Indigenous Languages for AI. As the others were unavailable, Ryan and David got together to explore a couple of issues related to Ryan's interests. First, Ryan brought up the fact that it would be challenging to replicate the kind of AI-generated script experiments we were exploring the other day in the context of Shakespeare within the space of Indigenous cultural production. The system we were using (GPT-2 from OpenAI, fine-tuned on the plays of Shakespeare) was initially trained on a very large body of English-language writing. The sort of 'learning' that this system … Read more
  • Performers-in-Residence Update – Oct 28
    First we sat down to discuss the experience of the cold read of the AI-generated script from last week: what worked, what didn't, and how we might adjust the software to improve the experience. Sebastien, Maev and Rick discussed the potential to use the tool for training / rehearsal / skill development, as they all felt that the experience was pleasurable, very challenging, and productive. Then we went further with the GPT-2 generated scripts using a modified version of the program that allows us to mix models trained on different bodies of text. The example … Read more
  • Performers-in-Residence Update – Oct 22
    Today we went deeper with the AI-generated Shakespeare. We discussed many approaches to using this material, and then decided to put the talk aside and jump in. We set up the lab so that the output of the AI was projected on the wall, so that the performers could see the text as it was being produced and read it immediately… with the performers adopting characters on the fly as they turned up in the script. As the text was formulated anew on the spot, the performers had no idea what was coming, and often did not know where … Read more
  • Performers-in-Residence Update – Oct 21
    Today we had the pleasure of welcoming Maev Beaty to join in our explorations. We started by extending our exploration of placing spatial interactive cues. Using the Azure Kinect, we created virtual walls and ceilings in the space that triggered sounds when our bodies touched or passed through them. We created multiple layers of these sensitive sound zones above our heads which we could reach into, and then created one that was low enough that we had to crouch or bow to keep from triggering it. We considered how one might build an invisible stage set of constraints that the performer … Read more
  • Performers-in-Residence Update – Oct 15
    Today we dipped our toes in the world of Artificial Intelligence. We played with a neural network (GPT-2) that was trained on an enormous dataset of text, and then fine-tuned on the complete plays of William Shakespeare. The GPT-2 Shakespeare Explorer shows the probabilities for the next word to be produced (yellow is very likely; as you move towards purple, the words are much less likely to be chosen). This led to a lively discussion about the relative meaningfulness or meaninglessness of the resulting text, and ways that this sort of system might be useful in the context … Read more
  • Performers-in-Residence Update – Oct 14
    For the first meeting, we explored various methods of tracking and responding to movement. I set up an interactive sound installation that I created in 2003 that translates a performer's movements into sound. We looked at ways a computer can locate the presence and movement of people within a video image, and played around with these both to get a feel for the technology and to think about how this technology might be used in performance. Sebastien and Ryan interacted with the interactive sound installation. Then we looked at newer technologies for capturing the human body through depth sensors. We looked at … Read more
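The movement tracking described in the Oct 14 update — locating the presence and motion of people within a video image — is classically done with frame differencing. The Lab's actual 2003 installation code is not public, so this is a generic sketch on toy grayscale frames, with the motion amount imagined as a control for a sound parameter:

```python
# Toy frame-differencing motion detector. Frames are lists of rows of
# 0-255 brightness values; real input would come from a camera feed.
# Generic illustration, not the Lab's installation software.

def motion_mask(prev, curr, thresh=30):
    """Per-pixel difference: True where brightness changed noticeably."""
    return [[abs(c - p) > thresh for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

def motion_amount(mask):
    """Fraction of pixels in motion — usable to drive a sound parameter."""
    total = sum(len(row) for row in mask)
    return sum(sum(row) for row in mask) / total

prev = [[10, 10, 10], [10, 10, 10]]
curr = [[10, 200, 10], [10, 10, 200]]
mask = motion_mask(prev, curr)
print(motion_amount(mask))  # 2 of 6 pixels changed
```

In a live setting this would run per video frame, with the motion fraction mapped to volume, pitch, or density of a generated sound.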
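The colour-coded next-word view in the Oct 15 update reflects the probability distribution a language model assigns over its vocabulary at each step. A minimal sketch of that idea, using a made-up five-word vocabulary and invented scores (a real GPT-2 works over roughly 50,000 subword tokens):

```python
import math

def softmax(logits):
    """Convert raw model scores into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Invented scores for the word after "To be, or not to" — illustrative only.
vocab = ["be", "die", "speak", "do", "the"]
logits = [4.0, 2.0, 1.0, 0.5, 0.2]
probs = softmax(logits)

# Rank words from "yellow" (very likely) down towards "purple" (unlikely).
for word, p in sorted(zip(vocab, probs), key=lambda t: -t[1]):
    print(f"{word:>6}  {p:.3f}")
```

The Explorer's colouring is just this ranking made visible: the brighter the colour, the more probability mass the model puts on that continuation.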
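The virtual walls and ceilings from the Oct 21 session amount to invisible boxes in the Kinect's 3D coordinate space that fire when a tracked body joint intersects them. A sketch with hypothetical zone positions (the Lab's actual layout and software are not shown here; in practice the Azure Kinect body tracker would supply the joint positions each frame):

```python
# Axis-aligned trigger zones in room coordinates (metres).
# Zone names and positions are invented for illustration.

class Zone:
    def __init__(self, name, lo, hi):
        self.name, self.lo, self.hi = name, lo, hi  # box corners

    def contains(self, p):
        return all(l <= c <= h for l, c, h in zip(self.lo, p, self.hi))

zones = [
    Zone("overhead layer", (-2.0, 2.0, -2.0), (2.0, 2.4, 2.0)),  # reach up into
    Zone("low ceiling",    (-2.0, 1.5, -2.0), (2.0, 1.7, 2.0)),  # crouch under
]

def fired(joints):
    """Names of zones that any tracked joint currently intersects."""
    return [z.name for z in zones if any(z.contains(j) for j in joints)]

standing_head = [(0.0, 1.65, 0.0)]
crouched_head = [(0.0, 1.2, 0.0)]
print(fired(standing_head))  # ['low ceiling'] — standing sets it off
print(fired(crouched_head))  # [] — crouching stays clear
```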
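The modified program in the Oct 28 update mixes models trained on different bodies of text. One simple way to blend two models — not necessarily what the Lab's software does — is a weighted average of their next-word distributions:

```python
def mix(p_a, p_b, w):
    """Blend two next-word distributions, with weight w toward model A."""
    words = set(p_a) | set(p_b)
    return {t: w * p_a.get(t, 0.0) + (1 - w) * p_b.get(t, 0.0) for t in words}

# Invented toy distributions for a Shakespeare model and a modern-text model.
shakespeare = {"thee": 0.6, "thou": 0.3, "you": 0.1}
modern      = {"you": 0.8, "thou": 0.1, "them": 0.1}

blended = mix(shakespeare, modern, w=0.5)
```

Sliding `w` between 0 and 1 moves the generated voice between the two corpora; an equal blend here gives "you" 0.45 and "thee" 0.3.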
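The Nov 6 experiment placed recorded utterances in space, with several expressive takes of each phrase. One way such a phrase bank might be organized is sketched below — the trigger names, phrases, and file names are all invented for illustration:

```python
import random

# Hypothetical phrase bank: each spatial trigger maps to several takes
# of the same phrase, recorded with different expression.
phrase_bank = {
    "trigger_1": ["perhaps_soft.wav", "perhaps_urgent.wav", "perhaps_flat.wav"],
    "trigger_2": ["not_yet_soft.wav", "not_yet_sharp.wav"],
}

def utterance_for(trigger, rng=random):
    """Pick one expressive take, so repeated triggers don't sound identical."""
    return rng.choice(phrase_bank[trigger])
```

Because the words were chosen to be ambiguous and order-independent, any path through the triggers yields a coherent (if surprising) vocal sequence.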
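The voxel-based cueing from the Nov 16 session can be read as quantizing tracked body positions into grid cells and firing a cue once when a performer enters a cell. A generic sketch — the one-shot behaviour and the 0.5 m voxel size are assumptions, not the Lab's actual design:

```python
def voxel_of(p, size=0.5):
    """Quantize a position in metres to an integer grid cell."""
    return tuple(int(c // size) for c in p)

class VoxelCue:
    def __init__(self, kind, cell):
        self.kind = kind      # "sound", "video", or "light"
        self.cell = cell      # the grid cell that arms this cue
        self.inside = False

    def update(self, occupied):
        """True exactly once each time a performer enters the cue's cell."""
        now = self.cell in occupied
        entered = now and not self.inside
        self.inside = now
        return entered

cue = VoxelCue("light", voxel_of((1.0, 0.0, 2.0)))
frames = [(0.0, 0.0, 0.0), (1.1, 0.1, 2.2), (1.2, 0.0, 2.3), (3.0, 0.0, 0.0)]
hits = [cue.update({voxel_of(p)}) for p in frames]
print(hits)  # [False, True, False, False] — fires once on entry
```

Edge-triggering like this matters for stage cues: a lighting or video cue should fire on entry, not retrigger continuously while the performer stands in the zone.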

copyright - BMO Lab for Creative Research in the Arts, Performance, Emerging Technologies and AI