
BMO LAB

Creative Lab for the Arts, Performance, Emerging Technologies and AI


Apr 22 2022

Arturo Ui – Body Pose Recognition for Sound, Lighting and Video Cues

For our production of The Resistible Rise of Arturo Ui, we explored the use of live motion capture for some of the scenes. At various times, two of the performers wore rigs of 17 sensors under their costumes, with each sensor recording the rotation of one of the major joints in their body. One of the ways we used motion capture was to enable a performer to control all the cues in a scene.
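As a rough illustration of what the rig reports, each frame of motion capture can be thought of as one rotation reading per sensor. The quaternion representation and the flattened feature vector below are assumptions for the sketch, not a description of the actual hardware's output format:

```python
import numpy as np

NUM_SENSORS = 17   # one sensor per major joint, worn under the costume
QUAT_DIMS = 4      # rotations are commonly reported as quaternions (w, x, y, z)

# A single motion-capture frame: one rotation per joint.
# Flattened, this gives a 17 * 4 = 68-dimensional feature vector per frame.
frame = np.zeros((NUM_SENSORS, QUAT_DIMS))
frame[:, 0] = 1.0  # identity rotations as a placeholder

features = frame.flatten()
assert features.shape == (68,)
```

A sequence of such vectors over time is what a pattern-matching system would consume.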

One of the things we are exploring in the lab is performer ‘agency’. We consider this notion of agency in two different ways: “What is gained if we use technology to give a performer more freedom to pace a performance as suits the moment?” and “How might performer-enacted cueing change the practical and economic challenges for small touring shows?”


For the prologue of the show, Sébastien Heins controlled every single cue through his body poses and gestures. In order to achieve this, we started by considering the information contained in any given pose of the human body. The motion capture sensors give us a fairly rich reading of the sequence of poses of a moving body. Modern AI systems are very good at pattern matching. We decided to pair the two so that the AI would learn to recognize key cueing gestures in a reliable and robust manner that accommodates the variations inevitable in live performance.

AI systems learn by being exposed to a large set of labeled examples. At the start of training they respond essentially at random, then gradually improve until they can respond correctly even to new examples that vary somewhat from those they were trained on.

In this particular case, we recorded motion capture data as Sébastien rehearsed, on several different days. He was encouraged to vary the gestures a bit each time so that the AI would learn a more general version of each pose. This data was then labeled: each gesture we wanted to use as a trigger was assigned a number, and every frame of the data was labeled with the number of the cue its pose should trigger, or 0 if the pose was not a cue pose.
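The labeling scheme described above can be sketched as follows. The cue names and frame spans here are hypothetical stand-ins, not the production's actual cue list:

```python
import numpy as np

# Hypothetical cue labels: 0 means "no cue", positive integers identify cues.
CUE_LABELS = {
    "no_cue": 0,
    "lights_up": 1,
    "sound_sting": 2,
    "video_start": 3,
}

def label_frames(num_frames, cue_spans):
    """Label every frame of a take: 0 by default, and a cue id inside each
    (start, end, cue_id) span where the trigger gesture was performed."""
    labels = np.zeros(num_frames, dtype=int)
    for start, end, cue_id in cue_spans:
        labels[start:end] = cue_id
    return labels

# e.g. a 1000-frame take where the 'lights_up' gesture spans frames 120-180
labels = label_frames(1000, [(120, 180, CUE_LABELS["lights_up"])])
```

Labeling per frame rather than per take lets the same recording contribute both positive examples (the gesture itself) and negative ones (everything else the performer did).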

A relatively simple AI was then trained on this labeled data. It successfully learned to recognize the desired gestures and to ignore all sorts of complex movement that was not associated with any trigger.
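To give a flavour of how simple such a classifier can be, here is a minimal sketch on synthetic data. The nearest-centroid model and the fabricated pose vectors are assumptions for illustration; the article does not specify which model the lab actually used:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for labeled pose data: 68-dim feature vectors
# (17 joints x 4 quaternion components); class 0 = no cue, 1..3 = cue gestures.
def make_class_samples(center, n=50):
    # Jitter around a class center mimics the deliberate gesture variation
    return center + 0.05 * rng.standard_normal((n, 68))

centers = {c: rng.standard_normal(68) for c in range(4)}
X = np.vstack([make_class_samples(centers[c]) for c in range(4)])
y = np.repeat(np.arange(4), 50)

class NearestCentroid:
    """About the simplest classifier that still absorbs frame-to-frame
    variation: each class is summarized by the mean of its training poses."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]

model = NearestCentroid().fit(X, y)
accuracy = (model.predict(X) == y).mean()
```

In live use, a model like this would run on every incoming frame, with class 0 (no cue) expected to dominate and the occasional cue class firing the corresponding lighting, sound, or video trigger.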

Written by David Rokeby · Categorized: Blog, Highlights

