For the first meeting, we explored various methods of tracking and responding to movement. I set up an interactive sound installation I created in 2003, which translates a performer's movements into sound. We looked at ways a computer can locate the presence and movement of people within a video image, and experimented with them, both to get a feel for the technology and to think about how it might be used in performance.
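To give a flavour of what "locating movement in a video image" can mean at its simplest, here is a minimal frame-differencing sketch in Python with OpenCV. It is illustrative only, not the installation's actual implementation: it treats the amount of pixel-level change between consecutive camera frames as a rough measure of movement.

```python
# Minimal frame-differencing sketch: detect movement as the fraction of
# pixels that change between consecutive webcam frames. A stand-in for
# the general technique, not the installation's actual code.
import cv2

MOTION_THRESHOLD = 25    # per-pixel intensity change counted as "movement"
TRIGGER_FRACTION = 0.01  # fraction of changed pixels that counts as presence

cap = cv2.VideoCapture(0)  # default webcam
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

while ok:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # The absolute difference between frames highlights moving regions.
    diff = cv2.absdiff(gray, prev_gray)
    _, motion_mask = cv2.threshold(diff, MOTION_THRESHOLD, 255,
                                   cv2.THRESH_BINARY)

    changed = cv2.countNonZero(motion_mask) / motion_mask.size
    if changed > TRIGGER_FRACTION:
        print("movement detected:", changed)  # here: drive sound, lights, etc.

    prev_gray = gray

cap.release()
```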
Then we looked at newer technologies for capturing the human body through depth sensors. We looked at the Azure Kinect, which creates a depth image: each pixel, instead of indicating brightness, tells us how far away things are. This technology can be used both for skeleton tracking (capturing the positions of all the joints in the body) and for other kinds of spatial interaction. We looked at how you can use depth cameras to position triggers or controls anywhere in the physical space.
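The "triggers anywhere in physical space" idea can be sketched quite compactly. The following hedged Python example assumes a depth frame arrives as a NumPy array of millimetre distances (as the Azure Kinect's depth frames provide); `get_depth_frame()` is a hypothetical stand-in for whatever SDK call supplies those frames. A trigger is a rectangle in the image plus a near/far distance band, which together define a box of physical space; it fires when enough pixels fall inside that box.

```python
# Hedged sketch of a depth-camera "spatial trigger". A trigger is an image
# region plus a near/far distance band; it fires when something occupies
# that volume of physical space. get_depth_frame() is hypothetical.
import numpy as np

def region_occupied(depth_mm: np.ndarray,
                    x0: int, y0: int, x1: int, y1: int,
                    near_mm: int, far_mm: int,
                    min_pixels: int = 200) -> bool:
    """True if enough pixels in the region lie within the depth band."""
    region = depth_mm[y0:y1, x0:x1]
    # A depth value of 0 typically means "no reading", so a strict lower
    # bound also filters those out.
    in_band = (region > near_mm) & (region < far_mm)
    return int(in_band.sum()) >= min_pixels

# Example: a trigger floating 1.0-1.5 m in front of the sensor,
# placed at the centre of a 640x576 depth image.
depth_mm = get_depth_frame()  # hypothetical: returns an (H, W) uint16 array
if region_occupied(depth_mm, 280, 200, 360, 280, near_mm=1000, far_mm=1500):
    print("trigger fired")  # e.g. start a sound cue
```

Because the band is defined in real distances rather than screen positions, the same mechanism can place an invisible "button" in mid-air, on a wall, or hovering over a spot on the stage floor.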
We moved pretty fluidly between tech demonstration, playful exploration, and wide-ranging discussion of our impressions and of the uses and implications of these technologies for performance. This mix of engagements was, for me, extremely rich and rewarding, and I think it will set the tone for this phase of the workshop. It is a really unusual opportunity for us in the Lab, and for Ryan and Sebastien, to have the time to consider and explore these technologies deeply.


