Sebastien: Voxel-based cueing of sound, video and lights
Sebastien came in to work with David on converting a scene from a solo play to interactive triggering. We placed triggers for sound cues, a video cue and a lighting cue in the space to see whether such a system might be usable in actual performances.
The session brought up some interesting limitations in the current software David has produced. The software is an extension of work David did for a very specific project a few years ago, so it was not designed to address the kinds of needs Sebastien has for his performance. Based on this experiment, David has been able to adjust and reimplement parts of the software to make it more appropriate for Sebastien's needs. This process of iterative design, in consultation with people who have real-world needs, is a very important part of developing good, usable tools.
One limitation we found is that the sound triggering in David's system was not designed for the case where a sound clip, once triggered by activity in a zone, should play through to the end on its own. Instead, Sebastien had to keep the trigger engaged to keep the clip playing.
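As a rough illustration of the difference, here is a minimal Python sketch of the two trigger behaviours. None of these names come from David's software; the classes and the stub player are hypothetical, just to make the gated-versus-latched distinction concrete.

```python
class StubPlayer:
    """Hypothetical stand-in for a real audio clip player."""

    def __init__(self):
        self._playing = False

    def is_playing(self):
        return self._playing

    def play(self):
        self._playing = True
        print("clip started")

    def stop(self):
        self._playing = False
        print("clip stopped")


class GatedTrigger:
    """The current behaviour: the clip plays only while the zone is occupied."""

    def update(self, zone_occupied, player):
        if zone_occupied and not player.is_playing():
            player.play()
        elif not zone_occupied and player.is_playing():
            player.stop()


class LatchedTrigger:
    """The behaviour Sebastien needed: entering the zone starts the clip
    once, and it then plays through to the end regardless of zone activity."""

    def __init__(self):
        self._fired = False

    def update(self, zone_occupied, player):
        if zone_occupied and not self._fired:
            player.play()  # playback continues even after the zone empties
            self._fired = True
```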
Despite the limitations, we were able to get a rough sketch of the scene working.
Technically, one computer tracked the movement (as seen in the video) and talked to a second computer that controlled the sound cues, lighting cues and video playback. We used Open Sound Control (OSC) as the communications protocol. At the lab we are trying to provide OSC interfaces for all our tools so that they can all be set up to talk to each other.
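For readers unfamiliar with OSC, here is a minimal sketch of what the tracking computer's side of that conversation could look like, using the python-osc library. The IP address, port and OSC address patterns are invented for illustration and are not the ones used in the session.

```python
from pythonosc.udp_client import SimpleUDPClient

# Hypothetical address/port of the cue computer; the real setup may differ.
CUE_COMPUTER_IP = "192.168.1.20"
OSC_PORT = 9000

client = SimpleUDPClient(CUE_COMPUTER_IP, OSC_PORT)

# When the tracker detects activity in a trigger zone, it sends the
# matching cue message over UDP to the cue computer.
client.send_message("/cue/sound/1", 1)    # start sound clip 1
client.send_message("/cue/video/1", 1)    # start the video cue
client.send_message("/cue/light/1", 0.8)  # lighting cue 1 at 80% intensity
```

The appeal of OSC here is exactly what makes it a good fit for the lab: any tool that exposes an OSC interface can send or receive messages like these, so the tracker, sound engine, lighting controller and video player can be mixed and matched freely.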