Today we had the pleasure of welcoming Maev Beaty to join in our explorations. We started by extending our exploration of placing spatial interactive cues. Using the Azure Kinect, we created virtual walls and ceilings in the space that triggered sounds when our bodies touched or passed through them. We created multiple layers of these sensitive sound zones above our heads, which we could reach into, and then created one low enough that we had to crouch or bow to keep from triggering it. We considered how one might build an invisible stage set of constraints that the performer must adhere to, avoid, or transgress at moments in a performance. We talked about how this sensitivity stayed alive but silent in the space when not triggered, as a kind of tension, and wondered how it might be made visible / sensible to the audience so that they sensed the challenge / risk / tension. We also considered how the same tool might make it possible to project a sensitive space just above the heads of the audience, so that the audience could engage with it. We wondered about the virtues of making this something the audience could consciously engage with. Would they? Could it be unconscious, so that it was only triggered if an audience member got up to leave? Similarly, we wondered how a camera pointed at the audience could be used to sense restlessness (using an infrared camera so that it would not be affected by the darkness or stage lighting).
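A minimal sketch of how one of these sensitive zones might work in software, assuming the Kinect's body-tracking stream yields joint positions in metres. The zone bounds, joint names, and coordinates here are hypothetical stand-ins, not actual Azure Kinect SDK output:

```python
# Sketch of an invisible "sensitive zone": an axis-aligned box in room
# coordinates that fires a callback when a tracked joint enters it.
# Joint positions are hypothetical stand-ins for the Azure Kinect
# body-tracking output (x, y, z in metres, y pointing up).

class SoundZone:
    def __init__(self, name, min_corner, max_corner, on_trigger):
        self.name = name
        self.min_corner = min_corner  # (x, y, z) lower bounds
        self.max_corner = max_corner  # (x, y, z) upper bounds
        self.on_trigger = on_trigger
        self.active = False  # the zone stays alive but silent until touched

    def contains(self, point):
        return all(lo <= p <= hi for lo, p, hi in
                   zip(self.min_corner, point, self.max_corner))

    def update(self, joints):
        """Check every tracked joint; trigger once on entry."""
        inside = any(self.contains(p) for p in joints.values())
        if inside and not self.active:
            self.on_trigger(self.name)
        self.active = inside


# Example: a low "ceiling" starting at 1.4 m that forces a crouch or bow.
events = []
ceiling = SoundZone("low_ceiling",
                    min_corner=(-2.0, 1.4, 0.0),
                    max_corner=(2.0, 3.0, 4.0),
                    on_trigger=events.append)

# A head joint at 1.6 m is inside the zone; at 1.2 m (crouching) it is not.
ceiling.update({"head": (0.0, 1.6, 2.0)})
ceiling.update({"head": (0.0, 1.2, 2.0)})
```

Keeping the `active` flag per zone means the sound fires once on entry rather than on every camera frame, which is also what lets a zone sit silently in the space as latent tension until a body crosses it.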
Then we moved on to a discussion of the AI-generated Shakespeare ‘sonnets’ that David had generated and distributed over the weekend. These were produced using a modified version of GPT-2, a recent text-generation model. Sebastien spoke eloquently about his rich and complicated feelings about these texts, and how, in their frequent nonsensicalness, they expressed some of the chaos and madness of love. We discussed various ways we might experiment with these texts. One direction we discussed was to have the texts pre-written, so that a performer could hone an interpretation, using phrasing and selective emphasis to draw it towards sense, or, alternatively, trust it as a given text, to be ridden like a horse, without intention. Another was to have the texts generated in real time, projected, or distributed to cell-phone apps for live reading.
We spent time talking about the ‘temperature’ feature of the text generation. Simply put, as the ‘temperature’ of the AI is pushed higher, the generated text draws less strictly on the probabilities of Shakespeare’s word usage, begins to include modern words and non-existent words, and then falls apart. As the temperature is lowered, the generated text hews closer and closer to the actual probabilities of use in Shakespeare, to the point that it starts to loop, finding clusters of words that point towards each other rather than outward. One of the sonnet selections had initially been generated at a temperature of 0.5 (low), and once it had started looping, David had slowly adjusted the temperature upward until the system broke out of the loop, at which point he lowered the temperature again. We had all been struck by this looping behaviour. Sebastien ended the day performing this looping sonnet: