First we sat down to discuss the experience of last week’s cold read of the AI-generated script: what worked, what didn’t, and how we might adjust the software to improve the experience. Sebastien, Maev and Rick discussed the potential to use the tool for training / rehearsal / skill development, as they all felt that the experience was pleasurable, very challenging, and productive.
Then we went further with the GPT-2 generated scripts, using a modified version of the program that allows us to mix models trained on different bodies of text. The example we explored mixed one model trained on all of Shakespeare’s plays with another trained on a very large corpus of popular music lyrics (spanning Broadway shows, rap, classic pop, folk songs, etc.). We started with a mix of all Shakespeare and no pop lyrics, then progressively added more of the pop-lyrics model into the mix. At first this did not seem to work: the output kept producing Shakespeare-like material, with character names before speeches and stage directions. Eventually we realized that, since the model chooses the next word based on the last 512 word ‘tokens’, the history of previous material was making the mix less fluid. After playing around with some parameters, we were able to do a cold read on a model that was shifting back and forth between the two styles. Hilarity ensued!
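For the curious, here is a simplified sketch of one way to blend two GPT-2 models like this: sample each next token from a weighted average of the two models’ predicted distributions. The checkpoint names are placeholders, both models are assumed to share the standard GPT-2 tokenizer, and this interpolation approach is our illustration rather than necessarily what the tool does internally.

```python
# A minimal sketch of the model-mixing idea. "gpt2-shakespeare" and
# "gpt2-lyrics" are hypothetical fine-tuned checkpoint names.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
shakespeare = GPT2LMHeadModel.from_pretrained("gpt2-shakespeare").eval()
lyrics = GPT2LMHeadModel.from_pretrained("gpt2-lyrics").eval()

@torch.no_grad()
def generate_mixed(prompt, mix=0.5, max_new_tokens=200,
                   context_window=512, temperature=0.9):
    """Sample from a weighted blend of the two models' next-token
    distributions. mix=0.0 is pure Shakespeare, mix=1.0 is pure lyrics."""
    ids = tokenizer.encode(prompt, return_tensors="pt")
    for _ in range(max_new_tokens):
        # Only feed the most recent tokens: a long history of
        # Shakespeare-formatted text keeps pulling the blend back toward
        # character names and stage directions.
        context = ids[:, -context_window:]
        logits_a = shakespeare(context).logits[:, -1, :]
        logits_b = lyrics(context).logits[:, -1, :]
        probs = ((1 - mix) * torch.softmax(logits_a / temperature, dim=-1)
                 + mix * torch.softmax(logits_b / temperature, dim=-1))
        next_id = torch.multinomial(probs, num_samples=1)
        ids = torch.cat([ids, next_id], dim=-1)
    return tokenizer.decode(ids[0])
```

Sweeping `mix` from 0 toward 1 over a session would reproduce the progressive blending we tried, and shrinking `context_window` is one way to keep the accumulated Shakespeare-style history from dominating the mix.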
(video excerpts coming)
After a break we talked about what it might mean to use the spatial triggering system to play back recorded spoken words and phrases instead of sounds… to fill the performance space with a field of possible text fragments and to explore it through movement or gesture. We plan to do some experiments in this direction over the next couple of weeks.
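To make the idea a little more concrete, here is a toy sketch of such a field of fragments: phrases scattered over a 2D floor plan, triggered when a performer’s tracked position enters their zone. Everything in it is hypothetical (coordinates, file names, trigger radius, and the print stand-in for actual tracking input and audio playback).

```python
# Toy sketch: recorded phrases scattered over a 2D floor plan, each one
# triggered when a tracked performer steps within its radius.
import math

# (x, y) anchor in metres -> recorded phrase placed there (hypothetical files)
FRAGMENTS = {
    (1.0, 2.5): "fragment_01.wav",
    (4.0, 1.0): "fragment_02.wav",
    (2.5, 4.5): "fragment_03.wav",
}
TRIGGER_RADIUS = 0.75  # how close the performer must come to fire a fragment

def fragment_at(x, y):
    """Return the phrase whose zone contains the tracked position, if any."""
    for (fx, fy), clip in FRAGMENTS.items():
        if math.hypot(x - fx, y - fy) <= TRIGGER_RADIUS:
            return clip
    return None

# For each frame of tracking data:
clip = fragment_at(2.9, 4.2)
if clip:
    print("trigger", clip)  # stand-in for actual audio playback
```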