Ryan: The Challenge of Indigenous Languages for AI
As the others were unavailable, Ryan and David got together to explore a couple of issues related to Ryan’s interests. First, Ryan raised the point that it would be challenging to replicate, in the space of indigenous cultural production, the kind of AI-generated script experiments we had been exploring the other day with Shakespeare.
The system we were using (GPT-2 from OpenAI, fine-tuned on the plays of Shakespeare) was initially trained on a very large body of English-language writing. The sort of ‘learning’ this system performs generally requires a very large, reasonably consistent body of input text.
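For context, here is a minimal sketch of what this kind of fine-tuning can look like, using the Hugging Face transformers library rather than OpenAI’s original training code; the file name “shakespeare.txt” and the training settings are placeholders, not the exact setup from our session:

```python
# Sketch: fine-tune a pre-trained GPT-2 on a plain-text corpus.
from transformers import (GPT2LMHeadModel, GPT2Tokenizer,
                          TextDataset, DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# "shakespeare.txt" stands in for whatever corpus is available.
# Shakespeare's complete plays are already a small dataset by these standards,
# which is the heart of the problem for languages with far less written text.
train_dataset = TextDataset(tokenizer=tokenizer,
                            file_path="shakespeare.txt",
                            block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(output_dir="gpt2-shakespeare",
                         num_train_epochs=3,
                         per_device_train_batch_size=4)

Trainer(model=model, args=args,
        data_collator=collator,
        train_dataset=train_dataset).train()
```

Note that the pre-training step this builds on used many gigabytes of English text; the fine-tuning shown here only nudges that existing model toward a new style.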
Ryan and David discussed the challenges this data requirement would pose for a similar exploration with indigenous plays, since most indigenous languages have comparatively small bodies of written text to train or fine-tune on.
(related background info: https://www.thecanadianencyclopedia.ca/en/article/aboriginal-people-languages)
Ryan: Automatic Spatially-Based Lighting Cues
Then we embarked on an initial exploration of interactive lighting, using the Azure Kinect depth sensor to place lighting cues in space. The system lets us define locations in three-dimensional space that act as triggers or continuous controllers, driving lights over standard DMX. (We had not yet refined the trigger positions in this first test, so the first lighting cue area reaches a bit too far to the front; that is why the light stays on after he steps out of the lit area.)
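To make the idea concrete, here is a minimal sketch of a spatial trigger zone, not the actual code from the session: a rectangular 3D “cue zone” is defined in sensor coordinates, and a tracked position turns a DMX channel on or off depending on whether it falls inside. get_tracked_position() and send_dmx() are hypothetical placeholders for the real Azure Kinect and DMX interfaces.

```python
from dataclasses import dataclass

@dataclass
class CueZone:
    """Axis-aligned box in sensor space (metres), mapped to one DMX channel."""
    x_min: float; x_max: float
    y_min: float; y_max: float
    z_min: float; z_max: float
    dmx_channel: int

    def contains(self, x: float, y: float, z: float) -> bool:
        return (self.x_min <= x <= self.x_max and
                self.y_min <= y <= self.y_max and
                self.z_min <= z <= self.z_max)

def get_tracked_position() -> tuple[float, float, float]:
    # Placeholder: in the real system this would come from the Azure Kinect
    # body-tracking data (e.g. a joint of the nearest tracked performer).
    return (0.2, 0.0, 1.8)

def send_dmx(channel: int, value: int) -> None:
    # Placeholder: in the real system this would write to a DMX interface
    # (USB-to-DMX adapter, Art-Net node, etc.).
    print(f"DMX ch {channel} -> {value}")

# One cue zone, roughly a 1 m x 2 m area between 1.5 m and 2.5 m from the sensor.
zones = [CueZone(-0.5, 0.5, -1.0, 1.0, 1.5, 2.5, dmx_channel=1)]

x, y, z = get_tracked_position()
for zone in zones:
    # Simple on/off trigger; a continuous controller could instead scale the
    # value by how deep into the zone the tracked position sits.
    send_dmx(zone.dmx_channel, 255 if zone.contains(x, y, z) else 0)
```

In practice this check runs on every frame from the depth sensor, and refining the zone boundaries (the issue in our first test) is just a matter of adjusting the box coordinates.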