Workshop: February 29th, 1-4 pm — Artist’s Talk / Lecture: March 2, 4:30-6 pm
The BMO Lab for Creative Research in the Arts, Performance, Emerging Technologies

Kyle McDonald is an artist working with code. He crafts interactive installations, performances, sneaky interventions, playful websites, workshops, and toolkits for other artists working with code. He explores the possibilities of new technologies: to understand how they affect society, to misuse them, and to build alternative futures. He works with machine learning, computer vision, and social and surveillance tech. He has been an adjunct professor at NYU’s ITP, a member of F.A.T. Lab (Free Art and Technology), community manager for openFrameworks, and artist in residence at the STUDIO for Creative Inquiry at CMU and at YCAM in Japan. His work has been commissioned and shown around the world, including at the V&A (London), NTT ICC (Tokyo), Ars Electronica (Linz, Austria), Sonar (Barcelona), TodaysArt (The Hague), and Eyebeam (NYC).
He led a workshop on February 29th introducing participants to p5.js, an open-source, web-based programming environment, for creating interactive systems using computer vision and machine learning.

Workshop description:
This hands-on workshop will begin by revisiting the basics of coding with p5.js, including drawing, animation, and interactivity. We will then cover computer vision techniques based on simple pixel processing and machine learning, with a focus on tracking bodies, faces, and hands.

p5.js is a JavaScript library designed to make coding accessible for artists, designers, and educators. “Computer vision” refers to a broad collection of techniques that allow computers to make intelligent assertions about what’s going on in digital images and video. “Machine learning” refers to explaining tasks to computers via examples (training data) instead of instructions (code). Using p5.js, we can quickly leverage the power of new computer vision algorithms built on machine learning to create camera-driven interactive artwork.

We will discuss the ml5.js toolkit and how it fits into the broader ecosystem of modern machine learning tools. We will use ml5.js to detect common objects in front of the webcam, and train a custom classifier that can distinguish between personal objects in front of the webcam (a short sketch of this workflow follows below). The class will adapt to the familiarity of the students: if the fundamentals of creative coding are already well understood, by the end of the workshop we will be discussing higher-level machine learning concepts such as generative adversarial networks for image generation and recurrent neural networks for text and music generation, without going into these topics in depth.
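To give a sense of the workflow, here is a minimal sketch of camera-driven object detection along the lines described above. It assumes p5.js and a pre-1.0 release of ml5.js are loaded on the page, and uses ml5.objectDetector with the pretrained COCO-SSD model; model names and callback signatures may differ in newer ml5.js releases.

```javascript
// Minimal p5.js + ml5.js sketch: show the webcam and draw boxes
// around common objects detected by a pretrained COCO-SSD model.
// Assumes p5.js and ml5.js (pre-1.0) are included via <script> tags.
let video;
let detector;
let detections = [];

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  // Load the object detection model, then start detecting.
  detector = ml5.objectDetector('cocossd', modelReady);
}

function modelReady() {
  detector.detect(video, gotDetections);
}

function gotDetections(error, results) {
  if (error) {
    console.error(error);
    return;
  }
  detections = results;
  // Keep detecting on the latest video frame.
  detector.detect(video, gotDetections);
}

function draw() {
  image(video, 0, 0);
  for (const d of detections) {
    noFill();
    stroke(0, 255, 0);
    rect(d.x, d.y, d.width, d.height);
    noStroke();
    fill(0, 255, 0);
    text(`${d.label} ${nf(d.confidence, 0, 2)}`, d.x, d.y + 16);
  }
}
```

Training a custom classifier for personal objects follows a similar pattern: instead of a fixed detector, ml5.js lets you capture labeled example images from the webcam and retrain a classifier on top of a pretrained network before classifying new frames.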