During RISD’s 2022 winter session, I took the course Of Sound and Vision, in which I created several generative art programs (i.e. patches) using Max/MSP/Jitter. Taught by Mark Cetilia, the course explores the ways sounds can produce or change images, the ways images can conversely shape sounds, and the many possibilities that live between these extremes.
The projects were fairly open-ended, so for each one I dived into a technique of interest to see what I could make. While I had previously studied generative visual art using Processing, it was interesting to see what type of work Max’s sound-driven environment favored.
My first project was an audio piece combining four sine oscillators and four band-limited noise generators in a descending four-note pattern. I mainly focused on how the sounds interacted with each other, creating beat frequencies and complicated phantom sub-rhythms.
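The beating effect comes from summing sine waves at nearby frequencies: the mix is mathematically a tone at the average frequency, amplitude-modulated at the difference frequency. A minimal sketch of that idea (the frequencies here are illustrative, not the piece’s actual tuning):

```python
import numpy as np

sr = 44100                # sample rate in Hz
f1, f2 = 440.0, 443.0     # two oscillators 3 Hz apart
t = np.arange(sr) / sr    # one second of time samples

# Summing two close sine oscillators produces "beats" at |f1 - f2| Hz
mix = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# By the sum-to-product identity, the mix equals a (f1 + f2)/2 Hz tone
# inside a slow cosine envelope that pulses at |f1 - f2| = 3 Hz
envelope_tone = 2 * np.cos(np.pi * (f1 - f2) * t) * np.sin(np.pi * (f1 + f2) * t)
```

With four oscillators and four noise bands, the pairwise difference frequencies overlap, which is where the phantom sub-rhythms emerge.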
I’ve recorded several performances of the project that you can listen to below. Each is a slightly different live tuning of the piece’s parameters.
Exploring Jitter, I looped and distorted a video from Cut discussing break-ups to juxtapose conclusive stories with a never-ending context. The resulting piece plays with resolution, playback speed, and video mixing.
I discovered this patch also produced interesting results when combined with a video from my Discotheque 1 project made in Processing.
This piece explores extreme uses of contemporary AI-powered creative tools, specifically GAN-generated portraits using thispersondoesnotexist.com, neural filters for emotions using Photoshop, and music generated in the style of existing musicians using Jukebox.
For my final project, I wanted to explore generative typography. Max’s tools unfortunately aren’t as sophisticated as those in p5.js, so I opted to embrace the low-tech aesthetic and create a piece reminiscent of WordArt or bowling alley strike animations. Unlike my prior projects, this one is autonomous and proceeds based on a text document.
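The text-driven structure can be sketched in a few lines: step through a document one word at a time and let each word become the next frame of animated type. This is only an illustration of the sequencing idea, not the actual patch logic:

```python
def word_sequence(text):
    """Yield the words of a text in order; each word drives one frame."""
    for word in text.split():
        yield word

# Hypothetical source text; in the piece, the words come from a document
frames = list(word_sequence("of sound and vision"))
# Each frame would then be rendered in turn as animated type.
```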