Audio-Visual Mandelbrot

From UBC Wiki

Audio-Visual Mandelbrot Set Representation

Authors: Gaurav, Ruperto, Nathan

This is a project to test the feasibility of using Haskell to build an audio-visual file that maps audio to a continuous visualization of the Mandelbrot set.

What is the problem?

The idea is to render the Mandelbrot set repeatedly while varying its parameters continuously, generating images (frames) that can be stitched together into a video.
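Each frame reduces to the classic escape-time computation: for every pixel's point c, iterate z ← z² + c from zero and count the steps until |z| exceeds 2. A minimal sketch in Haskell (the function name and iteration cap are our illustrative choices, not the project's actual code):

```haskell
import Data.Complex (Complex (..), magnitude)

-- Escape-time count for one point c: iterate z <- z^2 + c starting
-- from 0 and return how many steps it takes for |z| to exceed 2,
-- capped at maxIter. Points that never escape (count == maxIter)
-- are treated as members of the Mandelbrot set.
escapeCount :: Int -> Complex Double -> Int
escapeCount maxIter c = go 0 (0 :+ 0)
  where
    go n z
      | n >= maxIter    = maxIter   -- assumed to be in the set
      | magnitude z > 2 = n         -- escaped after n iterations
      | otherwise       = go (n + 1) (z * z + c)

main :: IO ()
main = do
  print (escapeCount 100 (0 :+ 0))     -- interior point: hits the cap, 100
  print (escapeCount 100 (2 :+ 0))     -- escapes after two steps, 2
```

Mapping the escape count to a colour for each pixel, then sweeping a parameter (e.g. the zoom level or the viewing window) across frames, yields the continuous animation described above.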

What is the something extra?

Mapping audio to each iteration of the Mandelbrot set, such that each frame of the video has a representative (though not necessarily unique) sound.

What did we learn from doing this?

We learned about using libraries with Haskell. More specifically, we learned how to use FFmpeg to stitch many images/frames into a video.
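As a rough sketch of the stitching step, assuming numbered PNG frames and shelling out to the ffmpeg command-line tool via System.Process (the frame pattern, frame rate, and output name are illustrative, not the project's actual setup):

```haskell
import System.Process (callProcess)

-- Argument list for stitching numbered PNG frames (frame0000.png,
-- frame0001.png, ...) into a video; kept pure so it is easy to test.
ffmpegArgs :: Int -> FilePath -> [String]
ffmpegArgs fps out =
  [ "-y"                    -- overwrite the output file if it exists
  , "-framerate", show fps  -- input frame rate
  , "-i", "frame%04d.png"   -- pattern matching the numbered frames
  , "-pix_fmt", "yuv420p"   -- widely compatible pixel format
  , out
  ]

-- Invoke the ffmpeg executable with those arguments.
stitchFrames :: Int -> FilePath -> IO ()
stitchFrames fps out = callProcess "ffmpeg" (ffmpegArgs fps out)

main :: IO ()
main = putStrLn (unwords ("ffmpeg" : ffmpegArgs 30 "mandelbrot.mp4"))
```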

We learned that we could compile our program into an executable and use optimisation flags (we used the -O2 flag) to make it run much faster. Originally, the program was very slow: it took more than 10 minutes to generate around 10-15 seconds of video. After applying the optimisation flag, it could generate a minute of video in about a minute.

We also solidified our understanding of IO and added sanitization/validation of input.
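A sketch of the kind of input validation described above: parse a frame count from a string with readMaybe and reject non-numeric or out-of-range values instead of letting a bare `read` crash the program (the function name and bounds are illustrative, not the project's actual limits):

```haskell
import Text.Read (readMaybe)

-- Validate a user-supplied frame count, returning either an error
-- message or the parsed value. readMaybe avoids the runtime
-- exception that `read` would throw on malformed input.
parseFrameCount :: String -> Either String Int
parseFrameCount s =
  case readMaybe s of
    Nothing -> Left ("not a number: " ++ s)
    Just n
      | n < 1 || n > 10000 -> Left ("frame count out of range: " ++ show n)
      | otherwise          -> Right n

main :: IO ()
main = do
  print (parseFrameCount "900")   -- Right 900
  print (parseFrameCount "abc")   -- Left "not a number: abc"
```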

One of the hardest parts of the project was simply installing FFmpeg and getting it to compile and run. We spent a lot of time on this before we could even start creating videos of Mandelbrot sets. Beyond that, using Haskell and FFmpeg together was not easy, mainly because we found minimal documentation, if any.

We’re really satisfied that we got our program to work, though. The only feature we decided was out of scope was adding/generating audio to our videos. We had already spent a lot of time on the video itself (with no audio), and writing an audio stream is not straightforward given the lack of documentation, examples, and libraries, so we dropped it to avoid scope creep.


Please visit our GitHub repository.