Fractals for fun?

The last week or so, as well as working on the mixes for my first album, I decided to have a look into fractals, as it seemed like a logical next step from the graphics animations I have been working on so far. I found that Wikipedia has a lot of good resources, but probably too much detail for a beginner, especially if you follow all the rabbit holes it leads you down, like I did. A far better introduction was a YouTube video, which brought together pretty much everything I’d read to date and then some. (Sadly it’s been taken down since I wrote this post, so I can no longer share it with you.)

At least some of the more visually appealing fractal patterns, e.g. the Mandelbrot and Julia sets, are constructed using maths that involves the square root of minus one. Those sets also looked as though they were not going to be easy to animate, as it seems the whole image is calculated pixel by pixel, with the colour of each pixel set by how many iterations it takes for the value at that point to escape past a threshold.
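To make that pixel-by-pixel idea concrete, here is a minimal Python sketch of the escape-time calculation (the function name and parameters are my own, not from any particular library):

```python
def escape_time(cx, cy, max_iter=100):
    """Iterate z -> z*z + c from z = 0 and count the steps before
    |z| exceeds 2; points that never escape belong to the set."""
    zx, zy = 0.0, 0.0
    for i in range(max_iter):
        if zx * zx + zy * zy > 4.0:          # |z| > 2: the orbit escapes
            return i                          # iteration count sets the colour
        zx, zy = zx * zx - zy * zy + cx, 2 * zx * zy + cy
    return max_iter                           # treated as "inside" the set

# One row of pixels across the real axis: points far outside escape
# almost immediately, points near the set take many iterations.
row = [escape_time(x / 10.0, 0.0) for x in range(-25, 5)]
```

A Processing sketch would do the same thing inside its draw loop, mapping the returned count to a pixel colour.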

There are some more basic fractals that were easier to understand, however, such as the Cantor set and the Sierpinski carpet, which are made by an iterative process. The Cantor set starts with a line and takes the middle third out. Then you rinse and repeat, taking the middle third of each of the new lines away for the next iteration. The Sierpinski carpet does something fairly similar, but with squares. I could see a way through the fog for programming visuals for these types of fractals in Processing, and have incorporated some of them into a new set of visuals for displaying during my next live gig.
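The Cantor iteration is simple enough to sketch in a few lines of Python (again, the function name is my own, and a Processing version would just draw each segment instead of returning it):

```python
def cantor_step(segments):
    """One iteration: replace each segment (a, b) with its two outer
    thirds, dropping the open middle third."""
    out = []
    for a, b in segments:
        third = (b - a) / 3.0
        out.append((a, a + third))           # left third survives
        out.append((b - third, b))           # right third survives
    return out

# Start from the unit interval and rinse and repeat a few times.
level = [(0.0, 1.0)]
for _ in range(3):
    level = cantor_step(level)
# After n steps there are 2**n segments, each of length (1/3)**n.
```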

Vorsprung Durch Technik

After some more head-butting, and reaching a point where I didn’t think I would be able to solve the original problem of routing live sound through a self-programmed music visualiser, I went back to basics. Ditching the sound module provided on the Creative Programming course, I looked into Processing’s own sound library using the online documentation.

And bingo: using the available sample code in the online tutorial, I suddenly had something that was responding to input from the soundcard. Just like that. The graphics were terrible – just a fuzzy line at the bottom of the screen – but the body was still twitching, so to speak.
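For anyone curious, the number a sketch like this reacts to is essentially an amplitude level for each buffer of samples. This isn’t the Processing Sound API itself, just a rough Python sketch of the idea, with a synthetic sine wave standing in for the soundcard input:

```python
import math

def rms_level(samples):
    """Root-mean-square level of an audio buffer: the kind of single
    number a visualiser maps to the height of a line or shape."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# A synthetic full-scale 440 Hz sine at 44.1 kHz standing in for
# 0.1 seconds of live input from the soundcard.
buf = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(4410)]
level = rms_level(buf)
```

In the real sketch, Processing’s sound library hands you this kind of level directly each frame, and the graphics code just maps it to whatever is being drawn.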

So, moving on from there, I’ve incorporated the relevant commands into the visualiser code and developed the graphics further to create something workable. The short video here is just a teaser: I want to keep the full graphics for live shows. Here, output from Ableton Live Lite is being picked up by the visualiser from the signal going through the soundcard, and processed on the fly.

Digital Video, DIY style

I’ve already mentioned in previous posts that I’ve been working on a music visualiser application based on Digital Signal Processing (DSP) of sound, which could then be used to project images to screen during performances of my music.  It’s a significant detour from writing music itself, but would be particularly valuable for when I am performing instrumental pieces, to add interest to the listener experience. 

Unfortunately, I have a few challenges to overcome before my work so far can be used to animate live music. Namely, I can currently only use the visualiser on pre-recorded music, which obviously isn’t any good for live work, and it is only coping with small files at this stage. So, I need to learn how to get it to accept streaming audio, and figure out how to get a live sound signal into it.  If indeed that is possible.

I discovered another potential use for my work today, however, and that is to use the applications I’m writing to generate video art. It turns out that this is pretty easy to do by recording the app running on my screen with QuickTime, then trimming it in video editing software. The most difficult thing seems to be getting the sound and images to line up correctly where they are supposed to be synchronised, because recording the app running doesn’t capture the sound (unless that was a mistake on my part… I must check whether I missed any settings). I’ve probably not been completely accurate syncing up the attached example, but it’s close enough this time.

More Digital Art


I’ve kept going with the Creative Programming course that I talked about last time, and continued coding. After an amazingly good start, I reached a frustrating head-butting stage which I’m not sure I’m out of yet. I’ve now covered the whole syllabus, but need to go back and properly get to grips with waveform synthesis, to be able to do the last assignment and get the qualification. Plus, it might also be useful for performances if I can create an interesting and unique new digital instrument.
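For the record, the waveform synthesis I still need to get to grips with starts from something very simple: generating a tone sample by sample and mixing tones together. A hedged Python sketch of the basics (names and numbers are my own choices, not from the course):

```python
import math

def sine_wave(freq, sample_rate=44100, duration=0.01, amp=0.5):
    """Generate a short burst of a sine tone as a list of samples --
    the basic building block of waveform synthesis."""
    n = int(sample_rate * duration)
    return [amp * math.sin(2 * math.pi * freq * i / sample_rate)
            for i in range(n)]

# A crude "instrument": mix a fundamental with a quieter octave above.
note = [a + 0.5 * b for a, b in zip(sine_wave(220), sine_wave(440))]
```

More interesting instruments come from varying the mix of partials and shaping the amplitude over time, which is roughly what the last assignment seems to be about.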

I’ve been working mostly on a music visualiser application based on Digital Signal Processing (DSP) of the sound, which I’ll talk about more in another post. In the meantime, whilst the visualiser is still in experimental mode, I thought I’d share a few more images I’ve created with my graphics-only app, plus an interactive version where a sound was played and its pitch and speed were changed by what was happening on the screen.

 

A Worthwhile Detour

 

A couple of weeks ago, I went to a couple of events at Lincoln’s Sonophilia Festival (the UK one, not the one in Nebraska). One of these events was the Weird Garden experimental music club, and a chap called Dave C was demonstrating Lissajous curves by generating four tones using a Raspberry Pi and an Arduino board, then plotting these on an oscilloscope, projected so we could all see it. I thought that this would interest my Dad – he’s been known to experiment with a Raspberry Pi – and sent him some photos of the set-up.
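A Lissajous curve is just what you get when you feed one tone to an oscilloscope’s X input and another to its Y input. A quick Python sketch of the maths (my own function, nothing to do with Dave C’s actual Raspberry Pi setup):

```python
import math

def lissajous(freq_x, freq_y, phase=math.pi / 2, points=1000):
    """Sample the parametric curve x = sin(a*t + phase), y = sin(b*t):
    the shape two tones trace on an oscilloscope in X-Y mode."""
    return [(math.sin(freq_x * t + phase), math.sin(freq_y * t))
            for t in (2 * math.pi * i / points for i in range(points))]

# Equal frequencies 90 degrees out of phase trace a circle;
# a 1:2 frequency ratio traces a figure of eight.
circle = lissajous(1, 1)
eight = lissajous(1, 2)
```

With four tones, as in the demonstration, you get the more intricate patterns we saw projected.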

Well, this started a conversation and a half. It basically headed in the direction of ‘you should learn some Digital Signal Processing’, with me misunderstanding what that might entail: I thought it would involve circuit board design and a very steep learning curve. There was a lot of talk at cross-purposes, but Dad eventually explained that I would need to learn some new programming skills, in a language called Visual DSP. The boards are already designed, so I wouldn’t need to worry about that, and you can do DSP on your computer anyway, because computers already have the physical tools needed to do DSP.

I said I would look into it, so that I could possibly learn to present my music in a visual form when playing live. Any programming skills I pick up along the way are a bonus, anyway. So, I enrolled on several Coursera courses to check out the content, and got stuck into one of them, which I will follow through and hopefully complete. This course isn’t specifically a DSP course – it is Creative Programming for Digital Media & Mobile Apps – but it seems to be pitched at about my level. It uses Java/JavaScript (another language I haven’t used before), so any DSP I do when I get more advanced at programming might need to be via my computer rather than an external board, unless there is a way round that… Actually, it looks like I can tell it to output the program in Python, the language that the Raspberry Pi uses… I have a lot to learn.

I’m still working my way through week 1 of the course, but the images above are all screenshots made from code that I put together* and then ran and interacted with to make art. I modified the code a little between each image captured.

Even if I get horribly stuck from here onward, I will have an app that I can use to make some unique album art!

*Disclaimer: my code also uses functions taken from a module coded by the course providers.