There is a reason Glen Marshall, aka butterfly.ie, has developed something of a cult following; see here:
Part of the beauty of this creation is its simplicity. The other is that the visuals are, in some sense, generated by the music. Glen Marshall's Processing code takes in audio signals, in this case a Boards of Canada track, and specifies what type of visuals each signal should generate. The details are difficult to pinpoint, but Glen has kindly broken down what is going on in the video below: generative visuals in a similar vein, soundtracked by Radiohead's Bodysnatchers:
1. Bass guitar – makes the red shading on the red zeno pulsate.
2. Lead guitar – affects intensity of inner glows of both zenos.
3. Treble – affects size of sprites.
4. Vocal – additional affector to red sprite size, affects speed and directions of all sprites, affects size of stars in background.
5. X Factor – this is the name I gave to the overall amplitude – an ‘excitement’ factor. This controls the camera Z depth (near/far) – loudness brings us closer in, quieter breaks bring us out again. This was important to get that sense of a non-static journey and spatial interest that married with the music. The X Factor also increases the speed of the zenos growing, and the intensity of the blue cloud.
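The kind of mapping Glen describes can be sketched in a few lines. The sketch below is hypothetical Python, not his actual Processing code: the parameter names, scale factors, and camera range are all invented for illustration, but the structure mirrors his breakdown (per-band amplitudes driving individual visual parameters, plus an overall "X Factor" driving camera depth):

```python
def map_audio_to_visuals(bass, lead, treble, vocal):
    """Map per-band amplitudes (0.0-1.0) to visual parameters.

    All parameter names and scaling constants are illustrative,
    loosely following Glen Marshall's own breakdown.
    """
    # Overall amplitude as an 'excitement' factor, per point 5 above.
    x_factor = (bass + lead + treble + vocal) / 4.0
    return {
        "red_zeno_pulse": bass,               # bass drives the red shading pulse
        "zeno_glow": lead,                    # lead guitar -> inner glow intensity
        "sprite_size": treble + 0.5 * vocal,  # treble sets size; vocal adds to it
        "star_size": vocal,                   # vocal scales the background stars
        "camera_z": 100.0 - 80.0 * x_factor,  # louder -> camera pulls in closer
    }

# A loud, bass-heavy moment pulls the camera in; a quiet break backs it out.
params = map_audio_to_visuals(bass=0.8, lead=0.2, treble=0.5, vocal=0.1)
```

The point of routing everything through a single function like this is that the same frame of audio analysis can fan out to many visual parameters at once, which is what gives the result its "married to the music" quality.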
I don't find this video quite as interesting as the former, perhaps because the Boards of Canada track lends itself better to this kind of project; the more atmospheric the music, the better it tends to work with abstract visuals.
Regardless, the point should be clear: repurposing signals of a particular nature for unintended ends can yield interesting art. In this case, audio signals generating motion.
We should of course know by now that interactions of this kind appear in many fields these days. Some examples:
- dancers strapped with all forms of sensors, accelerometers and otherwise, their movements being mapped to audio signals (song) and video (usually presented in the form of various projections);
- collaborations between scientists and artists in which the artist uses data from the scientists' research to generate painting or sound, instances of which have actually given the scientists some clarity about their own research (example, anyone?);
I was speaking recently to a friend who is (among other things) a VJ, and when I asked whether he was interested in these aspects of creating visuals to accompany music, he said he preferred to create the visuals live himself. But if you can have both aspects playing into one another, why wouldn't you? Take something as simple as the pitch of a particular vocal part altering the hue of a particular filter ever so slightly. When you start to introduce small variables like this, things become really interesting. This air of contained randomness opens up a world of possibilities; these small permutations can even reveal characteristics and connections we would never have dreamed of before. What if visuals corresponding to voice could differentiate yours from mine? What would that mean for the future of voice recognition?
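That pitch-to-hue idea really is only a few lines. Here is a hypothetical Python sketch: the vocal frequency range, the base hue, and the size of the shift are all arbitrary choices made for illustration:

```python
def pitch_to_hue_shift(freq_hz, base_hue=0.6, max_shift=0.05):
    """Nudge a filter's hue (on a 0.0-1.0 wheel) slightly by vocal pitch.

    The frequency range and shift magnitude are arbitrary: low pitches
    push the hue one way, high pitches the other, 'ever so slightly'.
    """
    lo, hi = 80.0, 1000.0  # rough vocal fundamental range, Hz (assumed)
    # Clamp the pitch into range, then normalise to 0..1.
    t = (min(max(freq_hz, lo), hi) - lo) / (hi - lo)
    # Map 0..1 to -max_shift..+max_shift around the base hue, wrapping at 1.0.
    return (base_hue + max_shift * (2.0 * t - 1.0)) % 1.0
```

Feed it a pitch tracker's output each frame and the filter's colour drifts with the voice: subtle enough to feel organic, deterministic enough that the same voice produces the same drift, which is exactly what the voice-differentiation question above is getting at.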