music


There is a reason Glen Marshall, aka butterfly.ie, has developed something of a cult following; see here:

Part of the beauty of this creation is its simplicity. The other part is the fact that the visuals are, in some sense, generated by the music. Glen Marshall’s Processing code takes in audio signals, in this case a Boards of Canada track, and specifies what type of visuals each signal should generate. The details are difficult to pinpoint, but Glen has kindly broken down what is going on in the video below, generative visuals in a similar vein with a soundtrack provided by Radiohead’s Bodysnatchers (I’ve attempted a rough code sketch of this kind of mapping just after the list):

1. Bass guitar – makes the red shading on the red zeno pulsate.
2. Lead guitar – affects intensity of inner glows of both zenos.
3. Treble – affects size of sprites.
4. Vocal – additional affector to red sprite size, affects speed and directions of all sprites, affects size of stars in background.
5. X Factor – this is the name I gave to the overall amplitude – an ‘excitement’ factor. This controls the camera Z depth (near/far) – loudness brings us closer in, quieter breaks bring us out again. This was important to get that sense of a non-static journey and spatial interest that married with the music. The X Factor also increases the speed of the zenos growing, and the intensity of the blue cloud.
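
He hasn’t published the source as far as I know, but the gist of a mapping like this is easy enough to mock up. Below is a minimal Python sketch of the general idea, emphatically not his code: the frequency bands, parameter names and the camera-depth formula are all guesses of mine, chosen only to mirror the list above.

```python
import numpy as np

SAMPLE_RATE = 44100  # assumed sample rate of the incoming audio

def band_energy(frame, low_hz, high_hz, sample_rate=SAMPLE_RATE):
    """Mean FFT magnitude between low_hz and high_hz for one audio frame
    (a 1-D NumPy array of samples)."""
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    mask = (freqs >= low_hz) & (freqs < high_hz)
    return spectrum[mask].mean() if mask.any() else 0.0

def visual_parameters(frame):
    """Turn one frame of audio into the (hypothetical) visual controls above."""
    bass   = band_energy(frame, 40, 250)      # bass guitar -> red pulsation
    lead   = band_energy(frame, 250, 2000)    # lead guitar -> inner glow
    treble = band_energy(frame, 2000, 8000)   # treble -> sprite size
    x_factor = np.sqrt(np.mean(frame ** 2))   # overall loudness, the 'X Factor'
    return {
        "red_pulse":   bass,
        "inner_glow":  lead,
        "sprite_size": treble,
        "camera_z":    1.0 / (1.0 + x_factor),  # louder -> camera pulls in closer
    }
```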

I don’t find this video quite as interesting as the first, perhaps because the Boards of Canada track lends itself better to the project: more atmospheric music tends to sit well with abstract visuals.

Regardless, the point should be clear: using signals of one nature for purposes they were never intended for can yield interesting art. In this case, audio signals generating motion.

We should of course know by now that interactions of this kind appear in many fields these days. A couple of examples:

  • dancers strapped with all forms of sensors, accelerometers and otherwise, their movements being mapped to audio signals (song) and video (usually presented in the form of various projections);
  • collaborations between scientists and artists in which the artists use data relevant to the scientists’ research to generate painting or sound, instances of which have actually provided the scientists with some clarity concerning their own research (example, anyone?).

I was speaking to a VJ (among other things) friend of mine recently, and when I asked whether he was interested in this kind of approach to creating visuals to accompany music, he said he preferred to create the visuals live himself. But if you can have both aspects playing into one another, why wouldn’t you? Even something as simple as the pitch of a particular vocal part altering the hue of a particular filter ever so slightly, for instance. When you start to introduce slight variables like this, things can become really interesting. This air of contained randomness opens up a world of possibilities; these small permutations could even reveal characteristics and connections we would never have dreamed of before. What if, indeed, visuals corresponding to voice could differentiate yours from mine? What would that mean for the future of voice recognition?
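
To make that pitch-to-hue example concrete, a toy version might look like the sketch below. Everything here is illustrative: the frequency range, base hue and modulation depth are arbitrary, and in practice the pitch would come from a pitch tracker and the hue would be pushed to whatever filter is doing the work.

```python
def hue_for_pitch(vocal_pitch_hz, base_hue=0.60, lo_hz=80.0, hi_hz=1000.0, depth=0.05):
    """Nudge a filter's hue (0..1) slightly, according to where the vocal
    pitch sits between lo_hz and hi_hz. All names and ranges are made up."""
    t = (vocal_pitch_hz - lo_hz) / (hi_hz - lo_hz)
    t = min(max(t, 0.0), 1.0)            # clamp to [0, 1]
    return (base_hue + depth * t) % 1.0  # small shift around the base hue
```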

[www.butterfly.ie]

Watch the above before reading on. I was naively (or not so naively) under a certain misapprehension: I thought that Daito Manabe, the star of the above video, was triggering sounds using his facial muscles. That is, I assumed the objects attached to his face were some sort of motion sensors (perhaps accelerometers would do the trick; my knowledge of complex sensors is lacking here), and thus the apparent blink of an eyelid or twitch of the upper lip would send out a digital signal and voilà: hook up a computer and these facial movements could easily be converted into MIDI (and/or OSC) messages to trigger live audio and/or video using your music/video production application of choice.
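
The scheme I had imagined is, in software terms at least, not hard to sketch. Assuming some sensor library hands you acceleration readings per electrode, a few lines could forward them as OSC messages (here via the python-osc package) for an audio or video application to map however it likes. The addresses, threshold and channel names below are all invented for illustration.

```python
from pythonosc.udp_client import SimpleUDPClient  # pip install python-osc

client = SimpleUDPClient("127.0.0.1", 9000)  # wherever the audio/video app listens
TWITCH_THRESHOLD = 0.8  # arbitrary threshold, in g, for counting a movement

def on_sensor_reading(channel, acceleration):
    """Forward one facial-sensor reading as OSC; anything above the
    threshold also fires a trigger message (e.g. a blink or lip twitch)."""
    client.send_message(f"/face/{channel}/level", float(acceleration))
    if abs(acceleration) > TWITCH_THRESHOLD:
        client.send_message(f"/face/{channel}/trigger", 1)
```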

I was shocked by how fascinating the correlation between sounds and movements was, and by the choices made as to which sounds to attach to each part of the face. Of course, this was all before it dawned on me that this is not what is happening. In fact, it is quite the opposite: each sound sends out an electric signal which is evidently hardwired directly to our friend Daito’s face and, in essence, his muscular and nervous systems. An electric shock thus creates a small, highly visible spasm. This is potentially more shocking and less artistically interesting than what I originally thought was happening. But I’m not actually sure. I feel conflicted.

Now what if we had both systems just described working in conjunction: one person using their muscles to trigger sounds, and those very sounds then translated into electric pulses hardwired INTO SOMEONE ELSE’S NERVOUS SYSTEM. Is this potentially the future of puppetry? And what if there were no one-to-one correlation between the twitcher and the twitchees? A cheek flex could result in a flailing leg; a nodding head could set off a breakdance.

Could this be the future of DJing? No longer spinning records or even beatmixing MP3s, as seems to commonly occur these days, but a disc jockey making music with his entire body and choreographing the audience’s dance at the same time: drum rolls necessarily accompanied by strained muscular pirouettes each and every time they occur, horn stabs forcing folk to jump in the air, a certain digital sound sending the whole floor into the robot. At the push of a button the DJ could make everyone applaud… Anyone?