interactive


There is a reason Glen Marshall, aka butterfly.ie, has developed something of a cult following; see here:

Part of the beauty of this creation is its simplicity. The other aspect is that the visuals are, in some sense, generated by the music. Glen Marshall's Processing code takes in audio signals, in this case a Boards of Canada track, and specifies what kind of visuals each signal should generate. The details are difficult to pinpoint, but Glen has kindly broken down what is going on in the video below (generative visuals in a similar vein, soundtrack provided by Radiohead's Bodysnatchers); a rough sketch of the general technique follows the list:

1. Bass guitar – makes the red shading on the red zeno pulsate.
2. Lead guitar – affects intensity of inner glows of both zenos.
3. Treble – affects size of sprites.
4. Vocal – additional affector to red sprite size, affects speed and directions of all sprites, affects size of stars in background.
5. X Factor – this is the name I gave to the overall amplitude – an ‘excitement’ factor. This controls the camera Z depth (near/far) – loudness brings us closer in, quieter breaks bring us out again. This was important to get that sense of a non-static journey and spatial interest that married with the music. The X Factor also increases the speed of the zenos growing, and the intensity of the blue cloud.
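This is not Marshall's actual code, of course, but a minimal Processing sketch of the idea is easy to imagine. The one below uses the Minim audio library that ships with Processing; the filename, the frequency bands, and the particular mappings are all my own guesses, loosely echoing the breakdown above:

```java
import ddf.minim.*;
import ddf.minim.analysis.*;

Minim minim;
AudioPlayer player;
FFT fft;

void setup() {
  size(800, 600);
  minim = new Minim(this);
  player = minim.loadFile("track.mp3"); // placeholder filename
  player.loop();
  fft = new FFT(player.bufferSize(), player.sampleRate());
}

void draw() {
  background(0);
  fft.forward(player.mix);

  // crude stand-ins for "bass", "lead", "treble": average energy per band
  float bass   = fft.calcAvg(40, 200);
  float lead   = fft.calcAvg(200, 2000);
  float treble = fft.calcAvg(2000, 8000);
  float level  = player.mix.level(); // overall amplitude: the "X Factor"

  // loudness pulls the camera in; quiet passages pull it back out
  translate(width / 2, height / 2);
  scale(map(level, 0, 0.3, 1.0, 1.6));

  // bass pulses the size of the central red form, lead drives its glow
  noStroke();
  fill(255, 0, 0, map(lead, 0, 10, 60, 255));
  ellipse(0, 0, 100 + bass * 20, 100 + bass * 20);

  // treble sets the size of the orbiting "sprites"
  fill(255);
  for (int i = 0; i < 12; i++) {
    float a = TWO_PI * i / 12.0;
    ellipse(cos(a) * 150, sin(a) * 150, 4 + treble, 4 + treble);
  }
}
```

The appeal is that none of the motion is keyframed: swap in a different track and the same few lines produce a different piece.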

I don’t find this video quite as interesting as the first, perhaps because the Boards of Canada track lends itself better to this kind of project; more atmospheric music tends to sit well with abstract visuals.

Regardless, the point should be clear: using signals of a particular nature for unintended purposes can yield interesting art. In this case, audio signals generating motion.

We should of course know by now that interactions of this kind appear in many fields these days; some examples:

  • dancers strapped with all manner of sensors, accelerometers and otherwise, their movements mapped to audio (song) and video (usually presented in the form of various projections);
  • collaborations between scientists and artists in which the artists use data from the scientists’ research to generate paintings or sound, instances of which have actually given scientists some clarity about their own research (example, anyone?).

I was speaking to a VJ (among other things) friend of mine recently, and when I asked whether he was interested in such approaches to creating visuals to accompany music, he said he preferred to create the visuals live himself. But if you can have both aspects playing into one another, why wouldn’t you? Take something as simple as the pitch of a particular vocal part altering the hue of a particular filter ever so slightly. When you start to introduce slight variables like this, things become really interesting. This air of contained randomness opens up a world of possibilities; such small permutations can even reveal characteristics and connections we would never have dreamed of before. What if, indeed, visuals corresponding to voice could differentiate yours from mine? What would that mean for the future of voice recognition?
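A hedged sketch of that pitch-to-hue idea, again in Processing with Minim: there is no proper pitch tracker here, just the loudest FFT bin of the live input as a naive stand-in for pitch, with its frequency mapped onto the hue wheel. The frequency range is an arbitrary choice of mine:

```java
import ddf.minim.*;
import ddf.minim.analysis.*;

Minim minim;
AudioInput in;
FFT fft;

void setup() {
  size(400, 400);
  minim = new Minim(this);
  in = minim.getLineIn();
  fft = new FFT(in.bufferSize(), in.sampleRate());
  colorMode(HSB, 360, 100, 100);
}

void draw() {
  fft.forward(in.mix);

  // naive pitch estimate: the frequency of the loudest bin
  int loudest = 0;
  for (int i = 1; i < fft.specSize(); i++) {
    if (fft.getBand(i) > fft.getBand(loudest)) loudest = i;
  }
  float freq = fft.indexToFreq(loudest);

  // map a rough vocal range onto hue, shifting the colour ever so slightly
  float hue = map(constrain(freq, 80, 1000), 80, 1000, 0, 360);
  background(hue, 60, 90);
}
```

A real version would smooth the estimate over time, but even this crude mapping ties the colour to the voice itself rather than to a preset.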

[www.butterfly.ie]

[a novel embedded in a map at We Tell Stories]

Aptly named, We Tell Stories is a project interested in digital writing, in this case the ways in which we can approach writing/storytelling/literature using the internet. The screenshot above is from an online short novel called The 21 Steps, in which we view a Google Map containing various nodes, each of which tells us part of the tale. We begin at a particular location, read the first few sentences, and are then led on a detective story through London and around the UK, viewing the action from above, as it were. In fact, as it is. Now this is a good place to start: although the story is entirely linear and un-interactive except for the necessary clicks of the mouse, the possibilities it suggests are damn exciting. For example, perhaps a not entirely linear detective tale on a map, but one where you, the reader, actually decide where to look for the story yourself. So then:

An interactive, virtual Choose-Your-Own-Adventure novel in which YOU map out your journey, figuratively and quite literally, around the globe.

[The Cave of Time!]
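To make the idea concrete, here is a toy sketch (Processing again, with entirely invented story fragments and coordinates) of a tale held in a graph rather than a line: each node carries a fragment of text plus links to the nodes the reader may jump to, and clicking a linked node moves the story there.

```java
// Hypothetical branching map-story: the reader, not the author, picks the route.
String[] fragments = {
  "A body on the platform at King's Cross...",
  "The ticket stub leads you north to Edinburgh.",
  "Or you follow the stranger into the Underground."
};
int[][] links = { {1, 2}, {}, {} };          // which nodes each node leads to
float[][] positions = { {200, 300}, {400, 150}, {400, 450} };
int current = 0;

void setup() {
  size(600, 600);
  textAlign(CENTER);
}

void draw() {
  background(240);
  // draw every node; colour the current one red and reachable ones blue
  for (int i = 0; i < fragments.length; i++) {
    boolean reachable = false;
    for (int j : links[current]) if (j == i) reachable = true;
    fill(i == current ? color(200, 40, 40)
                      : reachable ? color(40, 40, 200) : color(180));
    ellipse(positions[i][0], positions[i][1], 20, 20);
  }
  fill(0);
  text(fragments[current], width / 2, 40);
}

void mousePressed() {
  // clicking a linked node advances the story down that branch
  for (int j : links[current]) {
    if (dist(mouseX, mouseY, positions[j][0], positions[j][1]) < 12) current = j;
  }
}
```

Swap the flat canvas for an actual map and the fragments for chapters, and you have the fractured detective tale suggested above.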

We Tell Stories sometimes gives us somewhat interesting digital renderings of classics (our Google Map adventure is a reworking of The 39 Steps), but, as in this case, they could all be more exciting. A major part of the attraction of digital and web-based literature is the idea of reader interaction, and the We Tell Stories pieces tend to play it safe. My inclination would be to suggest something more fractured (such as the alternative way to use mapping I outlined above, to really make the reader the detective), something far less like traditional reading! When dwelling on such issues, I am reminded of David Foster Wallace’s words:

There’s a way, it seems to me, that reality’s fractured right now, at least the reality that I live in. And the difficulty about writing… writing about that reality is that text is very linear and it’s very unified, and… I, anyway, am constantly on the lookout for ways to fracture the text that aren’t totally disorienting—

Web-based literature is one way. Or probably many ways. It’s still a bud.