My goal with this project is to create an immersive aural and visual experience based on my previous work in the class, while incorporating improvisation and live performance. The project will be created and performed live in the ACTLab on December 11 at noon, using a modular audio/video synthesis system that I have designed. The basic setup is a G4 running Logic 6 Platinum and Arkaos VJ video software, with various MIDI controllers hooked up to it for real-time manipulation of content.


The audio content will be created on the Nova II ASM synth, the Emagic ES2 and EXS24, the EMU E5000 sampler, and other hardware and software synthesizers. Instead of generating only musical loops, the content will be based on sonic textures and patterns inspired by several things that I love. This should not be confused with simply playing back recordings of the actual objects and things in and around the site (although using recordings of these things as sound sources to be manipulated has not been ruled out). What would a leaf or a city sound like if it could talk or make music?

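To make the distinction concrete, here is a minimal Python sketch of one such manipulation: treating a recording as raw material for a granular texture instead of playing it back literally. It assumes only numpy, and the white-noise stand-in, grain length, and density are arbitrary placeholders, not settings from my actual patches.

```python
import numpy as np

SR = 44100  # sample rate in Hz
rng = np.random.default_rng(0)

# Stand-in source: one second of white noise. In practice this
# would be an actual recording of the leaf, the city, etc.
source = rng.standard_normal(SR)

def granular_texture(src, sr, duration=4.0, grain_ms=60, density=40):
    """Scatter short, enveloped grains of the source across the
    output, turning a literal recording into an abstract texture."""
    out = np.zeros(int(duration * sr))
    glen = int(grain_ms / 1000 * sr)
    env = np.hanning(glen)  # smooth fade in/out for each grain
    for _ in range(int(duration * density)):
        read = rng.integers(0, len(src) - glen)   # where to sample
        write = rng.integers(0, len(out) - glen)  # where to place
        out[write:write + glen] += src[read:read + glen] * env
    return out / np.max(np.abs(out))  # normalize to -1..1

texture = granular_texture(source, SR)
```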

The video content will be created using various digital imaging applications (Adobe Photoshop, Illustrator, Bomb, Aesthesis, Morph) and generated live using Arkaos VJ, which allows real-time manipulation of images and effects according to user-defined MIDI controller numbers. Like the audio, the clips to be played back will be based on textures, gestures, and patterns, and it is my hope and expectation that these patterns will evolve during the creation of new content. Output will be handled by the 4B speaker system and Alesis M1 Active MkII monitors.

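As a rough sketch of the kind of controller mapping this involves (not Arkaos VJ's actual internals; the CC numbers and effect names here are hypothetical), incoming MIDI control changes could be scaled and dispatched like this in Python with the mido library:

```python
import mido

# Hypothetical assignments of MIDI controller (CC) numbers to visual
# parameters, in the spirit of VJ's user-defined mappings.
CC_MAP = {
    1: "clip crossfade",  # mod wheel
    7: "layer opacity",   # volume fader
    74: "blur amount",    # filter-cutoff knob
}

def handle(msg):
    """Scale an incoming CC value (0-127) to 0.0-1.0 and report it."""
    if msg.type == "control_change" and msg.control in CC_MAP:
        print(f"{CC_MAP[msg.control]} -> {msg.value / 127.0:.2f}")

# Listen on the default MIDI input and dispatch each message.
with mido.open_input() as port:
    for msg in port:
        handle(msg)
```

In the live rig, the print call would of course be replaced by whatever actually drives the effect parameter.
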
This is a fairly technologically advanced project; the sequencing time for the audio alone is expected to be around 100 hours. I feel that it is important for people to embrace technology, and to do this it is often helpful to give the technology meaning. In the case of this project, technology is represented by all of my gizmos in their live set-up and by the performance itself. It is the project.

But what is the point of all of this if no one else really cares that much about synthesizers and live visual interaction? The point is to make them care. By displaying image and sound simultaneously, in a synchronous and synesthetic way, I will provoke thoughts of what green sounds like and of how to live in a synthesizer.