My goal with this project is to create an immersive aural
and visual experience based on my dérive and the work that
followed, while incorporating improvisation and live performance.
The project will be performed and created in a nightclub (Dec.
11, Flamingo Cantina on 6th) using a modular audio/video
synthesis system that I have designed. A multitimbral analog
modeling synthesizer, a computer, and a sampler are monitored
for live audio content and, via MIDI, simultaneously trigger
video content live from a computer running a video performance
application.
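To make the MIDI routing concrete, here is a minimal sketch of how note-on messages from the synth rig could be mapped to video clip triggers. It is only an illustration of the idea, not my actual rig or the Arkaos VJ configuration: the port name, note numbers, and clip names are hypothetical, and it assumes the third-party Python library mido is available.

# Minimal sketch: route MIDI note-ons from the synth rig to video clip triggers.
# Assumes the "mido" library; port name, notes, and clip names are hypothetical.
import mido

# Hypothetical mapping from MIDI note numbers to clips for the five derive nodes.
NOTE_TO_CLIP = {
    60: "plants.mov",
    62: "synths.mov",
    64: "downtown.mov",
    65: "maze.mov",
    67: "enlightenment.mov",
}

def trigger_clip(clip, velocity):
    """Stand-in for whatever actually starts a clip in the video application."""
    print(f"trigger {clip} at intensity {velocity / 127:.2f}")

with mido.open_input("Synth Rig") as port:   # hypothetical port name
    for msg in port:
        # Only audible note-ons fire a clip; all other messages are ignored here.
        if msg.type == "note_on" and msg.velocity > 0:
            clip = NOTE_TO_CLIP.get(msg.note)
            if clip is not None:
                trigger_clip(clip, msg.velocity)

In the actual system, the video performance application handles this mapping internally; the sketch only shows the shape of the note-to-clip relationship that the MIDI connection makes possible.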
My
dérive was an exploration of my home. This performance will
take the audience through the five nodes of the dérive,
showing images and sounds associated with the patterns I
have developed. The opening movement is the "plant
movement," an ambient, downtempo section with minimal
drums. It is driven by a woodblock arpeggio in D that sounds
like leaves. This is accompanied by sun-like pads and guitar
sounds that glorify a bright, organic frequency spectrum.
This section evolves into a sci-fi piano soundtrack inspired
by the SW Austin model. This is where the synths first really
start up. Then, after a build-up, the second movement, the
"synth movement," begins. This section features many
different synths with extreme filter-cutoff manipulation. The downtown
movement is an upbeat breakbeat piece with atmospheric pads
and textures and a bassline that, to me, says traffic. The
maze section is a dark movement in D that switches, confusingly,
between full time (128 BPM) and half time (64 BPM) and is loaded with
spiraling arpeggios and synth patterns. The last
movement, "enlightenment," is much like movement
one in terms of key and structure, except that it runs in reverse
(devolving, decreasing in intensity) and its instrumentation
is more psychedelic.
To
get an idea of the video content, simply look at the map
and content sections of project 3. The idea is that
the video will be cut to the audio, making patterns of plants,
synths, circuits, mazes, and God only knows what else.
The audio content will be created on the Nova II ASM synth,
the Emagic ES2/EXS24, the EMU E5000 sampler, and other hardware
and software synthesizers. Instead of generating only musical
loops, the content will be based on sonic textures and patterns
inspired by the five locations on the dérive. This should
not be confused with simply recording and playing back recordings
of the actual objects and things in and around the site
(although using recordings of these things as sound sources
to be manipulated has not been ruled out). What would a
leaf or a city sound like if it could talk or make music?
The video content will be created using various digital
imaging applications (Adobe Photoshop, Illustrator, Final
Cut Pro, Bomb, Aesthesis, Morph) and generated live using
Arkaos VJ. Arkaos VJ allows real-time manipulation of video
clips, images, and effects according to user-defined MIDI
controller numbers; a rough sketch of this kind of mapping
follows below. Like the audio, the clips to be played back
will be based on textures, gestures, and patterns.
The mini worlds created by project 2 will serve as generic
models for the virtual environments that I will create on
the projection screen. It is my hope and expectation that
the existing patterns will evolve during the creation of
new content. Output will be handled by the club's speaker
system and Alesis M1 Active MkII monitors.
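As an illustration of the user-defined controller mapping mentioned above, the following sketch scales incoming MIDI control-change values into normalized visual-effect parameters. The controller numbers, effect names, and port name are assumptions for the example, not the actual Arkaos VJ setup, and the mido library is again assumed.

# Sketch: map user-defined MIDI CC numbers to normalized visual-effect parameters.
# Controller numbers and effect names are hypothetical; assumes the "mido" library.
import mido

CC_TO_EFFECT = {
    74: "blur_amount",   # e.g. tie filter-cutoff sweeps to blur
    71: "color_shift",   # e.g. tie resonance to hue rotation
    1:  "zoom_depth",    # e.g. mod wheel zooms into the maze clips
}

effect_state = {name: 0.0 for name in CC_TO_EFFECT.values()}

with mido.open_input("Controller") as port:  # hypothetical port name
    for msg in port:
        if msg.type == "control_change" and msg.control in CC_TO_EFFECT:
            # MIDI CC values run 0-127; scale to 0.0-1.0 for the video engine.
            effect_state[CC_TO_EFFECT[msg.control]] = msg.value / 127.0
            print(effect_state)

The point of the mapping is simply that the same knob gestures that shape the sound can shape the image at the same moment, which is what ties the audio and video performances together.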
Although this system is capable of performing in several
configurations, I have not yet decided how to present the
project to an audience. One part of me would like a
traditional performance, where I play the music and trigger
all of the visuals myself, for a more refined result.
The other part wants to leave it open for the audience
to experiment with. This, however, presents a problem with
the interface. MIDI keyboards and knobs are really the only
devices that can be used to control MIDI equipment. There
is the I-Cube system, which uses motion sensors and the like,
but with its heavy price tag it is out of the question. The
performance option will allow me to create a good piece, while
the interactive option will allow me to create a good piece
of equipment. Output will be handled by the club's projection
system or, as Plan B, by a projector obtained from the IMC. In the
end, I hope that the project will incorporate both live
performance and audience interaction.
This
is a fairly technologically advanced project. The amount
of sequencing time expected for the audio alone is around
100 hours. I feel that it is important for people to embrace
technology. To do this, it is often helpful to give meaning
to the technology. In the case of this project, technology
is represented by all of my gizmos in their live
set-up and by the performance itself. It is the project.
But
what is the point of all of this if no one else really cares
that much about synthesizers and live visual interaction?
The point is to make them care. Specifically, by displaying
image and sound simultaneously in a synchronous and synesthetic
way, I will provoke thoughts of what green sounds like and
how to live in a synthesizer.