For the second project in the series of three, I linked a single video played on the projector with multiple audio files played on all the other lab computers. The audio had been preprocessed with Blender's game engine and rendered into a 52-track collection split across 26 two-track audio files.
This was made possible through AJAX and two files: 'data.php', which held the current 'refresh code', and 'movieinfo.php', which held the details of the current presentation setup: the presentation type, the filename, and the primary computer's playback dimensions. Tracking was implemented in PHP, logging each computer in sequential order, with an option for unordered connection through PHP GET values.
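As a rough sketch of how that sequential tracking might have looked (the flat-file storage and variable names here are assumptions, not the original code):

```php
<?php
// tracker.php -- illustrative sketch, not the original source.
// Each connecting computer is handed the next sequential ID so it knows
// which pair of audio tracks to play; ?tracker=N overrides the sequence
// so machines can join out of order.

$countFile = 'tracker_count.txt';   // assumed storage; the real layout may differ

if (isset($_GET['tracker'])) {
    $id = (int) $_GET['tracker'];   // manual override for unordered connection
} else {
    $id = file_exists($countFile) ? (int) file_get_contents($countFile) : 0;
}

// The next machine that connects without an override gets the following slot.
file_put_contents($countFile, $id + 1);

echo $id;   // the client page uses this number to pick its audio file
?>
```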
The audio portion was created using a combination of Blender's game engine, scripting engine, and animation engine. All the audio had to be captured in real time during playback, since Blender 2.48 had no intuitive way to render in-game activity.
Below is the in-class documentation for the project.
For this project I wanted to expand on my previous project and create a 3D array of computers that all generated sound mixed in 3D space. As in the first project, the visual component would run only on one screen, the projector's, and the rest of the computers would play audio with a blanked display.
I updated the code slightly for this project to allow for a more manipulable interface. It was rewritten in PHP so the trigger file (tracker.php) could be changed easily through a control webpage (control.php), which would also update the filename of the video to be played. A few options were added after my second revision, such as overwriting the current tracker number via the GET variable "tracker" so computers could be instantiated out of order. I also added the option to play a single video on all the screens simultaneously, one video on the projector with another cloned on the Macs, or a unique video on each screen to allow for the future development of 3D enhancement.
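A minimal sketch of what such a control page could look like (the field names, the three literal mode values, and the flat-file storage are all assumptions, not the original code):

```php
<?php
// control.php -- illustrative sketch of the control page, not the original source.
// Writing a new 'refresh code' tells every polling client to reload;
// the mode and filename tell each machine what to play.

if (isset($_POST['filename'], $_POST['mode'])) {
    // mode: 'single' = the same video on every screen,
    //       'clone'  = one video on the projector, another cloned on the Macs,
    //       'unique' = a different file per screen (future 3D enhancement)
    file_put_contents('movieinfo.txt',
        $_POST['mode'] . "\n" . $_POST['filename']);

    // Any change to this value is picked up by the clients' next poll.
    file_put_contents('data.txt', (string) time());
}
?>
<form method="post" action="control.php">
  <input name="filename" value="">
  <select name="mode">
    <option value="single">single</option>
    <option value="clone">clone</option>
    <option value="unique">unique</option>
  </select>
  <input type="submit" value="Update presentation">
</form>
```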
Before I began mapping my sound to a 3D environment, I first needed to choose my video clip. I quickly landed on the opening scene of Lethal Weapon 2, in my opinion one of the best action intros (followed perhaps by the mesmerizing intro of the first Lethal Weapon film, with "Jingle Bell Rock" playing over the opening shot). Probably unneeded information: when my dad brought home full-standing wood speakers with marble tops when I was a child, he ordered me to look for Lethal Weapon 2 while I was trying to hear what they sounded like. After a few more orders, I ran for the VHS tape. I will forever remember that moment of pure excitement as we heard the Warner Brothers logo blend and warp into police sirens going 100 miles per hour in a traffic-ridden tunnel.
I quickly got to work once I knew what I was going to be doing. I took measurements of the ACTLab and the positions of all the computers and recreated the space in Blender, an open-source 3D program.
I have used Blender on many occasions before, but for this project I needed to implement moving video and audio, which I knew was still in its experimental stages. By following a tutorial built on the Blender game engine, yet another aspect of Blender I had not previously used, I was able to create a basic box that would play the audio portion of a video file in 3D space.
In order to create this extended mix, I parsed the 5.1 audio of the file and created separate cubes, or speakers, to emit the sound. To work around a Blender bug that made speakers non-movable, I tied each cube to an empty, parenting the speaker to the empty and moving only the empty. Not having real-time control over an array of that many speakers was never much of a problem, since I could hear the general left-right panning, and enabling visible speakers cued me to the exact location of each audio source over the virtual computers. I essentially kept the front and rear speakers in their standard positions and moved the center track around to create the panning effect while leaving the ambiance constant. To keep all the virtual computers from seeming like they were playing the same sound, I lifted the speakers above the virtual computers to create a radius of sound that would envelop only the computers almost directly underneath them. (NOTE: Towards the end of the project, however, the front speakers had to be pulled to a greater height, since their music-only tracks seemed to overpower even the center channel's panning, and the rear speakers were dropped lower, since they played such a minimal part in a standard mix but created a greater feeling of space the few times audio was recorded on them.)
I then went about creating scripts (all viewable in the final Blender file under the text panel) to capture my mouse movements and convert them into data that could be recorded into the IPO curves for the sound object. This let me edit the sound on the fly, almost like a conductor for the presentation. Then I realized it would be great to actually play back in real time instead of only recording in real time, since rendering would definitely slow down the project, so I wrote more code to take all the data I had previously recorded and play it back through the game engine. Somewhere, however, the code conflicted, and a small mishap occurs if you try to edit a previously recorded piece. So at first a general movement was recorded, the IPO curves were cleaned by reducing keyframes, and then those keyframes were manipulated to my exact liking. As the project progressed, however, I began working frame-by-frame in the general Blender interface with a copy of the video running in VLC, and used the game engine primarily to check whether the audio moved fluidly.
The entire time I worked on the project, I NEVER thought to check whether the audio from the game engine portion would transfer to the animation portion. It turns out that Blender outputs audio only from the video sequencer, the normal linear-editing portion of Blender. So in a quick rush to complete the project, I used my workplace computers to screen-capture the real-time preview I had been using to check my progress. I then scrapped all the video and saved all the files in their final individual formats.
Video Portion: index.php?tracker=0
*See appended note.
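Tracker number 0 designates the primary (projector) machine. As a rough sketch of how index.php might have branched on that value (the file names, storage, mode values, and embed approach are assumptions carried over from the sketches above, not the original code):

```php
<?php
// index.php -- illustrative sketch, not the original source.
// Tracker 0 is the projector machine; every other ID is a lab computer
// that blanks its screen and plays its slice of the 26-file audio set.
// Without ?tracker, the real system asked tracker.php for the next slot.

$id   = isset($_GET['tracker']) ? (int) $_GET['tracker'] : 0;
$info = explode("\n", file_get_contents('movieinfo.txt'));   // assumed storage
$mode = $info[0];   // 'single', 'clone', or 'unique' (see the control.php sketch)
$file = $info[1];

if ($id == 0 || $mode == 'single') {
    $media = $file;                           // full video for the projector
} elseif ($mode == 'clone') {
    $media = 'mac_' . $file;                  // hypothetical cloned Mac video
} else {
    $media = sprintf('audio_%02d.mov', $id);  // this machine's audio pair
}
?>
<body style="background: black; margin: 0;">
  <!-- era-appropriate embed; the original playback method may have differed -->
  <embed src="<?php echo htmlspecialchars($media); ?>" autoplay="true"
         width="100%" height="100%">
</body>
```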
For the presentation portion, I forgot to play the above-mentioned song to create the mood because the server kept segfaulting. It wasn't until the end of project 3 that I realized my 0-second timeout for sending XMLHttpRequests would actually render the server useless. Using the tload command through PuTTY, I saw that a single second of 0-second timeouts led to 30+ seconds of server processing. Essentially, I was unknowingly mounting a distributed denial-of-service (DDoS) attack against my own server. After a bunch of hassle and server switching, I was finally able to have full control of the computers again. The presentation went alright, even though the volume was a bit low for my liking. When I had tested the file the previous night in the lab, with only 12 computers but a handful of students working, I thought the sound was overpowering enough for the classroom. But once an additional 20 students came into the room with all ears on the audio, much of it was dampened and not overpowering at all. I had knowingly turned off the house speakers, since I felt the unique audio files alone would add a greater dimension, but that choice may have hurt me a bit presentation-wise. Had the computer speakers been louder, the audio files' volume increased, and the house speakers left on and turned up, it might have been exactly the presentation I imagined. But that is why there is always a project 3!
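In hindsight, the fix was tiny: a nonzero delay between polls. Something like the following pattern would have avoided the overload (a minimal sketch of the client half of index.php, assuming the data.php refresh-code endpoint described above; none of this is the original code):

```php
<?php /* client half of index.php -- illustrative sketch, not the original code */ ?>
<script type="text/javascript">
var lastCode = null;

function poll() {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "data.php", true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4) {
            if (xhr.status === 200) {
                if (lastCode !== null && xhr.responseText !== lastCode) {
                    location.reload();  // refresh code changed: restart in sync
                }
                lastCode = xhr.responseText;
            }
            // The original 0 ms delay is what swamped the server;
            // a one-second pause between polls keeps it responsive.
            setTimeout(poll, 1000);
        }
    };
    xhr.send(null);
}
poll();
</script>
```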
NOTE: This was NOT a fast streaming video, because I wanted to make sure the entire video was loaded to prevent server drop-offs during the actual presentation. The bitrate is also INCREDIBLE overkill, but I wanted the video to present in the highest quality possible when displayed on a projector.
*The documentation is the website, the video was just part of the installation.
ACTLab/soundscapes_2/control.php.txt