My latest demo renders particles with WebGL and uses the browser audio APIs for audio analysis.
Experience it @ http://jabtunes.com/labs/arabesque
A decent machine with a recent browser (Chrome or Firefox) is recommended. If your browser can't play this, check out the screen capture video here.
Somehow I'm attracted to the arts, both audible and visual (and is programming a scientific art?). Perhaps it was a way I could express myself; I picked up music past the age most kids do, and started learning 3D stuff after graduating, without ever attending a graphics class.
I needed lots of particle references and tried getting them from everywhere I could – Flash blogs, papers, even cutting-edge HTML5/WebGL demos. But perhaps the greatest concentration of music visualizations with particles I found are videos created with visual effects/motion graphics/3D authoring tools, for example Adobe After Effects with plugins like Red Giant's Trapcode suite (Form and Particular for particles, SoundKeys for audio) and Krakatoa. Among the top hits of such videos is a brilliant music visualization by Esteban Diacono.
So it started out with really lousy stuff, but I learnt a little more each time. Then I learnt about three.js and also looked at particle demos done in ActionScript. I wanted an easy way to create particle systems in three.js, so I wrote Sparks.js, which is similar to the Flint and Stardust particle engines for Flash and its 3D engines. (Okay, this should have been another post.)
Part of my experiments with three.js and sparks particles was moving emitters along paths. I started creating random paths (béziers, splines, etc.), moving the emitter along them and chasing it with the camera. Deciding there were too many variables and that things were getting messy, I decided to fix the x and y movements of the emitter and control the direction of the particles' movement instead. With particles moving away from the emitter, this also created an illusion of movement, and I stuck with it till the end of this demo.
Experimenting with bézier paths and such.
Before continuing on particles, let’s look at the music and audio aspects.
Picking music for the demo was tough – I thought a piece too short might not be worth the effort, but a piece too long would be extremely difficult, both technically and artistically. I decided to try recording some music myself first, before requesting friends to help, sourcing Creative Commons music online, or, as a last resort, purchasing some. So I went with the first option, and my friends at the NUS orchestra kindly allowed me to use their practice room for the recording.
Of the few pieces I narrowed down to record, the Arabesque turned out to be the most suitable despite the mistakes made. Oh well, more practice needed. The piece was written by Claude Debussy, an impressionist French composer who also composed Clair de Lune, as heard at the end of Ocean's Eleven and in several other music visualizations. I happened to stumble upon the Arabesque while looking through a piano book I have, and kind of grew to love this piece too. Anyway, feel free to download the mp3 or ogg of the music.
Apart from Flash, most demos doing music visualizations either use prebaked audio data or the Audio Data API, which is available only in Firefox. The Web Audio API for WebKit browsers provides a realtime analyser node to perform an FFT (fast Fourier transform, which converts the signal into the frequency-amplitude domain). It seemed there weren't yet any libraries that do spectrum analysis on both Chrome and Firefox (although there are solutions that fall back from the Firefox audio API to Flash), so I thought I had to learn both APIs and see what I could do. Thanks to Mohit's posts on the Audio API, it wasn't difficult to use the Web Audio API, although there were kinks there too, for example not being able to seek through the audio correctly, or to get its playing time accurately.
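For the WebKit side, the plumbing looks roughly like the sketch below. `normalizeBins()` and its name are mine, just to show the pure part of the work; the wiring in the comment reflects the prefixed `webkitAudioContext` of the time:

```javascript
// Sketch of the WebKit side. normalizeBins() (my name) is the pure part:
// it turns the analyser's raw FFT bytes into 0..1 amplitudes.
function normalizeBins(byteFreqData) {
  var levels = [];
  for (var i = 0; i < byteFreqData.length; i++) {
    levels.push(byteFreqData[i] / 255); // Uint8Array values run 0..255
  }
  return levels;
}

/* In the browser, the analyser is wired up roughly like this
   (API details varied across versions of the then-prefixed spec):

var context = new webkitAudioContext();
var analyser = context.createAnalyser(); // the realtime analyser node
source.connect(analyser);                // source = a decoded audio source
analyser.connect(context.destination);

// each frame:
var data = new Uint8Array(analyser.frequencyBinCount);
analyser.getByteFrequencyData(data);
var levels = normalizeBins(data);
*/
```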
Loading a file with AudioKeys is simple:

```javascript
var audio = new AudioKeys('arabesque1.ogg');
// When the audio file is ready to play…
```
AudioKeys' API allows you to "watch" or monitor portions of the frequency spectrum, which means you can get separate amplitude levels from monitors watching different parts of the spectrum (e.g. one for bass, one for treble, one for mids).
Add monitors and retrieve audio levels:

```javascript
// adds an audio monitor
audio.addWatchers(startFreqBin, endFreqBin, lowLimit, highLimit);

// in your application loop –
// returns the average amplitude level of the monitored section of audio
var levels = audio.getLevels(0);
```
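I can only sketch the internals here, but a watcher boils down to something like this (the function name and the normalization are my assumptions, not the actual AudioKeys code): average the FFT bins in the watched range, then map the result between the low and high limits.

```javascript
// Hypothetical sketch of what a watcher does internally: average the FFT
// bins in its range, then map the result from [lowLimit, highLimit] to [0, 1].
function watchLevel(bins, startBin, endBin, lowLimit, highLimit) {
  var sum = 0;
  for (var i = startBin; i <= endBin; i++) sum += bins[i];
  var avg = sum / (endBin - startBin + 1);
  // clamp and normalize so quiet noise maps to 0 and loud peaks to 1
  var level = (avg - lowLimit) / (highLimit - lowLimit);
  return Math.max(0, Math.min(1, level));
}
```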
I probably got the inspiration for this from some of the video tutorials using Trapcode SoundKeys, and I created a tool to help view the frequency spectrum. Besides viewing the spectrum, the tool also allows creating monitor bins.
Drag a range on the spectrum to monitor specified parts.
On Firefox, AudioKeys falls back to MozAudioKeys, a Firefox implementation of a compatible AudioKeys API that internally uses dsp.js for its FFT processing. However, getting both implementations to provide the same levels is a pain (somehow the levels are not analyzed or normalized in the same manner), and I hacked MozAudioKeys to give usable levels for the demo. If anyone has a more elegant or correct way to do this, please let me know!
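A sketch of how that fallback choice might be detected (the feature checks here are my assumptions, not the actual AudioKeys/MozAudioKeys detection code):

```javascript
// Hypothetical sketch of choosing a backend per browser (the feature
// checks are my assumptions, not the actual AudioKeys detection code).
function pickAudioBackend(global) {
  // WebKit browsers expose the (then-prefixed) Web Audio API
  if (global.webkitAudioContext) return 'AudioKeys';
  // Firefox's Audio Data API added mozSetup() to audio elements
  if (global.Audio && 'mozSetup' in new global.Audio()) return 'MozAudioKeys';
  return null; // no realtime analysis available
}
```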
Audio Levels with Particles
Now we have to combine the audio with the particle systems. Even without audio, a particle system has plenty of parameters to tweak. You can check out the developer version of the demo at jabtunes.com/labs/arabesque/developer.html, which has controls to play around with the parameters of the particle systems. This version also shows the particle count, current audio time, and average fps.
Now, sparks.js has its own engine loop, decoupled from the render loop in which data is processed and rendered to WebGL. The render loop uses Paul Irish's wrapper of the browsers' requestAnimationFrame() to paint the screen when there are resources. For example, if the rendering is heavy, the render loop might only run a few times per second. During this time, however, the sparks engine would continue to run at perhaps 50 or 100 times a second, ensuring that particle birth, spawning, actions and death take place.
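The two loops can be sketched like this (a minimal reconstruction, not the actual sparks.js internals; `startLoops` and the fixed 50Hz rate are my assumptions):

```javascript
// Sketch of the decoupled loops: the engine ticks at a fixed rate with
// setInterval, while rendering only happens when the browser grants a
// frame via requestAnimationFrame.
function startLoops(engine, render, global) {
  // particle engine: fixed 50 updates per second regardless of render load
  var engineTimer = global.setInterval(function () {
    engine.update(1 / 50);
  }, 1000 / 50);

  function renderLoop() {
    render();
    global.requestAnimationFrame(renderLoop);
  }
  global.requestAnimationFrame(renderLoop);
  return engineTimer;
}
```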
Since the particle engine runs at a more predictable interval, I added audio.process(), which performs the FFT, to the particle engine's loop callback. After processing, one can use these values to alter different parameters of the particle engine – say the velocity, colour, direction, acceleration, etc. Since I used two watchers – one for the bass and one for the mids – the levels add up to a range of 0 to 1.2. This is multiplied by the maximum height I'd like my emitter to reach, in this case 300 units in y. As audio levels move up and down quickly, the peak height is recorded and allowed to fall a little more slowly when the audio level drops. This becomes the target height to achieve. The difference between this and the current height is the difference in y-position. The y-diff values can be added to the y-component of the velocity of new particles, giving a bubbling flow of particles when there is amplitude in the music.
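Here's a minimal sketch of that logic (`updateEmitter`, the decay factor and the state names are mine; only the 300-unit maximum height and the 0–1.2 level range come from the description above):

```javascript
// Sketch of the emitter height logic (names and the decay factor are my
// assumptions; the 300-unit max height and 0..1.2 level range are from
// the description above).
var MAX_HEIGHT = 300; // maximum emitter height in y
var DECAY = 0.95;     // assumed per-tick decay for the recorded peak

function updateEmitter(state, bassLevel, midLevel) {
  // the two watchers together give a level of roughly 0..1.2
  var target = (bassLevel + midLevel) * MAX_HEIGHT;
  // the peak rises instantly but falls a little more slowly
  state.peakHeight = Math.max(target, state.peakHeight * DECAY);
  var yDiff = state.peakHeight - state.currentHeight;
  state.currentHeight = state.peakHeight;
  return yDiff; // added to the y-velocity of newly spawned particles
}
```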
This is the biggest audio-driven effect in the demo. Other minor effects include changes in the emission rate and a slight vignette flicker. Now let's look at some of the other effects employed in the demo.
Hue change – a technique I first saw on Wonderfl, and I've kept using it in my particle demos since. Based on the hue-saturation-lightness system, the hue is rotated through the entire color circle. Perhaps I should do a color ramp next time?
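A sketch of the idea (HUE_STEP is an assumed value; `hueToRgb` is the standard hue-to-RGB conversion at full saturation, not code from the demo):

```javascript
// Sketch of rotating hue through the color circle each engine tick.
var HUE_STEP = 0.001; // assumed fraction of the color circle per tick

function nextHue(hue) {
  hue += HUE_STEP;
  if (hue > 1) hue -= 1; // wrap around the circle, keeping hue in [0, 1)
  return hue;
}

// standard hue-to-RGB with saturation and value fixed at 1;
// maps hue in [0, 1) to [r, g, b] components in 0..1
function hueToRgb(h) {
  var i = Math.floor(h * 6);
  var f = h * 6 - i;
  switch (i % 6) {
    case 0: return [1, f, 0];
    case 1: return [1 - f, 1, 0];
    case 2: return [0, 1, f];
    case 3: return [0, 1 - f, 1];
    case 4: return [f, 0, 1];
    case 5: return [1, 0, 1 - f];
  }
}
```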
Colors – subtractive blending is used here, so it feels a little like a watercolor effect, rather than fireworks, which use additive blending.
Vignette – as mentioned earlier, this is done with a fragment/pixel shader, created using alteredq's new EffectComposer to simplify post-processing.
Blur – I started experimenting with some blur post-processing shader passes, but since I was generating the particles' sprite procedurally using a 2D canvas and radial gradients, they already looked soft and blurred enough that there was no need to add blur in post. Which is good, because it buys back a bit of frame rate. Particle sizes are random between 40 and 160, so there's a variety of small beads and large clots of ink.
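Generating such a sprite might look like this sketch (`makeSprite` and the gradient stops are my guesses at the technique, not the demo's exact code; `doc` is passed in only to keep the function easy to test):

```javascript
// Sketch of building a soft particle sprite with a canvas radial gradient.
function makeSprite(size, doc) {
  var canvas = doc.createElement('canvas');
  canvas.width = canvas.height = size;
  var ctx = canvas.getContext('2d');
  var half = size / 2;

  // radial gradient from an opaque centre to a fully transparent edge
  var gradient = ctx.createRadialGradient(half, half, 0, half, half, half);
  gradient.addColorStop(0, 'rgba(255,255,255,1)');
  gradient.addColorStop(0.3, 'rgba(255,255,255,0.6)');
  gradient.addColorStop(1, 'rgba(255,255,255,0)');

  ctx.fillStyle = gradient;
  ctx.fillRect(0, 0, size, size);
  return canvas; // usable as the image of a texture
}
// In the browser: var sprite = makeSprite(128, document);
```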
An undesired effect which looked like a spiral.
I then wrote a Turbulence action class for sparks.js, which one can use with sparksEmitter.addAction(new Turbulence()); – the particles then appear more agitated, depending on the parameters you use. This seemed to work quite well, and it runs pretty fast in Chrome, with at most a 5fps penalty. Firefox, however, is less forgiving and suffers a 10fps loss. So what to do? I decided to try shader-based Perlin noise instead, using the same vertex shader as in this turbulent cloud demo. Maybe it's not exactly accurate, since the turbulence doesn't affect the positions of the particles in the sparks engine, but at least there's some illusion of turbulence, and it runs really quickly with almost no fps penalty.
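I haven't reproduced the real Turbulence class here, but a sparks.js-style action might be sketched like this (random jitter stands in for proper Perlin noise to keep the sketch self-contained; the update() signature is my assumption):

```javascript
// Hypothetical sketch of a sparks.js-style Turbulence action: the engine
// calls update() for each particle every tick, and we jitter its velocity.
function Turbulence(strength, random) {
  this.strength = strength || 10;
  this.random = random || Math.random; // injectable for testing
}

Turbulence.prototype.update = function (emitter, particle, time) {
  // nudge each velocity component by a random amount scaled by strength
  particle.velocity.x += (this.random() - 0.5) * this.strength * time;
  particle.velocity.y += (this.random() - 0.5) * this.strength * time;
  particle.velocity.z += (this.random() - 0.5) * this.strength * time;
};
```

It would then be attached just as in the post: sparksEmitter.addAction(new Turbulence());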
Final touches – Text Captions
In case a single particle trail and a flawed recording can only capture limited attention, I decided to experiment with adding text captions before releasing. First, pick a nice font from Google Web Fonts; next, experiment with CSS3 effects. Text shadows are used for some blurring (inspired by this), plus some CSS3 transition animations and cross-browser testing (at least I'm only targeting Firefox and Chrome this time!). To script the text captions through the duration of the music, I decided to put the text in HTML. For example:
<div class="action" time="20 28">Please enjoy</div>
This means the message "Please enjoy" appears at 20 seconds and exits at 28 seconds into the music. I added a function getActions() to the render loop. This method accepts two parameters, the current and the previous playing time, whose values can be taken from audiokeys.getCurrentTime();
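getActions() itself isn't shown in the post, so here is my reconstruction of what it might do, using a plain array in place of the parsed HTML captions (the time attribute "20 28" is assumed to be parsed into a [start, end] pair):

```javascript
// Sketch of caption scheduling: given the previous and current playing
// time, return captions whose window has just started, and those whose
// window has just ended.
function getActions(captions, lastTime, currentTime) {
  var entering = [], exiting = [];
  captions.forEach(function (caption) {
    // caption.time is the "20 28" attribute parsed into [start, end]
    var start = caption.time[0], end = caption.time[1];
    if (start > lastTime && start <= currentTime) entering.push(caption);
    if (end > lastTime && end <= currentTime) exiting.push(caption);
  });
  return { entering: entering, exiting: exiting };
}
```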
So that's all I can think of for now. Let me know if you have any questions or comments! I'm not sure if such a long write-up has bored anyone. I'm pretty glad to have finished this demo – perhaps just 6 months ago I might not even have imagined how I could do this, and I've learnt quite a bit along the way. I was also worried that I would spend lots of time on something that didn't turn out well, but I'm glad for the retweets and nice comments so far! Google Analytics also tells me that the average time on site is 4 minutes, the length of the song – I had worried it was too long and that many would close their browsers 30 seconds into the piece, so I'm quite relieved by those numbers. \(^.^)
Despite having come a long way, this only tells me that there's a greater journey ahead. I've been thinking of implementing shadows, and perhaps trying out other particle and fluid techniques done more on the GPU. For example, mrdoob pointed me to a Fairlight/directtovideo music particles demo. That's really the way to go! For now though, I might need to take a break and rest, since I've fallen sick over the past few days (not sure if it was too much WebGL or too many chocolates). I'll also need to concentrate on my day job before continuing to explore this journey!