Music Colour Particles


My latest demo renders particles with WebGL and uses the browser audio APIs for audio analysis.

Experience it @

A decent machine with a recent browser (Chrome or Firefox) is recommended. If your browser can't play this, check out the screen capture video here.


Somehow I'm attracted to the arts, both audible and visual (and is programming a scientific art?). Perhaps it was a way I could express myself: I picked up music later than most kids do, and started learning 3D graphics after graduating, without ever attending a graphics class.

What's interesting is that long ago I could admire music visualizers (e.g. Milkdrop in Winamp) without understanding how they worked. Even after learning some programming, the amount of work needed to create such graphics and audio processing looked really scary. Fast forward to today: all of this is actually possible with JavaScript running in browsers, and with the vast knowledge online, I just wanted to try it. What better time to release this demo than after the weekend over which Chrome 14, shipping with the Web Audio API, would hopefully have been pushed to the masses.


I needed lots of references and tried getting them from everywhere I could – Flash blogs, papers, even cutting-edge HTML5/WebGL demos. But perhaps the greatest concentration of work combining music visualization with particles is in videos created with visual effects/motion graphics/3D authoring tools, for example Adobe After Effects with plugins like Red Giant's Trapcode suite (Form and Particular for particles, SoundKeys for audio) and Krakatoa. Among the top hits of such videos is a brilliant music visualization by Esteban Diacono.


So let's start off with particles. I first did some simple experiments in ActionScript after reading "Making Things Move". Then I experimented with some JavaScript DOM-based particle effects for dandelions, and more for the jtendorion experiment. Not too long ago, I revisited different JavaScript techniques (DOM, CSS, 2D canvas) with my simple fireflies particles, also used for Renderflies.

It started out with really lousy stuff, but I learnt a little more each time. Then I learnt about three.js and also looked at particle demos done in ActionScript. I wanted an easy way to create particle systems in three.js, so I wrote Sparks.js, which is like the Flint or Stardust particle engines for Flash and its 3D engines. (Okay, this should have been another post.)

Part of my experiments with three.js and sparks particles was moving emitters along paths. I started creating random paths (beziers, splines etc.), moving the emitter and chasing it with the camera. Deciding there were too many variables and things were getting messy, I decided to fix the x,y movement of the emitter and control the direction of particle movement instead. With particles moving away from the emitter, it still created an illusion of movement, and I stuck with that till the end of this demo.

Experimenting with bezier paths and such.

Before continuing on particles, let’s look at the music and audio aspects.


Picking music for the demo was tough – I thought a piece too short might not be worth the effort, but a piece too long would be extremely difficult both technically and artistically. I decided I would first try recording some music myself, before asking friends to help, sourcing Creative Commons music online, or, as a last resort, purchasing some. So I went with the first option, and my friends at the NUS orchestra kindly allowed me to use their practice room for the recording.

Of the few pieces I narrowed down for recording, Arabesque turned out to be the most suitable, despite the mistakes I made. Oh well, I need more practice. The piece was written by Claude Debussy, an Impressionist French composer who also composed Clair de Lune, as heard at the end of Ocean's Eleven and in several other music visualizations. I happened to stumble upon Arabesque while looking through a piano book I own, and kinda grew to love this piece too. Anyway, feel free to download the mp3 or ogg.

Audio Analysis
Most non-Flash demos doing music visualization either take pre-baked audio data or use the Audio Data API, which is available only in Firefox. The Web Audio API for WebKit browsers provides a realtime analyser node to perform an FFT (fast Fourier transform, for converting the signal into the frequency-amplitude domain). It seems there aren't any libraries out there yet that do spectrum analysis on both Chrome and Firefox (although there are solutions that fall back from the Firefox audio API to Flash), so I thought I had to learn both APIs and see what I could do. Thanks to Mohit's posts on the Audio API, using the Web Audio API wasn't difficult, although there were kinks too, for example not being able to seek through audio correctly, or to return its playing time accurately.

The result of this work to simplify the API is a JavaScript class I call AudioKeys.js. What AudioKeys does is simply load an audio file, play it back, and return its levels. Usage of AudioKeys is pretty simple.

var audio = new AudioKeys('arabesque1.ogg');
audio.load(function() {
    // When the audio file is ready to play
});

AudioKeys' API allows you to "watch" or monitor portions of the frequency spectrum, which means you can get separate amplitude levels from watchers monitoring different parts of the spectrum (e.g. one for bass, one for treble, one for mids).

Add monitors and retrieve audio levels.

// adds an audio monitor
audio.addWatchers(startFreqBin, endFreqBin, lowLimit, highLimit);

// in your application loop:
// returns the average amplitude level of the monitored section of audio
levels = audio.getLevels(0);
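To make the watcher idea concrete, here is a hypothetical sketch of what a watcher might compute internally – average the FFT bins in its range, then rescale against the low/high limits to get a 0..1 level. The function name and exact normalization are my own assumptions, not the actual AudioKeys source.

```javascript
// Average the FFT bins in [startBin, endBin), then rescale so that
// lowLimit maps to 0 and highLimit maps to 1, clamped to [0, 1].
function watcherLevel(freqData, startBin, endBin, lowLimit, highLimit) {
  var sum = 0;
  for (var i = startBin; i < endBin; i++) sum += freqData[i];
  var avg = sum / (endBin - startBin);
  var level = (avg - lowLimit) / (highLimit - lowLimit);
  return Math.min(1, Math.max(0, level));
}
```

A bass watcher would use low bin indices, a treble watcher high ones; the limits let you ignore the noise floor and saturate loud peaks.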

I probably got the inspiration for this from some of the video tutorials using Trapcode SoundKeys, and I created a tool to help view the frequency spectrum. Beyond spectrum viewing, the tool also allows creation of monitor bins.

Drag a range on the spectrum to monitor specified parts.

On Firefox, AudioKeys falls back to MozAudioKeys, a Firefox implementation of a compatible AudioKeys API that internally uses dsp.js for its FFT processing. However, getting both implementations to produce the same levels is a pain (somehow the levels are not analyzed or normalized in the same manner), and I hacked MozAudioKeys to give usable levels for the demo. If anyone has a more elegant or correct way to do this, please let me know!

Audio Levels with Particles
Now we have to connect the audio to the particle systems. Even without audio, a particle system has plenty of parameters to tweak. You could check out my developer version of the demo, which has controls to play around with the parameters of the particle system. This version also shows the particle count, current audio time, and average fps.

Sparks.js has its own engine loop, decoupled from the render loop in which data is processed for rendering to WebGL and the screen. The render loop uses Paul Irish's wrapper of the browsers' requestAnimationFrame() to paint the screen when there are resources. For example, if rendering is heavy, the render loop might only run a few times per second. During this time, however, the sparks engine continues to run perhaps 50 or 100 times a second, ensuring that particle birth, spawning, actions and death take place.
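One way to sketch this decoupling (illustrative only, not the actual sparks.js internals) is a fixed-timestep accumulator: each render frame decides how many fixed-rate engine ticks to run, carrying the remainder over so the engine rate stays steady even when rendering stutters.

```javascript
// Engine runs at a fixed 100 updates/sec regardless of render frame rate.
var ENGINE_STEP = 1000 / 100; // one engine tick every 10 ms
var accumulator = 0;

// Returns how many engine ticks to run for `elapsed` ms of wall time,
// keeping the leftover fraction for the next frame.
function ticksFor(elapsed) {
  accumulator += elapsed;
  var ticks = Math.floor(accumulator / ENGINE_STEP);
  accumulator -= ticks * ENGINE_STEP;
  return ticks;
}
```

In the browser, the requestAnimationFrame callback would measure the time since the last frame and step the particle engine `ticksFor(elapsed)` times before rendering.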

Since the particle engine runs at a more predictable interval, I've added audio.process(), which performs the FFT, to the particle engine loop callback. After processing, one can use these values to alter different parameters of the particle engine – say, the velocity, colour, direction, acceleration etc. Since I used two watchers – one for the bass and one for the mids – the levels add up to a range from 0 to 1.2. This is multiplied by the maximum height I'd like my emitter to reach, in this case 300 units in y. As the audio level moves up and down quickly, the peak height is recorded and allowed to fall a little more slowly when the level drops. This gives a target height to achieve. The difference between it and the current height is the difference in y-position, and these y-diff values are added to the y-component of the velocity of new particles, giving a bubbling flow of particles whenever there is amplitude in the music.
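The peak-tracking above can be sketched as follows. The variable names and the decay constant are my own assumptions, not the demo's actual code:

```javascript
var MAX_HEIGHT = 300;  // max emitter height in world units (y)
var FALL_RATE = 0.97;  // how quickly the peak decays when levels drop

var peakHeight = 0;

// bassLevel + midLevel ranges roughly 0..1.2 from the two watchers.
// Returns the y-difference to feed into new particles' y-velocity.
function updateEmitterY(bassLevel, midLevel, currentY) {
  var target = (bassLevel + midLevel) * MAX_HEIGHT;
  // The peak rises instantly with the music but falls slowly,
  // smoothing out rapid dips between beats.
  peakHeight = Math.max(target, peakHeight * FALL_RATE);
  return peakHeight - currentY;
}
```

On a loud passage the y-diff is large and positive, so freshly spawned particles bubble upward; as the music quietens, the peak decays and the flow settles back down.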

This is the biggest audio-driven factor in the demo. Other minor effects include changes in emission rate and a slight vignette flicker. Now let's look at some other effects employed in the demo.

Hue change – This is a technique I first saw on wonderfl, and I've kept using it in my past particle demos. Based on the Hue/Saturation/Lightness colour model, the hue is rotated through the entire colour circle. Perhaps I should try a colour ramp next time?
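The hue rotation is simple to sketch (my own naming, not the demo's code): step the hue around the full colour circle each engine tick, keeping saturation and lightness fixed.

```javascript
var hue = 0;

// Advance the hue by `step` and wrap around the colour circle.
// Hue is normalized to [0, 1), as HSL-to-RGB converters expect.
function nextHue(step) {
  hue = (hue + step) % 1;
  return hue;
}
```

Each tick the returned hue would be converted to RGB and assigned to newly spawned particles, so the stream slowly cycles through the rainbow.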

Colors – Subtractive blending is used here, so it feels a little like a watercolour effect, rather than fireworks, which use additive blending.

Vignette – as mentioned earlier, this is done with a fragment/pixel shader, created using alteredq's new EffectComposer to simplify post-processing.

Blur – I experimented with some blur post-processing shader passes, but since I was generating the particle sprite procedurally using a 2D canvas and radial gradients, the particles already looked soft and blurred enough that there was no need to add blur in post. Which is good, because skipping it improves the frame rate a bit. Particle sizes are randomized between 40 and 160, giving a variety of small beads and large clots of ink.
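A minimal sketch of generating such a soft sprite with a 2D canvas radial gradient (the function name and gradient stops are illustrative, not the demo's exact values):

```javascript
// Build a square canvas with a radial gradient fading from an opaque
// centre to a transparent edge – a pre-blurred particle sprite that
// needs no post-processing blur pass.
function makeSprite(size) {
  var canvas = document.createElement('canvas');
  canvas.width = canvas.height = size;
  var ctx = canvas.getContext('2d');
  var half = size / 2;
  var gradient = ctx.createRadialGradient(half, half, 0, half, half, half);
  gradient.addColorStop(0, 'rgba(255,255,255,1)');
  gradient.addColorStop(0.3, 'rgba(255,255,255,0.8)');
  gradient.addColorStop(1, 'rgba(255,255,255,0)');
  ctx.fillStyle = gradient;
  ctx.fillRect(0, 0, size, size);
  return canvas; // usable as a texture source for the particle material
}
```

The canvas can then be handed to the renderer as a texture, so every particle shares one soft sprite and the GPU does the rest.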

I wanted to make the particles more interesting by spiraling them around the x-axis. I tried adding a wind force blowing in a circle, but it didn't turn out well. I then learnt that turbulence can be created with Perlin noise, or simplex noise, which is an improvement on Perlin's original algorithm. I used Sean Bank's JavaScript port of simplex noise, and added 4D noise without knowing whether I'd need it.

An undesired effect that looked like a spiral.

I then wrote a Turbulence action class for sparks.js, which one can use with sparksEmitter.addAction(new Turbulence()); the particles then appear more agitated, depending on the parameters used. This seemed to work quite well, and it runs pretty fast in Chrome, with perhaps at most a 5fps penalty. Firefox, however, is less forgiving and suffers a 10fps loss. So what to do? I decided to try shader-based Perlin noise instead, using the same vertex shader as in this turbulent cloud demo. Maybe it's not exactly accurate, since the turbulence doesn't affect the positions of particles in the sparks engine, but at least there's some illusion of turbulence, and it runs really quickly, with almost no fps penalty.
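A sketch of what such a Turbulence action could look like. I'm assuming an action interface of update(emitter, particle, time) called each engine tick, and `noise3d` stands in for a simplex-noise function such as the JavaScript port mentioned above; none of these names are the actual sparks.js source.

```javascript
function Turbulence(strength, scale) {
  this.strength = strength || 2;   // how hard the noise pushes
  this.scale = scale || 0.01;      // spatial frequency of the noise field
}

// Sample the noise field at the particle's position and nudge its
// velocity; each axis samples a shuffled coordinate order so the three
// components are decorrelated.
Turbulence.prototype.update = function (emitter, particle, time) {
  var p = particle.position, s = this.scale, k = this.strength;
  particle.velocity.x += k * noise3d(p.x * s, p.y * s, time);
  particle.velocity.y += k * noise3d(p.y * s, p.z * s, time);
  particle.velocity.z += k * noise3d(p.z * s, p.x * s, time);
};
```

Because the noise field is continuous in space and time, nearby particles drift in similar directions, which reads as turbulence rather than random jitter.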

More optimizations.
Now for some final optimizations to make things run more smoothly, especially with Firefox using 50% CPU running DSP in JavaScript. In three.js, a set of particles is reserved as the particle pool for WebGL, so that new WebGL particle system buffers need not be created – at least that's my understanding. So we have two pools of particles: one for WebGL rendering and one for the sparks particle engine. Particles unused by the engine are "hidden" by placing them at infinite positions and changing their colour to black. When the engine requires them, they are assigned as a particle's target and reflect its true colour, size and position. Now, while a WebGL particle pool of 200,000 runs reasonably well, each render loop takes longer, since it takes more time to transfer the buffer from the CPU to the GPU every frame. Reducing the pool is a good idea, but not by too much, in case it runs out. Observing that the live particle count peaked at only a little over 1,000, I reduced the three.js particle pool to 8,000 – still almost four times more particles than needed. The frame rate now averages 30 to 60 fps (with Firefox peaking at 70–80fps).
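The two-pool bookkeeping can be sketched like this (names and the "infinity" constant are mine, not the actual three.js or demo code):

```javascript
var HIDDEN = 1e6; // effectively "at infinity", far outside the view

// Park an unused vertex far away so it contributes nothing on screen.
function hideParticle(v) {
  v.x = v.y = v.z = HIDDEN;
  v.visible = false;
}

// Bind a hidden pool vertex to a live engine particle; the engine then
// writes the particle's real position, colour and size into it each tick.
function assignParticle(pool, engineParticle) {
  for (var i = 0; i < pool.length; i++) {
    if (!pool[i].visible) {
      pool[i].visible = true;
      engineParticle.target = pool[i];
      return pool[i];
    }
  }
  return null; // pool exhausted – it was sized too small
}
```

Sizing the pool is then the trade-off described above: large enough that assignParticle never returns null at the observed peak, small enough that the per-frame CPU-to-GPU buffer transfer stays cheap.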

Final touches – Text Captions
In case a single particle trail and a bad recording can hold only limited attention, I decided to experiment with adding text captions before quickly releasing. First, pick a nice font from Google Web Fonts; next, experiment with CSS3 effects. Text shadows are used for some blurring (inspired by this), plus some CSS3 transition animations, and then cross-browser testing (! – at least I'm only targeting Firefox and Chrome this time). To script text captions through the duration of the music, I decided to add the text in HTML. For example, let's take this.

<div class="action" time="20 28">
Please enjoy.
</div>

This means the message "Please enjoy." would appear at 20 seconds and exit at 28 seconds into the music. I added a function, getActions(), to the render loop. This method accepts two parameters, the current and the previous playing time, whose values can be taken from audiokeys.getCurrentTime();

getActions() handles not just the entrances and exits: half a second before the actual time, it moves the caption to a "prepare to enter stage" position and state using a CSS class. At entrance time, the fade-in CSS class with its transition properties is enabled, and that's where the fade-in happens. At the ending time, the "fade out" class with its animated transitions is enabled and the text starts to fade out. Two seconds later, an "actionHide" class is applied, which basically adds "display: none;" to hide the element. It turns out this works pretty well (although text shadows are inconsistent across browsers – Firefox's are lighter than Chrome's), and potentially it could become a JavaScript library too. Pretty cool CSS3 effects, right? Just one additional point to note: it seems the frame rate drops by almost half while the CSS3 animations are running. My guess is that drop shadows, text shadows and gradients are generated as images internally by the browser, which could be a bit intensive, despite running natively. Anyway, that's my guess.
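The core of the scheduler can be sketched as below. This is an assumed shape, not the demo's exact source: each caption parsed from the HTML carries its start/end times, and each render tick compares the previous and current playback times so every enter/exit transition fires exactly once, even though the loop runs many times per second.

```javascript
// captions: array of { start, end, el } parsed from the
// <div class="action" time="start end"> markup described above.
// enter/exit: callbacks that toggle the fade-in / fade-out CSS classes.
function getActions(lastTime, currentTime, captions, enter, exit) {
  for (var i = 0; i < captions.length; i++) {
    var c = captions[i];
    // Fire only on the tick where playback crosses the boundary.
    if (lastTime < c.start && currentTime >= c.start) enter(c);
    if (lastTime < c.end && currentTime >= c.end) exit(c);
  }
}
```

The boundary-crossing test is what makes the transitions idempotent: a caption that is already on screen is never re-entered on subsequent ticks.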

So that's all I can think of for now. Let me know if you have any questions or comments! I'm not sure whether such a long write-up has bored anyone. I'm pretty glad to have finished this demo; perhaps just six months ago I couldn't even have imagined how I would do this, and I've learnt quite a bit along the way. I was also worried I'd spend lots of time on something that didn't turn out well, but I'm glad about the retweets and nice comments so far! Google Analytics also tells me that the average time on site is 4 minutes – the length of the song – which I had worried was too long, thinking many would close their browsers 30 seconds into the piece, so I'm quite relieved by those numbers. \(^.^)

Despite coming a long way, I know there's a greater journey ahead. I've been thinking of implementing shadows, and perhaps trying out other particle and fluid techniques done more on the GPU. For example, mrdoob pointed me to a fairlight/directtovideo music particles demo. That's really the way to go! For now though, I might need to take a break and rest, since I've fallen sick over the past days (not sure if it's too much WebGL or too much chocolate). I'll also need to concentrate on my day job before continuing to explore this journey!

4 thoughts on "Music Colour Particles"

  1. Hey man! Really cool stuff you put up there =D Amazing! Really hope that there would be a new version, with cooler features like more matching between music and the particles; or maybe even a site that allows users to upload their songs to enjoy with your color particles. Just some random suggestions 😀 But really good work!!!

  2. Thank you all for the comments! :)

    Greg, I’m glad you enjoyed, I’ll need to check out audiolib.js soon!

    Duy, I love the suggestions, although bandwidth costs to support user music content might be too much for me to support!

    Markus, thanks, I’m glad you like it. Looking forward to creating more demos when possible :)
