Category Archives: Browsers

Emscripten Experience Part II – Optimizing the GLSL-Optimizer

There are many ways to love Emscripten. You could take an existing piece of C code or a library and turn it into JavaScript that benchmarks close to native compiled code (thanks to non-garbage-collected asm.js). You could use it to code with the type-safety of C. You could bring an application that previously required compilation or installation to anyone on the internet simply by loading a webpage. You could even use browser developer tools to profile C++ code.

One reason I’ve been intrigued by Emscripten is the portability an application gains from the JS cross-compilation target. That has also sparked off more interest in me to learn about languages: C, LLVM, or even JavaScript itself.

Despite all the interest, I had a slow start with Emscripten – which was the topic of my last post. Only after overcoming some of the difficulties was I able to enjoy some fun and success with it. Still, challenges lurked along the journey, and at the end of the previous post I had two problems left to solve:

a) getting link-time optimizations working on glsl-optimizer

b) the difficulty of porting a wavetable MIDI synthesizer to the web browser

One day @thespite mentioned incorporating my Emscripten port of GLSL-optimizer into his cool WebGL Shader Editor extension for Chrome, and that prompted me to revisit my project and Emscripten. So I continued the journey with a couple more tales to tell, but as these stories became slightly unwieldy, I decided to cover the second part in another post.


Have you tried playing old games on emulators? A decade or more ago, I used to follow the development of emulators (eg. PlayStation) to check out what new game compatibility each release had added. Somehow the idea of being able to emulate and play many games from a different platform amazed me.

This draws a parallel to applications of Emscripten. Huge codebases originally written in C or C++ can be “emulated” for the web. It excites me to see stuff like server libraries, desktop applications, graphics platforms, codecs, even programming languages ported to JS that run either on node.js or in the browser. Two examples of projects I thought were really interesting are emscripten-qt and pypy.js! This wiki page has a great list of projects to check out!

Excess Code

While Emscripten has been known to work on huge amounts of existing code, that might also mean it can generate huge amounts of code. Like much other cross-compiled or machine-generated code, this isn’t new.

But is that really a big deal? Consider being able to reuse most or all of an existing codebase in JavaScript with relatively little effort and time (compared to rewriting it in JS). Consider that the equivalent binaries aren’t that small either, especially once you count the shared libraries and dependencies already installed on your system. Consider that different binaries are needed for different platforms, while the cross-compiled code runs on the most universal platform (the web) without additional compilation. Consider that without binaries, no installation is needed and the code runs almost instantaneously. Consider, without the web as a platform, how much harder it is to acquire users who have to go through the hassle of trying the software on their own computers. All of these could justify the download wait.

Yet a casual visitor to your website loading up that fat JavaScript file wouldn’t appreciate that. The JavaScript may be so huge that the browser takes a long time downloading it. The browser may look like it has stopped responding while it takes additional time to parse, interpret or compile the huge script files. The visitor may wonder if the site is broken, the network is down, or whether the code works at all – it’s definitely not the best experience.

Which is why Emscripten isn’t always the magic pill; there are alternatives like hand-porting or using other tools – bonsai-c, Cheerp (though from some of my initial tests Cheerp seems to generate even bigger code).

GLSL Optimizer Releases

When I first announced my Emscripten port of GLSL-optimizer, the build was 8MB. That’s almost the size of 10 floppy disks. These days homes may have 1Gbps internet, but a huge JavaScript file still takes additional time to get the code running.

Of course the file size wasn’t satisfactory. One thing I wasn’t doing was building at -O1, -O2 or -O3. Those flags activate link-time optimizations and run the resulting code through JS minification, which would improve both file size and performance. Strangely, using these optimization flags resulted in infinite loops at runtime.
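For reference, the difference is just a flag on the emcc invocation. A rough sketch of the two kinds of builds (the flags are standard emcc options, but the file names are placeholders of my choosing):

```shell
# Unoptimized build: compiles fast, but the generated JS is huge
emcc glsl_optimizer.bc -O0 -o glsl-optimizer.js

# Optimized build: -O2 enables link-time optimizations, and
# --closure 1 additionally minifies the output with Closure Compiler
emcc glsl_optimizer.bc -O2 --closure 1 -o glsl-optimizer.js
```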

Why weren’t the builds optimized?
One factor I suspected for the failing optimizations was that I was using Embind to bridge the JS and C worlds. I observed that, for some strange reason, I was able to run at -O1 instead of only -O0 when I had Embind disabled, so I decided to revert to non-Embind bindings and use function return values to pass success and failure codes back to JS land.

But that only seemed to get me as far as -O1. The resulting JS wasn’t even minified, so I wanted to check if I could do Closure minification without -O2. I ended up filing an issue on GitHub because the flags didn’t allow me to do that with Emscripten. Even if that turns out to be a bug, minification without link-time optimizations would have limited effectiveness anyway (besides, running huge code through the minifier tends to end in crashes).

Alon Zakai aka kripken asked why the compiler optimizations were failing and pointed me to some compiler settings I could use to trace memory segmentation faults. Those settings turn out to be really useful for debugging Emscripten code in general.

Tweaking the flags, I still couldn’t solve the problem. I started thinking there was a possibility it was a bug in Emscripten. I laid the matter to rest until, some time later, new versions of Emscripten were out and I decided to give it another shot. I upgraded to 1.30.0 – no luck. I git-cloned the master version and tried again – still the same runtime problems.


Since I was on the master branch, I decided to check out the latest developments. The Emterpreter was in. Asyncify was out. Interesting, but what was that?

I ran Emscripten with the Emterpreter anyway and -O2 still failed. On the bright side, through the network pane I found the resulting code to be much more compact. Wow! 8.7MB→1.8MB (1.7MB→720KB gzipped). So what happened?

“We take C/C++ code and convert it to asm.js a subset of JavaScript. Then we convert the asm.js to byte code and spit out an interpreter for it written in JavaScript.

The JavaScript interpreter is loaded in the usual way into the web page and the bytecode is loaded as data. The big advantage of this scheme is that the byte code doesn’t have to be parsed and this speeds up the entire load process.” (source)

So from my understanding the Emterpreter is like a virtual machine which interprets and runs your Emscripten code. It loads code in a binary bytecode format (like the JVM’s) that is more compressed and concise than JavaScript itself. It also allows code to start running before all of it has been parsed (unlike asm.js or plain JS). The virtual machine could also enable some cool stuff like saving the state of the machine and running virtual threads (green threads). The idea of an interpreted language running interpreted stuff is cool, isn’t it? There are drawbacks though, like preventing AOT optimizations (because instructions are interpreted at runtime). There are ways around that, eg. using whitelisting, something I may bring up in the next post.

While uploading the smaller builds to GitHub, I realized something: GitHub Pages wasn’t gzip-compressing emscripten.js.mem. As a hack, I renamed it to mem.js and made Emscripten load the custom memory file, so github-pages would serve a gzipped asset. Yah!
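The Emscripten side of that hack is just telling the runtime where to find the renamed memory file. A minimal sketch of the idea (the exact Module hook varies by Emscripten version, so treat `locateFile` here as an assumption rather than the precise call I used):

```javascript
// Configure Emscripten's Module object before the compiled script
// loads, redirecting requests for the .mem file to the renamed
// copy ("mem.js") that GitHub Pages will serve gzipped.
var Module = {
  locateFile: function (path) {
    if (/\.mem$/.test(path)) return 'mem.js';
    return path;
  }
};
```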


At this point, I’m tempted to derail my topic again to talk about the recently announced WebAssembly (WASM). It shares some of the ideas of the Emterpreter: an AST bytecode format that gives faster load times and allows optimizations in the JS engine. I think it’s an exciting topic. It seems like a natural progression from asm.js -> Emterpreter -> WebAssembly, while staying backward compatible via polyfills. It’s even greater that all vendors agree on this standard. I believe it can also make the people whom asm.js bothers happy. And it’s awesome that Emscripten can support it with the flick of a switch. But let me go back on topic.

Real Fix
Up to now, I’d been trying to fix the optimization problem by looking everywhere except one place – the code base (one of the Emscripten lessons I’d shared was to understand the original code base; apparently I didn’t heed my own advice). In a three.js issue, Ben Adams mentioned my project in a thread that led @tschw to the original optimization issue I had opened for glsl-optimizer. With his sorcery (which he denies), he was able to trace the problem upstream from glsl-optimizer to the Mesa GLSL compiler, which was what broke the optimizations. Whoever tschw is, awesome work there!

Finally we can perform -O3 optimizations!!!!

If we stop here for a moment, I think we have a happy ending. The GLSL-optimizer has also been added to the Shader Editor Extension.

So here we have a little milestone, but I believe there’s more work that can be done. thespite suggested a tree-shaking pruning (aka dead-code elimination) feature that doesn’t alter the original GLSL variables. I think that’s an absolutely good feature to have; unfortunately it may be difficult to alter the Mesa GLSL parser or GLSL optimizer to do so. Others have mentioned other tools: peg.js, glsl-unit, glslprep.js, glsl-simulator and the StackGL set of tools: glsl-tokenizer, glsl-parser etc. These suggestions are great – someone just has to look into them and apply them. Maybe one day when I have too much time on my hands, I might even write my own parser in JS to play around with GLSL code. Well, maybe, as always.

So that’s all for “Optimizing the GLSL-Optimizer” in this post. I’ll try to write about “Setting WildMidi Wild on the Web” in the next post; till then feel free to drop me comments @blurspline 😀

Related Links / Readings
– Slides on “Compiling C/C++ To Javascript GDC 2013” by Alon Zakai
– From asm.js to Web Assembly
– The Emterpreter: Run code before it can be parsed
– Original announcement of glsl-optimizer on twitter
– Web Assembly (Google, Microsoft, Mozilla And Others Team up, Design FAQ, prototype)
– Javascript is C, not assembly of the Web
– Twitter News on Emscripten

Behind “It Came Upon”

Falling snowflakes, a little snowman, a little flare from the sun. That was the simple idea of a beautiful snow scene that I thought could be turned into an interactive, animated online Christmas greeting card for friends. It turns out “It Came Upon” became one of my larger “webgl” personal projects of 2011 (after arabesque).

(screencast here if you can’t run the demo).

The idea might have taken only a second, or at most a minute, but rather than being a simple piece of work, the final production needed the fusion of many more experiments done behind the scenes. I guess this is how new technologies get developed and showcased in Pixar short films too. I worry that a writeup of the technical bits would be really lengthy or boring, so I’ll try leaving it for another post and focus instead on the direction I had for the project, sharing some thoughts looking back.

The demo can mainly be broken down into 4 scenes
1. splash scene – flying star fields
2. intro scene – stars, star trails and auroras (northern lights)
3. main scene – sunrise to sunset, flares, snows, text, snowman, recording playback, particles
4. end scene – an animated “The End” (an “outro”?)

Splash scene.
It’s kind of interesting that, as opposed to Flash intros, you don’t want music blasting the moment a user enters the site. Instead, the splash scene is used to allow the files to load fully, and gives another chance to warn users off running the demo, especially if WebGL is not enabled in their browsers. Then there was the thought of giving the user some sense of flying through the stars while they get ready. (I hope nobody’s thinking of Star Wars or the Windows 3.1 screensaver.) Then again, flight in splash scenes is not new – we had flocking birds in the Wilderness Downtown, and flying clouds in ROME.

Intro scene.
The intro scene was added to allow a transition into the main snow scene, instead of jumping straight in and ending there. Since it was daylight in the snow scene, I thought a sky changing from night to day would be nice, hence the star trails and Aurora Borealis (also known as the Northern Lights). I’ve captured some star trails before, like the ones below,
or more recently in 2011, this timelapse.

a little stars from Joshua Koo on Vimeo.

But no, I haven’t seen the Aurora Borealis in real life, so mine must be an impression from photos and videos seen on the net – hopefully one day I’ll get to see it for real! Both still and time-lapsed photography appeal to me, which also explains the combination used:
time-lapsed sequences give the sense of the earth’s rotation, and
the still photographs imitate long exposures, which create beautiful long streaks of stars.

Both a wide lens and a zoom lens were used in this sequence for a more dramatic effect. I also fixed the aspect of the screen to a really wide format, as if it were a wide-format movie or a panorama. Oh, by the way, “It Came Upon” comes from the title of the music played in the background, the Christmas carol It Came Upon a Midnight Clear.

Snow scene.

At the center of the snow scene is a 3D billboard, probably something like the Hollywood sign or the Fox intro. It’s just a static scene, but the opening was meant to be more mysterious: slowly revealing the text through the morning fog while the camera moves towards it and then pans along it, interweaving with still shots from different angles. As the sun rises, the flare hits the camera and the scene brightens up. The camera then spins around the text in an ascending manner, as if it could only have been done with expensive rigs and cranes, or mounted on a helicopter. Snowfall has already started.

Some friends have asked why the camera is always against the sun. Having the sun in front of the camera makes the flare enter the lens, and allows shadows falling towards the camera to be more dramatic – part of making or breaking the rule of not shooting into the light, which I blogged about some time ago.

Part of this big main scene was to embed a text message. I had a few ideas for doing this. One was having 3D billboards laid up a snow mountain, with the message revealed as the camera flies up the mountain. Another was firing fireworks into the sky, forming text in the air. Pressed for time, I implemented creating text on the ground and turning it into particles so the next line of the message can be revealed. The text playback was created by recording keystrokes. The camera jerks every time a key is pressed to give more impactful feedback, like the feel of using a typewriter. On hitting the return key, the text characters turn into particles and fly away. This was also the interaction I intended to give users, allowing them to record and generate text in real-time and share it with others. See, for example, a user recording by Chrome Experiments, or my new year greetings.

And so, like all shows, it ends with “The End”. But not really, as one might notice a parody of the famous Pixar intro: the snowman comes to life and eventually hops onto a letter to flatten it. Instead of hopping across the snow scene, though, it falls into the snow, swimming like a polar bear (stalking across the snow, as one viewer commented) towards the letter T. After the letter fails to be flattened, the snowman gets impaled on the “T” and dragged down into the snow. The End.

A friend asked for blood from the snowman, but in fact I had already tried tailoring the amount of sadism: I didn’t want it too mild (at least I thought), nor overly sadistic (many commented the snowman was cute). Interestingly, as I was reviewing the snowmen in Calvin and Hobbes, I realized there’s a horror element to them, as much as I would have loved snowmen as adorable as Calvin himself or Hobbes. Then again, they could represent Calvin’s evil genius, or the irony that snowmen never survive past winter.


First of all, I realized that it’s no easy task to create a decent animation or any form of interactive demo, especially with pure code, and definitely not something that can be done in a night or two. I definitely have new respect for those in the demoscene. I had the idea in a minute, and a prototype in a day, having thought: what could be so difficult? There’s three.js to do all the 3D work and sparks.js for particles. Boy, was I wrong – now I have a better understanding of “1% inspiration and 99% perspiration”. With my self-imposed deadline to complete it before the end of the year, not everything was up to my artistic or technical expectations, but I was just glad I executed it and got it done.

Looking back at 2011, it was probably a year of graphics, 3D and WebGL programming for me. I started with nothing and ended with at least something (I still remember asking noob questions when starting out in IRC). I did a lot of reading (blogs and SIGGRAPH papers), did a whole lot of experimentation (though a fair bit was uncompleted or failed), generated some wacky ideas, contributed in some ways to three.js, now have a fair bit of knowledge about the library and WebGL, and have at least two experiments featured. What’s in store for me in 2012?

Music Colour Particles


My latest demo renders particles with WebGL and uses the Web Audio APIs for audio analysis.

Experience it @

A decent machine with a recent browser (Chrome or Firefox) is recommended. If your browser can’t play it, check out the screen capture video here.


Somehow I’m attracted to the arts, both audible and visual (is programming a scientific art?). Perhaps it was a way I could express myself; I picked up music past the age most kids do, and started learning this 3D stuff after graduating, without ever attending a graphics class.

What’s interesting is that long ago I could admire music visualizers (eg. Milkdrop in Winamp) without understanding how they worked. Even after learning some programming, the amount of work to create such graphics and audio processing looked really scary. Fast forward to today: all of this is actually possible with JavaScript running in browsers, and with the vast knowledge online, I just wanted to try it. What better time to release this demo than after the weekend Chrome 14 would hopefully have been pushed to the masses, shipping with the Web Audio API.


I needed lots of inspirations, and tried getting them everywhere I could – Flash blogs, papers, even cutting-edge HTML5/WebGL demos, etc. But perhaps the greatest concentration of music visualizations with particles I found was in videos created with visual effects/motion graphics/3D authoring tools, for example Adobe After Effects and plugins like Red Giant’s Trapcode suite (Form and Particular for particles, SoundKeys for audio) and Krakatoa. Among the top hits of such videos is a brilliant music visualization by Esteban Diacono.


So, starting off with particles first: I probably did my first simple experiments in ActionScript after reading “Making Things Move”. Then I experimented with some JavaScript DOM-based particle effects for dandelions, then some for the jtendorion experiment. Not too long ago, I revisited different JavaScript techniques (DOM, CSS, canvas 2D) with my simple fireflies particles, also used for Renderflies.

So it started out with really lousy stuff, but I learnt a little more each time. Then I learnt about three.js and also looked at particle demos done in ActionScript. I wanted an easy way to create particle systems in three.js, so I wrote sparks.js, which is like the Flint or Stardust particle engines for Flash and their 3D engines. (Okay, this should have been another post.)

Part of my experiments with three.js and sparks particles was moving emitters along paths. I started creating random paths (beziers, splines etc.), moving the emitter and chasing it with the camera. Deciding there were too many variables and things were getting messy, I decided to fix the x,y movements of the emitter and control the direction of particle movement instead. With particles moving away from the emitter, it still created an illusion of movement, and I stuck with that till the end of this demo.

Experimenting with bezier paths and stuff.

Before continuing on particles, let’s look at the music and audio aspects.


Picking music for the demo was tough – I thought picking a piece too short might not be worth the effort, but music too long would be extremely difficult both technically and artistically. I decided I could try recording some music first, before asking friends to help, sourcing Creative Commons music online or, as a last resort, purchasing some. So I went with the first option, and my friends at the NUS orchestra kindly allowed me to use the practice room for the recording.

Of the few pieces I narrowed down to record, the Arabesque turned out to seem the most suitable, despite the mistakes made. Oh well, I need more practice. The composer, Claude Debussy, was an impressionist French composer who also wrote Clair de Lune, as heard in the ending of Ocean’s Eleven and several other music visualizations. I happened to stumble upon the Arabesque looking through a piano book I have, but kinda grew to love this piece too. Anyway, feel free to download the mp3 or ogg.

Audio Analysis
Most non-Flash demos doing music visualization either take pre-baked audio data or use the Audio Data API, which is available only on Firefox. The Web Audio API for WebKit browsers uses its RealtimeAnalyser node to perform an FFT (fast Fourier transform, for converting into the frequency-amplitude domain) on the audio. It seems there aren’t any libraries out there which do spectrum analysis on both Chrome and Firefox yet (although there are solutions that fall back from the Firefox audio API to Flash), so I thought I’d learn both APIs and see what I could do. Thanks to Mohit’s posts on the Audio API, it wasn’t difficult to use the Web Audio API. There were kinks though, for example not being able to seek through an audio file correctly, or to return its playing time accurately.

The result of this work to simplify the APIs is a JavaScript class I call AudioKeys.js. What AudioKeys does is simply load an audio file, play it back and return its levels. Usage of AudioKeys is pretty simple:

var audio = new AudioKeys('arabesque1.ogg');
audio.load(function() {
    // the audio file is ready to play
});

AudioKeys’ API allows you to “watch” or monitor portions of the frequency spectrum, which means you can get separate amplitude levels from monitors watching different parts of the spectrum (eg. one for bass, one for treble, one for mids).

Add monitors and retrieve audio levels.

// adds an audio monitor
audio.addWatchers(startFreqBin, endFreqBin, lowLimit, highLimit);

// in your application loop
// returns the average amplitude level of the monitored section of audio

levels = audio.getLevels(0);

I probably got the inspiration for this from some of the video tutorials using Trapcode SoundKeys, and I created a tool to help view the frequency spectrum. Beyond spectrum viewing, this tool also allows creation of monitor bins.

Drag a range on the spectrum to monitor specified parts.

On Firefox, AudioKeys falls back to MozAudioKeys, a Firefox implementation of the same AudioKeys API that internally uses dsp.js for its FFT processing. However, getting both implementations to report the same levels is a pain (somehow the levels are not analyzed or normalized in the same manner), and I hacked MozAudioKeys to give usable levels for the demo. If anyone has a more elegant or correct way to do this, please let me know!
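For anyone fighting the same mismatch, the hack boils down to scaling each backend's FFT output into a common 0..1 range. A sketch of the idea (the gain factor is an illustrative hand-tuned guess, not the exact value I used):

```javascript
// Web Audio's AnalyserNode.getByteFrequencyData() fills an array
// with values 0..255, so averaging a bin range and dividing by
// 255 gives a 0..1 level.
function levelFromBytes(bins, start, end) {
  var sum = 0;
  for (var i = start; i < end; i++) sum += bins[i];
  return sum / (end - start) / 255;
}

// dsp.js's FFT yields raw float magnitudes instead; an ad-hoc
// gain maps them into roughly the same range, clamped to 1.
function levelFromFloats(spectrum, start, end, gain) {
  var sum = 0;
  for (var i = start; i < end; i++) sum += spectrum[i];
  return Math.min(1, (sum / (end - start)) * (gain || 10));
}
```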

Audio Levels with Particles
Now we have to make the audio interact with the particle systems. Even without audio, a particle system has plenty of parameters to tweak. You could check out the developer version of the demo, which has controls to play around with the parameters of the particle system. This version also shows the particle count, current audio time, and average fps.

Now, sparks.js has its own engine loop, decoupled from the render loop in which data is processed for rendering to WebGL on the screen. The render loop uses Paul Irish’s wrapper for the browsers’ requestAnimationFrame() to paint the screen when there are resources. For example, if the rendering is heavy, the render loop might only run a few times per second. During this time, however, sparks’ engine continues to run, perhaps 50 or 100 times a second, ensuring that particle birth, spawning, actions and death take place.
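Stripped of detail, the two loops look something like this (the function names are mine, not sparks.js internals):

```javascript
// Engine tick: advance every particle by a fixed dt, regardless
// of how fast the page can actually draw.
function engineTick(particles, dt) {
  for (var i = 0; i < particles.length; i++) {
    var p = particles[i];
    p.age += dt;
    p.x += p.vx * dt;
    p.y += p.vy * dt;
  }
}

// Wiring: the engine runs at a fixed 100 ticks/sec on a timer,
// while rendering happens only when the browser grants a frame.
function start(particles, render) {
  setInterval(function () { engineTick(particles, 1 / 100); }, 10);
  (function frame() {
    render(particles);
    requestAnimationFrame(frame);
  })();
}
```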

Since the particle engine runs at a more predictable interval, I added audio.process() – which performs the FFT – to the particle engine loop callback. After processing, one can use these values to alter different parameters of the particle engine, say the velocity, colour, direction, acceleration etc. Since I used two watchers – one for the bass and one for the mids – the levels add up to a range from 0 to 1.2. This is multiplied by the maximum height I’d like my emitter to reach, in this case 300 units in y. As the audio levels move up and down quickly, the peak height is recorded and allowed to fall a little more slowly when the audio level drops. This gives a target height to achieve. The difference between this and the current height is the difference in y-position, and that y-difference can be added to the y-velocity of new particles, giving a bubbling flow of particles whenever there is amplitude in the music.
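In code, that mapping is roughly the following (the constants and names are mine; the demo's actual numbers differ):

```javascript
// Watcher levels (bass + mids, together 0..1.2) drive a target
// emitter height. The recorded peak decays slowly, so the emitter
// sinks gently when the music quietens instead of snapping down.
var MAX_HEIGHT = 300; // emitter ceiling, in y units
var DECAY = 0.95;     // peak falloff per engine tick

var peakHeight = 0;

function updateEmitter(emitter, bassLevel, midLevel) {
  var target = (bassLevel + midLevel) * MAX_HEIGHT;
  peakHeight = Math.max(target, peakHeight * DECAY);
  var yDiff = peakHeight - emitter.y; // difference in y-position
  emitter.y = peakHeight;
  return yDiff; // added to the y-velocity of newly spawned particles
}
```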

This is the biggest audio-driven factor in the demo. Other minor effects include changes in emission rate and a slight vignetting flicker. Now let’s look at some other effects employed in the demo.

Hue change – This is a technique I first saw on wonderfl, and I’ve kept using it for my past particle demos. Based on the Hue-Saturation-Lightness system, the hue is rotated through the entire color circle. Perhaps I should do a color ramp next time?
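The rotation itself is tiny; sketched here with a minimal HSL-to-RGB conversion (full saturation, mid lightness) rather than whatever helper the demo actually used:

```javascript
// Rotate a hue value (0..1) by a small step each engine tick,
// wrapping around the color circle.
function rotateHue(hue, step) {
  return (hue + step) % 1;
}

// Minimal HSL-to-RGB (s = 1, l = 0.5 simplification) so each
// new particle can be tinted with the current hue.
function hueToRgb(h) {
  var clamp = function (x) { return Math.min(1, Math.max(0, x)); };
  return [
    clamp(Math.abs(h * 6 - 3) - 1), // red
    clamp(2 - Math.abs(h * 6 - 2)), // green
    clamp(2 - Math.abs(h * 6 - 4))  // blue
  ];
}
```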

Colors – Subtractive blending is used here, so it feels a little like a watercolor effect rather than fireworks, which use additive blending.

Vignette – As mentioned earlier, this is done with a fragment/pixel shader, created using alteredq’s new EffectComposer to simplify post-processing.

Blur – I started experimenting with some blur post-processing shader passes, but since I was generating the particles’ sprites procedurally using 2D canvas and radial gradients, they looked soft and blurred enough that there was no need to add blur in post. Which is good, because that could improve the frame rate a bit. Particle sizes are randomized between 40 and 160, so there’s a variety of small beads and large clots of ink.

I wanted to make the particles more interesting by spiraling them around the x-axis pole. I tried adding a wind effect blowing in a circle, but it didn’t turn out well. So I learnt that turbulence can be created with Perlin noise, or simplex noise, which is an improvement on Perlin’s original algorithm. I used Sean Bank’s JavaScript port of simplex noise and added 4D noise, not knowing if I needed it.

An undesired effect which looked like a spiral.

I then wrote a Turbulence action class for sparks.js, which one can use by calling sparksEmitter.addAction(new Turbulence()); the particles then appear more agitated, depending on the parameters you use. This seemed to work quite well, and it runs pretty fast in Chrome – at most a 5fps penalty. Firefox, however, is less forgiving and suffers a 10fps loss. So what to do? I decided to try shader-based Perlin noise instead, using the same vertex shader as in this turbulent cloud demo. Maybe it’s not exactly accurate, since the turbulence doesn’t affect the particle positions in the sparks engine, but at least there’s some illusion of turbulence and it runs really quickly, with almost no fps penalty.
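The action class itself is small. Sketched below with the noise function injected as a parameter, since the real sparks.js Action interface and my original class may differ in details:

```javascript
// A sparks.js-style action: on each engine update, perturb the
// particle's velocity by a noise field sampled at its position
// and age. noise3d is any (x, y, z) -> [-1, 1] function, e.g.
// a simplex noise port.
function Turbulence(noise3d, strength, scale) {
  this.noise3d = noise3d;
  this.strength = strength || 1;
  this.scale = scale || 0.01;
}

Turbulence.prototype.update = function (emitter, particle) {
  var s = this.scale;
  var n = this.noise3d(particle.x * s, particle.y * s, particle.age);
  particle.vx += n * this.strength;
  particle.vy += n * this.strength;
};
```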

More optimizations.
Now for some final optimization to make sure things run more smoothly, especially with Firefox using 50% CPU running DSP in JavaScript. In three.js, a set of particles is reserved as a particle pool for WebGL, so that new WebGL particle-system buffers need not be created – at least that’s my understanding. So we have two pools of particles: one for the WebGL rendering and one for the sparks particle engine. Particles unused by the engine are “hidden” by placing them at infinite positions and changing their color (to black). When the engine requires them, they are assigned to a particle’s target and reflect the true colours, size and position. Now, while a WebGL particle pool of 200,000 seems to run reasonably well, each render loop takes longer, since it takes more time to transfer the buffer from CPU to GPU every frame. Reducing the pool is a good idea, but not by too much, in case it runs out. So I reduced the three.js particle pool to 8,000, having observed that the number of live particles peaked at only 1k+ – nearly a four-fold margin. The frame rate now averages 30 to 60 frames per second (with Firefox peaking at 70-80fps).
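The pooling scheme, as I understand it, can be sketched like this (a simplification in plain JS, not the actual three.js/sparks.js code):

```javascript
// A fixed pool of vertices is uploaded to the GPU every frame.
// Unused slots are "hidden" by parking them at infinity; the
// engine claims a slot on particle birth and frees it on death.
function ParticlePool(size) {
  this.vertices = [];
  this.free = [];
  for (var i = 0; i < size; i++) {
    this.vertices.push({ x: Infinity, y: Infinity, z: Infinity });
    this.free.push(i);
  }
}

// Claim a hidden slot for a newly born particle (-1 if exhausted).
ParticlePool.prototype.acquire = function () {
  return this.free.length ? this.free.pop() : -1;
};

// Return a slot to the pool, parking it offscreen until reuse.
ParticlePool.prototype.release = function (i) {
  var v = this.vertices[i];
  v.x = v.y = v.z = Infinity;
  this.free.push(i);
};
```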

Final touches – Text Captions
In case a single particle trail and a bad recording could only capture limited attention, I decided to experiment with adding text captions before releasing quickly. First, pick a nice font from Google Web Fonts; next, experiment with CSS3 effects. Text shadows are used for some blurring (inspired by this), add some CSS3 transition animations, and do cross-browser testing (! – at least I’m only targeting Firefox and Chrome this time). To script text captions through the duration of the music, I decided to put the text in HTML. For example, take this:

<div class="action" time="20 28">
Please enjoy.
</div>

This means the message “Please enjoy” would appear at 20 seconds and exit at 28 seconds into the music. I added a function getActions() to the render loop. It accepts two parameters, the current and the previous playing time, whose values can be taken from audiokeys.getCurrentTime();

getActions() handles not just the entrances and exits: half a second before the actual entrance time, it moves the element into a “prepare to enter stage” position and state using a CSS class. At entrance time, the fade-in CSS class with transition properties is enabled, and that’s where the fade-in happens. At ending time, the “fade out” class with animated transitions is enabled and the text starts to fade out. Two seconds later, the “actionHide” class is applied, which basically adds a “display:none;” style to hide the element. It turns out this works pretty well (although text shadows are inconsistent across browsers – Firefox’s are lighter than Chrome’s), and potentially this could be a JavaScript library too. Pretty cool CSS3 effects, right? Just one additional point to note: it seems the frame rate drops almost by half when the CSS3 animations are running. My guess is that drop shadows, text shadows and gradients are generated as images internally by the browser, and could be just a bit intensive, despite running natively. Anyway, that’s my guess.
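That scheduling logic boils down to comparing each caption's window against the previous and current playback times. A reconstruction of the idea (getActions and the class names appear in the demo; this particular body is my own sketch):

```javascript
// Each caption carries time="start end" (in seconds). Between the
// previous and current playback time, work out which captions
// should change state, returning the CSS class to apply to each.
function getActions(lastTime, currentTime, actions) {
  var changes = [];
  for (var i = 0; i < actions.length; i++) {
    var a = actions[i];
    // "prepare to enter stage" half a second before the entrance
    if (lastTime < a.start - 0.5 && currentTime >= a.start - 0.5)
      changes.push({ el: a.el, cls: 'prepare' });
    if (lastTime < a.start && currentTime >= a.start)
      changes.push({ el: a.el, cls: 'fadeIn' });
    if (lastTime < a.end && currentTime >= a.end)
      changes.push({ el: a.el, cls: 'fadeOut' });
    // display:none two seconds after the exit transition begins
    if (lastTime < a.end + 2 && currentTime >= a.end + 2)
      changes.push({ el: a.el, cls: 'actionHide' });
  }
  return changes;
}
```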

So that’s all I can think of for now. Let me know if you have any questions or comments! I’m not sure if such a long write-up has bored anyone. I’m pretty glad to have finished this demo; perhaps just six months ago I couldn’t even have imagined how I would do this, and I’ve learnt quite a bit along the way. I was also worried I would spend lots of time on something that didn’t turn out well, but I’m glad for the retweets and nice comments so far! Google Analytics also tells me that the average time on site is 4 minutes – the length of the song. I had worried the piece was too long and that many would close their browsers 30 seconds in, so I’m quite relieved by those numbers. \(^.^)

Despite coming a long way here, it only tells me that there’s a greater journey to travel. I’ve been thinking of implementing shadows, and perhaps trying out other particle and fluid techniques done more in the GPU. For example, mrdoob pointed me to a fairlight/directtovideo music particles demo. That’s really the way to go! For now though, I might need to take a break to rest, since I’ve fallen sick over the past days (not sure if it was too much webgl or chocolates), and I’ll also need to concentrate on my day job before continuing on this journey!

3D Text Bending & UV Mapping

Another back post on updates of some new features added to three.js ExtrudeGeometry a couple of weeks ago.


The 2 most notable changes are
1. 3D Text Bending – This allows text to be bent or wrapped along a spline path. Curve classes share a common API, so straight lines, bezier curves and catmull-rom splines are supported. The initial inspiration for this work came from this neat article

2. UV mapping – Finally, I managed to add UV mapping for the extrude geometry classes.

3D Text Bending

Curve classes contain methods to get the tangent and normal vectors at a point t, and one use case is path wrapping/bending. CurvePath is now a connected set of curves sharing the Curve interface/API. Since it shares the same interface as the base Curve class, the bend parameter can take in a curve, an array of curves (a CurvePath), or a path (from the drawing api).

To apply path bending to a shape, use shape.addWrapPath(path). Calling getTransformedPoints() runs the transformation on extractAll(Spaced)Points() internally.

[edit] One important key to how path bending works is that the text is transformed according to the tangents of the spline path. I wrote the base curve class to calculate the tangent based on the gradient over a tiny difference of t (kinda like using limits to derive the differential). While this can calculate the tangent of any generic sub-curve, a sub-curve should override it for faster and more accurate differentiation. I should not forget to thank Puay Bing for helping me with the derivative of the cubic bezier spline the other day!
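The finite-difference idea can be sketched like this (a simplified 2d version, not the exact three.js internals — getPoint(t) is assumed to return an {x, y} point):

```javascript
// Approximate the tangent of any curve by sampling two nearby points
// (like taking a limit), then normalizing the difference vector.
function getTangent(curve, t) {
  var delta = 0.0001;
  var t1 = Math.max(t - delta, 0);
  var t2 = Math.min(t + delta, 1);
  var p1 = curve.getPoint(t1);
  var p2 = curve.getPoint(t2);
  var dx = p2.x - p1.x, dy = p2.y - p1.y;
  var len = Math.sqrt(dx * dx + dy * dy);
  return { x: dx / len, y: dy / len }; // unit tangent vector
}
```

A subclass like a cubic bezier can override this with the analytic derivative, trading this generic approximation for speed and accuracy.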

Bend in text example

Bend with spline

Bend with connected curves

Cubic and Quadratic Beziers

UV Mapping
UV mapping applies a texture onto a material so it wraps around the object.

ExtrudeGeometry now takes in material and extrudeMaterial: material for the front/back faces, extrudeMaterial for the extrusion and bevelled faces.

Using these 2 gradient textures
textures material

can produce
uv three

with sufficient subdivisions, you can get a smooth looking model.
close up on 20 subdivisions

advice learned from mrdoob: “experiment, experiment, experiment”
experimented and flipped

i think i just fell in love with uv and gradients, until some transparency came
some transparency

Finally for some demos – TextBending & Text LOD’s on Github, UV Mapping at and remake of mr.doob’s “Hello” at

The end of the weekend means the end of my code tinkering for the week again. A lot more exciting/crazy stuff to explore and work on; unfortunately good things may have to wait.

At least I think I have a few small checks on my never ending TODO list. (^^)

Signing off geeknotes.

After adding uv mapping, alteredq added smooth shading to the webgl text example. Pretty cool right?

Extrusion Bevels – Three Attempts in Three.js

Three.js is out now at r43 with more features. Check it out! :)

In a previous post I highlighted my approach for creating 3d geometries by extruding a 2d text font (shape) along its z-axis. There have been a couple of changes for Text too – `Text` is now `TextGeometry`, and much of what was in that class has been refactored into a couple of other classes, since we now have a generic extrude geometry class `ExtrudeGeometry` which applies not only to text fonts but to any generic closed form shapes, with support for holes via the `Curve` and `Path` classes. Along with TextGeometry came wrapping to splines/curved paths, maybe a topic for another post 😉

Bevel and Text Geometry with Level-of-details

In this post, I talk a little about beveling (termed fillet caps in Cinema4D), which I mistakenly labelled as bezel at first (that’s the round part around rolex watches). Bevel slopes or rounds the corners of an extruded block, kind of like chiseling or sandpapering a wooden block in the real life analog. Beveling also seems to be a common feature in 3d programs, and you have probably even used it in applications like photoshop or word. While seemingly a simple feature, it took me almost four rewrites to get to a version which I’m satisfied with, so I’m sharing my approaches below.

To start off, I wasn’t even sure whether to keep the original shape at its extruded section or at its front and back caps. Just thinking about this stirred an internal debate within myself and some amount of code rewrites. After some experimentation and observation, I decided that the original shape at the caps and widened shapes at the extruded bodies look better for text. It made even more sense after watching greyscale gorilla describe the usage of fillet caps for giving a strong look to fonts and making edges smoother in C4D.

Interesting strange effect in my WIP bevels

Expansion Around Centroids
So the first approach I jumped in quickly to code was to scale the shapes using their contour points. I needed to know what to scale them around, for which I used the centroid, calculated by averaging all the points of the contour. Works pretty well, but what about holes? Centroids for the contour points of holes are also calculated, but instead of scaling outwards like the shape perimeters, they are scaled in the opposite direction, inwards. This seems to work pretty well for many shapes until… you take a careful look at the tip of “e”. This strange appearance is due to the concave nature of the shape. Drawing it on paper, it’s pretty easy to understand why this algorithm would not work on the inside edges of concave shapes.
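The first approach can be sketched in a few lines (my own function names, and the points are plain {x, y} objects — not the actual three.js code):

```javascript
// average all contour points to find the centroid
function centroid(points) {
  var cx = 0, cy = 0;
  points.forEach(function (p) { cx += p.x; cy += p.y; });
  return { x: cx / points.length, y: cy / points.length };
}

// push every point outwards along its ray from the centroid;
// for holes, pass a negative amount to scale inwards instead
function expandAroundCentroid(points, amount) {
  var c = centroid(points);
  return points.map(function (p) {
    var dx = p.x - c.x, dy = p.y - c.y;
    var len = Math.sqrt(dx * dx + dy * dy);
    return { x: p.x + (dx / len) * amount, y: p.y + (dy / len) * amount };
  });
}
```

For a convex shape this does exactly what you’d expect; the trouble described above starts once a ray from the centroid crosses a concave edge.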

Expansion via angled midpoints
So approach 1 seemed successful, but failed for certain cases. How can we solve the problem of the first? One way might be to break down the concave shapes so that only convex shapes remain, but that would have its own set of problems. So I took another approach – to extrude the corners based on the angles between edges. Each new corner is pushed outwards by the extrusion amount, along the direction that bisects the angle between the two lines connected to the corner. Again, a simple method that seems to work pretty well, but on careful inspection one would notice strange looking corners, especially with sharp edges. Since the expanded corners are determined by angles, there is no guarantee that the bevel amount from the edges is equal throughout; it is affected by the shapes.

Corners based on Edge Offsets
In the 2nd approach, the 2 connecting edges at each point are required to compute a single bevelled point. The 3rd approach starts out the same, but then each line is offset outwards along its edge normal (90 degrees from the edge’s slope) to obtain the extruded edges. From these offset positions, the new points are calculated. In pseudo code:

For each point,

for the line connecting to the point,
the left perpendicular normal to this line is used, and

for the line connecting from the point,
the right perpendicular normal to the line is used.

Offset each line a unit along its associated normal, and find the
intersection of these 2 new lines.

The intersection is the new point.
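The pseudo code above can be sketched concretely (my own names; which perpendicular counts as “outward” depends on the winding of the contour — here I assume counterclockwise, so flip the normals for the opposite winding):

```javascript
// Compute the bevelled position of a corner by offsetting its two edges
// along their outward normals and intersecting the offset lines.
function offsetCorner(prev, pt, next, amount) {
  function norm(ax, ay) {
    var l = Math.sqrt(ax * ax + ay * ay);
    return { x: ax / l, y: ay / l };
  }
  var d1 = norm(pt.x - prev.x, pt.y - prev.y); // edge into the corner
  var d2 = norm(next.x - pt.x, next.y - pt.y); // edge out of the corner
  // outward normals (90 degrees from each edge's slope)
  var n1 = { x: d1.y, y: -d1.x };
  var n2 = { x: d2.y, y: -d2.x };
  // one point on each offset line
  var p1 = { x: pt.x + n1.x * amount, y: pt.y + n1.y * amount };
  var p2 = { x: pt.x + n2.x * amount, y: pt.y + n2.y * amount };
  // intersect line (p1 + s*d1) with line (p2 + u*d2)
  var denom = d1.x * d2.y - d1.y * d2.x;
  if (Math.abs(denom) < 1e-10) {
    return p1; // edges are collinear: offset lines coincide, just shift
  }
  var s = ((p2.x - p1.x) * d2.y - (p2.y - p1.y) * d2.x) / denom;
  return { x: p1.x + d1.x * s, y: p1.y + d1.y * s };
}
```

By construction the new point is equidistant from both original edges, which is exactly the property the angle-based approach could not guarantee.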

Now this seems to work much better for most cases, since the new extruded bevel edges are now consistently equidistant from the original edges. Unfortunately, at sharp converging edges, a point from these sharp corners seems to go missing. In such rare cases, the intersection point of the extruded edges reverses its direction and pushes the point further away from where it should be. The workaround is to revert to the algorithm of the 2nd attempt to prevent such artifacts.

So here it is: the end of the bevel attempts, which, while not 100% perfect, has reached a state where I’m generally satisfied with the results.

If you are interested in this, check out the new examples and source on github

Stay tuned for more development updates. :)

disclaimer: i never got the chance to take any computer graphics classes, nor came across any papers on this topic, so please feel free to refer or suggest any better approaches you may know of.

p.s. for curved bevels, I used a sinusoidal function of t. wonder if there’s a better way to go about this?

Distributed Web Rendering

(This note was the first of a series written in early May this year, shared on my facebook notes. It’s really hard to imagine that it hasn’t been that long since then, but I’ve already ventured much deeper into the 3d and webgl world.)

The Short: Watch this short 30s video.

Fireflies rendered with RenderFlies.

The Long: A demo using web browsers to render images before creating a final video render on the server side.

Just before sleeping, ideas revolved around my head; this idea was just one of them, but it ended up manifested in code as a proof of concept last night.

I imagine that anyone who does video encoding, post processing or 3d might find the rendering process slow and painful. Which is why, I imagine, super computers and render farms are used for serious work like in the movie industry.

Since web browsers now support the canvas element and other html5 technologies well, and javascript performance has greatly improved, this is really an exciting time for web developers (although not too long in the past I imagine flash and actionscript were really exciting). I then had the idea that a simple distributed rendering system could be created with html5. Such a system can be easily deployed, and widely available web browsers can act as thin, lightweight render clients with ease.

The how:

To start hacking this system together, I first modified my html5 canvas based fireflies experiment. Instead of the usual canvas repainting at a regular interval, a few changes needed to be made. After updating the particles and the canvas, the image on the canvas is extracted with toDataURL, encoding the image to a base64 string, which is then pushed to the node.js server.
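The capture step can be sketched roughly like this (the "/frame" endpoint and the payload shape are my own inventions, not what the demo actually used):

```javascript
// "data:image/png;base64,AAAA..." -> { frame: n, png: "AAAA..." }
// strip the data URL prefix so only the base64 payload travels over the wire
function framePayload(dataUrl, frame) {
  return { frame: frame, png: dataUrl.split(",")[1] };
}

// called once per rendered frame; `post` would wrap an XMLHttpRequest POST
function captureFrame(canvas, frame, post) {
  var payload = framePayload(canvas.toDataURL("image/png"), frame);
  post("/frame", JSON.stringify(payload));
}
```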

Node.js basically decodes the image and writes the file to the file system. When the images are done rendering, ffmpeg is spawned as a separate process to encode the images to an mp4 video. Note that node.js is spectacular at dealing with external processes. Due to its event driven, non-blocking architecture, the http request can wait until the encoding is done before notifying the web client. This is a good reason, apart from just thinking that it’s cool, for using javascript on the server side.

For this fireflies demo, the images are created at full HD resolution (1920×1080), saved in the default png format. I encoded the mp4 with ffmpeg at a 4Mb bitrate.

Some Notes:

I’m quite satisfied with Firefox 4’s performance here; in fact, it is faster than Chrome by an average of 2 seconds every 30 frames. I noted some of the actual numbers somewhere, which I’m too lazy to pull out now. However, one reason why Firefox is faster is that its toDataURL operation takes about 200ms while Chrome’s takes about 300ms (on the mac).

On average, for my final render of ~40 particles (I used about 500 particles for benchmarking), Firefox and node.js running locally take about 200 seconds to transfer 900 images (30 seconds at 30 frames per second) and about 1 minute for the ffmpeg processing at full HD resolution. Safari seems to give reasonable performance on the mac too.

On my windows machine, I’ve tested with the IE10 preview (with backwards compatibility down to 9), opera, firefox and chrome. (Now you know why it’s called Html5 – because you always have to test 5 browsers.) The new IE’s performance is pretty good too, and all browsers are almost on par, with chrome running a little slower.

PNG Transparency: I realized the renders were quite bad if a transparent background was used. After encoding to MP4, it looked ugly and took up 10x the size. Fix: paint/fill the entire background instead of/in addition to clearing the canvas every refresh.

More practical uses of this idea:
Right now this demo only works with a single client-server pair.

qn: possible to extend this to multiple clients? Yes. Multiple clients can be rendering separate projects or a single project; while on the same project, the work can be distributed by giving each client a different slice of frames. It is even possible to do collaborative raytracing of a single image, as shown here.

qn: Can a particle system animation be rendered distributedly? Possible if the model state is unified. For random calculations, the clients should use the same predefined values or the same random seed.
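The shared-seed idea in code — here using mulberry32, a well-known tiny seeded PRNG, as a stand-in (the demo itself didn’t use this): every client constructs its generator from the same seed, so particle positions come out identical on all machines.

```javascript
// mulberry32: a small deterministic PRNG; same seed -> same sequence
function seededRandom(seed) {
  var a = seed >>> 0;
  return function () {
    a = (a + 0x6D2B79F5) >>> 0;
    var t = a;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296; // in [0, 1)
  };
}
```

Replacing every Math.random() call in the particle code with such a generator is what makes distributed slices of the same animation line up.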

qn: is this distributed rendering practical? It depends. In this case, a screen video capturing program could have been used, or you could argue that it’s cooler to run things real-time anyway. However, this might come in useful in cases when your system can’t keep up – for example, when attempting to run 1M particles on canvas 2d, you might opt to render it instead. This idea can also work with WebGL (canvas 3d) or more CPU intensive javascript ray tracing methods.

qn: what about untrusted clients? That depends on your application. If it’s a social experiment, you can do what SETI and similar distributed programs do: require matching answers from different clients before accepting a solution.

qn: Would distributed rendering really work? This project is not yet at that stage, but the challenge here is about solving bottlenecks, whether it’s a CPU or an IO bound job. For example, the rendering performance of this demo can be improved at the network layer. Making an http request for every file incurs higher overhead and lag times. One way is to use a buffer to store a few rendered images before making a POST, or to use websockets and stream the data continuously with less overhead.
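The buffering fix could look something like this (names are mine; this is just the batching shell, not wired to any real transport):

```javascript
// collect frames and flush them in one request once `size` have accumulated,
// cutting the per-request http overhead
function makeBuffer(size, flush) {
  var frames = [];
  return {
    push: function (frame) {
      frames.push(frame);
      if (frames.length >= size) { flush(frames.splice(0, frames.length)); }
    },
    // send whatever is left at the end of the render
    drain: function () {
      if (frames.length) { flush(frames.splice(0, frames.length)); }
    }
  };
}
```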

Some other TODOs: use ffmpeg2theora to encode to ogv for firefox browsers.

Lastly, when I tried googling this idea some time ago, nothing came up, but recently I found out that collaborative browser-based map-reduce has been thought about and discussed before.

Source code is available @

Okay, I have more important stuff to work on, signing off geeknotes today. :)

three.js 3D Text

(Since TextGeometry has been added to r42 of three.js I decided to create a repost of my note written @ – warning code example api might be outdated by now)

If you haven’t heard of three.js, it’s an extremely cool and simple 3D library in javascript started by mrdoob. I suggest checking out cool examples like “wilderness downtown”, “and so they say” and “rome”. Three.js supports both canvas and webgl rendering.

Procedural 3D Text Geometry is my humble contribution to this project. I’ve managed to apply the little things I knew or experimented with on geometry, and learned more in the process of adding this feature. I’m thankful to @alteredq too, who tested and cleaned up my code (in addition to adding cool shaders to my text demo), and is a fantastic contributor to the project.

If a demo speaks more than words, check out the webgl demo (should work with the latest chrome or firefox)

Or if your browser/graphics card doesn’t support webgl, try the software rendered (canvas 2d) implementation

Now, what TextGeometry does is help create 3d text geometry quickly in the browser without additional tools (eg. exporting text models from your 3d software), with just a simple call like var text = new THREE.TextGeometry( text, { size: 10, height: 5, curveSegments: 6, font: "helvetiker", weight: "normal", style: "bold" }); There were a couple of motivations to create this, but one of them was my curiosity and interest in motion graphics.

For now let me dive into some technical details of how the 3D Text Geometry works.

The steps are as follows:

1) vector paths extraction from fonts
2) points extraction from 2d paths
3) triangulation of polygons for shape faces
4) extruding 2d faces to create 3d shapes.

process of creating 3d typography – points generation and triangulation

1. shape paths
like how text and fonts are rendered on computers, vector paths are needed from the font data. there are 2 main open source projects which have tools for converting fonts to a javascript format so they can be rendered with javascript, namely cufon and typeface.js. typeface.js data files were used here.

2. extractions of points
this step is required for the triangulation in the next step. a little math and geometry understanding is useful here; paths mainly consist of lines and bezier curves. lines are straightforward, while bezier curves require a little maths, subdividing them to create points from the curve.

3. triangulation or tessellation here is important because that’s how the faces of 3d models can be rendered. the polygon triangulation algorithm was first implemented in AS3 before my port to JS. this algorithm, however, does not support holes in shapes, and therefore I had to implement an additional hole slicing algorithm described in (page is down, so check google’s cache if interested)

creating 3d typography in javascript – wireframes

4. creating the 3d geometry
so far 80% of the hard work is done. creating the geometry requires vertices (an array of 3d points/vector3s) and faces (a list of triangles or quads describing an area using indices into the vertices). one just has to be careful with the normals and the winding of the vertices (whether clockwise or anticlockwise). the front and back faces are added with the triangles calculated in the previous step, and the extrusion is created with quads (face4)
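a toy sketch of this step (my own shapes, not the real TextGeometry code): duplicate the 2d contour at z=0 and z=depth, cap both ends with the triangles from the previous step, and stitch the sides with quads.

```javascript
function extrude(contour, triangles, depth) {
  var vertices = [], faces = [], n = contour.length;
  contour.forEach(function (p) { vertices.push({ x: p.x, y: p.y, z: 0 }); });
  contour.forEach(function (p) { vertices.push({ x: p.x, y: p.y, z: depth }); });
  // front and back caps reuse the triangulation (indices into the contour);
  // the back cap's winding is flipped so its normal points the other way
  triangles.forEach(function (t) {
    faces.push([t[0], t[1], t[2]]);
    faces.push([t[2] + n, t[1] + n, t[0] + n]);
  });
  // side walls: one quad (face4) per contour edge
  for (var i = 0; i < n; i++) {
    var j = (i + 1) % n;
    faces.push([i, j, j + n, i + n]);
  }
  return { vertices: vertices, faces: faces };
}
```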

lastly, it’s time to be creative and experimental with the 3d meshes generated.

[geeknotes] Now, Have you met.. Instant QR Codes?

(Imported from my facebook note dated Monday, 28 June 2010 at 01:37)

This is a follow up to my previous geek note “Say Hi to Instant Barcodes”. The quick story here: nothing too fanciful, just a simple html 5 mashup for instant qrcodes using javascript, jquery, and Kazuhiko Arase’s qrcode JS code. See


The advantage of using a QRCode over my previous barcode generator is that this 2d barcode packs more information in it – specifically, for this mashup implementation, 119 binary characters with a 30% recovery rate (using up to QR Code version 10; it could be much more if we implemented up to version 40). Think of it as maybe, twitter on a picture!

A picture can tell a hundred words, and this tiny qrcode does store a hundred characters.

Yes, the barcode scanner on android works with 2d barcodes.

And I think this is also a good way to send urls, telephone numbers and other contact details to each other. With the lack of a standard vcard or bluetooth protocol, I think qrcodes should work much better!

Feedback would be great! And yes, you can download some codes and send me messages in QR Codes! Goodnight! :)

p.s. Tested on all modern browsers (& ie9 beta) except mobile browsers (pls let me know if it works!).

[geeknotes] Say Hi to Instant Barcodes

(This is a repost of my facebook note dated 25 June 2010. Apparently there are more notes to be imported, but maybe I’ll leave that for another time)

It seems to be ages since I last touched javascript, but here’s the latest addition to my html 5 canvas experiments: a really simple (1d) barcode generator.

Type some words or numbers, and a barcode appears immediately (using the simple code 39 implementation without checksum – drawback: it’s not suitable for long data due to low density). Admire the barcodes, save them, or artistically modify them (or have fun writing coded messages to each other).

To skip the talk and get to the action, see

Barcode scanner on the android is a nifty cool feature.

Scanning my name. You could use it for telephone numbers too.

Yes it works.

Lastly here’s a barcode if you’d like some practice.

Goodnight :)

p.s. tested in Chrome, Firefox, Opera (barcodes without text for IE). Javascript lovers, the html file is self contained (except for jquery). Use/hack as you like

Really Simple High Definition Youtube Video Downloader

This is a really delayed repost of my “geeknote” Really Easy High Definition Youtube Video Downloader. I’m not sure how happy Google/Youtube is with this, or how soon they would change their site so that this wouldn’t work, so no warranties, and please use this wisely at your own risk.

It started as a bookmarklet I created to download Vimeo HTML5 MP4 videos easily, but soon others were interested in a youtube version. So read on and I’ll try to explain this simply and briefly.

How can I use it?
Go to a youtube video, eg . Click on the download bookmark, and the download links will appear under the title. Pick the version you like, and you can watch it in your browser or download it. Usually I use the 270p version if I wish to transfer it to my mobile phone for watching. If 720p or 1080p are available, they give really nice high quality videos.

How can I install this?
Create a bookmark in firefox, chrome, safari or opera. For the link, paste the following block of code into the bookmark’s url.


How can I uninstall this?
Just delete the bookmark if you have added it. Some users have mentioned you can also just run the code by pasting the entire block into the url bar.

What’s so special about this compared to so many other downloaders?
This downloader is minimally intrusive and tries to be as unobtrusive as possible. It runs only when you click on the bookmarklet. Although it’s unobtrusive, it gives you options on the quality, resolution and format you can download. It’s really fast and easy to use.

Compared to other software, plugins, addons etc, the installation requires no downloads or browser restarts. Uninstalling is as simple as deleting a bookmarklet. Not to mention that many other scripts are not working well due to some recent changes made by Youtube/Google.

Any updates planned?
If I have the time, I would parse the parameters to detect the available formats supported by the current video. This could eliminate the non-working links the current version adds to the site, but this feature is unnecessary if the user observes what formats the video provides in the little box at the bottom right of the screen.

Reasons to download videos rather than stream them live:
1. You wish to enjoy more of the video and less of the silly comments and the ads Google places into the video player.
2. You are using a smaller device like a netbook and find that the flash player lags quite a bit
3. You think the experience of youtube’s HTML5 player is really horrible
4. You just bought a new 1 Petabyte hard disk, and wish to borrow some of Google’s bandwidth to utilize more of your unlimited mobile internet and relatively empty disk space.

This is a great tool! Do you accept donations?
I have not tried accepting donations before, but I’ll leave how you can thank me to your creativity. :)

This script has been modified by my friend Ernest, who did an all-nighter to automatically detect video quality (eg. the HD links will not appear if they are not available).