Tag Archives: javascript

Virtual Rendering The Million Items Roundup

Some days I'm unproductive; other days I explore and discover random stuff so quickly that I easily forget what I've done. So in this post, I intend to round up the experiment started in my previous post, "Virtual Rendering 1,000,000 Items Efficiently".

The Canvas Approach

In the last post, I showed how one could quickly implement a virtual scroll bar and display 1,000,000 row items in a virtual scrollpane. That was done by creating DOM elements as they get scrolled into view and removing them as they get scrolled out of view. In this post, I'll talk about how the HTML5 Canvas element could be used instead for rendering the contents. This is not about which approach is superior, as both have their pros and cons; I'll just note down some thoughts I observed in the Canvas implementation and the differences between the two approaches.

Can you spot the visual differences?

Since this prototype hack was created quickly on Chrome, that is about the only browser it has really been tested in; still, different versions, different computers and different OSes typically give different results.

– most computers typically redraw the scrollpanes in less than 20ms (and under 10ms most of the time), which I think is a satisfying result.
– both seem to perform on par in the jsdo.it comparison above.
– The Canvas approach is typically much faster than the DOM approach when the scroll viewport is huge – e.g. 1600×900 – as the DOM approach seems to slow down considerably more in a bigger viewport.
– Certain things are much simpler to implement in DOM than in Canvas. For example, dashed borders can be written in a single line of CSS, but the Canvas approach requires some custom coding, especially since dashed line drawing isn't supported in the Canvas API.
– With the custom code to mimic the dashed borders, the Canvas rendering slows down too.
– At first, I was making these expensive calls, because a paint call was made after every dash (e.g. lineTo for a 3px line, stroke, then moveTo past a 3px gap):

	//After every row
	var ww = 0;
	while (ww < w) {
		ctx.beginPath();
		ctx.moveTo(ww, ~~item.y + 0.5);
		ctx.lineTo(ww + 3, ~~item.y + 0.5);
		ctx.stroke(); // one paint call per 3px dash - expensive
		ww += 6; // 3px dash + 3px gap
	}

– after which I discovered it would be more efficient to do it in this manner:

	//After every row
	var ww = 0;
	while (ww < w) {
		ctx.moveTo(ww, ~~item.y + 0.5);
		ctx.lineTo(ww + 3, ~~item.y + 0.5);
		ww += 6;
	}
	// ...and a single stroke() call after all the rows
	ctx.stroke();

– Here the stroke calls are batched and rendering speeds up, but it still falls behind the browser's native CSS rendering. The exception is Chrome on Windows, which seems to run the CSS and the custom rendering at pretty close speeds.
– An advantage of Canvas might be that drawing custom objects is one step easier, compared to creating new DOM objects and placing custom objects inside them.
– Some rendering done in Canvas could be quicker than complex CSS rules, which might require the browser to perform reflow calculations.
– With the Canvas approach, one might run into sub-pixel or anti-aliasing issues. Drawing a 1-pixel line cleanly on Canvas might require tricks like ~~(y_position) + 0.5.
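To spell out that last trick, here is a minimal sketch (the helper name is mine): canvas strokes are centered on coordinates, so a 1px horizontal line at an integer y straddles two pixel rows and gets anti-aliased into a blurry 2px band, while snapping to the half-pixel grid keeps it crisp.

```javascript
// Snap a y coordinate to the half-pixel grid so a 1px stroke stays crisp.
function crisp(y) {
	return ~~y + 0.5; // ~~ truncates like Math.floor for positive values
}

// Usage against a 2D context (ctx assumed to exist in a real page):
// ctx.moveTo(0, crisp(item.y));
// ctx.lineTo(w, crisp(item.y));
```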

It's interesting making things work in both approaches, but as seen above, there are too many variables, and this is not intended to be a performance comparison test, but a documentation of observations. Depending on one's use case, either approach might fit the developer better.

Working with Canvas brings many possibilities, but many challenges at the same time. While there have been many interesting projects that used Canvas for rendering, some didn't have good endings.

1. Google Wave – I believe they used canvas for rendering the custom elements, scrollbars, cursors, etc. Those who have used Google Wave might have already waved it goodbye.
2. Mozilla Thunderhead – This was the custom library used to develop Mozilla Bespin, a realtime collaborative editing and development platform. Many of its UI components were Canvas-drawn – especially its text editor, which showed multiple cursors while multiple users were editing collaboratively. Sadly, the developers left Mozilla, Bespin became Skywriter, and it now uses the ACE library (more DOM-based), which is also used in the Cloud9 IDE.
3. Blossom – An RIA library which attempts to use HTML5 Canvas for rendering. This is another library spun off from the SproutCore library, and while it seems to have some potential, it also seems to be in its infancy.
4. xgui.js – It certainly doesn't mean a gloomy end for custom Canvas UI components, and I think xgui.js, a handy controls GUI, is a good example that useful and innovative libraries can be built on the 2D Canvas.

So whether it's DOM, CSS or Canvas, there are practical use cases and interesting possibilities. I'll pause this experimentation at this point, but when I have the need for some huge data and virtual scrolling, I'll be back.

Virtual Rendering 1,000,000 Items Efficiently

You probably haven't heard from me for a while, and there are probably some reasons for that – one could well be attributed to my dark periods of unproductivity.

Link to the Demo. It's recommended that you run it with Chrome.

But if you are hearing from me now, that's because there's some progress I would like to share on a small experiment: an attempt to render a large amount of data efficiently in the browser using HTML and JavaScript.

Here's the problem: let's say you have a million rows of data. Simply creating a million divs and placing them in your HTML document's body is a good way to freeze or crash your browser, after chewing up a large amount of RAM and CPU. This doesn't apply only to browser applications – many text editors are pretty incapable of opening large files too.

One example of how I encountered this problem was running the three.js inspector with a particle scene: I realized that the couple of thousand, or even hundred, elements representing the particles were locking up the browser. One motivation for this experiment was to try creating new UI components for a more efficient three.js scene inspector.

Solution: objects kept in memory are way cheaper than objects placed in the DOM or rendered. In the case of a huge list in a scroll panel, the trick is to hide all objects and render only the items in view. Though this isn't super simple, it isn't totally new either, and a couple of good libraries have already utilized such techniques. CodeMirror, the extensible and powerful browser-based code editor, handles large amounts of code in this manner. SlickGrid, a powerful open source grid/spreadsheet library created by a Google employee, is built on this technique to allow large numbers of rows. Sites that implement the "infinite" scrolling interface (e.g. Pinterest, Facebook timeline) utilize a similar technique to reduce a huge memory footprint (Twitter, I'm looking at you).

Approach: so why did I try to do this myself? For reasons similar to why one would write one's own library: to have more understanding and control over your own code, while experimenting with stuff you otherwise wouldn't ever try. In this experiment, I got to attempt skinning my own scrollbar, as well as to experiment with and benchmark a DOM versus a Canvas implementation of virtual rendering. In this post, we'll look at the DOM approach, which is similar to how CodeMirror and SlickGrid do it.

If you take a look at the source, there are 4 simple classes.

1. SimpleEvent
A minimalistic approach to the observer pattern in the style of Signals.

For example usage,
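The original snippet isn't reproduced here, so below is a minimal sketch of what a Signals-style event could look like. The method names (add, remove, dispatch) are my assumption, borrowed from the js-signals convention, and not necessarily the actual SimpleEvent API.

```javascript
// Hypothetical minimal observer in the style of js-signals.
function SimpleEvent() {
	this.listeners = [];
}

SimpleEvent.prototype.add = function (listener) {
	this.listeners.push(listener);
};

SimpleEvent.prototype.remove = function (listener) {
	var i = this.listeners.indexOf(listener);
	if (i > -1) this.listeners.splice(i, 1);
};

SimpleEvent.prototype.dispatch = function () {
	var args = arguments;
	this.listeners.forEach(function (listener) {
		listener.apply(null, args);
	});
};

// Example usage: a ScrollBar could expose onScroll as a SimpleEvent.
var onScroll = new SimpleEvent();
onScroll.add(function (position) {
	console.log('scrolled to', position);
});
onScroll.dispatch(0.5); // notifies all listeners
```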

2. ScrollBar
This is a UI component that displays a scroll track and slider, and fires scroll events via .onScroll when the scrollbar is clicked or dragged. Its 2 public interfaces are .setLength(), for defining the size of the slider block as a percentage, and .setPosition(), which moves the slider according to the document viewport's position as a fraction.

(See bottom for more UI/UX notes on the scrollbar)

3. RowItem
The row item is responsible for storing its own data and rendering its dom representation when it comes into view. Here, it simply stores a string for its text data.

For visual representation, it needs to store its x, y, width and height so it can be activated and positioned correctly for rendering.

4. ScrollPane
This is the class which integrates all of the above into the virtual rendering component. Firstly, it is represented as a div element so it can be inserted into the DOM. ScrollPane tracks the items and keeps the total virtual area they occupy. Based on its dimensions, it calls the ScrollBar to update its visual elements. The ScrollPane listens to its element for mousewheel events and listens to the ScrollBar for scroll events.

On any request to update its viewport, ScrollPane iterates over its items, quickly finds which RowItems are in view, and calls draw(). Each RowItem creates its DOM object on demand and positions it. Upon another update, DOM objects visible in the previous render are reused, and elements scrolled out of view are removed from the DOM and destroyed to free memory.
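The culling step above reduces to a simple index-range computation when rows have a fixed height. A sketch (fixed-height rows are my simplifying assumption here; the actual demo stores x/y/width/height per item):

```javascript
// Given a scroll offset and viewport height, compute which fixed-height
// rows are in view. Only these get DOM elements; the remaining rows stay
// as plain JS objects in memory.
function visibleRange(scrollTop, viewportHeight, rowHeight, totalRows) {
	var first = Math.max(0, Math.floor(scrollTop / rowHeight));
	var last = Math.min(totalRows - 1,
		Math.ceil((scrollTop + viewportHeight) / rowHeight) - 1);
	return { first: first, last: last };
}

// e.g. a 500px viewport over 1,000,000 rows of 20px each only ever shows
// ~25 rows, no matter how far down the list we scroll.
```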

Results: while this has only been tested and run in Chrome, I'm pretty satisfied with the results of this hack. At a million objects (I've tested up to 6 million), scrolling is pretty responsive: initial loading takes ~2 seconds, each repaint takes ~25ms, and the heap memory footprint stays below 100Mb. Apart from requiring more refactoring, I think it has quite minimal JavaScript and CSS.

Applications: apart from my planned usage in a revamped three.js inspector, there are perhaps a few mini uses for this widget. For example, the developer console can choke when displaying a huge amount of data (Chrome DevTools now splits output into small sub-chunks, so it works, but not in the best way IMHO), and this could be a solution to that problem.

So hopefully this little experiment would find itself some nice usage. Until you hear from me again, continue reading for more side thoughts when I was playing around on the scrollbar. And the link to the demo if you didn’t see it earlier.

UI/UX notes on the scrollbar.
While working on the ScrollBar implementation, I observed that clicking in the track outside the slider has the same effect as "page up" and "page down" on both Mac and Windows. Instead of moving the slider to the clicked position (which I initially thought was the correct behavior), it fires an onScroll event to a controller listener. While it seems that Mac Lion's "unobtrusive" scrollbars are "in", evidenced by sites like Facebook that imitate them, I'm not a fan of them. Somehow I find those scrollbars ugly for applications that have a non-white background, so I've tried to style my scrollbars in dark gradient colors after the Sublime Text editor instead.

Another challenge is that the slider cannot always be sized proportionally, as that would result in a height thinner than a pixel for huge lists. Therefore the slider has a minimum size, which also makes it more practical for dragging. There are probably more tweaks that can be done here, and there are alternative UI designs that eliminate the scrollbar in large lists (e.g. your iPhone contact list).
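The minimum-size rule is a one-line clamp; a sketch (the 20px minimum below is an arbitrary choice of mine, not the demo's actual value):

```javascript
// Proportional slider length, clamped to a draggable minimum.
// viewportFraction = viewport height / total virtual content height.
function sliderLength(trackLength, viewportFraction, minLength) {
	return Math.max(minLength, trackLength * viewportFraction);
}

// On a 600px track, a viewport showing 0.0005% of a million-row list
// would get a 0.003px slider proportionally - clamped to 20px instead.
```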

The Ascii Effect

Brief: some time ago, I decided to try what happens if three.js renders to ASCII. Originally called AsciiRenderer, it has been renamed to AsciiEffect and included with the three.js examples.

**since this screenshot is displayed in ascii format**

               @@@####+++++-=====......     .               
               @@%%%%%*****=======......  ...               
58 FPS (58-58)

Demo link: http://jabtunes.com/labs/3d/canvas_ascii_effects.html

I can't really remember what triggered the thought process, but I was thinking of it while in the shower. Aren't bathrooms quite a source of inspiration, like Archimedes' "Eureka!" story? While I'm not really an ASCII fanatic, I can appreciate some creative use of it. Some sites dedicated to ASCII art are pretty interesting, such as 16 colors, which utilises the JavaScript library escapes.js.

Some other ASCII effects I have found interesting include a fluid solver, JavaScript raytracers, and morphing animations! Such is the ASCII influence that there's even a text-based category in the demoscene.

Moving to some implementation details:

1. JS Ascii Libraries
Originally I ambitiously wanted to complete the proof of concept in 15 minutes, so the first thing was to look for existing ASCII libraries in JS. Turns out there were a few out there, such as this and this, but I decided on nihilogic's jsascii library, which seems to provide a few more options despite being written quite some time earlier.

2. Extracting Image Data
Since jsascii and similar libraries use images as input, the first integration approach extracted the base64-encoded image data via the canvas's toDataURL() from CanvasRenderer, imported the image into the library, and pulled out the ASCII output. It didn't take long to realize it's pretty silly to waste cycles encoding and decoding the image data when we could process it directly using canvas.getImageData().

So I started modifying the jsascii library to tap straight into the domElement of CanvasRenderer. It worked, but performance was still horrible, so I began more tweaking, particularly of the text resolution. With a lower resolution, a larger font size is used, resulting in greater performance but less clarity. After making these adjustments, I found certain ratios gave good performance with reasonable clarity. In fact, the framerates were on par with, or sometimes faster than, rendering the canvas to screen. It ran well in Chrome, and even faster in Firefox.
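The per-cell work after getImageData() boils down to averaging the brightness of a pixel block straight from the RGBA array. A sketch of that step (the function name and luma weights are mine; jsascii's internals may differ):

```javascript
// Average perceived brightness (0..1) of a w x h block at (bx, by) in an
// RGBA pixel array, as returned by ctx.getImageData(...).data.
function blockBrightness(data, imageWidth, bx, by, w, h) {
	var sum = 0;
	for (var y = by; y < by + h; y++) {
		for (var x = bx; x < bx + w; x++) {
			var i = (y * imageWidth + x) * 4; // 4 bytes per pixel: R,G,B,A
			// standard luma weights for R, G, B
			sum += 0.299 * data[i] + 0.587 * data[i + 1] + 0.114 * data[i + 2];
		}
	}
	return sum / (w * h * 255);
}

// In a real page: var data = ctx.getImageData(0, 0, w, h).data;
```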

3. Swapping in WebGLRenderer for CanvasRenderer
The next little experiment was to use WebGLRenderer instead of CanvasRenderer for rasterizing the scene. Turns out that rendering ASCII with WebGLRenderer was slower than with CanvasRenderer. Alteredq suggested that it was probably more expensive to pull data out of the WebGL context. Perhaps gl.readPixels might be a more efficient way of extracting the data, but I haven't tried that yet.

4. More Settings
Here's another area to explore for the adventurous. jsascii supports various options like enabling colors, inverting them, and using blocks, and it's pretty interesting to see some of these effects, although some of the options carry a big performance penalty.

At the least, the reader might want to have some fun and try custom character sets. The ASCII library works by converting a pixel block into text: it measures the block's brightness and picks an ASCII character which represents that brightness. One could also look up and utilize Unicode characters. In my experiments, I realized that even really simple palettes, for example one made of different sizes of dots, are interesting and effective too. (For example, start with something simple like " .*%".)
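Mapping a brightness value onto a palette like " .*%" is then essentially a one-liner; a sketch (my own helper, not the library's API):

```javascript
// Pick a character from a palette ordered dark -> bright,
// given a brightness value in [0, 1].
function charFor(brightness, palette) {
	var i = Math.min(palette.length - 1,
		Math.floor(brightness * palette.length));
	return palette.charAt(i);
}

// charFor(0, " .*%")    -> " "  (dark block)
// charFor(0.99, " .*%") -> "%"  (bright block)
```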

5. Ascii Renderer -> Effects
In r49, the previous Anaglyph, Crosseyed and ParallaxBarrier renderers were refactored into Effects, located in /examples/js/effects. AsciiRenderer followed suit and was refactored into AsciiEffect. In the r50dev branch, mrdoob has been working on several new software renderers; I did a couple of tests of AsciiEffect with the new rasterizers, and they work too. Now in theory, with some modifications to the software renderer and the ASCII effect, one can effectively render a three.js scene in old browsers without canvas support, such as IE6, using ASCII output.

Finally, to throw out more ideas, one could create ASCII stereograms, combining the styles of the Crosseyed and Ascii effects. ASCII stereograms may look like they would give you a big headache, but they're pretty cool!

Summing up, I hope the addition of the Ascii Effect provides a really simple way to do interesting 3D scenes and animations in ASCII, and probably a good way to relax when you get tired of writing GLSL shaders.

Who knows if I'll be writing up more text experiments again, but till then, it'd be interesting to see what else you come up with!

ps. on a somewhat unrelated theme, I recently came across the demoscene production "the Butterfly Effect" by ASD, and it turns out to be a demo I really like…

Cool Street View Projects

Haven't tried Google Street View? You should! It's a practical way to teleport, and also a nice way to travel around the world virtually for free.

[asides: if you're already a street view fan, have you tried the 8-bit version? and if you've tried that, what about google's version of photosynth?]

Let me highlight a couple of innovative projects utilizing street views that came to my attention.

1. cinemascapes – street view photography

beautiful images (with post processing of course) captured using street views around the world. take a look.

9 eyes has a similar approach, with a different style. street view fun is kind of the paparazzi version. so do you really think your iphone is the best street photography camera?

2. Address Is Approximate – stop-motion video

Creative and beautiful stop-motion film using street views. Watch it!! (some may be reminded of the cinematography in Limitless)

"Address is Approximate" was brilliantly executed, but there's no harm in trying it yourself. [like this]

3. Chemin Vert – interactive 360 video

Look around while on a train travelling at 1500km/h, across five continents and four seasons. You need WebGL and a decent PC + GPU for this. Try the "HI-FI" version if you have lots of patience waiting for the video to load, and for your browser to crash. And the vimeo version if you like the little planets projections.

And if you're like me, watching the (un)projected video source is great enough. hifi version

[asides: though, the first time i saw this technique being employed was in thewildernessdowntown, which i think mrdoob and thespite worked on, see next point]

4. thespite's Experiments – interactive mashups

Cool experiments by thespite show that he has been doing such street view mashups with three.js for a while now. What's better is that he has just released his Street View panorama library on github! Fork it!

[asides: thespite’s a nice guy who gave me the permission to use GSVPano.js even before releasing it]

5. Honorable mention – stereographic street views

Stereographic Streetviews is an open source project which produces stereographic (aka "little planets") views. It uses the power of graphics cards for processing, and allows you to easily customize the shader for additional post-processing.

[asides: some of the post-processing filters are quite cool. and others have also created videos with this]

So why am I writing this? Because I think these are brilliant ideas which I could have thought of but would never get to execute (like a mashup of GSVPano with renderflies).

Anything else I’m missing? Otherwise, hope to see great new street view projects and I’ll try working on mine. Have a good time on the streets!

[update] Hyperlapse – another project to add to the list.

Spline Extrusions, Tubes and Knots of Sorts

The recent development in three.js up to the 49th release has been really crazy – just look at the changelog! Some work that made it in was 3D spline extrusions, whose development you can follow by reading the thread in issue #905. @WestLangley's involvement was also a great help!

3d spline extrusion examples

[TL;DR If this post seems too boring, or just too long (as i’m trying to release some air out of my head), feel free to skip everything but try the example. You might also like to turn on camera spline animation, create your own torus knot, or create a custom tube by writing a formula there]

View tube/spline extrusion example.

Related work (spline curves, shape extrusion geometry, text bending) was previously mentioned, but the extrusion geometry did not handle 3D spline extrusion well, so the discussion brought out new features to be implemented after being left in the TODO for a couple of months. The work resulted in a couple of jsfiddles, new classes like THREE.ArrowHelper and THREE.TubeGeometry, improvements to some old classes, and a couple of new examples. Read on if you'd like to know what went through the thought/development process back then.

Extruding a spline != Extruding on the Z-axis

I decided to tackle this issue one night. Since I barely recalled the internals of ExtrudeGeometry after leaving it for quite a while, I tried to recreate a simple version. The first approach was to take the points of the spline – via spline.getPoint() – then recreate the cross-section of the shape at each point. It looks almost like it would work, until the spline moves vertically up the y-axis and the geometry falls apart. Ohoh. WestLangley reminded me that the geometry was not orthogonal – that is, the cross-sections were not perpendicular (at 90 degrees) to the tangent. His suggestion to modify the TorusKnotGeometry (done by @oosmoxiecode) to create a TubeGeometry was a wise choice.

Not a very correct path extrusion

Normal of a 2D line != Normal of a 3D line

Ok, I recalled having to deal with path normals in the text bending implementation. That was done by getting the tangent of the path and rotating it, like (-y, x). I thought surely there's a formula to do the same in 3D. Looking up the wikipedia topics on normals, there were formulas for the normal to a plane and normals to a face, but no, there isn't a single formula for the normal to a line in space. And so I began to understand, as @profound7 reminded me, that there are an infinite number of normals to a line in space.
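For completeness, the 2D case really is a one-liner; a sketch of rotating a tangent by 90 degrees (my own helper):

```javascript
// Normal of a 2D tangent: rotate 90 degrees by swapping the components
// and negating one. (-y, x) gives the counter-clockwise normal.
function normal2D(tx, ty) {
	return { x: -ty, y: tx };
}

// A tangent pointing right, (1, 0), has the upward normal (0, 1).
```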

There is however a set of formulas for tackling this issue, as @miningold brought up: the Frenet–Serret formulas, and that became another topic in math for me to study. From the simple understanding I have now, the Frenet–Serret or TNB frame is like your thumb and 2 fingers in the left-hand (or right-hand) rule. The directions represent the tangent, normal and binormal, which are 90 degrees / orthogonal to each other.

Sounds simple? Could be, but the ill reward of not being a diligent student in school is having to now stare painfully at the paper "Tubes and the TNB frame". Perhaps after some damaged brain cells and lost hairs interpreting those seemingly foreign mathematical symbols, I thought I could try explaining these formulas in my own understanding.


Frenet–Serret Formulas for dummies like zz85

Spline = an imaginary line in space which a point travels along from one end to the other over time.

Tangent = the change of position at a particular point of the spline. This is the first derivative of position over time. What we need to store is simply its unit vector, dividing by its length so its magnitude equals 1, since we only need its direction, not its magnitude.

Normal = the change of the unit tangent at a particular point/time of the spline. I wondered for a while why this is not the 2nd derivative of position over time – and realized it's because T is a unit vector rather than the raw change in position.

Binormal = by taking the cross product of T and N, you get a binormal perpendicular to both vectors.
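The three definitions above can be sketched numerically; the helper below approximates T, N and B for a parametric curve using finite differences. This is a naive sketch of the idea only, not three.js's actual implementation, which also has to deal with the degenerate cases discussed next (straight segments and inflections, where N vanishes or flips).

```javascript
// Numerically approximate the Frenet (TNB) frame of a 3D curve at t.
// curve(t) returns [x, y, z] for t in [0, 1].
function normalize(v) {
	var len = Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
	return [v[0] / len, v[1] / len, v[2] / len];
}

function sub(a, b) { return [a[0] - b[0], a[1] - b[1], a[2] - b[2]]; }

function cross(a, b) {
	return [a[1] * b[2] - a[2] * b[1],
	        a[2] * b[0] - a[0] * b[2],
	        a[0] * b[1] - a[1] * b[0]];
}

// T: unit vector of the change in position (central difference).
function tangentAt(curve, t, eps) {
	return normalize(sub(curve(t + eps), curve(t - eps)));
}

function frameAt(curve, t) {
	var eps = 1e-4;
	var T = tangentAt(curve, t, eps);
	// N: change of the *unit* tangent (not of position), normalized.
	var N = normalize(sub(tangentAt(curve, t + eps, eps),
	                      tangentAt(curve, t - eps, eps)));
	var B = cross(T, N); // perpendicular to both T and N
	return { tangent: T, normal: N, binormal: B };
}
```

For a circle in the xy-plane, this recovers the expected frame: the tangent runs along the circle, the normal points toward the center, and the binormal sticks out along z.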

Frenet–Serret frames != Magic bullet for spline extrusion

Yeah, so with the Frenet–Serret formulas, that means I can just plug in a formula to get the normals, and it would be simple to implement the tube geometry? Not so fast, because I soon hit the situation miningold was facing earlier – there were some really ugly warps in the geometry. This called for some visual debugging of the geometry's normals – WestLangley was doing something similar, and so we thought these helpers would be useful. They were later refactored into THREE.ArrowHelper.

Even with Frenet–Serret formulas, fail.

What happened? Because of inflections in portions of a Catmull–Rom spline, the normals may flip around unexpectedly and rapidly. On the positive side, this seems to be a well-known problem if you search the net. This link from CMU provides a pretty simple and usable solution: take the binormal of the previous segment to compute the normals and binormals of the current one – the moving frame approach. With that, we could already start to implement some fanciful geometry.

Heart Tube

Moving Frames != Ending Frames

The next problem was then brought up: the starting and ending normals of closed tubes do not match. That can be quite obvious if there are few radius segments, and the joining seam would have an ugly closure. I found links to some papers on Rotation Minimizing Frames (RMF) – a method which double-reflects, I guess to compare rotation changes before and after segments, in order to compute the frames correctly. While I didn't manage to understand all of it, WestLangley wrote an implementation of the parallel transport frame approach, making sure the tube twists slowly so that the starting and ending binormals match. Yeah!

To test out the new TubeGeometry, I started looking for some spline formulas. I also started looking into knots, because @mrdoob proposed the possibility of implementing the TorusKnot with TubeGeometry. With a little practice, you'd find defining a curve/spline object using THREE.Curve.create() really easy, and I also started appreciating some of the beauty of mathematics and formulas, in that you can create beautiful, even complex, geometries with really simple code.
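As a taste of how little code a knot needs, here is the classic (p, q) torus-knot parametrisation as a plain function, with the three.js wiring sketched in comments. The THREE.Curve.create() / TubeGeometry shapes shown are my recollection of the r49-era API, so treat the commented part as a sketch rather than exact usage.

```javascript
// Point on a (p, q) torus knot for t in [0, 1]:
// p windings around the torus axis, q windings through the hole.
function torusKnotPoint(t, p, q) {
	var a = t * 2 * Math.PI;
	return {
		x: (2 + Math.cos(q * a)) * Math.cos(p * a),
		y: (2 + Math.cos(q * a)) * Math.sin(p * a),
		z: Math.sin(q * a)
	};
}

// Sketch of the three.js wiring (needs three.js to actually run):
// var TorusKnot = THREE.Curve.create(
//     function (p, q) { this.p = p; this.q = q; },      // constructor
//     function (t) {                                    // getPoint(t)
//         var pt = torusKnotPoint(t, this.p, this.q);
//         return new THREE.Vector3(pt.x, pt.y, pt.z);
//     }
// );
// var geometry = new THREE.TubeGeometry(new TorusKnot(2, 3), 128, 0.4, 8, true);
```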

Decorated Knots

If you look into the curve extras source, there's a tiny compilation of links to some pretty good resources on curves and knots. One resource that particularly interested me was a university paper on decorated knots. By applying certain patterns of formulas to knots, you get interesting and beautiful geometries. I have used some of the formulas in the spline extrusion example, and you can easily use them too.

Decorated Knots

Real-time Knots
So if you haven't realized, three.js is versatile and powerful enough to be a 3D/math graphing software. Many of the existing sites on knots and curves require you to download some graphing/knotting software, or at best use Java applets. No longer do you need to download any software or plugins – you can run it in your browser. In my example, you can create your own torus knot by defining the p, q, r parameters, or even write an entirely new curve from formulas. Well, it seems Google has also thought along the same lines, having enabled a WebGL 3D graph plotter in their search engine (Rose Example). It's probably a good time, now that SwiftShader (a WebGL software fallback) is getting activated in Chrome too.

Real-time Knots

Camera Movements
On turning on the spline animation, you kind of follow the direction of the spline from a distance above it. The camera orientation is stabilized using the binormals of the same frames generated for the TubeGeometry. When I started experimenting with it, the simplest way was just to rotate the camera in the direction of the spline tangents. It works, but with a side effect: control of the camera's spin may be lost. Perhaps not too bad an idea if you want to create a dizzying effect or re-enact Joseph Kittinger's not-so-successful jump (before his world-famous record-breaking jump). The "Look Ahead" option gives a different feel by fixing the camera's view on a point a distance ahead on the spline. Scaling up the geometry while on the spline camera works too.

Camera Movements

What’s next?
ExtrudeGeometry has been extended with a similar kind of 3D extrusion to that used in TubeGeometry. It can probably be refined further for its purpose. There has also been some work on ParametricGeometry, which could be used for an internal TubeGeometry refactoring. I'm planning to work on a little three.js DIY rollercoaster experiment, which would probably be a good use case for testing and refining all the work here – and a 3D spline editor to come with that (since there's already some form of 2D spline editor).

Not enough curves?
Oh, and if you are really interested in mathematics, curves and splines, I stumbled upon a really interesting lecture on this topic, touching on the practicality of curves in buildings, bridges and even roller coasters. (The video lecture is not very high quality, but you can also download the lecture slides and audio.) Have fun!

(ps. today is three.js's 2nd anniversary, according to mrdoob. I initially thought I could write another post to commemorate the day, oh well… happy three.js day!)

Behind “It Came Upon”

Falling snowflakes, a little snowman, a little flare from the sun. That was the simple idea of a beautiful snow scene that I thought could be turned into an interactive, animated online Christmas greeting card for friends. It turns out "It Came Upon" became one of my larger "WebGL" personal projects of 2011 (after arabesque).

(screencast here if you can't run the demo).

The idea might have taken only a second, or at most a minute, but while it's a rather simple piece of work, the final production needed the fusion of many more experiments done behind the scenes. I guess this is how new technologies get developed and showcased in Pixar short films too. I suspect a writeup of the technical bits would be really lengthy or boring, so I'll leave it for another post and focus on the direction I had for the project, sharing some thoughts looking back.

The demo can mainly be broken down into 4 scenes
1. splash scene – flying star fields
2. intro scene – stars, star trails and auroras (northern lights)
3. main scene – sunrise to sunset, flares, snows, text, snowman, recording playback, particles
4. end scene – an "outro" of sorts: an animated "The End"

Splash scene.
It's kind of interesting that, as opposed to Flash intro scenes, you don't want music blasting off the moment a user enters the site. Instead, the splash scene is used to allow the files to fully load, and gives another chance to scare users away from running the demo, especially if WebGL is not enabled in their browsers. Then there was the thought of giving the user some sense of flying through the stars while they get ready. (I hope nobody's thinking of Star Wars or the Windows 3.1 screensaver.) Then again, flight in splash scenes is not new – we had flocking birds in Wilderness Downtown, and flying clouds in ROME.

Intro scene.
The intro scene was added to allow a transition into the main snow scene, instead of jumping straight in and ending there. Since it is daylight in the snow scene, I thought a sky changing from night to day would be nice, hence the star trails and Aurora Borealis (also known as the Northern Lights). I've captured some star trails before, like the one below, or more recently in 2011, this timelapse.

a little stars from Joshua Koo on Vimeo.

But no, I haven't seen the Aurora Borealis in real life, so it must be my impression of it from photos and videos seen on the net – hopefully one day I'll get to see it for real! Both still and time-lapsed photography appeal to me, which also explains the combination used: time-lapsed sequences give the sense of the earth's rotation, and the still photographs imitate long exposures, which create beautiful long streaks of stars.

Both a wide lens and a zoom lens were used in this sequence for a more dramatic effect. Also, I fixed the aspect of the screen to a really wide format, as if it were a wide-format movie or a panorama. Oh, by the way, "It Came Upon" comes from the title of the music played in the background, the Christmas carol "It Came Upon a Midnight Clear".

Snow scene.

At the center of the snow scene is a 3D billboard, probably something like the Hollywood sign or the Fox intro. It's just a static scene, but the opening was meant to be more mysterious, slowly revealing the text through the morning fog while the camera moves towards it and then pans along it, interweaving with still shots from different angles. As the sun rises, the flares hit the camera and the scene brightens up. The camera then spins around the text in an ascending manner, as if it could only be done with expensive rigs and cranes or mounted on a helicopter. Snowfall has already started.

Some friends might have asked why the camera is always against the sun. Having the sun in front of the camera makes the flare enter the lens, and allows shadows falling towards the camera to be more dramatic. Part of making or breaking the rule of not shooting into the light, which I blogged about some time ago.

Part of this big main scene was to embed a text message. There were a few ideas I had for doing this. One was having 3D billboards laid up a snow mountain, with the message revealed as the camera flies through the mountains. Another was firing fireworks into the sky and forming text in the air. Pressed for time, I implemented creating text on the ground and turning it into particles so the next line of the message could be revealed. The text playback was created by recording keystrokes. The camera jerks every time a key is pressed to give more impactful feedback, much like the feel of using a typewriter. On hitting the return key, the text characters turn into particles and fly away. This was also the interaction that I had intended to give users, allowing them to record and generate text in real-time and share it with others. See, for example, a user-recorded text by Chrome Experiments, or my new year greetings.

And so, like all shows, it ends with "The End". But not really, as one might notice a parody of the famous Pixar intro: the snowman comes to life and eventually hops on the text to flatten it. Instead of hopping across the snow scene, it falls into the snow, swimming like a polar bear (stalking across the snow, as one viewer commented) to the letter T. After failing to flatten the "T", the snowman gets impaled by it and dragged down into the snow. The End.

A friend asked for blood from the snowman, but in fact I had already tried tailoring the amount of sadism: I didn't want it too mild (at least I thought it wasn't), nor overly sadistic (many commented that the snowman was cute). Interestingly, as I was reviewing the snowmen in Calvin and Hobbes, I realized there's a horror element to them, as much as I would have loved to have snowmen as adorable as Calvin himself or Hobbes. Then again, they could represent Calvin's evil genius, or the irony that snowmen never survive past winter.


First of all, I realized that it's no easy task to create a decent animation or any form of interactive demo, especially with pure code, and definitely not something that can be done in a night or two. I definitely have new respect for those in the demo scene. I had the idea in a minute, and a prototype in a day, having thought: what could be so difficult, there's three.js to do all the 3D work and sparks.js for particles? Boy, was I wrong, and I now have a better understanding of "1% inspiration and 99% perspiration". With my self-imposed deadline to complete it before the end of the year, not everything was up to my artistic or technical expectations, but I was just glad I had it executed and done with.

Looking back at 2011, it was probably a year of graphics, 3D and WebGL programming for me. I started with nothing and ended with at least something (I still remember asking noob questions when starting out in IRC). I did a lot of reading (blogs and SIGGRAPH papers), did a whole lot of experimentation (though a fair bit was incomplete or failed), generated some wacky ideas, contributed in some ways to three.js, now have a fair bit of knowledge about the library and WebGL, and have had at least 2 experiments featured. What's in store for me in 2012?

Baroque Performance with Werckmeister Temperament

This baroque-era tuning seems to be a popular method for performing pieces like Bach's Well-Tempered Clavier. It is a tuning system based on Pythagorean tuning, but "tempered" so that it sounds well in different keys across the chromatic scale.

As a (pretentious) baroque music lover, and thanks also to the wonderful influence of my harpsichordist (/musicologist) friend, I dived into implementing the Werckmeister tuning system in JavaScript last night for more experiments with audio synthesis in HTML5 audio.

First, let us look at the basic method for deriving frequencies using equal temperament (ET). Taking A440 (concert pitch A at 440Hz), we can calculate frequencies by doing this:

var noteFrequency = 440 * Math.pow(2, semitones / 12);

where semitones is the number of semitones away from concert A. Pretty simple, right?
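As a quick sanity check, here's that formula wrapped in a function, with a few well-known values (the function name is mine, just for illustration):

```javascript
// Equal temperament: frequency of a note `semitones` away from A440.
function etFrequency(semitones) {
  return 440 * Math.pow(2, semitones / 12);
}

etFrequency(0);   // 440 Hz (A4)
etFrequency(12);  // 880 Hz (A5, one octave up)
etFrequency(3);   // ~523.25 Hz (C5)
etFrequency(-9);  // ~261.63 Hz (C4, middle C)
```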

Instead of a universal formula like ET's, or re-deriving the tempering of frequencies ourselves, we will use the chart of frequency relations for the Werckmeister I (III) system ["correct temperament" based on 1/4-comma divisions], conveniently taken from Wikipedia.

Since there are 12 distinct notes in an octave, to get the frequency of a particular note, you multiply the base frequency by the relative ratio of the note you wish to get. For different octaves, we simply halve or double the frequency, which can be expressed as a power of 2, eg. Math.pow(2, octave_difference).

So after we transcribe the ratio table into rational numbers, we can calculate and check the cents for each note by running Math.log(werckmeisterRatio) / Math.log(2) * 1200. The precision you get is higher than the rounded numbers on Wikipedia.

Now, what's left is to multiply the frequency ratios by a real base frequency, but what frequency should we use? What's more puzzling is that the chart's base frequency uses C instead of concert A. But no worries, my friend tells me 415Hz (or even 390Hz) is usually used for the baroque A, and with that we can estimate the frequency of C. [This topic of pitch is always an area of debate. But one way the lower baroque pitch was determined, at least from a book I've read, is by measuring the frequencies produced by organs from the baroque era.] One way you could do it is by running baseFrequency = 415 / Math.pow(2, 9/12);

(If you're observant enough, you'll notice that's an equal-temperament formula; for the Werckmeister method, check out the source code.)
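To make the whole derivation concrete, here is a minimal sketch of the lookup approach. The ratio table is my transcription of the Werckmeister I (III) chart on Wikipedia, and the function names are illustrative – the actual implementation is in the gist linked at the end of this post:

```javascript
// Werckmeister I (III) frequency ratios relative to C, transcribed
// from the Wikipedia chart (illustrative; see the gist for the real code).
var WERCKMEISTER_RATIOS = [
  1,                                 // C
  256 / 243,                         // C#
  64 / 81 * Math.sqrt(2),            // D
  32 / 27,                           // D#
  256 / 243 * Math.pow(2, 1 / 4),    // E
  4 / 3,                             // F
  1024 / 729,                        // F#
  8 / 9 * Math.pow(2, 3 / 4),        // G
  128 / 81,                          // G#
  1024 / 729 * Math.pow(2, 1 / 4),   // A
  16 / 9,                            // Bb
  128 / 81 * Math.pow(2, 1 / 4)      // B
];

// Estimate the baroque C from a baroque A of 415 Hz (the ET step-down trick above).
var baseC = 415 / Math.pow(2, 9 / 12);

// note: 0..11 (semitones above C); octave: octaves above/below the base octave.
function werckmeisterFrequency(note, octave) {
  return baseC * WERCKMEISTER_RATIOS[note] * Math.pow(2, octave);
}

// Cents of a ratio, for checking against the rounded Wikipedia numbers.
function cents(ratio) {
  return Math.log(ratio) / Math.log(2) * 1200;
}
```

For example, cents(WERCKMEISTER_RATIOS[2]) comes out at roughly 192, matching the rounded value for D in the chart.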

Now, let's test the system using a favorite piece of mine – the 3rd movement of Brandenburg Concerto No. 5 by J. S. Bach. Years ago I typeset this piece in LilyPond, which generates the MIDI too (available for download together with the PDF). Thanks to Matt Westcott's brilliant jasmid library, we can interpret and play a MIDI file in the browser with just JavaScript, without any real MIDI devices or MIDI software on the computer, in any tuning system – though you'll have to tolerate my poor audio-synthesizing capabilities here.

So, for the first time in the history of music, let us listen to a live performance of Bach's Brandenburg Concerto in the HTML5 Audio concert hall, played on period JavaScript instruments, in the Werckmeister temperament, performed by the JavaScript chamber orchestra!

Free entrance  @ http://jabtunes.com/labs/werckmeister/ (Firefox attire recommended:)

p.s. relevant source code for midi number to frequency using werckmeister’s tuning @ https://gist.github.com/1406293

p.p.s. Actually, what's unclear to me is: in a chamber setting, what temperament do the instruments apart from the keyboard use? Real musicians, please enlighten me :)

Subdivision Surfaces

I haven't been creating anything productive or creative these past weekends, so I thought I could at least write a little post on some subdivision-surface work using the Catmull-Clark algorithm, done in three.js.

Suzanne the Monkey’s Head, getting 2 levels of subdivision. Click the image to run the example.

What the Catmull-Clark subdivision algorithm does is create a smoother-looking geometry from an existing one. This feature is common in 3D applications, eg. Mesh > Smooth in Maya, or the Subdivision Surface modifier in Blender. The algorithm works by subdividing the existing mesh (adding more vertices and splitting each face into 4 parts) and adjusting the original points such that the overall mesh looks smoother.
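To give a flavour of the first steps, here is a tiny sketch of the face-point and edge-point averaging the algorithm starts with (the names are mine; three.js's SubdivisionModifier does the full pass, including repositioning the original vertices):

```javascript
// A "face point" sits at the average of a face's vertices.
function facePoint(vertices) {
  var p = { x: 0, y: 0, z: 0 };
  for (var i = 0; i < vertices.length; i++) {
    p.x += vertices[i].x;
    p.y += vertices[i].y;
    p.z += vertices[i].z;
  }
  p.x /= vertices.length;
  p.y /= vertices.length;
  p.z /= vertices.length;
  return p;
}

// An "edge point" averages the edge's two endpoints and
// the face points of the two faces sharing that edge.
function edgePoint(v1, v2, facePoint1, facePoint2) {
  return {
    x: (v1.x + v2.x + facePoint1.x + facePoint2.x) / 4,
    y: (v1.y + v2.y + facePoint1.y + facePoint2.y) / 4,
    z: (v1.z + v2.z + facePoint1.z + facePoint2.z) / 4
  };
}
```

Each original face is then split into quads connecting these new points, which is where the "splitting each face into 4 parts" comes from.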

It seems this method of smoothing is preferred over other algorithms and approaches, like NURBS, because of its simplicity and predictability. The technique was perhaps also popularized when Pixar used it extensively in their 1997 award-winning short animated film Geri's Game. It's interesting to note that Edwin Catmull (who also developed algorithms like the Catmull-Rom spline) is the current president of Walt Disney Animation Studios and Pixar Animation Studios.

Somehow I started writing this implementation because I got distracted by the subdivision feature in Blender while trying to learn some basic modeling, and then started reading more about it (despite still not grasping Blender's basic modeling). I shouldn't pretend to be an expert on this topic, so if you'd like to know more, hopefully the articles and links I found most useful for understanding Catmull-Clark subdivision will be informative for you too.

  1. Wikipedia Article
  2. A simple and illustrative article
  3. Corner and Creases from a series of Subdivision posts in XRT Renderer’s development.
  4. "Subdivision Surfaces in Character Animation" – a paper outlining the techniques, beyond the original algorithm, used by Pixar.

To use subdivision in three.js, create a SubdivisionModifier object, then call modify to apply it to a geometry.

var modifier = new SubdivisionModifier( 3 );
modifier.modify( geometry );

If you use custom vertex colors in your original geometry, set
modifier.useOldVertexColors = true; before calling the modify method (a feature requested by @macouno).

Finally, if you have not seen the subdivision example, check it out at http://jabtunes.com/labs/3d/geometry_subdivison.html. Now, someone just needs to mesh up the drag-and-drop loading of three.js 3D models with this.

Music Colour Particles


My latest demo renders particles with WebGL, using the audio APIs for audio analysis.

Experience it @ http://jabtunes.com/labs/arabesque

A decent machine with a recent browser (Chrome or Firefox) is recommended. If your browser can't play this, check out the screen capture video here.


Somehow I'm attracted to the arts, both audible and visual (and programming as a scientific art?). Perhaps it was a way I could express myself; I picked up music past the age most kids do, and started learning this 3D stuff after graduating, without ever attending a graphics class.

What's interesting is that, long ago, I could admire music visualizers (eg. Milkdrop in Winamp) without understanding how they worked. Even after learning some programming, the amount of work needed to create such graphics and audio processing looked really scary. Fast forward to today: all this is actually possible with JavaScript running in browsers, and with the vast knowledge online, I just wanted to try it. What better time to release this demo than after the weekend Chrome 14 was hopefully pushed to the masses, shipping with the Web Audio API.


I needed lots of particle references and tried getting them from everywhere I could – Flash blogs, papers, and even cutting-edge HTML5/WebGL demos. But perhaps the greatest concentration of work combining music visualization with particles is in videos created with visual effects/motion graphics/3D authoring tools, for example Adobe After Effects with plugins like Red Giant's Trapcode suite (Form and Particular for particles, SoundKeys for audio) and Krakatoa. Among the top hits of such videos is a brilliant music visualization done by Esteban Diacono.


So let's start off with particles first. I did some simple experiments in ActionScript after reading "Making Things Move". Then I experimented with some JavaScript DOM-based particle effects for dandelions, then some for the jtendorion experiment. Not too long ago, I revisited different JavaScript techniques (DOM, CSS, 2D canvas) with my simple fireflies particles, also used for Renderflies.

So it started out with really lousy stuff, but I learnt a little more each time. Then I learnt about three.js and also looked at particle demos done in ActionScript. I wanted an easy way to create particle systems in three.js, so I wrote sparks.js, which is like the Flint or Stardust particle engines for Flash and its 3D engines. (Okay, this should have been another post.)

Part of my experiments with three.js and sparks particles was moving emitters along paths. I started creating random paths (béziers, splines, etc.), moving the emitter and chasing it with the camera. Deciding there were too many variables and things were getting messy, I decided to fix the x,y movement of the emitter and control the direction of particle movement instead. With particles moving away from the emitter, it still created an illusion of movement, and I stuck with that till the end of this demo.

Experimenting bezier paths and stuff.

Before continuing on particles, let’s look at the music and audio aspects.


Picking music for the demo was tough – I thought a piece too short might not be worth the effort, but music too long would be extremely difficult technically and artistically. I decided I could try recording some music first, before requesting friends to help, sourcing Creative Commons music online or, as a last resort, purchasing some. So I went with the first option, and my friends at the NUS orchestra kindly allowed me to use their practice room for the recording.

Out of the few pieces I narrowed it down to, the Arabesque turned out to be the most suitable, despite the mistakes made. Oh well, I need more practice. The piece was written by Claude Debussy, an impressionist French composer who also wrote Clair de Lune, as heard at the end of Ocean's Eleven and in several other music visualizations. I happened to stumble upon the Arabesque looking through a piano book I have, and kinda grew to love this piece too. Anyway, feel free to download the mp3 or ogg music.

Audio Analysis
Most non-Flash demos doing music visualization either use prebaked audio data or the Audio Data API, which is available only in Firefox. The Web Audio API for WebKit browsers provides a RealtimeAnalyser to perform FFT (fast Fourier transform, for converting into the frequency-amplitude domain) on the audio. It seems there aren't any libraries out there yet which do spectrum analysis on both Chrome and Firefox (although there are solutions that fall back from the Firefox audio API to Flash), so I thought I had to learn both APIs and see what I could do. Thanks to Mohit's posts on the Audio API, it wasn't difficult using the Web Audio API. There were kinks though, for example not being able to seek through audio correctly, or to return its playing time accurately.

The result of this work to simplify the APIs is a JavaScript class I call AudioKeys.js. What AudioKeys does is simply load an audio file, play it back and return its levels. Usage of AudioKeys is pretty simple.

var audio = new AudioKeys('arabesque1.ogg');
audio.load(function() {
  // when the audio file is ready to play
});

AudioKeys' API allows you to "watch" or monitor portions of the frequency spectrum, which means you can get separate amplitude levels from watchers monitoring different parts of the spectrum (eg. one for bass, one for treble, one for mids).

Add monitors and retrieve audio levels.

// adds an audio monitor
audio.addWatchers(startFreqBin, endFreqBin, lowLimit, highLimit);

// in your application loop
// returns the average amplitude level of the monitored section of audio

levels = audio.getLevels(0);

I probably got the inspiration for this from some of the video tutorials using Trapcode SoundKeys, and I created a tool to help view the frequency spectrum. Beyond spectrum viewing, this tool also allows the creation of monitor bins.

Drag a range on the spectrum to monitor specified parts.

On Firefox, AudioKeys falls back to MozAudioKeys, a Firefox implementation of a compatible AudioKeys API which internally uses dsp.js for its FFT processing. However, getting both implementations to produce the same levels is a pain (somehow the levels are not analyzed or normalized in the same manner), and I hacked MozAudioKeys to give usable levels for the demo. If anyone has a more elegant or correct way to do this, please let me know!

Audio Levels with Particles
Now we have to make the audio interact with the particle system. Even without audio, a particle system has plenty of parameters to tweak. You can check out the developer version of the demo at jabtunes.com/labs/arabesque/developer.html, which has controls to play around with the parameters of the particle system. This version also shows the particle count, current audio time, and average fps.

Now, sparks.js has its own engine loop, decoupled from the render loop in which data is processed for rendering to WebGL and the screen. The render loop uses Paul Irish's wrapper of the browsers' requestAnimationFrame() to paint the screen when there are resources. For example, if the rendering is heavy, the render loop might only run a few times per second. During this time, however, the sparks engine continues to run at perhaps 50 or 100 times a second, ensuring that particle birth, spawning, actions and death take place.
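A rough sketch of this decoupling, with illustrative names (the real code uses sparks.js's own engine timer and Paul Irish's shim):

```javascript
// The engine steps at a fixed rate; rendering happens whenever the
// browser grants a frame via requestAnimationFrame.
var ENGINE_STEPS_PER_SECOND = 50;

function startLoops(engine, renderer, scene, camera) {
  // Fixed-rate engine loop: particle birth, actions and death stay
  // steady even when rendering slows down.
  setInterval(function () {
    engine.update(1 / ENGINE_STEPS_PER_SECOND);
  }, 1000 / ENGINE_STEPS_PER_SECOND);

  // Render loop: only paints when the browser has resources.
  function render() {
    requestAnimationFrame(render);
    renderer.render(scene, camera);
  }
  render();
}
```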

Since the particle engine runs at a more predictable interval, I added audio.process() (which performs the FFT) to the particle engine's loop callback. After processing, the levels can be used to alter different parameters of the particle engine – say the velocity, colour, direction, acceleration, etc. Since I used 2 watchers – 1 for the bass and 1 for the mids – the summed levels range from 0 to 1.2. This is multiplied by the maximum height I'd like my emitter to reach, in this case 300 units in y. As the audio levels move up and down quickly, the peak height is recorded and allowed to fall a little more slowly when the audio level drops. This becomes the target height to achieve, and the difference between it and the current height is the difference in y-position. That y-difference is added to the y-component of the velocity of new particles, giving a bubbling flow of particles whenever there is amplitude in the music.
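In code, the level-to-height mapping might look something like this sketch (the names and the decay constant are illustrative, not the demo's actual values):

```javascript
var MAX_HEIGHT = 300;   // maximum emitter height in y
var DECAY = 0.95;       // how slowly the peak falls when levels drop

var peakHeight = 0;

function updateEmitter(emitter, levels) {
  // levels: summed watcher levels, roughly in 0 .. 1.2
  var target = levels * MAX_HEIGHT;
  // rise immediately, fall slowly
  peakHeight = Math.max(target, peakHeight * DECAY);
  // the y difference feeds the y velocity of newly spawned particles
  var yDiff = peakHeight - emitter.y;
  emitter.y = peakHeight;
  return yDiff;
}
```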

This is the major audio-driven factor in the demo. Other minor effects include changes in emission rate and a slight vignetting flicker. Now let's look at some other effects employed in the demo.

Hue change – a technique I first saw on wonderfl, and have kept using in my past particle demos. Based on the hue-saturation-lightness color model, the hue is rotated through the entire color circle. Perhaps I should try a color ramp next time?
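For what it's worth, the hue rotation itself is just a counter stepping around the color wheel and wrapping at 360 degrees – a minimal sketch (the saturation and lightness values here are arbitrary):

```javascript
var hue = 0;

// Step the hue around the color circle and return an HSL color string.
function nextParticleColor() {
  hue = (hue + 1) % 360; // wrap back to 0 after a full revolution
  return 'hsl(' + hue + ', 80%, 50%)';
}
```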

Colors – subtractive blending is used here, so it feels a little like a watercolor effect, rather than fireworks, which use additive blending.

Vignette – as mentioned earlier, this is done with a fragment/pixel shader, created using alteredq's new EffectComposer to simplify post-processing.

Blur – I started experimenting with some blur post-processing shader passes, but since I was generating the particles' sprite procedurally using 2D canvas and radial gradients, they looked soft and blurred enough that there was no need to add blur in post. Which is good, because that improves frame rates a bit. Particle sizes are randomly between 40 and 160, so there's a variety of small beads and large clots of ink.

I wanted to make the particles more interesting by spiraling them around the x-axis. I tried adding a wind force blowing in a circle, but it didn't turn out well. So I learnt that turbulence can be created with Perlin noise, or simplex noise, which is an improvement on Perlin's original algorithm. I used Sean Bank's port of simplex noise to JavaScript and added 4D noise, not knowing if I'd need it.

An undesired effect which looked like a spiral.

I then wrote a Turbulence action class for sparks.js; one can use it via sparksEmitter.addAction(new Turbulence());. Now the particles appear more agitated, depending on the parameters you use. This seemed to work quite well, and it runs pretty fast in Chrome, with at most a 5fps penalty. Firefox, however, is less forgiving and suffers a 10fps loss. So what to do? I decided to try shader-based Perlin noise instead, using the same vertex shader as in this turbulent cloud demo. Maybe it's not exactly accurate, since the turbulence doesn't affect the positions of particles in the sparks engine, but at least there's some illusion of turbulence, and it runs really quickly, with almost no fps penalty.

More optimizations.
Now for some final optimizations to make things run more smoothly, esp. with Firefox using 50% CPU running DSP in JavaScript. In three.js, a set of particles is reserved as a particle pool for WebGL, so that new WebGL particle-system buffers need not be created, at least from my understanding. So we have 2 pools of particles: one for the WebGL rendering and one for the sparks particle engine. Particles unused by the engine are "hidden" by placing them at infinite positions and changing their color to black. When the engine requires them, they are assigned to a particle's target and take on their true colour, size and position. Now, while a WebGL particle pool of 200,000 seems to run reasonably well, each render loop takes longer, since it takes more time to transfer the buffer from the CPU to the GPU every frame. Reducing the pool is a good idea, but not by too much, in case it runs out. So I reduced the three.js particle pool to 8,000, having observed that the particle count peaked at only 1k+, leaving nearly four-fold headroom. The frame rate now averages 30 to 60 frames per second (with Firefox peaking at 70-80fps).
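A sketch of the pooling trick, with made-up names (the real code works against three.js particle objects):

```javascript
// Unused WebGL particles are parked far away and blacked out; when the
// sparks engine needs one, it is handed out and made visible again.
var FAR_AWAY = 1e10; // "infinite" position, off-screen

function hideParticle(p) {
  p.position.set(FAR_AWAY, FAR_AWAY, FAR_AWAY);
  p.color.setRGB(0, 0, 0); // black, in case it still gets drawn
}

function createPool(particles) {
  var free = particles.slice();
  particles.forEach(hideParticle);
  return {
    acquire: function () {
      return free.pop() || null; // null when the pool runs out
    },
    release: function (p) {
      hideParticle(p);
      free.push(p);
    }
  };
}
```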

Final touches – Text Captions
In case a single particle trail and a bad recording could only capture limited attention, I decided to experiment with adding text captions before releasing. First, pick a nice font from Google Web Fonts; next, experiment with CSS3 effects. Text shadows are used for some blurring (inspired by this), with some CSS3 transition animations added, plus cross-browser testing (! – at least I'm only targeting FF and Chrome this time). To script text captions through the duration of the music, I decided to add the text in HTML. For example, take this:

<div class="action" time="20 28">
Please enjoy.
</div>

This means the message "Please enjoy" appears at 20 seconds and exits at 28 seconds into the music. I added a function getActions() to the render loop. This method accepts 2 parameters – the current and the previous playing time – whose values can be taken from audiokeys.getCurrentTime();

getActions() handles not just the entrances and exits: half a second before the actual time, it moves the caption into a "prepare to enter stage" position and state using a CSS class. At entrance time, the fade-in class with its transition properties is enabled, and that's where the fade-in happens. At the ending time, the "fade out" class with its animation transitions is enabled and the text starts to fade out. 2 seconds later, an "actionHide" class is applied, which basically adds "display:none;" to hide the element. It turns out this works pretty well (although text shadows are inconsistent across browsers – FF's are lighter than Chrome's), and potentially this could become a JavaScript library too. Pretty cool CSS3 effects, right? Just one additional point to note: the frame rate seems to drop by almost half when the CSS3 animations are running. My guess is that drop shadows, text shadows and gradients are generated as images internally by the browser, which could be a bit intensive, despite running natively. Anyway, that's my guess.
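The scheduling logic can be sketched like this (the structure and callback names are my reconstruction, not the demo's exact code):

```javascript
// Parse a time="20 28" attribute into start/end seconds.
function parseActionTime(timeAttr) {
  var parts = timeAttr.split(' ');
  return { start: parseFloat(parts[0]), end: parseFloat(parts[1]) };
}

// Called from the render loop with the current and previous playing time;
// fires enter/exit callbacks for captions crossing their time boundaries.
function getActions(actions, currentTime, lastTime) {
  actions.forEach(function (action) {
    if (lastTime < action.start && currentTime >= action.start) {
      action.onEnter(); // e.g. apply the fade-in CSS class
    }
    if (lastTime < action.end && currentTime >= action.end) {
      action.onExit(); // e.g. apply the fade-out CSS class
    }
  });
}
```

Passing both the current and previous time means a caption still fires even if the render loop skips right over its exact start time.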

So that's all I can think of for now. Let me know if you have any questions or comments! I'm not sure whether such a long write-up has bored anyone. I'm pretty glad to have finished this demo; perhaps just 6 months ago I might not even have imagined how I could do this, and I've learnt quite a bit along the way. I was also worried that I would spend lots of time on something that didn't turn out well, but I'm glad about the retweets and nice comments so far! Google Analytics also tells me that the average time on site is 4 minutes – the length of the song – which I worried was too long, fearing many would close their browsers 30 seconds into the piece, so I'm quite relieved by those numbers. \(^.^)

Despite coming a long way, I know there's a greater journey yet to travel. I've been thinking of implementing shadows, and perhaps trying out other particle and fluid techniques done more on the GPU. For example, mrdoob pointed me to a fairlight/directtovideo music particles demo. That's really the way to go! For now though, I might need to take a break to rest, since I've fallen sick over the past few days (not sure if it was too much WebGL or too many chocolates). I'll also need to concentrate on my day job before continuing to explore this journey!

3D Text Bending & UV Mapping

Another backdated post on some new features added to three.js's ExtrudeGeometry a couple of weeks ago.


The 2 most notable changes are:
1. 3D Text Bending – this allows text to be bent or wrapped along a spline path. The Curve classes share a common API, so straight lines, bézier curves and Catmull-Rom splines are all supported. The initial inspiration for this work came from this neat article: http://www.planetclegg.com/projects/WarpingTextToSplines.html

2. UV Mapping – finally managed to add UV mapping for the extrude geometry classes.

3D Text Bending

Curve classes contain methods to get tangent and normal vectors at a point t, and one use case is path wrapping/bending. CurvePath is now a connected set of curves implementing the Curve interface/API. Since it shares the same interface as the base Curve class, a bend parameter can take a curve, an array of curves (a CurvePath), or a path (from the drawing API).

To apply path bending to a shape, use shape.addWrapPath(path). Calling getTransformedPoints() then runs the transformation on extractAll(Spaced)Points() internally.

[edit] One important key to how path bending works is that the text is transformed according to the tangents of the spline path. I wrote the base Curve class to calculate the tangent based on the gradient over a tiny difference in t (kind of like using limits to derive the differential). While this can calculate the tangent of any generic sub-curve, a sub-curve should override it for faster and more accurate differentiation. I should not forget to thank Puay Bing for helping me with the derivative of the cubic bézier spline the other day!
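The finite-difference tangent can be sketched like this (mirroring the idea, not three.js's exact code):

```javascript
// getPoint(t) returns {x, y} on the curve for t in [0, 1].
// The tangent is approximated from two points a tiny delta apart.
function getTangent(getPoint, t) {
  var delta = 0.0001;
  var t1 = Math.max(0, t - delta);
  var t2 = Math.min(1, t + delta);
  var p1 = getPoint(t1);
  var p2 = getPoint(t2);
  var dx = p2.x - p1.x;
  var dy = p2.y - p1.y;
  var len = Math.sqrt(dx * dx + dy * dy);
  return { x: dx / len, y: dy / len }; // normalized tangent
}
```

For a straight line from (0,0) to (1,1), this gives (√2/2, √2/2) at any t; a real sub-curve (say, a cubic bézier) would override it with its analytic derivative, which is both faster and more accurate.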

Bend in text example (http://i53.tinypic.com/fyjgz.png)

Bend with spline (http://i55.tinypic.com/2m7vjt3.png)

Bend with connected curves (http://i54.tinypic.com/33u8xa9.png)

Cubic and Quadratic Beziers (http://i51.tinypic.com/30ws1t4.jpg)

UV Mapping
UV mapping applies a texture onto a material so that it wraps around the object.

ExtrudeGeometry now takes in material and extrudeMaterial: material is for the front/back faces, and extrudeMaterial for the extruded and beveled faces.

Using these 2 gradient textures
textures material

can produce
uv three

with sufficient subdivisions, you can get a smooth-looking model.
close-up on 20 subdivisions

advice learnt from mrdoob: "experiment, experiment, experiment"
experimented and flipped

i think i just fell in love with uv and gradients, until some transparency came
some transparency

Finally for some demos – TextBending & Text LOD’s on Github, UV Mapping at http://jsdo.it/zz85/h5EM and remake of mr.doob’s “Hello” at http://jsdo.it/zz85/hello

The end of the weekends means the end of my code twinkling for the week again. A lot more exciting/crazy stuff to explore and work on, unfortunately good things may have to wait.

At least I think I have a little small checks on my never ending TODO list. (^^)

Signing off geeknotes.

After adding UV mapping, alteredq added smooth shading to the WebGL text example. Pretty cool, right?