Resizing, Moving, Snapping Windows with JS & CSS

Imagine you have a widget built with HTML and CSS: how would you add some code so it gains the ability to resize itself? The behaviour is so ingrained in our present window managers and GUIs that it's easily taken for granted. There may well be plugins or frameworks that do this, but the challenge I gave myself was to do it in vanilla JavaScript, and to handle the resizing without adding more divs to the DOM. (Adding an additional div to use as a draggable bar is pretty common.)

Past work
Which reminds me, I wanted similar behaviour for ThreeInspector, and while hacking on that idea I went with the CSS3 resize property for the widget. The unfortunate thing was that min-width and min-height were broken with it for a really long time in WebKit (the bug was filed back in 2011, and I'm not entirely sure of its status now). Having been bitten by that bug, I become hesitant every time I think of the CSS3 resize approach.

Screenshot 2014-11-15 09.31.08

JS Resizing
So, for my own challenge, I started with a single purple div and added a bit of JS.

resize1

Done within 100 lines of code, this turned out not to be difficult. The trick is adding a mousemove handler to document (not document.body, as that fails in Firefox) and calculating when the mouse is within a margin of the edge of the div. Another reason to always add handlers to document instead of a target div is that you need mouse events even when the cursor moves out of the defined boundary. This is useful for dragging and resizing behaviours; especially with resizing, you wouldn't want to waste time hunting bugs because the events and the resizing divs are out of sync.

Also, for the first time I made extensive use of the event's clientX and clientY together with div.getBoundingClientRect(). That gives me almost everything I need for handling positions, sizes and events, although getBoundingClientRect might not be as performant as reading offsets.
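To make the edge detection concrete, here's a minimal sketch of the kind of hit-testing involved (the names, the 8px margin and the cursor logic are my own assumptions, not the exact code from the demo; `widget` stands for the purple div):

var EDGE = 8; // px margin around the border that counts as a resize handle

function hitTest(div, e) {
    // getBoundingClientRect() and clientX/clientY share the same viewport
    // coordinate space, so they can be compared directly
    var rect = div.getBoundingClientRect();
    var x = e.clientX, y = e.clientY;
    return {
        left:   Math.abs(x - rect.left)   < EDGE,
        right:  Math.abs(x - rect.right)  < EDGE,
        top:    Math.abs(y - rect.top)    < EDGE,
        bottom: Math.abs(y - rect.bottom) < EDGE,
        inside: x > rect.left && x < rect.right && y > rect.top && y < rect.bottom
    };
}

// listen on document (not document.body) so events keep coming
// even when the cursor leaves the div mid-resize
document.addEventListener('mousemove', function (e) {
    var hit = hitTest(widget, e);
    widget.style.cursor = hit.left || hit.right ? 'ew-resize'
                        : hit.top || hit.bottom ? 'ns-resize'
                        : 'default';
});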

What's nice about using JS versus a pure CSS3 resize is that you get to decide which sides of the div you wish to allow resizing from. I went for the 4 sides and 4 corners, and the fun had just started, so next I implemented moving.

Moving
Handling basic moving / dragging just needs a few more lines of code. Pseudocode: on mousedown, check that the cursor isn't on an edge (those are reserved for resizing), then store where the cursor is and what the bounds of the box are. On mousemove, update the box's position.
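In JS, that pseudocode might look something like the following sketch (variable names are mine, and hitTest is the helper from the resize sketch above):

var dragging = null;

document.addEventListener('mousedown', function (e) {
    var hit = hitTest(widget, e);
    // edges are reserved for resizing; only start a drag from the inside
    if (!hit.inside || hit.left || hit.right || hit.top || hit.bottom) return;
    var rect = widget.getBoundingClientRect();
    // remember where the cursor grabbed the box, relative to its corner
    dragging = { dx: e.clientX - rect.left, dy: e.clientY - rect.top };
});

document.addEventListener('mousemove', function (e) {
    if (!dragging) return;
    widget.style.left = (e.clientX - dragging.dx) + 'px';
    widget.style.top  = (e.clientY - dragging.dy) + 'px';
});

document.addEventListener('mouseup', function () {
    dragging = null;
});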

Still simple, so let’s try the next challenge of snapping the box to the edges.

Snapping
Despite the bad things Mac might say about PC, one thing that has been pretty good since Windows 7 is the Snap feature. On my Mac I use Spectacle, which is a replacement for Windows' window-docking management. I took inspiration from this Windows feature and implemented it with JS and CSS.

resize2

One sweet detail in Snap is the way a translucent window shows where the window would dock or snap into place before you release your mouse. So in my implementation, I used an additional div with slight transparency, one z-index lower than the div I'm dragging. The CSS transition property was used for a more organic experience.

There are slight deviations from the actual Aero Snap experience that Windows users may notice. In Windows, dragging a window to the top snaps it full screen, while dragging it to the bottom of the screen has no effect. In my implementation, the window can be docked to the upper half or the lower half, or go full screen if it gets dragged further beyond the edge of the screen. In Windows, a vertical half is only possible with a keyboard shortcut.

Another difference is that Windows snaps when the cursor touches the edge of the screen, whereas my implementation snaps when the div's edge touches the browser window edge. I thought this might be better, because users typically make smaller movements for non-operating-system gestures. One last difference is that Windows sends tiny ripples out from the point where the cursor touches the screen edge. Ripples are nice (I've noticed they are used frequently in Material Design), but I'll leave them as an exercise for another time.

As afterthoughts, I added touch support and limited mousemove updates to requestAnimationFrame. Here's the demo; feel free to try it and check out the code on CodePen.
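The requestAnimationFrame throttling can be as simple as the sketch below (again an assumption of how it might look, not the exact code; updateWidget stands in for whatever resize/move/snap logic applies):

var pendingEvent = null;

document.addEventListener('mousemove', function (e) {
    // schedule at most one update per animation frame
    if (!pendingEvent) {
        requestAnimationFrame(function () {
            updateWidget(pendingEvent);
            pendingEvent = null;
        });
    }
    pendingEvent = e;
});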

See the Pen Resize, Drag, Snap by zz85 (@zz85) on CodePen.

Exploring Simple Noise and Clouds with Three.js

A while back I attempted writing a simple CloudShader for use in three.js scenes. Exploring the use of noise, it wasn't difficult to create clouds that don't look too bad visually. Another idea was to add crepuscular rays ("god rays") to make the scene more interesting. Failing to mix the two together correctly on my first tries, I abandoned these experiments, until one fine day (or rather one night) I decided to give it another shot and finally got it to work, so here's the example.


(Now comes with sliders for clouds control!)

In this post, I'm going to explain some of the ideas behind generating procedural clouds with noise (the two topics frequently go hand in hand). Noise might be bread and butter in the world of computer graphics, but it took me a while to wrap my head around it.

Noise
Noise can be described as a pseudo-random texture. Pseudo-random means that it might appear to be totally random, but, being generated by a computer, it is not. Many refer to noise generically as Perlin noise (thanks to the work of Ken Perlin), but there are really different forms of noise, e.g. Perlin noise, simplex noise, value noise, wavelet noise, gradient noise, Worley noise and simulation noise.

The approach I use for my clouds could be considered value noise. Let's start by creating some random noise in a DataTexture of 256 by 256 pixels.


// Generate random noise texture
var noiseSize = 256;
var size = noiseSize * noiseSize;
var data = new Uint8Array( 4 * size );
for ( var i = 0; i < size * 4; i ++ ) {
    data[ i ] = Math.random() * 255 | 0;
}
var dt = new THREE.DataTexture( data, noiseSize, noiseSize, THREE.RGBAFormat );
dt.wrapS = THREE.RepeatWrapping;
dt.wrapT = THREE.RepeatWrapping;
dt.needsUpdate = true;

Now if we were to render this texture, it would look really random (obviously), like a broken TV channel.

noisebw
(we set alpha to 255, and r=g=b to illustrate the example here)

If we were to use the pixel values as a height map for a terrain (another use for noise), it would look really disjointed and random. One way to fix that is to interpolate the values from one point to another, which gives smooth noise. The nice thing about textures is that this interpolation can be done on the graphics unit automatically. By default, or by setting the `.minFilter` and `.magFilter` properties of a THREE.Texture to `THREE.LinearMipMapLinearFilter`, you get almost free interpolation when you read a point on the texture that falls between two or more pixels.

Still, this isn't enough to look anything like clouds. The next step is to apply fractional Brownian motion, a summation of successive octaves of noise, each with higher frequency and lower amplitude. This produces fractal noise, which gives a more interesting and continuous texture. I'm doing this in the fragment shader with a few lines of code…


float fnoise(vec2 uv) {
    // tNoise is the sampler2D uniform holding the random DataTexture
    float f = 0.;
    float scale = 1.;
    for (int i = 0; i < 5; i++) {
        scale *= 2.;
        f += texture2D(tNoise, uv * scale).x / scale;
    }
    return f;
}

Given that my data texture has 4 channels (RGBA), one could pull out 3 or 4 components if needed, like so:


vec3 fNoise(vec2 uv) {
    vec3 f = vec3(0.);
    float scale = 1.;
    for (int i = 0; i < 5; i++) {
        scale *= 2.;
        f += texture2D(tNoise, uv * scale).xyz / scale;
    }
    return f;
}

Now if you were to render this, it might look similar to the perlinNoise function in Flash/ActionScript or the Clouds filter in Photoshop.
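For completeness, here's a sketch of how the DataTexture from earlier could be fed to such a fragment shader through a ShaderMaterial. The uniform name tNoise matches the one assumed in the snippets above, and the script-tag IDs are placeholders for wherever your shader source lives:

var cloudMaterial = new THREE.ShaderMaterial({
    uniforms: {
        tNoise: { type: 't', value: dt },   // the 256x256 random DataTexture
        time:   { type: 'f', value: 0 }     // handy for animating the clouds
    },
    vertexShader: document.getElementById('cloudVertexShader').textContent,
    fragmentShader: document.getElementById('cloudFragmentShader').textContent,
    transparent: true,
    side: THREE.DoubleSide
});

// in the render loop, drift the clouds over time
cloudMaterial.uniforms.time.value = performance.now() / 1000;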

Screenshot 2014-11-08 22.16.53

2D Clouds
Although we now have a procedural cloud shader, how do we integrate it into a three.js scene? One way is to texture it over a sphere or a skybox, but the approach I use is to create a paraboloid shell generated with ParametricGeometry, similar to the approach Steven Wittens used to render auroras in his demo "NeverSeenTheSky". The code / formula I use is simply this:


function SkyDome(i, j) {
    i -= 0.5;
    j -= 0.5;
    var r2 = i * i * 4 + j * j * 4;
    return new THREE.Vector3(
        i * 20000,
        (1 - r2) * 5000,
        j * 20000
    ).multiplyScalar(0.05);
}
var skyMesh = new THREE.Mesh(
    new THREE.ParametricGeometry(SkyDome, 5, 5),
    shaderMaterial
);

The remaining step, and probably the most important one for simulating clouds, is to use and tweak the values from the fractal noise to obtain the kind of clouds you want. This is done in the fragment shader, where you can decide what thresholds to apply (e.g. cutting off the high or low values) or apply a curve or function to the signal. Two articles which gave me ideas are Hugo Elias's clouds and Iñigo Quilez's dynamic 2D clouds. Apart from these, I added a function (where o is opacity) to fade the clouds nearer the horizon, to give the illusion of clouds disappearing into the distance.


// apply more transparency towards the horizon
// to create the illusion of distant clouds
o = 1. - o * o * o * o;

Crepuscular rays
I'm going to take a break from explaining my ideas for producing the clouds here. This might be disappointing if you're expecting more advanced shading techniques, like ray marching or volumetric rendering, but I'm trying to see how far we can go with just the basic, easy stuff. If adding the crepuscular rays works, it produces an impressive enough effect that we can avoid the complicated stuff for the moment.

So for the god rays, I started with the webgl_postprocessing_godrays example in three.js, implemented by @huwb using a technique similar to Crytek's. After some time debugging why my scene didn't render correctly, I realized that my cloud shader (a ShaderMaterial) didn't play well with the depth rendering step (which overrides the scene's materials with the default MeshDepthMaterial) needed to compute occlusion correctly. To fix that, I manually override materials for the depth rendering step and pass a uniform to the CloudShader to discard or write depth values based on the color and opacity of the clouds.

Conclusion
I hope I've introduced the ideas behind noise and shown how simple generating clouds can be. One way to start experimenting is to use Firefox, whose improved developer tools now include a Shader Editor that allows experimenting with shaders in real time. Much is up to one's imagination and creativity, for example turning the clouds into moving fog.

Clouds are such a common and interesting topic that I believe there is plenty of (advanced) literature on them (websites, blogs and papers). As mentioned earlier, the links I found to be good starting points are those by Hugo Elias and Iñigo Quilez. One website I found that explains noise in an easy-to-understand fashion is http://lodev.org/cgtutor/randomnoise.html

Before ending, I would love to point out a couple of other realtime browser-based examples I love, each implemented with a very different or creative approach.

1. mrdoob’s clouds which uses billboards sprites – http://mrdoob.com/lab/javascript/webgl/clouds/
2. Jaume Sánchez’s clouds which uses css – http://www.clicktorelease.com/code/css3dclouds/
3. IQ Clouds which uses some form of volumetric ray marching in a pixel shader! – https://www.shadertoy.com/view/XslGRr

So if you're interested, read up and experiment as much as you can; the possibilities are never-ending!

Rendering Lines and Bezier Curves in Three.js and WebGL

It’s relatively simple to draw lines and curves with CanvasRenderingContext2D. Not so with WebGL. In this post, I’ll explore some ways to draw lines and (quadratic bezier) curves in three.js with webgl (and on the gpu).

It was probably Firefox 3 that led me to explore the use of the Canvas element and the 2d context. Then it was Firefox 4 that led me to explore the webgl context, followed by discovering three.js. That was probably how I got involved with 3d graphics. The point is that there’s some connection between the 2nd and 3rd dimensions. Enough with my history and let’s start exploring some vector graphics.

Straight lines
A straight line connects 2 points.

2D Canvas – Here’s how we draw a line from (x1, y1) to (x2, y2) in canvas 2d.

ctx = canvas.getContext('2d');
ctx.lineWidth = 2;
ctx.strokeStyle = 'black';
ctx.beginPath();
ctx.moveTo(x1, y1);
ctx.lineTo(x2, y2);
ctx.stroke();

Three.js/WebGL – Now here’s how we do it in three.js/WebGL.

geometry = new THREE.Geometry();
geometry.vertices.push(new THREE.Vector3(x1, y1, 0));
geometry.vertices.push(new THREE.Vector3(x2, y2, 0));
material = new THREE.LineBasicMaterial( { color: 0xffffff, linewidth: 2 } );
line = new THREE.Line(geometry, material);
scene.add(line);

Drawing straight lines in webgl requires a little more code (even with three.js), but not really too much additional code for now.

Quadratic Bézier
A quadratic bézier curve connects points (x0, y0) and (x2, y2), with its shape guided by the control point (x1, y1).

2D Canvas – Just one function name change from a straight line.

ctx = canvas.getContext('2d');
ctx.lineWidth = 2;
ctx.strokeStyle = 'black';
ctx.beginPath();
ctx.moveTo(x0, y0);
ctx.quadraticCurveTo(x1, y1, x2, y2);
ctx.stroke();

Three.js/WebGL – Now it gets a little different.

SUBDIVISIONS = 20;
geometry = new THREE.Geometry();
curve = new THREE.QuadraticBezierCurve3();
curve.v0 = new THREE.Vector3(x0, y0, 0);
curve.v1 = new THREE.Vector3(x1, y1, 0);
curve.v2 = new THREE.Vector3(x2, y2, 0);
for (j = 0; j <= SUBDIVISIONS; j++) {
    // include j = SUBDIVISIONS so the curve ends exactly at (x2, y2)
    geometry.vertices.push( curve.getPoint(j / SUBDIVISIONS) );
}
material = new THREE.LineBasicMaterial( { color: 0xffffff, linewidth: 2 } );
line = new THREE.Line(geometry, material);
scene.add(line);

Unlike the canvas 2D approach, we can't simply change one function call, because three.js/WebGL doesn't support bezier curves as a primitive geometry. We need to subdivide the curve and draw it as line segments. With sufficient line segments, they closely approximate a nice curve.

There are a few drawbacks though:

1. There needs to be sufficient subdivisions or the curve segments might look like straight lines instead
2. More objects and draw calls are created in the process.
3. ANGLE does not render lineWidth greater than 1, while still conforming to the WebGL specification.

For #1, the number of divisions can be estimated so the resulting curve still looks pleasant at a particular zoom level. For #2, one can switch to BufferGeometry to reduce overhead.
#3 is the least easily fixed. I encountered it with my bezier lights experiment last year.

WebGL LineWidth Limitations

Screenshot 2014-09-07 16.12.06

My “fix” then was to alter the experiment a little.

Screenshot 2014-09-07 16.13.37

Of course that doesn’t exactly fix the lineWidth issue and someone tweeted about the approach they used for drawing lines in WebGL.

So this was last year. This year I revisited this thinking about how to render lines in WebGL for a visualization project I’m working on.

Screenshot 2014-09-07 16.19.20
(the screenshot here employs the easier canvas 2D approach)

The handy ParticleGeometry class

Earlier I created a class called ParticleGeometry for rendering the leaves in a cherry blossom experiment.

ParticleGeometry (probably not the best name, again) is a wrapper around BufferGeometry (for performance reasons). It was created to render rotatable sprites/particles in a more optimized fashion without being too difficult to use. Two triangles are used for each sprite/texture, and they are rotated in the vertex shader (instead of in JS) using values passed in via attributes. I find this approach more efficient than the typical approaches in three.js (e.g. using multiple plane meshes or the SpritePlugin). Later I discovered I could easily modify this class to draw lines and curves.

ParticleGeometry for Lines

Let's start by initializing a ParticleGeometry. LINES is the number of lines or sprites we allocate for our line pool. Internally, it creates twice that number of triangles along with the other required attributes.

lineGeometry = new THREE.ParticleGeometry( LINES, size );

Straight Lines in WebGL.
The first modification to ParticleGeometry is a setLine method.

THREE.ParticleGeometry.prototype.setLine = function(line, x1, y1, x2, y2, LINE_WIDTH)

Based on the line width, this method updates the two triangles for the referenced line from the starting point to the ending point. With this, we solve the problem of rendering lines with widths greater than 1. In fact, we can have custom individual line widths, which is probably more difficult to do with the default LineBasicMaterial.
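The geometry update itself boils down to offsetting both endpoints perpendicular to the segment by half the line width and writing the four corners into the buffer. A rough sketch of that math (my own simplification, not the actual class code):

// computes the 4 corners of a quad covering a line segment of a given width
function lineQuad(x1, y1, x2, y2, width) {
    var dx = x2 - x1, dy = y2 - y1;
    var len = Math.sqrt(dx * dx + dy * dy) || 1;
    // half-width offset along the unit normal of the segment
    var nx = -dy / len * width * 0.5;
    var ny =  dx / len * width * 0.5;
    return [
        x1 + nx, y1 + ny,   // corner a
        x1 - nx, y1 - ny,   // corner b
        x2 + nx, y2 + ny,   // corner c
        x2 - nx, y2 - ny    // corner d (triangles: a-b-c and b-d-c)
    ];
}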

Demo: testing lines

Benchmarking Canvas 2D performances

Before we continue, I wanted to see how fast canvas 2D renders lines and curves.

2000 lines – 60fps
5000 lines – 38fps
10000 lines – 20fps.

500 quadratic curves – 60fps
1000 quadratic curves – 40fps
2000 quadratic curves – 25fps
5000 quadratic curves – 12fps

These numbers look pretty decent to me. However these are stroked with lineWidth = 1. When I increase lineWidth to 2, the frame rates drop.

2000 lines – 38fps
500 quadratic curves – 6fps

In contrast, with ParticleGeometry.setLine() I get 30fps for 10,000 lines. A tad faster, but not dramatically faster than canvas 2D. The biggest difference came when I increased the line width: at lineWidth 5 I could still get 25fps, and 18fps at lineWidth 10.

The script used for running these numbers can be found in this gist.

Rendering Bezier curves with WebGL
So if we are able to draw straight lines with variable widths, how do we tackle bezier curves? I started experimenting with a couple of concepts.

Concept 1 – place a 2D canvas as an overlay above the WebGL context and use the 2D context API. Depending on your needs, this might not be a bad idea, given that canvas 2D performance isn't bad at all. It just takes a bit of additional effort projecting the 3D points back into 2D for rendering.

Iteration 2 – extend the ParticleGeometry.setLine() technique to bezier curves. Based on the performance numbers, if we have a pool of 4000 lines to work with at 60fps, we can calculate how many line segments we want per curve. Say we are satisfied with 20 segments per curve: we could then draw 200 curves. This could work, but based on our canvas 2D numbers it doesn't look very promising.

Iteration 3 – the Loop-Blinn approach.
A pretty well known technique developed by Charles Loop and Jim Blinn at Microsoft for rendering vector art on the GPU. Their paper is called "Resolution Independent Curve Rendering using Programmable Graphics Hardware".

I’ve used a similar approach earlier with this three.js vector font experiment.


In this approach, each triangle is used to fill a quadratic curve segment. However, to stroke a bezier curve we need to draw a line instead of filling an area, so I modified the fragment shader GLSL code with a threshold.


float inCurve(vec2 uv) {
    float d = uv.x * uv.x - uv.y;
    if (d > -0.05 && d < 0.) return -1.;
    return 1.;
}

Demo: stroke bezier with modified inCurve function

Performance: 1000 curves – 60fps, 5000 curves – 30fps, 10000 curves – 20fps. This looks like the best performance for bezier curves so far. However, controlling the line width is difficult in this approach.

So let's turn on GL_OES_standard_derivatives to better estimate the screen-space distance to the curve. Demo: stroke bezier with modified sdCurve function

Now we have slightly better control of the lineWidth, however there’s still an issue. Because we are only using 1 triangle, rendering becomes a little problematic when the stroke width is thicker than the height of the triangle.

Iteration 4 – render the bezier curve stroke in the fragment shader using a distance function.

A couple of days ago I watched the talk about GLyphy. In short, GLyphy allows you to render vector fonts by encoding the vectors in a texture to be rendered on the GPU. What was interesting was the way it had to convert bezier curves to arc segments so they would be easier to compute in the shaders. It is also interesting how the original concept came from research on rendering vector graphics.

Similarly, Taylor Holliday ported the HLSL code found in this paper to produce this GLSL shader code for rendering a bezier curve by distance approximation, and I was excited to be able to use it.

Screenshot 2014-09-08 08.01.27

The next challenge for me was how to employ this technique with my ParticleGeometry class, more specifically how to position the triangles so they could be used to draw the bezier strokes. I added a method just to do this.

Basically I use two triangles to cover a quad over the area where the bezier stroke will be drawn. Based on the line width, I give sufficient padding on the left, right, top and bottom so that the stroke does not get clipped. I pass the transformed control points to the shaders via attributes, and this works pretty well!
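As a sketch of that bounding step (names and the padding rule are my own; a quadratic bezier always lies within the convex hull of its three control points, so taking their min/max plus padding is enough):

// padded bounding quad for a quadratic bezier (p0, control point cp, p2)
function bezierQuad(x0, y0, cx, cy, x2, y2, lineWidth) {
    var pad = lineWidth * 0.5 + 1;
    var minX = Math.min(x0, cx, x2) - pad,
        maxX = Math.max(x0, cx, x2) + pad,
        minY = Math.min(y0, cy, y2) - pad,
        maxY = Math.max(y0, cy, y2) + pad;
    // two triangles cover [minX, maxX] x [minY, maxY];
    // the control points themselves go to the shader via attributes
    return { minX: minX, minY: minY, maxX: maxX, maxY: maxY };
}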

Demo: stroke bezier with fast distance estimation

There are however some caveats with this technique:
1. It breaks for thicker line widths. However, it should be sufficient for most small line widths.
2. It breaks when the control point is collinear with the start and end points. For that, iq has a fix which draws a straight line in these scenarios.
3. It breaks when the control point is almost collinear with the start and end points. This disturbs me a little, but it can be "fixed" by drawing a straight line with a bigger threshold.

Iteration 5 – solve for the actual distance. To fix the remaining issues with the 4th iteration, I found an exact distance-to-bezier-curve solver here.

The nice thing is that I just needed to swap one GLSL function call from the 4th iteration and we're done.

Demo: stroke bezier with distance solver

Now for some numbers:

4th iteration
1000 curves (~line width 1-3) = 45fps
2000 curves (~line width 1-3) = 30fps

5th iteration
1000 curves (line width 1) = 45fps
1000 curves (line width 3) = 30fps
2000 curves (line width 1) – 23fps

Here we see that the 5th approach is slightly slower, but not by much. The values are pretty similar to canvas 2D bezier speeds at lineWidth 1, but the WebGL performance scales better for larger line widths.

Results
From thin straight lines
Screenshot 2014-09-08 09.42.15

Thick bezier lines
Screenshot 2014-09-08 09.44.38
in WebGL (alright, it isn't aesthetically better yet, but it's a WIP).

Conclusion

Hopefully I have shown you in this post how you could render lines and bezier curves in both canvas 2d and webgl.

I have shown how the Canvas 2D API is relatively easy to use, and that the performance of 2D canvas is nothing shabby. For most use cases, why not use it?

I have also shown that despite challenges with working with webgl at a lower level, it is still possible to render lines and curves at great quality and speed.

The additional effort might get you some performance gains for large quantities of curves with thickness. There are also some other reasons why you may opt to draw bezier curves the WebGL way:

– make use of post processing
– integrate gpgpu
– keep things in the webgl workflow
– control other little details (eg. shading)

I’ll leave you to consider these tradeoffs when deciding what to use.

Lastly, I’ll leave you with some inspirations on some cool demos that can be created with bezier curves.

Bateria by grgrdvrt
Fat cat by Roxik
Fluid Jelly by Fabien Bizot
Muscular-Hydrostats by soulwire.

JS Parsers in JS

In this post, I’ll share a little about some tools for parsing javascript in JS.

There was a time JavaScript wasn't taken seriously. Then V8 came along (ok, SpiderMonkey and Nitro too). Then node.js. Then I watched Ryan Dahl's original 2009 JSConf talk about node.js online, and it was like another revelation. I used node.js for my final year project in school, and the early adoption of node.js at Zopim was also one reason why I joined them.

Anyway, the point is that a couple of years ago I might have thought it crazy for anyone to write a JS parser in JS (not something you'd want to run in a slow and hard-to-debug scripting language). But fast forward to today, and we have at least 3 pretty mature parsers for JS written in JS.

1. UglifyJS
UglifyJS is probably one of the most popular tools for JS minification in general. It's written in JS, yet it seems to be faster than its Java counterparts yui-compressor and closure-compiler. Under the hood, UglifyJS parses the code into an AST, analyzes and transforms it, and converts it back into uglified/minified/beautified JS code. One reason I might prefer it over the Closure Compiler is its ease of installation (it lives in the node and npm ecosystem, assuming you're on a fresh environment).

2. Esprima
Esprima (to quote from its site) is a high performance, standards-compliant ECMAScript parser written in ECMAScript. It was created by Ariya Hidayat (who also wrote PhantomJS). Basically it takes in JavaScript code and turns it into a Mozilla Parser AST. It does the job really well and fast; there are benchmarks showing Esprima parsing even faster than UglifyJS. However, it merely produces the syntax tree, so code analysis, mangling etc. needs to come from somewhere else.

The good news is that there are already tools out there for minification, e.g.:
ESMangle for mangling the AST for minification
ESCodegen to turn a Mozilla AST back into code (a minimal round trip is sketched below). There's a great article on using both tools and the Mozilla AST as an IR here.
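For a feel of how these pieces snap together, here's a minimal parse-and-regenerate round trip using esprima and escodegen from npm (the esmangle step is optional and omitted here):

var esprima = require('esprima');
var escodegen = require('escodegen');

var ast = esprima.parse('var answer = 6 * 7;');   // Mozilla-style AST
// ...analyze or transform the AST here...
var output = escodegen.generate(ast);             // back to JS source

console.log(output); // var answer = 6 * 7;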

However, parsing JS code is not limited to code minification. You could do code completion, static code analysis etc., and there are already a couple of really cool applications built on top of Esprima:
scrubby – a Bret Victor inspired code editor
eslint – an Esprima-based JSLint/JSHint alternative
jscs – a JS code style checker. I think it even ships with Mr.doob's Code Style™ now.
Code Painter – a JS code formatter; pretty nice, though unfortunately not very developed at this point in time (and there are plans for code formatting in jscs)
Istanbul for code coverage

3. Acorn.js
So Esprima is fast (and by fast I mean it takes at most a couple of seconds to parse several MB of JS code, even in the browser). But Marijn Haverbeke, the brilliant creator of CodeMirror, decided to write another JS parser named Acorn.js. In less code. And it performs faster. Even as Esprima continued to improve, Acorn is still slightly faster and smaller.

The nice thing is that both Acorn and Esprima use the Mozilla (Firefox) JS AST format, which allows one to be swapped for the other. For example (iirc in the Google Caja project), acorn.js can be used with escodegen for sandboxing JS widgets.

4. Others
There are other JS parsers, but I haven't tried them enough to comment on them.

So what’s so great about JS tools in JS? I’m thinking of a few reasons.
– It's nice to have JS tools within the JS/CommonJS/npm ecosystem.
– You can run it on node
– You can run it in the browser (using require/browserify/duo/whatsoever)
– JS can run elsewhere, perhaps? (eg. vert.x, yosemite maybe?)

Marijn has a JS minification service, but that runs server side. Here's an idea: it takes less than 15 minutes to package UglifyJS into a single JS file that provides the same functionality running completely in the browser.

Well, JS is treated so much like C these days that you might try using emscripten to compile GIMP, Wine, X Windows and so on to asm.js running in Firefox. (Sorry, that's a spoiler for "The Birth and Death of JavaScript" if you haven't watched it, but you still should.)

On another note, one reason you would parse JS in JS in the browser is for web-based developer editors, something like Chrome Dev Editor, Cloud9 or Nitrous.IO. Yeah, instant feedback.

If JS is still going to be around.

Free Birds

Earlier this year Google announced DevArt, a code and art collaboration with London’s Barbican Centre. Chinmay (a fellow geek in Singapore) and I decided to collaborate on a visual and aural entry for the competition.

The competition attracted a number of quality works, and the short story is that we didn't win (the long story includes how my last git push failed before the deadline while tethering internet over my mobile in Nepal). This post is, however, about the stuff we explored, learned and created along the way. We also shared some of it at MeetupJS at the Singapore Microsoft office not long ago. Being open is one aspect of the DevArt event too: sharing some of the source code, workflow and ideas around.

I wanted to reuse some of my work on GPU flocking birds for the entry, while making it more interactive with a creative idea. We decided to have a little bird on an iPad whose colors you could customize before freeing it into a bigger canvas of freed birds.

Chinmay, who is passionate about acoustics, took the opportunity to dabble more with the Web Audio API. (He previously created auralizr, a really cool web app that transports you to another place using a convolution filter.) Based on an excellent bird call synthesis article, Chinmay created a JavaScript + Web Audio API library for generating synthesized bird calls.


Go ahead, try it, it’s interactive with a whole bunch of knobs and controls.

As for me, here’s some stuff I dabbled in for the entry

  1. Dug out my old library contact.js for some node.js + websockets interaction.
  2. Mainly used code from the three.js gpgpu flocking example, but upgraded it to use BufferGeometry.
  3. Explored GPGPU readback to JavaScript in order to get positional information for the birds, to be fed into bird.js for positioning audio.
  4. My first use of xgui.js, during which I uncovered and fixed some issues; otherwise a really cool library for prototyping.
  5. Explored more post-processing effects with non-photorealistic rendering; unfortunately it wasn't good enough within that time frame.

So that's it for the updates for now. Here are some links.

Also, another entry worth mentioning is Kuafu by yi-wen lin. I thought it shared a couple of similarities with our project and was more polished, but unfortunately it didn't make it to first place.

WebGL, GPGPU & Flocking Birds – The 3rd Movement

This is the final part of a 3-part series covering the long journey of adding the interactive WebGL GPGPU flocking bird example to three.js, over a span of almost a year (no kidding, looking at some of the timestamps). Part 1 introduces flocking, Part 2 talks about writing WebGL shaders for accelerated simulation, and Part 3 (this post) hopefully shows how I've put things together.

(Ok, so I got to learn about a film about Birds after my tweet about the experiment late one night last year)

Just as it felt like forever to refine my flocking experiments, it felt just as long writing this. However, watching Robert Hodgin's (aka flight404) inspiring NVScene 2014 session "Interactive Simulations (where nobody has to die)" (where he covered flocking too) was a motivation booster.

So much so that I would recommend watching that video (starting at 25:00 for flocking, or the entire thing if you have the time) over reading this. There was so much I could relate to in his talk, like flocking on the GPU, and so much more I could learn from. No doubt Robert has always been one of my inspirations.

Now back to my story, we were playing with simulated particles accelerated with GPGPU about 2 years ago in three.js.

With some pseudo code, here’s what is happening

/* Particle position vertex shader */
// As usual rendering a quad
/* Particle position fragment shader */
pos = readTexture(previousPositionTexture)
pos += velocity // update the particle's position
color = pos // writing new position to framebuffer
/* Particle vertex shader */
pos = readTexture(currentPositionTexture)
gl_Position = pos
/* Particle fragment shader */
// As per usual.

One year later, I decided to experiment with particles interacting with each other (using the flocking algorithm).

For a simple GPGPU particle simulation, you need 2 textures (one for the current positions and one for the previous, since reading from and writing to the same texture could cause corruption). In the flocking simulation, 4 textures/render targets are used (currentPositions, previousPositions, currentVelocities, previousVelocities).
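In three.js terms, each of those textures is a WebGLRenderTarget, and "previous" and "current" are just two targets that get swapped every frame. A minimal sketch of that ping-pong setup, using the render-to-target signature from that era (the simulation scene, camera and shader names are placeholders of mine):

var SIZE = 32; // 32 x 32 texture = 1024 particles

function makeTarget() {
    return new THREE.WebGLRenderTarget(SIZE, SIZE, {
        minFilter: THREE.NearestFilter,
        magFilter: THREE.NearestFilter,
        format: THREE.RGBAFormat,
        type: THREE.FloatType // float texture so positions aren't clamped to 0-255
    });
}

var positionA = makeTarget(), positionB = makeTarget();

function simulateStep() {
    // read from A, write to B (never read and write the same target)
    positionShader.uniforms.texturePosition.value = positionA;
    renderer.render(simulationScene, simulationCamera, positionB);

    // swap so the next frame reads the freshly written data
    var tmp = positionA; positionA = positionB; positionB = tmp;
}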

There would be one render pass for updating velocities (based on the flocking algorithm)

/* Velocities Update Fragment Shader */
currentPosition = readTexture(positionTexture)
currentVelocity = readTexture(velocityTexture)

for every other particle,
    otherPosition = readTexture(positionTexture)
    otherVelocity = readTexture(velocityTexture)
    if otherPosition is too close
        currentVelocity += oppositeDirection // Separation
    if otherPosition is near
        currentVelocity += followDirection // Alignment
    if otherPosition is far
        currentVelocity += towardsDirection // Cohesion

color = currentVelocity

Updating positions is pretty simple (almost the same as in the particle simulation):

/* Particle position fragment shader */
pos = readTexture(positionTexture)
pos += readTexture(velocityTexture) 
color = pos

How well does this work? Pretty well actually: 60fps for 1024 particles. If we were to run just the JS code (no rendering) of the three.js canvas birds example, here are the kinds of frame rates you might get (although it certainly could be optimized):

200 birds - 60fps
400 birds - 40fps
600 birds - 30fps
800 birds - 20fps
1000 birds - 10fps
2000 birds - 2fps

With the GPGPU version, I can get about 60fps for 4096 particles. Performance starts dropping after that (depending on your graphics card), possibly due to the bottleneck of texture fetches, unless further tricks are added.

The next challenge was trying to render something more than just billboarded particles. My first feeble attempt at calculating the transformations meant the results weren't great.

So I left the matter alone until half a year later, when I suggested it was time to add a GPGPU example to three.js and revisited the issue at WestLangley's persuasion.

Iñigo Quilez had also just written an article about avoiding trigonometry when orienting objects, but I failed to fully comprehend the rotation code and still ended up lost.

I finally decided I should try to understand and learn about matrices. The timing was right, because a really good talk, "Making WebGL Dance", which Steven Wittens gave at JSConf US, had just been made available online. Near the 12 minute mark he gives a really good primer on linear algebra and matrices. The takeaway is this: matrices can represent a rotation in orientation (as well as scale and translation), and matrices can be multiplied together.

With mrdoob's pointer to how the rotations were done in the canvas birds example, I managed to reconstruct the matrices to perform similar rotations with shader code and three.js code.

I also learnt along the way that matrices in GLSL are written in column-major order, typically mat3(col1, col2, col3) instead of mat3(row1, row2, row3).
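The gist, expressed on the JS side with three.js, is to build an orthonormal basis from the bird's velocity and use those axes as the columns of the rotation matrix. A sketch of my own (it degenerates if the velocity happens to be parallel to the up vector):

// build a rotation matrix that orients a bird along its velocity
function orientationFromVelocity(velocity) {
    var forward = velocity.clone().normalize();
    var up = new THREE.Vector3(0, 1, 0);
    var right = new THREE.Vector3().crossVectors(up, forward).normalize();
    var newUp = new THREE.Vector3().crossVectors(forward, right);

    // makeBasis takes the axes as columns, just like mat3(col1, col2, col3) in GLSL
    return new THREE.Matrix4().makeBasis(right, newUp, forward);
}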

And there we have it.

There was one more final touch to the flocking shader which made things more interesting. Credit to Robert again for writing the flocking simulation tutorial for Cinder, from which I decided to adopt his zone-based flocking algorithm and easing functions. He covered this in his NVScene talk too.

Finally, some nice words from Craig Reynolds, the creator of the original "Boids" program in 1987, who saw the three.js GPGPU flocking experiment.

2014-02-24_222356

It has been a long journey, but certainly not the end of it. I've learnt much along the way, and hopefully I have managed to share some of it. You may follow me on twitter for updates on my flocking experiments.

WebGL, GPGPU & Flocking Birds Part II – Shaders

In the first part of “WebGL, GPGPU & Flocking Birds”, I introduced flocking. I mentioned briefly that simulating flocking can be a computationally intensive task.

The complexity of such simulations in Big O notation is O(n^2), since each bird or object has to be compared with every other object. For example, 10 objects require 100 calculations, but 100 objects require 10,000. You can quickly see how simulating even a couple of thousand objects can bring a fast modern machine to a crawl, especially with JavaScript alone.

It's possible to speed this up using various tricks and techniques (e.g. more efficient threading, data structures, etc.), but once you get greedy for more objects, another brick wall is hit quickly.

So in this second part, I'll discuss the roles of WebGL and GPGPU, which I use for my flocking bird experiments. It may not be the best or fastest way to run a flocking simulation, but it was interesting to experiment with WebGL doing some of the heavy lifting of the flocking calculations.

WebGL

WebGL can be seen as the cool “new kid on the block” (with its many interactive demos), and one may also consider WebGL as “just a 2d API“.

I think another way to look at WebGL is as an interface, a way to tap into your powerful graphics unit. It's like learning how to use a forklift to lift heavy loads for you.

Intro to GPUs and WebGL Shaders

For a long time I knew computers had a graphics card or unit, but I never really understood what it was until recently. Simply put, the GPU (Graphics Processing Unit) is a specialized piece of hardware for processing graphics efficiently and quickly. (More on CUDA parallel programming on Udacity, if you're interested.)

The design of a GPU is also slightly different from that of a CPU. For one, a GPU can have thousands of cores compared to the dozen a CPU may have. While GPU cores may run at a lower clock rate, their massive parallel throughput can exceed what a CPU can deliver.

A GPU contains vertex and pixel processors. Shaders are the code used to program them, to perform shading of course: coloring, lighting and post-processing of images.

A shader program has a linked vertex shader and a pixel (aka fragment) shader. In a simple example drawing a triangle, the vertex shader calculates the coordinates of the triangle's 3 points. The area in between is then passed on to the pixel shader, which paints each pixel in the triangle.

Some friends have asked me what language is used to program WebGL shaders. They are written in a language called GLSL (the OpenGL Shading Language), a C-like language also used for OpenGL shaders. I think that if you understand JS, GLSL shouldn't be too difficult to pick up.

Knowing that WebGL shaders are what run tasks on the GPU, we have the basic key to unlocking the powerful computation capabilities GPUs have, even though they are primarily meant for graphics. Moving on to the exciting GPGPU: "general-purpose computing on graphics processing units".

Exploring GPGPU with WebGL

WebGL, instead of only rendering to the screen, can render into its own memory. These rendered bitmaps in memory are referred to as Frame Buffer Objects (FBOs), and the process is sometimes simply referred to as render-to-texture (RTT).

This may start to sound confusing, but what you need to know is that the output of a shader can be a texture, and that texture can be an input for another shader. One example is the effect of rendering a scene inside a TV that is itself part of a scene inside a room.

Frame buffers are also commonly used for post-processing effects. (Steven Wittens has a library for RTT with three.js.)

Since these in-memory textures or FBOs reside in the GPU's memory, reading or writing a texture within the GPU's memory is very fast, compared with uploading a texture from the CPU's memory, i.e. from the JavaScript side in our context.

Now how do we start using, or abusing, this for GPGPU? First consider what a frame buffer could represent. We know that a render texture is basically a quad / 2D array holding RGB(A) values; we could decide that a texture represents particles, with each pixel holding one particle's position.

For each pixel we can assign the RGB channels to the positional components (red = x position, green = y position, blue = z position). A color channel normally has a limited range of 0-255, but if we enable floating-point texture support, each channel goes from -infinity to infinity (though still with limited precision). Currently some devices (like mobile ones) do not support floating-point textures, so one has to decide whether to drop support for those devices or pack a number across a few channels to simulate a larger-range value.
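As a concrete sketch of that idea, here's how particle positions might be packed into a floating-point DataTexture on the JS side before handing it to the GPU (the size and value ranges are arbitrary assumptions):

var WIDTH = 64;                                 // 64 x 64 = 4096 particles
var data = new Float32Array(WIDTH * WIDTH * 4); // RGBA per particle

for (var i = 0; i < WIDTH * WIDTH; i++) {
    data[i * 4 + 0] = Math.random() * 100 - 50; // r = x position
    data[i * 4 + 1] = Math.random() * 100 - 50; // g = y position
    data[i * 4 + 2] = Math.random() * 100 - 50; // b = z position
    data[i * 4 + 3] = 1;                        // a = spare (e.g. phase or lifetime)
}

var positionTexture = new THREE.DataTexture(
    data, WIDTH, WIDTH, THREE.RGBAFormat, THREE.FloatType
);
positionTexture.needsUpdate = true;
// sampling this as floats requires the OES_texture_float extension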

Now we can simulate particle positions in the fragment shader instead of simulating particles with JavaScript. Such a program may be more difficult to debug, but the number of particles can be much higher (like 1 million) and the CPU is freed up.

The next step after the simulation phase is to display the GPU-simulated particles on screen. The approach is to render the particles as normal, except that the vertex shader looks up each particle's position in the texture. It's like adding a little more code to your vertex shader to read the position from the texture, "hijacking" the position of the vertex you're about to render. This requires an extension for texture lookups in the vertex shader, but that's likely supported wherever floating-point textures are.

Hopefully by now this gives a little idea of how "GPGPU" can be done in WebGL. Notice I said with WebGL, because you can likely perform GPGPU in other ways and with different approaches (e.g. with WebCL). (Well, someone wrote a library called WebCLGPU, a WebGL library which emulates some kind of WebCL interface, but I'll leave you to explore that.)

(Some trivia: in fact, this whole GPGPU-with-WebGL business was really confusing to me at first, and I did not know what it was supposed to be called. While one of the earliest articles about GPGPU with WebGL referred to the technique as "FBO simulations", many still refer to it as GPGPU.

What's funny is that initially I thought GP-GPU (with its repetitive acronym) described the "ping-pong"-ing of textures on the graphics card, but well… there may be some truth in that. Two textures are typically used and swapped when simulating positions, because it's not recommended to read from and write to the same texture at the same time.)

Exploration and Experiments

Lastly, one point about exploration and experiments.

GPGPU particle simulations are getting common these days (there are probably many on Chrome Experiments), and I've also worked on some in the past. Not to say that they are boring (one project I found interesting is the particle shader toy), but I think there are many less explored and untapped areas of GPGPU. For example, it could be used for fluid and physics simulations, and for applications such as terrain sculpting, cloth and hair simulation, etc.

As for me, I started playing around with flocking. More about that in the 3rd and final part of this series.

WebGL, GPGPU, and Flocking – Part 1

One of my latest contributions to three.js is a WebGL bird flocking example simulated on the GPU. Initially I wanted to sum up my experiences in a single blog post, "WebGL, GPGPU, and Flocking", but that became too difficult to write, and possibly too much to read in one go. So I opted to split it into 3 parts: the first on flocking in general, the second on WebGL and GPGPU, and the third to put it all together. So for part 1, we'll start with flocking.

So what is flocking behavior? From Wikipedia, it is

the behavior exhibited when a group of birds, called a flock, are foraging or in flight. There are parallels with the shoaling behavior of fish, the swarming behavior of insects, and herd behavior of land animals.

So why has flocking behavior caught my attention along the way? It is technically an interesting topic: simulating such behavior may require intensive computation, which poses interesting challenges and solutions. It is also interesting for creating "artificial life": in games or interactive media, flocking activity can be used to spice up the liveliness of an environment. I love it for an additional reason: it exhibits the beauty found in nature. Even if I haven't had the opportunity to enjoy such displays at length in real life, I can easily spend hours watching beautiful flocking videos on YouTube or Vimeo.

You may have noticed that I've used flocking birds before, in the "Boid n Buildings" demo. The code I used for that is based on the version included in the canvas flocking birds example of three.js (which was itself based on a Processing example), a variant of which was probably also used for "The Wilderness Downtown" and "3 Dreams of Black".

However, to get closer to the raw metal, one can get one's hands dirty implementing the flocking algorithm. If you can already write a simple particle system (with attraction and repulsion), it's not that difficult to learn about and add the 3 simple rules of flocking (sketched in code after the list below):

  1. Separation – steer away from others who are too close
  2. Alignment – steer towards where others are moving to
  3. Cohesion – steer towards where others are
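In a simple 2D particle setup, each rule is just another steering force accumulated per frame. A stripped-down sketch of the idea (the weights and radius are arbitrary; this is not the code from the pen below):

// one flocking step for a single boid against its neighbours
function flock(boid, neighbours) {
    var sep = { x: 0, y: 0 }, ali = { x: 0, y: 0 }, coh = { x: 0, y: 0 };

    neighbours.forEach(function (other) {
        var dx = other.x - boid.x, dy = other.y - boid.y;
        var dist = Math.sqrt(dx * dx + dy * dy) || 1;
        if (dist < 25) {             // 1. separation: steer away when too close
            sep.x -= dx / dist;
            sep.y -= dy / dist;
        }
        ali.x += other.vx;           // 2. alignment: match neighbours' heading
        ali.y += other.vy;
        coh.x += dx;                 // 3. cohesion: drift towards the group
        coh.y += dy;
    });

    var n = neighbours.length || 1;
    boid.vx += sep.x * 0.05 + ali.x / n * 0.01 + coh.x / n * 0.002;
    boid.vy += sep.y * 0.05 + ali.y / n * 0.01 + coh.y / n * 0.002;
}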

I usually find it useful to try something new to me in 2D rather than 3D: it's easier to debug when things go wrong and to make sure new concepts are understood. So I started writing a simple 2D implementation with canvas.

See the Pen Flocking Test by zz85 (@zz85) on CodePen

With this 2d exercise done, it would be easier to extend to 3D, and then into the Shaders (GPGPU).

If you are looking for a guide or tutorial to follow along with, you might find that this post isn't very helpful; there are many resources, code samples and tutorials on this topic you can find online. With that, I'll end this part for now and leave you with a couple of links where you can learn more about flocking. The next part in this series will be on WebGL and GPGPU.

Links:
http://natureofcode.com/book/chapter-6-autonomous-agents/ Autonomous Agents from Nature of code book.
http://www.red3d.com/cwr/boids/ Craig Reynolds's page on Boids. He's responsible for the original program, named "Boids", and the original 1987 paper "Flocks, Herds, and Schools: A Distributed Behavioral Model".
http://icouzin.princeton.edu/iain-couzin-on-wnyc-radiolab/ A really nice talk (video) on collective animal behavior. This was recommended by Robert Hodgin who does fantastic work with flocking behavior too.
http://www.wired.com/wiredscience/2011/11/starling-flock/ An article on Wired if you just want something easy to read

Over The Hills JS1K Postmortem

This post is a little reflection on my experience with js1k, to which I submitted a little game. I spoke a little about it at MeetupJS Singapore (on the day the JS1K 2013 results were published) and decided to follow up with this blog post. Unfortunately, it has been collecting dust in my drafts for months, and I thought it's about time to flush some of my old buffers.

Here are the slides from my presentation, which covered a little about what js1k is, some byte-saving tricks and techniques used, my workflow, and links.

Now if you really want to watch a good talk on a postmortem of a game in which every byte squeezing technique is used, I suggest that you spend some time watching Pitfall Classic Postmortem With David Crane Panel at GDC 2011 (Atari 2600) instead of reading this.

In case you're still interested in what happened, here's how it started. A couple of days before the js1k deadline, there was buzz on Slashdot and the twittersphere about some crazy js1k entries, and I thought maybe I should join the craze. I thought of creating a game where something jumps, skis or rides along a side-scrolling terrain. My colleagues pointed me to the game "Tiny Wings", which apparently was a hit on iOS devices.

I then watched its promotional video and was mesmerized by it.

It exhibited such beautiful artwork, music, procedural generation and addictive gameplay that I decided I should try doing a clone of it. It turned out to be a ball jumping "over the hills", which became the title of my entry. Now to recap what went great, and what went horribly wrong.

Great
– I attempted the js1k challenge, which is indeed a challenge
– I attempted writing a game!
– I learnt crazy byte-saving techniques and a little more about JavaScript
– I managed to submit an entry of less than 1024 bytes
– It didn't look too bad!

BAD!
– Trying to write a game for the first time: not the best idea
– Trying to complete the js1k game entry in less than a few days: bad idea! (Winners usually spend weekends optimizing their code)
– Not knowing the best techniques for writing and structuring game code
– Not using the most efficient byte-saving code, plus some premature optimization
– Not much gameplay

In hindsight, I should have created the game the normal way first and spent much more time porting it to a byte-starved version. At least there were things I experienced and learnt, and a few techniques I applied. Here's some additional stuff not mentioned in my presentation.

1. Terrain generation
I used a formula-based approach with cosines, returning a y for a given x position. I thought it was easier, but in retrospect, height maps with interpolation probably give better control and more interesting terrain.

Bouncing Beholder (a previous winning entry) and this Tiny Wings tutorial actually use a height map approach.

2. Terrain drawing
I wanted to emulate the beautiful striped terrain in Tiny Wings. I thought about mapping textures using canvas paths, custom canvas rect painting with rotations, cropping, clipping and gradient patterns, but they were all too difficult. So I went for a pixel-based approach, which was simple, almost like a pixel shader, but slow on canvas 2D (in theory it would probably be fast in WebGL). I reduced the game canvas to 480 by 320, the original iPhone resolution, for speed, and also used typed arrays, which reduced bytes and brought about a 50% improvement in FPS.

3. Physics
Both "Angry Birds" and "Tiny Wings" use the famed Box2D as their physics engine. For a 1k game, I created a simplified verlet-based physics engine, which seems to work pretty well (the idea is sketched below).
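A verlet integrator is attractive for byte golfing because the velocity is implicit in the previous position. A minimal sketch of the idea, not the actual 1k-packed code (terrain(x) is a hypothetical ground-height function, and y grows downwards as on a canvas):

// position verlet: velocity is implicit in (x - prevX, y - prevY)
function step(ball, gravity) {
    var vx = ball.x - ball.prevX;
    var vy = ball.y - ball.prevY;
    ball.prevX = ball.x;
    ball.prevY = ball.y;
    ball.x += vx;
    ball.y += vy + gravity;

    // keep the ball above the ground (y-down coordinates)
    var ground = terrain(ball.x);
    if (ball.y > ground) ball.y = ground;
}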

To sum up, the js1k experience was fun, crazy and educational, and anyone with sufficient interest should set aside time to dive into this.

Now again for references, links and resources

** Good To Read ** How to create Tiny Wings like game – http://www.raywenderlich.com/3888/how-to-create-a-game-like-tiny-wings-part-1

JS1K Site – http://js1k.com/
My Js1K Tools – https://github.com/zz85/js1k-tools
Understanding JSCrush – http://blog.nikhilism.com/2012/04/demystifying-jscrush.html
140 Byte Saving Techniques – https://github.com/jed/140bytes/wiki/Byte-saving-techniques
Javascript Golfing – http://www.claudiocc.com/javascript-golfing/
Blog of the creator of Fubree – http://www.romancortes.com/blog/

& WAY PLENTY MORE STUFF TO READ

p.s. I'll probably upload an improved version of this sometime, somewhere.

Three.js Bokeh Shader

(TL;DR? – check out the new three.js webgl Bokeh Shader here)

Bokeh is a word used by photographers to describe the aesthetic out-of-focus or blur qualities of a lens or a photo. Depth of field (DOF) is the range of distances over which objects appear sharp in a photo. So while DOF is more measurable and bokeh subjective, one might say there's more bokeh in a picture with a shallower DOF, because the background and foreground (if there's a subject) are de-emphasized by being blurred when thrown out of focus.

Bokeh seems to be derived from the Japanese word "boke" 暈け; apart from meaning blur, it can also mean senile, stupid, unaware or clueless. This is interesting because in Singlish (Singapore's flavor of English), "blur" has the same negative meaning when applied to a person (it probably comes from the literal sense of being the opposite of sharp). And now you know the non-graphical meaning of the word blur in my twitter id (BlurSpline).

IMG_8537
Here’s a photo of the Kinetic Rain I took at Changi Airport Terminal 1. Especially if you like kinetic structures, you should check out the official videos here and here (in which there’s much use of bokeh too)

I remember how little I knew about 3D programming when I first tried three.js two years ago. I wondered whether camera.near and camera.far were a way of defining when objects in the scene get blurred as they approach the near or far distances.

It turns out, of course, that I was really wrong, since these values are used for clipping: improving performance by not rendering objects outside the view frustum. I had naively thought three.js worked like a real-life camera and that I would be able to create cinematic-looking scenes. Someone helpful on the three.js IRC channel then pointed me to the post-processing DOF example by alteredqualia, who ported the original bokeh shader written by Martins Upitis.

Fast forward to the present: we have seen that shader used in ROME, Martins Upitis has updated his bokeh shader to make it more realistic, and I attempted to port it back to three.js/WebGL.


With focus debug turned on


Testing it in a scene


The example added to three.js with glitters.

To copy what martinsh says the new shader does, it has the following flexibility:
• variable sample count to increase quality/performance
• option to blur depth buffer to reduce hard edges
• option to dither the samples with noise or pattern
• bokeh chromatic aberration/fringing
• bokeh bias to bring out bokeh edges
• image thresholding to bring out highlights when image is out of focus
• pentagonal bokeh shape (experimental)
• bokeh vignetting at screen edges

The new three.js example also demonstrates how object picking can be used, with interpolation, to set the focal distance. More detailed comments about the parameters are on GitHub.
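The picking part can be done with a THREE.Raycaster: cast a ray from the mouse through the camera, take the distance of the nearest intersection and ease the shader's focal-distance uniform towards it. A rough sketch of the idea (the uniform name and easing factor are assumptions of mine, not the example's exact code):

var raycaster = new THREE.Raycaster();
var targetDistance = 100;

function pickFocus(mouseNdcX, mouseNdcY) {
    // mouse coordinates are expected in normalized device coordinates (-1 to 1)
    raycaster.setFromCamera(new THREE.Vector2(mouseNdcX, mouseNdcY), camera);
    var hits = raycaster.intersectObjects(scene.children, true);
    if (hits.length) targetDistance = hits[0].distance;
}

function updateFocus(bokehUniforms) {
    // interpolate for a smooth focus pull instead of snapping
    bokehUniforms.focalDepth.value += (targetDistance - bokehUniforms.focalDepth.value) * 0.05;
}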

Of course the shader is not perfect, as DOF is not that simple (there are quite a few in-depth Graphics Gems articles on it). Much of it is post-processing smoke and mirrors, the way it is usually done with rasterization, compared to path tracing and the like. Yet I think it's a great addition to have in WebGL, just as we have seen DOF used in Crytek and Nvidia demos and in other high-end games. (There was also a cool video of a Minecraft mod using that DOF shader, but it seems to have been removed when I recently looked for it.)

I would love to see tasteful use of bokeh sometime; not just because it feels cinematic or is widely used in photography, but also because I think it's more natural, given that's how our eyes work with our brains (more details here).

Finally, it seems the deadline for the current js1k contest is just hours away, which means I've got to head off to do some cranking and crunching. Maybe more on that in a later post! :D