Tag Archives: webgl

Exploring Simple Noise and Clouds with Three.js

A while back I attempted writing a simple CloudShader for use in three.js scenes. Exploring the use of noise, it wasn't difficult to create clouds that didn't look too bad visually. Another idea came along: adding crepuscular rays ("god rays") to make the scene more interesting. Failing to mix the two together correctly on my first tries, I left these experiments abandoned, until one fine day (or rather one night) I decided to give it another shot and finally got it to work. So here's the example.


(Now comes with sliders for clouds control!)

In this post, I'm going to explain some of the ideas behind generating procedural clouds with noise (the two topics frequently go hand in hand). While noise might be bread and butter in the world of computer graphics, it did take me a while to wrap my head around it.

Noise
Noise could be described as a pseudo-random texture. Pseudo-random means that it might appear to be totally random, but being generated by a computer, it is not. Many refer to noise as Perlin noise (thanks to the work of Ken Perlin), but there are really different forms of noise, e.g. Perlin noise, simplex noise, value noise, wavelet noise, gradient noise, Worley noise, simulation noise.

The approach I use for my clouds could be considered value noise. Let's start with some random noise by creating a DataTexture of 256 by 256 pixels.


// Generate random noise texture
var noiseSize = 256;
var size = noiseSize * noiseSize;
var data = new Uint8Array( 4 * size );
for ( var i = 0; i < size * 4; i ++ ) {
    data[ i ] = Math.random() * 255 | 0;
}
var dt = new THREE.DataTexture( data, noiseSize, noiseSize, THREE.RGBAFormat );
dt.wrapS = THREE.RepeatWrapping;
dt.wrapT = THREE.RepeatWrapping;
dt.needsUpdate = true;

Now if we were to render this texture, it would look really random (obviously), like a broken TV channel.

(Image: the raw noise texture; we set alpha to 255, and r=g=b, to illustrate the example here)

Let's say we were to use the pixel values as a height map for a terrain (another use for noise); it is going to look really disjointed and random. One way to fix that is to interpolate the values from one point to another. This becomes smooth noise. The nice thing about textures is that this interpolation can be done by the graphics unit automatically. By default, or by setting the `.minFilter` and `.magFilter` properties of a THREE.Texture to one of the linear filters, you get almost free interpolation when you read a point on the texture that falls between two or more pixels.
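
For example, with the DataTexture created above, the filters can be set explicitly so the behaviour doesn't depend on defaults (a small note; THREE.NearestFilter would instead keep the blocky, uninterpolated look):

// let the GPU interpolate between neighbouring pixels when sampling
dt.minFilter = THREE.LinearFilter;
dt.magFilter = THREE.LinearFilter;
dt.needsUpdate = true;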

Still, this isn't enough to look anything like clouds. The next step is to apply fractional Brownian motion, which is a summation of successive octaves of noise, each with higher frequency and lower amplitude. This produces fractal noise, which gives a more interesting and continuous texture. I'm doing this in the fragment shader with a few lines of code…


// fractal noise: sum 5 octaves, doubling frequency and halving amplitude
// (assumes a uniform sampler2D noiseTexture holding the random data texture)
float fnoise(vec2 uv) {
    float f = 0.;
    float scale = 1.;
    for (int i = 0; i < 5; i++) {
        scale *= 2.;
        f += texture2D(noiseTexture, uv * scale).x / scale;
    }
    return f;
}

Given that my data texture has 4 channels (RGBA), one could pull out 3 or 4 components if needed, like


vec3 fNoise(vec2 uv) {
    vec3 f = vec3(0.);
    float scale = 1.;
    for (int i = 0; i < 5; i++) {
        scale *= 2.;
        f += texture2D(noiseTexture, uv * scale).xyz / scale;
    }
    return f;
}

Now if you were to render this, it might look similar to the perlinNoise class in flash/actionscript or the cloud filter in photoshop.


2D Clouds
Now that we have a procedural cloud shader, how do we integrate it into a three.js scene? One way is to texture it over a sphere or a skybox, but the approach I use is to create a paraboloid shell generated with ParametricGeometry, similar to the approach Steven Wittens used to render auroras in his demo "NeverSeenTheSky". The code / formula I use is simply this


function SkyDome(i, j) {
    i -= 0.5;
    j -= 0.5;
    var r2 = i * i * 4 + j * j * 4;
    return new THREE.Vector3(
        i * 20000,
        (1 - r2) * 5000,
        j * 20000
    ).multiplyScalar(0.05);
};
var skyMesh = new THREE.Mesh(
    new THREE.ParametricGeometry(SkyDome, 5, 5),
    shaderMaterial
);

Now for the remaining, and probably most important, step in simulating clouds: making use of and tweaking the values from the fractal noise to obtain the kind of clouds you want. This is done in the fragment shader, where you decide what thresholds to apply (e.g. cutting off the high or low values) or what curve or function to apply to the signal. Two articles which gave me ideas are Hugo Elias's clouds and Iñigo Quilez's dynamic 2d clouds. Apart from these, I added a function (where o is opacity) to make the sky nearer the horizon more transparent, creating the illusion of clouds disappearing into the distance.


// apply more transparency towards the horizon
// to create the illusion of distant clouds
o = 1. - o * o * o * o;
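
To give an idea of the kind of tweaking involved, here is a hedged sketch (not the exact shader from the demo) of shaping the fractal noise with a coverage threshold inside a ShaderMaterial; the uniform names and threshold values are illustrative:

// Illustrative cloud material sketch - uniform names and values are
// not the ones used in the actual demo.
var cloudMaterial = new THREE.ShaderMaterial( {
    uniforms: {
        noiseTexture: { type: 't', value: dt },
        coverage: { type: 'f', value: 0.45 } // raise this to clear the sky
    },
    transparent: true,
    vertexShader: [
        'varying vec2 vUv;',
        'void main() {',
        '    vUv = uv;',
        '    gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );',
        '}'
    ].join( '\n' ),
    fragmentShader: [
        'uniform sampler2D noiseTexture;',
        'uniform float coverage;',
        'varying vec2 vUv;',
        'float fnoise(vec2 uv) {',
        '    float f = 0.;',
        '    float scale = 1.;',
        '    for (int i = 0; i < 5; i++) {',
        '        scale *= 2.;',
        '        f += texture2D(noiseTexture, uv * scale).x / scale;',
        '    }',
        '    return f;',
        '}',
        'void main() {',
        '    // cut off noise below the coverage threshold, soften the rest',
        '    float density = smoothstep( coverage, coverage + 0.35, fnoise( vUv ) );',
        '    gl_FragColor = vec4( vec3( 1.0 ), density );',
        '}'
    ].join( '\n' )
} );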

Crepuscular rays
So that's a break from explaining my ideas for producing the clouds. This might be disappointing if you were expecting more advanced shading techniques, like ray marching or volumetric rendering, but I am trying to see how far we can go with just the basic/easy stuff. If adding the crepuscular rays works, it produces a more impressive effect, and we can avoid the complicated stuff for the moment.

So for the "god rays", I started with the webgl_postprocessing_godrays example in three.js, implemented by @huwb using a technique similar to the one used by Crytek. After some time debugging why my scene didn't render correctly, I realized that my cloud shader (a ShaderMaterial) didn't play well in the depth rendering step (which overrides the scene with the default MeshDepthMaterial) that is needed to compute occluded objects correctly. For that, I manually override materials for the depth rendering step, and pass a uniform to the CloudShader to discard or write depth values based on the color and opacity of the clouds.
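
A rough sketch of that idea follows; this is not the code from the actual example, and names like depthTarget and cloudUniforms are hypothetical:

// Hypothetical sketch: render a depth pass where everything except the
// cloud dome uses MeshDepthMaterial, while the cloud shader is told to
// handle depth itself via a uniform.
var depthMaterial = new THREE.MeshDepthMaterial();

function renderDepthPass() {
    scene.traverse( function ( node ) {
        if ( node instanceof THREE.Mesh && node !== skyMesh ) {
            node._realMaterial = node.material;
            node.material = depthMaterial;
        }
    } );
    cloudUniforms.depthPass.value = true; // cloud shader discards/writes depth itself
    renderer.render( scene, camera, depthTarget, true );
    cloudUniforms.depthPass.value = false;
    scene.traverse( function ( node ) {
        if ( node._realMaterial ) {
            node.material = node._realMaterial;
            delete node._realMaterial;
        }
    } );
}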

Conclusion
I hope I've introduced the ideas behind noise and how simple it can be to generate clouds. One way to get started with experimenting is to use Firefox, which now has a Shader Editor in its improved web developer tools that allows experimenting with shaders in real time. Much is up to one's imagination or creativity, for example, turning the clouds into moving fog.

Clouds are also such a common and interesting topic that I believe there is plenty of (advanced) literature on them (websites, blogs and papers like this). As mentioned earlier, the links I found to be good starting points are by Hugo Elias and Iñigo Quilez. One website which I found explains noise in an easy-to-understand fashion is http://lodev.org/cgtutor/randomnoise.html

Before ending, I would love to point out a couple of other realtime browser-based examples I love, implemented with very different or creative approaches.

1. mrdoob's clouds which uses billboard sprites – http://mrdoob.com/lab/javascript/webgl/clouds/
2. Jaume Sánchez’s clouds which uses css – http://www.clicktorelease.com/code/css3dclouds/
3. IQ Clouds which uses some form of volumetric ray marching in a pixel shader! – https://www.shadertoy.com/view/XslGRr

So if you're interested, read up and experiment as much as you can; the possibilities are never-ending!

Free Birds

Earlier this year Google announced DevArt, a code and art collaboration with London’s Barbican Centre. Chinmay (a fellow geek in Singapore) and I decided to collaborate on a visual and aural entry for the competition.

As the competition attracted a number of quality works, the short story is that we didn’t win (the long story includes how my last git push failed before the deadline while tethering internet over my mobile in Nepal). This post is however about the stuff we explored, learned and created along the way. We also shared some of that at MeetupJS at the Singapore Microsoft office not long ago. Being open is actually one aspect of the DevArt event too, which is to share some of the source code, workflow and ideas around.

I wanted to reuse some of my work on GPU flocking birds for the entry, while making it more interactive with a creative idea. We decided to have a little bird on an iPad whose colors you could customize before freeing it into a bigger canvas of freed birds.

Chinmay, who's passionate about acoustics, took the opportunity to dabble more with the Web Audio API. (He previously created auralizr, a really cool web app that transports you to another place using a convolution filter.) Based on an excellent bird call synthesis article, Chinmay created a javascript + Web Audio API library for generating synthesized bird calls.


Go ahead, try it, it’s interactive with a whole bunch of knobs and controls.

As for me, here’s some stuff I dabbled in for the entry

  1. Dug out my old library contact.js for some node.js + websockets interaction.
  2. Mainly used code from the three.js gpgpu flocking example, but upgraded it to use BufferGeometry
  3. Explored GPGPU readback to javascript in order to get positional information of the birds, to be fed into bird.js for positioning audio
  4. My first use of xgui.js, which uncovered some issues that got fixed, but otherwise a really cool library for prototyping.
  5. Explored more post-processing effects with non-photorealistic rendering; unfortunately it wasn't good enough within that time frame.

So that's it for the updates for now. Here are some links

Also, another entry worth mentioning is Kuafu by yi-wen lin. I thought it shared a couple of similarities with our project and was more polished, but unfortunately it didn't make it to first place either.

WebGL, GPGPU & Flocking Birds – The 3rd Movement

This is the final section of the 3-part series covering the long journey of adding the interactive WebGL GPGPU flocking bird example to three.js, over a span of almost a year (no kidding, looking at some of the timestamps). Part 1 introduces flocking, Part 2 talks about writing WebGL shaders for accelerated simulation, and Part 3 (this post) hopefully shows how I've put things together.

(Ok, so I got to learn about a film about Birds after my tweet about the experiment late one night last year)

Just as it felt like forever to refine my flocking experiments, it felt just as long writing this. However, watching Robert Hodgin's (aka flight404) inspiring NVScene 2014 session "Interactive Simulations (where nobody has to die)" (where he covered flocking too) was a motivation booster.

So much so that I would recommend watching that video (starting at 25:00 for flocking, or just watching the entire thing if you have the time) over reading this. There was so much I could relate to in his talk, like flocking on the gpu, but so much more I could learn from. No doubt Robert has always been one of my inspirations.

Now back to my story: about 2 years ago we were playing with simulated particles accelerated with GPGPU in three.js.

With some pseudo code, here’s what is happening

/* Particle position vertex shader */
// As usual rendering a quad
/* Particle position fragment shader */
pos = readTexture(previousPositionTexture)
pos += velocity // update the particle's position
color = pos // writing new position to framebuffer
/* Particle vertex shader */
pos = readTexture(currentPositionTexture)
gl_Position = pos
/* Particle fragment shader */
// As per usual.

One year later, I decided to experiment with particles interacting with each other (using the flocking algorithm).

For a simple GPGPU particle simulation, you need 2 textures (one for the current positions and one for the previous, since reading from and writing to the same texture could cause corruption). In a flocking simulation, 4 textures/render targets are used (currentPositions, previousPositions, currentVelocities, previousVelocities).
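
A hedged sketch of how such render targets could be set up and swapped in three.js follows; the variable names are illustrative and not from the actual example:

// Illustrative ping-pong setup - names are not from the actual example.
var SIZE = 32; // 32 x 32 texture = 1024 birds

function createTarget() {
    return new THREE.WebGLRenderTarget( SIZE, SIZE, {
        minFilter: THREE.NearestFilter,
        magFilter: THREE.NearestFilter,
        format: THREE.RGBAFormat,
        type: THREE.FloatType // needs float texture support
    } );
}

var currentPositions = createTarget(),
    previousPositions = createTarget(),
    currentVelocities = createTarget(),
    previousVelocities = createTarget();

function simulate() {
    // 1. velocity pass: read previous positions + velocities, write currentVelocities
    // 2. position pass: read previous positions + current velocities, write currentPositions
    // then swap, so this frame's output becomes next frame's input
    var tmp;
    tmp = previousVelocities; previousVelocities = currentVelocities; currentVelocities = tmp;
    tmp = previousPositions; previousPositions = currentPositions; currentPositions = tmp;
}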

There would be one render pass for updating velocities (based on the flocking algorithm)

/* Velocities Update Fragment Shader */
currentPosition = readTexture(positionTexture)
currentVelocity = readTexture(velocityTexture)

for every other particle,
    otherPosition = readTexture(positionTexture)
    otherVelocity = readTexture(velocityTexture)
    if otherPosition is too close
        currentVelocity += oppositeDirection // Separation
    if otherPosition is near
        currentVelocity += followDirection // Alignment
    if otherPosition is far
        currentVelocity += towardsDirection // Cohesion

color = currentVelocity

Updating position is pretty simple (almost similar to the particle simulation)

/* Particle position fragment shader */
pos = readTexture(positionTexture)
pos += readTexture(velocityTexture) 
color = pos

How well does this work? Pretty well actually: I get 60fps for 1024 particles. If we were to run just the JS code (no rendering) of the three.js canvas birds example, here are the kinds of frame rates you might get (although the code certainly could be optimized)

200 birds - 60fps
400 birds - 40fps
600 birds - 30fps
800 birds - 20fps
1000 birds - 10fps
2000 birds - 2fps

With the GPGPU version, I can get about 60fps for 4096 particles. Performance starts dropping after that (depending on your graphics card too) without further tricks, possibly due to the bottleneck of texture fetches.

The next challenge was trying to render something more than just billboarded particles. My first feeble attempt at calculating the transformations didn't give great results.

So I left the matter alone until half a year later, when I suggested it was time to add a GPGPU example to three.js and revisited the issue at WestLangley's persuasion.

Iñigo Quilez had also just written an article about avoiding trigonometry for orientating objects, but I failed to fully comprehend the rotation code and still ended up lost.

I finally decided I should try understanding and learning about matrices. The timing was also right, because a really good talk Steven Wittens gave at JSConfUS, "Making WebGL Dance", had just been made available online. Near the 12-minute mark, he gives a really good primer on linear algebra and matrices. The takeaway is this: matrices can represent a rotation in orientation (plus scale & translation), and matrices can be multiplied together.

With mrdoob's pointer to how the rotations were done in the canvas birds example, I managed to reconstruct the matrices to perform similar rotations with shader code and three.js code.

I've also learnt along the way that matrices in GLSL are laid out differently from how they are typically written: mat3(col1, col2, col3) takes columns instead of mat3(row1, row2, row3) taking rows.
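
As a rough illustration of the idea (this is not the code from the actual example; birdMesh and velocity are hypothetical), an orientation matrix can be built from a bird's velocity by choosing three perpendicular axes and feeding them in as columns:

// Illustrative sketch: build a rotation that points a bird along its velocity.
var forward = velocity.clone().normalize();
var up = new THREE.Vector3( 0, 1, 0 );
var right = new THREE.Vector3().crossVectors( up, forward ).normalize();
up = new THREE.Vector3().crossVectors( forward, right ); // re-orthogonalize

var orientation = new THREE.Matrix4();
orientation.makeBasis( right, up, forward ); // columns, just like mat3(col1, col2, col3) in GLSL

birdMesh.quaternion.setFromRotationMatrix( orientation );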

And there we have it.

There was also one more finishing touch to the flocking shader which made things more interesting. Credit to Robert again for writing the flocking simulation tutorial for Cinder; I decided to adopt his zone-based flocking algorithm and easing functions. He covered this in his NVScene talk too.

Finally, some nice words from Craig Reynolds, the creator of the original "boids" program in 1987, who has seen the three.js gpgpu flocking experiment.


It has been a long journey, but certainly not the end of it. I've learnt much along the way, and hopefully I have managed to share some of it. You may follow me on twitter for updates on my flocking experiments.

WebGL, GPGPU & Flocking Birds Part II – Shaders

In the first part of “WebGL, GPGPU & Flocking Birds”, I introduced flocking. I mentioned briefly that simulating flocking can be a computationally intensive task.

The complexity of such simulations in Big O notation is O(n^2), since each bird or object has to be compared with every other object. For example, 10 objects require 100 calculations, but 100 objects require 10,000 calculations. You can quickly see how simulating even a couple thousand objects can bring a fast modern machine to a crawl, especially with just javascript alone.
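
That quadratic cost is visible directly in the shape of the naive update loop (a sketch; applyFlockingRules is a hypothetical helper):

// Naive O(n^2) flocking update: every bird looks at every other bird.
for ( var i = 0; i < birds.length; i ++ ) {
    for ( var j = 0; j < birds.length; j ++ ) {
        if ( i === j ) continue;
        applyFlockingRules( birds[ i ], birds[ j ] ); // separation / alignment / cohesion
    }
}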

It's possible to speed this up using various tricks and techniques (e.g. more efficient threading, data structures, etc.), but when one gets greedy for more objects, yet another brick wall is hit pretty quickly.

So in this second part, I'll discuss the roles of WebGL and GPGPU, which I used for my flocking bird experiments. It certainly may not be the best or fastest way to run a flocking simulation, but it was interesting experimenting with WebGL to do some of the heavy lifting of the flocking calculations.

WebGL

WebGL can be seen as the cool “new kid on the block” (with its many interactive demos), and one may also consider WebGL as “just a 2d API“.

I think another way one can look at WebGL is as an interface, a way to tap into your powerful graphics unit. It's like learning how to use a forklift to lift heavy loads for you.

Intro to GPUs and WebGL Shaders

For a long time, I understood that computers had a graphics card or unit, but never really understood what it was until recently. Simply put, the GPU (Graphics Processing Unit) is a specialized piece of hardware for processing graphics efficiently and quickly. (More on CUDA parallel programming on Udacity, if you're interested.)

The design of a GPU is also slightly different from a CPU's. For one, a GPU can have thousands of cores compared to the dozen a CPU may have. While GPU cores may run at a lower clock rate, their massive parallel throughput can be higher than what a CPU can deliver.

A GPU contains vertex and pixel processors. Shaders are the code used to program them, to perform shading of course: that includes coloring, lighting and post-processing of images.

A shader program has a linked vertex shader and a pixel (aka fragment) shader. In a simple example of drawing a triangle, the vertex shader calculates the coordinates of the 3 points of the triangle. The area in between is then passed on to the pixel shader, which paints each pixel in the triangle.

Some friends have asked me what language is used to program WebGL shaders. They are written in a language called GLSL (the OpenGL Shading Language), a C-like language also used for OpenGL shaders. I think if you understand JS, GLSL shouldn't be too difficult to pick up.

Knowing that WebGL shaders are what run tasks on the GPU, we have a basic key to unlocking the powerful computation capabilities that GPUs have, even though they are primarily for graphics. Moving on to the exciting GPGPU: "General-Purpose computing on Graphics Processing Units".

Exploring GPGPU with WebGL

WebGL, instead of only rendering to the screen, has the ability to render into its own memory. These rendered bitmaps in memory are referred to as Frame Buffer Objects (FBOs), and the process can sometimes simply be referred to as render-to-texture (RTT).

This may start to sound confusing, but what you need to know is that the output of a shader can be a texture, and that texture can be an input for another shader. One example is the effect of rendering a scene inside a TV that is itself part of a scene inside a room.

Frame buffers are also commonly used for post-processing effects. (Steven Wittens has a library for RTT with three.js.)

Since these in-memory textures or FBOs reside in the GPU's memory, it's nice to note that reading or writing to a texture within the GPU's memory is fast, compared to uploading a texture from the CPU's memory (in our context, javascript's side).

Now how do we start making use of (or abusing) this for GPGPU? First consider what a frame buffer could represent. We know that a render texture is basically a quad / 2d array holding RGB(A) values, and we could decide that a texture represents particles, with each pixel holding a particle's position.

For each pixel, we can assign the RGB channels to the positional components (red = x position, green = y position, blue = z position). A color channel may only have a limited range of 0-255, but if we enable floating-point texture support, each channel goes from -infinity to infinity (though there's still limited precision). Currently, some devices (like mobile) do not support floating-point textures, so one has to decide whether to drop support for those devices or pack a number across a few channels to simulate a value type with a larger range.
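
A hedged sketch of seeding such a position texture with floating-point data (the extension check uses the raw WebGL context; renderer is assumed to be an existing THREE.WebGLRenderer and the sizes are illustrative):

// Illustrative sketch - requires the OES_texture_float extension.
var gl = renderer.getContext();
if ( ! gl.getExtension( 'OES_texture_float' ) ) {
    console.warn( 'Float textures not supported: fall back or pack values into bytes.' );
}

var width = 32, height = 32; // 1024 particles
var positions = new Float32Array( width * height * 4 );
for ( var i = 0; i < positions.length; i += 4 ) {
    positions[ i     ] = Math.random() * 100 - 50; // red   = x
    positions[ i + 1 ] = Math.random() * 100 - 50; // green = y
    positions[ i + 2 ] = Math.random() * 100 - 50; // blue  = z
    positions[ i + 3 ] = 1;                        // alpha (unused)
}

var positionTexture = new THREE.DataTexture(
    positions, width, height, THREE.RGBAFormat, THREE.FloatType );
positionTexture.needsUpdate = true;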

Now we can simulate particle positions in fragment shaders instead of simulating particles with Javascript. In such cases, the program may be more difficult to debug, but the number of particles can be much higher (like 1 million) and the CPU is freed up.

The next step after the simulation phase is to display the gpu-simulated particles on screen. The approach is to render the particles as usual, except that the vertex shader looks up each particle's position stored in the texture. It's like adding just a little more code to your vertex shader to read the position from the texture, "hijacking" the position of the vertex you're about to render. This requires an extension for texture lookups in the vertex shader, but it's likely supported wherever floating-point textures are.
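
As a sketch, the material for the rendered particles might look like the following; 'reference' is a hypothetical per-vertex attribute holding the UV of this particle's pixel in the position texture, and would have to be set up on the geometry as well:

// Illustrative sketch of rendering particles from a simulated position texture.
var particleMaterial = new THREE.ShaderMaterial( {
    uniforms: {
        positionTexture: { type: 't', value: null } // set to the simulation output each frame
    },
    vertexShader: [
        'uniform sampler2D positionTexture;',
        'attribute vec2 reference;',
        'void main() {',
        '    // hijack the vertex position with the simulated one',
        '    vec3 pos = texture2D( positionTexture, reference ).xyz;',
        '    gl_Position = projectionMatrix * modelViewMatrix * vec4( pos, 1.0 );',
        '    gl_PointSize = 2.0;',
        '}'
    ].join( '\n' ),
    fragmentShader: [
        'void main() {',
        '    gl_FragColor = vec4( 1.0 );',
        '}'
    ].join( '\n' )
} );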

Hopefully by now this gives a little idea of how "GPGPU" can be done in WebGL. Notice I said with WebGL, because you can likely perform GPGPU in other ways (eg. with WebCL) using a different approach. (Well, someone wrote a library called WebCLGPU, a WebGL library which emulates some kind of WebCL interface, but I'll leave you to explore that.)

(Some trivia: in fact, this whole GPGPU-with-WebGL business was really confusing to me at first, and I did not know what it was supposed to be called. While one of the earliest articles about GPGPU with WebGL referred to the technique as "FBO simulations", many still refer to it as GPGPU.

What's funny is that initially I thought GP-GPU (with its repetitive acronym) was used to describe the "ping-pong"-ing of textures in the graphics card, but well... there may be some truth in that. 2 textures are typically used and swapped for simulating positions, because it's not recommended to read and write to the same texture at the same time.)

Exploration and Experiments

Lastly, one point about exploration and experiments.

Simulating particles with GPGPU is getting common these days (there are probably many examples on Chrome Experiments), and I've also worked on some in the past. Not to say that they are boring (one project which I found interesting is the particle shader toy), but I think there are many more less-explored and untapped areas of GPGPU. For example, it could be used for fluid and physics simulations, and applications such as terrain sculpting, cloth and hair simulation, etc.

As for me, I started playing around with flocking. More about it in the 3rd and final part of this series.

WebGL, GPGPU, and Flocking – Part 1

One of my latest contributions to three.js is a webgl bird flocking example simulated on the GPU. Initially I wanted to sum up my experiences in a single blog post of "WebGL, GPGPU, and Flocking", but that became too difficult to write, and possibly too much to read in one go. So I opted to split it into 3 parts: the first part on flocking in general, the second on WebGL and GPGPU, and the third putting them all together. So for part 1, we'll start with flocking.

So what is flocking behavior? From Wikipedia, it is

the behavior exhibited when a group of birds, called a flock, are foraging or in flight. There are parallels with the shoaling behavior of fish, the swarming behavior of insects, and herd behavior of land animals.

So why has flocking behavior caught my attention along the way? It is an interesting topic technically: simulating such behavior may require intensive computation, which poses interesting challenges and solutions. It is also interesting for its use in creating "artificial life": in games or interactive media, flocking activity can be used to spice up the liveliness of the environment. I love it for an additional reason: it exhibits the beauty found in nature. Even if I haven't had the opportunity to enjoy such displays at length in real life, I could easily spend hours watching beautiful flocking videos on youtube or vimeo.

You may have noticed that I've used flocking birds previously in the "Boid n Buildings" demo. The code I used for that is based on the version included in the canvas flocking birds example of three.js (which was based on another Processing example), a variant of which was probably also used for "the wilderness downtown" and "3 dreams of black".

However, to get closer to the raw metal, one can try to get one's hands dirty implementing the flocking algorithm. If you can already write a simple particle system (with attraction and repulsion), it's not that difficult to learn about and add the 3 simple rules of flocking

  1. Separation – steer away from others who are too close
  2. Alignment – steer towards where others are moving to
  3. Cohesion – steer towards where others are

I usually find it useful to try something new to me in 2d rather than 3d. It's easier to debug when things go wrong, and to make sure that new concepts are understood. So I started writing a simple 2d implementation with Canvas.
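
The core of such a 2d version boils down to something like this sketch (not the actual code in the Pen below; the distances and weights are illustrative, and a real version would also clamp the speed):

// Minimal 2d flocking update sketch. Each boid is { x, y, vx, vy }.
function updateBoid( boid, boids ) {
    var sepX = 0, sepY = 0, alignX = 0, alignY = 0, cohX = 0, cohY = 0, count = 0;
    for ( var i = 0; i < boids.length; i ++ ) {
        var other = boids[ i ];
        if ( other === boid ) continue;
        var dx = other.x - boid.x, dy = other.y - boid.y;
        var dist = Math.sqrt( dx * dx + dy * dy );
        if ( dist > 100 ) continue; // only look at neighbours
        if ( dist < 25 ) { sepX -= dx; sepY -= dy; }  // separation: steer away
        alignX += other.vx; alignY += other.vy;       // alignment: match heading
        cohX += other.x; cohY += other.y;             // cohesion: move towards the centre
        count ++;
    }
    if ( count > 0 ) {
        boid.vx += sepX * 0.05 + ( alignX / count - boid.vx ) * 0.05 + ( cohX / count - boid.x ) * 0.005;
        boid.vy += sepY * 0.05 + ( alignY / count - boid.vy ) * 0.05 + ( cohY / count - boid.y ) * 0.005;
    }
    boid.x += boid.vx;
    boid.y += boid.vy;
}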

See the Pen Flocking Test by zz85 (@zz85) on CodePen

With this 2d exercise done, it would be easier to extend to 3D, and then into the Shaders (GPGPU).

If you are instead looking for a guide or tutorial to follow, you might find that this post isn't very helpful; there are many resources, code and tutorials you can find online on this topic. With this, I'll end this part for now and leave you with a couple of links where you can learn more about flocking. The next part in this series will be on WebGL and GPGPU.

Links:
http://natureofcode.com/book/chapter-6-autonomous-agents/ Autonomous Agents from Nature of code book.
http://www.red3d.com/cwr/boids/ Craig Reynolds’s page on Boids. He’s responsible for the original program named “Boid” and his original paper from 1987 of “Flocks, Herds, and Schools: A Distributed Behavioral Model”.
http://icouzin.princeton.edu/iain-couzin-on-wnyc-radiolab/ A really nice talk (video) on collective animal behavior. This was recommended by Robert Hodgin who does fantastic work with flocking behavior too.
http://www.wired.com/wiredscience/2011/11/starling-flock/ An article on Wired if you just want something easy to read

Three.js Bokeh Shader

(TL;DR? – check out the new three.js webgl Bokeh Shader here)

Bokeh is a word used by photographers to describe the aesthetic qualities of the out-of-focus or blurred areas of a lens or a photo. Depth of field (DOF) is the range of distances over which objects appear sharp in a photo. So while "dof" is more measurable and "bokeh" subjective, one might say there's more bokeh in a picture with a shallower dof, because the background and foreground (if there's a subject) are usually de-emphasized by being blurred when thrown out of focus.

Bokeh seems to be derived from the Japanese word "boke" 暈け: apart from meaning blur, it might also mean senile, stupid, unaware, or clueless. This is interesting because in Singlish (Singapore's flavor of English), "blur" has the same negative meaning when referring to a person (it probably comes from the literal meaning of the opposite of sharp). And now you might know the non-graphical meaning of the word blur in my twitter id (BlurSpline).

Here’s a photo of the Kinetic Rain I took at Changi Airport Terminal 1. Especially if you like kinetic structures, you should check out the official videos here and here (in which there’s much use of bokeh too)

I remember the time when I knew little about 3d programming and first tried three.js 2 years ago. I wondered whether camera.near and camera.far were ways of defining when objects in the scene get blurred, when they are at distances beyond the far or near points.

It turns out, of course, that I was really wrong, since these values are used for clipping: improving performance by not rendering objects outside the view frustum. I had naively thought three.js worked like real-life cameras and that I would be able to create cinematic-looking scenes. Some helpful soul on the three.js IRC channel then pointed me to the post-processing DOF example done by alteredqualia, who ported the original bokeh shader written by Martins Upitis.

Fast forward to the present: we have seen that shader used in ROME, and Martins Upitis has since updated his bokeh shader to make it more realistic, so I attempted to port it back to three.js/webgl.


With focus debug turned on


Testing it in a scene


The example added to three.js with glitters.

So to copy what martinsh says the new shader does, it has the flexibility of
• variable sample count to increase quality/performance
• option to blur depth buffer to reduce hard edges
• option to dither the samples with noise or pattern
• bokeh chromatic aberration/fringing
• bokeh bias to bring out bokeh edges
• image thresholding to bring out highlights when image is out of focus
• pentagonal bokeh shape (experimental)
• bokeh vignetting at screen edges

The new three.js example also demonstrates how object picking can be used and interpolated for the focal distance too. More detailed comments about the parameters were also written on github.

Of course the shader is not perfect, as DOF is not that simple (there are quite a few in-depth Graphics Gems articles on it). Much of it is post-processing smoke and mirrors, the way it is usually done in rasterization, compared to path tracing and the like. Yet I think it's a great addition to have in WebGL, just as we have seen DOF used in the Crytek and Nvidia demos or in other high-end games. (There was also a cool video of a Minecraft mod using that DOF shader, but it now seems to have been removed when I recently looked for it.)

I would love to see the tasteful use of bokeh sometime; not just because it feels cinematic or has been widely used in photography, I think it's also more natural, given that's how our eyes work with our brains (more details here).

Finally, it seems that the deadline for the current js1k contest is just hours away, which means I gotta head off to do some cranking and crunching. Maybe more on that in a later post! 😀

Nucleal, The Exploding Photo Experiment

Particles. Photos. Exploding motions. The outcome of experimentation of more particles done this year. Check out http://nucleal.com/


This might probably look better in motion

Without boring you with a large amount of text, perhaps some pictures can help do the talking.

First you get to choose how many particles to run.


Most decent computers can do 262144 particles easily; even my 3-generation-old 11″ macbook air can run 1 or 4 million particles.

On the top bar, you get some options on how you may interact with the particles, or you can select photo albums if connected to facebook.

At the bottom is a film strip which helps you view and select photos from the photo album.

Of course at the middle you view the photo particles. A trackball camera is used, so you could control the camera with the different buttons of your mouse, or press A, S or D while moving your mouse.

Instead of arranging the particles in a plane like a photo, you could assemble them as a sphere, cone, or even a “supershape”.

Static shapes by themselves aren't that good, so physics forces can be applied to the particle formation.

Instead of the default noise wave, you can use 3 invisible orbs to attract particles with intensity relating to its individual colors

Or do the opposite of attracting, repelling

My favorite physics motion is the “air brakes”.

This slows and stops the particles in their tracks, allow you to observe something like a “bullet time”.

While not every combination always looks good, it's pretty fun to see what the particles form after some time, especially with air brakes between different combinations.

Oh btw noticed the colors are kind of greyscale? That’s the vintage photo effect I’m applying in real time to the photos.

And for the other photo effect I added, a cross-processing filter.

(this btw is my nephew, who allows me to spend less attention and time on twitter these days:)

So hopefully I’ve given you a satisfying walk-through of the Nucleal experiment, at least way simpler than the massive Higgs boson “god” particle experiment.

Of course some elements of this experiment are also not entirely new.

Before this, I myself have also enjoyed the particle experiments of
Kris Temmerman’s blowing up images
Ronny Welter’s video animation modification
Paul Lewis’s Photo Particles

The difference is that now the brilliant idea of using photos to spawn particles can reach a new level of interactivity and particle massiveness, all done in the browser.

While I'm happy with the results, this is just the tip of the iceberg. This being an experiment, there's much room for improvement in both artistic and technical areas.

A big thank you again to those involved in three.js, on which this was built; to those adventurous ones who have explored GPGPU/FBO particles before me; to those who blog and share their knowledge about GLSL and graphics, from which much was absorbed to assemble this together; and not least to Yvo Schaap, who supported me and this project.

Thank you also to others who are encouraging and make the internet a better place.

p.s. this is largely a non-technical post right? Stay tune for perhaps a more in-depth technical post about this experiment :)

Cool Street View Projects

Haven’t tried Google Street Views? You should! That’s a practical way to teleport and also a nice way to travel around the world virtually for free.

[asides: if you're already a street view fan, have you tried the 8-bit version? and if you've tried that, what about google's version of photosynth?]

Let me highlight a couple of innovative projects that came to my attention utilizing street views.

1. cinemascapes – street view photography

beautiful images (with post processing of course) captured using street views around the world. take a look.

[asides:
9 eyes has a similar approach, with a different style. street view fun is kind of the paparazzi version. so do you really think your iphone is the best street photography camera?
]

2. Address Is Approximate – stop-motion video

Creative and beautiful stop-motion film using street views. Watch it!! (some may be reminded of the cinematography in Limitless)

[asides:
“Address is Approximate” was brilliantly executed but there’s no harm to trying it yourself. like this]

3. Chemin Vert – interactive 360 video

Look around while on a train travelling at 1500km/h, across five continents and four seasons. You need webgl + a decent pc + gpu for this. Try the "HI-FI" version if you have lots of patience waiting for the video to load, and for your browser to crash. And the vimeo version if you like the little planets projections.

And if you're like me, watching the (un)projected video source is great enough. hifi version

[asides: though, the first time i saw this technique being employed was in thewildernessdowntown, which i think mrdoob and thespite worked on, see next point]

4. thespite's Experiments – interactive mashups

Cool experiments by thespite show that he has been doing such street view mashups with three.js for a while now. What's better is that he has just released his Street View panorama library on github! Fork it!

[asides: thespite’s a nice guy who gave me the permission to use GSVPano.js even before releasing it]

5. Honorable mention – stereographic street views

Stereographic Streetviews is an open source project which produces stereographic (aka “little planets”) views. Uses the power of graphic cards for processing, and allows you to easily customize the shader for additional post-processing.

[asides: some of the post-processing filters are quite cool. and others have also created videos with this]

So why am I writing this? Because I think they are brilliant ideas which I could have thought of but would never get to execute (like a mashup of GSVPano with renderflies).

Anything else I’m missing? Otherwise, hope to see great new street view projects and I’ll try working on mine. Have a good time on the streets!

[update] Hyperlapse – another project to add to the list.

Experiments behind “It Came Upon”

It has been a while now, but I thought, ok, we need to finish this, otherwise all of it would probably be left in the dust and I'd never move on. So here it is: trying to finish touching on some of the technical aspects, continuing from the post about "It Came Upon".

Not everything was done the best or the right way, but there's some hope that documenting it anyway might benefit someone, at the least myself when looking back in future. There are many elements in that demo, so let me try listing the points in the order they appeared.

1. Dialog Overlays


The first thing after landing on the main page is a greeting contained in a dialog. Yes, dialogs like this are common on the net, but guess what's so different about this one? Partially correct, if you are guessing rounded borders without images, using CSS3.

What I wanted to achieve was also a minimalistic javascript integration, that is, no more checking for browser dimensions and javascript calculations. I thought that using the new CSS3 box model would work, but it was giving me headaches across browsers, so I fell back on a much older technology which actually works: table-style layout. This allows the content to be centered nicely, and the only width I had to specify (or maybe not even that) was the dialog width. Javascript was just used for toggling the visibility of the dialog: style.display = 'table';

To see its inner guts, it's perhaps best to look at the CSS and the JS. Or perhaps read this for a stab at the CSS3 box border yourself.

2. Starfields

Star fields are a pretty common thing. The way I implemented it was using three static Three.js particle systems (which aren't emitters) and looping them, such that once a starfield crosses the static camera at a particular time, it is moved to the back to be reused, for an infinite starfield scene.

3. Auroras experiments.
I've learnt that it's sometimes easier to write experiments by starting afresh and simple, in this case with javascript + 2d canvas, before doing it as a webgl shader. Here's the js version on jsdo.it so you can experiment with it.

Javascript Aurora Effect – jsdo.it – share JavaScript, HTML5 and CSS

This version runs anywhere from a reasonable speed to really slow, depending on the canvas resolution and the number of octaves. The shape of the aurora is generated with 3d simplex noise, where 2 dimensions correspond to each pixel's position and the 3rd dimension is time. Just a few parameters can change the effect of perlin noise drastically, so it's not too good an idea to have too many factors if you do not understand what's going on. I learnt that the hard way after what I thought was a failed experiment to create procedural moving clouds; I reread the basics of perlin noise and started over with a really simple example. In this case, I felt that playing with the X and Y scales created the effect I wanted. These "perlin" levels were mixed with a color gradient containing the spectrum of the auroras (which moves a little for extra dynamics). Next, another gradient was added from top to bottom to emulate the vertical falloff of light intensity.
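
In pseudo-ish JavaScript, the per-pixel idea is roughly this (a sketch; simplex3 is an assumed 3d noise function from a noise library, and the scales and colors are illustrative):

// Rough sketch of the canvas approach. simplex3(x, y, t) is an assumed
// 3d noise function returning roughly -1..1.
function drawAurora( ctx, width, height, time ) {
    var image = ctx.getImageData( 0, 0, width, height );
    for ( var y = 0; y < height; y ++ ) {
        for ( var x = 0; x < width; x ++ ) {
            // stretch the noise more horizontally than vertically
            var n = ( simplex3( x * 0.01, y * 0.05, time ) + 1 ) / 2;
            var vertical = 1 - y / height; // brighter towards the top
            var i = ( y * width + x ) * 4;
            image.data[ i     ] = 0;                  // r
            image.data[ i + 1 ] = 255 * n * vertical; // g, greenish aurora
            image.data[ i + 2 ] = 128 * n * vertical; // b
            image.data[ i + 3 ] = 255;                // a
        }
    }
    ctx.putImageData( image, 0, 0 );
}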


For kicks, I created a GLSL shader version which potentially runs much faster than the canvas version. However, not having much time to integrate the aurora with the demo, I used the canvas version instead. It was reduced to 128 by 128 (chrome ran 256×256 pretty well, but not firefox at the time) with a single octave, then used as a texture for a plane added to the scene. I also experimented with adding the texture onto an inverted sphere, which gave an unexpected but interesting look.


Finally, I thought there aren't many well-known ways to create aurora effects (after searching), so this was just my approach, though it might not be the simplest or best way either. Recently though, I've found 2 other procedurally generated aurora examples which one might wish to look into if interested: http://glsl.heroku.com/e#1680.0, and Eddie Lee also coded some really beautiful aurora effects for his kyoto project with GLSL in OpenGL, using 3D hermite spline curves and a wrappable perlin texture (more in that video description and shader source code!)

3. Night Sky Auroras + Trails

The long-exposure star trail effect is created by not clearing the buffer. This technique is shown in the trails three.js example. It is then toggled on the "timeline" (see the next point).
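
In three.js this is mostly a matter of turning off the automatic clearing of the color buffer, something like this (the trails example also creates the renderer with preserveDrawingBuffer: true):

// keep what was drawn in previous frames so moving stars leave trails
var renderer = new THREE.WebGLRenderer( { preserveDrawingBuffer: true } );
renderer.autoClearColor = false;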

4. Director class.
Perhaps it's now time to introduce the Director class. For time-based animations, you need something to manage your timeline and animations. Mr.doob has a library called sequencer.js (not to be confused with music sequencing!) which is used in various projects he has worked on; the library helps load sequences or scenes based on time. The Director class I wrote is somewhat similar, except that it works on a more micro level, with direct support for animation by easing functions from tween.js. Perhaps it is more similar to Marcin Ignac's timeline.js library, except with support for adding Actions (instead of just tweens) at particular times.

The API is something like
director = new THREE.Director();
director.addAction(0, function() {
    // add objects
})
.addTween(0, 4, camera.position,
    { x: -280, y: 280, z: -3000 },
    { x: -280, y: 280, z: -2600 },
    'Linear.EaseNone')
.addAction(4.0, function() {
    // here's a discrete action
    camera.position.set(700, 160, 1900);
    callSomethingElse();
});

// To start, just call
director.start();

// in your render loop, call
director.update();

// scenes can also be merged via
director.merge(scene2director);

To put it simply, Director is a javascript class which does time-based scheduling and tweening for animations in a scene. I've planned to improve and release it, but in the meantime you can look at the javascript source. (This is also integrated with point #)

5. Snow scene

The components that make up the main snow scene can actually be found in the comprehensive three.js examples. The main elements apart from snow are shadows, flares, text, and post processing.

I think I've covered why I chose the shadow, flare and text elements in the previous post, so look into those examples linked above and I'll go straight into the post-processing element.

6. Post processing

The only post-processing filter used here is a tilt-shift, which emulates a tilt-shift lens effect. An actual tilt-shift lens has the ability to put items at different focal lengths in focus, or items at the same focal length out of focus. Such lenses are usually used for architectural photography, but they also have a reputation for creating "miniature landscapes", characterized by the top and bottom blurring of a photo. This post-processing filter does exactly that kind of effect rather than emulating the real lens. The effect is surprisingly nice: it helps create softer shadows and brings focus to the middle of the screen. I had initially wanted to port evan's version, which allows more control of the amount of blur plane, but unfortunately that didn't happen in the time frame I had.

7. Snow Particles

There are 2 particle effects used here. One is the snow and the other is the disintegrating text particle effect.

I've used my particle library sparks.js to manage the snow here. The sprites for the particles are drawn using the Canvas element (like most of my particle examples). Perhaps it'd be harder for me to express this in words, so let the related code segment along with comments do the talking.

The 2 main elements of this effect are that it's emitted from a rectangular area (ParallelogramZone) and given a random drift. The "heaviness" of the snow (which is controllable by the menu options) adjusts particleProducer.rate, which can simply turn the scene from no snow to a snow blizzard. (The Director also controls the stopping and starting of the snow for the text scene.)

8. Text Characters Generation

Before getting to the text particles, a few comments on the text generation method used here. While we have already demonstrated dynamic 3d text generation in the three.js examples, there's a slight modification used for the text effects.

Instead of regenerating the entire text mesh each time a letter is "typed", each character's text mesh is generated and cached (so triangulation is reduced if any letter is repeated) and added to an Object3D group. This gives the ability to control, manipulate and remove each 3d character individually when needed, but more importantly, it gives better performance. With that in place, the text typing recording and playback effect was almost done.
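
A hedged sketch of that caching idea (the geometry parameters and font setup are illustrative, and textMaterial is assumed to exist; this is not the demo's actual code):

// Illustrative per-character geometry cache.
var characterCache = {};
var textGroup = new THREE.Object3D();
scene.add( textGroup );

function typeCharacter( character, offsetX ) {
    var geometry = characterCache[ character ];
    if ( ! geometry ) {
        // triangulate each distinct character only once
        geometry = new THREE.TextGeometry( character, { size: 50, height: 10 } );
        characterCache[ character ] = geometry;
    }
    var mesh = new THREE.Mesh( geometry, textMaterial );
    mesh.position.x = offsetX;
    textGroup.add( mesh ); // each letter stays individually controllable
    return mesh;
}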

9. Text Particles

After the end of each line, the text bursts into particles. This was another area of experimentation for me. Converting text to particles can be done like this:

a. Paint the text onto a 2d canvas.
b. Either, 1) randomly place particles and keep those that land within the painted text area, or
2) randomize particles to lie within the painted text area. Next, move them along a z-distance.

Method 2 works pretty well for minimal “particle waste”.
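
A sketch of option 2 with the canvas API (function and variable names are illustrative):

// Illustrative sketch: sample particle targets from text drawn on a 2d canvas.
function textToPoints( text, count ) {
    var canvas = document.createElement( 'canvas' );
    canvas.width = 512; canvas.height = 128;
    var ctx = canvas.getContext( '2d' );
    ctx.font = '100px sans-serif';
    ctx.fillText( text, 10, 100 );

    // collect the coordinates of every painted pixel
    var data = ctx.getImageData( 0, 0, canvas.width, canvas.height ).data;
    var painted = [];
    for ( var i = 3; i < data.length; i += 4 ) {
        if ( data[ i ] > 0 ) { // alpha channel
            var p = ( i - 3 ) / 4;
            painted.push( { x: p % canvas.width, y: Math.floor( p / canvas.width ) } );
        }
    }

    // pick random painted pixels as particle targets
    var points = [];
    for ( var j = 0; j < count; j ++ ) {
        points.push( painted[ Math.random() * painted.length | 0 ] );
    }
    return points;
}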

However, I thought that matching typeface.js fonts to the 2d canvas might take me a little more time, so I decided to use the mesh-to-particles THREE.GeometryUtils.randomPointsInGeometry() method (first seen in @alteredq's shadowmap demo), which randomizes particles to lie on the mesh's (sur)faces instead. While I preferred the previous approach, since it gave a nicer volumetric feel, the latter approach likewise worked and, on the bright side, showed the shape of the mesh better when viewed from the sides.

10. Key recordings.
The animation of the 3d text messages was done using "live recording" with a Recorder class (inside http://jabtunes.com/itcameupon/textEffects.js). Every time a keypress is made, the event is pushed to the recorder, which stores the running time and the event. The events can then be serialized to JSON format, which can be loaded at another time. The Recorder class also interfaces with the Director to push the events for playback. This is how users' recordings are saved to JSON and stored on the server.
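
The essence of such a recorder is small; here is a sketch (not the actual textEffects.js code):

// Minimal key-recording sketch.
function Recorder() {
    this.events = [];
    this.startTime = Date.now();
}

Recorder.prototype.record = function ( key ) {
    this.events.push( { time: ( Date.now() - this.startTime ) / 1000, key: key } );
};

Recorder.prototype.toJSON = function () {
    return JSON.stringify( this.events );
};

// usage: push every keypress, then serialize for storage / playback
// document.addEventListener( 'keypress', function ( e ) {
//     recorder.record( String.fromCharCode( e.which ) );
// } );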

11. Audio Playback
The Jasmid library mentioned in the previous post is utilized for browser-based synthesis of midi files. This version uses the Web Audio API for chrome browsers, and some tweaked buffer settings for better playback when using the firefox Audio Data API.

11. Snowman

I almost forgot about the snowman. That was "modelled", or rather procedurally generated, using Three.js primitives (spheres, cylinders, etc.). One revision before the final one even had fingers. Yeah, it was a little painful having to refresh the browser to see the changes after every change, so the web inspector console helped a little. Still, while animating with the Director class, there was even more trial and error, and waiting for the playback. It was only later that I added a .goto(time) method to the Director class. But I made a few interesting accidental discoveries, like how the impalement of the snowman was done by setting an object to a large negative scale.

As those who follow me on twitter might already know, I wrote the Three.js Inspector a couple of weekends later, which would potentially have made things much easier. Perhaps more on ThreeInspector in another post.

Concluding thoughts
Wow, you've read till here? Cool! It almost felt painful trying to finish this writeup (reminding me of school). If there's anything I missed out, feel free to contact me or dig into the source. While this hasn't been groundbreaking, it didn't exist at the snap of a finger. I've learnt to build experiments in small parts and to make integration modular. There's room for refinement, better tools, and better integration. Signing off, stay tuned for more updates on experiments! :)

Behind “It Came Upon”

Falling snowflakes, a little snowman, a little flare from the sun. That was the simple idea of a beautiful snow scene which I thought could be turned into an interactive animated online christmas greeting card for friends. It turns out "It Came Upon" became one of my larger "webgl" personal projects of 2011 (after arabesque).


(screencast here if you can't run the demo).

The idea might have only taken a second, or at most a minute, but far from being a simple piece of work, the final production needed the fusion of many more experiments done behind the scenes. I guess this is how new technologies get developed and showcased in Pixar short films too. I wondered whether a writeup of the technical bits would be really lengthy or boring, so I'll try leaving that for another post and focus on the direction I had for the project, and share some thoughts looking back.

The demo can mainly be broken down into 4 scenes
1. splash scene – flying star fields
2. intro scene – stars, star trails and auroras (northern lights)
3. main scene – sunrise to sunset, flares, snows, text, snowman, recording playback, particles
4. end scene – (an “outro”? of) an animated “The End”

Splash scene.
It's kind of interesting that, as opposed to flash intro scenes, you don't want music blasting off the moment a user enters the site. Instead, the splash scene is used to allow the full loading of files, while giving another chance to scare users away from running the demo, especially if WebGL is not enabled in their browsers. Then there was the thought of giving users some sense of flying through the stars while they get ready. (I hope nobody's thinking starwars or the windows 3.1 screensaver.) Then again, flight in splash scenes is not new: we had flocking birds in wildness downtown and flying clouds in rome.

Intro scene.
The intro scene was added originally to allow a transition into the main snow scene, instead of jumping in and ending there. Since it was daylight in the snow scene, I thought a changing sky from night to day would be nice, hence the star trails and Aurora Borealis (also known as the northern lights). I've captured some star trails before, like
or more recently in 2011, this timelapse.

a little stars from Joshua Koo on Vimeo.

But no, I haven't seen the Aurora Borealis in real life, so this must be my impression from photos and videos seen on the net, and hopefully one day I'll get to see it in real life! Both still and time-lapse photography appeal to me, which also explains the combination used.
Time-lapsed sequences give the sense of the earth's rotation, and the still photographs imitate long exposures, which create beautiful long streaks of stars.

Both a wide lens and a zoom lens were also used in this sequence for a more dramatic effect. Also, I fixed the aspect of the screen to a really wide format, as if it were a wide-format movie or a panorama. Oh, by the way, "It Came Upon" comes from the title of the music played in the background, the christmas carol It Came Upon a Midnight Clear.

Snow scene.

At the center of the snow scene is a 3D billboard, probably something like the hollywood sign or the fox intro. It's just a static scene, but the opening was meant to be more mysterious, slowly revealing the text through the morning fog while the camera moves towards it and then pans along it, interweaving with still shots from different angles. As the sun rises, the flares hit the camera and the scene brightens up. The camera then spins around the text in an ascending manner, as if it could only have been done with expensive rigs and cranes or mounted on a helicopter. Snowfall has already started.

Some friends have asked why the camera is always against the sun. Having the sun in front of the camera makes the flare enter the lens, and allows shadows falling towards the camera to be more dramatic. Part of making or breaking the rule of not shooting into the light, which I blogged about sometime ago.

Part of this big main scene was to embed a text message. There were a few ideas I had for doing this. One was having 3D billboards laid up a snow mountain, with the message getting revealed as the camera flies through the mountains. Another was firing fireworks into the sky, forming text in the air. Pressed for time, I implemented creating text on the ground and turning it into particles so the next line of the message could be revealed. The text playback was created by recording keystrokes. The camera jerks every time a key is pressed to give more impactful feedback, rather like the feel of using a typewriter. On hitting the return key, the text characters turn into particles and fly away. This was also the interaction I intended to give users, allowing them to record and generate text in real time and share it with others. An example of a user-recorded text by chrome experiments, or my new year greetings.

Outro.
And so, like all shows, it ends with "The End". But not really, as one might notice a parody of the famous Pixar intro, as the snowman comes to life and eventually hops onto a letter to flatten it. Instead of hopping across the snow scene, it falls into the snow, swimming like a polar bear (stalking across the snow, as one has commented), to the letter T. After failing to flatten it, the snowman gets impaled by the "T" and is dragged down into the snow. The End.

A friend asked for blood from the snowman, but in fact I had already tried tailoring the amount of sadism: it wasn't the mildest (at least I thought), nor did I want it overly sadistic (many commented the snowman was cute). Interestingly, as I was reviewing the snowmen in Calvin and Hobbes, I realized there's a horror element to them, as much as I would have loved to have snowmen as adorable as Calvin himself or Hobbes. Then again, it could represent Calvin's evil genius, or the irony that snowmen never survive after winter.

Thoughts.

First of all, I realized that it's no easy task to create a decent animation or any form of interactive demo, especially with pure code, and it's definitely not something that can be done in a night or two. I definitely have respect for those in the demo scene. I had the idea in a minute, and a prototype in a day, having thought: what could be so difficult, there's three.js to do all the 3D work and sparks.js for particles? Boy was I wrong, and I now have a better understanding of 1% inspiration and 99% perspiration. With my self-imposed deadline to complete it before the end of the year, not everything was up to even my own artistic or technical expectations, but I was just glad I had it executed and done with.

Looking back at year 2011, it was probably a year of graphics, 3D and WebGL programming for me. I started with nothing and ended with at least something (I still remember asking noob questions when starting out in IRC). I did a lot of reading (blogs and siggraph papers), did a whole lot of experimentation (though a fair bit of it was uncompleted or failed), generated some wacky ideas, contributed in some ways to three.js, now have a fair bit of knowledge about the library and webgl, and have had at least 2 experiments featured. What's in store for me in 2012?