
Exploring Simple Noise and Clouds with Three.js

A while back I attempted writing a simple CloudShader for use in three.js scenes. Exploring the use of noise, it wasn't difficult to create clouds that didn't look too bad visually. Another idea came along: adding crepuscular rays ("God rays") to make the scene more interesting. Failing to mix the two together correctly on my first tries, I left these experiments abandoned, until one fine day (or rather one night) I decided to give it another shot and finally got it to work, so here's the example.


(Now comes with sliders for cloud control!)

In this post, I'm going to explain some of the ideas behind generating procedural clouds with noise (the two topics frequently go hand in hand). While noise might be bread and butter in the world of computer graphics, it still took me a while to wrap my head around it.

Noise
Noise could be described as a pseudo-random texture. Pseudo-random means that it might appear to be totally random, but being generated by a computer, it is not. Many might also refer to noise as Perlin noise (thanks to the work of Ken Perlin), but there are really many different forms of noise, e.g. Perlin noise, Simplex noise, Value noise, Wavelet noise, Gradient noise, Worley noise, Simulation noise.

The approach I use for my clouds could be considered Value noise. Let's start creating some random noise by filling a DataTexture of 256 by 256 pixels.


// Generate random noise texture
var noiseSize = 256;
var size = noiseSize * noiseSize;
var data = new Uint8Array( 4 * size );
for ( var i = 0; i < size * 4; i ++ ) {
    data[ i ] = Math.random() * 255 | 0;
}
var dt = new THREE.DataTexture( data, noiseSize, noiseSize, THREE.RGBAFormat );
dt.wrapS = THREE.RepeatWrapping;
dt.wrapT = THREE.RepeatWrapping;
dt.needsUpdate = true;
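
To feed this texture to a shader, the DataTexture can be passed to a ShaderMaterial as a uniform. A minimal sketch, assuming the shader sources live in vertexShaderSource and fragmentShaderSource (illustrative names), using the old-style uniform declaration:


// hand the noise texture to the cloud shader
// (variable and uniform names here are illustrative)
var shaderMaterial = new THREE.ShaderMaterial({
    uniforms: {
        noiseTexture: { type: 't', value: dt }
    },
    vertexShader: vertexShaderSource,
    fragmentShader: fragmentShaderSource
});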

Now if we were to render this texture, it would look really random (obviously), like a broken TV channel.

(figure: the random noise texture rendered in black and white)
(we set alpha to 255, and r=g=b to illustrate the example here)

Let's say we were to use the pixel values as a height map for a terrain (another use for noise): it is going to look really disjointed and random. One way to fix this is to interpolate the values from one point to another, which gives us smooth noise. The nice thing about textures is that this interpolation can be done by the graphics hardware automatically. By default (or by setting the `.minFilter` and `.magFilter` properties of a THREE.Texture to linear filtering), you get almost free interpolation whenever you read a point on the texture that falls between two or more pixels.
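
In three.js terms, that amounts to something like this sketch (these are the stock filter constants; note the mipmap variants only apply to minification):


// linear filtering interpolates between texels when the shader samples
// between them; the mipmap variant also blends across mipmap levels
dt.minFilter = THREE.LinearMipMapLinearFilter;
dt.magFilter = THREE.LinearFilter;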

Still, this isn't enough to look anything like clouds. The next step is to apply fractional Brownian motion, which is a summation of successive octaves of noise, each with higher frequency and lower amplitude. This generates fractal noise, which makes a much more interesting and continuous texture. I'm doing this in the fragment shader with a few lines of code…


uniform sampler2D noiseTexture; // the DataTexture from earlier (uniform name assumed)

float fnoise(vec2 uv) {
    float f = 0.;
    float scale = 1.;
    // sum 5 octaves: each doubles the frequency and halves the amplitude
    for (int i = 0; i < 5; i++) {
        scale *= 2.;
        f += texture2D(noiseTexture, uv * scale).x / scale;
    }
    return f;
}

Given that my data texture has 4 channels (RGBA), one could pull out 3 or 4 components if needed, like


vec3 fNoise(vec2 uv) {
    vec3 f = vec3(0.);
    float scale = 1.;
    for (int i = 0; i < 5; i++) {
        scale *= 2.;
        f += texture2D(noiseTexture, uv * scale).xyz / scale;
    }
    return f;
}

Now if you were to render this, it might look similar to the output of the perlinNoise class in Flash/ActionScript or the Clouds filter in Photoshop.

(figure: the resulting fractal noise)

2D Clouds
Although we now have a procedural cloud shader, how do we integrate it into a three.js scene? One way is to texture it over a sphere or a skybox, but the approach I use is to create a paraboloid shell generated with ParametricGeometry, similar to the approach Steven Wittens used to render auroras in his demo "NeverSeenTheSky". The code / formula I use is simply this


function SkyDome(i, j) {
    i -= 0.5;
    j -= 0.5;
    var r2 = i * i * 4 + j * j * 4;
    return new THREE.Vector3(
        i * 20000,
        (1 - r2) * 5000,
        j * 20000
    ).multiplyScalar(0.05);
};
var skyMesh = new THREE.Mesh(
    new THREE.ParametricGeometry(SkyDome, 5, 5),
    shaderMaterial
);

Now comes the remaining, but probably most important, step in simulating clouds: making use of and tweaking the values from the fractal noise to obtain the kind of clouds you want. This is done in the fragment shader, where you decide what thresholds to apply (e.g. cutting off the high or low values) or what curve or function to apply to the signal. Two articles which gave me ideas are Hugo Elias's clouds and Iñigo Quilez's dynamic 2d clouds. Apart from these, I added a function (where o is opacity) to fade the clouds nearer the horizon, creating the illusion of clouds disappearing into the distance.


// applies more transparency towards the horizon
// to create the illusion of distant clouds
o = 1. - o * o * o * o;
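
To give a concrete idea of the thresholding, here is a minimal sketch in the spirit of those articles (the coverage value is a made-up tweakable, not the exact one I use):


// shape the fractal noise into clouds: cut off values below a
// coverage threshold and rescale the remainder to 0..1
float cloud(vec2 uv) {
    float n = fnoise(uv);
    float cover = 0.5; // illustrative threshold
    return clamp((n - cover) / (1.0 - cover), 0.0, 1.0);
}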

Crepuscular rays
So I'm going to take a break from explaining my ideas for producing the clouds. This might be disappointing if you were expecting more advanced shading techniques, like ray marching or volumetric rendering, but I'm trying to see how far we can go with just the basic, easy stuff. If adding the crepuscular rays works, it will produce a more impressive effect while letting us avoid the complicated stuff for the moment.

So for the "God rays", I started with the webgl_postprocessing_godrays example in three.js, implemented by @huwb using a technique similar to the one used by Crytek. After some time debugging why my scene didn't render correctly, I realized that my cloud shader (a ShaderMaterial) didn't play well in the depth rendering step (which overrides the scene's materials with the default MeshDepthMaterial), which is needed to compute occluded objects correctly. For that I manually override materials for the depth rendering step, and pass a uniform to the CloudShader to discard or write depth values based on the color and opacity of the clouds.
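
In old-style three.js code, such a manual override looks roughly like this (a sketch under my assumptions; the render target name is illustrative):


// render the depth pass with an override material so occluders are
// captured, then restore the scene's own materials
var depthMaterial = new THREE.MeshDepthMaterial();
scene.overrideMaterial = depthMaterial;
renderer.render(scene, camera, depthRenderTarget);
scene.overrideMaterial = null;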

Conclusion
I hope I've introduced the ideas behind noise and shown how simple generating clouds with it can be. One way to get started with experimenting is to use Firefox, which now has a Shader Editor in its improved web developer tools that allows experimentation with shaders in real time. Much is up to one's imagination or creativity, for example, turning the clouds into moving fog.

Clouds are also such a common and interesting topic that I believe there is much (advanced) literature on them (websites, blogs and papers). As mentioned earlier, the links I found to be good starting points are by Hugo Elias and Iñigo Quilez. One website which I found explains noise in an easy-to-understand fashion: http://lodev.org/cgtutor/randomnoise.html

Before ending, I would love to point out a couple of other realtime browser-based examples I love, implemented with very different and creative approaches.

1. mrdoob's clouds which uses billboard sprites – http://mrdoob.com/lab/javascript/webgl/clouds/
2. Jaume Sánchez’s clouds which uses css – http://www.clicktorelease.com/code/css3dclouds/
3. IQ Clouds which uses some form of volumetric ray marching in a pixel shader! – https://www.shadertoy.com/view/XslGRr

So if you're interested, read up and experiment as much as you can; the possibilities are never-ending!

Making of Boids and Buildings

Here's my latest three.js WebGL experiment, called "Boids and Buildings". It's an experimental 3d procedural city generator that runs in the form of a short demo. Turn up your volume and be prepared to overclock your CPU (& GPU) a little.

Here's the story: early last month I was in Japan for a little travelling. One thing I found particularly interesting was the architecture. The cities are large and packed with buildings (Edo, now called Tokyo, was once the world's largest city).


(The huge billboards are at the famous Dotonbori in Osaka; the bottom 2 photos are houses shot in Kyoto.)

Some time ago, I saw mrdoob's procedural city generator. We had thought about creating a 3d version of it, but only during the trip did I decide I should try working on it. Some of the process was written up in this Google+ post, but I'll summarize it a little here.

Firstly, the city generator works by building a road which spins off more roads along the way. A road stops when it intersects another road or reaches the boundary of the land. Another way of looking at this is that the land is split into many "small lands" divided by the roads. My idea was that if I could extract the shape information of each divided land, I could run ExtrudeGeometry to create buildings filling the shape of each land.
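
As a minimal sketch of that idea, assuming each land's outline is available as an array of THREE.Vector2 points (the helper name is hypothetical):


// extrude a land outline into a simple building block
function buildingFromLand(points, height, material) {
    var shape = new THREE.Shape(points);
    var geometry = new THREE.ExtrudeGeometry(shape, {
        amount: height,      // extrusion depth ("amount" in older three.js)
        bevelEnabled: false
    });
    return new THREE.Mesh(geometry, material);
}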

The original JS implementation detected road intersections by looking up pixel data on the canvas. While I managed to write a version which detects faces based on pixels, detecting edges and vertices was more tedious than I thought, short of writing or using an image processing library similar to potrace. Instead of doing that, I worked on an alternative intersection detection based on mathematically calculating whether each pair of lines/roads intersects. This is probably slower and takes up more memory, but the points of intersection are retained.
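
For illustration, here is a sketch of such a segment-segment test using the standard parametric form (not necessarily the exact code in the demo):


// returns the intersection point of segments p1-p2 and p3-p4, or null
function intersectSegments(p1, p2, p3, p4) {
    var d = (p2.x - p1.x) * (p4.y - p3.y) - (p2.y - p1.y) * (p4.x - p3.x);
    if (d === 0) return null; // parallel or collinear
    var t = ((p3.x - p1.x) * (p4.y - p3.y) - (p3.y - p1.y) * (p4.x - p3.x)) / d;
    var u = ((p3.x - p1.x) * (p2.y - p1.y) - (p3.y - p1.y) * (p2.x - p1.x)) / d;
    if (t < 0 || t > 1 || u < 0 || u > 1) return null; // misses a segment
    return { x: p1.x + t * (p2.x - p1.x), y: p1.y + t * (p2.y - p1.y) };
}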


(This version marks the starting and ending points of each road with red and green dots.)

However, some important information is still missing: which edges connect the points, and which edges belong to each face/land. Instead of determining this information after processing everything, a half-edge data structure can elegantly compute it on the fly. Demoscene coders might have seen this technique used for mesh generation.

Without going into an in-depth technical explanation, here's an analogy for how half-edges are employed. Every land is defined by the roads surrounding it, and you build fences around the perimeter of the land to enclose the area within. The enclosed area defines a face, and each fence is like a half-edge. Since each road divides the land into 2 sides (left and right), 2 fences are constructed, denoting the lands on either side. If a road is built through an existing land, fences on the existing land have to be broken down to connect to each side of the new road's fences. In code, each new road contains 2 half-edges, and connecting new roads requires creating new split edges and updating the linked half-edge pointers.
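
A minimal sketch of the kind of half-edge record this analogy implies (field names are hypothetical, not the demo's actual code):


// one "fence": a directed edge along one side of a road
function HalfEdge(vertex) {
    this.vertex = vertex; // the vertex this half-edge points to
    this.twin = null;     // the fence on the other side of the same road
    this.next = null;     // the next fence around the same enclosed land
    this.face = null;     // the land (face) this fence helps enclose
}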

With that, a 2D version…
(figure: the 2D generator)

… and a 3D version was done.
(figure: the 3D version)

(In hindsight, it might have been possible to use half-edges with the original pixel collision detection.)

Now we've got 3d buildings, but the scene lacks the emotions and feelings I initially had in mind. I thought perhaps I'd need to simulate the view out of a train, but that might require simulating train rails, although the stored paths of the roads could be used for the procedural motion. As I was already starting to feel weary of this whole experiment, I had another idea: since there's a boids example in three.js, why not try attaching a camera to a boid? Not only do you get a bird's eye view, you get camera animation for free! I did a quick mashup and the effect seemed rather pleasant.

Incidentally, mrdoob in his code named the roads Boids, and I thought the name “Boids and Buildings” would be a rather fitting theme.

Let me now jump around the bits and pieces I can think of to write about.

Splash screen
I abstracted the map generator into a class called Land which takes in parameters, and it was reusable for both the 2d and 3d versions. In the splash screen, CSS 3D transforms are used to translate and scale the map generator in the background.

Text fonts
I wanted to create some text animation on the splash page, with a little of the boid/map-generation feeling. I used mrdoob's line font from Aaronetrope and experimented with various text animations.

Visual Direction
To simplify working with colors, I used random greys for the buildings at the start and ended up with a "black & white" / greyscale direction. For post-processing, I added just the film shader by alteredqualia found in the three.js examples, with a slightly high amount of noise.
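
With the stock pass from the three.js examples, that looks roughly like this (assuming an EffectComposer named composer; the parameter values are illustrative):


// film grain + scanlines: arguments are noise intensity,
// scanline intensity, scanline count, and a greyscale toggle
var filmPass = new THREE.FilmPass(0.5, 0.025, 648, true);
composer.addPass(filmPass);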

Scene control and animation
Since most of the animation was procedural, much of it was driven by code during development. When it was time to have some timeline control, I used Director.js, which I wrote for "itcameupon". The Director class schedules time-based actions and tweens, and most of the random snippets of animation code were added to it. So, more or less, the animation runs on a fixed schedule except for the randomized parts (e.g. the times when roads start building on lands).

Pseudorandomness
This experiment had me dealing with lots of randomness, but thanks to probability distributions, lots of randomness can actually give expected results. I used this fact to help in debugging too. For example, say you would like to quickly dump values to the console in the render loop but you're afraid of crashing your devtools – you could do (Math.random()<0.1) && console.log(bla); This means: take a sample of 10% of the values and spit the results out to the console. If you are on Canary, you can even do (Math.random()<0.01) && console.clear(); to clear your debug messages once in a while.

Buildings
Building heights are randomized, but they follow certain rules to make the result more realistic and cityscape-like. If the area of a land is big, the building height is kept lower. If it's a small area, but not too small, it could become a skyscraper.
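
Something like this sketch (the thresholds are made up for illustration; the real rules are in the source):


// pick a building height from the area of its land
function buildingHeight(area) {
    if (area > 5000) return 10 + Math.random() * 20;  // big land: low building
    if (area > 500)  return 40 + Math.random() * 160; // mid-sized: maybe a skyscraper
    return 10 + Math.random() * 30;                   // small: modest height
}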

Boid cams
The boid camera simply follows the first boid: based on its velocity, the camera is placed slightly higher than and behind the bird. I wanted to try a spring-damper camera system but opted for a simpler implementation instead: move the camera a factor k closer to the target position every render. In simple code, currentX += (targetX - currentX) * k, where k is a small factor, e.g. 0.1 – this creates a cheap damping/easing effect. The effect is apparent toward the end of the animation, when the camera is slingshot to the birds as the camera mode changes to the boidcams.
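
Spelled out per frame, it is just this (a sketch; the target here would be computed from the lead boid's position and velocity):


// ease the camera a fraction k toward the target every render
var k = 0.1;
camera.position.x += (target.x - camera.position.x) * k;
camera.position.y += (target.y - camera.position.y) * k;
camera.position.z += (target.z - camera.position.z) * k;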

Performance Hacks

Shape triangulation is probably one of the most expensive operations here. In "It came upon", slow computers experienced a short, sharp delay when text was triangulated for the first time (before caching). Over here, it's a greater problem due to the potentially high number of triangulations. Apart from placing a limit on the number of buildings, one way to prevent a big noticeable lag is to use web workers. However, I opted for the old trick of using setTimeouts instead. Let's say there are 100 buildings to triangulate: instead of doing everything inside one event-loop turn, which would cause a big pause, I do setTimeout(buildBuilding, Math.random() * 5000); – based on probability, the 100 buildings get triangulated across 5 seconds, reducing the noticeable pauses. I suppose this is somewhat like the incremental garbage collection techniques newer JavaScript engines employ.
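
In sketch form (buildBuilding and the buildings array are illustrative names):


// spread ~100 expensive triangulations across ~5 seconds instead of
// doing them all in a single event-loop turn
buildings.forEach(function (building) {
    setTimeout(function () { buildBuilding(building); }, Math.random() * 5000);
});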

Another thing I did was to disable matrix calculations with object.matrixAutoUpdate = false; once buildings have finished building and animating.

Music
Pretty awesome music from Walid Feghali. Nothing much else to add.

Audio
I added my Web Audio API experiment of creating wind sounds, to simulate wind during the boidcams. Wind can simply be created from random samples (white noise); I added a lowpass filter and a delay. The amplitude of the wind noise is driven by the vertical direction the boids are flying in. I wanted to add a low-frequency oscillator to make the wind sound more realistic, but I haven't figured that out yet.
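
Here is a minimal sketch of that kind of wind using the standard Web Audio API (filter values are illustrative; the delay mentioned above is omitted):


// white noise pushed through a lowpass filter makes a basic wind bed
var ctx = new AudioContext();
var buffer = ctx.createBuffer(1, ctx.sampleRate * 2, ctx.sampleRate);
var samples = buffer.getChannelData(0);
for (var i = 0; i < samples.length; i++) {
    samples[i] = Math.random() * 2 - 1; // random samples = white noise
}
var source = ctx.createBufferSource();
source.buffer = buffer;
source.loop = true;
var filter = ctx.createBiquadFilter();
filter.type = 'lowpass';
filter.frequency.value = 400; // lower cutoff = softer, more distant wind
source.connect(filter);
filter.connect(ctx.destination);
source.start();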

Future Enhancements
Yes, there are always imperfections. Edge handling could be better, same with triangulation. The boids could do building collision detection. The wind algorithm could be improved. There could be a greater variety of buildings, or some cool collapsing buildings (like in Inception).

Conclusion
So the demo is online at http://jabtunes.com/labs/boidsnbuildings/, the sources are unminified, and you're welcome to poke around.

Going back to the original idea, I still wonder whether I managed to create the mood and feelings I originally had in my head. Here are some photos I shot overlooking Osaka in Japan; maybe you can compare them with the demo and make a judgement – perhaps you might think the demo is closer to a Resident Evil scene instead. ;P

Edit: after watching this short documentary on Hashima Island (where a scene from Skyfall was shot), I think the demo could resemble Hashima's torn buildings too.