Here’s my latest three.js webgl experiment, called “Boids and Buildings”. It’s an experimental 3d procedural city generator that runs in the form of a short demo. Turn up your volume and be prepared to work your CPU (and GPU) a little.
Here’s the story – early last month I was in Japan for a bit of travelling. One thing I found particularly interesting was the architecture. The cities are large and packed with buildings (Edo, now called Tokyo, was once the world’s largest city).
Some time ago, I saw mrdoob’s procedural city generator. We had thought about creating a 3d version of it, but it was only during the trip that I decided I should try working on it. Some of the process was written up in this Google+ post, but I’ll summarize it a little here.
Firstly, the city generator works by building a road which spins off more roads along the way. A road stops when it intersects with another road or reaches the boundary of the land. Another way of looking at this is that the land is split into many “small lands” divided by the roads. My approach was that if I could extract the shape information of each divided land, I could run ExtrudeGeometry to create buildings filling the shape of each land.
The original js implementation detected road intersections by looking up pixel data on the canvas. While I managed to write a version which detects faces based on pixels, detecting edges and vertices that way was more tedious than I thought – unless I could write or use an image-processing library similar to potrace. Instead of doing that, I worked on an alternative intersection detection based on mathematically calculating whether each pair of lines/roads intersected. This is probably slower and takes up more memory, but the points of intersection are retained.
(In this version, the starting and ending points of each road are denoted with red and green dots.)
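For the curious, here’s a minimal sketch of the segment-segment intersection math described above – the names and data layout are illustrative, not lifted from the demo’s source:

```javascript
// Hypothetical sketch: each road is a segment {x1, y1, x2, y2}.
// Solve for parameters t (along a) and u (along b); an intersection
// exists only when both lie within [0, 1].
function segmentIntersection(a, b) {
  const d1x = a.x2 - a.x1, d1y = a.y2 - a.y1;
  const d2x = b.x2 - b.x1, d2y = b.y2 - b.y1;
  const denom = d1x * d2y - d1y * d2x;
  if (denom === 0) return null; // parallel or collinear: no single crossing
  const t = ((b.x1 - a.x1) * d2y - (b.y1 - a.y1) * d2x) / denom;
  const u = ((b.x1 - a.x1) * d1y - (b.y1 - a.y1) * d1x) / denom;
  if (t < 0 || t > 1 || u < 0 || u > 1) return null; // crosses outside segments
  // Return the intersection point so it can be retained as a vertex.
  return { x: a.x1 + t * d1x, y: a.y1 + t * d1y };
}
```

The nice part over pixel lookups is that the returned point is exact and can be stored directly as a vertex of the road network.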
However, some crucial information is still missing – which edges connect which points, and which edges belong to each face/land. Instead of determining this information after processing everything, a half-edge data structure can elegantly compute it on the fly. Demoscene coders might have seen this technique used for mesh generation.
Without going into an in-depth technical explanation, here’s an analogy for how half-edges are employed. Every land is defined by the roads surrounding it, as if you had built fences around the perimeter of the land to enclose the area within. The enclosed area defines a face, and each fence is like a half-edge. Since each road divides the land into 2 sides (left and right), 2 fences are constructed, denoting the lands on either side. If a road is built through an existing land, the fences on that land have to be broken down and connected to each side of the new road’s fences. In code, each new road contains 2 half-edges, and connecting new roads requires creating new split edges and updating the linked half-edge pointers.
(In hindsight, it might have been possible to use half-edges with the original pixel-based collision detection.)
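To make the fence analogy a bit more concrete, here’s a toy half-edge sketch in plain js – illustrative only, as the demo’s actual classes and the pointer surgery for splitting edges are more involved:

```javascript
// Each road contributes two half-edges (twins), one per side of the road.
// Following `next` pointers walks the fence around a single land/face.
function makeHalfEdgePair(from, to) {
  const ab = { origin: from, twin: null, next: null };
  const ba = { origin: to, twin: null, next: null };
  ab.twin = ba;
  ba.twin = ab;
  return [ab, ba];
}

// Collect the perimeter vertices of the face a half-edge borders.
function faceVertices(start) {
  const verts = [];
  let e = start;
  do {
    verts.push(e.origin);
    e = e.next;
  } while (e !== start);
  return verts;
}

// A triangular "land" enclosed by three roads A-B, B-C, C-A.
const [ab, ba] = makeHalfEdgePair('A', 'B');
const [bc, cb] = makeHalfEdgePair('B', 'C');
const [ca, ac] = makeHalfEdgePair('C', 'A');
ab.next = bc; bc.next = ca; ca.next = ab; // the enclosed face
ba.next = ac; ac.next = cb; cb.next = ba; // the outside face
```

Once every face can enumerate its own perimeter like this, each loop of vertices is exactly the shape that gets handed to ExtrudeGeometry.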
Now we’ve got 3d buildings, but the scene lacked the emotions and feelings I was initially thinking of. I thought perhaps I’d need to simulate the view out of a train, but that might require me to simulate train rails, although the stored paths of roads could be used for the procedural motion. As I was already starting to feel weary of this whole experiment, I had another idea – since there’s a boids example in three.js, why not try attaching a camera to a boid? Not only do you get a bird’s eye view, you get camera animation for free! I did a quick mash-up and the effect seemed rather pleasant.
Incidentally, mrdoob named the roads Boids in his code, and I thought “Boids and Buildings” would be a rather fitting name.
Now let me jump around the bits and pieces I can think of to write about.
I abstracted the map generator into a class called Land which takes in parameters, making it reusable for both the 2d and 3d versions. In the splash screen, css3d is used to translate and scale the map generator running in the background.
I wanted to create some text animation on the splash page, with a little of the boid/map-generation feeling. I used mrdoob’s line font from Aaronetrope and experimented with various text animations.
To simplify working with colors, I used random greys for the buildings at the start and ended up with a “black & white” / greyscale direction. For post-processing, I added just the film shader by alteredqualia found in the three.js examples, with a slightly high amount of noise.
Scene control and animation
Since most of the animation is procedural, much of it was driven by code during development. When it was time to have some timeline control, I used Director.js, which I wrote for “itcameupon”. The Director class schedules time-based actions and tweens, and most of the random snippets of animation code were added to it. So, more or less, the animation runs on a fixed schedule except for the randomized parts (eg. the time when roads start building on lands).
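As a rough idea of what such time-based scheduling looks like – this is a toy sketch, not the actual Director.js API:

```javascript
// Toy timeline: actions are registered at absolute times and fired
// once the clock passes them. Director.js also handles tweens; this
// only shows the one-shot action scheduling idea.
class Timeline {
  constructor() { this.actions = []; }
  addAction(time, fn) { this.actions.push({ time, fn, done: false }); }
  update(now) {
    for (const a of this.actions) {
      if (!a.done && now >= a.time) { a.done = true; a.fn(); }
    }
  }
}

const timeline = new Timeline();
const log = [];
timeline.addAction(0, () => log.push('start roads'));
timeline.addAction(5, () => log.push('grow buildings'));
timeline.update(1); // only the first action has fired so far
timeline.update(6); // now both have fired, each exactly once
```

Calling `update` from the render loop with the elapsed time is enough to keep the demo on a fixed schedule while the randomized parts do their own thing.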
This experiment had me dealing with lots of randomness, but thanks to probability distributions, lots of randomness can actually give expected results. I used this fact to help with debugging too. For example, say you would like to quickly dump values to the console in the render loop but you’re afraid of crashing your devtools – you could do this
(Math.random()<0.1) && console.log(bla);
This means: take a sample of 10% of the values and spit the results out to the console. If you are in Canary, you can even do
(Math.random()<0.01) && console.clear();
to clear your debug messages once in a while.
Building heights are randomized, but they follow certain rules to make the result more realistic and cityscape-like. If the area of a land is big, the building is kept low. If it’s a small area, but not too small, it could be a skyscraper.
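A sketch of such a height rule might look like this – the thresholds and ranges here are made up for illustration, not taken from the demo:

```javascript
// Illustrative height rule: wide plots get low blocks, mid-size plots
// may become skyscrapers, tiny slivers stay low. Area is in arbitrary
// map units; all numbers here are invented for the example.
function buildingHeight(area) {
  if (area > 5000) return 10 + Math.random() * 20;  // wide, low blocks
  if (area > 500)  return 20 + Math.random() * 60;  // mid-rise
  if (area > 100)  return 50 + Math.random() * 150; // potential skyscraper
  return 5 + Math.random() * 10;                    // tiny plots stay low
}
```

Biasing the randomness by plot size like this is what keeps the skyline reading as a city rather than uniform noise.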
The boid camera simply follows the first boid: based on its velocity, the camera is placed slightly higher and behind the bird. I wanted to try a spring-damper camera system but opted for a simpler implementation instead – every render, move the camera a factor k closer to the target position. In simple code, currentX += (targetX – currentX) * k, where k is a small factor eg. 0.1 – this creates a cheap damping/easing effect. The effect is apparent toward the end of the animation when the camera slingshots to the birds as the camera mode changes to the boidcams.
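Here’s the damped-follow idea as a small self-contained sketch (simplified – the actual boidcam also offsets the target up and behind the bird using its velocity):

```javascript
// Cheap damped follow: each frame, close a fraction k of the remaining
// distance to the target. Small k = lazy, floaty camera; k = 1 snaps.
const k = 0.1;
const camera = { x: 0, y: 0, z: 0 };

function followTarget(camera, target) {
  camera.x += (target.x - camera.x) * k;
  camera.y += (target.y - camera.y) * k;
  camera.z += (target.z - camera.z) * k;
}
```

Because each step moves a fraction of the *remaining* distance, the motion eases out naturally as the camera approaches the boid – which is exactly the slingshot feel when the boidcam kicks in from far away.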
Another thing I did was to disable matrix calculations with
object.matrixAutoUpdate = false;
once the buildings have finished building and animating.
I added my Web Audio API experiment of creating wind sound to simulate wind during the boidcams. Wind can simply be created from random samples (white noise); I added a lowpass filter and a delay. The amplitude of the wind noise is driven by the vertical direction the boids are flying at. I wanted to add a low-frequency oscillator to make the wind sound more realistic, but I haven’t figured that out yet.
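Here’s a sketch of that wind setup – white noise through a lowpass filter – using the Web Audio API. The node parameters are illustrative, and for brevity this leaves out the delay mentioned above; the audio wiring is guarded so it only runs in a browser:

```javascript
// Fill an array of samples with uniform white noise in [-1, 1).
function fillWhiteNoise(samples) {
  for (let i = 0; i < samples.length; i++) {
    samples[i] = Math.random() * 2 - 1;
  }
  return samples;
}

// Browser-only: loop the noise buffer through a lowpass filter and a
// gain node. Driving gain.gain from the boids' vertical speed gives
// the gusting effect described above.
if (typeof AudioContext !== 'undefined') {
  const ctx = new AudioContext();
  const buffer = ctx.createBuffer(1, ctx.sampleRate * 2, ctx.sampleRate);
  fillWhiteNoise(buffer.getChannelData(0));

  const src = ctx.createBufferSource();
  src.buffer = buffer;
  src.loop = true;

  const lowpass = ctx.createBiquadFilter();
  lowpass.type = 'lowpass';
  lowpass.frequency.value = 400; // muffle the hiss into a wind-like rumble

  const gain = ctx.createGain();
  gain.gain.value = 0.5; // modulate this from the boids' vertical motion

  src.connect(lowpass).connect(gain).connect(ctx.destination);
  src.start();
}
```

Raw white noise sounds like static; it’s the lowpass (and modulating the gain over time) that sells it as wind.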
Yes, there are always imperfections. Edge handling could be better, same with the triangulation. The boids could do building collision detection. The wind algorithm could be improved. There could be more variety of buildings, or maybe some cool collapsing buildings (like in Inception).
So the demo is online at http://jabtunes.com/labs/boidsnbuildings/, the sources are unminified, and you’re welcome to poke around.
Going back to the original idea, I still wonder whether I managed to create the mood and feeling I originally had in my head. Here are some photos I shot overlooking Osaka in Japan; maybe you can compare them with the demo and make a judgement – or perhaps you might think the demo is closer to a Resident Evil scene instead. ;P
edit: after watching this short documentary on Hashima Island (where a scene from Skyfall was set), I think the demo could resemble Hashima’s torn buildings too.