Free Birds

Earlier this year Google announced DevArt, a code and art collaboration with London’s Barbican Centre. Chinmay (a fellow geek in Singapore) and I decided to collaborate on a visual and aural entry for the competition.

The competition attracted a number of quality works, so the short story is that we didn’t win (the long story includes how my last git push failed just before the deadline while I was tethering internet over my mobile in Nepal). This post is, however, about the stuff we explored, learned and created along the way. We also shared some of that at MeetupJS at the Singapore Microsoft office not long ago. Being open is one aspect of the DevArt event too, which encourages sharing the source code, workflow and ideas behind each entry.

I wanted to reuse some of my work on GPU flocking birds for the entry, while making it more interactive with a creative idea. We decided to have a little bird on an iPad whose colors you could customize before freeing it into a bigger canvas of freed birds.

Chinmay, who is passionate about acoustics, took the opportunity to dabble more with the Web Audio API. (He previously created auralizr, a really cool web app that transports you to another place using a convolution filter.) Based on an excellent bird call synthesis article, Chinmay created a JavaScript + Web Audio API library for generating synthesized bird calls.


Go ahead, try it, it’s interactive with a whole bunch of knobs and controls.

As for me, here’s some of the stuff I dabbled in for the entry:

  1. Dug out my old library contact.js for some node.js + websockets interaction.
  2. Mainly used code from the three.js GPGPU flocking example, but upgraded it to use BufferGeometry.
  3. Explored GPGPU readback to JavaScript in order to get the positional information of the birds, to be fed into bird.js for positioning audio (see the sketch after this list).
  4. Made my first use of xgui.js, which uncovered a few issues (since fixed); otherwise it’s a really cool library for prototyping.
  5. Explored more post-processing effects with non-photorealistic rendering; unfortunately the results weren’t good enough within that time frame.
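
As an illustration of point 3, here’s roughly what reading the simulation texture back to JavaScript looks like with today’s three.js API (the render-target and size names are assumptions, not the actual entry’s code):

var SIZE = 32;                                    // e.g. a 32 x 32 simulation texture = 1024 birds
var buffer = new Float32Array(SIZE * SIZE * 4);   // RGBA floats per bird
renderer.readRenderTargetPixels(positionRenderTarget, 0, 0, SIZE, SIZE, buffer);

// bird i now lives at (buffer[i * 4], buffer[i * 4 + 1], buffer[i * 4 + 2]),
// which can drive the panning/volume of its synthesized call on the JS side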

So that’s it for the updates for now. Here are some links.

Also, another entry worth mentioning is Kuafu by yi-wen lin. I thought it shared a couple of similarities with our project and was more polished, but unfortunately it didn’t make it to first place either.

WebGL, GPGPU & Flocking Birds – The 3rd Movement

This is the final section of the 3-part series covering the long journey of adding the interactive WebGL GPGPU flocking bird example to three.js, spanning almost a year (no kidding, looking at some of the timestamps). Part 1 introduces flocking, Part 2 talks about writing WebGL shaders for accelerated simulation, and Part 3 (this post) hopefully shows how I’ve put things together.

(Ok, so I got to learn about a film about Birds after my tweet about the experiment late one night last year)

Just as it felt like forever to refine my flocking experiments, it felt just as long writing this. However, watching Robert Hodgin’s (aka flight404) inspiring NVScene 2014 session “Interactive Simulations (where nobody has to die)” (where he covered flocking too) was a motivation booster.

So much so that I would recommend watching that video (starting at 25:00 for flocking, or the entire thing if you have the time) over reading this. There was so much in his talk I could relate to, like flocking on the GPU, and so much more I could learn from. No doubt Robert has always been one of my inspirations.

Now back to my story: we were playing with GPGPU-accelerated simulated particles in three.js about 2 years ago.

In pseudocode, here’s what happens:

/* Particle position vertex shader */
// As usual, rendering a full-screen quad
/* Particle position fragment shader */
pos = readTexture(previousPositionTexture)
pos += velocity // update the particle's position
color = pos // write the new position to the framebuffer
/* Particle vertex shader */
pos = readTexture(currentPositionTexture)
gl_Position = pos
/* Particle fragment shader */
// As per usual.

One year later, I decided to experiment with particles interacting with each other (using the flocking algorithm).

For a simple GPGPU particle simulation, you need 2 textures (one for the current positions and one for the previous, since reading from and writing to the same texture could cause corruption). In the flocking simulation, 4 textures/render targets are used (currentPositions, previousPositions, currentVelocities, previousVelocities).

There is one render pass for updating velocities (based on the flocking rules):

/* Velocities Update Fragment Shader */
currentPosition = readTexture(positionTexture)
currentVelocity = readTexture(velocityTexture)

for every other particle,
    otherPosition = readTexture(positionTexture)
    otherVelocity = readTexture(velocityTexture)
    if otherPosition is too close
        currentVelocity += oppositeDirection // Separation
    if otherPosition is near
        currentVelocity += followDirection // Alignment (steer)
    if otherPosition is far
       currentVelocity += towardsDirection // Cohesion

color = currentVelocity
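
A sketch of that velocity pass in actual GLSL could look like this (the 32×32 texture size, distance zones and weights are made-up constants, not the values used in the three.js example):

// velocity-update fragment shader: compare this bird with every other bird
uniform sampler2D texturePosition;
uniform sampler2D textureVelocity;

const float SIZE = 32.0;          // 32 x 32 texture = 1024 birds
const float SEPARATION = 20.0;
const float ALIGNMENT  = 40.0;
const float COHESION   = 80.0;

void main() {
    vec2 uv = gl_FragCoord.xy / SIZE;
    vec3 selfPosition = texture2D(texturePosition, uv).xyz;
    vec3 velocity     = texture2D(textureVelocity, uv).xyz;

    for (float y = 0.5; y < SIZE; y++) {
        for (float x = 0.5; x < SIZE; x++) {
            vec2 ref = vec2(x, y) / SIZE;
            vec3 dir = texture2D(texturePosition, ref).xyz - selfPosition;
            float dist = length(dir);
            if (dist < 0.0001) continue;                     // skip self

            if (dist < SEPARATION) {
                velocity -= normalize(dir) * 0.05;           // separation
            } else if (dist < ALIGNMENT) {
                vec3 otherVelocity = texture2D(textureVelocity, ref).xyz;
                velocity += normalize(otherVelocity) * 0.05; // alignment
            } else if (dist < COHESION) {
                velocity += normalize(dir) * 0.005;          // cohesion
            }
        }
    }

    gl_FragColor = vec4(velocity, 1.0);
}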

Updating positions is pretty simple (almost identical to the plain particle simulation):

/* Particle position fragment shader */
pos = readTexture(positionTexture)
pos += readTexture(velocityTexture) 
color = pos

How well does this work? Pretty well actually: 60fps for 1024 particles. If we were to run just the JS code (no rendering) of the three.js canvas birds example, here are the kinds of frame rates you might get (although it certainly could be optimized):

200 birds - 60fps
400 birds - 40fps
600 birds - 30fps
800 birds - 20fps
1000 birds - 10fps
2000 birds - 2fps

With the GPGPU version, I can get about 60fps for 4096 particles. Performance starts dropping after that (depending on your graphics card too) without further tricks, possibly due to the bottleneck of texture fetches.

The next challenge was rendering something more than just billboarded particles. My first attempt at calculating the transformations was feeble, so the results weren’t great.

So I left the matter alone until half a year later, when I suggested it was time to add a GPGPU example to three.js and revisited the issue, thanks to WestLangley’s persuasion.

Iñigo Quílez had also just written an article about avoiding trigonometry for orienting objects, but I failed to fully comprehend the rotation code and still ended up lost.

I finally decided I should try understanding and learning about matrices. The timing was also right, because a really good talk, “Making WebGL Dance”, which Steven Wittens gave at JSConf US, had just been made available online. Near the 12-minute mark, he gives a really good primer on linear algebra and matrices. The takeaway is this: matrices can represent a rotation in orientation (as well as scale and translation), and matrices can be multiplied together.

With mrdoob’s pointer to how the rotations were done in the canvas birds example, I managed to reconstruct the matrices to perform similar rotations in shader code and three.js code.

I’ve also learnt along the way that matrices in GLSL are column-major: they are typically written as mat3(col1, col2, col3) instead of mat3(row1, row2, row3).
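
For example, a bird can be oriented along its velocity by building a mat3 whose columns are the basis vectors of the new frame (a rough sketch, not the exact matrices used in the example; it also ignores the degenerate case where the velocity is parallel to the up vector):

// inside the bird vertex shader
vec3 forward = normalize(velocity);
vec3 right   = normalize(cross(forward, vec3(0.0, 1.0, 0.0)));
vec3 up      = cross(right, forward);
mat3 orientation = mat3(right, up, forward);   // each vec3 is a COLUMN
vec3 rotated = orientation * position;         // rotate the bird's vertex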

And there we have it.

There was also one more final touch to the flocking shader which made things more interesting. Credit to Robert again for writing the flocking simulation tutorial for Cinder, from which I adopted his zone-based flocking algorithm and easing functions. He covered this in his NVScene talk too.

Finally, some nice words from Craig Reynolds, the creator of the original “boids” program in 1987, who has seen the three.js GPGPU flocking experiment.


It has been a long journey, but this is certainly not the end of it. I’ve learnt much along the way, and hopefully I’ve managed to share some of it. You may follow me on Twitter for updates on my flocking experiments.

WebGL, GPGPU & Flocking Birds Part II – Shaders

In the first part of “WebGL, GPGPU & Flocking Birds”, I introduced flocking. I mentioned briefly that simulating flocking can be a computationally intensive task.

The complexity of such simulations in Big O notation is O(n^2), since each bird or object has to be compared with every other object. For example, 10 objects require 100 calculations, but 100 objects require 10,000 calculations. You can quickly see how simulating even a couple of thousand objects can turn a fast modern machine to a crawl, especially with JavaScript alone.

It’s possible to speed this up using various tricks and techniques (e.g. more efficient threading, spatial data structures, etc.), but if you get greedy for more objects, you quickly hit yet another brick wall.

So in this second part, I’ll discuss the role of WebGL and GPGPU, which I used for my flocking bird experiments. It may not be the best or fastest way to run a flocking simulation, but it was interesting to experiment with WebGL doing some of the heavy lifting of the flocking calculations.

WebGL

WebGL can be seen as the cool “new kid on the block” (with its many interactive demos), and one may also consider WebGL “just a 2d API”.

I think another way to look at WebGL is as an interface, a way to tap into your powerful graphics unit. It’s like learning how to use a forklift to lift heavy loads for you.

Intro to GPUs and WebGL Shaders

For a long time, I knew computers had a graphics card or unit, but I never really understood what it was until recently. Simply put, the GPU (Graphics Processing Unit) is a specialized piece of hardware for processing graphics efficiently and quickly. (More on CUDA parallel programming on Udacity, if you’re interested.)

The design of a GPU is also somewhat different from a CPU’s. For one, a GPU can have thousands of cores compared to the dozen a CPU may have. While GPU cores may run at a lower clock rate, their massive parallel throughput can be higher than what a CPU can deliver.

A GPU contains vertex and pixel processors. Shaders are the code used to program them, to perform shading of course: coloring, lighting, and post-processing of images.

A shader program has a linked vertex shader and a pixel (aka fragment) shader. In a simple example of drawing a triangle, the vertex shader calculates the coordinates of the 3 corners of the triangle. The covered area in between is then passed on to the pixel shader, which paints each pixel inside the triangle.
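
Here is about the smallest possible pair of shaders for that triangle (plain WebGL GLSL, nothing three.js-specific):

// vertex shader: place each of the 3 corners
attribute vec3 position;
void main() {
    gl_Position = vec4(position, 1.0);
}

// fragment shader: paint every covered pixel red
precision mediump float;
void main() {
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}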

Some friends have asked me what language is used to program WebGL shaders. They are written in a language called GLSL (the OpenGL Shading Language), a C-like language also used for OpenGL shaders. I think if you understand JS, GLSL shouldn’t be too difficult to pick up.

Knowing that WebGL shaders are what run tasks on the GPU, we have a basic key to unlocking the powerful computation capabilities GPUs have, even though they are primarily meant for graphics. Which brings us to the exciting part: GPGPU, “general-purpose computing on graphics processing units”.

Exploring GPGPU with WebGL

WebGL, instead of only rendering to the screen, has the ability to render into its own memory. These rendered bitmaps in GPU memory are referred to as Frame Buffer Objects (FBOs), and the process is sometimes simply referred to as render-to-texture (RTT).

This may start to sound confusing, but what you need to know is that the output of a shader can be a texture, and that texture can be an input to another shader. One example is the effect of rendering a scene on a TV that is itself part of a scene inside a room.

Frame buffers are also commonly used for post-processing effects. (Steven Wittens has a library for RTT with three.js.)

Since these in-memory textures or FBOs reside in the GPU’s memory, reading or writing to them is very fast, compared to uploading a texture from the CPU’s memory (in our context, the JavaScript side).

Now how do we start making use of (or abusing) this for GPGPU? First, consider what a frame buffer could possibly represent. We know that a render texture is basically a quad / 2D array holding RGB(A) values, so we could decide that a texture represents a set of particles, with each pixel holding one particle’s position.

For each pixel we can assign the RGB channels to the positional components (red = x position, green = y position, blue = z position). A color channel normally has a limited range of 0–255, but if we enable floating-point texture support, each channel can hold very large positive or negative values (though still with limited precision). Some devices (like many mobile ones) do not support floating-point textures, so one has to decide whether to drop support for those devices or pack a number across a few channels to simulate a larger-range value type.
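
With three.js, filling such a floating-point texture from JavaScript might look like this (a sketch; the sizes and ranges are arbitrary, and it assumes the OES_texture_float extension is available):

var SIZE = 64;                                 // 64 x 64 = 4096 particles
var data = new Float32Array(SIZE * SIZE * 4);  // RGBA per particle
for (var i = 0; i < SIZE * SIZE; i++) {
    data[i * 4 + 0] = Math.random() * 100 - 50;  // R = x
    data[i * 4 + 1] = Math.random() * 100 - 50;  // G = y
    data[i * 4 + 2] = Math.random() * 100 - 50;  // B = z
    data[i * 4 + 3] = 1;                         // A, unused here
}
var positionTexture = new THREE.DataTexture(data, SIZE, SIZE, THREE.RGBAFormat, THREE.FloatType);
positionTexture.needsUpdate = true;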

Now we can simulate particle positions in the fragment shader instead of simulating particles with JavaScript. Such a program may be more difficult to debug, but the number of particles can be much higher (like 1 million) and the CPU is freed up.

The next step after the simulation phase is to display the GPU-simulated particles on screen. The approach is to render the particles as normal, except that the vertex shader looks up each particle’s position stored in the texture. It’s like adding just a little more code to your vertex shader to read the position from the texture and “hijack” the position of the vertex you’re about to render. This requires an extension for texture lookups in the vertex shader, but it’s likely supported wherever floating-point textures are.
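
In shader terms, the “hijacking” is just a texture lookup at the top of the vertex shader (a sketch; the reference attribute pointing each vertex at its texel is an assumption):

// particle vertex shader (three.js supplies projectionMatrix and modelViewMatrix)
uniform sampler2D texturePosition;
attribute vec2 reference;            // uv of this particle's texel
void main() {
    vec3 pos = texture2D(texturePosition, reference).xyz;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
    gl_PointSize = 2.0;
}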

Hopefully by now this gives you a little idea of how “GPGPU” can be done in WebGL. Notice I said with WebGL, because you can likely perform GPGPU in other ways (e.g. with WebCL) using a different approach. (Well, someone wrote a library called WebCLGPU, a WebGL library which emulates some kind of WebCL interface, but I’ll leave you to explore that.)

(Some trivia: in fact, this whole GPGPU-with-WebGL business was really confusing to me at first, and I did not know what it was supposed to be called. While one of the earliest articles about GPGPU with WebGL referred to this technique as “FBO simulations”, many still refer to it as GPGPU.

What’s funny is that initially I thought GP-GPU (with its repetitive acronym) described the “ping-pong”-ing of textures in the graphics card, but well… there may be some truth in that. Two textures are typically used and swapped when simulating positions, because reading from and writing to the same texture at the same time is not recommended.)
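
In three.js, that ping-pong swap boils down to something like this (written against today’s API; the variable names are mine, not from a specific example):

var rtA = new THREE.WebGLRenderTarget(SIZE, SIZE, { type: THREE.FloatType });
var rtB = rtA.clone();

function simulateStep() {
    // read from rtA, write into rtB
    simulationMaterial.uniforms.texturePosition.value = rtA.texture;
    renderer.setRenderTarget(rtB);
    renderer.render(simulationScene, simulationCamera);
    renderer.setRenderTarget(null);

    // swap so the next frame reads what we just wrote
    var tmp = rtA; rtA = rtB; rtB = tmp;
}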

Exploration and Experiments

Lastly, one point about exploration and experiments.

GPGPU particle simulations are getting common these days (there are probably many on Chrome Experiments), and I’ve also worked on some in the past. Not that they are boring (one project I found interesting is the particle shader toy), but I think there are many less explored and untapped areas of GPGPU. For example, it could be used for fluid and physics simulations, and for applications such as terrain sculpting, cloth and hair simulation, etc.

As for me, I started playing around with flocking. More about that in the 3rd and final part of this series.

WebGL, GPGPU, and Flocking – Part 1

One of my latest contributions to three.js is a WebGL bird flocking example simulated on the GPU. Initially I wanted to sum up my experiences in a single blog post, “WebGL, GPGPU, and Flocking”, but that became too difficult to write, and possibly too much to read in one go. So I opted to split it into 3 parts: the first on flocking in general, the second on WebGL and GPGPU, and the third putting them all together. So for part 1, we’ll start with flocking.

So what is flocking behavior? From Wikipedia, it is

the behavior exhibited when a group of birds, called a flock, are foraging or in flight. There are parallels with the shoaling behavior of fish, the swarming behavior of insects, and herd behavior of land animals.

So why has flocking behavior caught my attention along the way? It is an interesting topic technically: simulating such behavior may require intensive computation, which poses interesting challenges and solutions. It is also interesting for its use in creating “artificial life”: in games or interactive media, flocking activity can be used to spice up the liveliness of an environment. I love it for an additional reason: it exhibits the beauty found in nature. Even if I haven’t had the opportunity to enjoy such displays at length in real life, I could easily spend hours watching beautiful flocking videos on YouTube or Vimeo.

You may have noticed that I’ve used flocking birds previously in the “Boids and Buildings” demo. The code I used for that is based on the version included in the canvas flocking birds example of three.js (which was based on another Processing example), a variant of which was probably also used for “The Wilderness Downtown” and “3 Dreams of Black”.

However, to get closer to the raw metal, one can get one’s hands dirty implementing the flocking algorithm. If you can already write a simple particle system (with attraction and repulsion), it’s not that difficult to learn about and add the 3 simple rules of flocking:

  1. Separation – steer away from others who are too close
  2. Alignment – steer towards where others are moving to
  3. Cohesion – steer towards where others are

I usually find it useful to try something new to me in 2D rather than 3D: it’s easier to debug when things go wrong, and to make sure new concepts are understood. So I started writing a simple 2D implementation with canvas.
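
Before looking at the pen below, here’s roughly the shape of the per-boid update with the 3 rules (a simplified sketch; the distances and weights are made up, and this isn’t the exact code in the pen):

function updateBoid(boid, boids) {
    var sepX = 0, sepY = 0, aliX = 0, aliY = 0, cohX = 0, cohY = 0, count = 0;

    boids.forEach(function (other) {
        if (other === boid) return;
        var dx = other.x - boid.x, dy = other.y - boid.y;
        var dist = Math.sqrt(dx * dx + dy * dy);
        if (dist === 0 || dist > 100) return;                    // outside the neighbourhood

        count++;
        if (dist < 25) { sepX -= dx / dist; sepY -= dy / dist; } // 1. separation
        aliX += other.vx; aliY += other.vy;                      // 2. alignment
        cohX += dx; cohY += dy;                                  // 3. cohesion
    });

    if (count > 0) {
        boid.vx += sepX * 0.05 + (aliX / count) * 0.01 + (cohX / count) * 0.002;
        boid.vy += sepY * 0.05 + (aliY / count) * 0.01 + (cohY / count) * 0.002;
    }
    boid.x += boid.vx;
    boid.y += boid.vy;
}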

See the Pen Flocking Test by zz85 (@zz85) on CodePen

With this 2D exercise done, it would be easier to extend to 3D, and then move into shaders (GPGPU).

If you are instead looking for a guide or tutorial to follow along with, you might find that this post isn’t very helpful; there are many resources, code samples and tutorials on this topic you can find online. With that, I’ll end this part for now and leave you with a couple of links where you can learn more about flocking. The next part in this series will be on WebGL and GPGPU.

Links:
http://natureofcode.com/book/chapter-6-autonomous-agents/ Autonomous Agents from Nature of code book.
http://www.red3d.com/cwr/boids/ Craig Reynolds’s page on boids. He’s responsible for the original “boids” program and the original 1987 paper “Flocks, Herds, and Schools: A Distributed Behavioral Model”.
http://icouzin.princeton.edu/iain-couzin-on-wnyc-radiolab/ A really nice talk (video) on collective animal behavior. This was recommended by Robert Hodgin who does fantastic work with flocking behavior too.
http://www.wired.com/wiredscience/2011/11/starling-flock/ An article on Wired if you just want something easy to read

Over The Hills JS1K Postmortem

This post is a little reflection on my experience with JS1K, to which I submitted a little game. I spoke a little about it at MeetupJS Singapore (on the day the Js1K’13 results were published) and decided to follow up with this blog post. Unfortunately, it has been collecting dust in my drafts for months, and I thought it’s about time to flush some of my old buffers.

Here are the slides from my presentation, which covered a little about what JS1K is, some byte-saving tricks and techniques I used, my workflow, and links.

Now if you really want to watch a good talk on a postmortem of a game in which every byte-squeezing technique is used, I suggest you spend some time watching the Pitfall Classic Postmortem with David Crane panel at GDC 2011 (Atari 2600) instead of reading this.

In case you’re still interested in what happened, here’s how it started. A couple of days before the JS1K deadline, there was buzz on Slashdot and the twittersphere about some crazy JS1K entries, and I thought maybe I should join the craze. I thought of creating a game where you jump/ski/ride along side-scrolling terrain. My colleagues pointed me to the game “Tiny Wings”, which was apparently a hit on iOS devices.

I then watched its promotional video and was mesmerized by it.

It exhibited such beautiful artwork, music, procedural generation, and addictive gameplay that I decided I should try making a clone of it. It turned out to be a ball jumping “over the hills”, which became the title of my entry. Now to recap what went great, and what went horribly wrong.

Great
- I attempted the js1k challenge, which is indeed a challenge
- I attempted writing a game!
- I learnt crazy byte-saving techniques and a little more about javascript
- I managed to submit an entry under 1024 bytes
- It somewhat didn’t look too bad!

BAD!
- Trying to write a game for the first time: not the best idea
- Trying to complete the js1k game entry in less than a few days: bad idea! (Winners usually spend weekends optimizing their code)
- Not knowing the best techniques to write and structure game code
- Not using the most efficient byte-saving code, plus some premature optimizations
- Not much gameplay

In hindsight, I should have created the game the normal way first, and spent much more time porting it to a byte-starved version. At least there were things I experienced and learnt, and a few techniques I applied. Here’s some additional stuff not mentioned in my presentation.

1. Terrain generation
I used a formula/function-based approach with cosines, returning a y for a given x position. I thought it was easier, but in retrospect, height maps with interpolation probably give better control and more interesting terrain.

Bouncing Beholder (one of the previous winning entries) and this Tiny Wings tutorial actually use a height-map approach.
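
A cosine-based terrain function is only a few bytes; something along these lines (the constants here are made up for illustration):

function terrainHeight(x) {          // height of the ground at horizontal position x
    return 160 +
        40 * Math.cos(x * 0.010) +
        25 * Math.cos(x * 0.023 + 1.7) +
        10 * Math.cos(x * 0.049 + 0.3);
}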

2. Terrain drawing
I wanted to emulate the beautiful striped terrain in Tiny Wings. I thought about mapping textures using canvas paths, custom canvas rect painting with rotations, cropping, clipping, and gradient patterns, but they were all too difficult. So I went for a pixel-based approach, which was simple, almost like a pixel shader, but slow on canvas 2D (in theory it would probably be fast on WebGL). I reduced the game canvas to 480×320, the original iPhone resolution, for speed improvements, and also utilized typed arrays, which reduced bytes and brought about a 50% improvement in FPS.
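
The pixel loop is essentially a software fragment shader over an ImageData buffer; a sketch (terrainHeight and scrollX are hypothetical helpers, not the entry’s actual code):

var image = ctx.createImageData(480, 320);
var data = image.data;                              // a Uint8ClampedArray
for (var y = 0; y < 320; y++) {
    for (var x = 0; x < 480; x++) {
        var ground = 320 - terrainHeight(x + scrollX);  // ground line in canvas coords
        var sky = y < ground;
        var i = (y * 480 + x) * 4;
        data[i]     = sky ? 135 : 80;               // R
        data[i + 1] = sky ? 206 : 160;              // G
        data[i + 2] = sky ? 235 : 60;               // B
        data[i + 3] = 255;                          // A
    }
}
ctx.putImageData(image, 0, 0);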

3. Physics
Both “Angry Birds” and “Tiny Wings” use the famed Box2D as their physics engine. For a 1k game, I created a simplified Verlet-based physics engine, which does seem to work pretty well.
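
The core of a Verlet-style integrator fits in a handful of lines; a sketch under the same assumptions (the gravity constant and terrainHeight are made up):

var x = 100, y = 50, oldX = 99, oldY = 50;      // the previous position encodes the velocity
function step() {
    var vx = x - oldX, vy = y - oldY;
    oldX = x; oldY = y;
    x += vx;                                     // keep moving in the same direction
    y += vy + 0.5;                               // gravity pulls down (canvas y grows downward)
    var ground = 320 - terrainHeight(x);
    if (y > ground) { y = ground; }              // simple ground constraint
}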

To sum up, the js1k experience was fun, crazy and educational, and anyone with sufficient interest should set aside time to dive into this.

Now again for references, links and resources

** Good To Read ** How to create Tiny Wings like game – http://www.raywenderlich.com/3888/how-to-create-a-game-like-tiny-wings-part-1

JS1K Site – http://js1k.com/
My Js1K Tools – https://github.com/zz85/js1k-tools
Understanding JSCrush – http://blog.nikhilism.com/2012/04/demystifying-jscrush.html
140 Byte Saving Techniques – https://github.com/jed/140bytes/wiki/Byte-saving-techniques
Javascript Golfing – http://www.claudiocc.com/javascript-golfing/
Blog of the creator of Fubree – http://www.romancortes.com/blog/

& WAY PLENTY MORE STUFF TO READ

p.s. I’ll probably upload an improved version of this sometime, somewhere.

Three.js Bokeh Shader

(TL;DR? – check out the new three.js webgl Bokeh Shader here)

Bokeh is a word used by photographers to describe the aesthetic out-of-focus or blur properties of a lens or a photo. Depth of field (DOF) is the range of distances over which objects appear sharp in a photo. So while “DOF” is more measurable and “bokeh” subjective, one might say there’s more bokeh in a picture with a shallower DOF, because the background and foreground (if there’s a subject) are usually de-emphasized by being blurred when thrown out of focus.

Bokeh seems to be derived from the Japanese word “boke” (暈け): apart from meaning blur, it can also mean senile, stupid, unaware, or clueless. This is interesting because in Singlish (Singapore’s flavor of English), “blur” has that same negative meaning when applied to a person (it probably comes from the literal meaning of the opposite of sharp). And now you might know the non-graphical meaning of the word blur in my Twitter id (BlurSpline).

Here’s a photo of the Kinetic Rain I took at Changi Airport Terminal 1. If you like kinetic structures especially, you should check out the official videos here and here (in which there’s much use of bokeh too).

I remember how little I knew about 3D programming when I first tried three.js 2 years ago. I wondered whether camera.near and camera.far were the way to define when objects in the scene get blurred, as they approach the far or near points.

It turns out of course that I was really wrong, since these values are used for clipping (improving performance by not rendering objects outside the view frustum). I naively thought three.js worked like a real-life camera and that I would be able to create cinematic-looking scenes that way. Someone helpful on the three.js IRC channel then pointed me to the post-processing DOF example done by alteredqualia, who ported the original bokeh shader written by Martins Upitis.

Fast-forward to the present: we have seen that shader used in ROME, Martins Upitis has updated his bokeh shader to make it more realistic, and I have attempted to port it back to three.js/WebGL.


With focus debug turned on


Testing it in a scene


The example added to three.js with glitters.

So to copy what martinsh says the new shader does, it offers:
• variable sample count to increase quality/performance
• option to blur depth buffer to reduce hard edges
• option to dither the samples with noise or pattern
• bokeh chromatic aberration/fringing
• bokeh bias to bring out bokeh edges
• image thresholding to bring out highlights when image is out of focus
• pentagonal bokeh shape (experimental)
• bokeh vignetting at screen edges

The new three.js example also demonstrates how object picking can be used, and interpolated, to set the focal distance. More detailed comments about the parameters were also written on GitHub.
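
A sketch of how picking can drive the focus, using today’s three.js Raycaster (the variable names are mine, and the actual example wires this up differently):

var raycaster = new THREE.Raycaster();
var mouse = new THREE.Vector2();
var focalDistance = 100;

window.addEventListener('mousemove', function (e) {
    mouse.x = (e.clientX / window.innerWidth) * 2 - 1;
    mouse.y = -(e.clientY / window.innerHeight) * 2 + 1;
});

function updateFocus() {
    raycaster.setFromCamera(mouse, camera);
    var hits = raycaster.intersectObjects(scene.children, true);
    if (hits.length) {
        // ease toward the picked distance instead of snapping
        focalDistance += (hits[0].distance - focalDistance) * 0.1;
    }
    // focalDistance is then fed into the bokeh shader's focus uniform
}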

Of course the shader is not perfect, as DOF is not that simple (there are quite a few in-depth Graphics Gems articles on it). Much of it is post-processing smoke and mirrors, the way it is usually done with rasterization, as compared to path tracing and the like. Yet I think it’s a great addition to have in WebGL, just as we have seen DOF used in the Crytek and Nvidia demos or in other high-end games. (There was also a cool video of a Minecraft mod using that DOF shader, but it seemed to have been removed when I recently looked for it.)

I would love to see tasteful use of bokeh sometime, not just because it feels cinematic or has been widely used in photography; I think it’s also more natural, given that’s how our eyes work with our brains (more details here).

Finally, it seems the deadline for the current js1k contest is just hours away, which means I gotta head off to do some cranking and crunching. Maybe more on that in a later post! :D

Traveling with Animal Friends Around the World

It’s a new year! Oh, what a blatant lie, but still, it’s my first post of 2013!

You know the feeling: there’s so much to do, even more that you wish to do, but so little that you actually do.

Since resuming blogging carries great inertia, I decided to share some of my travel adventures around the world (LA, Japan, Taiwan) with my animal friends (guess who they are!) before writing about more technical stuff.

I guess it all started on a trip about 4 years ago. I was working in the San Francisco Bay Area then, and with a cheap pair of air tickets, I flew down to Los Angeles for a quick getaway alone, until I met Piglet.

LA’s metro

Downtown LA


Walt Disney Concert Hall


Aquarium of the Pacific


Piglet and Jellyfish


Then it was the period of the H1N1 swine influenza, and so the saying that “pigs fly” came true.

Next stop, Japan with Doraemon, the Japanese robotic cat from the future.


Doraemon on the plane


Doraemon in Tokyo’s JR train


Doraemon and the crowded Kaminarimon in Asakusa


Doraemon in Akihabara Electric Town


Doraemon watches sunrise at Atami


Doraemon and the Shinkansen


Doraemon and a big Rubik’s cube in Osaka


Doraemon in Kaleidoscope in Osaka’s Science Museum


Doraemon goes Kyoto cycling

Doraemon at Ryōan-ji, a Zen temple famous for its dry garden. Here Doraemon is enlightened that you need to meditate about nothing to be enlightened about nothing.


At the beautiful Hirosawa lake


Doraemon in Arashiyama bamboo forest


The skies tear as Doraemon waves goodbye to Japan

Now for the next destination, Taiwan.

Angry bird at the Sun Moon lake.


At the beautiful mountain side of Cingjing.


At the sheep farm


At Taroko Gorge (thanks to my friend yeda for the correction from “Taroko” :)


Throwing Angry bird in a cable car up Maokong, Taipei.


Angry bird hikes up Jinguashi.


Angry Bird meets his enemies, at the local pig farm.


Angry bird launches into space, returning to my home in Singapore.

Where to, who shall I go with next? Thanks for reading, and hope you enjoyed this little adventure.

THE END

p.s. a technical bit: this is the first time I experimented with writing a post in zenpen. It’s a pretty cool tool (since I usually post my photos on Facebook, I just drag them in). I then used $(‘section’).innerHTML to extract the HTML code and cleaned up some attributes before pasting it into my blog.

JSCampAsia

It’s been an awesome 2 days at JSCampAsia: great talks, great people, fun events.


Group picture from above!

It was about 2 years ago that I first watched Ryan Dahl’s node.js presentation from jsconf online. I subsequently found other quality jsconf talks, so when I learnt that JSCampAsia was going to be held in Singapore, I decided it was an event I should attend.

When Thomas Gorissen, the JSCamp organizer, asked if I would like to conduct a workshop on three.js, I was really excited and agreed. I later learnt that Ricardo (@mrdoob) had mentioned me earlier at JSConf.EU, and it was a pity he couldn’t come, as he was attending dotJS in Paris.


Mozilla’s Michal Budzynski asking Google’s Eric Bidelman a question. JSCamp’s Thomas on the right

It was my first time giving a talk at such a scale, so it was really nice to have some attendees come up to tell me that they enjoyed the workshop or found three.js interesting. (I’m sure no one was nasty enough to tell me if I was bad, so I’ll probably do some self-criticism when I grab a copy of my video.)


With one of the JSCamps’s helpers

Apart from the excitement of speaking, it was just pleasurable to listen to the talks by speakers coming from around the world: some really interesting, some informative, some engaging and enjoyable. Some I missed, and I’ll be catching them once the videos get released.


One of the best-loved talks, about replacing HTML & CSS with JS, by Jed Schmidt

The last awesome thing about JSCamp was meeting people, even if some only really briefly. To the audience members who said hi, said thanks, or even asked questions: I’m thankful and appreciate that. In turn, I probably asked other speakers some difficult or silly questions.

On returning home (at the northern end of this small island), my body decided it deserved a day of hibernation.

Overall, JSCampAsia was a great conference, and to be a small part of it is my honor and pleasure. Hope to see you all again!

Some links
Official Blog – http://blog.jscamp.asia/
Collection of Links – https://gist.github.com/4167535

Handout for 1st session – http://zop.im/start-threejs
Slides for 1st session – http://zop.im/start-threejs-slides
Sample code – http://zop.im/start-threejs-code
Slides for 2nd session – http://zop.im/start-threejs2

Some photos


Speakers lunch


A picture of the conference hall


Jan from Amsterdam, who spoke on dependency injection in “The Architect Way”, on our way back.


With my colleagues from zopim who attended this conference too


The only bad thing about sitting in the front is the “giraffe neck”.


Panel discussion – pretty sure @divya isn’t looking at her mobile phone this time.


This is me on my 2nd session showing a visualization of a photo with three.js


Everyone getting ready for the group photo!


Everyone with my Jelly Bean’s panorama shot.


Shim Sangmin from Korea, creator of Collie. He has great slides on High Performance Mobile Web Game Development in HTML5. He brought lots of kaya jam home too :)

Making of Boids and Buildings

Here’s my latest three.js WebGL experiment, called “Boids and Buildings”. It’s an experimental 3D procedural city generator that runs in the form of a short demo. Turn up your volume and be prepared to overclock your CPU (& GPU) a little.

Here’s the story: early last month I was in Japan for a little travelling. One thing I found interesting was the architecture. The cities are large and packed with buildings (Edo, now called Tokyo, was once the world’s largest city).


(The huge billboards are at the famous Dotonbori in Osaka; the bottom 2 photos are of houses shot in Kyoto.)

Some time ago, I saw mrdoob’s procedural city generator. We had thought about creating a 3D version of it, but only during the trip did I decide I should try working on it. Some of the process was written up in this Google+ post, but I’ll summarize it a little here.

Firstly, the city generator works by building a road which spins off more roads along the way. A road stops when it intersects another road or reaches the boundary of the land. Another way of looking at this is that the land is split into many “small lands” divided by the roads. My approach was that if I could extract the shape information of each divided land, I could run ExtrudeGeometry to create buildings filling the shape of each land.

The original JS implementation of road intersections worked by looking up pixel data on the canvas. While I managed to write a version which detects faces based on pixels, detecting edges and vertices was more tedious than I thought, unless I wrote or used an image-processing library similar to potrace. Instead of doing that, I worked on an alternative intersection detection based on mathematically calculating whether each pair of lines/roads intersected. This is probably slower and takes up more memory, but the points of intersection are retained.
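
The math is the standard parametric segment-segment test; roughly (a sketch, not the demo’s exact code):

function intersectRoads(p, p2, q, q2) {
    var rX = p2.x - p.x, rY = p2.y - p.y;        // direction of road 1
    var sX = q2.x - q.x, sY = q2.y - q.y;        // direction of road 2
    var denom = rX * sY - rY * sX;
    if (denom === 0) return null;                // parallel roads never meet
    var t = ((q.x - p.x) * sY - (q.y - p.y) * sX) / denom;
    var u = ((q.x - p.x) * rY - (q.y - p.y) * rX) / denom;
    if (t < 0 || t > 1 || u < 0 || u > 1) return null;
    return { x: p.x + t * rX, y: p.y + t * rY }; // the retained intersection point
}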


(Here this version marks the starting and ending points of each road with red and green dots.)

However, some serious information is still missing: which edges connect the points, and which edges belong to each face/land. Instead of determining this information after processing everything, half-edge data structures can elegantly compute it on the fly. Demoscene coders might have seen this technique used for mesh generation.

Without going into an in-depth technical explanation, here’s an analogy for how half-edges are employed. Every land is defined by the roads surrounding it, and you build fences around the perimeter of the land to enclose the area within. The enclosed area defines a face, and each fence is like a half-edge. Since each road divides the land into 2 sides (left and right), 2 fences are constructed, denoting the land on each side. If a road is built through an existing land, the fences of that land have to be broken down and connected to each side of the new road’s fences. In code, each new road contains 2 half-edges, and connecting new roads requires creating new split edges and updating the linked half-edge pointers.
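
In code, a half-edge record can be as small as a few pointers; a minimal sketch (the field names are mine, the demo’s actual structure differs):

function HalfEdge(vertex) {
    this.vertex = vertex;   // the point this fence leads to
    this.twin   = null;     // the fence on the other side of the same road
    this.next   = null;     // the next fence walking around the same land
    this.face   = null;     // the land this fence encloses
}

// walking the fences around one land yields the outline for ExtrudeGeometry
function collectLandOutline(startEdge) {
    var points = [], e = startEdge;
    do {
        points.push(e.vertex);
        e = e.next;
    } while (e !== startEdge);
    return points;
}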

With that a 2D version…
2d generator

… and a 3D version was done.
3D

(In hindsight, it might have been possible to use half-edges with the original pixel-based collision detection.)

Now we’ve got 3D buildings, but the scene lacked the emotions and feelings I was initially thinking of. I thought perhaps I’d need to simulate the view out of a train, but that might require simulating train rails, although the stored paths of the roads could be used for the procedural motion. As I was already starting to feel weary of this whole experiment, I had another idea: since there’s a boids example in three.js, why not try attaching a camera to a boid? Not only do you get a bird’s-eye view, you get camera animation for free! I did a quick mash-up and the effect seemed rather pleasant.

Incidentally, mrdoob in his code named the roads Boids, and I thought the name “Boids and Buildings” would be a rather fitting theme.

Let me now jump around the bits and pieces I can think of to write about.

Splash screen
I abstracted the map generator into a class called Land, which takes in parameters and was reusable for both the 2D and 3D versions. In the splash screen, CSS 3D is used to translate and scale the map generator in the background.

Text fonts
I wanted to create some text animation on the splash page, with a little of the boid/map-generation feeling. I used mrdoob’s line font from the Aaronetrope and experimented with various text animations.

Visual Direction
To simplify working with colors, I used random greys for the buildings at the start and ended up with a “black & white” / greyscale direction. For post-processing, I added just the film shader by alteredqualia found in the three.js examples, with a slightly high amount of noise.

Scene control and animation
Since most of the animation was procedural, much of it during development was driven by code. When it was time to have some timeline control, I used Director.js, which I wrote for “itcameupon”. The Director class schedules time-based actions and tweens, and most of the random snippets of animation code were added to it. So the animation more or less runs on a fixed schedule, except for the randomized parts (e.g. the times when roads start building on lands).

Pseudorandomness
This experiment had me dealing with a lot of randomness, but based on probability distributions, lots of randomness can actually give expected results. I used this fact to help with debugging too. For example, say you want to quickly dump values to the console in the render loop but you’re afraid of crashing your devtools: you could do (Math.random()<0.1) && console.log(bla); This means: take a sample of 10% of the values and spit the results out to the console. If you are on Canary, you can even do (Math.random()<0.01) && console.clear(); to clear your debug messages once in a while.

Buildings
Building heights are randomized, but they follow certain rules to make things more realistic and cityscape-like. If the area of the land is big, the building height is lower. If it’s a small area, but not too small, it could be a skyscraper.

Boid cams
The boid camera simply follows the first boid: based on its velocity, the camera is placed slightly above and behind the bird. I wanted to try a spring-damper camera system but opted for a simpler implementation instead: move the camera a factor k closer to the target position every render. In simple code, currentX += (targetX - currentX) * k, where k is a small factor, e.g. 0.1; this creates a cheap damping/easing effect. The effect is apparent toward the end of the animation, when the camera is slingshot to the birds as the camera mode changes to the boidcams.
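
In full, the boid cam is only a few lines; a sketch with made-up offsets and factor (assuming the boid exposes position and velocity as THREE.Vector3):

var k = 0.1;
function updateBoidCam(camera, boid) {
    var target = boid.position.clone()
        .sub(boid.velocity.clone().normalize().multiplyScalar(30))  // behind the bird
        .add(new THREE.Vector3(0, 12, 0));                          // and slightly above
    camera.position.x += (target.x - camera.position.x) * k;        // cheap damping
    camera.position.y += (target.y - camera.position.y) * k;
    camera.position.z += (target.z - camera.position.z) * k;
    camera.lookAt(boid.position);
}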

Performance Hacks

Shape triangulation is probably one of the most expensive operations here. In “itcameupon”, slow computers experienced a short, sharp delay when text was triangulated for the first time (before caching). Over here, it’s a greater problem due to the potentially high number of triangulations. Apart from placing a limit on the number of buildings, one way to prevent a big noticeable lag is to use web workers. However, I opted for the old trick of using setTimeouts instead. Let’s say there are 100 buildings to triangulate: instead of doing everything inside one event loop, which would cause a big pause, I do setTimeout(buildBuilding, Math.random() * 5000); Based on probability, the 100 buildings get triangulated spread across 5 seconds, reducing the noticeable pauses. I suppose this is somewhat like the incremental garbage collection technique newer JavaScript engines employ.
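
Spread out, it looks something like this (a sketch; the buildings array, building.shape, building.height and building.mesh are stand-ins, not the demo’s real structure):

buildings.forEach(function (building) {
    setTimeout(function () {
        // triangulation happens inside ExtrudeGeometry
        building.mesh.geometry = new THREE.ExtrudeGeometry(building.shape, { amount: building.height });
    }, Math.random() * 5000);    // ~100 buildings spread over ~5 seconds
});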

Another thing I did was disable matrix calculations, using object.matrixAutoUpdate = false; once the buildings have finished building and animating.

Music
Pretty awesome music from Walid Feghali. Nothing much else to add.

Audio
I added my Web Audio API experiment of creating wind sound to the boidcams. Wind can be created simply from random samples (white noise); I added a lowpass filter and a delay. The amplitude of the wind noise is driven by the vertical direction the boids are flying in. I wanted to add a low-frequency oscillator to make the wind sound more realistic, but I haven’t figured that out yet.
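
The wind chain is basically white noise into a lowpass into a delay; a Web Audio sketch (the filter and gain values are made up, and the gain would be driven by the boids’ vertical speed):

var audioCtx = new (window.AudioContext || window.webkitAudioContext)();

var buffer = audioCtx.createBuffer(1, audioCtx.sampleRate * 2, audioCtx.sampleRate);
var samples = buffer.getChannelData(0);
for (var i = 0; i < samples.length; i++) samples[i] = Math.random() * 2 - 1;  // white noise

var noise = audioCtx.createBufferSource();
noise.buffer = buffer;
noise.loop = true;

var lowpass = audioCtx.createBiquadFilter();
lowpass.type = 'lowpass';
lowpass.frequency.value = 400;     // muffle the hiss into a rumble

var delay = audioCtx.createDelay();
delay.delayTime.value = 0.05;

var gain = audioCtx.createGain();
gain.gain.value = 0.3;             // tie this to the boids' vertical velocity

noise.connect(lowpass);
lowpass.connect(delay);
delay.connect(gain);
gain.connect(audioCtx.destination);
noise.start(0);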

Future Enhancements
Yes, there are always imperfections. Edge handling could be better, same with triangulation. Boids could do building collision detection. The wind algorithm could be improved. There could be a greater variety of buildings, or some cool collapsing buildings (like in Inception).

Conclusion
So the demo is online at http://jabtunes.com/labs/boidsnbuildings/; the sources are unminified, and you’re welcome to poke around.

Going back to the original idea, I still wonder if I managed to create the mood and feelings I originally had in my head. Here are some photos I shot overlooking Osaka in Japan; maybe you can compare them with the demo and make a judgement. Perhaps you might think the demo is closer to a Resident Evil scene instead. ;P

Edit: after watching this short documentary on Hashima Island (where a scene from Skyfall was shot), I think the demo could resemble Hashima’s torn buildings too.




Nucleal, The Exploding Photo Experiment

Particles. Photos. Exploding motions. The outcome of this year’s experimentation with even more particles. Check out http://nucleal.com/


This might probably look better in motion

Rather than bore you with a large amount of text, perhaps some pictures can do some of the talking.

First you get to choose how many particles to run.


Most decent computers can do 262144 particles easily; even my 3-generation-old 11″ MacBook Air can run 1 or 4 million particles.

On the top bar, you get some options for how you may interact with the particles, or which photo albums to select if you connect to Facebook.

At the bottom is a film strip which helps you view and select photos from the photo album.

Of course, in the middle you view the photo particles. A trackball camera is used, so you can control the camera with the different buttons of your mouse, or press A, S or D while moving your mouse.

Instead of arranging the particles in a plane like a photo, you could assemble them as a sphere, cone, or even a “supershape”.

Static shapes by themselves aren’t that interesting, so physics forces can be applied to the particle formation.

Instead of the default noise wave, you can use 3 invisible orbs to attract particles, with an intensity related to their individual colors.

Or do the opposite of attracting, repelling

My favorite physics motion is the “air brakes”.

This slows and stops the particles in their tracks, allowing you to observe something like “bullet time”.

While not every combination always looks good, it’s pretty fun to see what the particles form after some time, especially with air brakes between different combinations.

Oh, by the way, have you noticed the colors are kind of greyscale? That’s the vintage photo effect I’m applying to the photos in real time.

And for the other photo effect I added, a cross-processing filter.

(This, by the way, is my nephew, who is the reason I spend less attention and time on Twitter these days. :)

So hopefully I’ve given you a satisfying walk-through of the Nucleal experiment, at least way simpler than the massive Higgs boson “god” particle experiment.

Of course, some elements of this experiment are not entirely new.

Before this, I have also enjoyed the particle experiments of
- Kris Temmerman’s blowing up images
- Ronny Welter’s video animation modification
- Paul Lewis’s Photo Particles

The difference is that now this brilliant idea of using photos to spawn particles can reach a new level of interactivity and particle massiveness, all done in the browser.

While I’m happy with the results, this is just the tip of the iceberg. This being an experiment, there’s much room for improvement in both artistic and technical areas.

A big thank you again to those involved in three.js, which this was built on; to the adventurous ones who explored GPGPU/FBO particles before me; to those who blog and share their knowledge about GLSL and graphics, from which much was absorbed to put this together; and not least to Yvo Schaap, who supported me and this project.

Thank you also to others who are encouraging and make the internet a better place.

p.s. This is largely a non-technical post, right? Stay tuned for perhaps a more in-depth technical post about this experiment :)