
Making of Boids and Buildings

Here's my latest three.js WebGL experiment, called "Boids and Buildings". It's an experimental 3D procedural city generator that runs in the form of a short demo. Turn up your volume and be prepared to overclock your CPU (& GPU) a little.

Here's the story – early last month I was in Japan for a little travelling. One thing I found particularly interesting was the architecture. The cities are large and packed with buildings (Edo, now called Tokyo, was once the world's largest city).


(The huge billboards are at the famous Dotonbori in Osaka; the bottom 2 photos are houses shot in Kyoto.)

Some time ago, I saw mrdoob's procedural city generator. We had thought about creating a 3D version of it, but it was only during the trip that I decided to try working on it. Some of the process was written up in this Google+ post, but I'll summarize it a little here.

Firstly, the city generator works by building a road which spins off more roads along the way. Each road stops when it intersects another road or reaches the boundary of the land. Another way of looking at this is that the land is split into many "small lands" divided by the roads. My approach was that if I could extract the shape of each divided plot, I could run ExtrudeGeometry to create a building that fills each plot.
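As a rough illustration of that idea (a sketch only, not the actual demo code – contour and material are assumed to come from elsewhere), each plot's outline can be traced into a THREE.Shape and extruded; note the extrusion option is named amount in older three.js releases and depth in newer ones:

// hedged sketch: turn one plot's contour into a simple building block
function buildBuilding(contour, height, material) {
  var shape = new THREE.Shape();
  shape.moveTo(contour[0].x, contour[0].y);
  for (var i = 1; i < contour.length; i++) {
    shape.lineTo(contour[i].x, contour[i].y);
  }
  var geometry = new THREE.ExtrudeGeometry(shape, {
    depth: height,      // `amount: height` in older three.js versions
    bevelEnabled: false
  });
  var mesh = new THREE.Mesh(geometry, material);
  mesh.rotation.x = -Math.PI / 2; // extrusion runs along z, so tip it up to stand on the ground
  return mesh;
}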

The original JS implementation detected road intersections by looking up pixel data on the canvas. While I managed to write a version which detects faces based on pixels, detecting edges and vertices was more tedious than I thought – it felt as if I would have to write or use an image processing library similar to potrace. Instead of doing that, I worked on an alternative intersection detection based on mathematically calculating whether each pair of lines/roads intersected. This is probably slower and takes up more memory, but the points of intersection are retained.
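For the curious, the line/road intersection test is just standard segment-segment math; here's a minimal sketch in plain JS (not the demo's actual code):

// returns the intersection point of segments (p1,p2) and (p3,p4), or null if they don't cross
function intersectSegments(p1, p2, p3, p4) {
  var d1x = p2.x - p1.x, d1y = p2.y - p1.y;
  var d2x = p4.x - p3.x, d2y = p4.y - p3.y;
  var denom = d1x * d2y - d1y * d2x;
  if (denom === 0) return null; // parallel or collinear
  var t = ((p3.x - p1.x) * d2y - (p3.y - p1.y) * d2x) / denom;
  var u = ((p3.x - p1.x) * d1y - (p3.y - p1.y) * d1x) / denom;
  if (t < 0 || t > 1 || u < 0 || u > 1) return null; // crossing lies outside the segments
  return { x: p1.x + t * d1x, y: p1.y + t * d1y };   // this point is retained as a road vertex
}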


(This version marks the starting and ending points of each road with red and green dots.)

However, some important information is still missing – which edges connect which points, and which edges belong to each face/land. Instead of determining this information after processing everything, a half-edge data structure can compute it elegantly on the fly. Demoscene coders might have seen this technique used for mesh generation.

Without going into an in-depth technical explanation, here's an analogy for how half-edges are employed. Every plot of land is defined by the roads surrounding it; imagine building fences around the perimeter of the plot to enclose the area within. The enclosed area defines each face, and each fence is like a half-edge. Since each road divides the land into 2 sides (left and right), 2 fences are constructed, one denoting the land on each side. If a road is built through an existing plot, the fences on that plot have to be broken down and connected to each side of the new road's fences. In code, each new road contains 2 half-edges, and connecting new roads requires creating new split edges and updating the linked half-edge pointers.
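Here's a minimal half-edge sketch of that analogy (hypothetical names, not the demo's actual classes): each road contributes a pair of twinned half-edges, and walking the next pointers around a face yields the contour of one plot:

function HalfEdge(vertex) {
  this.vertex = vertex; // the vertex this half-edge points to
  this.twin = null;     // the opposite half-edge along the same road
  this.next = null;     // the next half-edge walking around the same face/land
  this.face = null;     // the land this half-edge borders
}

function addRoad(from, to) {
  var ab = new HalfEdge(to);
  var ba = new HalfEdge(from);
  ab.twin = ba;
  ba.twin = ab;
  // when the road cuts through an existing land, that land's half-edges are split
  // at the intersection and their next pointers re-linked through ab/ba here
  return [ab, ba];
}

// following next pointers until we return to the start gives the ordered contour of one plot
function faceContour(start) {
  var points = [], edge = start;
  do {
    points.push(edge.vertex);
    edge = edge.next;
  } while (edge !== start);
  return points;
}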

With that a 2D version…
2d generator

… and a 3D version was done.
3D

(In hindsight, it might have been possible to use half-edges with the original pixel-based collision detection.)

Now we've got 3D buildings, but the scene lacked the mood and feeling I was initially going for. I thought perhaps I'd need to simulate the view out of a train, but that might require simulating train rails, although the stored paths of the roads could be used for the procedural motion. As I was already starting to feel weary of this whole experiment, I had another idea – since there's a boids example in three.js, why not try attaching a camera to a boid? Not only do you get a bird's-eye view, you get camera animation for free! I did a quick mash-up and the effect seemed rather pleasant.

Incidentally, mrdoob in his code named the roads Boids, and I thought the name “Boids and Buildings” would be a rather fitting theme.

Let me jump around the bits and pieces I can think of to write about.

Splash screen
I abstracted the map generator into a class called Land which takes in parameters, and it was reusable for both the 2D and 3D versions. In the splash screen, CSS 3D transforms were used to translate and scale the map generator in the background.

Text fonts
I wanted to create some text animation on the splash page, with a little of the boid/map generation feeling. I used mrdoob's line font from Aaronetrope and experimented with various text animations.

Visual Direction
To simplify working with colors, I used random greys for the buildings at the start and ended up with a "black & white" / greyscale direction. For post-processing, I added just the film shader by alteredqualia found in the three.js examples, with a slightly high amount of noise.

Scene control and animation
Since most of the animation was procedural, much of it was driven by code during development. When it was time to have some timeline control, I used Director.js, which I wrote for "itcameupon". The Director class schedules time-based actions and tweens, and most of the random snippets of animation code were added to it. So more or less, the animation runs on a fixed schedule except for the randomized parts (eg. the time when roads are still building over the land).

Pseudorandomness
This experiment had me dealing with a lot of randomness, but thanks to probability distributions, lots of randomness can actually give expected results. I used this fact to help with debugging too. For example, say you would like to quickly dump values to the console in the render loop but you're afraid of crashing your devtools – you could do this: (Math.random()<0.1) && console.log(bla); This means: take a sample of 10% of the values and spit the results out to the console. If you are in Canary, you can even do (Math.random()<0.01) && console.clear(); to clear your debug messages once in a while.

Buildings
Building heights are randomized, but they follow certain rules to make the result more realistic and cityscape-like. If the area of a plot is big, the building height is kept lower. If it's a small area but not too small, it could become a skyscraper.
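Something along these lines (the thresholds below are made up purely for illustration):

function buildingHeight(area) {
  if (area > 5000) return 10 + Math.random() * 20;   // big plots get low, wide blocks
  if (area > 500)  return 50 + Math.random() * 150;  // small-but-not-tiny plots may become skyscrapers
  return 5 + Math.random() * 10;                     // tiny slivers stay low
}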

Boid cams
The boid camera simply follows the first boid: based on its velocity, the camera is placed slightly higher and behind the bird. I wanted to try a spring-damper camera system but opted for a simpler implementation instead – move the camera a factor k closer to the target position every render. In simple code, currentX += (targetX - currentX) * k, where k is a small factor eg. 0.1 – this creates a cheap damping/easing effect. The effect is most apparent toward the end of the animation, when the camera is slingshot to the birds as the camera mode changes to the boidcams.
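A minimal sketch of the boid cam idea (assuming boid.position and boid.velocity are THREE.Vector3s; the offsets are arbitrary):

var k = 0.1; // damping factor
function updateBoidCam(camera, boid) {
  var dir = boid.velocity.clone().normalize();
  var targetX = boid.position.x - dir.x * 60;      // behind the bird, along its velocity
  var targetY = boid.position.y - dir.y * 60 + 25; // and slightly higher
  var targetZ = boid.position.z - dir.z * 60;

  // cheap damping/easing: current += (target - current) * k
  camera.position.x += (targetX - camera.position.x) * k;
  camera.position.y += (targetY - camera.position.y) * k;
  camera.position.z += (targetZ - camera.position.z) * k;
  camera.lookAt(boid.position);
}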

Performance Hacks

Shape triangulation is probably one of the most expensive operations here. In "It came upon", slow computers experienced a short sharp delay when text got triangulated the first time (before caching). Over here, it's a bigger problem due to the potentially high number of triangulations. Apart from placing a limit on the number of buildings, one way to prevent a big noticeable lag is to use web workers. However, I opted for the old trick of using setTimeouts instead. Let's say there are 100 buildings to triangulate: instead of doing everything inside one event loop, which would cause a big pause, I do setTimeout(buildBuilding, Math.random() * 5000); – based on probability, the 100 buildings get triangulated across roughly 5 seconds, reducing the noticeable pauses. I suppose this is somewhat like the incremental garbage collection technique newer JavaScript engines employ.
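In sketch form (buildBuilding here stands in for whichever hypothetical function triangulates and creates one building):

// spread the expensive triangulation across ~5 seconds instead of one long pause
buildings.forEach(function (building) {
  setTimeout(function () {
    buildBuilding(building);
  }, Math.random() * 5000);
});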

Another thing I did was to disable matrix calculations, using object.matrixAutoUpdate = false; once buildings have finished building and animating.

Music
Pretty awesome music from Walid Feghali. Nothing much else to add.

Audio
I added my Web Audio API experiment of creating wind sound to play during the boidcams. Wind can simply be created with random samples (white noise), to which I added a lowpass filter and a delay. The amplitude of the wind noise is driven by the vertical direction the boids are flying at. I wanted to add a low frequency oscillator to make more realistic sounding wind, but I haven't figured that out.
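A rough sketch of the wind idea with the Web Audio API – a looping white-noise buffer through a lowpass filter, with the gain driven by the boids' vertical motion. (This uses the current method names; the webkit-prefixed API of that era used createGainNode()/noteOn(), and the actual demo also adds a delay node.)

var AudioContext = window.AudioContext || window.webkitAudioContext;
var audioCtx = new AudioContext();

var buffer = audioCtx.createBuffer(1, audioCtx.sampleRate * 2, audioCtx.sampleRate);
var data = buffer.getChannelData(0);
for (var i = 0; i < data.length; i++) data[i] = Math.random() * 2 - 1; // white noise

var noise = audioCtx.createBufferSource();
noise.buffer = buffer;
noise.loop = true;

var lowpass = audioCtx.createBiquadFilter();
lowpass.type = 'lowpass';
lowpass.frequency.value = 400;

var gain = audioCtx.createGain();
gain.gain.value = 0;

noise.connect(lowpass);
lowpass.connect(gain);
gain.connect(audioCtx.destination);
noise.start(0);

// per frame: louder wind when the boid climbs or dives steeply
function updateWind(boid) {
  gain.gain.value = Math.min(1, Math.abs(boid.velocity.y) * 0.5);
}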

Future Enhancements
Yes, there are always imperfections. Edge handling could be better, same with triangulation. Boids could do building collision detection. The wind algorithm could be improved. There could be a greater variety of buildings, or some cool collapsing buildings (like in Inception).

Conclusion
So the demo is online at http://jabtunes.com/labs/boidsnbuildings/, the sources are unminified, and you're welcome to poke around.

Going back to the original idea, I still wonder if I managed to create the mood and feeling I originally had in my head. Here are some photos I shot overlooking Osaka in Japan – maybe you can compare them with the demo and make a judgement. Perhaps you might think the demo is closer to a Resident Evil scene instead. ;P

Edit: after watching this short documentary on Hashima Island (where a scene from Skyfall was set), I think the demo could resemble Hashima's torn buildings too.




Nucleal, The Exploding Photo Experiment

Particles. Photos. Exploding motions. The outcome of experimenting with even more particles this year. Check out http://nucleal.com/


This probably looks better in motion

Without boring you with a large amount of text, perhaps some pictures can help do the talking.

First, you get to choose how many particles to run.


Most decent computers can do 262,144 particles easily; even my 3-generation-old 11″ MacBook Air can run 1 or 4 million particles.

On the top bar, you get some options on how to interact with the particles, or which photo albums to use if connected to Facebook.

At the bottom is a film strip which helps you view and select photos from the photo album.

Of course, in the middle you view the photo particles. A trackball camera is used, so you can control the camera with the different buttons of your mouse, or press A, S or D while moving your mouse.

Instead of arranging the particles in a plane like a photo, you could assemble them as a sphere, cone, or even a “supershape”.

Static shapes by themselves aren't that interesting, so physics forces can be applied to the particle formation.

Instead of the default noise wave, you can use 3 invisible orbs to attract particles, with an intensity related to their individual colors.

Or do the opposite of attracting, repelling

My favorite physics motion is the “air brakes”.

This slows and stops the particles in their tracks, allowing you to observe something like "bullet time".

While not every combination looks good, it's pretty fun to see what the particles form after some time, especially with air brakes between different combinations.

Oh, btw, have you noticed the colors are kind of greyscale? That's the vintage photo effect I'm applying to the photos in real time.

And for the other photo effect I added, a cross-processing filter.

(this btw is my nephew, who allows me to spend less attention and time on twitter these days:)

So hopefully I’ve given you a satisfying walk-through of the Nucleal experiment, at least way simpler than the massive Higgs boson “god” particle experiment.

Of course, some elements of this experiment are not entirely new.

Before this, I have also enjoyed the particle experiments of
Kris Temmerman’s blowing up images
Ronny Welter’s video animation modification
Paul Lewis’s Photo Particles

The difference is that now the brilliant idea of using photos to spawn particles can reach a new level of interactivity and particle massiveness, all done in the browser.

While I'm happy with the results, this is just the tip of the iceberg. Since this is an experiment, there's much room for improvement in both artistic and technical areas.

A big thank you again to those involved in three.js, on which this was built; to the adventurous who explored GPGPU/FBO particles before me; to those who blog and share their knowledge about GLSL and graphics, from which much was absorbed to assemble this together; and not least to Yvo Schaap, who supported me and this project.

Thank you also to others who are encouraging and make the internet a better place.

p.s. this is largely a non-technical post, right? Stay tuned for perhaps a more in-depth technical post about this experiment :)

Orbital – Collaborative Game Development

I attended my first 24-hour hack day organized by NUS Hackers about 2 weeks ago. I thought it was a good chance to do something I might otherwise never get to do. Paul Lamere, who recently did the Midem Music Machine with Mr Doob, also wrote that hackathons are productive and are not nonsense.


Screenshot of our cloud platform

I was planning to write an HTML5 music app with WebGL, but since I had 2 friends joining me, we brainstormed and decided to work on an idea Profound7 and I had discussed before. If his nick sounds familiar, that's because he started a project to write 52 games this year. It's a good question how well that will work out, but I thought it was a good goal to start with, and that I should motivate myself to do something similar, like 52 tools or so (music/photos/apps) this year.

Along with writing 52 games, Munir (Profound7's real name) has written his own scripting language Orbit (a mix of Lisp, Lua and JS) and his own game engine Lune, built on top of Haxe and NME for cross-platform compatibility (Flash/Windows/Mac/Linux/Android/iOS, and hopefully HTML5 once Jeash becomes more stable). All of these, btw, are open source too and can be found on GitHub.


Builds for multiple platforms


Download and running a native Mac build

In line with wanting to help out Challenge 52, what we decided to build was a cloud-based HTML5 collaborative platform to manage game assets (including editing game code) and to allow downloading binary builds of the game for different target devices. Most of it was written in JavaScript and node.js, with bindings to Git for file system revisioning. I'm a little lazy to write up the details here, but this presentation I placed on SlideShare should have them!

I think this idea is pretty cool; few people seem to have executed it, and I foresee others going in this direction (PhoneGap, Game Closure, Google PlayN?, etc). As it turns out we didn't win any prizes (the winning group built a UAV system), but I'm glad we set out to do what we wanted to, which is to build something. The next thing I hope is that this goes live, even if it's for a small group of users or just for Challenge 52 collaborators – I see the potential for this to be useful in real life. Well, let's see if there's cheap node.js hosting too.

Finally, remember to check out the Challenge 52 project if you're interested in game development and would like to participate! If you have questions, you can also contact me on twitter.

Experiments behind “It Came Upon”

It has really been a while now, but I thought, okay, we need to finish this – otherwise all of it would probably be left in the dust, and I'd never move on. So here it is, an attempt to finish touching on some of the technical aspects, continuing from the post about "It came upon".

Not everything was done the best or the right way, but there's some hope that documenting it anyway might benefit someone, at the very least myself looking back in future. There are many elements in that demo, so let me try listing the points in the order they appear.

1. Dialog Overlays


The first thing after landing on the main page is a greeting contained in a dialog. Yes, dialogs like this are common on the net, but can you guess what's different about this one? Partially correct, if you are guessing rounded borders without images, using CSS3.

What I wanted was also a minimalistic JavaScript integration – that is, no more checking browser dimensions and doing JavaScript calculations. I thought that using the new CSS3 box model would work, but it was giving me headaches across browsers, so I fell back on a much older technology which actually works – table-style layout. This allows the content to be centered nicely, and the only width I had to specify (or maybe not even that) was the dialog width. JavaScript was just used for toggling the visibility of the dialog: style.display = 'table';

To see its inner guts, it's perhaps best to look at the CSS and the JS. Or perhaps read this for a stab at the CSS3 box approach yourself.

2. Starfields

Star fields are a pretty common thing. The way I implemented them was using three static Three.js particle systems (which aren't emitters) and looping them, such that once a starfield crosses the static camera at a particular time, it is looped to the back to be reused, giving an infinite starfield scene.

3. Auroras experiments.
I've learned that it's sometimes easier to write experiments by starting afresh and simple – in this case with JavaScript + 2D canvas before doing it as a WebGL shader. Here's the JS version on jsdo.it so you can experiment with it.


This version runs from a reasonable speed to really slow, all depending on the canvas resolution and the number of octaves. The shape of the aurora is generated with 3D simplex noise, where 2 dimensions correspond to each pixel's position and the 3rd dimension is time. Even simple parameters can change the effect of perlin noise drastically, so it's not too good an idea to have too many factors if you do not understand what's going on. I learnt how difficult that is after what I thought was a failed experiment to create procedural moving clouds, after which I reread the basics of perlin noise and started with a really simple example all over again. In this case, I felt that playing with the X and Y scales created the effect I wanted. These "perlin" levels were mixed with a color gradient containing the spectrum of the aurora (which moves a little for extra dynamics). Next, another gradient was added from top to bottom to emulate the difference in light intensity vertically.
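A much simplified canvas sketch of the idea, assuming some simplex noise implementation exposing noise3D(x, y, z) returning values in [-1, 1] (e.g. a library like simplex-noise); the scales and colours here are illustrative only:

function drawAurora(ctx, width, height, time) {
  var image = ctx.createImageData(width, height);
  for (var y = 0; y < height; y++) {
    for (var x = 0; x < width; x++) {
      // stretching x and compressing y is what shapes the "curtains"
      var n = noise3D(x * 0.01, y * 0.05, time);
      var intensity = (n * 0.5 + 0.5) * (1 - y / height); // vertical gradient: fade towards the bottom
      var i = (y * width + x) * 4;
      image.data[i]     = 40 * intensity;   // R
      image.data[i + 1] = 255 * intensity;  // G – a greenish slice of the aurora spectrum
      image.data[i + 2] = 140 * intensity;  // B
      image.data[i + 3] = 255 * intensity;  // alpha
    }
  }
  ctx.putImageData(image, 0, 0);
}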


For kicks, I created a GLSL shader version which potentially runs much faster than the canvas version. However, not having much time to integrate the aurora with the demo, I used the canvas version instead. It was reduced to 128 by 128 (Chrome ran 256×256 pretty well, but not Firefox at the time), a single octave, then used as a texture on a plane added to the scene. I also experimented with adding the texture onto an inverted sphere, which gave an unexpected but interesting look.


Finally, after searching, I thought there weren't many known ways to create aurora effects, so this was just my approach, though it might not be the simplest or best way either. Recently though I've found 2 other procedurally generated aurora examples, which you might wish to look into if interested: http://glsl.heroku.com/e#1680.0, and Eddie Lee also coded some really beautiful aurora effects for his Kyoto project with GLSL in OpenGL, using 3D hermite spline curves and a wrappable perlin texture (more in that video description and shader source code!).

4. Night Sky Auroras + Trails

The long exposure star trail effect is created by not clearing the buffer. This technique is shown in the trails three.js example. It is then toggled on the "timeline" (see the next point).
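In three.js terms the trick boils down to something like this sketch (exact option names may differ slightly between versions):

var renderer = new THREE.WebGLRenderer({ preserveDrawingBuffer: true }); // keep the last frame around
renderer.autoClearColor = false; // stop clearing the colour buffer, so stars accumulate into trails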

5. Director class.
Perhaps it's now time to introduce the Director class. For time-based animations, you need something to manage your timeline and animations. Mr.doob has a library called sequencer.js (not to be confused with music sequencing!) which is used in various projects he has worked on. That library helps load sequences or scenes based on time. The Director class I wrote is somewhat similar, except that it works at a more micro level, with direct support for animation via the easing functions in tween.js. Perhaps it is also more similar to Marcin Ignac's timeline.js library, except with support for adding Actions (instead of just tweens) at particular times.

The API is something like
director = new THREE.Director();
director.addAction(0, function() { /* add objects */ })
.addTween(0, 4, camera.position, { x: -280, y: 280, z: -3000},
{ x: -280, y: 280, z: -2600}, 'Linear.EaseNone')
.addAction(4.0, function() {
// here's a discrete action
camera.position.set(700, 160, 1900);
callSomethingElse();
});

// To start just call
director.start();

// in your running loop just call
director.update();

// scenes also can be merged via
director.merge(scene2director);
Simply put, Director is a JavaScript class which does time-based scheduling and tweening for animations in a scene. I've planned to improve and release it, but in the meantime you can look at the JavaScript source. (This is also integrated with point #)

6. Snow scene

The components that make up the main snow scene can actually be found in the comprehensive three.js examples. The main elements apart from snow are shadows, flares, text, and post processing.

I think I've covered why I chose the shadow, flare and text elements in the previous post, so look into those examples linked above and I'll jump straight into the post-processing element.

7. Post processing

The only post-processing filter used here is a tilt-shift, which emulates a tilt-shift effect. Actually, a tilt-shift lens has the ability to put items at different focal lengths in focus, or items at the same focal length out of focus. Such lenses are usually used for architectural photography, but also have a reputation for creating "miniature landscapes", characterized by blurring the top and bottom of a photo. This post-processing filter does exactly that kind of effect rather than emulating the real lens. The effect is surprisingly nice – it helps create softer shadows and brings focus into the middle of the screen. I had initially wanted to port Evan's version, which allows more control of the blur plane, but unfortunately that didn't happen in the time frame I had.

8. Snow Particles

There are 2 particle effects used here: one for the snow and the other for the disintegrating text.

I used my particle library sparks.js to manage the snow here. The sprites for the particles are drawn using the canvas element (like most of my particle examples). Perhaps it's harder for me to express this in words, so let the related code segment along with its comments do the talking.

The 2 main elements of this effect are that it's emitted from a rectangular area (ParallelogramZone) and that it's given a random drift. The "heaviness" of the snow (which is controllable from the menu options) adjusts particleProducer.rate, which can simply turn the scene from no snow into a snow blizzard. (Director also controlled the stopping and starting of the snow for the text scene.)

9. Text Characters Generation

Before going on to the text particles, a few comments on the text generation method used here. While we have already demonstrated dynamic 3D text generation in the three.js examples, there's a slight modification used for the text effects.

Instead of regenerating the entire text mesh each time a letter is "typed", each character's text mesh is generated and cached (so triangulation is reduced if a letter is repeated) and added to an Object3D group. This gives the ability to control, manipulate and remove each 3D character individually when needed, but more importantly it also performs better. With that in place, the text typing recording and playback effect was almost done.
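A sketch of the caching idea (hypothetical names; note that the font option took a string name in that era of three.js, while current versions expect a loaded Font object, and material is assumed to be defined elsewhere):

var charCache = {};
var textGroup = new THREE.Object3D();

function typeCharacter(char, x) {
  if (!charCache[char]) {
    // triangulation only happens the first time a character appears
    charCache[char] = new THREE.TextGeometry(char, { size: 10, height: 2, font: 'helvetiker' });
  }
  var mesh = new THREE.Mesh(charCache[char], material);
  mesh.position.x = x;
  textGroup.add(mesh); // each letter stays individually addressable and removable
  return mesh;
}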

10. Text Particles

After the end of each line, the text would then burst into particles. This was another area of experimentation for me. Converting text to particles could be done like this.

a. Paint text onto a 2d canvas.
b. Either 1) randomly place particles and keep those that land within the painted text area, or
2) randomize particles to lie within the painted text area. Next, move them along a z-distance.

Method 2 works pretty well for minimal “particle waste”.
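A sketch of method 2 with a 2D canvas (plain JS; the sizes and font are arbitrary):

function textToPoints(text, count) {
  var canvas = document.createElement('canvas');
  canvas.width = 256;
  canvas.height = 64;
  var ctx = canvas.getContext('2d');
  ctx.font = '48px sans-serif';
  ctx.fillText(text, 10, 48);

  // collect every pixel that the text actually painted
  var data = ctx.getImageData(0, 0, canvas.width, canvas.height).data;
  var painted = [];
  for (var i = 0; i < data.length; i += 4) {
    if (data[i + 3] > 0) { // alpha > 0 means this pixel is part of the text
      var p = i / 4;
      painted.push({ x: p % canvas.width, y: Math.floor(p / canvas.width) });
    }
  }

  // randomize particles onto painted pixels, spread along a small z-distance
  var points = [];
  for (var j = 0; j < count; j++) {
    var pick = painted[Math.floor(Math.random() * painted.length)];
    points.push({ x: pick.x, y: -pick.y, z: Math.random() * 10 });
  }
  return points;
}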

However, I thought that matching typeface.js fonts to the 2D canvas might take me a little more time, so I decided to use the mesh-to-particle THREE.GeometryUtils.randomPointsInGeometry() method (first seen in @alteredq's shadowmap demo), which randomizes particles onto the mesh's (sur)faces instead. While I preferred the previous approach, since it gave a nicer volumetric feel, the latter approach likewise worked and, on the bright side, showed the shape of the mesh better when viewed from the sides.

11. Key recordings.
The animation of the 3D text messages was done using "live recording" with a Recorder class (inside http://jabtunes.com/itcameupon/textEffects.js). Every time a keypress is made, the event is pushed to the recorder, which stores the running time and the event. The events are then serialized to JSON, which can be loaded at another time. The Recorder class also interfaces with the Director to push the events for playback. This is how users' recordings are saved to JSON and stored on the server.
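The recording idea in miniature (a sketch only; the actual Recorder lives in textEffects.js):

function Recorder() {
  this.startTime = Date.now();
  this.events = [];
}
Recorder.prototype.record = function (event) {
  // store the running time (in seconds) together with the event
  this.events.push({ time: (Date.now() - this.startTime) / 1000, event: event });
};
Recorder.prototype.toJSON = function () {
  return JSON.stringify(this.events); // saved to the server, later fed back through Director for playback
};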

12. Audio Playback
The Jasmid library mentioned in the previous post is used for browser-based synthesis of MIDI files. This version uses the Web Audio API for Chrome, and some tweaked buffer settings for better playback when using the Firefox Audio Data API.

13. Snowman

I almost forgot about the snowman. That was "modelled", or rather procedurally generated, using Three.js primitives (spheres, cylinders etc). One revision before the final even had fingers. Yes, it was a little painful having to refresh the browser to see changes after every change, so the web inspector console helped a little. Still, while animating with the Director class there was even more trial and error, and waiting for it to play back – it was only later that I added a .goto(time) to the Director class. But I made a few interesting accidental discoveries, like the impalement of the snowman, which was done by setting an object to a large negative scale.

As those who follow me on twitter might already know, I wrote the Three.js Inspector a couple of weekends later, which would potentially have made things much easier. Perhaps more on ThreeInspector in another post.

Concluding thoughts
Wow, you've read till here? Cool! It almost felt painful trying to finish this writeup (reminding me of school). If there's anything I missed out, feel free to contact me or dig into the source. While none of this is groundbreaking, it didn't come into existence at the snap of a finger either. I've learnt to build experiments in small parts and integrate them modularly. There's room for refinement, better tools, and better integration. Signing off – stay tuned for more updates on experiments! :)

Curves – Base Classes

This is a quick write up about the Curve classes added to Three.js. If I procrastinate any further, nothing at all may be written, so I’ll start off with the base classes.

The base curve classes can be found in Curve.js. Curves have been used for text, shapes, and even animation, so the Curve classes are the core of all of these.

A curve is a generalized representation of a line. THREE.Curve is the base class which the other curve classes extend. While you may extend a curve yourself, the implemented curve classes include

* — 2d curve classes —
* THREE.LineCurve
* THREE.QuadraticBezierCurve
* THREE.CubicBezierCurve
* THREE.SplineCurve
* THREE.ArcCurve

* — 3d curve classes —
* THREE.LineCurve3
* THREE.QuadraticBezierCurve3
* THREE.CubicBezierCurve3
* THREE.SplineCurve3

Curve classes have a common set of methods derived from THREE.Curve.
getPoint(t)
getPointAt(u)
getPoints(divisions)
getSpacedPoints(divisions)
getLength()
getLengths(divisions)
getNormalVector(t)
getTangent(t)
getUtoTmapping(u)

Okay, that’s plenty of methods, but let’s break it down.

Getting Points
Now, before one can start drawing curves, you need to break the curve down into points. curve.getPoint(t) returns a point (a 2D vector for a 2D curve, a 3D vector for a 3D curve), where t is the parameter along the curve (expressed from 0 to 1). Now, there are times when equal intervals of t do not give you equidistant points, eg. the distance from curve.getPoint(0.1) to curve.getPoint(0.2) may be 10 units while curve.getPoint(0.2) to curve.getPoint(0.3) may be 20 units. (This usually happens with curves which have many more points near their bends than at their straight ends.) What getPointAt(u) does is perform its magical mapping so that the distance from curve.getPointAt(0.1) to curve.getPointAt(0.2) and the distance from curve.getPointAt(0.2) to curve.getPointAt(0.3) are almost equal, if not exactly equal. One example use for this is to make smooth animation of a camera along a spline without it moving faster or slower on different parts of the curve.
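For example, assuming curve is some 3D THREE.Curve instance the camera should travel along, the arc-length based getPointAt() keeps the travel speed constant:

function moveCameraAlongCurve(camera, curve, u) { // u runs from 0 to 1 over the animation
  var position = curve.getPointAt(u);                      // equally spaced by distance
  var lookAhead = curve.getPointAt(Math.min(u + 0.01, 1)); // a point slightly further along
  camera.position.copy(position);
  camera.lookAt(lookAhead);
}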

getPoints() is a convenience method to return the curve's entire set of points given a number of divisions. Internally, it uses equal intervals of t from 0 to 1 with getPoint(), so use getSpacedPoints(divisions) instead to get points that are as equally spaced as possible.

Getting Lengths/Distances
getLength() returns the length of the curve in units. getLengths(divisions) builds cumulative distances based on its segments and is used internally for the equidistant mapping, so avoid it if you do not need it. Likewise, getUtoTmapping(u) is used to map getPointAt() to getPoint(); while it's mostly used internally, you may use it to get a value of t to use with getTangent().

Others
getTangent(t) returns a tangent vector at point t, and getNormalVector(t) its respective normal. Now, all of these magically work for a subclassed curve as long as it implements getPoint(); however, a subclass can override these methods for more exact mathematical precision (as the Line, Bezier and Spline curves do). The tangents from the curves can be used to point an object in a direction while animating its movement along the curve. This is also used for the text bending examples.
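As a sketch, a custom curve only needs getPoint() and inherits everything else (recent three.js versions prefer class syntax and pass an optional target vector, so treat this as illustrative):

function SineWaveCurve(scale) {
  this.scale = scale;
}
SineWaveCurve.prototype = Object.create(THREE.Curve.prototype);
SineWaveCurve.prototype.constructor = SineWaveCurve;
SineWaveCurve.prototype.getPoint = function (t) {
  // t runs from 0 to 1 along the curve
  return new THREE.Vector3(t * 2 - 1, Math.sin(t * Math.PI * 2), 0).multiplyScalar(this.scale);
};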

That sums up the basic explanation of the core Curve class.

History: development of the curve classes appeared here, and was possibly the by-product of this and also this article.

Extrusion Bevels – Three Attempts in Three.js

Three.js is out now at r43 with more features. Check it out! :)

In a previous post I highlighted my approach for creating 3D geometries by extruding a 2D text font (shape) along its z-axis. There have been a couple of changes for text too – `Text` is now `TextGeometry`, and much of what was in that class has been refactored into a couple of other classes, since we now have a generic extrude geometry class, `ExtrudeGeometry`, which applies not only to text fonts but to any generic closed shape, with holes supported via the `Curve` and `Path` classes. Along with TextGeometry comes wrapping to splines/curved paths – maybe a topic for another post ;)


Bevel and Text Geometry with Level-of-details

In this post, I talk a little about beveling (termed fillet caps in Cinema4D), which I mistakenly labelled as "bezel" at first (that's the round part around Rolex watches). A bevel slopes or rounds the corners of an extruded block, kind of like chiseling or sandpapering a wooden block in the real-life analog. Beveling also seems to be a common feature in 3D programs, and you have probably even used it in applications like Photoshop or Word. While seemingly a simple feature, it took me almost four rewrites to get to a version I'm satisfied with, and I'm sharing my approaches below.

To start off, I wasn't even sure whether to keep the original shape on the extruded section or on its front and back caps. Just thinking about this stirred internal debate and caused some amount of code rewrites. After some experimentation and observation, I decided that the original shape on the caps and a widened shape on the extruded body looks better for text. It made even more sense after watching Greyscale Gorilla describe the use of fillet caps in C4D for giving fonts a strong look and making edges smoother.


Interesting strange effect in my WIP bevels

Expansion Around Centroids
So the first approach I quickly jumped in to code was to scale the shapes using their contour points. To know what to scale them around, I used the centroid, calculated by averaging all the points of the contour. This works pretty well, but what about holes? The centroid of a hole's contour points is also calculated, but instead of scaling outwards like the shape's perimeter, the hole is scaled inwards in the opposite direction. This seems to work pretty well for many shapes until… you take a careful look at the tip of the "e". This strange appearance is due to the concave nature of the shape. Drawing it on paper, it's pretty easy to understand why this algorithm would not work on the inside edges of concave shapes.

Expansion via angled midpoints
So approach 1 seemed successful, but failed for certain cases. How can we solve its problem? One way is perhaps to break the concave shapes down so that only convex shapes remain, but that would bring its own set of problems. So I took another approach – extrude the corners based on the angles between edges. Each new corner is pushed outwards by the extrusion amount, along the direction that bisects (is equidistant from) the two lines connected to the corner. Again, a simple method that seems to work pretty well, but on careful inspection one notices strange-looking corners, especially at sharp edges. Since the expanded corners are determined by angles, there is no guarantee that the bevel amount from the edges is equal throughout; it is affected by the shape.

Corners based on Edges Offsets
In the 2nd approach, the 2 edges connecting at each point are required to compute a single bevelled point. The 3rd approach starts out the same way, but then the lines are offset outwards along their edge normals (90 degrees from the edge's slope) to obtain the extruded edges. From the offset positions, the new points are calculated. In pseudo code:

For each point,

for the line connecting to the point,
the left perpendicular normal of that line is used, and

for the line connecting from the point,
the right perpendicular normal is used.

Offset both lines a unit along their associated normals, and find the
intersection of these 2 new lines.

The intersection is the new point.

Now this seems to work much better for most cases, since the new extruded bevel edges are consistently equidistant from the original edges. Unfortunately, at sharply converging edges, the point at those sharp corners seems to go missing. In such rare cases, the intersection point of the extruded edges reverses its direction and pushes the point further away from where it should be. The workaround is to revert to the algorithm of the 2nd attempt to prevent such artifacts.
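Here is the edge-offset corner computation in plain-JS sketch form (not the actual ExtrudeGeometry code; the collinear fallback below stands in for the 2nd-attempt fallback described above):

function offsetCorner(prev, pt, next, amount) {
  var inDir  = normalize({ x: pt.x - prev.x, y: pt.y - prev.y });  // edge coming into the point
  var outDir = normalize({ x: next.x - pt.x, y: next.y - pt.y });  // edge leaving the point
  // perpendicular normals, 90 degrees from each edge's slope
  var n1 = { x: -inDir.y,  y: inDir.x };
  var n2 = { x: -outDir.y, y: outDir.x };
  // a point on each offset line
  var a = { x: pt.x + n1.x * amount, y: pt.y + n1.y * amount };
  var b = { x: pt.x + n2.x * amount, y: pt.y + n2.y * amount };
  // intersect line (a, along inDir) with line (b, along outDir)
  var denom = inDir.x * outDir.y - inDir.y * outDir.x;
  if (Math.abs(denom) < 1e-10) return a; // edges nearly collinear: just offset along the normal
  var t = ((b.x - a.x) * outDir.y - (b.y - a.y) * outDir.x) / denom;
  return { x: a.x + inDir.x * t, y: a.y + inDir.y * t };
}

function normalize(v) {
  var len = Math.sqrt(v.x * v.x + v.y * v.y) || 1;
  return { x: v.x / len, y: v.y / len };
}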

So here it is: the end of the bevel attempts. While not 100% perfect, it has reached a state where I'm generally satisfied with the results.

If you are interested in this, check out the new examples and the source on github: https://github.com/zz85/three.js/blob/experimental/src/extras/geometries/ExtrudeGeometry.js

Stay tuned for more development updates. :)

disclaimer: I never got the chance to take any computer graphics classes, nor have I come across any papers on this topic, so please feel free to point me to or suggest better approaches if you know of any.

p.s. for curved bevels, I used a sinusoidal function of t. I wonder if there's a better way to go about this?

Distributed Web Rendering

(This note was the first of a series written in early May this year and shared on my Facebook notes. It's really hard to imagine that it hasn't been that long since then, yet I've already ventured much deeper into the 3D and WebGL world.)

The Short: Watch this short 30s video.

Fireflies rendered with RenderFlies.

The Long: A demo using web browsers to render images before creating a final video render on the server side.

Motivations:
Just before sleeping, ideas were revolving around my head, and this one, while just one of them, ended up manifesting itself in code as a proof of concept last night.

I imagine that anyone who does video encoding, post-processing or 3D might find the rendering process slow and painful, which is why supercomputers and render farms are used for serious work like in the movie industry.

Since web browsers now support the canvas element and other HTML5 technologies well, and JavaScript performance has greatly improved, this is really an exciting time for web developers (although not too long ago I imagine Flash and ActionScript were really exciting). I then had the idea that a simple distributed rendering system could be created with HTML5. Such a system can be easily deployed, and widely available web browsers can act as thin, lightweight render clients with ease.

The how:

To start hacking this system together, I first modified my HTML5 canvas based fireflies experiment. Instead of the usual canvas repainting at a regular interval, a few changes needed to be made. After updating the particles and the canvas, the image on the canvas is extracted with toDataURL, encoding the image to a base64 string, which is then pushed to the node.js server.
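Client side, the per-frame push amounts to something like this sketch (the /frame route and renderNextFrame are hypothetical names):

function pushFrame(canvas, frameNumber) {
  var dataUrl = canvas.toDataURL('image/png'); // "data:image/png;base64,...."
  var xhr = new XMLHttpRequest();
  xhr.open('POST', '/frame', true);
  xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
  xhr.onload = function () {
    renderNextFrame(); // only advance once the server has stored this frame
  };
  xhr.send('frame=' + frameNumber + '&image=' + encodeURIComponent(dataUrl));
}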

The node.js server basically decodes the image and writes the file to the file system. When the images are done rendering, ffmpeg is spawned as a separate process to encode the images into an mp4 video. Note that node.js is spectacular at dealing with external processes. Due to its event-driven, non-blocking architecture, the HTTP request can wait until the encoding is done before notifying the web client. This is a good reason to use JavaScript on the server side, apart from just thinking that it's cool.
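On the server, the idea is roughly this (paths, routes and ffmpeg flags are illustrative, not the project's actual code):

var fs = require('fs');
var spawn = require('child_process').spawn;

function saveFrame(frameNumber, dataUrl, callback) {
  // strip the data-url prefix and decode the base64 payload to a PNG on disk
  var base64 = dataUrl.replace(/^data:image\/png;base64,/, '');
  fs.writeFile('frames/' + frameNumber + '.png', Buffer.from(base64, 'base64'), callback);
}

function encodeVideo(response) {
  // stitch frames/*.png into an mp4; the response simply waits (non-blocking) until ffmpeg exits
  var ffmpeg = spawn('ffmpeg', ['-i', 'frames/%d.png', '-b:v', '4M', 'out.mp4']);
  ffmpeg.on('exit', function (code) {
    response.end(code === 0 ? 'done' : 'error'); // notify the waiting web client
  });
}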

For this fireflies demo, the images are created at full HD resolution (1920×1080) and saved in the default PNG format. I encoded the mp4 with ffmpeg at a 4 Mbps bitrate.

Some Notes:

Performances
I'm quite satisfied with Firefox 4's performance here – in fact, it is faster than Chrome by an average of 2 seconds every 30 frames. I noted some of the actual numbers somewhere, which I'm too lazy to pull out now. However, one reason why Firefox is faster is that its toDataURL operation takes about 200ms while Chrome takes about 300ms (on the Mac).

On average, for my final render of ~40 particles (I used about 500 particles for benchmarking), Firefox and node.js running locally take about 200 seconds to transfer the 900 images (30 seconds at 30 frames per second), and about 1 minute for the ffmpeg processing at full HD resolution. Safari seems to give reasonable performance on the Mac too.

On my Windows machine, I've tested with the IE10 preview (with backwards compatibility down to 9), Opera, Firefox and Chrome. (Now you know why it's called HTML5 – because you always have to test 5 browsers.) The new IE's performance is pretty good too, and all the browsers are almost on par, with Chrome running a little slower.

PNG Transparency: I realized the renders were quite bad if a transparent background was used. After encoding to MP4, it looked ugly and took up 10x the size. Fix: paint/fill the entire background instead of (or in addition to) clearing the canvas every refresh.

More practical use of this ideas:
Right now this demo only works for a single client-server.

qn: Is it possible to extend this to multiple clients? Yes. Multiple clients can render separate projects or a single project; while on the same project, the work can be distributed by giving each client a different slice of frames. It is even possible to have collaborative raytracing of a single image, as shown here.

qn: Can a particle system animation be rendered in a distributed fashion? Possible, if the model is unified across clients. For random calculations, they should use the same predefined values or the same random seed.

qn: Is this distributed rendering practical? It depends. In this case, a screen video capturing program could have been used, or you could argue that it's cooler to run things in real time anyway. However, this might come in useful when your system can't keep up – for example, when attempting to run 1M particles on canvas 2D, you might opt to render it offline instead. This idea can also work with WebGL (canvas 3D) or more CPU-intensive JavaScript ray tracing methods.

qn: What about untrusted clients? That depends on your application. If it's a social experiment, one could do what SETI and similar distributed programs do: require matching answers from different clients before accepting a solution.

qn: Would distributed rendering really work? This project is not at that stage yet, but the challenge here is about solving bottlenecks, whether the job is CPU- or IO-bound. For example, the rendering performance of this demo can be improved at the network layer. Making an HTTP request for every file incurs high overhead and lag. One way is to use a buffer to store a few rendered images before making a POST, or to use websockets and stream the data continuously with less overhead.


Some other TODOs: use ffmpeg2theora for encoding to ogv for firefox browsers.

Lastly, while I tried googling this idea some time ago, nothing came up, but I have recently found out that collaborative browser-based map-reduce has been thought about and discussed before.

Source code is available @ https://github.com/zz85/RenderFlies

Okay, I have more important stuff to work on, signing off geeknotes today. :)

three.js 3D Text

(Since TextGeometry has been added in r42 of three.js, I decided to repost my note written @ http://www.facebook.com/notes/joshua-koo/geeknotes-3d-text/10150218767153256 – warning: the code example API might be outdated by now.)

If you haven't heard of three.js, it's an extremely cool and simple 3D library in JavaScript started by mrdoob. I suggest checking out the cool examples like "wilderness downtown", "and so they say", and "rome". Three.js supports both canvas and WebGL rendering.

Procedural 3D text geometry is my humble contribution to this project. I've managed to apply the little I knew or had experimented with on geometry, and learned more in the process of adding this feature. I'm thankful to @alteredq too, who tested and cleaned up my code (in addition to adding cool shaders to my text demo), and who is a fantastic contributor to the project.

If a demo speaks more than words, check out the webgl demo (should work with the latest chrome or firefox)

Or, if your browser/graphics card doesn't support WebGL, try the software rendered (canvas 2D) implementation.

Now, what TextGeometry does is help create 3D text geometry quickly in the browser without additional tools (eg. exporting text models from your 3D software), with just a simple call like var text = new THREE.TextGeometry( text, { size: 10, height: 5, curveSegments: 6, font: "helvetiker", weight: "normal", style: "bold" }); There were a couple of motivations for creating this, but one of them was my curiosity about and interest in motion graphics.

For now let me dive into some technical details of how the 3D Text Geometry works.

The steps are as follows:

1) vector paths extraction from fonts
2) points extraction from 2d paths
3) triangulation of polygons for shape faces
4) extruding 2d faces to create 3d shapes.


process of creating 3d typography – points generation and triangulation

1. shape paths
like how text and fonts are rendered on computers, vector paths are needed from the font data. there are 2 main open source projects which have tools for converting fonts to a javascript format so they can be rendered with javascript, namely cufon and typeface.js. typeface.js data files were used here.

2. extraction of points
this step is required for triangulation in the next step. a little math and geometry understanding is useful here, but paths mainly consist of lines and bezier curves. lines are straightforward, while bezier curves require a little math and subdividing to create points from the curve.

3. triangulation or tessellation here is important because that's how the faces of 3d models can be rendered. the polygon triangulation algorithm was first implemented in AS3 before my port to JS. this algorithm, however, does not support holes in shapes, and therefore I had to implement an additional hole slicing algorithm described in http://www.sakri.net/blog/2009/06/12/an-approach-to-triangulating-polygons-with-holes/ (the page is down, so check google's cache if interested)


creating 3d typography in javascript – wireframes

4. creating the 3d geometry
so far 80% of the hard work is done. creating the geometry requires vertices (an array of 3d points/vector3s) and faces (a list of triangles or quads describing an area using indices into the vertices). one just has to be careful with the normals and the winding direction of the vertices (whether clockwise or anticlockwise). the front and back faces are added with the triangles calculated in the previous step, and the extrusion is created with quads (face4)
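a standalone sketch of that last step (plain js, independent of any particular three.js version): given the contour points and the triangles from triangulation, build the two caps and the side quads:

function extrude(points, triangles, depth) {
  var vertices = [];
  points.forEach(function (p) { vertices.push({ x: p.x, y: p.y, z: 0 }); });      // front layer
  points.forEach(function (p) { vertices.push({ x: p.x, y: p.y, z: -depth }); }); // back layer

  var faces = [];
  var n = points.length;
  triangles.forEach(function (tri) {
    faces.push([tri[0], tri[1], tri[2]]);             // front cap
    faces.push([tri[2] + n, tri[1] + n, tri[0] + n]); // back cap, winding reversed so the normal flips
  });
  for (var i = 0; i < n; i++) {
    var j = (i + 1) % n;
    faces.push([i, j, j + n, i + n]);                 // side quad joining the two layers
  }
  return { vertices: vertices, faces: faces };
}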

lastly, it's time to be creative and experimental with the 3d meshes generated.

[MusicGeekNotes] Creating Music With Graphics, Lights and Smokes.

Create some music by clicking or dragging @ http://jabtunes.com/labs/jtenorion.html

** Warning: Can be CPU intensive. Safari / Chrome for best results. If you do not have a fast CPU, use fewer effects in the checkbox options.

Check out a video playing with this.

[Music Notes]
After a while, you may find that you're creating some Chinese-like, Malay-like or Javanese-like music.

This is a fairly cheap clone of the fairly expensive electronic music instrument called the Tenori-On, made by Yamaha.

Anyway, this version is closer to Andre's implementation called ToneMatrix, a Flash implementation using the notes of a pentatonic scale, which lets you make pleasant sounding music even if you click on it randomly. Just YouTube it, and you can find how many others have created music with the ToneMatrix.

[Geeknotes]
First of all, my implementation differs by using an HTML5 canvas to render the graphics instead of Flash.

I use the Sion library for the audio, creating a JS-to-AS bridge to generate sounds. However, with the still-in-progress Mozilla Audio Data API, one could easily modify this program to run fully in JavaScript soon.

My initial visual effects were less than satisfactory compared to Andre's implementation, and I got stuck creating the "ripple effects". I first tried the ripple algorithm used in my previous experiment, but such an animation would be almost unusable and very likely to hang the browser.

After a period of rest, while reading up on perlin noise in canvas, I started having some inspiration for this project again.

It then struck me that I could implement this with a simple particle system with gradients.
I was also inspired by the use of gradients, alpha and composites, which I stumbled upon while looking up collaborative programming.

So right now, although it's not perfect, it's a big improvement over the past.

Go ahead, have fun, create some music, hack the code, or provide some suggestions.

p.s. I found the layout looks like an iPad after I used curved borders – not that I wanted to replicate the iP* devices.

Other WIP screenshots here.

(a cross post on my facebook note)

[geeknotes] Now, Have you met.. Instant QR Codes?

(Imported from my facebook note dated Monday, 28 June 2010 at 01:37)

This is a follow up to my previous geek note, "Say Hi to Instant Barcodes". The quick story here: nothing too fanciful, just a simple HTML5 mashup for instant QR codes using JavaScript, jQuery, and Kazuhiko Arase's QR code JS library. See http://jabtunes.com/labs/qrcode.html

QRCode

The advantage of using a QR code over my previous barcode generator is that this 2D barcode packs more information in it – specifically, for this mashup implementation, 119 binary characters with a 30% recovery rate (using up to QR Code version 10; it can be much more if we implement up to version 40). Think of it as, maybe, Twitter on a picture!


A picture can tell a hundred words, and this tiny qrcode does store a hundred characters.


Yes, the barcode scanner on android works with 2d barcodes.


And I think this is also a good way to send URLs, telephone numbers and other contact details to each other. In the absence of a standard vcard or bluetooth protocol, I think QR codes should work much better!

Feedback would be great! And yes, you can download some codes and send me messages in QR codes! Goodnight! :)

p.s. Tested on all modern browsers (& ie9 beta) except mobile browsers (pls let me know if it works!).