Category Archives: Hobbies

Anything relating to my hobbies goes here.

Experiments behind “It Came Upon”

It has really been a while now, but I thought, okay, we need to finish this – otherwise all of it would probably be left in the dust and I’d never move on. So here it is: an attempt to finish touching on some of the technical aspects, continuing from the post about “It Came Upon”.

Not everything was done the best or even the right way, but there’s some hope that documenting it anyway might benefit someone – at the very least, future me looking back. There are many elements in that demo, so let me try listing the points in the order they appeared.

1. Dialog Overlays


The first thing after landing on the main page is a greeting contained in a dialog. Yes, dialogs like this are common on the net, but guess what’s different about this one? Partially correct, if you guessed rounded corners done with CSS3 border-radius instead of images.

What I also wanted was minimal javascript integration – that is, no more checking browser dimensions and doing layout calculations in javascript. I thought the new CSS3 box model would work, but it gave me so many headaches across browsers that I fell back on a much older technology which actually works – table-style layout. This lets the content be centered nicely, and the only width I had to specify (or maybe not even that) was the dialog width. Javascript was just used for toggling the dialog’s visibility with style.display = 'table';
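Here’s a minimal sketch of that table-style centering, driven entirely from javascript for brevity (the element ids, sizes and inline-style approach are made up for illustration; the real demo keeps its layout rules in the stylesheet):

var overlay = document.getElementById('overlay'); // full-screen wrapper
var cell = document.getElementById('cell');       // single "cell" inside the wrapper
var dialog = document.getElementById('dialog');   // the dialog box itself

overlay.style.cssText = 'display:none; position:fixed; top:0; left:0; width:100%; height:100%;';
cell.style.cssText = 'display:table-cell; vertical-align:middle; text-align:center;';
dialog.style.cssText = 'display:inline-block; width:400px; border-radius:12px; background:#fff;';

function showDialog() {
    // the browser centres the cell's content for us - no dimension maths needed
    overlay.style.display = 'table';
}

function hideDialog() {
    overlay.style.display = 'none';
}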

To see its inner guts, it’s perhaps best to look at the CSS and the JS. Or perhaps read this for a stab at CSS3 rounded borders yourself.

2. Starfields

Star fields are a pretty common thing. The way I implemented them was with three static Three.js particle systems (which aren’t emitters), looping them such that once a starfield crosses the static camera at a particular time, it is moved to the back to be reused, giving an infinite starfield scene.
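Roughly, the looping looks like the sketch below (written against the old ParticleSystem / ParticleBasicMaterial API of that era – newer three.js calls these Points / PointsMaterial – and it assumes an existing scene; the counts and depths are arbitrary):

var SEGMENT_DEPTH = 2000;
var starfields = [];

for (var i = 0; i < 3; i++) {
    var geometry = new THREE.Geometry();
    for (var j = 0; j < 1000; j++) {
        geometry.vertices.push(new THREE.Vector3(
            Math.random() * 4000 - 2000,
            Math.random() * 4000 - 2000,
            -Math.random() * SEGMENT_DEPTH  // stars fill one "slab" of space
        ));
    }
    var stars = new THREE.ParticleSystem(geometry, new THREE.ParticleBasicMaterial({ size: 2 }));
    stars.position.z = -i * SEGMENT_DEPTH;  // stack the three slabs one behind another
    starfields.push(stars);
    scene.add(stars);
}

function updateStarfields(speed) {
    for (var i = 0; i < starfields.length; i++) {
        var stars = starfields[i];
        stars.position.z += speed;
        // once a slab has flown completely past the camera, recycle it to the back
        if (stars.position.z > SEGMENT_DEPTH) stars.position.z -= 3 * SEGMENT_DEPTH;
    }
}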

3. Aurora experiments.
I’ve learnt that it’s sometimes easier to write experiments by starting afresh and simple – in this case with javascript + 2d canvas before doing it as a webgl shader. Here’s the js version on jsdo.it so you can experiment with it.

Javascript Aurora Effect – jsdo.it

This version runs anywhere from a reasonable speed to really slow, depending on the canvas resolution and the number of octaves. The shape of the aurora is generated with 3d simplex noise, where two dimensions correspond to each pixel’s position and the third dimension is time. Simple parameter changes can alter the effect of perlin noise drastically, so it’s not a good idea to have too many factors if you don’t understand what’s going on. I learnt this the hard way after what I thought was a failed experiment to create procedural moving clouds; I reread the basics of perlin noise and started over with a really simple example. In this case, I felt that playing with the X and Y scales created the effect I wanted. These “perlin” levels were mixed with a colour gradient containing the aurora’s spectrum (which moves a little for extra dynamics). Another gradient was then added from top to bottom to emulate the vertical variation in light intensity.
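A stripped-down sketch of the idea (assuming a SimplexNoise class with a noise3d(x, y, z) method – names vary between ports – and a plain green ramp in place of the full aurora gradient):

var canvas = document.getElementById('aurora');
var ctx = canvas.getContext('2d');
var simplex = new SimplexNoise();
var W = 128, H = 128;

function draw(time) {
    var image = ctx.createImageData(W, H);
    for (var y = 0; y < H; y++) {
        for (var x = 0; x < W; x++) {
            // stretch x and squash y for curtain-like bands; time is the 3rd noise dimension
            var n = simplex.noise3d(x * 0.01, y * 0.08, time * 0.0003);
            var intensity = (n * 0.5 + 0.5) * (1 - y / H);  // fade out towards the bottom
            var i = (y * W + x) * 4;
            image.data[i]     = 0;
            image.data[i + 1] = 255 * intensity;  // the real thing mixes an aurora colour gradient here
            image.data[i + 2] = 128 * intensity;
            image.data[i + 3] = 255;
        }
    }
    ctx.putImageData(image, 0, 0);
    requestAnimationFrame(draw);
}
requestAnimationFrame(draw);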


For kicks, I created a GLSL shader version which potentially runs much faster than the canvas version. However, not having much time to integrate the aurora with the demo, I used the canvas version instead. It was reduced to 128 by 128 (chrome ran 256×256 pretty well, but firefox didn’t at the time) with a single octave, then used as a texture on a plane added to the scene. I also experimented with mapping the texture onto an inverted sphere, which gave an unexpected but interesting look.


Finally, after searching around I thought there weren’t many documented ways to create aurora effects, so this was just my approach, though it might not be the simplest or best way either. Recently I’ve found two other procedurally generated aurora examples, which one might wish to look into if interested: http://glsl.heroku.com/e#1680.0, and Eddie Lee, who coded some really beautiful aurora effects for his Kyoto project with GLSL in OpenGL, using 3D hermite spline curves and a wrappable perlin texture (more in that video’s description and shader source code!)

4. Night Sky Auroras + Trails

The long-exposure star trails effect is created by not clearing the buffer. This technique is shown in the trails three.js example. The clearing is then toggled on the “timeline” (see the next point).

5. Director class.
Perhaps it’s now time to introduce the Director class. For time-based animations, you might need something to manage your timeline and animations. Mr.doob has a library called sequencer.js (not to be confused with music sequencing!) which is used in various projects he has worked on; it helps load sequences or scenes based on time. The Director class I wrote is somewhat similar, except that it works on a more micro level, with direct support for animation via the easing functions in tween.js. Perhaps it’s closer to Marcin Ignac’s timeline.js library, except with support for adding Actions (not just tweens) at particular times.

The API is something like

director = new THREE.Director();
director.addAction(0, function() {
        // add objects
    })
    .addTween(0, 4, camera.position, { x: -280, y: 280, z: -3000 },
        { x: -280, y: 280, z: -2600 }, 'Linear.EaseNone')
    .addAction(4.0, function() {
        // here's a discrete action
        camera.position.set(700, 160, 1900);
        callSomethingElse();
    });

// To start just call
director.start();

// in your running loop just call
director.update();

// scenes can also be merged via
director.merge(scene2director);

Simply put, Director is a javascript class which does time-based scheduling and tweening for animations in a scene. I’ve planned to improve and release it, but in the meantime you can look at the javascript source. (This is also integrated with point #)

6. Snow scene

The components that make up the main snow scene can actually be found in the comprehensive three.js examples. The main elements apart from snow are shadows, flares, text, and post processing.

I think I’ve covered why I chose the shadow, flare and text elements in the previous post, so look into those examples linked above and I’ll jump straight into the post-processing element.

7. Post processing

The only post-processing filter used here is a tilt-shift. A real tilt-shift lens can put items at different focal distances in focus, or blur items at the same focal distance. Such lenses are usually used for architectural photography, but they also have a reputation for creating “miniature landscapes”, characterized by blurring at the top and bottom of a photo. This post-processing filter reproduces that kind of look rather than emulating the real lens. The effect is surprisingly nice – it helps create softer shadows and draws focus to the middle of the screen. I had initially wanted to port Evan’s version, which allows more control over the blur plane, but unfortunately that didn’t happen in the time frame I had.
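For reference, a minimal sketch of wiring up such a pass with the tilt-shift shaders bundled in the three.js examples (assuming EffectComposer and friends are loaded; the numbers are arbitrary):

var composer = new THREE.EffectComposer(renderer);
composer.addPass(new THREE.RenderPass(scene, camera));

var hblur = new THREE.ShaderPass(THREE.HorizontalTiltShiftShader);
var vblur = new THREE.ShaderPass(THREE.VerticalTiltShiftShader);

var bluriness = 3;
hblur.uniforms.h.value = bluriness / window.innerWidth;
vblur.uniforms.v.value = bluriness / window.innerHeight;
hblur.uniforms.r.value = vblur.uniforms.r.value = 0.5;  // keep the vertical middle in focus

vblur.renderToScreen = true;
composer.addPass(hblur);
composer.addPass(vblur);

// in the render loop, composer.render() replaces renderer.render(scene, camera)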

8. Snow Particles

There are 2 particle effects used here: one for the snow, the other for the disintegrating text.

I’ve used my particle library sparks.js to manage the snow here. The sprites for the particles are drawn using the Canvas element (like most of my particle examples). Perhaps it’d be harder for me to express in words, so let the related code segment along with its comments do the talking.

The 2 main elements of this effect are that the snow is emitted from a rectangular area (ParallelogramZone) and that it is given a random drift. The “heaviness” of the snow (controllable from the menu options) adjusts particleProducer.rate, which can take the scene from no snow to a blizzard. (Director was also controlling the stopping and starting of snow for the text scene.)
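From memory of the sparks.js API, setting up such an emitter looks roughly like the sketch below (class names and constructor arguments may differ slightly from the library, and all the numbers are invented):

var snowCounter = new SPARKS.SteadyCounter(500);          // particles per second
var emitter = new SPARKS.Emitter(snowCounter);

// emit from a rectangular area high above the scene
emitter.addInitializer(new SPARKS.Position(new SPARKS.ParallelogramZone(
    new THREE.Vector3(-1500, 800, -1500),                 // corner
    new THREE.Vector3(3000, 0, 0),                        // side 1
    new THREE.Vector3(0, 0, 3000)                         // side 2
)));
emitter.addInitializer(new SPARKS.Lifetime(5, 10));
emitter.addInitializer(new SPARKS.Velocity(new SPARKS.PointZone(new THREE.Vector3(0, -120, 0))));

emitter.addAction(new SPARKS.Age());
emitter.addAction(new SPARKS.Move());
emitter.addAction(new SPARKS.RandomDrift(50, 0, 50));     // the sideways drift of the flakes

emitter.start();

// the menu's "heaviness" option then simply tweaks the emission rate,
// e.g. snowCounter.rate = 0 for no snow, snowCounter.rate = 3000 for a blizzard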

9. Text Character Generation

Before going to the text particles, a few comments on the text generation method used here. While we have already demonstrated dynamic 3d text generation in the three.js examples, there’s a slight modification for the text effects here.

Instead of regenerating the entire text mesh each time a letter was “typed”, each character’s mesh is generated and cached (so triangulation work is reduced when a letter repeats) and added to an Object3D group. This makes it possible to control, manipulate and remove each 3d character individually when needed, and more importantly gives better performance. With that in place, the text-typing recording and playback effect was almost done.
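A sketch of the caching idea (the TextGeometry parameters follow the ones shown elsewhere on this blog; material and the layout of x offsets are assumed to be handled elsewhere):

var glyphCache = {};
var textGroup = new THREE.Object3D();

function addCharacter(character, x) {
    var geometry = glyphCache[character];
    if (!geometry) {
        // triangulate each distinct letter only once
        geometry = new THREE.TextGeometry(character, { size: 10, height: 5, font: 'helvetiker' });
        glyphCache[character] = geometry;
    }
    var mesh = new THREE.Mesh(geometry, material);
    mesh.position.x = x;          // lay the letters out left to right
    textGroup.add(mesh);          // each letter stays individually addressable and removable
    return mesh;
}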

10. Text Particles

At the end of each line, the text bursts into particles. This was another area of experimentation for me. Converting text to particles could be done like this:

a. Paint the text onto a 2d canvas.
b. Either 1) randomly place particles and keep those that land within the painted text area, or
2) pick particle positions directly from the painted text pixels. Then spread them along a z-distance.

Method 2 works pretty well, with minimal “particle waste”.
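A sketch of that second method, collecting the painted pixels and sampling particle positions from them (the canvas size, font and z-spread are arbitrary):

function textToParticlePositions(text, count) {
    var canvas = document.createElement('canvas');
    canvas.width = 512; canvas.height = 128;
    var ctx = canvas.getContext('2d');
    ctx.font = '100px sans-serif';
    ctx.fillStyle = '#fff';
    ctx.fillText(text, 10, 100);

    // gather every pixel that belongs to a glyph (alpha > 0)
    var data = ctx.getImageData(0, 0, canvas.width, canvas.height).data;
    var filled = [];
    for (var i = 3; i < data.length; i += 4) {
        if (data[i] > 0) filled.push((i - 3) / 4);
    }

    // pick random glyph pixels, so no samples land outside the text
    var positions = [];
    for (var j = 0; j < count; j++) {
        var p = filled[Math.floor(Math.random() * filled.length)];
        positions.push(new THREE.Vector3(
            p % canvas.width,
            canvas.height - Math.floor(p / canvas.width),  // flip y, the canvas grows downwards
            Math.random() * 20                             // spread along z for a volumetric feel
        ));
    }
    return positions;
}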

However, I thought that matching typeface.js fonts to the 2d canvas might take me a little more time, so I decided to use the mesh-to-particles THREE.GeometryUtils.randomPointsInGeometry() method (first seen in @alteredq’s shadowmap demo), which places particles on points of the mesh (sur)faces instead. While I preferred the previous approach, since it gave a nicer volumetric feel, the latter likewise worked, and on the bright side it showed the shape of the mesh better when viewed from the side.

11. Key recordings.
The animation of the 3d text messages was done via “live recording” with a Recorder class (inside http://jabtunes.com/itcameupon/textEffects.js). Every time a keypress is made, the event is pushed to the recorder, which stores the running time and the event. The events are then serialized to JSON, which can be loaded at another time. The Recorder class also interfaces with the Director to push the events for playback. This is how users’ recordings are saved to JSON and stored on the server.
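The gist of such a recorder, sketched from the description above rather than the actual textEffects.js source:

function Recorder() {
    this.events = [];
    this.startTime = 0;
}

Recorder.prototype.start = function() {
    this.startTime = Date.now();
    this.events = [];
};

// called on every keypress
Recorder.prototype.record = function(type, data) {
    this.events.push({
        time: (Date.now() - this.startTime) / 1000,  // running time in seconds
        type: type,
        data: data
    });
};

// serialize for saving to the server; JSON.parse() restores it later
Recorder.prototype.toJSON = function() { return JSON.stringify(this.events); };

// hand the recorded events to the Director for playback
Recorder.prototype.pushToDirector = function(director, handler) {
    this.events.forEach(function(e) {
        director.addAction(e.time, function() { handler(e); });
    });
};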

12. Audio Playback
The Jasmid library mentioned in the previous post is used for browser-based synthesis of midi files. This version uses the Web Audio API for chrome, and some tweaked buffer settings for better playback with the firefox Audio Data API.

13. Snowman

I almost forgot about the snowman. It was “modelled”, or rather procedurally generated, using Three.js primitives (spheres, cylinders etc). One revision before the final even had fingers. Yeah, it was a little painful having to refresh the browser to see each change, so the web inspector console helped a little. Still, animating with the Director class involved even more trial and error, and waiting for playback – it was only later that I added a .goto(time) to the Director class. But I made a few interesting accidental discoveries, like the impalement of the snowman, which came from setting an object to a large negative scale.
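The gist of that procedural “modelling”, sketched with spheres only (the real snowman also used cylinders, and all the sizes here are invented):

var snowman = new THREE.Object3D();
var snow = new THREE.MeshLambertMaterial({ color: 0xffffff });

// body, torso and head: three spheres of decreasing size stacked upwards
[[0, 30], [45, 22], [80, 15]].forEach(function(s) {
    var ball = new THREE.Mesh(new THREE.SphereGeometry(s[1], 16, 16), snow);
    ball.position.y = s[0];
    snowman.add(ball);
});

// a stretched little sphere as a stand-in carrot nose
var nose = new THREE.Mesh(new THREE.SphereGeometry(2, 8, 8), new THREE.MeshLambertMaterial({ color: 0xff6600 }));
nose.scale.z = 5;
nose.position.set(0, 80, 14);
snowman.add(nose);

scene.add(snowman);

// the accidental "impalement" effect: a large negative scale turns the mesh inside-out
// snowman.scale.y = -40;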

As those who follow me on twitter might already know, I wrote the Three.js Inspector a couple of weekends later, which would potentially have made things much easier. Perhaps more on ThreeInspector in another post.

Concluding thoughts
Wow, you’ve read all the way here? Cool! It almost felt painful trying to finish this writeup (reminds me of school). If there’s anything I missed, feel free to contact me or dig into the source. While none of this is groundbreaking, it didn’t come together at the snap of a finger, and I’ve learnt to build experiments in small parts and integrate them modularly. There’s room for refinement, better tools, and better integration. Signing off – stay tuned for more updates on experiments! :)

Behind “It Came Upon”

Falling snowflakes, a little snowman, a little flare from the sun. That was the simple idea of a beautiful snow scene I thought could be turned into an interactive, animated online christmas greeting card for friends. It turns out “It Came Upon” became one of my larger personal “webgl” projects of 2011 (after arabesque).


(screencast here if you can’t run the demo).

The idea might have taken only a second, or at most a minute, but far from being a simple piece of work, the final production needed the fusion of many more experiments done behind the scenes. I guess this is how new technologies get developed and showcased in Pixar short films too. I suspect a writeup of the technical bits would be really lengthy or boring, so I’ll leave that for another post, and focus here on the direction I had for the project and share some thoughts looking back.

The demo can mainly be broken down into 4 scenes
1. splash scene – flying star fields
2. intro scene – stars, star trails and auroras (northern lights)
3. main scene – sunrise to sunset, flares, snows, text, snowman, recording playback, particles
4. end scene – (an “outro”? of) an animated “The End”

Splash scene.
It’s kind of interesting that, as opposed to flash intro scenes, you don’t want music blasting the moment a user enters the site. Instead, the splash scene is used to allow files to finish loading, and gives another chance to warn users off running the demo, especially if WebGL is not enabled in their browsers. Then there was the thought of giving users some sense of flying through the stars while they get ready. (I hope nobody’s thinking of Star Wars or the Windows 3.1 screensaver.) Then again, flight in splash scenes is not new – we had flocking birds in Wilderness Downtown, and flying clouds in Rome.

Intro scene.
The intro scene was originally added to allow a transition into the main snow scene, instead of jumping straight in and ending there. Since it was daylight in the snow scene, I thought a sky changing from night to day would be nice, hence the star trails and Aurora Borealis (also known as the Northern Lights). I’ve captured some star trails before, like these,
or more recently in 2011, this timelapse.

a little stars from Joshua Koo on Vimeo.

But no, I haven’t seen the Aurora Borealis in real life, so this was just my impression from photos and videos seen on the net – hopefully one day I’ll get to see it for real! Both still and time-lapse photography appeal to me, which also explains the combination used:
time-lapse sequences give the sense of the earth’s rotation, and
the still photographs imitate long exposures, which create beautiful long streaks of stars.

Both a wide lens and a zoom lens were also used in this sequence for a more dramatic effect. I also fixed the aspect ratio of the screen to a really wide format, as if it were a widescreen movie or a panorama. Oh, by the way, “It Came Upon” comes from the title of the music played in the background, the christmas carol It Came Upon a Midnight Clear.

Snow scene.

At the centre of the snow scene is a 3D billboard, something like the Hollywood sign or the Fox intro. It’s just a static scene, but the opening was meant to be more mysterious, slowly revealing the text through the morning fog while the camera moves towards it and then pans along it, interweaving with still shots from different angles. As the sun rises, the flare hits the camera and the scene brightens up. The camera then spins around the text in an ascending manner, as if it could only have been done with expensive rigs and cranes or mounted on a helicopter. Snowfall has already started.

Some friends have asked why the camera is always facing the sun. Having the sun in front of the camera makes the flare enter the lens, and allows shadows falling towards the camera to be more dramatic – part of making or breaking the rule of not shooting into the light, which I blogged about some time ago.

Part of this big main scene was embedding a text message. I had a few ideas for doing this. One was having 3D billboards laid up a snow mountain, with the message revealed as the camera flies over it. Another was firing fireworks into the sky and forming text in the air. Pressed for time, I implemented creating text on the ground and turning it into particles so the next line of the message could be revealed. The text playback was created by recording keystrokes. The camera jerks every time a key is pressed to give more impactful feedback, like the feel of using a typewriter. On hitting the return key, the text characters turn into particles and fly away. This was also the interaction I intended to give users, allowing them to record and generate text in real time and share it with others – for example, a user recording by Chrome Experiments, or my new year greetings.

Outro.
And so, like all shows, it ends with “The End”. But not really, as one might notice a parody of the famous Pixar intro: the snowman comes to life and eventually hops on a letter to flatten it. Instead of hopping across the snow scene, it falls into the snow, swimming like a polar bear (stalking across the snow, as one viewer commented), towards the letter T. After the T fails to be flattened, the snowman gets impaled on it and dragged down into the snow. The End.

A friend asked for blood from the snowman, but in fact I had already tried tailoring the amount of sadism – it wasn’t the mildest (at least I thought), but I didn’t want it overly sadistic either (many commented that the snowman was cute). Interestingly, as I was reviewing the snowmen in Calvin and Hobbes, I realized there’s a horror element to them, as much as I would have loved to have snowmen as adorable as Calvin himself or Hobbes. Then again, they could represent Calvin’s evil genius, or the irony that snowmen never survive past winter.

Thoughts.

First of all, I realized that it’s no easy task to create a decent animation or any form of interactive demo, especially with pure code, and definitely not something that can be done in a night or two. I definitely have respect for those in the demo scene. I had the idea in a minute and a prototype in a day, having thought: what could be so difficult, there’s three.js to do all the 3D work and sparks.js for particles? Boy, was I wrong – I now have a much better understanding of 1% inspiration and 99% perspiration. With my self-imposed deadline to complete it before the end of the year, not everything was up to even my own artistic or technical expectations, but I was just glad to have it executed and done with.

Looking back at 2011, it was probably the year of graphics, 3D and WebGL programming for me. I started with nothing and ended with at least something (I still remember asking noob questions when starting out in irc). I did a lot of reading (blogs and siggraph papers), a whole lot of experimentation (though a fair bit went uncompleted or failed), generated some wacky ideas, contributed in some ways to three.js, now have a fair bit of knowledge about the library and webgl, and have had at least 2 experiments featured. What’s in store for me in 2012?

Music Colour Particles

Prelude


My latest demo renders particles with WebGL and uses audio APIs for audio analysis.

Experience it @ http://jabtunes.com/labs/arabesque

A decent machine with a recent browser (chrome or firefox) is recommended. If your browser can’t play it, check out the screen capture video here.

Prologue

Somehow I’m attracted to the arts, both audible and visual (and is programming a scientific art?). Perhaps it’s a way I can express myself; I picked up music past the age most kids do, and started learning this 3d stuff after graduating, without ever attending a graphics class.

What’s interesting is that long ago I could admire music visualizers (e.g. Milkdrop in winamp) without understanding how they worked. Even after knowing some programming, the amount of work to create such graphics and audio processing looked really scary. Fast forward to today: all of this is actually possible with javascript running in browsers, and with the vast knowledge online, I just wanted to try it. And what better time to release this demo than after the weekend Chrome 14 was hopefully pushed to the masses, shipping with the web audio api.

Inspiration

I needed lots of it and tried getting it everywhere I could – flash blogs, papers, even cutting-edge html5/webgl demos, etc. But perhaps the greatest concentration of music visualization work with particles I found was in videos created with visual effects/motion graphics/3d authoring tools, for example Adobe After Effects and plugins like Red Giant’s Trapcode suite (Form and Particular for particles, SoundKeys for audio) and Krakatoa. Among the top hits of such videos is a brilliant music visualization by Esteban Diacono.

Particles

So let’s start off with particles. I first did some simple experiments in actionscript after reading “Making Things Move”. Then I experimented with some javascript dom-based particle effects for dandelions, then some for the jtenorion experiment. Not too long ago, I revisited different javascript techniques (dom, css, canvas2d) with my simple fireflies particles, also used for Renderflies.

So it started out with really lousy stuff, but I learnt a little more each time. Then I learnt about three.js and also looked at particle demos done in actionscript. I wanted an easy way to create particle systems in three.js, so I wrote sparks.js, which is like the Flint or Stardust particle engines for flash and their 3d engines. (Okay, this should have been another post.)

Part of my experiments with three.js and sparks particles was moving emitters along paths. I started by creating random paths (beziers, splines etc), moving the emitter and chasing it with the camera. Deciding there were too many variables and things were getting messy, I decided to fix the x,y movement of the emitter and control the direction of particle movement instead. With particles moving away from the emitter, it still created an illusion of movement, and I stuck with that till the end of this demo.


Experimenting with bezier paths and stuff.

Before continuing on particles, let’s look at the music and audio aspects.

Music

Picking music for the demo was tough – I thought a piece too short might not be worth the effort, but a piece too long would be extremely difficult technically and artistically. I decided I could first try recording some music myself, before asking friends to help, sourcing creative commons music online, or, as a last resort, purchasing some. So I went with the first option, and my friends at the NUS orchestra kindly allowed me to use the practice room for the recording.

Out of the few pieces I narrowed down to record, the Arabesque turned out to be the most suitable despite the mistakes made. Oh well, I need more practice. The piece was written by Claude Debussy, an impressionist French composer who also composed Clair de Lune, as heard in Ocean’s Eleven’s ending and several other music visualizations. I happened to stumble upon the Arabesque while looking through a piano book I have, and kinda grew to love this piece too. Anyway, feel free to download the mp3 or ogg.

Audio Analysis
Most non-flash demos doing music visualization either take pre-baked audio data or use the Audio Data API, which is available only on firefox. The Web Audio API for webkit browsers provides a realtime analyser node to perform an FFT (fast fourier transform, for converting into the frequency-amplitude domain). It seems there aren’t any libraries out there yet which do spectrum analysis on both chrome and firefox (although there are solutions that fall back from the firefox audio api to flash), so I figured I had to learn both APIs and see what I could do. Thanks to Mohit’s posts on the Audio API, using the Web Audio API wasn’t difficult – although there were kinks too, for example not being able to seek through audio correctly, or get its playing time accurately.
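Underneath, the webkit side boils down to something like this sketch (using the old noteOn() call of that era; decodedBuffer is assumed to have been decoded elsewhere):

var context = new webkitAudioContext();
var analyser = context.createAnalyser();   // the realtime analyser node doing the fft
analyser.fftSize = 2048;

var source = context.createBufferSource();
source.buffer = decodedBuffer;
source.connect(analyser);
analyser.connect(context.destination);
source.noteOn(0);                          // start playback (start() in today's api)

var freqData = new Uint8Array(analyser.frequencyBinCount);
function process() {
    analyser.getByteFrequencyData(freqData);  // frequency-amplitude bins, 0..255
    // average a slice of bins to get one "level", e.g. the bass
}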

The result of this work to simplify the api is a javascript class I call AudioKeys.js. What AudioKeys does is simply load an audio file, play it back and return its levels. Usage is pretty simple.


var audio = new AudioKeys('arabesque1.ogg');
audio.load(function() {
    // When audio file is ready to play
    audio.play();
});

The AudioKeys API allows you to “watch” or monitor portions of the frequency spectrum, which means you can get separate amplitude levels from monitors watching different parts of the spectrum (e.g. one for bass, one for treble, one for mids).

Add monitors and retrieve audio levels.


// adds an audio monitor
audio.addWatchers(startFreqBin, endFreqBin, lowLimit, highLimit);

// in your application loop
audio.process();

// returns the average amplitude level of the monitored section of audio
levels = audio.getLevels(0);

I probably got the inspiration for this from some of the video tutorials using Trapcode SoundKeys, and I created a tool to help view the frequency spectrum. Beyond spectrum viewing, this tool allows the creation of monitor bins.


Drag a range on the spectrum to monitor specified parts.
http://jabtunes.com/labs/arabesque/audiotests.html

On firefox, AudioKeys falls back to MozAudioKeys, a firefox implementation of the same AudioKeys api which internally uses dsp.js for its fft processing. However, getting both implementations to provide the same levels is a pain (somehow the levels are not analyzed or normalized the same way), and I hacked MozAudioKeys to give usable levels for the demo. If anyone has a more elegant or correct way to do this, please let me know!

Audio Levels with Particles
Now we have to make the audio interact with the particle system. Even without audio, a particle system has plenty of parameters to tweak. You can check out the developer version of the demo at jabtunes.com/labs/arabesque/developer.html, which has controls to play around with the particle system parameters. This version also shows the particle count, current audio time, and average fps.

Now, sparks.js has its own engine loop, decoupled from the render loop in which data is processed for rendering to webgl or the screen. The render loop uses Paul Irish’s wrapper around the browsers’ requestAnimationFrame() to paint the screen when there are resources. For example, if the rendering is heavy, the render loop might only manage a few frames per second. During this time, however, the sparks engine continues to run perhaps 50 or 100 times a second, ensuring that particle birth, spawning, actions and death take place.

Since the particle engine runs at a more predictable interval, I’ve added audio.process(), which performs the fft, to the particle engine’s loop callback. After processing, one can use these values to alter different parameters of the particle engine – say the velocity, colour, direction, acceleration etc. Since I used 2 watchers – one for the bass and one for the mids – the levels add up to a range from 0 to 1.2. This is multiplied by the maximum height I’d like my emitter to reach, in this case 300 units in y. As audio levels move up and down quickly, the peak height is recorded and allowed to fall a little more slowly when the audio level drops; this becomes the target height to achieve. The difference between this and the current height is the difference in y-position, and that y-diff can be added to the y-component of the velocity of new particles, giving a bubbling flow of particles when there is amplitude in the music.
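In sketch form (the constants are made up, and the second watcher is assumed to be index 1):

var MAX_HEIGHT = 300, FALL_SPEED = 2;
var peakHeight = 0, emitterY = 0, newParticleVelocityY = 0;

function engineCallback() {                 // runs inside the sparks.js engine loop
    audio.process();
    var level = audio.getLevels(0) + audio.getLevels(1);  // bass + mids, roughly 0..1.2
    var targetHeight = level * MAX_HEIGHT;

    // let the peak rise instantly but fall back slowly when the music gets quieter
    peakHeight = Math.max(targetHeight, peakHeight - FALL_SPEED);

    var yDiff = peakHeight - emitterY;
    emitterY += yDiff;                      // move the emitter towards the target height

    // newly spawned particles inherit some of that vertical motion,
    // giving the bubbling upward flow when the music swells
    newParticleVelocityY = yDiff * 5;
}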

This is the biggest factor used in the demo. Other minor effects include changes in emission rate and a slight vignetting flicker. Now let’s look at some of the other effects employed in the demo.

Effects
Hue change – This is a technique I first saw on wonderfl, and I’ve kept using it in past particle demos. Based on the Hue/Saturation/Lightness system, the hue is rotated through the entire color circle. Perhaps I should do a color ramp next time?

Colors – Subtractive blending is used here, so it feels a little like a watercolour effect rather than fireworks, which use additive blending.

Vignette – as mentioned earlier, this is done with a fragment/pixel shader, created using alteredq’s new EffectComposer, which simplifies post-processing.

Blur – I started experimenting with some blur shader pass filters, but since I was generating the particles’ sprites procedurally using the 2d canvas and radial gradients, they looked soft and blurred enough that there was no need to add blur in post. Which is good, because it saves a bit of frame rate. Particle sizes are random between 40 and 160, so there’s a variety of small beads and large clots of ink.

Turbulence
I wanted to make the particles more interesting by spiralling around the x-axis. I tried adding a wind effect blowing in a circle, but it didn’t turn out well. I then learnt that turbulence can be created with perlin noise, or simplex noise, which is an improvement on Perlin’s original algorithm. I used Sean Banks’ port of simplex noise in javascript, and added 4D noise without knowing whether I needed it.


An undesired effect which looked like a spiral.

I then wrote a Turbulence action class for sparks.js, which one can use with sparksEmitter.addAction(new Turbulence()); – the particles then appear more agitated depending on the parameters you use. This seemed to work quite well, and it runs pretty fast in chrome, with perhaps at most a 5fps penalty. Firefox, however, is less forgiving and suffers a 10fps loss. So what to do? I decided to try shader-based perlin noise instead, and used the same vertex shader as in this turbulent cloud demo. Maybe it’s not exactly accurate, since the turbulence doesn’t affect the particle positions in the sparks engine, but at least there’s some illusion of turbulence and it runs really quickly, with almost no fps penalty.
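A hedged sketch of what such an action can look like; the update(emitter, particle, time) signature mirrors the other sparks.js actions, particles are assumed to expose position/velocity vectors, and noise3d() is the simplex helper mentioned above:

function Turbulence(strength, scale, speed) {
    this.strength = strength || 10;
    this.scale = scale || 0.01;
    this.speed = speed || 0.0005;
}

Turbulence.prototype.update = function(emitter, particle, time) {
    var p = particle.position;
    // sample a noise field at the particle's position, with time as an extra dimension
    var nx = simplex.noise3d(p.y * this.scale, p.z * this.scale, time * this.speed);
    var ny = simplex.noise3d(p.z * this.scale, p.x * this.scale, time * this.speed);
    var nz = simplex.noise3d(p.x * this.scale, p.y * this.scale, time * this.speed);
    particle.velocity.x += nx * this.strength;
    particle.velocity.y += ny * this.strength;
    particle.velocity.z += nz * this.strength;
};

// sparksEmitter.addAction(new Turbulence());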

More optimizations.
Now for some final optimization to make things run more smoothly, especially with firefox using 50% cpu running dsp in javascript. In three.js, a set of particles is reserved as a particle pool for webgl, so new webgl particle system buffers need not be created – at least that’s my understanding. So we have 2 pools of particles: one for webgl rendering and one for the sparks particle engine. Particles unused by the engine are “hidden” by placing them at infinite positions and changing their color (to black). When the engine requires them, they are assigned to a particle’s target and take on their true colours, size and position. Now, while a webgl particle pool of 200,000 seems to run reasonably well, each render loop takes longer, since it takes more time to transfer the buffer from the CPU to the GPU every frame. Reducing the pool is a great idea, but not by too much, in case it runs out. So I reduced the three.js particle pool to 8000, having observed that the number of live particles peaked at only 1k+, so there were still almost four times more particles than needed. The frame rate now averages 30 to 60 frames per second (with firefox peaking at 70-80fps).

Final touches – Text Captions
Since a single particle trail and a bad recording can only hold so much attention, I decided to experiment with adding text captions before releasing. First, pick a nice font from Google Web Fonts; next, experiment with CSS3 effects. Text shadows are used for some blurring (inspired by this), then add some css3 transition animations and do cross-browser testing (! – at least I’m only targeting ff and chrome this time). To script the text captions through the duration of the music, I decided to add the text in html. For example, take this:

<div class="action" time="20 28">
Please enjoy.
</div>

This means the message “Please enjoy” appears at 20 seconds and exits at 28 seconds into the music. I added a getActions() function to the render loop. This method accepts 2 parameters, the current and the last playing time, whose values are taken from audiokeys.getCurrentTime();

getActions() handles not just the entrances and exits: half a second before the actual time, it moves the element to a “prepare to enter stage” position and state using a css class. At entrance time, the fade-in css class with the transition properties is enabled, and that’s where the fade-in happens. At ending time, the “fade out” class with its transition is enabled and the text starts to fade out. 2 seconds later, the “actionHide” class is applied, which basically adds “display:none;” to hide the element. It turns out this works pretty well (although text shadows are inconsistent across browsers – ff’s are lighter than chrome’s), and potentially this could become a javascript library too. Pretty cool css3 effects, right? One additional point to note: the frame rate seems to drop by almost half when the css3 animations are running. My guess is that drop shadows, text shadows and gradients are generated as images internally by the browser, and that’s just a bit intensive, despite running natively. Anyway, that’s my guess.
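A sketch of that scheduler (the css class names here are illustrative, not necessarily the demo’s exact ones):

var actionElements = document.querySelectorAll('div.action');

function getActions(currentTime, lastTime) {
    for (var i = 0; i < actionElements.length; i++) {
        var el = actionElements[i];
        var times = el.getAttribute('time').split(' ');    // e.g. time="20 28"
        var start = parseFloat(times[0]), end = parseFloat(times[1]);

        if (lastTime < start - 0.5 && currentTime >= start - 0.5) {
            el.classList.add('prepare');                   // move into its off-stage position
        }
        if (lastTime < start && currentTime >= start) {
            el.classList.add('fadeIn');                    // the css transition does the fading
        }
        if (lastTime < end && currentTime >= end) {
            el.classList.remove('fadeIn');
            el.classList.add('fadeOut');
            // two seconds later, hide the element completely with display:none
            setTimeout(function(elem) { elem.classList.add('actionHide'); }.bind(null, el), 2000);
        }
    }
}

// in the render loop:
// getActions(audio.getCurrentTime(), lastTime); lastTime = audio.getCurrentTime();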

Coda
So that’s all I can think of for now. Let me know if you have any questions or comments! I’m not sure whether such a long write-up has bored anyone. I’m pretty glad to have finished this demo – perhaps just 6 months ago I couldn’t even have imagined how I could do this, and I’ve learnt quite a bit along the way. I was also worried I would be spending lots of time on something that didn’t turn out well, but I’m glad about the retweets and nice comments so far! Google Analytics also tells me that the average time on site is 4 minutes, the length of the song – I had worried it was too long and that many would close their browsers 30 seconds into the piece, so I’m quite relieved by those numbers. \(^.^)

Requiem
Despite having come a long way, this only tells me there’s a greater journey to travel. I’ve been thinking of implementing shadows, and perhaps trying out other particle and fluid techniques done more on the GPU. For example, mrdoob pointed me to a fairlight/directtovideo music particles demo – that’s really the way to go! For now though, I might need to take a break and rest, since I’ve fallen sick over the past few days (not sure if it’s too much webgl or too much chocolate), and I’ll also need to concentrate on my day job before continuing to explore this journey!

3D Text Bending & UV Mapping

Another backdated post, on some new features added to three.js ExtrudeGeometry a couple of weeks ago.

———-

The 2 most notable changes are
1. 3D Text Bending – This allows text to be bent or wrapped along a spline path. The Curve classes share a common API, so straight lines, bezier curves and catmull-rom splines are supported. The initial inspiration for this work came from this neat article: http://www.planetclegg.com/projects/WarpingTextToSplines.html

2. UV mapping – Finally managed to add UV mapping for the extrude geometry classes.

3D Text Bending

Curve classes contain methods to get tangents and normal vectors at a point t, and one use case is path wrapping/bending. CurvePath is now a connected set of curves exposing the Curve interface/API. Since it shares the same interface as the base Curve class, a bend parameter can take in a curve, an array of curves (a CurvePath), or a path (from the drawing api).

To apply path bending to a shape, use shape.addWrapPath(path). Calling getTransformedPoints() runs the transformation on extractAll(Spaced)Points() internally.

[edit] One important key to how path bending works is that the text is transformed according to the tangents of the spline path. I wrote the base Curve class to calculate the tangent from the gradient over a tiny difference in t (kind of like using limits to derive the differential). While this can calculate the tangent of any generic sub-curve, a sub-curve should override it for faster and more accurate differentiation. I should not forget to thank Puay Bing for helping me with the derivative of the cubic bezier spline the other day!
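That finite-difference tangent, as a standalone sketch:

function getTangent(curve, t) {
    var delta = 0.0001;
    var t1 = Math.max(t - delta, 0);
    var t2 = Math.min(t + delta, 1);
    var pt1 = curve.getPoint(t1);
    var pt2 = curve.getPoint(t2);
    // the secant through two nearby points approximates the derivative at t
    return new THREE.Vector2(pt2.x - pt1.x, pt2.y - pt1.y).normalize();
}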


Bend in text example (http://i53.tinypic.com/fyjgz.png)


Bend with spline (http://i55.tinypic.com/2m7vjt3.png)


Bend with connected curves (http://i54.tinypic.com/33u8xa9.png)


Cubic and Quadratic Beziers (http://i51.tinypic.com/30ws1t4.jpg)

UV Mapping
UV mapping applies a texture onto a material so that it wraps around the object.

ExtrudeGeometry now takes material and extrudeMaterial: material is for the front/back faces, extrudeMaterial for the extruded and bevelled faces.

Using these 2 gradient textures
textures material

can produce
uv three

with sufficient subdivisions, you can get a smooth-looking model.
close up on 20 subdivisions

advice learnt from mrdoob: “experiment, experiment, experiment”
experimented and flipped

i think i just fell in love with uv and gradients, until some transparency came
some transparency

Finally for some demos – TextBending & Text LOD’s on Github, UV Mapping at http://jsdo.it/zz85/h5EM and remake of mr.doob’s “Hello” at http://jsdo.it/zz85/hello

The end of the weekend means the end of my code tinkering for the week again. There’s a lot more exciting/crazy stuff to explore and work on; unfortunately, good things may have to wait.

At least I think I can put a few small checks on my never-ending TODO list. (^^)

Signing off geeknotes.

——-
After adding uv mapping, alteredq added smooth shading to the webgl text example. Pretty cool, right?

Extrusion Bevels – Three Attempts in Three.js

Three.js is out now at r43 with more features. Check it out! :)

In a previous post I highlighted my approach for creating 3d geometry by extruding a 2d text font (shape) along its z-axis. There have been a couple of changes for Text too – `Text` is now `TextGeometry`, and much of what was in that class has been refactored into a couple of other classes, so we now have a generic extrude geometry class, `ExtrudeGeometry`, which applies not only to text fonts but to any generic closed shapes, with support for holes via the `Curve` and `Path` classes. Along with TextGeometry comes wrapping to splines/curved paths, maybe a topic for another post ;)


Bevel and Text Geometry with Level-of-details

In this post, I talk a little about bevelling (termed fillet caps in Cinema4D), which I mistakenly labelled as “bezel” at first (the round part around Rolex watches). A bevel slopes or rounds the corners of an extruded block, kind of like chiselling or sandpapering a wooden block in the real-life analogue. Bevelling seems to be a common feature in 3d programs, and you’ve probably even used it in applications like Photoshop or Word. While seemingly a simple feature, it took me almost four rewrites to get to a version I’m satisfied with, and I’m sharing my approaches below.

To start off, I wasn’t even sure whether to keep the original shape at the extruded section or at the front and back caps. Just thinking about this stirred internal debate and caused a fair amount of code rewriting. After some experimentation and observation, I decided that keeping the original shape at the caps and widening the shape along the extruded body looks better for text. It made even more sense after watching Greyscale Gorilla describe using fillet caps to give fonts a strong look and smoother edges in C4D.


Interesting strange effect in my WIP bevels

Expansion Around Centroids
The first approach I quickly jumped in to code was to scale the shape using its contour points. To know what to scale around, I used the centroid, calculated by averaging all the points of the contour. It works pretty well, but what about holes? The centroid of a hole’s contour points is also calculated, but instead of scaling outwards like the shape’s perimeter, holes are scaled inwards in the opposite direction. This seems to work pretty well for many shapes until… you take a careful look at the tip of the “e”. This strange appearance is due to the concave nature of the shape. Drawing it on paper, it’s pretty easy to understand why this algorithm does not work on the inside edges of concave shapes.

Expansion via angled midpoints
So approach 1 seemed successful, but failed for certain cases. How can we solve its problem? One way is perhaps to break down the concave shapes so that only convex shapes remain, but that would have its own set of problems. So I took another approach – extruding the corners based on the angles between edges. Each new corner is pushed outwards along the direction that bisects the angle between the two lines connected to the corner, by the bevel amount. Again a simple method, and it seems to work pretty well, but on careful inspection one notices strange-looking corners, especially at sharp edges. Since the expanded corners are determined by angles, there is no guarantee that the bevel distance from the edges is equal throughout; it is affected by the shape.

Corners based on Edges Offsets
In the 2nd approach, the 2 edges connecting at each point are required to compute a single bevelled point. The 3rd approach starts out the same, but then the edges are offset outwards along their normals (90 degrees from the edge’s slope) to obtain the extruded edges. From these offset positions, the new points are calculated. In pseudo-code:

For each point,

find the line connecting to the point;
the left perpendicular normal of this line is used, and

for the line connecting from the point,
the right perpendicular normal of the line is used.

Offset both lines a unit along their associated normals, and find the
intersection of these 2 new lines.

The intersection is the new point.

Now this works much better for most cases, since the new bevelled edges are consistently equidistant from the original edges. Unfortunately, at sharply converging edges a point from these sharp corners seems to go missing: in such rare cases, the intersection point of the offset edges reverses its direction and pushes the point much further away from where it should be. The workaround is to revert to the algorithm of the 2nd attempt for those points, to prevent such artifacts.
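For one contour point, the third approach boils down to something like this sketch (it assumes a counter-clockwise contour so that the chosen perpendiculars point outwards):

function getBevelledPoint(prev, pt, next, amount) {
    // edge directions into and out of the point
    var d1 = { x: pt.x - prev.x, y: pt.y - prev.y };
    var d2 = { x: next.x - pt.x, y: next.y - pt.y };

    // perpendicular edge normals, scaled to the bevel amount
    var len1 = Math.sqrt(d1.x * d1.x + d1.y * d1.y);
    var len2 = Math.sqrt(d2.x * d2.x + d2.y * d2.y);
    var n1 = { x: d1.y / len1 * amount, y: -d1.x / len1 * amount };
    var n2 = { x: d2.y / len2 * amount, y: -d2.x / len2 * amount };

    // offset both edges along their normals and intersect the two offset lines:
    // (prev + n1) + s * d1  ==  (pt + n2) + u * d2
    var cross = d1.x * d2.y - d1.y * d2.x;
    if (Math.abs(cross) < 1e-10) {
        // nearly collinear edges: just offset along one normal
        return { x: pt.x + n1.x, y: pt.y + n1.y };
    }
    var qx = (pt.x + n2.x) - (prev.x + n1.x);
    var qy = (pt.y + n2.y) - (prev.y + n1.y);
    var s = (qx * d2.y - qy * d2.x) / cross;
    return { x: prev.x + n1.x + s * d1.x, y: prev.y + n1.y + s * d1.y };
}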

So here it is: at the end of the bevel attempts, while not 100% perfect, it has reached a state where I’m generally satisfied with the results.

If you’re interested in more, check out the new examples and the source on github: https://github.com/zz85/three.js/blob/experimental/src/extras/geometries/ExtrudeGeometry.js

Stay tuned for more development updates. :)

disclaimer: I never got the chance to take any computer graphics classes, nor have I come across any papers on this topic, so please feel free to point me to or suggest better approaches if you know of any.

p.s. for curved bevels, I used a sinusoidal function of t. I wonder if there’s a better way to go about this?

Distributed Web Rendering

(This note was the first of a series written in early May this year and shared on my facebook notes. It’s hard to believe it hasn’t been that long since then, yet I’ve already ventured much deeper into the 3d and webgl world.)

The Short: Watch this short 30s video.

Fireflies rendered with RenderFlies.

The Long: A demo using web browsers to render images before creating the final video render on the server side.

Motivations:
Just before sleeping, ideas revolve around my head; this was just one of them, but it ended up manifesting in code as a proof of concept last night.

I imagine that anyone who does video encoding, post-processing or 3d finds the rendering process slow and painful, which is why supercomputers and render farms are used for serious work, as in the movie industry.

Since web browsers now support the canvas element and other html5 technologies well, and javascript performance has greatly improved, this is really an exciting time for web developers (although not too long ago I imagine flash and actionscript were the exciting thing). My idea was that a simple distributed rendering system could be built with html5: such a system can be deployed easily, and widely available web browsers can act as thin, lightweight render clients.

The how:

To start hacking this system together, I first modified my html5-canvas-based fireflies experiment. Instead of the usual canvas repainting at a regular interval, a few changes are needed: after updating the particles and repainting the canvas, the image on the canvas is extracted with toDataURL(), which encodes the image to a base64 string, and this is pushed to the node.js server.

The node.js server basically decodes the string and writes the file to the file system. When the images are done rendering, ffmpeg is spawned as a separate process to encode the images into an mp4 video. Note that node.js is spectacular at dealing with external processes: thanks to its event-driven, non-blocking architecture, the http request can wait until the encoding is done before notifying the web client. That’s a good reason to use it, apart from just thinking it’s cool to use javascript on the server side.
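The receiving side can be as small as this sketch (not the original server code; error handling is omitted, and the ffmpeg flags follow the old single-letter bitrate option):

var http = require('http');
var fs = require('fs');
var spawn = require('child_process').spawn;

var frame = 0;

http.createServer(function(req, res) {
    if (req.url === '/frame') {
        var body = '';
        req.on('data', function(chunk) { body += chunk; });
        req.on('end', function() {
            // strip the data-url prefix and decode the base64 png
            var b64 = body.replace(/^data:image\/png;base64,/, '');
            var name = 'frame' + ('00000' + frame++).slice(-5) + '.png';
            fs.writeFile(name, new Buffer(b64, 'base64'), function() {
                res.end('saved ' + name);
            });
        });
    } else if (req.url === '/encode') {
        // once all frames are in, let ffmpeg stitch them into an mp4;
        // the response is only sent when the child process exits, without blocking node
        var ffmpeg = spawn('ffmpeg', ['-r', '30', '-i', 'frame%05d.png', '-b', '4M', 'render.mp4']);
        ffmpeg.on('exit', function() { res.end('encoded'); });
    }
}).listen(8080);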

For this fireflies demo, the images are created at full HD resolution (1920×1080) and saved in the default png format. I encoded the mp4 with ffmpeg at a 4Mb bitrate.

Some Notes:

Performance
I’m quite satisfied with Firefox 4’s performance here; in fact, it is faster than Chrome by an average of 2 seconds every 30 frames. I noted some of the actual numbers somewhere, which I’m too lazy to pull out now. One reason Firefox is faster is that its toDataURL operation takes about 200ms, while Chrome’s takes about 300ms (on the mac).

On average, for my final render of ~40 particles (I used about 500 particles for benchmarking), Firefox with node.js running locally takes about 200 seconds to transfer 900 images (30 seconds at 30 frames per second), plus about 1 minute for the ffmpeg processing at full HD resolution. Safari seems to give reasonable performance on the mac too.

On my windows machine, I’ve tested with the IE10 preview (with backwards compatibility to 9), opera, firefox and chrome. (Now you know why it’s called HTML5 – because you always have to test 5 browsers.) The new IE’s performance is pretty good too, and all browsers are almost on par, with chrome running a little slower.

PNG Transparency: I realized the renders were quite bad if a transparent background was used. After encoding to MP4, it looked ugly and took up 10x the size. Fix: paint/fill the entire background instead of (or in addition to) clearing the canvas every refresh.

More practical uses of this idea:
Right now this demo only works with a single client and server.

qn: is it possible to extend this to multiple clients? Yes. Multiple clients can render separate projects or a single project; on the same project, the work can be distributed by giving each client a different slice of frames. It is even possible to have collaborative raytracing of a single image, as shown here.

qn: can a particle system animation be rendered in a distributed way? It’s possible if the model is unified across clients. For random calculations, they should use the same predefined values or the same random seed.

qn: is this distributed rendering practical? It depends. In this case, a screen video capture program could have been used, or you could argue that it’s cooler to run things in real time anyway. However, it might come in useful when your system can’t keep up – for example, when attempting to run 1M particles on canvas 2d, you might opt to render it offline instead. The idea can also work with WebGL (canvas 3d) or more CPU-intensive javascript ray tracing.

qn: what about untrusted clients? That depends on your application. If it’s a social experiment, one can do what SETI and similar distributed programs do: require matching answers from different clients before accepting a solution.

qn: would distributed rendering really work? This project is not at that stage yet, but the challenge is solving bottlenecks, whether the job is CPU-bound or IO-bound. For example, the rendering performance of this demo can be improved at the network layer: making an http request for every file incurs high overhead and lag. One way is to buffer a few rendered images before making a POST, or to use websockets and stream the data continuously with less overhead.


Some other TODOs: use ffmpeg2theora for encoding to ogv for firefox browsers.

Lastly, when I tried googling this idea some time ago, nothing came up, but recently I found out that collaborative browser-based mapreduce has been thought about and discussed before.

Source code is available @ https://github.com/zz85/RenderFlies

Okay, I have more important stuff to work on, signing off geeknotes today. :)

three.js 3D Text

(Since TextGeometry has been added in r42 of three.js, I decided to repost my note written @ http://www.facebook.com/notes/joshua-koo/geeknotes-3d-text/10150218767153256 – warning: the code example API might be outdated by now.)

If you haven’t heard of three.js, it’s an extremely cool and simple 3D library in javascript started by mrdoob. I suggest checking out the extremely cool examples like “wilderness downtown”, “and so they say” and “rome”. Three.js supports both canvas and webgl rendering.

Procedural 3D text geometry is my humble contribution to this project. I’ve managed to apply the little I knew or had experimented with on geometry, and learned more in the process of adding this feature. I’m thankful to @alteredq too, who tested and cleaned up my code (in addition to adding cool shaders to my text demo), and who is a fantastic contributor to the project.

If a demo speaks louder than words, check out the webgl demo (it should work in the latest chrome or firefox).

Or if your browser/graphics card doesn’t support webgl, try the software-rendered (canvas 2d) implementation.

What TextGeometry does is help create 3d text geometry quickly in the browser without additional tools (e.g. exporting text models from your 3d software), with just a simple call like var text = new THREE.TextGeometry( text, { size: 10, height: 5, curveSegments: 6, font: "helvetiker", weight: "normal", style: "bold" });. There were a couple of motivations for creating this, but one of them was my curiosity and interest in motion graphics.

For now let me dive into some technical details of how the 3D Text Geometry works.

The steps are as follows:

1) vector paths extraction from fonts
2) points extraction from 2d paths
3) triangulation of polygons for shape faces
4) extruding 2d faces to create 3d shapes.


process of creating 3d typography – points generation and triangulation

1. shape paths
like how text and fonts are rendered on computers, vector paths are needed from the font data. there are 2 main open source projects with tools for converting fonts to a javascript format so they can be rendered with javascript, namely cufon and typeface.js. typeface.js data files were used here.

2. extraction of points
this step is required for the triangulation in the next step. a little math and geometry understanding is useful here, but paths mainly consist of lines and bezier curves. lines are straightforward, and bezier curves require a little maths and subdividing to create points along the curve.
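for instance, a quadratic bezier segment can be flattened into points like this sketch (the cubic case just has one more control point and term):

function quadraticBezierPoints(p0, cp, p1, divisions) {
    var points = [];
    for (var i = 0; i <= divisions; i++) {
        var t = i / divisions;
        var k = 1 - t;
        points.push({
            // B(t) = (1-t)^2 * P0 + 2(1-t)t * CP + t^2 * P1
            x: k * k * p0.x + 2 * k * t * cp.x + t * t * p1.x,
            y: k * k * p0.y + 2 * k * t * cp.y + t * t * p1.y
        });
    }
    return points;
}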

3. triangulation or tessellation is important here because that’s how the faces of 3d models can be rendered. the polygon triangulation algorithm was first implemented in AS3 before my port to JS. this algorithm, however, does not support holes in shapes, so I had to implement an additional hole-slicing algorithm described at http://www.sakri.net/blog/2009/06/12/an-approach-to-triangulating-polygons-with-holes/ (the page is down, so check google’s cache if interested).


creating 3d typography in javascript – wireframes

4. creating the 3d geometry
so far 80% of the hard work is done. creating the geometry requires vertices (an array of 3d points/Vector3s) and faces (a list of triangles or quads describing an area using indices into the vertices). one just has to be careful with the normals and the direction of the vertices (clockwise or anticlockwise). the front and back faces are added with the triangles calculated in the previous step, and the extrusion is created with quads (Face4).

lastly, it’s time to be creative and experiment with the generated 3d meshes.

[MusicGeekNotes] Creating Music With Graphics, Lights and Smokes.

Create some music by clicking or dragging @ http://jabtunes.com/labs/jtenorion.html

** Warning: can be CPU intensive. Safari / Chrome for best results. If you do not have a fast CPU, use fewer effects in the checkbox options.

Check out a video playing with this.

[Music Notes]
After a while, you may find that you’re creating some Chinese-, Malay- or Javanese-sounding music.

This is a clone of the fairly expensive electronic musical instrument called the Tenori-on, made by Yamaha.

Anyway, this version is closer to Andre’s implementation called ToneMatrix, a flash implementation using notes of a pentatonic scale, which lets you make pleasant-sounding music even if you click on it randomly. Just youtube it, and you’ll find how many others have created music with the ToneMatrix.

[Geeknotes]
First of all, my implementation differs by using an HTML5 canvas to render the graphics instead of flash.

I use the SiON library for the audio, creating a JS-to-AS bridge to generate sounds. However, with the still-in-progress Mozilla Audio Data API, one could easily modify this program to run fully in javascript soon.

My initial visual effects were less than satisfactory compared to Andre’s implementation, and I got stuck creating the “ripple effects”. I first tried the ripple algorithm used in my previous experiment, but that animation would be almost unusable and very likely to hang the browser.

After a period of rest, while reading up on perlin noise in canvas, I started having some inspiration for this project again.

It then struck me that I could implement this with a simple particle system using gradients.
I was also inspired by the use of gradients, alpha and composites in something I stumbled upon while looking up collaborative programming.

So right now, although it’s not perfect, it’s a big improvement over the past.

Go ahead, have fun, create some music, hack the code, or provide some suggestions.

p.s. I found the layout looks like an iPad after I used curved borders – not that I wanted to replicate the iP* devices.

Other WIP screenshots here.

(a cross post on my facebook note)

[geeknotes] Now, Have you met.. Instant QR Codes?

(Imported from my facebook note dated Monday, 28 June 2010 at 01:37)

This is a follow-up to my previous geeknote “Say Hi to Instant Barcodes”. The quick story: nothing too fanciful, just a simple html5 mashup for instant qrcodes using javascript, jquery, and Kazuhiko Arase’s qrcode js code. See http://jabtunes.com/labs/qrcode.html

QRCode

The advantage of using a QR code over my previous barcode generator is that this 2d barcode packs more information in – specifically for this mashup implementation, 119 binary characters with a 30% recovery rate (using up to QR Code version 10; it can be much more if we implement up to version 40). Think of it as, maybe, twitter on a picture!


A picture can tell a hundred words, and this tiny qrcode does store a hundred characters.


Yes, the barcode scanner on android works with 2d barcodes.


And I think this is also a good way to send urls, telephone numbers and other contact details to each other. In the absence of a standard vcard or bluetooth protocol, I think qrcodes should work much better!

Feedback would be great! And yes, you can download some codes and send me messages in QR codes! Goodnight! :)

p.s. Tested on all modern browsers (& ie9 beta) except mobile browsers (pls let me know if it works!).

[geeknotes] Say Hi to Instant Barcodes

(This is a repost of my facebook note dated 25 June 2010. Apparently there are more notes to be imported, but maybe I’ll leave that for another time.)

It seems like ages since I last touched javascript, but here’s the latest addition to my html5 canvas experiments: a really simple (1d) barcode generator.

Type some words or numbers, and a barcode appears immediately (using a simple Code 39 implementation without a checksum – the drawback being that it’s not suitable for long data due to its low density). Admire the barcodes, save them, or artistically modify them (or have fun writing coded messages to each other).

To skip the talk and get to the action, see http://jabtunes.com/labs/bars.html

The barcode scanner on android is a nifty, cool feature.

Scanning my name. You could use it for telephone numbers too.

Yes it works.

Lastly here’s a barcode if you’d like some practice.

Goodnight :)

p.s. tested in Chrome, Firefox and Opera (barcodes render without text in IE). Javascript lovers: the html file is self-contained (except for jquery). Use/hack it as you like.