3 Weekends, 3 Projects

In this post, I'd like to share 3 projects that I managed to work on over 3 recent weekends. For a variety of reasons and excuses, I haven't spent as much time as I would love on my side projects this year. You might even have noticed that I've been way less active on Twitter and on my blog (the last post was a year ago?!). Anyway, let's start.

#3 – Slow Fast Slow in JS

TLDR: A web app that adjusts the video speed however you want.

Demo: https://zz85.github.io/chop-chop-video/slowfast.html#[[0,0],[0.329,-0.35],[0.642,0.24],[1,0]]

The first small project is a reimplementation of an iOS app that I love called Slow Fast Slow. Not only is it slick looking, it helps adjust the speed of videos in an intuitive manner: not just for the entire clip but in sections, while interpolating playback rates. Typically, to do this you might use a feature called Time Remapping in After Effects, which creates Hollywood-style bullet-time scenes, and if you have tried that before you know it's not so simple to use. That's one reason I love the app: it makes this easy. So it became a challenge for me to reimplement it in JS in the browser.

One of the interesting parts of the implementation was figuring out how the curves work. (Thankfully I had some experience playing with curves and splines.) An initial approach was to use Catmull-Rom splines; however, on observation that was ruled out, since moving a point never affects more than 2 segments in the original app. Using the approach of rendering Bezier curves between 2 blocks in a diagram such as a mind map (using midpoints for the Bezier control points), I had a voila moment: the behaviour matched that of the original app. It's simple, yet it's really neat that it works.
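For illustration, a segment between two neighbouring curve points could be evaluated like this (a minimal sketch under that midpoint-control-point assumption; the function name and point format are mine, not from the actual source):

```javascript
// Two neighbouring curve points p0 and p1 are joined by a cubic Bezier whose
// control points sit at the horizontal midpoint, level with each endpoint.
// This keeps every segment local: moving a point only affects its neighbours.
function segmentPoint(p0, p1, t) {
  const midX = (p0.x + p1.x) / 2;
  const c0 = { x: midX, y: p0.y };   // control point level with p0
  const c1 = { x: midX, y: p1.y };   // control point level with p1
  const u = 1 - t;
  return {
    x: u*u*u*p0.x + 3*u*u*t*c0.x + 3*u*t*t*c1.x + t*t*t*p1.x,
    y: u*u*u*p0.y + 3*u*u*t*c0.y + 3*u*t*t*c1.y + t*t*t*p1.y
  };
}
```

Sampling the y value of this curve at the video's current position then gives the playback rate for that instant.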

Next was integrating the UI with the video in the browser. Thankfully the HTML video element handles the speedup/slowdown via its playback rate. It works well despite some limitations, for example buffering and not being able to play in reverse.

This was an interesting project, and apart from some technical bits I appreciated the UI/UX quality of the Slow Fast Slow app (and wish I could find more apps like this). What's next would be to think about how videos could potentially be exported as real video files (e.g. using ffmpeg.js, server side, or the new MediaRecorder).

For those interested, check out related source code here.

#2 – Terminal Rendering for Three.js

TLDR: A Three.js Renderer that runs in the terminal.

Project: https://github.com/zz85/threejs-term

3 years ago, I wrote the ASCII Effect for three.js. That was a fun project; the idea was so simple I wondered why no one else had done it (maybe because it wasn't useful, but at least I'm glad I got to do it). However, I believe it wasn't a totally new idea, because if you look up some old demoscene videos, you might just find some 3D rendering in ASCII.

This year I learnt about a really cool Node.js module called blessed. It is a pure-JS library that helps build terminal applications, abstracting lower-level details like terminal encodings. I first learnt about it reading about the webpack dashboard and subsequently tried it at work to build some terminal-based tools. And for those curious about the name, blessed is a contrast to the commonly known (n)curses library used for terminal apps.

Running three.js with Node.js (and npm/browserify) turns out to be something others have looked at before. Many classes (e.g. Vector*) in three.js are pretty generic and have little problem running in Node.js. Classes like the renderers, which touch the DOM, are the trickier parts.

The straightforward approach would be to mock the DOM (as done by others) using jsdom (quite a cool library too). However, I decided to mock only the minimal parts needed, to avoid emulating the entire DOM and to reduce dependencies and overhead.

The first polyfill I did was to emulate the 2D canvas, and for that I used node-canvas (which uses Cairo internally). The nice thing was that blessed has an AsciiImage widget, and to convert the canvas to the terminal screen all that was needed was to add this line: icon.setImage(canvas.toBuffer()). As I continued working on the lib, I refactored that code to do a manual conversion of pixels to ASCII, which gave me more control but was strangely slightly slower.
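The manual conversion boils down to mapping each pixel's brightness onto a character ramp. A minimal sketch (the ramp string and function name are assumptions of mine, not the actual threejs-term code):

```javascript
// Characters ordered from "dark" (sparse ink) to "bright" (dense ink).
const RAMP = ' .:-=+*#%@';

function pixelToAscii(r, g, b) {
  // Rec. 601 luma approximation, 0..255
  const luma = 0.299 * r + 0.587 * g + 0.114 * b;
  // Scale the luma into an index on the ramp
  const index = Math.min(RAMP.length - 1, Math.floor(luma / 256 * RAMP.length));
  return RAMP[index];
}
```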

Other challenges included figuring out what canvas size to use. I used a combination of the number of columns and rows that the terminal reports and the actual pixel size (if the terminal supports the query), together with some ratio calculations.
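As a sketch of the ratio calculation: a terminal cell is roughly twice as tall as it is wide, so a canvas of cols x 2*rows pixels keeps the rendered image roughly square (the 0.5 default below is an assumption; real cell metrics vary by font and terminal):

```javascript
// Pick a canvas size so that one canvas column maps to one terminal column
// and the vertical resolution compensates for the tall cell shape.
function canvasSize(cols, rows, cellAspect = 0.5) {
  return { width: cols, height: Math.round(rows / cellAspect) };
}
```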

The next interesting challenge was getting TrackballControls to work. The trick was to emulate the window by converting terminal mouse and keyboard events into DOM-like events, and I thought it was pretty cool that this was possible.
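Roughly, the translation looks like this (a hedged sketch; the terminal event's field names and the cell size are assumptions of mine, not blessed's or threejs-term's actual API):

```javascript
// Translate a terminal mouse event ({x, y, action} in cell coordinates)
// into a DOM-like event that controls such as TrackballControls can consume.
function toDomEvent(termEvent, cellWidth = 8, cellHeight = 16) {
  const px = termEvent.x * cellWidth;   // cell column -> approximate pixel x
  const py = termEvent.y * cellHeight;  // cell row -> approximate pixel y
  return {
    type: termEvent.action,             // e.g. 'mousedown', 'mouseup', 'mousemove'
    clientX: px,
    clientY: py,
    pageX: px,
    pageY: py,
    preventDefault() {},                // stubs the controls expect to call
    stopPropagation() {}
  };
}
```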

Another idea I had was to eliminate native bindings for threejs-term. One approach was to use plain arrays instead of Cairo to back the SoftwareRenderer. That managed to work, but performance wasn't too great.

The last idea I should mention before I stop talking about this project is using Braille for rendering. It turns out some brilliant people have figured out how to abuse Unicode braille characters to give you 8 subdivisions per terminal character. It also turns out it is not difficult to implement, and it gives an interesting effect in threejs-term, the only limitation being that you can't get different colours for each "sub-pixel". I even got a little side-tracked reimplementing the XKCD braille comics in JS.
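The packing itself is simple: each terminal character covers a 2x4 block of on/off "pixels", and each dot contributes a bit on top of the U+2800 base of the Unicode braille block. A sketch (function name mine; the dot-to-bit layout is the standard Unicode mapping):

```javascript
// Bit values for the 8 braille dots, indexed as DOT_BITS[row][col].
const DOT_BITS = [
  [0x01, 0x08],  // row 0: dots 1 and 4
  [0x02, 0x10],  // row 1: dots 2 and 5
  [0x04, 0x20],  // row 2: dots 3 and 6
  [0x40, 0x80]   // row 3: dots 7 and 8
];

// block is a 4-row x 2-column array of 0/1 values.
function brailleChar(block) {
  let bits = 0;
  for (let row = 0; row < 4; row++)
    for (let col = 0; col < 2; col++)
      if (block[row][col]) bits |= DOT_BITS[row][col];
  return String.fromCharCode(0x2800 + bits);
}
```

An empty block yields the blank braille character U+2800, and a fully set block yields U+28FF, the fully filled cell.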

In closing, I had fun with threejs-term. The terminal seems like such an old and limited technology, yet I've learnt all sorts of cool stuff doing this. Going forward, I would love to explore node-gl or headless-gl for threejs-term, as well as rendering more Unicode (CJK and emoticons). As for the blessed library, I think there are so many more use cases for it, and I would love to see someone implement Vim or tmux with it one day.

#1 – Mr.doob’s Code Style™

TLDR: MDCS is now available as an ESLint config (and npm module). The MrDoob Approves editor now supports ES6 and is now powered by ESLint too.

Check out: https://github.com/zz85/mrdoobapproves

It's been more than a year since I updated MrDoob Approves. Once in a while, it might be used by a three.js contributor to format code in PRs, and sometimes, for fun, I like to toggle between code styles. On another weekend, while trying to update the JSCS formatting rules, I realized that JSCS had been sunsetted, and I took the opportunity to migrate the rules to ESLint.

Not bad timing. The contributors of JSCS have now joined ESLint, and given the popularity of ESLint (in addition, I guess, to the wider use of ES6), ESLint has become more mature too. The JSCS editor was fairly mature as well, so I think it was good for it to have some real-life usage before another migration.

It also helped that there were a couple of test cases for mdcs in JSCS, so after adding a couple more test cases, the migration to ESLint became easier. What's funny is that only after I had hand-migrated the rules did I realize there was a migration guide from JSCS to ESLint.

Apart from using ESLint to power MrDoob Approves, the fact that ESLint is supported by many editors and projects led me to ask: why not expose the config as an npm module? So now, to use the rules, one can simply run

npm install --save-dev eslint-config-mdcs

And add the line `{ "extends": "mdcs" }` to .eslintrc.

After adding .eslintrc to the three.js project, I learnt that the config can also be added via package.json.
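For instance, the package.json route could look like this (a sketch; the package name and `extends` value come from eslint-config-mdcs above, while the version ranges are illustrative):

```json
{
  "name": "my-project",
  "devDependencies": {
    "eslint": "^3.0.0",
    "eslint-config-mdcs": "^1.0.0"
  },
  "eslintConfig": {
    "extends": "mdcs"
  }
}
```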

This is a good milestone in my experiences with code formatting, something that seems so trivial yet is in many ways complex and subjective. We started with an itch and a vision, and followed through to executing that vision.

Code formatting has also brought me down the route of playing with, experimenting with, and learning about tools like sed, Code Painter, JSCS, autoformatting patches, and Esprima, all the way to ESLint. Along that journey, it's been nice to see tools like npm and ESLint in action. One recent related article I found interesting was the journey behind Feross's standard style.

Next steps: getting more code styles to run in the editor (mainly figuring out how to browserify the plugins) and exploring more funky code styles (semicolon-style indentations).

Final thoughts: I'm not sure why I'm jabbering down what might be quite a lengthy post, but it's always nice getting a blog post out of my system. It's about time for me to write less but more often, and continue working on interesting stuff that matters.

quirc.js, an alternative javascript qrcode decoder

5 years ago, I was starting to discover the use of Canvas in browsers. I first wrote a simple barcode generator with canvas and later decided it was time QR codes were generated in realtime with Canvas too (IIRC, most QR code generators then generated images server side, or used divs/tables/ASCII when they were done in JS).

Back in the present, Ilmari had been telling me a little about his idea of using QR codes in JS augmented-reality projects. There's jsqrcode, a JS port of the popular ZXing library, but he said the code wasn't the most optimized, as there were caught-exception blocks. Anyway, I figured QR codes had been around long enough that I could find some C library to port to JavaScript with Emscripten.

I quickly narrowed it down to a couple of libraries I could try:
1. zxing
2. libqrcode
3. quirc

I started with zxing, the C++ version. I managed to compile it with Emscripten, but I was too lazy to look at converting to its buffer format. libqrcode had some dependencies, so I skipped over it. quirc looked like it was designed for embedded systems and has a really small codebase with no dependencies, so I thought I would use it.

So, in the hours after I turned 3 decades young, I whipped up quirc.js with some contributions from Ilmari. Using some optimization flags in Emscripten, the resulting JS library is <70KB gzipped, which I believe is my smallest Emscripten port thus far.

Decoding a frame seems to take under 25ms, which I think is pretty decent, since you could probably decode in realtime on a 30fps video. quirc also seems to be able to decode multiple QR codes at the same time (at the expense of higher processing time).

After this little success, I started having some fun with QR codes and added a mashup with getUserMedia (which I had incorrectly referred to as WebRTC). Considering that many people don't actually have a QR code reader app, this 80KB app is just a link away and runs even on (Android) mobile phones.
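The typical pipeline for such a mashup draws each video frame to a canvas, flattens the RGBA pixels to 8-bit grayscale, and hands the buffer to the decoder. A sketch of the flattening step (the function name is mine, and the decoder call itself is omitted, since quirc.js's API isn't shown in this post):

```javascript
// rgba: a Uint8ClampedArray as returned by ctx.getImageData(...).data,
// i.e. 4 bytes (R, G, B, A) per pixel. Returns one gray byte per pixel.
function toGrayscale(rgba) {
  const gray = new Uint8Array(rgba.length / 4);
  for (let i = 0; i < gray.length; i++) {
    const o = i * 4;
    // Rec. 601 luma approximation; alpha is ignored
    gray[i] = (0.299 * rgba[o] + 0.587 * rgba[o + 1] + 0.114 * rgba[o + 2]) | 0;
  }
  return gray;
}
```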

I started having wild ideas, like trying out ASCII QR codes and creating an RPC library that communicates over QR codes. Thankfully, I simply stopped at generating "QArt" codes using images with this online generator.

About a day later, Ilmari started working on his custom QR code decoder.

Soon after, he also ported ZXing-cpp to JS with Emscripten. I gave it a try and added the user-media example. It turns out it is also pretty efficient (it seems to be faster than quirc.js, although it's a bigger library), and it can also decode a couple more encodings.

So that's that for QR codes for now. If you're looking for alternative JavaScript QR code libraries, I hope this post helped!

Why I think the Three.js CSS3DRenderer is Awesome

The three.js library is widely known as a WebGL library. I would like to share some thoughts on one of three.js's lesser-known gems, the CSS3DRenderer, and why I think it's great (even though it's not in the current documentation yet).

Three.js is not simply a WebGL library
With WebGL widely adopted on most platforms, it would seem natural that WebGL would be the preferred rendering choice in three.js. However, it's worth noting that three.js began as a DOM renderer. It also had a Canvas renderer before WebGL support was added to the library (and it took until recently for the WebGLRenderer to be refactored). The simplicity of the three.js design, which allows other renderers (SoftwareRenderer, RaytracingRenderer, etc.), is beautiful. I'm sure a pluggable rendering architecture is nothing new, but off the top of my head I don't recall libraries that come with better support and extensibility than three.js. So while the CSS3DRenderer is likely never going to get as much attention and development as the WebGLRenderer, I'll try explaining why the CSS3DRenderer is a good thing.

Why CSS3DRenderer is Great

1. CSS rendering is widely supported, even on mobile browsers
Apart from some incompatibility issues with older IE browsers (for which you could use CSSRendererIE for preserve-3d workarounds), I would think the CSS3DRenderer has about the highest compatibility across browsers (it was used on some high-traffic websites until iOS shipped WebGL).

2. CSS rendering can be fast
Browsers leverage the GPU for rendering 3D transforms, which gives you 60fps rendering (unless you're animating > 1000 elements).

3. Stop worrying about Material / Shaders / Geometries
It's easy to get tied up with geometries, materials, shaders, and meshes with the WebGL or Canvas renderers. The CSS3DRenderer deals with DOM elements, which lets you forget about texturing for a moment. Which brings us to the following points:

4. Rich Web Content Transformation
Because you're dealing with DOM elements, you get to reuse HTML elements you already have on your website. Now stretch your imagination: it means you can now use any rich web content without having to do much (e.g. images/GIFs/documents/videos/iframes).

5. CSS-ful
Instead of having to handle lower-level graphics programming like shading, you can use CSS to style your 3D objects. Maybe at some point you may even want to go crazy with CSS Shaders.

6. An alternative to raypicking
Instead of having to raypick (fire a ray into the scene graph to determine which 3D object you're interacting with), you can simply leverage DOM events.

7. It’s a supplement to other Renderers
Using the CSS3DRenderer doesn't mean giving up on the others, as it can easily complement other renderers. For example, you could integrate a CanvasRenderer (and more) inside a CSS3DRenderer, or overlay a synchronized CSS3DRenderer on a WebGLRenderer (commonly used to render text over the WebGL scene).

8. Extremely lightweight
You could get a customized CSS3D-rendering-only build of three.js which is a fraction of the size of the normal three.js builds.

Enough of the Buzzfeed-like reasons; here are some situations in which to use the CSS3DRenderer.

When to use it
1. Rapid prototyping: let's say you wish to get some 3D effects fast with little code; the CSS3DRenderer allows you to do that.
2. Creating interesting effects: the ability to create different visual effects with a mixture of DOM elements. Another way to look at the CSS3DRenderer is as a tool for CSS 3D transforms that you would otherwise have to write yourself.

CSS3DRenderer Limitations
1. Thousands of elements: several benchmarks I've done would have me recommend keeping animated DOM elements to 500 or fewer.
2. Rasterization: the browser rasterizes an element before transforming it, which means that if you scale up a text element too much, you may see jagged edges. The workaround is to create an element with a huge font size (probably at the expense of increased memory). If you're animating SVG, you may wish to transform the SVG points instead of allowing it to be rasterized before transformation.

Cascading Perspectives on CSS3DRenderer
This post was originally named "Exploring CSS3D"; I started penning it almost 3 years ago, about experiments with the CSS3DRenderer that almost never saw the light of day. It then became the presentation "Cascading Perspectives on CSS3DRenderer", which I gave at CSSConf.Asia 2014 almost a year ago. Although I typically think of myself as someone who is more familiar with JS than CSS, I thought it would be interesting to share about this topic when given the opportunity to speak. Here are the slides (which probably have more content) for those interested.

Which brings me to the point
I need to improve my presentation skills
And that's because of a knee-jerk reaction to listening to my own speaking: I've only recently mustered the courage to watch the talk myself. If you're more courageous than me, feel free to scan through the video:

I think the CSS3DRenderer is cool and will always have a place in my heart, whether or not it is (still) (widely) used.

Emscripten Experience Part II – Optimizing the GLSL-Optimizer

There are many ways to love Emscripten. You could take an existing piece of C code or a library and turn it into JavaScript that benchmarks close to compiled native code (thanks to non-garbage-collected asm.js). You could use it to code with the type safety of C. You could bring an application which previously required compilation or installation to anyone on the internet, simply by loading a webpage. You could even use browser developer tools to profile C++ code.

One reason I've been intrigued by Emscripten is what the JS cross-compilation target makes possible for an application's portability. That has also sparked off more interest in me to learn more about languages: C, LLVM, or even JavaScript itself.

Despite all the interest, I had a slow start with Emscripten, which was the topic of my last post. Only by overcoming some of the difficulties was I able to enjoy some fun and success with it. Still, challenges lurked along the journey, and at the end of the previous post I had 2 problems to solve:

a) getting link-time optimizations on glsl-optimizer

b) the difficulty of porting a wavetable MIDI synthesizer to the web browser

One day @thespite mentioned incorporating my Emscripten port of glsl-optimizer into his cool WebGL Shader Editor extension for Chrome, and that prompted me to revisit my project and Emscripten. So I continued the journey with a couple more tales to tell, but as these stories became slightly unwieldy, I decided to cover the second topic in another post.


Have you tried playing old games on emulators? A decade or more ago, I used to follow development on emulators (e.g. for the PlayStation) to check what new game compatibility had been added with each release. Somehow the idea of being able to emulate and play many games from a different platform amazed me.

This draws a parallel to applications of Emscripten. Huge codebases originally written in C or C++ can be "emulated" for the web. It excites me to see stuff like server libraries, desktop applications, graphics platforms, codecs, and even programming languages ported to JS, running either on Node.js or in the browser. 2 examples of projects I thought were really interesting are emscripten-qt and PyPy.js! This wiki page has a great list of projects to check out!

Excess Code

While Emscripten has been known to work on huge amounts of existing code, it also means that it can potentially generate huge amounts of code too. Well, like many other cross-compilers or machine-generated code, this isn't new.

But is that really a big deal? Consider being able to reuse most or all of an existing codebase in JavaScript with relatively little effort and time (compared to rewriting it in JS). Consider that the equivalent binaries aren't that small either, especially if you count the shared libraries and dependencies already installed on your system. Consider that where different binaries are needed for different platforms, the code now runs on the most universal platform (the web) after a single additional compilation. Consider that without binaries, no installation is needed and it runs almost instantaneously. Consider, without the web as a platform, how much more difficult it is to acquire users, who need to go through the hassle of trying the software on their own computers. All of this could justify the download wait.

Yet a casual visitor to your website loading that fat JavaScript file wouldn't appreciate any of that. The JavaScript may be so huge that the browser takes ages downloading it. The browser may look like it has stopped responding while taking additional time to parse, interpret, or compile the huge script files. The visitor may wonder whether the site is broken, the network is down, or the code works at all; it's definitely not the best experience.

Which is why Emscripten isn't always the magic pill; there are alternatives like hand-porting or using other tools, e.g. bonsai-c or Cheerp (though from some of my initial tests, Cheerp seems to generate bigger code).

GLSL Optimizer Releases

When I first announced my Emscripten port of glsl-optimizer, the build was 8MB. That's almost the size of 10 floppy disks. These days homes have 1Gbps internet, but running a huge JavaScript file still takes additional time before the code gets going.

Of course the file size wasn't satisfactory. One thing I wasn't doing was running with -O1, -O2, or -O3 optimizations. Those flags activate link-time optimizations and run the resulting code through JS minification, which improves overall file size and performance. Strangely, using these optimization flags resulted in infinite loops at runtime.

Why weren’t the builds optimized
One factor I suspected for the failing optimizations was that I was using Embind for bridging the JS and C worlds. I observed that, for some strange reason, I was able to run at -O1 instead of -O0 when I had Embind disabled, so I decided to revert to non-Embind bindings and use function return values to pass success and failure values back to JS land.

But that only seemed to get me as far as -O1 optimizations. The resulting JS wasn't even minified, so I wanted to check whether I could do Closure minification without -O2. I ended up filing an issue on GitHub because the flags didn't allow me to do that with Emscripten. So even though that might be a bug, minification without link-time optimizations would have limited effectiveness anyway (besides, running huge code through the minifier tends to end in crashes).

Alon Zakai, aka kripken, asked why the compiler optimizations were failing and pointed me to some compiler settings I could use to trace memory segmentation faults. Those settings turn out to be really useful for debugging Emscripten code in general.

Tweaking the flags, I still couldn't solve the problem. I started thinking there was a possibility it was a bug in Emscripten. I laid the matter to rest until, some time later, there were new versions of Emscripten and I decided to give it another shot. I upgraded to 1.30.0: no luck. I decided to git clone the master version and try again: still the same runtime problems.


Since I was on the master branch, I decided to check out the latest developments. Emterpreter was in. Asyncify was out. Interesting, but what was that?

I ran Emscripten + Emterpreter anyway, and -O2 still failed. On the bright side, through the network pane I found the resulting code to be much more compact. Wow! 8.7MB → 1.8MB (1.7MB → 720KB gzipped). So what happened?

“We take C/C++ code and convert it to asm.js, a subset of JavaScript. Then we convert the asm.js to byte code and spit out an interpreter for it written in JavaScript.

The JavaScript interpreter is loaded in the usual way into the web page and the bytecode is loaded as data. The big advantage of this scheme is that the byte code doesn’t have to be parsed and this speeds up the entire load process.” (source)

So, from my understanding, the Emterpreter is like a virtual machine which interprets and runs your Emscripten code. It loads code in a binary/bytecode format (like the JVM's) that is more compressed and concise than JavaScript itself. It also allows code to run even before the whole file has been parsed (unlike asm.js or JS). The virtual machine could also enable some cool stuff like saving the machine's state and running virtual threads (green threads). The idea of an interpreted language running an interpreter is cool, isn't it? There are drawbacks though, like preventing AOT optimizations (because instructions are interpreted and evaluated at runtime). There are ways around that, e.g. using whitelisting, something I may bring up in the next post.

While uploading the tinier builds to GitHub, I realized something: GitHub Pages wasn't enabling gzip compression for emscripten.js.mem. As a hack, I renamed it to mem.js and made Emscripten load the custom memory file, so GitHub Pages would serve a gzipped asset. Yay!


At this point, I'm tempted again to derail my topic to talk about the recently announced WebAssembly/WASM. It has some of the ideas of the Emterpreter: an AST bytecode format that gives faster load times and allows optimizations in the JS engine. I think it's an exciting topic. It seems like a natural progression of asm.js -> Emterpreter -> WebAssembly, which is at the same time backward compatible via polyfills. It's even greater that all vendors agree on this standard. I believe it is also something that can make the people whom asm.js bothers happy. It is also awesome that Emscripten can support it with the flick of a switch. But let me get back on topic.

Real Fix
Up to now, I had been trying to get around the optimization problem by looking everywhere except one place: the codebase (one of the Emscripten lessons I shared was to understand the original codebase; apparently I didn't heed my own advice). In a three.js issue, Ben Adams mentioned my project in a thread that led @tschw to the original optimization issue I had opened for glsl-optimizer. With his sorcery (which he denies), he was able to trace it to an issue upstream of glsl-optimizer, in the Mesa GLSL compiler, that was causing the optimizations not to work. Whoever tschw is, awesome work there!

Finally we can perform -O3 optimizations!!!!

If we stop here for a moment, I think we have a happy ending. The glsl-optimizer has also been added to the Shader Editor extension.

So here we have a little milestone, but I believe there's more work to be done. thespite suggested a tree-shaking/pruning (aka dead-code elimination) feature that doesn't alter the original GLSL variables. I think that's an absolutely good feature to have; unfortunately, it may be difficult to alter the Mesa GLSL parser or glsl-optimizer to do so. Others have mentioned other tools: peg.js, glsl-unit, glslprep.js, glsl-simulator, and the stackgl set of tools (glsl-tokenizer, glsl-parser, etc.). These suggestions are great; someone just has to look into them and apply them. Maybe one day when I have too much time on my hands, I might also write my own parser in JS to play around with GLSL code. Well, maybe, as always.

So that's all for "Optimizing the GLSL-Optimizer" in this post. I'll try to write about "Setting WildMidi Wild on the Web" in the next post; till then, feel free to drop me comments @blurspline 😀

Related Links / Readings

https://kripken.github.io/mloc_emscripten_talk/gindex.html – Slides on “Compiling C/C++ To Javascript GDC 2013” by Alon Zakai
https://brendaneich.com/2015/06/from-asm-js-to-webassembly/ From asm.js to Web Assembly
The Emterpreter: Run code before it can be parsed
https://twitter.com/BlurSpline/status/568271236632956929 Original announcement of glsl-optimizer on twitter
– Web Assembly (Google, Microsoft, Mozilla And Others Team up, Design FAQ, prototype)
Javascript is C, not assembly of the Web
https://twitter.com/search?q=emscripten Twitter News on Emscripten

Emscripten Experimentation Experience

At one point, Emscripten was the hottest thing in the JS world, and I decided to try it. I started writing about the experience in this post, but the procrastinator in me didn't finish it. Fortunately, the content went into these slides, and I gave a talk about it in February earlier this year. The slides should cover most of the content here, but feel free to read on if you have the time.


I talked about my struggle with Emscripten and how the going wasn't easy, especially for someone unfamiliar with C. Still, at various times the idea of using Emscripten was incepted (e.g. this Twitter thread). Among the stuff I wanted to try porting was GLSL Optimizer. I gave it my first shot and wasn't successful. GLSL Optimizer was written because HLSL2GLSLfork (used by Unity3D) generated GLSL code that could be optimized.

Emscripten probably isn't the most complex piece of software (besides, it has probably been used successfully on millions of lines of code), but why did I fail again and again? In retrospect, there were a few reasons:

1) these projects weren’t trivial
2) my Emscripten environment was probably messed up
3) I wasn't familiar with the C or C++ world (even though I've programmed in C and Objective-C)
4) I wasn't familiar with the build systems (automake/make/cmake/premake), their ecosystems, and their workflows

These factors made it like learning to jump before crawling. So I put these projects on my backburner. Once in a while I'd try getting my hands dirty again, till I hit another brick wall. Repeat.

One day I was reading some GLSL code online and mistook it for HLSL. Thinking about how to convert it to GLSL, I ended up attempting Emscripten-ization again. The HLSL2GLSL project came to mind, but this time I found a new hlsl-to-glsl project, which turned out to be a newer and simpler shader project.

This project uses premake, so I learnt about it. I made progress building stuff, and despite some errors, I could fix things. I realized there were also forks of that project (e.g. the one used by The Witness), and I tried that code too. I also figured that premake doesn't have to be used; I could run a simple compilation command.

/usr/lib/emsdk_portable/emscripten/1.29.0/emcc src/*.cpp -o hello.html -s EXPORTED_FUNCTIONS="['_parseHLSL']" --bind -O3

There were other pitfalls and lessons learnt along the way, especially with how JS and Emscripten are bridged. There were different approaches: ccall could be used, or macros, eval, and bindings, but one has to be careful that the datatypes get converted correctly across the bridge too.

So yeah, I got Emscripten working for the HLSL converter: demo and source code.

That gave me some experience and a confidence boost, and I went on to make glsl-optimizer work. glsl-optimizer has more code and is slightly more complex, but I followed the previous approach of rolling my own build scripts and got it working too. See the demo and code.

There's one thing I haven't managed to solve: data corruption issues when using link-time optimizations. It'd be cool if anyone managed to fix that; it would bring down the generated JS size even further.

And there has been a recent update on these projects: Jaume (aka thespite) has been integrating the GLSL optimizer into his really cool Chrome WebGL Shader Editor extension.

One of the ideas still lingering in my mind is porting a good-sounding MIDI/wavetable synthesizer like FluidSynth to JS. Stay tuned!

MrDoob Approves – A Javascript CodeStyle Editor+Validator+Formatter Project

Near the close of 2014, I had an idea (while in the shower): write a little webpage which gives you the answer to "does mrdoob approve your code style?".


Oftentimes, three.js gets decent pull requests that require a little more formatting to match the project's code style. I myself have been found guilty many times of not adhering to the code style, and in the past, when there were no guidelines on what it was, it was mrdoob and alteredq who would reformat the code on their own.

Today we have slightly better documentation on contributing and code style, but code style offences still happen pretty regularly, I guess.

Therefore the idea was to simply use a browserified build of node-jscs together with the Mr.doob's Code Style™ (MDCS) preset (developed earlier in the year), and make it super accessible on a website. One thought was to buy a domain name like is-this-mrdoob-approved.com, in the style of some "questionable" domains, e.g.

  1. http://caniuse.com/ (html5 features in browser)
  2. http://www.isitdownrightnow.com/ (popular webservices)
  3. http://www.willitblend.com/ (almost anything)

So that’s how the name and the github project “mrdoobapproves” came about, and I tweeted about it shortly after.

But I thought there was room for improvement on this initial idea, so I created a github repository and applied some of the “Open open source” approach I learned from Mikeal Rogers.

Gero3, another awesome three.js contributor (who had previously contributed the mdcs preset to node-jscs), hopped on board, and in just a couple of days (over the new year especially) we merged 20 pull requests, added more awesome features like autoformatting, and released version 1.0 – https://github.com/zz85/mrdoobapproves/releases/tag/v1.0.0

So I guess that’s the long story short. I could possibly talk more about the history, but if you’re really interested you could piece things together reading https://twitter.com/mrdoob/status/463709502853103617, https://gist.github.com/zz85/e929503387cdc597b4f7, https://twitter.com/BlurSpline/status/463863644933992449/photo/1 and https://github.com/mrdoob/three.js/issues/4802. I could also go into why people would love or hate Mr.doob’s Code Style™, but for now I would say that if you read too much dense code, trying out MDCS might be a fresh change for you.

There is also more to say about the implementation of this project, but for now I’ll just mention that it is built with codemirror and node-jscs (which uses esprima), which are really awesome libraries. There are also slight codemirror plugin additions, and auto-formatting is based on gero’s branch of node-jscs, since auto-formatting is coming to node-jscs.

For those who are interested in js code formatting, be sure to check out

  • jsbeautifier – Almost defacto online JavaScript beautifier
  • JSNice – Statistical renaming, Type inference and Deobfuscation
  • jsfmt – another tool for renaming + reformatting javascript using esprima and esformatter.

In conclusion, this has been a really interesting project to me, and great thanks to Gero3 who has been a great help. I think this is also an example that when a topic (code styling) is usually contentious among programmers, rather than complaining or debating too much, it is more useful to build tools to fix things and focus on things which matter.

So what’s next? It’s nice to see some usage of this tool on three.js, and perhaps one additional improvement is to hook three.js up with travis for code style checking.
Also, if anyone’s interested in improving this tool, check out the enhancement list and feel free to create new issues and pull requests.

Finally, in case anyone missed the links, check out

1. The demo http://zz85.github.io/mrdoobapproves
2. Github project https://github.com/zz85/mrdoobapproves
3. The 1.0 release https://github.com/zz85/mrdoobapproves/releases

Better Cubic Bezier Approximations for Robert Penner Easing Equations

To create motion on the screen, there are various approaches: you could hardcode it or use a physics engine, for example. One popular and proven approach is animation with Robert Penner’s easing equations. Now, animation using easing functions with cubic-bezier coordinates is gaining popularity, especially for web development with CSS. In this post I would like to share how I’ve “eased” Robert Penner’s easing equations into cubic-bezier easing functions.

TL;DR? Check out the interactive example here.

Easing (also referred to as Tweening) is an interesting topic, and there are a great deal of good articles about it on the net. You are likely familiar with it if you have done some animation (be it javascript, actionscript, css, glsl, flash, after effects); otherwise check out this good read.

Easing Equations
So you understand easing, but you may not know who Robert Penner is and what his easing equations are. For that, I would suggest reading the chapter from his book about Motion, Tweening, and Easing. It may be old, but it was probably what popularized or influenced the way programmers go about coding or thinking about animation. Penner’s equations have been implemented in various languages and for various platforms (and even where they haven’t been, it’d be easy to do). Pick any popular animation library and it’s probably already using his easing equations.

For myself, I love tween.js’s implementation of the easing equations, not just because mrdoob uses it, but because it concisely simplifies the equations to a single factor k which you can easily use anywhere (and there are others who are rediscovering that it can be done).
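To illustrate that single-factor idea, here is a small sketch of a few Penner-style easings written as pure functions of k in [0, 1] (the names are mine, not tween.js’s exact API):

```javascript
// Penner-style easings reduced to a single normalized factor k,
// in the spirit of tween.js (illustrative names, not the real API).
var Easing = {
  quadIn: function (k) { return k * k; },
  quadOut: function (k) { return k * (2 - k); },
  quadInOut: function (k) {
    // accelerate for the first half, decelerate for the second
    return k < 0.5 ? 2 * k * k : 1 - 2 * (1 - k) * (1 - k);
  }
};
```

Because each easing is just a map from normalized time to normalized progress, the same function can drive positions, opacity, colors – anything.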

Cubic Bezier Curves
Cubic bezier curves are used in many applications, especially in graphics, motion graphics and animation software (to list some: illustrator, sketch, after effects). For a very simple introduction to cubic bezier curves, you can watch this video. For some reason, I find cubic beziers both simple and complex – a huge amount of possibilities can be created with just 2 control points.
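For concreteness, here is the standard Bernstein-form evaluation of one coordinate of a cubic bezier at parameter u (a sketch; call it once per axis):

```javascript
// Evaluate one coordinate of a cubic bezier at u in [0, 1].
// p0 and p3 are the endpoints; p1 and p2 are the two control points.
function cubicBezier(u, p0, p1, p2, p3) {
  var v = 1 - u;
  return v * v * v * p0 +
         3 * v * v * u * p1 +
         3 * v * u * u * p2 +
         u * u * u * p3;
}
```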

Cubic Bezier Tools
Which would explain the number of popular cubic bezier tools for creating css animations. It was probably Matthew Lein’s Ceaser tool that introduced the concept of approximating Penner’s equations with cubic beziers (which were tweaked by hand, if I recall correctly from a twitter conversation with Lea Verou).

I eventually found that the Ceaser easing functions made it into places like LESS css mixins, Sass/SCSS and Easings.net. I started extracting these values into a JSON format so they’d be easy to use in javascript.

The next thing I did was try plotting the cubic beziers on canvas and adding animations for visualization. What I observed was that Ceaser’s parameters typically followed the shape of Penner’s equations, but the resulting values were still an approximation. I stacked the graphed Penner equations from tween.js against Ceaser’s, and there was some amount of deviation, which is amplified in side-by-side animations or when charted on a larger canvas. One way to improve these easing functions is to import them into a tool and adjust the control points until they match up more closely.

Fortunately, I found another Objective-C library, called CustomMediaTimingFunction, that also tries approximating Robert Penner’s easing equations. My guess is that native cubic bezier support with CAMediaTimingFunction on iOS/Mac was another reason why someone wanted to convert Penner’s equations to cubic bezier functions.

As with Ceaser’s equations before, I extracted them to javascript and drew them on the canvas alongside Penner’s equations. There were still some noticeable differences, but I was pleasantly surprised to find them more accurate to Penner’s equations. I also found that KinkumaDesign’s equations were pretty off for the ease-out equations (possibly due to a different definition of what ease-out is). Another difference was that in addition to ease-in-out, there were ease-out-in easing functions. So generally KinkumaDesign’s was pretty good, and perhaps what I needed to do was hand tweak their values to make them better.

Curve Fitting
But I wasn’t satisfied with the thought of hand tweaking these values, even with a tool, and started thinking about how I could fit a bezier curve to match Penner’s equations. I remembered a curve fitting algorithm by Philip J. Schneider (from 1990’s Graphics Gems, “An Algorithm for Automatically Fitting Digitized Curves”). I had actually ported his C code to JS before, but I just grabbed this gist, generated points from Penner’s equations and tested whether the curve fitting algorithm worked. The result: the curve fitting seemed to generate bezier curves almost similar in shape to the original, but the differences were too great to be accurate. It might be possible that I could tweak the curve fitting parameters, but I didn’t want to go down that route and decided to try the brute force approach.

Brute Force
I decided that humans may not be precise when it comes to adjusting parameters, but the computer may be fast enough to try all the different combinations. The idea I had for brute forcing is this: there are 2 control coordinates, making a total of 4 values. If we subdivide each value into, say, 10 steps, we have 10×10 = 100 possibilities for each control point, and an estimated 100 * 100 = 10,000 combinations in total to check, which isn’t too bad. For each generated cubic bezier combination, I generate the range of values and compare it with the range of values generated by Penner’s equations. For the comparison, the differences are squared individually and then summed, and only the coordinates which give the least sum of squared differences are kept. I ran this with node.js, and subdivisions of 10-20 ran pretty quickly. Subdivisions of 25 (giving precision of up to 0.04) started to give pretty accurate approximations, although the process starts to get slightly slower. For subdivisions of 50 units (0.02 precision), it took a couple of minutes to finish. In case you’re interested, the related node.js script (which is just a couple hundred lines) is here.
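The search can be sketched like this (a simplified, hypothetical version of the script linked above, with the grid clipped to 0..1). Note the getYforX-style bisection: a css cubic-bezier gives x and y as functions of t, so to compare against Penner’s y(x) we first solve x(t) = x:

```javascript
// One coordinate of a cubic bezier with endpoints fixed at 0 and 1.
function bezier(t, c1, c2) {
  var v = 1 - t;
  return 3 * v * v * t * c1 + 3 * v * t * t * c2 + t * t * t;
}

// y as a function of x for a css-style easing curve: solve x(t) = x
// by bisection (x(t) is monotonic for x1, x2 in [0, 1]), then take y(t).
function bezierEaseY(x1, y1, x2, y2, x) {
  var lo = 0, hi = 1, t = x;
  for (var i = 0; i < 30; i++) {
    t = (lo + hi) / 2;
    if (bezier(t, x1, x2) < x) lo = t; else hi = t;
  }
  return bezier(t, y1, y2);
}

// Grid-search all four control values, keeping the combination with
// the least sum of squared differences against the target easing.
function bruteForce(targetEase, steps, samples) {
  var best = null, bestErr = Infinity;
  for (var a = 0; a <= steps; a++)
    for (var b = 0; b <= steps; b++)
      for (var c = 0; c <= steps; c++)
        for (var d = 0; d <= steps; d++) {
          var err = 0;
          for (var i = 0; i <= samples; i++) {
            var k = i / samples;
            var diff = bezierEaseY(a / steps, b / steps,
                                   c / steps, d / steps, k) - targetEase(k);
            err += diff * diff;
          }
          if (err < bestErr) {
            bestErr = err;
            best = [a / steps, b / steps, c / steps, d / steps];
          }
        }
  return best;
}

// eg. fitting QuadIn (k => k * k) on a coarse 10-step grid:
var fit = bruteForce(function (k) { return k * k; }, 10, 20);
```

Even this naive version runs in well under a second on a coarse grid; the cost grows as steps⁴, which is exactly why finer precision takes minutes.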

So here are the brute force results, which I think are pretty satisfactory:

QuadIn: [ 0.26, 0, 0.6, 0.2 ]
QuadOut: [ 0.4, 0.8, 0.74, 1 ]
QuadInOut: [ 0.48, 0.04, 0.52, 0.96 ]
CubicIn: [ 0.32, 0, 0.66, -0.02 ]
CubicOut: [ 0.34, 1.02, 0.68, 1 ]
CubicInOut: [ 0.62, -0.04, 0.38, 1.04 ]
QuartIn: [ 0.46, 0, 0.74, -0.04 ]
QuartOut: [ 0.26, 1.04, 0.54, 1 ]
QuartInOut: [ 0.7, -0.1, 0.3, 1.1 ]
QuintIn: [ 0.52, 0, 0.78, -0.1 ]
QuintOut: [ 0.22, 1.1, 0.48, 1 ]
QuintInOut: [ 0.76, -0.14, 0.24, 1.14 ]
SineIn: [ 0.32, 0, 0.6, 0.36 ]
SineOut: [ 0.4, 0.64, 0.68, 1 ]
SineInOut: [ 0.36, 0, 0.64, 1 ]
ExpoIn: [ 0.62, 0.02, 0.84, -0.08 ]
ExpoOut: [ 0.16, 1.08, 0.38, 0.98 ]
ExpoInOut: [ 0.84, -0.12, 0.16, 1.12 ]
CircIn: [ 0.54, 0, 1, 0.44 ]
CircOut: [ 0, 0.56, 0.46, 1 ]
CircInOut: [ 0.88, 0.14, 0.12, 0.86 ]

If you prefer to keep values clipped to 0..1, here are the parameters:

QuadIn: [ 0.26, 0, 0.6, 0.2 ]
QuadOut: [ 0.4, 0.8, 0.74, 1 ]
QuadInOut: [ 0.48, 0.04, 0.52, 0.96 ]
CubicIn: [ 0.4, 0, 0.68, 0.06 ]
CubicOut: [ 0.32, 0.94, 0.6, 1 ]
CubicInOut: [ 0.66, 0, 0.34, 1 ]
QuartIn: [ 0.52, 0, 0.74, 0 ]
QuartOut: [ 0.26, 1, 0.48, 1 ]
QuartInOut: [ 0.76, 0, 0.24, 1 ]
QuintIn: [ 0.64, 0, 0.78, 0 ]
QuintOut: [ 0.22, 1, 0.36, 1 ]
QuintInOut: [ 0.84, 0, 0.16, 1 ]
SineIn: [ 0.32, 0, 0.6, 0.36 ]
SineOut: [ 0.4, 0.64, 0.68, 1 ]
SineInOut: [ 0.36, 0, 0.64, 1 ]
ExpoIn: [ 0.66, 0, 0.86, 0 ]
ExpoOut: [ 0.14, 1, 0.34, 1 ]
ExpoInOut: [ 0.9, 0, 0.1, 1 ]
CircIn: [ 0.54, 0, 1, 0.44 ]
CircOut: [ 0, 0.56, 0.46, 1 ]
CircInOut: [ 0.88, 0.14, 0.12, 0.86 ]



Some observations:

  • ease-in and ease-out typically fit pretty well
  • It is more difficult to fit the higher powered ease-in-outs, eg. QuintInOut. Sometimes the best fit creates a little bounce at the edges, in which case it’s better to use the clipped parameters or adjust the control points a little. There are also certain easing equations, eg. bounce, which are not directly portable. For these cases, it might be better to chain multiple cubic-bezier easing functions.
  • getYforX() for a cubic bezier function is not simply P0 * (1-u)^3 + P1 * 3 * u * (1-u)^2 + P2 * 3 * u^2 * (1-u) + P3 * u^3. X needs to be re-parameterized to t before getting Y. A couple of npm modules are available if you’d rather not implement this yourself.

Possible Improvements

  1. smarter brute force – if we were to create more subdivisions for more accurate results, the entire brute force process would take exponentially longer. To reduce this time, we could a) run on multiple processors, b) run it on the gpu, eg. https://github.com/andyhall/glsl-projectron, or c) be a little smarter about which coordinates to run brute force on
  2. smarter fitting – perhaps there are better curve fitting algorithms that would make this brute force approach less relevant, eg. this

While this may be something academic to do, I don’t think it’s really necessary right now. It’s probably more important to think about what we can do with it, and what we can do going forward.

Conclusion & Going Forward
So in this post I’ve written a new cubic bezier tool, suggested a new set of cubic bezier approximations for Robert Penner’s easing equations, and shown a little of that process.

What I’ve done here might simply be an improvement on Ceaser’s and KinkumaDesign’s parameters, or you could think of it as a “purer” form of Penner’s implementation with cubic beziers. I started thinking about all this because of the possibility of using Penner’s equations while making them easily editable with curves in my animation tool Timeliner. There might also be other curves and splines to consider, but allowing Penner’s equations to map to cubic bezier curves is a start. There is also a thread on better integrating editable and non-editable easing in blender which is worth considering.

Hopefully, someone finds this useful. The code is on github. Sorry if it looks like a mess, it was hacked together overnight.

Merry Christmas and have a happy new year!

Resizing, Moving, Snapping Windows with JS & CSS

Imagine you have a widget laid out with CSS: how would you add some code to give it the ability to resize itself? The behaviour is so ingrained in our present window managers and GUIs that it’s quite easily taken for granted. While it’s quite possible that there are plugins or frameworks which do this, the challenge I gave myself was to do it in vanilla javascript, and to handle the resizing without adding more divs to the dom. (I thought adding additional divs to use as draggable bars was pretty common.)

Past work
This reminds me: I wanted similar behaviour for ThreeInspector, and while hacking on the idea, I went with the approach of using the CSS3 resize property for the widget. The unfortunate thing was that min-width and min-height were broken for a really long time in webkit (the bug was filed back in 2011, and I’m not entirely sure what the status is now). Having been bitten by that bug, I became hesitant every time I think of the css3 resize approach.


JS Resizing
So, for my own challenge, I started with a single purple div and added a bit of js.


Done within 100 lines of code, this turned out not to be difficult. The trick is adding a mousemove handler to document (not document.body, as that fails in FF), and calculating when the mouse is within a margin of the edge of the div. Another reason to always add handlers to document instead of a target div is that you keep getting mouse events even when the cursor moves out of the defined boundary. This is useful for dragging and resizing behaviours, and especially in resizing, you wouldn’t want to waste time hunting bugs because the events and the divs’ resizing are out of sync.
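A hedged sketch of that edge hit-test (the margin size and all names here are my own, not the actual demo code):

```javascript
var MARGIN = 8; // px from an edge that counts as a resize handle

// Given a div and a mouse event, work out whether the cursor is on
// a resizable edge/corner, in the interior, or outside entirely.
function hitTest(div, e) {
  var b = div.getBoundingClientRect();
  var inside = e.clientX > b.left - MARGIN && e.clientX < b.right + MARGIN &&
               e.clientY > b.top - MARGIN && e.clientY < b.bottom + MARGIN;
  if (!inside) return null;
  return {
    left:   Math.abs(e.clientX - b.left)   < MARGIN,
    right:  Math.abs(e.clientX - b.right)  < MARGIN,
    top:    Math.abs(e.clientY - b.top)    < MARGIN,
    bottom: Math.abs(e.clientY - b.bottom) < MARGIN
  };
}

// The handler goes on document (not document.body), so we keep
// receiving events even after the cursor leaves the div.
if (typeof document !== 'undefined') {
  document.addEventListener('mousemove', function (e) {
    // var hit = hitTest(widget, e); // then set the cursor / resize bounds
  });
}
```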

Also, for the first time, I made extensive use of the document’s event.clientX and event.clientY together with div.getBoundingClientRect(). That gets me almost everything I need for handling positions, sizes and events, although it’s possible that getBoundingClientRect might not be as performant as reading offsets.

What’s nice about using JS vs pure CSS3 resize is that you get to decide which sides of the div you wish to allow resizing on. I went for the 4 sides and 4 corners, and the fun had just started, so next I began implementing moving.

Handling basic moving / dragging just needs a few more lines of code. Pseudocode: on mousedown, check that the cursor isn’t on the edge (reserved for resizing), and store where the cursor and the bounds of the box are. On mousemove, update the box’s position.
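That pseudocode as a runnable sketch (the document object is passed in so the sketch is testable; the names are mine):

```javascript
// Mousedown stores the grab offset; mousemove on the document
// (so events keep firing outside the div) repositions the box.
function makeDraggable(div, doc) {
  var drag = null;
  div.addEventListener('mousedown', function (e) {
    var b = div.getBoundingClientRect();
    drag = { dx: e.clientX - b.left, dy: e.clientY - b.top };
  });
  doc.addEventListener('mousemove', function (e) {
    if (!drag) return; // not dragging
    div.style.left = (e.clientX - drag.dx) + 'px';
    div.style.top = (e.clientY - drag.dy) + 'px';
  });
  doc.addEventListener('mouseup', function () {
    drag = null;
  });
}
```

In the real version, mousedown would first run the edge hit-test and defer to resizing when the cursor is on an edge.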

Still simple, so let’s try the next challenge of snapping the box to the edges.

Despite the bad things Mac might say about PC, one thing that has been pretty good since Windows 7 is its snap feature. On my mac I use Spectacle, which is a replacement for Windows’ window-docking management. I took inspiration from this feature in Windows and implemented it with JS and CSS.


One sweet detail in Snap is the way a translucent window shows where the window would dock or snap into place before you release your mouse. So in my implementation, I used an additional div with slight transparency, one z-index lower than the div I’m dragging. The CSS transition property was used for a more organic experience.

There are slight deviations from the actual Aero experience that Windows users may notice. In Windows, dragging a window to the top snaps the window full screen, while dragging the window to the bottom of the screen has no effect. In my implementation, the window can be docked to the upper half or lower half, or to fullscreen if the window gets dragged further beyond the edge of the screen. In Windows, a vertical half is only possible with the keyboard shortcut.

Another difference is that Windows snaps happen when the cursor touches the edge of the screen; my implementation snaps when the div’s edge touches the browser window’s edge. I thought this might be better, because users typically use smaller movements for non-operating-system gestures. One last difference is that Windows’ implementation sends tiny ripples out from the point where the cursor touches the screen. Ripples are nice (I noticed they are an element used frequently in Material Design), but I’ll leave them as an exercise for another time.
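The snap test itself can be a small pure function. Here’s a sketch under my own simplified rules (left/right halves, top/bottom halves, a px threshold; not the actual codepen code):

```javascript
var SNAP = 20; // how close a div edge must be to the window edge

// Given the dragged div's rect and the window size, return the
// bounds the translucent preview div should take, or null for no snap.
function snapTarget(rect, winW, winH) {
  if (rect.left < SNAP)
    return { left: 0, top: 0, width: winW / 2, height: winH };
  if (rect.right > winW - SNAP)
    return { left: winW / 2, top: 0, width: winW / 2, height: winH };
  if (rect.top < SNAP)
    return { left: 0, top: 0, width: winW, height: winH / 2 };
  if (rect.bottom > winH - SNAP)
    return { left: 0, top: winH / 2, width: winW, height: winH / 2 };
  return null; // no snap: hide the preview div
}
```

On mousemove the preview div animates to these bounds (the CSS transition does the easing); on mouseup the dragged div adopts them.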

As afterthoughts, I added touch support and limited mousemove updates to requestAnimationFrame. Here’s the demo – feel free to try it and check out the code on codepen.

See the Pen Resize, Drag, Snap by zz85 (@zz85) on CodePen.

Exploring Simple Noise and Clouds with Three.js

A while back I attempted writing a simple CloudShader for use in three.js scenes. Exploring the use of noise, it wasn’t difficult to create clouds which weren’t visually too bad looking. Then another idea came: to add crepuscular rays (“god rays”) to make the scene more interesting. Failing to mix them together correctly on my first tries, I left these experiments abandoned – until one fine day (or rather one night) I decided to give it another shot and finally got it to work, so here’s the example.

(Now comes with sliders for clouds control!)

In this post, I’m going to explain some of the ideas behind generating procedural clouds with noise (these topics frequently go hand in hand). While noise might be bread and butter in the world of computer graphics, it did take me a while to wrap my head around it.

Noise could be described as a pseudo-random texture. Pseudo-random means that it might appear to be totally random, but being generated by the computer, it is not. Many might also refer to noise as Perlin noise (thanks to the work of Ken Perlin), but there are really different forms of noise, eg. Perlin noise, simplex noise, value noise, wavelet noise, gradient noise, Worley noise, simulation noise.

The approach I use for my clouds could be considered value noise. Let’s start creating some random noise by filling a DataTexture of 256 by 256 pixels.

// Generate random noise texture
var noiseSize = 256;
var size = noiseSize * noiseSize;
var data = new Uint8Array( 4 * size );
for ( var i = 0; i < size * 4; i ++ ) {
    data[ i ] = Math.random() * 255 | 0;
}
var dt = new THREE.DataTexture( data, noiseSize, noiseSize, THREE.RGBAFormat );
dt.wrapS = THREE.RepeatWrapping;
dt.wrapT = THREE.RepeatWrapping;
dt.needsUpdate = true;

Now if we were to now render this texture, it would look really random (obviously) like a broken TV channel.

(we set alpha to 255, and r=g=b to illustrate the example here)

Let’s say we were to use the pixel values as a height map for a terrain (another use for noise): it would look really disjoint or random. One way to fix that is to interpolate the values from one point to another. This becomes smooth noise. The nice thing about textures is that this interpolation can be done on the graphics unit automatically. By default, or by setting the `.minFilter` and `.magFilter` properties of a THREE.Texture to `THREE.LinearMipMapLinearFilter`, you get almost free interpolation when you read a point on the texture between 2 or more pixels.
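To see what that free interpolation is actually doing, here is the equivalent bilinear interpolation in plain JS over a single-channel view of the noise (a sketch for illustration only – on the GPU the texture unit does this for you):

```javascript
// Bilinear interpolation of a size×size, one-value-per-pixel noise
// array at a fractional position (x, y), wrapping like RepeatWrapping.
function sampleSmooth(data, size, x, y) {
  function px(i, j) {
    i = ((i % size) + size) % size; // wrap both axes
    j = ((j % size) + size) % size;
    return data[j * size + i];
  }
  var x0 = Math.floor(x), y0 = Math.floor(y);
  var fx = x - x0, fy = y - y0; // fractional parts
  var top    = px(x0, y0)     * (1 - fx) + px(x0 + 1, y0)     * fx;
  var bottom = px(x0, y0 + 1) * (1 - fx) + px(x0 + 1, y0 + 1) * fx;
  return top * (1 - fy) + bottom * fy;
}
```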

Still, this isn’t enough to look anything like clouds. The next step is to apply fractional Brownian motion: a summation of successive octaves of noise, each with higher frequency and lower amplitude. This generates fractal noise, which makes a more interesting and continuous texture. I’m doing this in the fragment shader with a few lines of code…

float fnoise(vec2 uv) {
    float f = 0.;
    float scale = 1.;
    for (int i = 0; i < 5; i++) {
        scale *= 2.;
        f += texture2D(noiseTexture, uv * scale).x / scale;
    }
    return f;
}

(where noiseTexture is the sampler2D uniform holding our DataTexture)
Given that my data texture has 4 channels (RGBA), one could pull out 3 or 4 components if needed, like

vec3 fNoise(vec2 uv) {
    vec3 f = vec3(0.);
    float scale = 1.;
    for (int i = 0; i < 5; i++) {
        scale *= 2.;
        f += texture2D(noiseTexture, uv * scale).xyz / scale;
    }
    return f;
}
Now if you were to render this, it might look similar to the perlinNoise class in flash/actionscript or the cloud filter in photoshop.


2D Clouds
Now that we have a procedural cloud shader, how do we integrate it into a three.js scene? One way is to texture it over a sphere or a skybox, but the approach I used was to create a paraboloid shell generated with ParametricGeometry, similar to the approach Steven Wittens used to render auroras in his demo “NeverSeenTheSky”. The code / formula I use is simply this:

function SkyDome(i, j) {
    i -= 0.5;
    j -= 0.5;
    var r2 = i * i * 4 + j * j * 4;
    return new THREE.Vector3(
        i * 20000,
        (1 - r2) * 5000,
        j * 20000
    );
}

var skyMesh = new THREE.Mesh(
    new THREE.ParametricGeometry(SkyDome, 5, 5),
    cloudMaterial // the cloud ShaderMaterial
);
Now for the remaining step – probably the most important one in simulating clouds – making use of and tweaking the values from the fractal noise to obtain the kind of clouds you want. This is done in the fragment shader, where you decide what thresholds to apply (eg. cutting off the high or low values) and what curve or function to apply to the signals. 2 articles which gave me ideas are Hugo Elias’s clouds and Iñigo Quilez’s dynamic 2d clouds. Apart from these, I added a function (where o is opacity) to reduce the clouds nearer the horizon, to create the illusion of clouds disappearing into the distance.

// applies more transparency near the horizon
// to create the illusion of distant clouds
o = 1. - o * o * o * o;

Crepuscular rays
So I’m going to take a break from explaining my ideas for producing the clouds here. This might be disappointing if you were expecting more advanced shading techniques, like ray marching or volumetric rendering, but I am trying to see how far we can go with just the basic/easy stuff. If adding the crepuscular rays works, it will produce a more impressive effect, and we can avoid the complicated stuff for the moment.

So for the “god rays”, I started with the webgl_postprocessing_godrays example in three.js, implemented by @huwb using a technique similar to Crytek’s. After some time debugging why my scene didn’t render correctly, I realized that my cloud shader (a ShaderMaterial) didn’t play well with the depth rendering step (which overrides the scene’s materials with the default MeshDepthMaterial) needed to compute occluded objects correctly. So I manually override the materials for the depth rendering step, and pass a uniform to the CloudShader to discard or write depth values based on the color and opacity of the clouds.

I hope I’ve introduced the ideas behind noise and shown how simple it can be to generate clouds. One way to get started experimenting is to use Firefox, which now has a Shader Editor in its improved developer tools that allows experimenting with shaders in real time. Much is up to one’s imagination or creativity – for example, turning the clouds into moving fog.

Clouds are also such a common and interesting topic that I believe there is much (advanced) literature on them (websites, blogs and papers like this). As mentioned earlier, the links I found to be good starting points are by Hugo Elias and Iñigo Quilez. One website which I found explains noise in an easy to understand fashion: http://lodev.org/cgtutor/randomnoise.html

Before ending, I would love to point out a couple of other realtime browser-based examples I love, implemented with very different or creative approaches.

1. mrdoob’s clouds which uses billboards sprites – http://mrdoob.com/lab/javascript/webgl/clouds/
2. Jaume Sánchez’s clouds which uses css – http://www.clicktorelease.com/code/css3dclouds/
3. IQ Clouds which uses some form of volumetric ray marching in a pixel shader! – https://www.shadertoy.com/view/XslGRr

So if you’re interested, read up and experiment as much as you can, for which there are never ending possibilities!

Rendering Lines and Bezier Curves in Three.js and WebGL

It’s relatively simple to draw lines and curves with CanvasRenderingContext2D. Not so with WebGL. In this post, I’ll explore some ways to draw lines and (quadratic bezier) curves in three.js with webgl (and on the gpu).

It was probably Firefox 3 that led me to explore the use of the Canvas element and the 2d context. Then it was Firefox 4 that led me to explore the webgl context, followed by discovering three.js. That was probably how I got involved with 3d graphics. The point is that there’s some connection between the 2nd and 3rd dimensions. Enough with my history and let’s start exploring some vector graphics.

Straight lines
A straight line connects 2 points.

2D Canvas – Here’s how we draw a line from (x1, y1) to (x2, y2) in canvas 2d.

ctx = canvas.getContext('2d');
ctx.lineWidth = 2;
ctx.strokeStyle = 'black';
ctx.beginPath();
ctx.moveTo(x1, y1);
ctx.lineTo(x2, y2);
ctx.stroke();

Three.js/WebGL – Now here’s how we do it in three.js/WebGL.

geometry = new THREE.Geometry();
geometry.vertices.push(new THREE.Vector3(x1, y1, 0));
geometry.vertices.push(new THREE.Vector3(x2, y2, 0));
material = new THREE.LineBasicMaterial( { color: 0xffffff, linewidth: 2 } );
line = new THREE.Line(geometry, material);

Drawing straight lines in webgl requires a little more code (even with three.js), but not really too much additional code for now.

Quadratic Bézier
A quadratic bézier curve connects points (x0, y0) and (x2, y2), with its shape guided by the control point (x1, y1).

2D Canvas – Just one function name change from a straight line.

ctx = canvas.getContext('2d');
ctx.lineWidth = 2;
ctx.strokeStyle = 'black';
ctx.beginPath();
ctx.moveTo(x0, y0);
ctx.quadraticCurveTo(x1, y1, x2, y2);
ctx.stroke();

Three.js/WebGL – Now it gets a little different.

geometry = new THREE.Geometry();
curve = new THREE.QuadraticBezierCurve3();
curve.v0 = new THREE.Vector3(x0, y0, 0);
curve.v1 = new THREE.Vector3(x1, y1, 0);
curve.v2 = new THREE.Vector3(x2, y2, 0);
for (j = 0; j <= SUBDIVISIONS; j++) {
   geometry.vertices.push( curve.getPoint(j / SUBDIVISIONS) );
}

material = new THREE.LineBasicMaterial( { color: 0xffffff, linewidth: 2 } );
line = new THREE.Line(geometry, material);

Unlike the canvas 2d approach, we can't simply change one function call, because three.js / webgl doesn't support bezier curves as a primitive geometry. We need to subdivide the curve and draw it as line segments. With sufficient line segments, they come close to representing a nice curve.

Here are a couple of drawbacks though.

1. There needs to be sufficient subdivisions or the curve segments might look like straight lines instead
2. More objects and draw calls are created in the process.
3. ANGLE does not render lineWidth > 1, and the WebGL specification does not require line widths other than 1 to be supported.

For #1, the number of divisions can be estimated so the resulting curve still looks pleasant at a particular zoom level. For #2, one can switch to BufferGeometry to reduce overheads.
#3 is the least easily fixed. I encountered it with my bezier lights experiment last year.

WebGL LineWidth Limitations


My "fix" then was to alter the experiment a little.


Of course that doesn't exactly fix the lineWidth issue and someone tweeted about the approach they used for drawing lines in WebGL.

So this was last year. This year I revisited this thinking about how to render lines in WebGL for a visualization project I'm working on.

(the screenshot here employs the easier canvas 2d approach)

The handy ParticleGeometry class

Earlier I created a class called ParticleGeometry for rendering the leaves in a cherry blossom experiment.

ParticleGeometry (probably not the best name, again) is a wrapper around BufferGeometry (for performance reasons). It was created to render rotatable sprites/particles in a more optimized fashion without being too difficult to use. 2 triangles are used for each sprite/texture, and they are rotated in the vertex shaders (instead of in js) with values passed in via attributes. I find this approach more efficient than typical approaches in three.js (eg. using multiple plane meshes or using SpritePlugin). Later I discovered I could easily modify this class to draw lines and curves.
ParticleGeometry for Lines

Let's start with initializing a ParticleGeometry. LINES is the number of lines or sprites we allocate for our line pool. Internally, it creates twice that number of triangles, plus the other attributes required.

lineGeometry = new THREE.ParticleGeometry( LINES, size );

Straight Lines in WebGL
The first modification to ParticleGeometry is a setLine method.

THREE.ParticleGeometry.prototype.setLine = function(line, x1, y1, x2, y2, LINE_WIDTH) {

Based on the line width, this method updates the 2 triangles for the referenced line from the starting point to the ending point. With this, we solve the problem of rendering lines with widths greater than 1. In fact, we could have custom individual line widths, which is probably more difficult to do with the default LineBasicMaterial.
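The geometry of such a setLine-style update can be sketched as follows: offset the two endpoints along the segment's unit normal to get the four corners of a width-thick quad, which the two triangles then cover. This is my own illustration, not the actual class code:

```javascript
// Corners of a width-thick quad around the segment (x1,y1)-(x2,y2):
// [a, b] at the start, [c, d] at the end, suitable for triangles
// (a, b, c) and (b, d, c).
function lineQuad(x1, y1, x2, y2, width) {
  var dx = x2 - x1, dy = y2 - y1;
  var len = Math.sqrt(dx * dx + dy * dy) || 1; // avoid division by zero
  var nx = (-dy / len) * (width / 2); // unit normal scaled by half width
  var ny = (dx / len) * (width / 2);
  return [
    [x1 + nx, y1 + ny], [x1 - nx, y1 - ny],
    [x2 + nx, y2 + ny], [x2 - nx, y2 - ny]
  ];
}
```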

Demo: testing lines

Benchmarking Canvas 2D performances

Before we continue, I wanted to see how fast canvas 2d renders lines and curves.

2000 lines - 60fps
5000 lines - 38fps
10000 lines - 20fps.

500 quadratic curves - 60fps
1000 quadratic curves - 40fps
2000 quadratic curves - 25fps
5000 quadratic curves - 12fps

These numbers look pretty decent to me. However these are stroked with lineWidth = 1. When I increase lineWidth to 2, the frame rates drop.

2000 lines - 38fps
500 quadratic curves - 6fps

In contrast, with ParticleGeometry.setLine() I get 30fps for 10000 lines – a tad faster, but not dramatically faster than canvas 2d. The biggest difference came when I increased lineWidths to 5: I could still get 25fps, and 18fps at lineWidth 10.

The script used for running these numbers can be found in this gist.

Rendering Bezier curves with WebGL
So if we are able to draw straight lines with variable widths, how do we tackle bezier curves? I started experimenting with a couple of concepts.

Iteration 1 - Place a 2d canvas as an overlay above the webgl context and use the 2d context api. Depending on your needs, this might not be a bad idea given canvas 2d performance isn't bad at all. It's just an additional bit of effort projecting the 3d points back into 2d points for rendering.

Iteration 2 - Extend the ParticleGeometry.setLine() technique to bezier curves. Based on the performance numbers, if we have a pool of 4000 lines to work with at 60fps, we can calculate how many line segments we want per curve. Say we are satisfied with 20 segments per curve: we could then draw 200 curves. This could work, but based on our canvas 2d numbers, it doesn't look very promising.

Iteration 3 - The Blinn-Loop approach.
A pretty well known technique developed by Charles Loop and Jim Blinn at Microsoft for rendering vector art on the gpu. Their paper is called Resolution Independent Curve Rendering using Programmable Graphics Hardware.

I've used a similar approach earlier with this three.js vector font experiment.

In this approach, each triangle is used to fill a quadratic curve segment. However, to stroke a bezier curve, we need to draw a line instead of filling an area. So I modified the fragment shader GLSL code with a threshold.

float inCurve(vec2 uv) {
    float d = uv.x * uv.x - uv.y;
    if (d > -0.05 && d < 0.) return -1.;
    return 1.;
}
Demo: stroke bezier with modified inCurve function
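To make the threshold concrete: in the Blinn-Loop setup each vertex carries uv coordinates such that the curve is the zero set of u² - v, and the modified test keeps only a thin band just inside that zero set. Restated in JS for clarity (illustrative only):

```javascript
// A point (u, v) in the triangle's curve-space is "on the stroke" when
// u*u - v falls in a thin band just inside the curve (threshold 0.05).
function inCurve(u, v) {
  var d = u * u - v;
  return (d > -0.05 && d < 0.0) ? -1.0 : 1.0; // -1 = draw, 1 = discard
}
```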

Performance: 1000 curves - 60fps, 5000 curves - 30fps, 10000 curves - 20fps. So far, this looks like the best performance for bezier curves. However, controlling the lineWidth is difficult in this approach.

So let's turn on GL_OES_standard_derivatives to be able to estimate distances in screen space.

Demo: stroke bezier with modified sdCurve function

Now we have slightly better control of the lineWidth, however there's still an issue. Because we are only using 1 triangle, rendering becomes a little problematic when the stroke width is thicker than the height of the triangle.

Iteration 4 - render the bezier curve stroke in the fragment shader using a distance function.

A couple of days ago I watched a talk about GLyphy. In short, GLyphy allows you to render vector fonts by encoding the vectors in a texture to be rendered on the GPU. What was interesting was the way it converts bezier curves to arc segments so they are easier to compute in the shaders. It is also interesting how the original concept came from research on rendering vector graphics.

Similarly, Taylor Holliday ported the HLSL code found in this paper to a GLSL shader that renders bezier curves by distance approximation, and I was excited to be able to use it.

Screenshot 2014-09-08 08.01.27

The next challenge for me was how to employ this technique with my ParticleGeometry class - more specifically, how to position triangles so they could be used to draw the bezier strokes. I added a method just to do this.

THREE.ParticleGeometry.prototype.setBezierGrid = function(line, v0, v1, v2, LINE_WIDTH)

Basically I use 2 triangles to cover a grid over where the bezier stroke will be drawn. Based on the line width, I give sufficient padding on the left, right, top and bottom so that the strokes do not get clipped. I pass the transformed control points to the shaders via attributes, and this works pretty well!
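The bounds computation is straightforward if you recall that a quadratic bezier always lies inside the convex hull of its control points, so the control-point bounding box padded by half the line width safely contains the stroke. A sketch (hypothetical helper, not the actual setBezierGrid internals):

```javascript
// Axis-aligned bounds for the 2-triangle grid covering a quadratic
// bezier stroke: the control-point bounding box, grown by lineWidth / 2
// on every side so the stroke is never clipped.
function bezierGridBounds(v0, v1, v2, lineWidth) {
  var pad = lineWidth / 2;
  return {
    minX: Math.min(v0[0], v1[0], v2[0]) - pad,
    maxX: Math.max(v0[0], v1[0], v2[0]) + pad,
    minY: Math.min(v0[1], v1[1], v2[1]) - pad,
    maxY: Math.max(v0[1], v1[1], v2[1]) + pad
  };
}
```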

Demo: stroke bezier with fast distance estimation

There are however some caveats with this technique:
1. It breaks for thicker line widths. However, it should be sufficient for most small line widths.
2. It breaks when the control point is collinear with the start and end points. For that iq has a fix which draws a straight line in these scenarios.
3. It breaks when the control point is almost collinear with the start and end points. This disturbs me a little, but could be "fixed" by drawing a straight line with a bigger threshold.

Iteration 5 - solve for the actual distance. To fix the minor issue with the 4th iteration, I found an exact distance-to-bezier-curve solver here.

The nice thing is that I just need to swap one glsl function call from the 4th iteration and we are done.

Demo: stroke bezier with distance solver
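As a CPU-side sanity check of the idea (brute force, not the shader's closed-form solver), the distance from a pixel to a quadratic bezier can be estimated by sampling t finely; the pixel is inside the stroke when that distance is at most lineWidth / 2.

```javascript
// Brute-force distance from point p to the quadratic bezier (p0, p1, p2)
// by sampling B(t) at `steps` + 1 parameter values and taking the minimum.
function distToQuadratic(p, p0, p1, p2, steps) {
  var best = Infinity;
  for (var i = 0; i <= steps; i++) {
    var t = i / steps, s = 1 - t;
    var x = s * s * p0[0] + 2 * s * t * p1[0] + t * t * p2[0];
    var y = s * s * p0[1] + 2 * s * t * p1[1] + t * t * p2[1];
    var dx = p[0] - x, dy = p[1] - y;
    best = Math.min(best, Math.sqrt(dx * dx + dy * dy));
  }
  return best;
}
```

The shader versions do the same thing analytically per fragment, which is exactly why the choice of distance function is the one call you swap between iterations 4 and 5.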

Now for some numbers:

4th iteration
1000 curves (~line width 1-3) = 45fps
2000 curves (~line width 1-3) = 30fps

5th iteration
1000 curves (line width 1) = 45fps
1000 curves (line width 3) = 30fps
2000 curves (line width 1) = 23fps

Here we see that the 5th approach is slightly slower, but not by much. The values are pretty similar to canvas 2d bezier speeds at lineWidth 1, but the webgl performance scales better at larger line widths.

From thin straight lines
Screenshot 2014-09-08 09.42.15

to thick bezier lines in WebGL
Screenshot 2014-09-08 09.44.38

(Alright, aesthetically it isn't better, but it's a WIP.)


Hopefully this post has shown you how you can render lines and bezier curves in both canvas 2d and webgl.

I have shown how the Canvas 2D API is relatively easy to use, and the performance of 2d canvas is nothing shabby. For most use cases, why not use it?

I have also shown that despite challenges with working with webgl at a lower level, it is still possible to render lines and curves at great quality and speed.

The additional effort may get you some performance gains for large quantities of thick curves. There are also some other reasons why you may opt to draw bezier curves the webgl way:

- make use of post processing
- integrate gpgpu
- keep things in the webgl workflow
- control other little details (eg. shading)

I'll leave you to consider these tradeoffs when deciding what to use.

Lastly, I'll leave you with some inspirations on some cool demos that can be created with bezier curves.

- Bateria by grgrdvrt
- Fat cat by Roxik
- Fluid Jelly by Fabien Bizot
- Muscular-Hydrostats by soulwire.