A Barnes-Hut tree gravity benchmark

In a recent post (An interactive Barnes-Hut tree) I talked briefly about one of the “fun” projects I’m working on, When Giants Collide (work in progress, GitHub repo), and promised myself to blog about its development as I went along. I just finished refining the algorithm for building the tree and calculating the gravitational force.

The small app above is a benchmark pitting the Barnes-Hut algorithm for computing gravity (an O(N log N) algorithm) against brute-force direct summation (an O(N^2) algorithm). It computes the gravitational field of a random collection of particles using both methods, for N = 256 up to N = 16,384; less time spent means a faster algorithm. The time needed to compute the gravitational force is averaged over 12 iterations to smooth out fluctuations. Results are plotted in real time.
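For concreteness, the brute-force baseline is conceptually just the nested pairwise loop below. This is a minimal 2D sketch, not the actual When Giants Collide code; the function name, array layout, and softening parameter are all illustrative.

```javascript
// Minimal sketch of O(N^2) direct summation (illustrative, not the WGC code).
// x, y, mass are plain arrays or typed arrays of length N; soft2 is a small
// softening term added to avoid singularities when two particles get close.
function directSummation(x, y, mass, G, soft2) {
    var N = x.length;
    var ax = new Float64Array(N);
    var ay = new Float64Array(N);
    for (var i = 0; i < N; i++) {
        for (var j = 0; j < N; j++) {
            if (i === j) continue;
            var dx = x[j] - x[i];
            var dy = y[j] - y[i];
            var r2 = dx * dx + dy * dy + soft2;      // softened distance squared
            var inv_r3 = 1.0 / (r2 * Math.sqrt(r2)); // 1 / r^3
            ax[i] += G * mass[j] * dx * inv_r3;
            ay[i] += G * mass[j] * dy * inv_r3;
        }
    }
    return { ax: ax, ay: ay };
}
```

Every particle interacts with every other particle, hence the O(N^2) cost; the Barnes-Hut tree replaces distant groups of particles with a single aggregate mass, which is where the O(N log N) scaling comes from.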

Lastly, it calculates an overall “score” for the JavaScript interpreter by running only the Barnes-Hut algorithm at N = 16,384. You can see a table of scores for a few different browsers and devices I have access to (lower is better). If you’d like, send me your score!

Some observations about JavaScript optimization

Chrome turned out to be the fastest browser at this particular benchmark. Surprisingly, an earlier version of the same code was actually the slowest in Chrome on my MacBook — almost 6x slower than Safari! That was quite unexpected, since in my (limited) experience building web apps Chrome tends to edge out other browsers in JavaScript execution speed.

So I waded a little deeper into the code to understand what was making it so inefficient. This Google optimization guide and this post on HTML5Rocks (specifically about optimizing for V8, the JavaScript engine embedded in Chrome) proved very useful. What I learned:

  1. Use the idiomatic JavaScript style for creating classes (prototypes, new, straightforward constructors, etc.) instead of an object factory plus closures (see the sketch after this list).
  2. Avoid creating closures when possible.
  3. Use node.js to profile the application and identify functions that are not getting optimized (using --trace-opt).
  4. Both Safari and Firefox had good baseline scores even before these optimizations. I found it quite surprising that V8 was much more fastidious about my code than the other JavaScript engines.
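To illustrate point 1, here is roughly the kind of change involved. This is an illustrative sketch rather than the actual WGC code; the names are made up.

```javascript
// Object-factory style: state captured in a closure, fresh function objects
// allocated on every call (the pattern I moved away from).
function makeBody(m, x, y) {
    return {
        mass: function () { return m; },
        position: function () { return [x, y]; }
    };
}

// Idiomatic prototype-based style: plain properties on `this`, methods shared
// on the prototype (the pattern the V8 guides recommend).
function Body(m, x, y) {
    this.mass = m;
    this.x = x;
    this.y = y;
}
Body.prototype.position = function () {
    return [this.x, this.y];
};

var b = new Body(1.0, 0.5, -0.5);
```

The prototype version shares a single method object across all instances and keeps each Body a simple bag of properties, which, as I understand it, is what makes it easier for V8 to optimize.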

Another finding was how much slower alternative browsers (e.g. Chrome, Mercury) are on iOS. Alternative browsers use the same engine as Safari, but they don’t have access to Nitro’s Just-In-Time compilation — this means that they will be quite a bit slower than Safari on a computationally-intensive benchmark. How much slower? On my iPhone 5S, almost a factor of 10!

Web workers are awesome

The benchmark runs in a separate thread, so the page itself remains responsive. This is accomplished using Web Workers, a relatively new technology that lets a page spin off threads to do computation-heavy work. It’s quite well supported, and I found it pretty easy to learn (aside from some surprising quirks). I plan on spinning off some of the tasks in Systemic Live — which currently either block the interface or use timers — into Web Workers (it’ll be quite a bit of work, so don’t hold your breath).
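If you haven’t used Web Workers before, the API is just message passing between the page and a worker script. The snippet below is a stripped-down sketch of the pattern; the file names and message fields are invented for illustration and are not the actual benchmark code.

```javascript
// main.js — hand the heavy computation to a worker so the page stays responsive.
var worker = new Worker('benchmark-worker.js');

worker.onmessage = function (e) {
    console.log('N = ' + e.data.N + ': ' + e.data.elapsed.toFixed(1) + ' ms');
};

worker.postMessage({ N: 16384, iterations: 12 });
```

```javascript
// benchmark-worker.js — runs in its own thread.
self.onmessage = function (e) {
    var t0 = Date.now();
    // ... build the tree and compute the forces e.data.iterations times ...
    var elapsed = (Date.now() - t0) / e.data.iterations;
    self.postMessage({ N: e.data.N, elapsed: elapsed });
};
```

The worker has no access to the DOM, so everything it needs is passed in through postMessage and everything it produces comes back the same way.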

Link: Why I’m betting on Julia

Evan Miller: Why I’m betting on Julia

Julia looks to me like a very promising programming language. Its main appeal is bringing C-like speed to a high-level, well-designed language that is tailored to scientific computation. From the article:

I hesitate to make the comparison, but it’s poised to do for technical computing what Node.js is doing for web development — getting disparate groups of programmers to code in the same language. […]

If you work in a technical group that’s in charge of a dizzying mix of Python, C, C++, Fortran, and R code — or if you’re just a performance-obsessed gunslinging cowboy shoot-from-the-hip Lone Ranger like me — I encourage you to download Julia and take it for a spin.

The “dizzying mix” is very much how I would describe a lot of the code I use and write on a daily basis. R is too slow for intensive computations, so I drop down to C to write most of my algorithms and then wrap the C code in a higher-level R interface (via FFI); a similar state of affairs exists for Python. Getting communication right between the two worlds (the low-level C and the high-level language) can be daunting, and frankly, the mental context switch is exhausting. (Let’s not even bring Fortran into the discussion.)

I really hope Julia will succeed. What it needs right now is a first-class plotting facility and a bit more widespread adoption, but once that is achieved, I think that Julia will be a serious contender for mindshare.