asm.js on status.modern.ie

I was excited to see that asm.js has been added to status.modern.ie as “Under Consideration”. Since asm.js isn’t a JS language extension like, say, Generators, what this means is that Microsoft is currently considering adding optimizations to Chakra for the asm.js subset of JS. (As explained in my previous post, explicitly recognizing asm.js allows an engine to do a lot of exciting things.)
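
For anyone who hasn’t looked at asm.js before, a minimal module looks roughly like the sketch below (an illustrative example, not code from any particular project): the "use asm" pragma lets an engine that recognizes the subset validate and compile the whole module ahead of time, while any other engine simply runs it as ordinary JavaScript.

    function AddModule(stdlib, foreign, heap) {
      "use asm";
      function add(x, y) {
        x = x | 0;           // parameter type annotation: int
        y = y | 0;
        return (x + y) | 0;  // return type annotation: int
      }
      return { add: add };
    }

    // Ordinary JS links and calls into the module directly:
    var add = AddModule(window, {}, new ArrayBuffer(0x10000)).add;
    console.log(add(2, 3)); // 5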

Going forward, we are hopeful that, after consideration, Microsoft will switch to “In Development” and we are quite happy to collaborate with them and any other JS engine vendors on the future evolution of asm.js.

On a more general note, it’s exciting to see that there have been across-the-board improvements on asm.js workloads in the last 6 months. You can see this on arewefastyet.com or by loading up the Dead Trigger 2 demo in Firefox, Chrome or (beta) Safari. Furthermore, with the recent release of iOS 8, WebGL is now shipping in all modern browsers. The future of gaming and high-performance applications on the web is looking good!

5 Responses to asm.js on status.modern.ie

  1. I’m still in two minds about asm.js. Here’s my reasoning:

    Positives:
    – asm.js code runs without modification in existing browsers, but runs faster in browsers that take advantage of the code’s structure
    – It provides a nice target for non-javascript languages, without adding too much overhead
    – ‘Native’ javascript can communicate with asm.js quite directly
    – It can still be hacked/inspected with web dev tools, and end-users can do prototype-style overriding on it just as with any other JS code
    – It can help fend off atrocities like PNaCl

    Negatives:
    – There’s one huge negative for me: it’s (almost always) illegible to end-users. No sense of the author’s intent is preserved. “asm.js” suggests that it is at least as readable as assembly code, but really, it’s just machine code — intended purely for machines and not humans at all.

    Legibility is crucial to ‘openness’ (and ‘freedom’ — if you can’t read it, you can’t manipulate it). Openness on the web is under constant threat from all sorts of things, and this includes a desire for speed in the *short-term*. This is a very harmful long-term strategy, though, because in the long-term, any differences between interpreted and native code are dwarfed by successive improvements in hardware and fundamental algorithms. So we’ll get our fast code either way; the only difference is that with asm.js (or anything similar) we may end up with a web filled with code no human understands.

    • I appreciate the basis for your concern; preserving openness and freedom on the web is something we care intensely about. A few points on this:
      – I think the real issue you are getting at isn’t concerned with asm.js per se, but rather handwritten JS vs compiled JS. In this context, performance isn’t the only reason people use compiled JS: if you already have a huge C++ codebase, rewriting it to idiomatic JS can be prohibitively expensive, so the compile-to-JS option can be (and has been) the difference between targeting the web and just targeting native.
      – I’d argue that JS minifiers and obfuscators applied to handwritten JS produce code that is almost as illegible as asm.js and that (beautified) asm.js is actually readable once you have a little practice ;)
      – I don’t think “because in the long-term, any differences between interpreted and native code are dwarfed by successive improvements in hardware and fundamental algorithms” is true for all applications. In the ’90s it looked like that might be the case, but then single-core performance stopped doubling every 18 months ;) For many applications, a 2-8x performance penalty just isn’t acceptable. If the open web can’t provide the necessary performance, these applications may turn to proprietary or non-standard technologies.

      • “I think the real issue you are getting at isn’t concerned with asm.js per se, but rather handwritten JS vs compiled JS.”

        Yes, sort of. My concern is with non-human-readable code on the web of any sort, but I’m perfectly happy for readability to be reconstructed algorithmically. So “lossless” minification & compression isn’t a problem, but obfuscation and “lossy” minification are. (Source maps don’t cut it, because people have to choose to include them — and default choices matter.) My concern with asm.js specifically is that it may become a widespread approach to deploying code and even information on the web (à la Java/Flash in the past). I’m fairly confident asm.js won’t be used this way (precisely because it’s quite hard to do so). I’m just not entirely confident, because I can imagine some ‘genius’ coming along and developing a workflow that everyone jumps on and which includes compilation by default.

        Now, ‘readability’ is not binary, but rather sits on a scale. Machine code can only be read by a human with great difficulty. Assembly code is much easier to read, but is still not very accessible. C — a “high-level” language — is easier again, but sometimes much more difficult to read than the equivalent JavaScript. I’m sure asm.js becomes highly readable with practice, but the required practice is a barrier to understanding that I expect almost no-one (except perhaps JS engine optimisers) will overcome.

        What I would like to see is that the web remains filled with highly readable code (required for openness and in turn freedom) — the way it (mostly) is today. As I’ll argue below, we’ll get speed one way or another.

  2. “because in the long-term, any differences between interpreted and native code are dwarfed by successive improvements in hardware and fundamental algorithms”

    This is false, especially with the rise of smartphones as the next Bell’s Law device class. Native code prevails, and C++ is having its third (or is it fourth?) life. I wish it were true, but it’s not.

    (For “interpreted”, assume JITs in several tiers, all winning for sure [I <3 JIT compilers] until their heuristics start to work against one another, and the curve flattens out.)

    Java bet this way in the '90s and, in part, failed on the client due to a perf gap with native (which resulted in too many native methods). It was better on the server side, and productivity is a sweet apple to bite at some runtime orange juice.

    In the very long run, safe-native via some evolved JS VM or PNaCl runtime might be the norm, and the overhead would be low enough. Problem is, to borrow from Keynes, the native-beats-interpreted market may stay (ir)rational longer than the interpreted-wins-in-the-end investor can stay solvent.

    /be

  3. Yes, I take ‘interpreted performance’ to mean ‘the best any current engine can do with unmodified hand-written code’ (whether using JITs or high performance APIs).

    Humour me for a moment. I wrote the following loop in both JS (Fx) and C (using -O3 in gcc):

    int val = 0;
    for (int i = 0; i < 1000000000; i++) if (i > 50) val = i + 1;

    and they took almost exactly the same amount of time (both about 1.5s, with the JS about 10ms (<1%) slower on average). What does that mean? It means that at a very basic level, JS performance = native performance. (Assuming C is considered 'native'.) And since this example demonstrates Turing completeness, it might even mean JS performance = native performance everywhere if you just write it correctly. Woohoo!
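
    (The JS version was just the direct translation of the same loop, something along these lines:)

    var val = 0;
    for (var i = 0; i < 1000000000; i++) if (i > 50) val = i + 1;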

    Sorry for that. As is the case with most benchmarks, in reality this example means almost nothing. (Although, anecdotally, I do roughly find that JS perf is comparable to native perf if I avoid memory allocations and use typed arrays.) The *key* issue when dealing with performance is to make sure the code performs fast enough for a given purpose. (int i = 1000000000, val = 1000000000; would have had the same effect on memory (i.e. fulfilled the same presumed purpose), but would have been much faster in any language!) As hardware has gotten faster, more and more ‘purposes’ have become possible for all languages, both interpreted and native. However, the gap between interpreted and native languages isn’t growing (in fact, it’s becoming smaller) — hence the tasks-feasible-with-interpreted/tasks-feasible-with-native proportion is converging towards 1. Conceptually:


    Past
    ----
    Interp tasks possible: ==|
    Native tasks possible: ====|

    Now
    ---
    Interp tasks possible: =======|
    Native tasks possible: =========|

    Future
    ------
    Interp tasks possible: =================|
    Native tasks possible: ===================|

    So the absolute gap in possibilities is the same, but the percentage gap in possibilities is shrinking. Keep in mind, this remains true even if the raw speed difference is always 8x (or whatever else).

    Games are a unique beast. New games have always tried to push the possibility-envelope, which is why they always seem out of reach for anything but the fastest systems. But most non-game software is purpose-driven, rather than possibility-driven, which is why the proportion of interpreted/native possibilities is moving towards 1 and will continue to do so. That’s true for desktops, mobiles, Raspberry Pis, vacuum-cleaning robots and whatever else you care to imagine.

    However, it’s not just faster hardware that is making the gap between interpreted and native seem insignificant. The gap itself is being narrowed and in some cases bridged by changes in fundamental algorithms and processes. Improved JITs have done a spectacular job. ‘Thinner’ APIs that provide a more direct (but nonetheless protected and reasonably human-readable) window into hardware such as WebGL, TypedArrays and SIMD are having and will continue to have a significant impact. (Perhaps an analogy can be drawn between these and the DirectX APIs, which were so crucial in getting gamers to transition to Windows from DOS.) The shift to parallel architectures will be (actually, IS) huge — slow code is loop code, and the vast majority of loop code can be parallelised (as is done in WebGL, SIMD, etc.).
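
    As a rough sketch of the kind of ‘thinner’ code path in question (an illustrative example, not a benchmark): a typed array gives the engine a dense, fixed-type block of memory, so a hot numeric loop over it can compile down to straightforward loads and adds rather than generic property lookups on a plain Array.

    // Contiguous, fixed-type storage: ~1M 32-bit floats.
    var samples = new Float32Array(1 << 20);

    function sumSamples(arr) {
      var total = 0;
      for (var i = 0; i < arr.length; i++) {
        total += arr[i]; // plain float load + add, no generic property lookup
      }
      return total;
    }

    console.log(sumSamples(samples)); // 0 for a freshly allocated (zeroed) buffer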

    So yes, we will get our fast code one way or the other. (The Scottys of the world may protest they “cannae push it any faster!”, but they seem to always find a way.) The only question is whether or not (due to historical accident and short-term-optimising behaviour) that code will be open and free to inquisitive minds, or inscrutable and magical to all but a chosen few.

    Sorry, that’s a lot of words for an outcome I consider so improbable.
