Hacker News

> 1. asm.js makes running C++ code that would render to OpenGL in the browser a possibility, and with relatively good performance. Kudos to Mozilla.

> 2. It runs on browsers that don't know about asm.js, but it's effectively an emulated machine, and it's slow.

Do you have benchmark numbers to support that? In my experience, asm.js code is quite fast even without special asm.js optimizations. It depends on the benchmark obviously, but look at

http://kripken.github.io/mloc_emscripten_talk/#/27

Many of those benchmarks are very fast in browsers without special asm.js optimizations.

All browsers need to do to be fast on asm.js code is optimize typed array operations and basic math, and those are things browsers have been doing for a long time. Google even added a Mandreel benchmark to Octane for this reason.

Emscripten and Mandreel output, with or without asm.js, tends to be quite fast, generally faster than equivalent handwritten JS. asm.js output is often faster still, because it's easier to optimize, even in engines without special support for it. Those special optimizations can help even more, but they are not necessary for the code to run, nor is it "slow/emulated" without them.
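To make the "typed arrays + basic math" point concrete, here is a minimal sketch of the asm.js shape (my own illustration, not real Emscripten output; the module and function names are made up):

```javascript
// A minimal sketch (my own, not real compiler output) of the asm.js shape:
// the module's memory is a typed-array view over one heap buffer, and every
// value is pinned to int32 with |0 coercions. Optimizing exactly these two
// things (typed-array access, basic integer math) is the fast path.
function MiniModule(stdlib, foreign, heap) {
  "use asm";
  var HEAP32 = new stdlib.Int32Array(heap);

  function sum(n) {
    n = n | 0;                 // coerce the argument to int32
    var i = 0;
    var total = 0;
    for (i = 0; (i | 0) < (n | 0); i = (i + 1) | 0) {
      // byte offset (i << 2), shifted back (>> 2) to index the int32 view
      total = (total + (HEAP32[(i << 2) >> 2] | 0)) | 0;
    }
    return total | 0;
  }

  return { sum: sum };
}

// Runs as plain JS in any engine; validating engines can compile it ahead
// of time, others just JIT it like ordinary code.
var heap = new ArrayBuffer(0x10000);
var mod = MiniModule(globalThis, {}, heap);
new Int32Array(heap).set([1, 2, 3, 4]);
mod.sum(4); // 10
```

The point of the example: nothing here is exotic. An engine that already handles typed arrays and integer arithmetic well runs this fast whether or not it knows the "use asm" directive exists.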



That link is misleading in the context you present it. Those are microbenchmarks, which a JIT can optimize relatively well. But if you look at the very next slide, http://kripken.github.io/mloc_emscripten_talk/#/28 , you will see that for a larger application, non-optimized asm.js performs abysmally, as does JS in general. A slowdown of roughly 10x native is what you should expect for a nontrivial JS application, and asm.js does not do much (if anything) to alleviate that unless you run OdinMonkey.


I was purposefully linking to that slide + the one after it.

Yes, the next slide shows a factor of around 4x slowdown with the asm.js optimizations versus without them. But even 4x slower than optimized asm.js is quite fast; it's enough to run the Epic Citadel demo, for example (try it on a version of Firefox without those optimizations, like the stable release; as others reported in the comments, it runs ok). A lot of the work in Epic Citadel happens on the GPU anyhow, say about half, so the effective difference is then only something like 2x.

2x is not that much; we see similar differences on the web anyhow because of CPU differences, JIT differences, etc. That's within the expected range of variance.

Also, asm.js code is faster than handwritten JS even without special optimizations (see for example http://blog.j15r.com/blog/2011/12/15/Box2D_as_a_Measure_of_R... and http://blog.j15r.com/blog/2013/04/25/Box2d_Revisited ). So even without those optimizations it is worthwhile.


It's faster than "handwritten JS" because it's using a subset of JS that's already tuned to near the peak performance you can achieve when JITting JS: tightly looped arithmetic with everything inlined, operating on SMIs (small integers). In V8, that's ~2.5x slower than the equivalent C code compiled with optimizations. That's basically the best case.
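To illustrate the SMI point (my own example, not from any compiler): the |0 coercion promises the engine a 32-bit integer, so the add can compile down to plain integer machine arithmetic, while the generic version has to handle numbers, strings, objects, and so on.

```javascript
// Rough illustration (mine) of why the coerced subset JITs so well.
function addDynamic(a, b) {
  return a + b;              // generic: could concatenate, call valueOf, ...
}

function addInt(a, b) {
  a = a | 0;                 // force int32
  b = b | 0;
  return (a + b) | 0;        // wraps exactly like C int arithmetic
}

addInt(0x7fffffff, 1);       // -2147483648, same overflow as int32 in C
addDynamic("1", 2);          // "12": the generic path string-concatenates
```

Both run everywhere; the difference is only how cheaply a JIT can compile the hot loop around them.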

The problem is that it's still dynamic: at any point a null can come along and force you to throw away a JITted function, so you have to have these checks everywhere just in case. Further, ECMAScript compliance does not require JIT compilation, which means that expecting this level of performance, especially on the web, is dubious at best. I can see merit in expecting performance guarantees in something like Node.js, since it assumes V8, but the web does not have one JS engine, and the standard does not require any such guarantees.
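A quick sketch of that deopt hazard (hypothetical example of mine; exact behavior varies by engine):

```javascript
// Engines specialize a hot function for the object shapes they have seen;
// a null invalidates that assumption, so the JITted code must guard every
// property access and bail out to a generic path when the guard fails.
function getX(o) {
  return o.x; // specialized code: shape check + direct field load
}

// Warm-up: the engine sees only { x: number } and specializes accordingly.
for (var i = 0; i < 100000; i++) {
  getX({ x: i });
}

// A stray null fails the guard: the specialized code gets thrown away and
// a generic path handles it (here it simply throws a TypeError).
try {
  getX(null);
} catch (e) {
  // TypeError: cannot read property of null
}
```

Those guards are the "checks everywhere" above; they stay in the generated code even when they never fire.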

The "benefits" of building a bytecode that's represented in some subset of JS are essentially red herrings, since they're effectively lies, and the downsides are a lot more significant, IMO. Instead, we should be focusing on actually building a bytecode for the web, such that all implementations are expected to meet certain performance targets.



