
I view it a bit more through the lens of its initial evolution (asm.js) being a wall-breaker to force Safari, etc. to keep up. It was done by shipping an MVP that was easy for all parties to adopt (asm.js already ran on top of typed-array buffers and plain JS.. it was mostly a matter of standardizing a less hacky way to optimize it).

In the same way, simd128 was low-hanging fruit with more or less _universal_ hardware support, making it a good MVP that delivers real benefits and covers 99% of what games and other 2D/3D applications need (an important point).
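
For concreteness, here's a minimal sketch of what targeting simd128 looks like from C via clang/Emscripten's wasm_simd128.h intrinsics (built with -msimd128); the add_f32 helper and its loop are illustrative, not from anything above:

    #include <wasm_simd128.h>

    /* Hypothetical helper, illustrative only: adds two float arrays
       four lanes at a time using 128-bit vector ops. */
    void add_f32(float *dst, const float *a, const float *b, int n) {
      int i = 0;
      for (; i + 4 <= n; i += 4) {
        v128_t va = wasm_v128_load(a + i);
        v128_t vb = wasm_v128_load(b + i);
        wasm_v128_store(dst + i, wasm_f32x4_add(va, vb));
      }
      for (; i < n; i++)   /* scalar tail for leftover elements */
        dst[i] = a[i] + b[i];
    }

Note the hard-coded lane count of 4: the fixed 128-bit width is exactly what makes it easy to support everywhere, and exactly what the flexible-vector critique is about.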

Now, as for being future-proof: even today only a simd256 would be usable on desktop (so we're only losing half the _potential_ performance), given how spotty Intel's AVX-512 support has been (crashes, P/E-core differences, etc.). The full potential brought by the flexibility of SVE or RISC-V's RVV is something to look for in the future.
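
For contrast, the flexibility of SVE means the same loop is written once and runs at whatever vector width the hardware provides, with no fixed lane count and no scalar tail. A rough sketch using ARM's arm_sve.h C intrinsics (add_f32_sve is a hypothetical helper mirroring the one above; build with e.g. -march=armv8-a+sve):

    #include <arm_sve.h>

    /* Hypothetical helper, illustrative only: vector-length-agnostic
       version of the same float add. */
    void add_f32_sve(float *dst, const float *a, const float *b, int n) {
      for (int i = 0; i < n; i += svcntw()) {   /* lanes of 32-bit data */
        svbool_t pg = svwhilelt_b32(i, n);      /* predicate masks the tail */
        svfloat32_t va = svld1(pg, a + i);
        svfloat32_t vb = svld1(pg, b + i);
        svst1(pg, dst + i, svadd_x(pg, va, vb));
      }
    }

This is the kind of code a "flexible vectors" proposal would let Wasm express, but it only pays off once hardware with genuinely wide vectors is common.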

Now, if WebAssembly had a neural-net or other AI/large-vector-heavy focus, I'd agree that omitting a future-proof option would be bad, but they've decided to focus on what can be used today and standardize on that, since we will actually benefit from it for the foreseeable future.

Vector lengths really have been stagnant compared to core counts, since cores have been more "bang for the buck" for hardware makers. The AI focus might still shift that back (even if NPUs have taken the front seat), but I wouldn't hold my breath for flexible vectors until Intel or AMD jumps on the bandwagon (or until ARM and RISC-V chips with really wide vectors take enough market share that it becomes untenable not to support them).


