
I think one of the big things with WebAssembly is that its shear potential is huge.

In theory, WASM could be a single cross-platform compile target, which is kind of a CS holy grail. It's easy to let your mind spin up a world where everything is WebAssembly: a desktop environment, a server, day-to-day software applications.

After I've imagined all of that, being told WebAssembly helps some parts of Figma run faster feels like a big letdown. Of course that isn't fair; almost nothing could live up to the expectations we have for WASM.

Its development is also by committee, which is maybe the best option for our current landscape, but isn't famous for getting things going quickly.



Like the gifted kid who lives with his mom at 30, at some point in time, we have to stop talking about potential and start talking about results.

Theory and practice don't match in this case, and many people have remarked that the companies that sit on the WHATWG board have a vested interest in making sure their lucrative app stores are not threatened by a platform that can run any app just as well.

I remember when Native Client came on the scene and allowed people to compile complex native apps for the web that ran at something like 95% of native speed. While it was in many ways an inelegant solution, it worked better than WebAssembly does today.

Another one of WebAssembly's supposed killer features was native web integration. The way JS engines work, you have an IDL that describes the interface of JS classes, which is then used to generate code that binds to the underlying C++ implementations. You could probably bind those to WebAssembly just as well.
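
(For anyone unfamiliar with that mechanism, here is a loose TypeScript sketch of what the generated surface amounts to; the IDL fragment in the comment is simplified for illustration, not the actual spec text.)

    // Simplified Web IDL, roughly what the spec declares for Node:
    //
    //   interface Node {
    //     readonly attribute DOMString nodeName;
    //     Node appendChild(Node node);
    //   };
    //
    // The engine's bindings generator turns this into glue code that
    // validates arguments and forwards to the C++ DOM implementation.
    // What script sees is equivalent to this declared surface:
    interface DomNode {
      readonly nodeName: string;
      appendChild(node: DomNode): DomNode;
    }
    // In principle, the same generated glue could be emitted for a
    // WASM caller instead of (or alongside) a JS one.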

I don't think cross-platform as in cross-CPU-arch matters that much; if you meant 'runs on everything' then I concur.

Also the dirty secret of WebAssembly is that it's not really faster than JS.


I'm starting to think that's why there is still no DOM access for WASM and we have to ping-pong through JS

> Also the dirty secret of WebAssembly is that it's not really faster than JS.

That is almost purely due to the amount of work it took to make that shitty language run fast. A naive WebAssembly implementation will beat interpreted JS many times over, but modern JIT implementations are a wonder.


For WASM, the performance target isn't JavaScript but native code and NaCl. That WASM has had tremendously more time and effort invested into it and still underperforms NaCl (and JS) signals to me that this is not the right approach.

The WASM runtime went from something that ingests pseudo-assembly, validates it, and turns it into machine code, into a full-fledged multi-tiered JIT like what JS has, with crazy engineering complexity per browser and similar startup performance woes (alleviating the load-time issues of huge applications was one of the major goals of NaCl/Wasm).


well, it could definitely be designed better.

Starting from a target that was not only single-threaded but also memory-limited was... a weird decision


What does NaCl (seems to be some random crypto library?) have to do with this?



I don't dislike JS, but the reason it's fast is that billions were poured into making that happen.

V8 is a modern engineering marvel.


Yeah, and like many engineering marvels, it was instantly misused for purposes its creators didn't intend and became a scourge on humanity (looking at you NodeJS & co).


And WASM hasn't been around for as long, so WASM implementations are not as mature.

There is no reason why WASM couldn't be as fast as, or faster than, JS, especially now with WASM 3.0. Before, every program in a managed language had to ship with its own GC and exception-handling framework in WASM, which was probably crippled by size constraints.


They still need to, because WASM GC is an MVP that only covers a subset.

Any language with advanced GC algorithms, or interior pointers, will run poorly with current WASM GC.

It works as long as their GC model overlaps with JS GC requirements.


It's also currently only a subset of JS GC requirements at that. It's the bare minimum to share references between JS and WASM to byte arrays like Int32Array. It's like basic OS-level memory page sharing only for now.
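
To make the "byte arrays only" baseline concrete, this is roughly what JS-to-Wasm sharing looks like without any of the GC proposal: nothing but raw bytes in linear memory, viewed through a typed array. A sketch in TypeScript; "sum.wasm" and its exports are hypothetical placeholders.

    // Pre-GC style sharing: JS and Wasm only share raw bytes in the
    // module's linear memory, viewed through typed arrays like Int32Array.
    async function sumViaWasm(values: number[]): Promise<number> {
      const { instance } = await WebAssembly.instantiateStreaming(fetch("sum.wasm"));
      const memory = instance.exports.memory as WebAssembly.Memory;
      const sum = instance.exports.sum as (ptr: number, len: number) => number;

      // Copy the JS numbers into linear memory as 32-bit ints...
      const view = new Int32Array(memory.buffer, 0, values.length);
      view.set(values);

      // ...then pass only a pointer and a length across the boundary.
      // No JS object reference ever crosses into Wasm here.
      return sum(0, values.length);
    }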

Some of the real GC tests will be strings support (because immutability/interning) and higher-level composite objects, which is all still in various draft/proposal states.


Oh, even worse than I thought.


Yep. These things have been solved by massive investments. The question is, can WASM as a language (not an implementation) do something JavaScript can't?


Wasm can do 64-bit integers, SIMD and statically typed GC classes.


JS could have had support for SIMD and 64-bit ints by now, and progress was actually being made (mostly through the asm.js experiments), but it was deprioritized specifically to work on WASM.


WASM can even do 32-bit integers, which JavaScript can't; it uses floats instead.


JS has had byte arrays like Int32Array for a while. The JS engines will try to optimize math done into them/with them as integer math rather than float math, but yeah you still can't use an integer directly outside of array math.
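
For illustration, the two tricks alluded to here look roughly like this; whether an engine actually keeps things in integer registers is an implementation detail, not a guarantee:

    // Plain JS numbers are doubles; "x | 0" truncates to a 32-bit int,
    // which JITs treat as a hint to stay in integer arithmetic (asm.js style).
    function addInt32(a: number, b: number): number {
      return ((a | 0) + (b | 0)) | 0; // wraps like 32-bit addition
    }

    // Stores into an Int32Array are truncated to 32 bits as well, so
    // math done "into" the array can also be kept as integer math.
    const ints = new Int32Array(4);
    ints[0] = addInt32(0x7fffffff, 1); // wraps to -2147483648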


The answer to that is no. But innovating at the language level was never a goal for WASM; quite the opposite, as simple as possible so it can be compiled and run anywhere.


> I'm starting to think that's why there is still no DOM access for WASM and we have to ping-pong through JS

I don't think you need conspiracy theories for that. The DOM involves complex JS objects, and you have to have an entirely working multi-language garbage collection model if you expect other languages to work with DOM objects; otherwise you run the risk of leaking some of the most expensive objects in a browser.

The path to that is long and slow, especially since the various committees' general interest is in not requiring non-JS languages to conform entirely to JS GC (either implementing themselves on top of JS GC alone or having to implement their own complex subset of JS GC to interop correctly), so the focus has been on very low-level tools over complex GC patterns. The first basics have only just been standardized. The next step (sharing strings) seems close but probably still has months to go. The steps after that (sharing simple structs) seem pretty complex, with a lot of heated debate still to happen, and DOM objects are a further complexity step past that (as they involve complex reference cycles and other such things).
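
As a concrete picture of the ping-ponging in the quoted comment: today a Wasm module can't hold a DOM reference at all, so the usual pattern is to import small JS shims and shuttle integer handles back and forth. A rough sketch; the shim names and handle scheme are illustrative, not any particular toolchain's ABI:

    // Illustrative JS-side shims imported by a Wasm module. The Wasm code
    // only ever sees integer handles; the real DOM objects stay in JS.
    const memory = new WebAssembly.Memory({ initial: 1 }); // shared with the module
    const elements: HTMLElement[] = [];

    const domImports = {
      env: { memory },
      dom: {
        // Wasm calls this and gets back an integer handle, not a Node.
        createDiv(): number {
          elements.push(document.createElement("div"));
          return elements.length - 1;
        },
        // Every further DOM operation is another hop back into JS.
        setText(handle: number, ptr: number, len: number): void {
          const bytes = new Uint8Array(memory.buffer, ptr, len);
          elements[handle].textContent = new TextDecoder().decode(bytes);
        },
      },
    };
    // const { instance } = await WebAssembly.instantiate(wasmBytes, domImports);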


WASM is way way way faster if you need explicit memory management. It's only 100% a wash if you're doing DOM stuff.


Not necessarily. I found a benchmark that you can run yourself, that's doing pretty much just raw compute (JS vs C/C++ in Wasm):

https://takahirox.github.io/WebAssembly-benchmark/

JS is not always faster, but in a good chunk of cases it is.


It is easy to make benchmarks where JS is faster. JS inlines at runtime, while wasm typically does not, so if you have code where the wasm toolchain makes a poor inlining decision at compile time, then JS can easily win.

But that is really only common in small computational kernels. If you take a large, complex application like Adobe Photoshop or a Unity game, wasm will be far closer to native speed, because its compilation and optimization approach is much closer to native builds (types known ahead of time, no heavy dependency on tiering and recompilation, etc.).


I would take these benchmarks with a pinch of salt. Within a single function, it's very easy to optimize JS because you know every way a single variable will be defined. When you have to call a function, the data type of the argument can be anything the caller passes to the function, which makes optimization far more complex.

In practice, WASM codebases won't be simply running a single pure function in WASM from JS but instead will have several data structures being passed around from one WASM function to another, and that's going to be faster than doing the same in JS.

By the way, if I remember correctly V8 can optimize function calls heuristically if every call always passes the same argument types, but because this is an implementation detail it's difficult to know what scenarios are actually optimized and which are not.
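
A rough sketch of what that heuristic looks like in practice (the exact behaviour varies by engine and version, so treat this as illustrative):

    // A JIT specializes this function based on what it has actually seen.
    function area(shape: { w: number; h: number }): number {
      return shape.w * shape.h;
    }

    // Monomorphic: every call passes objects created with the same shape,
    // so the engine can inline the property loads and the multiply.
    for (let i = 0; i < 1_000_000; i++) area({ w: i, h: 2 });

    // Mixing differently shaped objects at the same call site tends to
    // fall back to slower, generic lookups. Wasm sidesteps the question
    // because its types are fixed at compile time.
    area({ h: 2, w: 3 }); // same fields, different creation order = different hidden class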


Things might be getting better for JS, but just looking over those briefly, they don't look memory constrained, which is the main place where I've seen significant speedups. Also, simpler code makes JIT optimizations look better, but that level of performance won't be consistent in real world code.


You might be right in your use case, but still, JS is not the benchmark to beat. Native Client was already almost as fast as native code, started up almost instantly, and didn't need a decade of engineering with who knows how much money invested into it.

The WebAssembly that was supposed to replace it needs to be at least as good; that was the promise. We're a decade in, and Wasm is still nowhere near that, while it has accumulated an insane amount of engineering complexity in its compilers, and its ability to run native apps without tons of constraints and modifications is still meh, as is the performance.


To be fair, Native Client achieved much of its speed by reusing LLVM and the decades of work put into that excellent codebase.

Also, Native Client started up so fast because it shipped native binaries, which was not portable. To fix that, Portable Native Client shipped a bytecode, like wasm, which meant slower startup times - in fact, the last version of PNaCl had a fast baseline compiler to help there, just like wasm engines do today, so they are very similar.

And, a key issue with Native Client is that it was designed for out-of-process sandboxing. That is fine for some things, but not when you need synchronous access to Web APIs, which many applications do (NaCl avoided this problem by adding an entirely new set of APIs to the web, PPAPI, which most vendors were unhappy about). Avoiding this problem was a major principle behind wasm's design, by making it able to coexist with JS code (even interleaving stack frames) on the main thread.


I think you're referring to PNaCl (as opposed to Native Client), which did away with the arch-specific assembly, and I think they shipped the code as LLVM IR. These are two completely separate things; I am referring to the original Native Client.

I don't see an issue with shipping uArch-specific assembly; nowadays you only really have two in heavy use, and I think managing that level of complexity is tenable, considering the monster the current Wasm implementation became, which is still lacking in key ways.

As for out-of-process sandboxing, I think for a lot of things it's fine - if you want to run a full-fat desktop app or game, you can cram it into an iframe, and the tab (renderer) process is isolated, so Chrome's approach was quite tenable from an IRL perspective.

But if seamless interaction with Web APIs is needed, that could be achieved as well, and I think quite similarly to how Wasm does it - you designate a 'slab' of native memory and make sure no pointer access goes outside by using base-relative addressing and masking the addresses.

For access to outside APIs, you permit jumps to validated entry points which can point to browser APIs. I also don't see why you couldn't interleave stack frames, by making a few safety and sanity checks, like making sure the asm code never accesses anything outside the current stack frame.
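
A toy model of the masking idea described above (sizes and names are made up for illustration):

    // Base-relative, masked addressing: every guest "pointer" is reduced
    // to an offset inside a fixed-size slab before use, so it can never
    // escape the sandboxed region. The slab size is arbitrary here.
    const SLAB_SIZE = 64 * 1024 * 1024;     // must be a power of two
    const slab = new Uint8Array(SLAB_SIZE); // the guest's entire memory

    function load8(guestAddr: number): number {
      // Masking stands in for a bounds check: out-of-range addresses just
      // wrap around inside the slab instead of touching host memory.
      return slab[guestAddr & (SLAB_SIZE - 1)];
    }

    function store8(guestAddr: number, value: number): void {
      slab[guestAddr & (SLAB_SIZE - 1)] = value & 0xff;
    }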

Personally I thought WebAssembly was what its name suggested - an architecture-independent assembly language that was heavily optimized, with only the register allocation passes and the machine instruction translation missing - which is at the end of the compiler pipeline and can be done fairly fast compared to a whole compile.

But it seems to me Wasm engines are more like LLVM, an entire compiler consuming IR, and doing fancy optimization for it - if we view it in this context, I think sticking to raw assembly would've been preferable.


Sorry, yes, I meant PNaCl.

> I don't see an issue with shipping uArch-specific assembly; nowadays you only really have two in heavy use,

That is true today, but it would prevent other architectures from getting a fair shot. Or, if another architecture exploded in popularity despite this, it would mean fragmentation.

This is why the Portable version of NaCl was the final iteration, and the only one even Google considered shippable, back then.

I agree the other stuff is fixable - APIs etc. It's really portability that was the sticking point. No browser vendor was willing to give that up.


As someone with no intersection with the web, it is disconcerting to attend the burial of wasm when she was still getting her first compiler backends.


Also, WebAssembly is meant to be a compiler target, with the biggest advantage being that it's sandboxed. The problem is that JS engines can do that too. Just like JS engines, WebAssembly can run outside browsers. I think in theory Wasm is better than JS in those areas, but not better enough.


> Like the gifted kid who lives with his mom at 30, at some point in time, we have to stop talking about potential and start talking about results.

This is an entirely unnecessary jab. There’s a whole generation dealing with stuff like this because of economic and other forces outside their control.


WASM not taking over the world is probably also due to forces outside its control; I guess that's only relevant if money was being spent to accomplish that goal.


Actually the great secret of wasm (that will piss off a lot of people on HN I am sure) is that it is deterministic and can be used to build decentralized smart contracts and Byzantine Fault Tolerant distributed systems :)

Some joker who built Solana actually thought Berkeley Packet Filter language would be better than WASM for their runtime. But besides that dude, everyone is discovering how great WASM can be to run deterministic code right in people’s browsers!


I don't think you need WASM for that, I'm sure you can write a language that transpiles to JS and is still deterministic.


They tried their best: https://deterministic.js.org/

No, WASM is deterministic, JS is fundamentally not. Your dislike of all things blockchain makes you say silly things.


But WASM already exists and has many languages that are able to compile to it, why reinvent the wheel?


JS isn't deterministic in its performance


Well - the problem is... the "in theory" means that nobody will bet on WASM if it is not really going to be useful. People use HTML, CSS, JavaScript - that has been shown to be very useful. WASM is not useless but how can people relate to it? It is like an alien stack for most people.


It is totally fine if most people don't relate to wasm - it's good for some things, but not most things. As another example, most web devs don't use the video or audio tag, I'd bet, and that's fine too.

Media, and wasm, are really important when you need them, but usually you don't.


The way things usually gain traction is when a big tech company has success experimenting with it. It happened with Node way back and is happening with Rust now.

The fact we haven't heard much about Wasm use is probably because it isn't as valuable as we think, or no one has played around with it enough yet to find out.


> when a big tech company has success experimenting with it

TFA has many examples of big tech companies using Wasm in production. It's not exhaustive either, e.g. the article doesn't mention:

- Google using it as a backend for Flutter and to implement parts of Google Maps, Earth, Meet, Sheets, Keep, YouTube, etc

- Microsoft using it in Copilot Studio

- eBay using it in their mobile app

- MongoDB using it for Compass

- Amazon supporting it in EKS

- 1Password using it in their browser extension

- Unity having it as a build target

(And this was just what I found with some quick web searches; I'm sure there are many other examples.)

---

> The fact we haven't heard much about Wasm use is probably because it isn't as valuable as we think

One of the conclusions of the article is that it's mostly used in ways that aren't very visible.


> WASM could be a single cross platform compile target, which is kind of a CS holy grail.

The JVM says "Hello!" from 1995.


Hello back!

The JVM is a great parallel example. Anyone listening to the hype in the early days based around what the JVM could be would surely be disappointed now. It isn't faster than C, it doesn't see use everywhere due to practical constraints, etc.

But you'd be hard pushed to say the JVM is a total failure. It's used by lots of people all round the world, and it solves real problems, just not the ones we were hoping it would solve. I suspect the future of WASM looks something like that.


Now the JVM's sole purpose is to solve Larry Ellison's problems, so if you're not Larry Ellison and you don't have the same problems he does, then it's a total failure caging you, and a predatory trap serving him.

None of the technical arguments for JVM matter any more. It's just bait to trick you into sticking your hand under the lawnmower and helping Larry Ellison solve his problems.


Except JetBrains, Red Hat, SAP, Azul, Oracle, Google, Microsoft, PTC, Aicas, Cisco, Ricoh, microEJ, Bluejay,... are also part of the Java party.


Microsoft largely cloned the Java Runtime to create the .NET Runtime and similarly cloned Java to create C#.

The two are so similar that Java bytecode to .NET bytecode translators exist. With some, it is possible to take a class defined in Java, subclass it with C#, call it from Java, etc...


In 2001 I was at an MSFT gold partner; you are skipping quite a few relevant steps on that timeline.


There’s no reason we shouldn’t be replacing our containers with WASI. Containers are absolutely miserable things that should just be VMs (in the WASM sense, not in the “run Linux in a virtual X86” sense)

The tooling is just not there yet. Everyone is just stuck on supporting Docker still.


There are a thousand reasons, which is why nobody is doing it. They're orthogonal. Problems WASM/WASI doesn't solve:

- Building / moving file hierarchies around

- Compatibility with software that expects Linux APIs like /proc

- Port binding, DNS, service naming

- CLI / API tooling for service management

And about a gazillion other things. WASI, meanwhile, is just a very small subset of POSIX but with a bunch of stuff renamed so nothing works on it. It's not meaningfully portable in any way outside of UNIX so you might as well just write a real Linux app. WASI buys you nothing.

WASM is heavily overfit to the browser use case. I think a lot of the dissipated excitement is due to people not appreciating how true that is. The JVM is a much more general technology than WASM, which is why it was able to move between such different use cases successfully (starting on smart TV boxes, then applets, then desktop apps, then servers + smart cards, then Android), whereas WASM never made it outside the browser in any meaningful way.

WASM seems to exist mostly because Mozilla threw up over the original NaCl proposal (which IMO was quite elegant). They said it wasn't 'webby', a quality they never managed to define IMO. Before WASM, Google also had a less well known proposal to formally extend the web with JVM bytecode as a first-class citizen, which would have allowed fast DOM/JS bindings (Java has had an official DOM/JS bindings API for a long time due to the applet heritage). The bytecode wouldn't have had full access to the entire Java SE API like applets did, so the security surface area would have been much smaller, and it would have run inside the renderer sandbox like V8. But Mozilla rejected that too.

So we have WASM. Ignoring the new GC extensions, it's basically just regular assembly language with masked memory access and some standardized ABI stuff, with the major downside that no CPU vendor uses it so it has to be JIT compiled at great expense. A strange animal, not truly excellent at anything except pleasing the technical aesthetic tastes of the Mozillians. But if you don't have to care about what Mozilla think it's hard to come up with justifications for using it.


> WASI, meanwhile, is just a very small subset of POSIX but with a bunch of stuff renamed so nothing works on it.

WASI fixed well-known flaws in the POSIX API. That's not a bad thing.

> the major downside that no CPU vendor uses it so it has to be JIT compiled at great expense.

WASM was designed to be JIT-compiled into its final form at the speed it is downloaded by a web browser. JS JIT-compilers in modern web browsers are much more complex, often having multiple compilers in tiers so it spends time optimising only the hottest functions.

Outside web browsers, I'd think there are few use-cases where WASM couldn't be AOT-compiled.


> WASM seems to exist mostly because Mozilla threw up over the original NaCl proposal (which IMO was quite elegant). They said it wasn't 'webby', a quality they never managed to define IMO.

No, Mozilla's concerns at the time were very concrete and clear:

- NaCl was not portable - it shipped native binaries for each architecture.

- PNaCl (Portable Native Client, which came later) fixed that, but it only ran out of process, making it depend on PPAPI, an entirely new set of APIs for browsers to implement.

Wasm was designed to be PNaCl - a portable bytecode designed to be efficiently compiled - but able to run in-process, calling existing Web APIs through JS.


I don't think their concerns were concrete or clear. What does "portable" mean? There are computers out there that can't support the existing feature set of HTML5, e.g. because they lack a GPU. But WebGPU and WebGL are a part of the web's feature set. There's lots of stuff like that in the web platform. It's easy to write HTML that is nearly useless on mobile devices, it's actually the default state. You have to do extra work to ensure a web page is portable even just with basic HTML to mobile. So we can't truly say the web is always "portable" to every imaginable device.

And was NPAPI not a part of the web, and a key part of its early success? Was ActiveX not a part of the web? I think they both were.

So the idea of portability is not and never has been a requirement for something to be "the web". There have been non-portable web pages for the entire history of the web. The sky didn't fall.

The idea that everything must target an abstract machine whether the authors want that or not is clearly key to Mozilla's idea of "webbyness", but there's no historical precedent for this, which is why NaCl didn't insist on it.


> What does "portable" mean?

In the context of the web, portability means that you can, ideally at least, use any browser on any platform to access any website. Of course that isn't always possible, as you say. But adding a big new restriction, "these websites only run on x86" was very unpopular in the web ecosystem - we should at least aim to increase portability, not reduce it.

> And was NPAPI not a part of the web, and a key part of its early success? Was ActiveX not a part of the web? I think they both were.

Historically, yes, and Flash as well. But the web ecosystem moved away from those things for a reason. They brought not only portability issues but also security risks.


Why should we aim to increase portability? There's a lot of unstated ideological assumptions underlying that goal, which not everyone shares. Large parts of the industry don't agree with the goal of portability or even explicitly reject it, which is one reason why so much software isn't distributed as web apps.

Security is similar. It sounds good, but is always in tension with other goals. In reality the web doesn't have a goal of ever increasing security. If it was, then they'd take features out, not keep adding new stuff. WebGPU expands the attack surface dramatically despite all the work done on Dawn and other sandboxing tech. It's optional, hardly any web pages need it. Security isn't the primary goal of the web, so it gets added anyway.

This is what I mean by saying it was vague and unclear. Portability and security are abstract qualities. Demanding them means sacrificing other things, usually innovation and progress. But the sort of people who make portability a red line never discuss that side of the equation.


> Why should we aim to increase portability? There's a lot of unstated ideological assumptions underlying that goal, which not everyone shares.

As far back as I can remember well (~20 years) it was an explicitly stated goal to keep the web open. "Open" including that no single vendor controls it, neither in terms of browser vendor nor CPU vendor nor OS vendor nor anything else.

You are right that there has been tension here: Flash was very useful, once, despite being single-vendor.

But the trend has been towards openness: Microsoft abandoned ActiveX and Silverlight, Google abandoned NaCl and PNaCl, Adobe abandoned Flash, etc.


There are shades of the old GPL vs BSD debates here.

Portability and openness are opposing goals. A truly open system allows or even encourages anyone to extend it, including vendors, and including with vendor specific extensions. Maximizing the number of devices that can run something necessarily requires a strong central authority to choose and then impose a lowest common denominator: to prevent people adding their own extensions.

That's why the modern web is the most closed it's ever been. There are no plugin APIs. Browser extension APIs are the lowest power they've ever been in the web's history. The only way to meaningfully extend browsers is to build your own and then convince everyone to use it. And Google uses various techniques to ensure that whilst you can technically fork Chromium, in practice hardly anyone does. It's open source but not designed to actually be forked. Ask anyone who has tried.

So: the modern web is portable for some undocumented definition of portable because Google acts as that central authority (albeit is willing to compromise to keep Mozilla happy). The result is that all innovation happens elsewhere on more open platforms like Android or Linux. That's why exotic devices like VR headsets or AI servers run Android or Linux, not ChromeOS or WebOS.


> with a bunch of stuff renamed

And a capability system and a brand new IDL, although I'm not sure who the target audience is...

> it's basically just regular assembly language

This doesn't affect your point at all, but it's much closer to a high-level language than to regular assembly language, isn't it? Nonaddressable, automatically managed stack, mandatorily structured control flow, local variables instead of registers, etc.


Some hardware in the past has had a hidden/CPU-managed stack. Modern CPUs with features like CFG have mandatorily structured control flow. Using a stack machine instead of a register machine is indeed a key difference, but the actual CPU is a register machine, so that just means WASM has to be converted first, hence the JIT. Stack-based assembly languages are still assembly languages.


It helps if you actually qualify statements such as "Containers are absolutely miserable things". I'm in a world where we're using containers extensively, and I don't experience any issues whatsoever about which one might think "WASI would be the solution to this".



Yeah the real answer is that all of this stuff is still a work in progress. Last I checked WASI doesn't have a concept of "current directory" for example, so porting software is not trivial.

Also WASI is a way of running a single process. If your app needs to run subprocesses you'll need to do more work.


But what's the benefit of replacing containers with WASI?

The performance would be worse, and it would be harder to integrate with everything else. It might be more secure, I guess.


Imo stuff like Flatpak has the right idea - provide a rich but controllable set of features and API/ABI compatibility, while providing zero-overhead isolation (same as Docker, since it relies on the same APIs).

I also rather like the idea of deploying programs rather than virtual machines.

Docker's cardinal sin imo is that it was designed as a monetizable SaaS product, and it suffers from the inner-platform effect, reinventing stuff (package management, lifecycle management, etc.) that didn't need to be reinvented.


I recommend you watch [0] if you haven't seen it yet, it describes the history of javascript, iirc until 2035.

[0] https://www.destroyallsoftware.com/talks/the-birth-and-death...


Thanks! There's a video I've been looking for, for years. It's about web technologies and recursion ("I put a VM inside a VM"), and it's satire/comedy.

It might be this one I'm thinking of, as it closely fits the bill. But something is telling me it's not, and that it was published earlier.

Any ideas?


You can use javascript as a single cross platform compile target. What's the difference?


Javascript comes with mandatory garbage collection. I suppose you could compile any language to an allocation-free semantic subset of Javascript, but it's probably going to be even less pretty than transpiling to Javascript already is.
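
For a sense of what that ends up looking like: you preallocate one big typed array up front and do your own memory management inside it, which is essentially the asm.js model. A toy sketch:

    // Toy "allocation-free" style: one up-front buffer plus a bump
    // allocator inside it. After setup, the hot path creates no JS
    // objects, so the GC has nothing to collect.
    const HEAP = new Float64Array(1 << 20); // preallocated once
    let top = 0;

    function alloc(slots: number): number {
      const ptr = top;
      top += slots;
      if (top > HEAP.length) throw new Error("out of memory");
      return ptr; // "pointers" are just indices into HEAP
    }

    // A 3-component vector lives at HEAP[ptr..ptr+2]; adding two of them
    // touches only the preallocated buffer.
    function vec3Add(out: number, a: number, b: number): void {
      HEAP[out]     = HEAP[a]     + HEAP[b];
      HEAP[out + 1] = HEAP[a + 1] + HEAP[b + 1];
      HEAP[out + 2] = HEAP[a + 2] + HEAP[b + 2];
    }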


> it's probably going to be even less pretty than transpiling to Javascript already is.

I don't see how it'd be much different to compiling to JavaScript otherwise. Isn't it usually pretty clear where allocations are happening and how to avoid them?


“Pretty clear” is good, “guaranteed by language specifications” is better.

Why reverse-engineer each JS implementation if you can just target a non-GC runtime instead?


WASM allows you to run some parts of the application a bit faster. ;)


WASM, and asm.js before it, roughly exist because Javascript is such a bad compile target.


WASM works with any language and can be much faster than JavaScript


You can compile any language to JavaScript. jslinux compiled x86 machine code to JavaScript.

So basically wasm is some optimisation. That's fine but it's not something groundbreaking.

And if we remove the web from the platform list, there were many portable bytecodes: P-code from the Pascal era, JVM bytecode from the modern era, and plenty of others.


> some optimisation

That's underselling it a bit IMO. There's a reason asm.js was abandoned.


Wikipedia mentions that Wasm is faster to parse than asm.js, and I'm guessing Wasm might be smaller, but is there any other reason? I don't think there's any reason for asm.js to have resulted in slower execution than Wasm.


> I don't think there's any reason for asm.js to have resulted in slower execution than Wasm

The perfect article: https://hacks.mozilla.org/2017/03/why-webassembly-is-faster-...

Honestly the differences are less than I would have expected, but that article is also nearly a decade old so I would imagine WASM engines have improved a lot since then.

Fundamentally I think asm.js was a fragile hack and WASM is a well-engineered solution.


After reading it, I don't feel convinced about the runtime performance advantages of WASM over asm.js. The CPU features mentioned could be added to JS runtimes. Toolchain improvements could go both ways, and I expect asm.js would benefit from JIT improvements over the years.

I agree 100% with the startup time arguments made by the article, though. No way around it if you're going through the typical JS pipeline in the browser.

The argument for better load/store addressing on WASM is solid, and I expect this to have higher impact today than in 2017, due to the huge caches modern CPUs have. But it's hard to know without measuring it, and I don't know how hard it would be to isolate that in a benchmark.
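
Concretely, the load/store point is about patterns like the one below, where every asm.js heap access goes through shifts, adds, and an in-bounds guarantee on the typed array, while the equivalent wasm load is a single instruction plus whatever guard-page trick the engine uses. Names here are illustrative:

    // Typical asm.js-style heap access: the "heap" is a typed-array view
    // and a pointer is a byte offset into it.
    const HEAP32 = new Int32Array(new ArrayBuffer(1 << 20));

    function loadField(structPtr: number, fieldOffset: number): number {
      // >> 2 turns a byte address into an Int32Array index; the engine
      // still has to keep the access inside the array's bounds.
      return HEAP32[(structPtr + fieldOffset) >> 2] | 0;
    }
    // The equivalent wasm is a single i32.load with a static offset, which
    // engines can lower to one machine load (bounds-checked "for free" via
    // guard pages on 64-bit hosts).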

Thank you for linking it. It was a fun read. I hope my post didn't sound adversarial to any arguments you made. I wonder what asm.js could have been if it was formally specified, extended and optimized for, rather than abandoned in favor of WASM.


Whatever it would have ended up like it would have been a big hack so I'm glad everyone agreed to go with a proper solution for once!


Both undersell and oversell. There are still cases where vanilla JS will be faster.

And AFAIK asm.js is the precursor to WASM, like the early implementations just built on top of asm.js's primitives.


*sheer

shear potential = likely to break apart


Haha, shear potential seems like a great accidental pun. I'll have to find an excuse to use it deliberately over the next week.


Java and JVM all over again


> being told web assembly helps some parts of Figma run faster feels like a big let down.

Not really, when tools like Figma weren't possible before it


What was preventing the development of Figma before Wasm?

For developing brand new code, I don't think there's anything fundamentally impossible without Wasm, except SIMD.


Performance. JS can be as fast as wasm, but generally isn't on huge, complex applications. Wasm was designed for things like Unity games, Adobe Photoshop, and Figma - that is why they all use it. Benchmarks on such applications usually show a 2x speedup for wasm, and much faster startup (by avoiding JS tiering).

Also, the ability to recompile existing code to wasm is often important. Unity or Photoshop could, in theory, write a new codebase for the Web, but recompiling their existing applications is much more appealing, and it also reuses all their existing performance work there.


Yet Figma-like tools do exist without Wasm.



