For serverless use cases, Cloudflare and others found that it's faster to just call a function in a Wasm binary than to spin up a whole Docker container [1], which basically translates to cost savings.
At the last company I worked for [2], we saw Wasm as a way to easily ship user code in a cross-platform way. We mostly targeted edge-ML use cases. Wasm allowed us to package all the "libraries"/functions needed to run the user code securely. So users created and tested their ML apps in the browser and deployed them to all the platforms we supported (mobile, browser, embedded). As an added bonus, they could write each of their functions in any language that compiled to Wasm and ship it all as one "app".
This seems like the best reason to use WebAssembly. The benefit goes to the system integrator / platform, and the value proposition is clear versus any other technology out there.
Great question! The promise (and excitement) of Wasm is to have portable, secure, quick-to-start, low-resource-usage apps. So, write code in Go, Rust, C++, or any other language that can compile to Wasm (over 40 now!) and you're good to go. The binaries are super small too. Happy to dive in more!
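To make that concrete, here's roughly what the flow looks like with Rust (a minimal sketch; the crate name and file names are made up, and newer toolchains call the target wasm32-wasip1 rather than wasm32-wasi):

```rust
// src/main.rs of a hypothetical crate called `hello-wasm`.
// The source is plain Rust; nothing about it is Wasm-specific.
//
// Build once:
//   rustup target add wasm32-wasi
//   cargo build --target wasm32-wasi --release
//
// The resulting target/wasm32-wasi/release/hello-wasm.wasm is a single file
// that runs unchanged on x86 or ARM hosts under any WASI runtime, e.g.
//   wasmtime hello-wasm.wasm
fn main() {
    println!("hello from a Wasm module");
}
```

The same .wasm file is what you could later hand to a browser, an edge runtime, or a container workflow like the one in this announcement.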
Are there realistic benchmarks that quantify these gains?
At first glance, the promise reads as reminiscent of the JVM days, which also promised to solve portability. The main difference is JIT vs. AOT, but behind that still hides the complex management of filesystem access, threading, process spawning, SIMD, GPU, and other non-portable differences. While I imagine the shim avoids needing a whole Linux, what do we lose in the change?
Why not run that Go/Rust/etc. code natively on the machine, though? Is there extra sandboxing, network/filesystem virtualization, or anything else gained by compiling to and running in a WASM environment?
One benefit of using Wasm is an architecture-agnostic binary. Right now you can't run an x86 binary on ARM or vice versa, so you basically need to build your containers twice if you have MacBook people and x86 servers. And technically those are different images, so there's a chance you'll hit some non-trivial difference. With Wasm everything could be simpler.
This does not work for me. A Java image that takes 30 seconds to build on an x86 machine took 40 minutes to build on an M1, after which I killed it. So this feature essentially does not exist, as it's not usable. I don't think that's how most people use it. I personally rent an x86 VPS just for Docker.
Most images nowadays have an ARM version, so that's probably how most people use it.
Yes, we need a compiler for Java. Java sources are compiled to Java bytecode, along with many other steps for complex projects: downloading dependencies, generating sources, running tests that might invoke platform-dependent binaries, and so on.
> That was the advantage given for the JVM. The .jar ran on x86 and ARM.
Yes, Wasm and Java bytecode are the same in that regard. But Java bytecode failed to get adoption outside of the Java world; Wasm might not, we will see. One significant improvement of Wasm over Java is its security story. The Java sandbox is well known for its CVEs, whereas browser Wasm implementations are battle-tested on billions of devices in the wild Internet. So you basically can't run untrusted Java bytecode without further boundaries like KVM or at least containers, but you definitely can run untrusted Wasm bytecode, because that's what your browser does all the time.
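To illustrate what "running untrusted Wasm" looks like in practice, here's a minimal sketch using the wasmtime crate (the module path and the exported `add` function are placeholders, and the exact fuel API has shifted between wasmtime releases):

```rust
use wasmtime::{Config, Engine, Instance, Module, Store};

fn main() -> anyhow::Result<()> {
    // Meter execution ("fuel") so a hostile guest can't spin forever.
    let mut config = Config::new();
    config.consume_fuel(true);
    let engine = Engine::new(&config)?;

    // "untrusted.wasm" stands in for bytecode you don't trust.
    let module = Module::from_file(&engine, "untrusted.wasm")?;

    let mut store = Store::new(&engine, ());
    store.set_fuel(1_000_000)?; // older releases call this add_fuel

    // No host functions are linked in, so the guest can only compute over its
    // own linear memory: it has no handle to the filesystem, network, or clock.
    let instance = Instance::new(&mut store, &module, &[])?;
    let add = instance.get_typed_func::<(i32, i32), i32>(&mut store, "add")?;
    println!("guest says 2 + 3 = {}", add.call(&mut store, (2, 3))?);
    Ok(())
}
```

Anything beyond pure computation (files, sockets, clocks) only exists for the guest if the host explicitly links it in.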
And it is starting an additional emulated x86/amd64 VM, so the battery takes a big hit. Now imagine you can run Wasm that doesn't need an extra VM and doesn't drain your battery much faster.
I think the number of people losing a lot of battery to this is minuscule, especially since the macOS x86 emulation is pretty fast.
A bigger peeve is that there is no general, reasonably fast cross-build capability to, e.g., build, test, and debug ARM images (for running on ARM servers) on x86 Linux or Windows.
Gained by you, running your app on your own server? Not much, really.
Gained by some "serverless" provider who tries to run multiple apps like yours in parallel on the same machine? Yes: less process overhead, and probably a smaller memory footprint too.
A majority of software already does this. WASM seems more like a direct attempt to run binary blobs and simply "trust" the intrusive binary sandbox while it leaks all kinds of information, when trustworthy sandbox solutions and multi-architecture compilation are nothing new and not a problem that needs solving. If anything, the problem people are trying to solve with WASM is how to get people enthusiastic about handing over more privacy.
Depends on which "this" you're referring to. We (Docker) are trying to make it easier for developers to use the tools and capabilities they know and love to build, share, and run Wasm applications.
As far as Wasm itself, it's designed to provide a fast, lightweight, secure, and portable binary format. While it was originally designed to bring native code to the browser, it's quickly spreading to the server side. Many folks are using it for edge/IoT, but it's growing into other areas (I saw demos today of it even being used in databases as pseudo stored procedures). Happy to dive in more if you have more questions!
I'm honestly really confused about the specific role docker is playing here.
What, exactly, is Docker doing? Is it compiling the application? Is it making a runtime for the wasm binary? Is it being the runtime for the wasm binary, so the end user builds through whatever usual build processes and gets a binary they can then easily run?
If the wasm binary is lightweight and portable, why is docker useful?
edit: Given the other comments about "what problem does this solve" I think maybe the blog post has missed its mark slightly
Yeah. Not trying to be cynical, but this feels like someone at Docker said "we need to have a WASM story" and this is what they came up with. I really don't see the point.
Sorry for the delayed response, but the problem we're trying to help solve is lowering the barrier to entry for using Wasm apps by leveraging the tools many developers are already using. As an example, with Docker+Wasm you can use a Dockerfile to build the app in a container image, distribute the Wasm bundle as an OCI artifact, and test it locally using the Wasm runtime. While the ability to build using containers has existed all along, the ability to actually run the Wasm app locally side by side with your other containers (and leverage the same container network) is new. Hope that helps!
This should be the copy for the announcement. Finally it's clear what it does; the announcement itself doesn't explain at all what Docker + WASM is.
Summary of summary: Dockerfile -> docker build -> docker container with the WASM app and runtime inside -> docker push/pull -> docker run container.
However, even though I've never compiled anything to WASM, I think it was already possible to build an image with a WASM binary and runtime inside (as for any other language/runtime). So what's the friction this is removing from the process?
That does help, thanks. It's still all a bit abstract to me because I (like many others) am still not familiar with WASM itself, so this next step is probably going to go unappreciated for a while.
I'm a bit surprised by several comments in this post talking about how WASM could replace containers, but I don't have the context around it.
Easy orchestration and deployment is the only thing I can think of, thanks to Docker's existing infrastructure.
But tbh if orchestration is really the concern, Docker + wasm seems less efficient than having a dedicated app that can orchestrate multiple wasm modules within the same process. But maybe that's something docker can solve later as the actual requirements emerge.
The only thing I see is that after wrapping Wasm inside a Docker container you can leverage all those Docker tools (k8s, etc.)?
Docker solves portability and scalability issues with reasonable overhead, so Wasm hidden inside Docker can benefit from some of that?
I wonder if it could make testing frontend code that uses WASM (but not the DOM) fast and easy, since you wouldn’t need to fire up a complete browser environment.
I’m not sure that’s possible at the moment. In the past, when I needed to test a WASM integration, I ended up using that approach, and it was kind of a pain not to get immediate feedback on the WASM code’s API tests, since it was essentially only testable through the complete integration environment.
I like integration tests, but I like smaller and faster test suites for easing development in some conditions as well.
You can currently test a WASM-targeted API if you use unit tests or other language-level testing approaches, but you won’t get the constraints of the WASM runtime as far as I know. Maybe the lack of garbage collection could be a critical constraint to test against.
I suppose you could even test dynamically linked binaries without a browser as well.
I’m sure there’s far more to it that I’m not aware of, and maybe testing really isn’t that useful of a feature here — I’m just guessing based on my own experience.
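For what it's worth, one way to get that faster feedback today is to run ordinary unit tests inside a WASI runtime instead of a browser. A rough sketch in Rust (the function and the cargo runner config are assumptions on my part, and this only covers code that doesn't touch the DOM or wasm-bindgen):

```rust
// Hypothetical library function that normally ships to the frontend as Wasm.
pub fn parse_score(input: &str) -> Option<u32> {
    input.trim().parse().ok()
}

#[cfg(test)]
mod tests {
    use super::*;

    // With a .cargo/config.toml entry along the lines of
    //   [target.wasm32-wasi]
    //   runner = "wasmtime"
    // `cargo test --target wasm32-wasi` compiles the test harness to Wasm and
    // executes it under the runtime's constraints, with no browser involved.
    #[test]
    fn parses_plain_numbers() {
        assert_eq!(parse_score(" 42 "), Some(42));
        assert_eq!(parse_score("nope"), None);
    }
}
```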
Great question! There isn't a way to run Docker directly in the browser. But there are tools (like Play with Docker at play-with-docker.com) that let you interact with a CLI in the browser to run commands against a remote cloud instance. I personally use this a lot for demos and workshops!
But... certainly a neat idea to think about what Wasm-based applications could possibly look like/run in the browser!
Hey! Peter from Snaplet here. This is really exciting stuff. We created the OSS postgres-wasm example (https://github.com/snaplet/postgres-wasm) a few weeks ago. An idea I'm playing around with is something like:
Edit.com opens a text editor and terminal where I have access to the NodeJS binary and a connection string to PostgreSQL. Want Redis? Open a new tab at https://redis.com/try, and the connection string will appear in the edit.com tab.
I used https://wasm.supabase.com/ to make sure some SQL commands I was writing for a blog post were correct. It was super useful and faster than starting Docker Desktop, looking for the Postgres image name, starting it, etc.
I miss a feature where I can share a link with some data/schema pre-seeded (maybe from a gist?)
All three links that you posted appear to be either broken or malicious. Are you just trying to explain a concept using example domain names? Consider ".example" or ".example.com" (see RFC 2606) instead of potentially malicious domains.
Is it possible to sandbox the host system from the guests in WASM?
Are there namespaces and cgroups and SECCOMP and blocking for concurrent hardware access in WASM, or would those kernel protections be effective within a WASM runtime? Do WASM runtimes have subprocess isolation?
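Partial answer for the filesystem side, as I understand it: there are no namespaces or cgroups inside the guest. Instead, WASI uses a capability model where the host preopens the directories the guest may touch, and everything else simply doesn't exist for it. A small sketch from the guest's point of view (file names are illustrative; the `--dir` flag is wasmtime's way of granting a preopen):

```rust
use std::fs;

// Compiled with e.g. `cargo build --target wasm32-wasi` and run with something
// like `wasmtime run --dir=. guest.wasm`: only the preopened directory is
// visible to the guest; there is no global filesystem, no /proc, and no
// sockets unless the host grants them.
fn main() {
    // Works: "." was explicitly preopened by the host.
    match fs::read_to_string("config.txt") {
        Ok(s) => println!("read {} bytes from the granted dir", s.len()),
        Err(e) => println!("config.txt not readable: {e}"),
    }

    // Fails: /etc/passwd was never granted, so the path can't even be resolved.
    match fs::read_to_string("/etc/passwd") {
        Ok(_) => println!("unexpectedly read /etc/passwd"),
        Err(e) => println!("host file blocked, as expected: {e}"),
    }
}
```

CPU/memory limits are a host-runtime concern rather than something the guest sees; engines like wasmtime expose knobs such as fuel metering and memory limits on the store.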
- TIL about the Endokernel: "The Endokernel: Fast, Secure, and Programmable Subprocess Virtualization" (2021)
https://arxiv.org/abs/2108.03705#
> The Endokernel introduces a new virtual machine abstraction for representing subprocess authority, which is enforced by an efficient self-isolating monitor that maps the abstraction to system level objects (processes, threads, files, and signals). We show how the Endokernel can be used to develop specialized separation abstractions using an exokernel-like organization to provide virtual privilege rings, which we use to reorganize and secure NGINX. Our prototype includes a new syscall monitor, the nexpoline, and explores the tradeoffs of implementing it with diverse mechanisms, including Intel Control-flow Enforcement Technology. Overall, we believe sub-process isolation is a must and that the Endokernel exposes an essential set of abstractions for realizing this in a simple and feasible way.
> gVisor is an application kernel, written in Go, that implements a substantial portion of the Linux system surface. It includes an Open Container Initiative (OCI) runtime called runsc that provides an isolation boundary between the application and the host kernel.