Compile to LLVM or WASM

Wouldn’t it still be useful to have the stackalloc and Ptr stuff?

Sure, but then we’re back to the question of language semantics: do you want Scala Native semantics, or Scala.js semantics? You can go back to square 11 (Compile to LLVM or WASM - #12 by sjrd) and reread the thread from there. :slight_smile:


There is more to a VM than a heap.

In WASM you don’t have any of the OOP stuff from the JVM / JS. AFAIK all you have are “C like” functions.

You would need to lower all the OOP stuff, and have a (tiny) runtime for the parts that can’t be lowered. You would need to reimplement objects and virtual calls on them, closures, exceptions, and maybe some other things. Effectively what the Scala Native runtime does…
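To illustrate what “lowering” means here, a minimal sketch (hypothetical, plain Scala used as pseudocode): closure conversion turns a capturing lambda into an explicit environment record plus a top-level, C-like function that takes the record as an extra argument.

```scala
// Source level: `val addN = (x: Int) => x + n` captures `n`.

// After closure conversion, for a target that only has plain, C-like functions:
final class AddNEnv(val n: Int)                        // explicit record of captured variables
def addNImpl(env: AddNEnv, x: Int): Int = x + env.n    // top-level function, no hidden capture

// A lowered closure is just the environment bundled with a "function pointer".
final case class Lowered(env: AddNEnv, fn: (AddNEnv, Int) => Int) {
  def apply(x: Int): Int = fn(env, x)
}

val addFive = Lowered(new AddNEnv(5), addNImpl)        // behaves like `(x: Int) => x + 5`
```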

I need to look up the details, but that JS and WASM share a heap also doesn’t sound right to me. This would break the WASM security model, which is nothing-shared-by-default. When you can pass pointers freely you’re back to the security of the host VM. (OK, maybe that’s done in the browser for performance reasons, along the lines of “it makes no sense to be more secure than the host VM anyway”. IDK, need to look this up.) The other points still hold.

This sounds interesting.

This would also make a Scala Native WASM GC easy to implement, wouldn’t it? :grinning:

Yeah, I mean one can do it. Because, you know, you can! :smile:

I clearly see the enthusiasm for Scala.js. But I guess you would be equally capable of doing it with Scala Native.

That’s the exact question. And imho it’s quite clear that it’s Scala Native. You want the pointer-juggling and C-lib-calling stuff because that’s what WASM was made for, and only with that stuff can it compete with the established VMs.

Also think about the future: when stand-alone WASM runtimes evolve and become feasible targets, they won’t have a JS VM attached to run your libs (besides, such a construct would imho likely be dog slow, eating any possible WASM speed-ups, which are a delicate concern anyway). But the stand-alone runtimes have things like WASI, a nice “POSIX C”-like interface. Scala Native is built around exactly such a thing…

It’s all about the ecosystem. And the primary ecosystem on WASM is currently Rust and C/C++. I don’t think this will change anytime soon, as WASM was made especially to host such languages well. Trying to drag the JS ecosystem into WASM, who else does this? (Especially as you can’t run JS libs on stand-alone WASM runtimes.)

This would also make a Scala Native WASM GC easy to implement, wouldn’t it?

Not really. Currently Scala Native uses LLVM as a backend, which does support compilation to WebAssembly (using the Emscripten or WASI-SDK toolchains), but LLVM IR has no way of expressing the WASM GC constructs. For this we need to emit .wasm directly - that’s what Kotlin/WASM does, instead of building on Kotlin/Native. It means we’d need to create a new WASM-specific backend for Scala Native. The same applies to using Scala.js to emit WASM-GC-compatible .wasm.

Because both platforms require new backends (or extensions of the existing ones), we need to look at other features to determine the best platform for initial WASM support. Scala.js wins here for several reasons:

  • It already has a fast, incremental compiler/optimizer and is stable
  • JavaScript-based WASM runtimes are currently a dominant target. These (browsers, Node.js, Deno) are also the only runtimes supporting WASM GC. Having good JavaScript interop is an additional advantage here.

When it comes to Scala Native, it would be a better fit for non-JS-based runtimes (wasmtime, wasmer) using WASI; however, these have multiple blockers right now:

  • exception handling - the lack of setjmp/longjmp or other low-level control flow makes it harder to establish low/no-cost exception handling. In JS-based runtimes we can use JS exception handling for free.
  • GC - no native runtime supports WASM GC yet. It means we need to ship our own GC, which itself has multiple blockers.

In the long term I believe it would be great to have a Scala.js WASM backend for Web-related use cases, and a Scala Native WASM/WASI backend for other use cases. Possibly both of them could share a large part of the logic.


But at this point we’re back to my first question. Why the push to WASM then at all?

It will just run Scala.js in the browser, much slower and with many more complications. It will have zero advantage over running natively on a proper JS runtime in such a scenario.

The other points are of course correct. WASM is still in its early days, and a lot of things are missing. That’s also why I can’t understand the hype. I think there is no reason to rush to WASM. (Especially as, in the proposed implementation, it won’t have any advantages over native Scala.js. Rather the contrary.)

How would that work? Would Scala.js-on-WASM exceptions be handled on the JS VM? Because exceptions don’t have enough overhead already? :grin:

:cry:

This makes the stand-alone runtimes still completely unusable for GC languages. A hand-crafted GC inside WASM is way too slow. I’ll point to Blazor once more…

When only targeting the browser and Node-likes, porting the Scala.js variant of Scala makes even less sense to me. Scala.js already runs natively on these platforms.

What advantages are people hoping for from running Scala.js in the described way (tightly coupled to the JS runtime, constantly calling in and out)? I really don’t get it. Especially as you can’t use any of the actual WASM capabilities which offer (almost) raw HW performance (at the cost of fiddling with low-level stuff), which is what you need to beat the excellent JS JIT compilers, and you can’t use any of the nice, already existing WASM-capable libs or frameworks, for example from Rust. So you miss out on the whole current WASM ecosystem.

I really don’t get what’s going on here. :see_no_evil:

But OK, it will at least work… Somehow… :upside_down_face:

Btw, one situation where WASM is needed is when we want to use Scala with the WASM component model, which will allow components written in different languages to be used together. [ Introduction - The WebAssembly Component Model ] (The next phase of ideas around an interoperable multi-language VM runtime.) Right now it’s hard to say how widespread this will become.

From the linked website:

Why the Component Model?

If you’ve tried out WebAssembly, you’ll be familiar with the concept of a module. Roughly speaking, a module corresponds to a single .wasm file, with functions, memory, imports and exports, and so on. These “core” modules can run in the browser, or via a separate runtime such as Wasmtime or WAMR. A module is defined by the WebAssembly Core Specification, and if you compile a program written in Rust, C, Go or whatever for the browser, then a core module is what you’ll get.

Core modules are, however, limited to describing themselves in terms of a small number of core WebAssembly types such as integers and floating-point numbers. Just as in native assembly code, richer types, such as strings or records (structs), have to be represented in terms of integers and floating point numbers, for example by the use of pointers and offsets. And just as in native code, those representations are not interchangeable. A string in C might be represented entirely differently from a string in Rust, or a string in JavaScript.

For Wasm modules to interoperate, therefore, there needs to be an agreed-upon way for defining those richer types, and an agreed-upon way of expressing them at module boundaries.

In the component model, these type definitions are written in a language called WIT (Wasm Interface Type), and the way they translate into bits and bytes is called the Canonical ABI (Application Binary Interface). A Wasm component is thus a wrapper around a core module that specifies its imports and exports using such interfaces.

The agreement of an interface adds a new dimension to Wasm portability. Not only are components portable across architectures and operating systems, but they are now portable across languages. A Go component can communicate directly and safely with a C or Rust component. It need not even know which language another component was written in - it needs only the component interface, expressed in WIT. Additionally, components can be linked into larger graphs, with one component satisfying another’s dependencies, and deployed as units.

Combined with Wasm’s strong sandboxing, this opens the door to yet further benefits. By expressing higher-level semantics than integers and floats, it becomes possible to statically analyse and reason about a component’s behaviour - to enforce and guarantee properties just by looking at the surface of the component. The relationships within a graph of components can be analysed, for example to verify that a component containing business logic has no access to a component containing personally identifiable information.

Moreover, components interact only through the Canonical ABI. Specifically, unlike core modules, components may not export Wasm memory. This not only reinforces sandboxing, but enables interoperation between languages that make different assumptions about memory - for example, allowing a component that relies on Wasm GC (garbage collected) memory to collaborate with one that uses conventional linear memory.

This is about defining interfaces at a level similar to a C “ABI”.

It’s about primitive numbers, pointers, and structs, and functions on them.

It defines basically a binary FFI layer…

It’s not about passing pointers to memory holding high-level objects (like the ones from the JS or the JVM world, which is all Scala.js can do). Actually it explicitly says that passing pointers (memory) directly is not part of the interfaces, as this would break the WASM encapsulation and with it the basis of the whole WASM security model. No shared memory…

Without a language such as Rust, C/C++, Zig, or Go (or actually Scala Native) that can define such low-level interfaces, the WASM component model is not accessible.

Scala.js can (currently) only reasonably “talk” to JS libs because its interop is at the object-model level. If you want control over C-like, ABI-level things, so you could use the WASM component model to interact with the rest of the WASM ecosystem, you need features of Scala Native, whose C interop is already quite close to what would be needed to use WASM component interfaces.
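For a sense of the abstraction level meant here, this is roughly what Scala Native’s existing C interop looks like (a minimal sketch; API details vary by version, and the native function below is hypothetical, shown only to illustrate the shape of such a binding):

```scala
import scala.scalanative.unsafe._

// A C-level binding: raw pointers and C-sized integers, no objects in sight.
// `fastlib.compress_block` is a made-up native function, for illustration only.
@extern
object fastlib {
  def compress_block(dst: Ptr[Byte], src: Ptr[Byte], srcLen: CSize): CSize = extern
}

// Calling it means thinking in terms of memory, not in terms of objects:
def compress(src: Ptr[Byte], len: CSize): Unit = Zone { implicit z =>
  val dst     = alloc[Byte](len)                        // scratch buffer in unmanaged memory
  val written = fastlib.compress_block(dst, src, len)   // only pointers and numbers cross the boundary
  // ... use the first `written` bytes of `dst` ...
}
```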

Canonical ABI

An ABI is an application binary interface - an agreement on how to pass data around in a binary format. ABIs are specifically concerned with data layout at the bits-and-bytes level. For example, an ABI might define how integers are represented (big-endian or little-endian?), how strings are represented (pointer to null-terminated character sequence or length-prefixed? UTF-8 or UTF-16 encoded?), and how composite types are represented (the offsets of each field from the start of the structure).

The component model defines a canonical ABI - an ABI to which all components adhere. This guarantees that components can talk to each other without confusion, even if they are built in different languages. Internally, a C component might represent strings in a quite different way from a Rust component, but the canonical ABI provides a format for them to pass strings across the boundary between them.

Of course Scala.js could add such features. But that would amount to reimplementing Scala Native…

The wish to utilize the WASM component interop in the long run actually just strengthens my point: you need Scala Native semantics for that!

Starting from Scala.js is a dead end here. (At least as long as Scala.js isn’t augmented with all the low-level features needed to define an FFI at the level of a C-like ABI. But doing that would imho be nuts, because this was already done for Scala Native. :smile:)

What I understood (I clearly know a lot less than the others on this thread, so please take it apart):

  • There’s a lot of traction around wasm lately
  • It looks like a very promising platform for distributing portable and sandboxed code to run on lower-performance devices, i.e. the browserless runtimes
    • I don’t see a widespread reason to compile Scala for the browser to wasm
    • I also don’t think wasm will be relevant in replacing the JVM in places where it runs well
  • We need to make the standard library run on WASI, then Scala has a great library ecosystem that can be used to write applications. So maybe interop (component model) is not the most important thing? Though if compiling through Scala.js is really “a dead end” and components get adopted, that’s not good.
  • Scala seems to be in a really good position thanks to Scala.js (compiler, linker / optimizer, Java library port written in Scala) and the ecosystem (a lot already cross-builds to Scala.js).

What’s missing is indeed a native runtime supporting GC, exceptions, multi-threading (?). WasmEdge, for example, seems to have activities around all of those.


@MateuszKowalewski have you read the whole topic before stating your ideas?

This post is a great summary why scala-native is a less promising foundation for prototyping scala to wasm compiler than scala.js: Compile to LLVM or WASM - #29 by tanishiking

reasons why starting with scala-native is hard:

plan on how to start with scala.js and gradually break free from the browser requirement:

afaik ‘blazor-wasm’ (i.e. not ‘-aot’) runs .net bytecode through a bytecode interpreter that itself is compiled to wasm, so you have two abstractions at once: one compiled (wasm) and one interpreted (.net bytecode). interpretation is very slow, so that’s why ‘blazor-wasm’ is usually the slowest. i don’t think blazor proves anything in context of scala on wasm roadmap, except maybe the fact that running custom (not built-in) gc on wasm is going to be slow.

extra stuff:
wasmgc provides method support in oo-style: gc/proposals/gc/Overview.md at main · WebAssembly/gc · GitHub
there is exception handling in wasm (no need to go through javascript?) as pointed in Add WebAssembly Linker Backend (with WasmGC and Wasm ExceptionHandling) · Issue #4928 · scala-js/scala-js · GitHub
in wasm probably you can’t turn opaque, gc-managed pointers into raw pointers, so calling a C-like ABI will require non-trivial argument serialization anyway (unless you run a custom gc compiled to wasm, which is going to be slow)

overall it’s not clear whether scala-native or scala.js semantics would be better, but starting from scala.js seems more promising right now.

do you have any examples where a gc-managed language (compiled to wasm) talks (calls both ways) efficiently (without arguments serialization) with a language without gc (also compiled to wasm)? let’s assume the arguments are arrays, but something more complex would be interesting as well (e.g. structs).

Just wanted to bring this up since I haven’t seen it mentioned so far: There appears to be an effort to implement a JVM in WASM that allows you to run unaltered jars.


Yes, I did. And my reaction was: Holy cow, what’s going on here?!

Nobody said it would be easy. It’s porting Scala to a completely new platform. This is huge. No question.

Only that WASM, as a platform created with the goal of running low-level C-like languages in a secure sandbox, is closer to Scala Native than to Scala.js. This should be obvious!

Sure. C-interop needs to be exchanged for WASM component interop.

The WASM ABI is not the C ABI. But it’s at least conceptually on the same abstraction level… An abstraction level currently completely missing from Scala.js.

This won’t work. That’s why I said it’s a dead end.

It may be easier to start with, but you’re going to hit a wall really soon…

This is a pipe dream. The performance will be much worse than what the JS JITs can achieve! WASM does not have a JIT. It will therefore be like running interpreted JVM code: dog slow. You need to optimize your code AOT (like LLVM does) to come even close to what a modern JS JIT can do. (Given that JS JITs are optimized to compile away all kinds of dynamic OOP indirection; they’re especially good at that.)

I would take bets for real money that Scala.js compiled to WASM would be much slower than directly running as JS.

People always underestimate how fast JS actually is. People hear “native code” and always seem to think it’s somehow magically faster. It is not! Not when you have a powerful JIT in the picture.

Please google for yourself how people building WASM modules in Rust complain that these are much slower than JS equivalents… You only get good performance when you hand-optimize the stuff in exactly the right places, take advantage of the low-level control over your code, and do all the “C tricks” to make things fast. You still need to beat the JIT while doing that, though! Not an easy task.

Code developed in Scala.js can already be deployed on servers running Node.js!

So we’re not reaching any new platform here.

(Only that it would be dog slow in WASM, of course.)

This can’t happen before the stand-alone runtimes support WASM GC anyway…

The current state is classic vaporware.

Should it be true that WASM GC just lazily couples the JS runtime with the WASM runtime, it could turn out that WASM GC on stand-alone WASM runtimes remains another pipe dream, because these runtimes explicitly strive to get rid of the JS runtime component.

Of course the stand-alone runtimes could add their own WASM GC implementation. This requires developing a competitive GC implementation from scratch. (You can’t just copy part of a JS runtime, because the JS GC is tightly coupled to the runtime.) This could take half a decade or more, if it ever happens, which is not certain.

WASI is the “C lib for the Web”.

Basing Scala’s std. lib on WASI means recreating something like the Scala Native lib, which is already a kind of “C lib”, at least in terms of what it offers.

This would paint Scala on WASM into a corner, creating effectively a disconnected island with no access to any libs besides pure, Scala-WASM-capable ones.

That’s a pretty small world! Scala is not the biggest language on the playground. Pulling off something like conquering a completely new runtime platform without access to already existing ecosystems seems crazy difficult, bordering on guaranteed failure (sadly).

Also, this implies porting JS semantics to WASM. This looks, as said, like porting JS to WASM… It’s imho crazy to reimplement parts of JS on WASM. People do all kinds of crazy things, but I haven’t seen something like that proposed anywhere. (Maybe because it just doesn’t make any sense, as you can easily compile and run a language with JS semantics and its libs on JS.)

That means, as shown, reimplementing Scala Native… because you need direct access to an ABI layer.

With Scala.js-on-WASM you would be even in a much worse spot.

Your “bytecode” is Scala.js IR: a fairly high-level “bytecode”, dependent on all kinds of OOP semantics, and completely unoptimized compared to JVM or CLR bytecode (even though both of those are still quite high level, at least their methods contain more low-level VM code, which needs less interpretation and can be executed more directly by a low-level machine).

Now you need to interpret this “bytecode” in an environment similar to the C abstract machine. All the high-level constructs of your “bytecode” need interpretation, because your “VM” is more or less a simulated CPU plus some memory, and not something like the JVM or CLR. You can’t really JIT-compile your “bytecode”, as this would amount to reimplementing a JVM or JS VM on WASM. (Actually exactly what MS Blazor does, which is provably dog slow.)

Also, Blazor proves that just porting existing code to WASM usually makes it much slower, not faster. (We’re back to the question of the actual gains one could get from Scala compiled to WASM.) In theory a CLR running on WASM should be roughly in the same ballpark as running natively on bare metal, because there shouldn’t be too much overhead with WASM, which is “almost” at the bare-metal abstraction level, right? But this is not true! The overhead is there, and you need to beat the world’s best JIT compilers to come even close to the performance of other VMs. For specific code paths, given hand-optimized code, this can work out. But just throwing some random code into the WASM runtime will make it slower. Blazor is a good example because it’s a large real-world application where you can directly compare it to the bare-metal version and see how much performance gets eaten by the WASM layer.

The situation with Scala.js would be even worse, as said, because it needs to interpret higher-level “bytecode”. (You could of course lower the code, compiling it down to something closer to the abstractions WASM offers. Which is actually exactly what Scala Native does through LLVM IR. :smile:)

Does this really exist? It looks like an unfinished proposal. There are open questions directly in the document. (I didn’t dig deeper so far, so I’m happy to be educated.)

Also, this is a different object model from what JS (and therefore Scala.js) uses…

  • (structural) subtyping
  • […]
  • dynamic linking might add a whole new dimension

The last point is interesting. JVM artifacts are all dynamically linked, so you need to implement a WASM-aware linker. Surprise: it will be similar to Scala Native’s linker, as this is the abstraction level you need to go down to. It’s not comparable to linking in JS, where you don’t need to care about any ABI-like low-level details!

Given this:

  • Want to represent objects as structures, whose first field is the method table
  • Want to represent method tables themselves as structures, whose fields are function pointers
  • Subtyping is relevant, both on instance types and method table types

This looks more like the abstraction level of OOP in C++. Method tables, hand-crafted v-calls…
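As a minimal sketch of what such a lowering looks like (hypothetical, plain Scala standing in for the emitted structures; a real backend would emit WasmGC structs and function references instead):

```scala
// Hypothetical lowering of a virtual call `animal.sound()` for a target that
// only offers structs and function references, not built-in virtual dispatch.

// The method table: one function reference per virtual method.
final class AnimalVTable(val sound: AnimalObject => String)

// Every object carries a reference to its class's method table as its first field.
final class AnimalObject(val vtable: AnimalVTable, val name: String)

// What the compiler conceptually emits for a class `Dog extends Animal`:
val dogVTable = new AnimalVTable(self => s"${self.name} says woof")
def newDog(name: String): AnimalObject = new AnimalObject(dogVTable, name)

// What the source-level call `animal.sound()` becomes after lowering:
def callSound(animal: AnimalObject): String =
  animal.vtable.sound(animal)   // explicit v-call through the method table
```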

That’s what I meant when I said you need to “reimplement OOP features”. These things aren’t given in WASM.

But Scala Native’s runtime does exactly this: implementing Scala’s OOP in a C-like environment from scratch.

Exactly my point!

You’ll need to reimplement most of what Scala Native did.

It is clear. It could not be more clear.

Scala.js-on-WASM would create a weird island solution without access to any of the WASM ecosystem, or even any of the benefits of WASM (fast low-level code for max performance where it matters; stand-alone runtimes are currently out of the picture).

It won’t run on any stand-alone WASM runtime for the foreseeable future, if ever, so you don’t reach a new platform which isn’t already occupied by Scala.

I actually agree that it may more quickly yield something that looks like “it works” (only at least an order of magnitude slower than running natively on JS).

But it’s a dead end!

Let’s look again at the goals stated on GitHub:

Why Wasm?

Wasm was initially designed for faster performance close to native code execution within web browsers.

Yes exactly. It started as a low-level “turbo” for the cases where JS does not shine (everything around “number crunching”).

It was conceptualized especially to run C/C++ routines safely in a sandboxed browser environment.

It’s first and foremost low-level, on purpose.

However, its usecases extend far beyond the browser, owing to its robust security features and portability.

Both need to be proven by time…

Portability isn’t that good, as no VM language has been ported so far.

Security is also an unproven claim. It looks very good on paper. But that’s it. (And in the case where you share memory with another VM, security reduces to the security of the other VM. Back to square one…)

Also, the introduction of WASI further expands its range of use cases.

Yep. It does so only when you’re able to interface at the level of a “C lib” with a “C ABI”… :smile:

  • Faster code execution in browser

Not for the general case!

Only under very specific circumstances.

  • Plugins

Only if you speak WASM’s canonical ABI…

  • Cloud
  • Edge
  • IoT

None of the stand-alone runtimes could reasonably run Scala at the moment.

Could take years, if it ever happens at all.

  • Interop with other languages

Only if you speak WASM’s canonical ABI…

So my conclusion would be: WASM is a long shot. It’s a new platform with all kinds of new challenges.

But it’s clearly made for low-level languages! Something that can already compile to JS does not need to be ported. It could not reap any of the benefits of WASM. It would just be one more VM layer making things slow…

If anything, then only compiling Scala Native to WASM would maybe make some sense. (And even this is not certain, as Scala Native is currently also not about performance, so the “turbo” for browser apps isn’t something you could get with Scala Native, at least when not writing unsafe C-like code directly; it also has a long way to go to integrate with other system-level languages like C++, or in the long run, even better, Rust.)

CheerpJ is not open source.

That’s already a K.O., imho.

Also it’s laughably slow, and a gigantic resource hog. Java applets which ran fine in the ’90s have massive lag when you run them on CheerpJ.

It’s an impressive tech demo (like Scala.js-on-WASM would be :smile:) but it’s imho useless for most use-cases.

As I understand it, they’re aiming at people who can’t afford to port ancient Java GUI desktop apps to the web. (Their prices are also only for really desperate people… :smile:)

i guess people did some research and experimenting and came to different conclusions than you.

the problem is that wasm is not exactly low-level. you don’t get all the freedom that you get in actual native assembly.

afaiu (or more precisely, these are my guesses about wasm):

  • wasm doesn’t let you manipulate thread stacks freely - you have to use shadow stacks if you want to, and shadow stacks make everything very slow. you can’t take raw pointers to things on the stack and (therefore) you can’t scan stacks using raw pointers. implementing your own tracing gc in wasm requires stack scanning, so that forces you to use a shadow stack, so everything is slow (see the sketch after this list).
  • you can’t take raw pointers to gc-managed objects in wasm, so you can’t do low-level introspection or use them as arguments in calls to native (i.e. not managed by gc) libraries
  • etc
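a minimal sketch of the shadow-stack idea mentioned above (hypothetical and heavily simplified, plain scala as pseudocode): gc roots are kept in a parallel structure that the collector can walk, because it cannot scan the real wasm stack:

```scala
// Hypothetical, simplified shadow stack: every function "registers" its local
// references here, because a custom GC compiled to Wasm cannot walk the real stack.
object ShadowStack {
  private val roots = new Array[AnyRef](1 << 16)
  private var top   = 0

  // called on function entry for each local that holds a reference
  def push(ref: AnyRef): Unit = { roots(top) = ref; top += 1 }

  // called on function exit to drop that frame's roots
  def pop(count: Int): Unit = { top -= count }

  // the collector walks only this array; this per-call bookkeeping is
  // exactly the overhead that makes a custom tracing GC on Wasm slow
  def foreachRoot(f: AnyRef => Unit): Unit = {
    var i = 0
    while (i < top) { f(roots(i)); i += 1 }
  }
}
```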

as i’ve said above, the wasm abstraction level is between scala-native and scala.js. you need to modify both in a non-trivial way to handle compilation to wasm.

scala.js will be compiled to wasm like ‘blazor-wasm-aot’, i.e. there will be no scala.js specific interpreter in the middle. look at the nascent prototype: GitHub - tanishiking/scala-wasm: Experimental Wasm backend for Scala (playground)

i guess the prototype already uses it.

there are always open questions in wasm proposals to guide future extensions to wasm.

scala.js has a linker too, and a whole set of link-time optimizations.

WasmGC gives you primitives to define v-tables and v-calls. i guess they would compile to something more efficient than when emitting the whole virtual dispatch logic as expanded lower-level wasm that wouldn’t get devirtualization, because wasm jit wouldn’t recognize it as v-call.

@MateuszKowalewski you haven’t replied to my question. to me it’s the most important one in this discussion with you. if you want scala to do something no-one has done before, then i wouldn’t say it’s an obvious thing to do.

in general, the choice is between:

  • using WasmGC. in this case scala.js semantics fits better. integration with unmanaged languages is not efficient. program execution is fast as the medium-level primitives (declaring classes and their instances) are recognized by wasm runtime and optimized accordingly. avoiding custom tracing gc means avoiding shadow stacks and other things that slow down execution.
  • not using WasmGC, instead implementing own tracing gc in wasm. in this case scala-native semantics fits better. integration with unmanaged languages is efficient. overall the resulting program will be slow due to constraints when implementing custom tracing gc.

both choices are imperfect in their own ways and both should be pursued in the long run.

note that c, c++, rust, etc don’t have to make that choice because they aren’t using tracing gc at all (unless you specifically want to add that, but you wouldn’t use it universally across your code anyway).

note 2: afaik blazor in wasm mode (both interpreted and aot) uses its own custom tracing gc (based on the mono runtime) and that makes it slow regardless of whether it’s using aot or not. the advantage is that having its own custom tracing gc (implemented in wasm) allows blazor to retain all the pointer arithmetic capabilities that .net gives you, so you don’t need another coding style when targeting blazor wasm.

That’s true. But it makes no difference.

WASM is a runtime for “C like” languages which run still inside a managed sandbox.

But inside the sandbox it’s more or less what you get from the C abstract machine. Of course it’s limited in some ways that make it “safe”, so no matter what “magic” you do inside the sandbox, even with raw “memory”, you can’t escape it (and, to some degree, you can’t “hijack” things inside it with the typical C exploits either, afaik).

I’m not questioning this.

But Scala Native is much closer. That’s the whole point.

Which means lowering the Scala IR down to a level that is close to what you need to run Scala on a bare C abstract machine. Which is reimplementing Scala Native…

I need to look at this. :smile:

That’s the problem. What those proposals describe often doesn’t have anything in common with what is actually there. Frankly, WASM stuff is in large part vaporware…

Nothing compared to what LLVM (or actually the JS JITs) could do.

You would need to implement all these AOT optimizations to be even just theoretically competitive. But as mentioned a few times: often even Rust code, compiled and optimized through LLVM, does not come anywhere close to the efficiency of JIT-compiled JS.

That’s more or less how I interpreted that sample.

But this still means that you need to actually use these primitives to implement your own OOP. That’s why I said this is more like what you get with C++; actually even a little lower level, as C++ already brings a default implementation for that stuff (which you’re still free to tinker with, because C++ gives you enough control for that).

Sorry but I don’t get this part.

First of all, there is no production GC language on WASM currently (besides experiments here and there). So the question is moot, imho, because it asks for a comparison to something that does not exist.

Secondly, calling between a GC language and a non-GC language is a hairy and difficult task no matter the environment, whether you’re on the JVM, the CLR, or elsewhere. Even when you can talk some low-level ABI directly (as with Scala Native and C), it’s still not trivial.

Also, doing this without going through some sort of FFI does not work anyway.

FFI usually implies argument marshaling. (You can do tricks in special cases to improve the performance of your FFI, but this only works if both sides of the FFI are aware of it.)
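As a rough illustration of what that marshaling amounts to (a hypothetical, simplified sketch in plain Scala; no real Wasm or FFI API is used), calling out of a GC-managed world means copying the data into memory the other side understands and passing only plain numbers across the boundary:

```scala
// Hypothetical, heavily simplified picture of an FFI call with marshaling;
// plain Scala stands in for the runtime machinery.
final class LinearMemory(size: Int) {
  private val bytes = new Array[Byte](size)          // stand-in for Wasm linear memory
  def write(offset: Int, data: Array[Byte]): Unit =
    System.arraycopy(data, 0, bytes, offset, data.length)
}

// Marshaling: the GC-managed Array[Byte] is copied into linear memory,
// and only plain integers (offset, length) cross the module boundary.
def callForeign(mem: LinearMemory, foreign: (Int, Int) => Unit, payload: Array[Byte]): Unit = {
  val offset = 0                     // a real runtime would allocate a region here
  mem.write(offset, payload)         // the copy is the marshaling cost
  foreign(offset, payload.length)
}
```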

But what does this have to do with WASM? WASM defines only one canonical way to talk to the outside world (any other module or a WASM “world”). This is the component model! Based on the WASM canonical ABI.

It does not matter whether someone did this already.

There is only one way for that in the WASM world: Using the canonical WASM ABI.

WASM modules are closed systems by default. That’s the core of WASM sandboxing.

You can then dig small holes here and there to talk to and from the inside of your sandbox. But these holes need to be explicitly defined, because you can’t share anything from the outside with the sandbox, or anything from the inside of the sandbox with the outside. (If WASM GC breaks that, I would actually have some further and quite fundamental questions regarding all the WASM marketing promises… because then sandboxing would be a plain lie.)

I don’t see that.

There are no “two choices”. There is only one: Scala needs a GC, because without an efficient GC implementation in the runtime, Scala on WASM is impossible.

So no matter which Scala IR gets compiled down to WASM, it will need to use WASM GC.

But to also utilize any of the potential advantages of the WASM platform, you need low-level things in the language! You need them to talk the canonical ABI with the outside world and ecosystem libs. You need them to write efficient compute kernels that can give you speedups for routines which are slow when implemented in JS.

You wanted to say impossible, right? Because without being able to talk the canonical WASM ABI there is no integration with anything at all… :grin:

To be honest I would not believe this claim before seeing some robust benchmarks.

I don’t see that.

Scala.js-on-WASM makes no sense. All in all, it won’t bring Scala to a new platform. That’s a dead-end effort.

But of course I’m not paying for this, so I don’t have any right to tell people what they should or should not do here. It’s just that, in my opinion, pushing the current proposal further will turn out to be a large waste of time in the end. :cry:

At least this would explain why they’re doing these crazy things at all. Do you have a source? Because I also never understood why MS is burning such ridiculous amounts of money on something that quite obviously doesn’t work well (and likely never will).

But GC is not the only thing here. The CLR itself is (in large part) a static, ahead-of-time compiled program. But even that is dog slow on WASM. It’s not only about “the things running inside”. Just compare how fast this, or actually also CheerpJ (which implements the JIT compiler part in WASM), compiles JVM or .NET assemblies. A compiler should actually be something that runs fine in WASM, because it amounts mostly to a lot of “number crunching”.

Or even better: See how Emscripten compiled programs run in the browser. They’re fast, but of course still slower than native. One could judge the WASM overhead from that.

For a language like Scala.js, which is pretty high level compared to what you get from Emscripten, the overhead will be much, much bigger. That’s why I’m repeating over and over that Scala run on WASM won’t be fast. It will be slow. (Only when you hand-optimize code using the low-level capabilities of WASM can you get, with luck, some speedups at all, and only if all the Scala abstractions don’t eat the gains; Scala Native at least has some form of optimizer to “de-abstract” things. Scala.js doesn’t, AFAIK…)

hmm, so you want to leverage scala-on-wasm to bring scala to more platforms rather than to achieve speedups, because you don’t believe in the speedups? well, at least something tiny but concrete is now understandable from your rant.

wasm gives you the full set of numeric primitives. javascript gives you only the type number (which is a double, i.e. 64-bit floating point) and int (a 32-bit signed integer, if your code fits a certain shape). longs (64-bit signed integers) and other types of numbers have to be emulated, which slows things down (sometimes several times). that stuff alone could give a significant speedup when number crunching. also there’s the fixed-width simd extension for wasm (and it’s already finished: proposals/finished-proposals.md at main · WebAssembly/proposals · GitHub ) that replaces simd.js (which is now removed). next in line is GitHub - WebAssembly/relaxed-simd: Relax the strict determinism requirements of SIMD operations. which will allow you to use the full width of the simd units in your cpu. that would be a major speedup compared to scalar javascript code.
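as a tiny, hypothetical example of the kind of code affected: on scala.js a `Long` is emulated with pairs of 32-bit values, while a wasm backend could in principle map it directly to i64 instructions:

```scala
// Hypothetical hot loop over 64-bit values; nothing Scala.js- or Wasm-specific here.
// The point is only that `Long` arithmetic is emulated on JS but native (i64) on Wasm.
def checksum(data: Array[Long]): Long = {
  var acc = 0L
  var i   = 0
  while (i < data.length) {
    acc = acc * 31L + data(i)   // 64-bit multiply/add on every iteration
    i += 1
  }
  acc
}
```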

as stated previously by others, non-ignorable parts of the scala-native ecosystem require unrestrained pointer arithmetic, so to migrate to wasm they would need to be rewritten. in order not to lose performance when compiling to bare metal, you would need two versions of scala code targeting scala-native: one with unrestricted pointer arithmetic (to get full performance) and one with restricted pointer arithmetic (to get wasm compatibility).

scala.js-to-wasm in the first prototype would allow you to compile programs to wasm without major modifications.

i highly doubt that most of the effort would be spent on implementing wasm backend in scala.js. most probably rewriting the standard libraries and ecosystem to not depend on browser-specific apis would be the major effort. and with scala-native you also need to rewrite libraries, from plain libc to wasi-libc.

why does the scala compiler have to implement all aot optimizations? the wasm runtime optimizes too. the more metadata you give it (by using the correct built-in primitives that the wasm platform gives you), the more optimizations it can apply.

that’s just your opinion. there’s a plan already to make scala.js-to-wasm that is free of browser-specific apis.

i don’t get it. blazor does work. it’s slow, contrary to the hype, but it works. developer experience is probably ok. their objective was probably to allow coding webapps in c# on client side, which they achieved with blazor wasm.

i don’t have definitive sources (i don’t fancy spending a lot of time digging through the whole asp.net repo to find low-level details about blazor wasm memory management), but the issues about blazor wasm support for wasm gc suggest that blazor wasm doesn’t yet use wasm gc at all:
[Blazor] WASM GC · Issue #82974 · dotnet/runtime · GitHub
which is closed because the issue is now tracked at
.NET Notes · Issue #77 · WebAssembly/gc · GitHub

For people following this topic but not necessarily all Scala news: Scala.js 1.17.0 was released yesterday, and it contains initial support for compiling to Wasm:


Amazing!
