Scala Native Next Steps

Stating first that SN is not mature yet, and then asking for proof that SN has been used successfully, is kind of contradictory, I think. You cannot ask for advanced showcases if the platform is not yet stable.

Also, regarding high performance: there are other factors, such as low latency and predictable performance. That’s why you find almost no mature real-time audio or video platform on the JVM, but many based on C, C++ and Rust. Another factor is memory footprint. Small computers like the Raspberry Pi are getting better now, but on the not-so-old Raspberry Pi 3 with 1 GB of RAM I very quickly ran into trouble running more complex Scala applications. The JVM does eat a lot of memory.


It’s also worth noting that “mature” is a process.

I mean, Scala.js was a mess in the early days: it functioned, but performed so poorly that you couldn’t imagine doing anything serious with it. It took years of hard work for the team to optimize it to be the crisp competitor to native JS that it is now. And it wasn’t practical to use until Haoyi threw his weight behind it, porting his existing tools to it and building new ones to make it usable. I came in as a user just as that was starting to gel, and bet that things would continue to get better – fortunately, that proved absolutely correct.

It’s still relatively early days for Scala Native at this point. I honestly don’t know whether it will turn out to be a serious player in the down-at-the-metal world. But I do know that Scala.js looked pretty implausible in the early days, and has IMO been a real success story, so I’m content to root for the project and hope that it shows itself to be worthwhile.


I agree with all the other comments made by @LPTK, @Sciss, and @jducoeur. When you see how incredibly well Scala.js has evolved, I don’t see why this success should not work out similarly for Scala Native (I think it already has to some extent).

You ask about a niche for Scala Native. For me there is no need for any niche. The major thing for me is that I can write low-level code and still use Scala. I want to be able to manually manage memory using pointers for some parts of my application. I want to be able to reuse my models in Scala Native when necessary, just as I do in Scala.js. Using sun.misc.Unsafe is a poor hack in comparison; I would guess that is why nobody uses it, except for special-purpose software.

I would much rather use Scala Native. Working through the first few chapters of the amazing Scala Native book, I get exactly the same feeling of control that I got when I first used Scala.js, compared to using plain JavaScript. I learned JavaScript, jQuery and then Angular before I learned Scala, and I have done a bit of coding in C. The pointer stuff is much easier to understand and use in Scala Native.

You don’t need to wrap entire C libraries to use Scala Native, only the parts you need, just as we did in the beginning with Scala.js. It is very similar.
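
To illustrate how small such a partial binding can be, here is a minimal sketch (assuming Scala Native 0.4.x and the C standard library; this is my own illustrative example, not code from the thread):

```scala
import scala.scalanative.unsafe._

// Bind only the single libc function we need, not the whole header.
@extern
object libc {
  // size_t strlen(const char *s);
  def strlen(s: CString): CSize = extern
}

object Demo {
  def main(args: Array[String]): Unit = Zone { implicit z =>
    // toCString allocates in the given Zone; the memory is freed
    // automatically when the Zone closes.
    val s: CString = toCString("Scala Native")
    println(libc.strlen(s)) // 12
  }
}
```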

Time will show whether people embrace this initiative. I am pretty sure they will :relaxed:


Good observation :slight_smile: For one, there’s always the possibility that I’m just wrong, so I’m asking for substantiated counterarguments. I googled around a bit and found only (and the scaladex page). As usual, you will find more libraries than end-user applications of any tech, so that’s why I asked (and I hope there’s some part of Scala Native that works, even if it’s not the one that I tried).

Other than that, “maturity” is not clearly defined, and that invites people to overestimate the generality of the statement.

Good point. I only recently got started with Rust myself, because of experimenting with real-time audio stuff. For comparison, afaics there are two major factors that affect latency and predictability on the JVM:

  • GC pauses
  • JIT non-determinism and resulting performance

Do you have any information on whether SN’s GC is optimized for low pause times? The general advice for low-latency applications on GC platforms is to be allocation-free, so that GC runs are avoided altogether. The same must be said for native applications, since native heap allocation is often even more expensive than allocation in a GC-managed heap (although, admittedly, it doesn’t trigger excessive pauses or need twice as much memory).
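
To make the allocation-free style concrete, here is a hedged sketch in plain Scala (platform-independent, my own toy example): an audio-style hot loop that writes into a preallocated buffer, so the steady state allocates nothing and never triggers the GC.

```scala
// Hypothetical audio-style hot path: mix two signals into a
// caller-provided output buffer. All arrays are created once,
// outside the loop, so the per-block work is allocation-free.
object Mixer {
  def mixInto(a: Array[Float], b: Array[Float], out: Array[Float]): Unit = {
    var i = 0
    while (i < out.length) {
      out(i) = a(i) + b(i)
      i += 1
    }
  }
}

object Demo {
  def main(args: Array[String]): Unit = {
    val a   = Array(1f, 2f, 3f)
    val b   = Array(0.5f, 0.5f, 0.5f)
    val out = new Array[Float](3) // allocated once, reused every block
    Mixer.mixInto(a, b, out)
    println(out.mkString(",")) // 1.5,2.5,3.5
  }
}
```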

JIT non-determinism means that you can get different performance profiles on different runs, which makes work on performance optimizations harder on the JVM. A general problem with chasing ultimate performance is that you need to work close to the metal for performance-critical code, i.e. you need to understand the transformations the compiler does (or write machine code directly). Scala on the JVM probably has more layers between the high-level language and the executed machine code than Scala Native, but it’s not a given that you can predict performance from Scala code on SN either, when it comes to the last bits of performance.

So, I agree, it would be nice to have low-latency and predictable performance but we should treat that as an explicit feature and without explicit work on it, it’s not a given that Scala Native provides it.

Thanks for sharing. Sounds like an interesting project. Have you tried running on the JVM or with native-image?

Regarding other comments, there seem to be two kinds of arguments:

The first one is “Let’s wish that Scala Native will be this and that, and it will be (at some point).” - Wishing is completely fine; it’s the basis of many political decisions. But hey, we are engineers, so let’s also get into the technical argument, try to get a nuanced view of the trade-offs and costs of features, and let that also drive the decision making :wink:

The other one (which might be implicitly meant by what looks like an argument of the first kind) is “Scala Native compiles to bare metal; the consequence is that it must (or will?) have [feature X].” - Let’s try to be explicit here. Many of the things people have wished for (high performance, low latency, low memory footprint) are actually features that need to be built and validated. So far, not even the Scala Native documentation actually promises those features.

At the moment we’re actually relying on LMDB (Lightning Memory-Mapped Database), a low-level C library, to manage the data efficiently. It was pretty easy to define the bindings and use it seamlessly from our code, thanks to Scala Native.
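
For readers curious what such bindings might look like, here is a hypothetical sketch (not the poster’s actual code; the C signatures are taken from the public LMDB API in lmdb.h, and the type mappings are my own approximation):

```scala
import scala.scalanative.unsafe._

// Hypothetical partial binding for LMDB under Scala Native 0.4.x.
// Only the handful of entry points you actually call need declaring.
@extern
@link("lmdb")
object lmdb {
  type MDB_env = Ptr[Byte] // opaque handle

  // int mdb_env_create(MDB_env **env);
  def mdb_env_create(env: Ptr[MDB_env]): CInt = extern

  // int mdb_env_open(MDB_env *env, const char *path,
  //                  unsigned int flags, mdb_mode_t mode);
  // mdb_mode_t is approximated here as CInt.
  def mdb_env_open(env: MDB_env, path: CString,
                   flags: CUnsignedInt, mode: CInt): CInt = extern
}
```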


“What Platform or Ecosystem?” is a very good question to ask. I am also skeptical that bootstrapping Scala Native into its own platform will be feasible.

One (to me) interesting possibility is to target the Python ecosystem, by which I mean Python libraries plus the native libraries supported by it. There’s a lot of interesting stuff in that area. Swift is making good use of it, and the way this is done is all publicly accessible as far as I can see.


I am surprised that nobody has mentioned native iOS apps.

You can use Scala.js for that with one of the several frameworks that allow you to write iOS apps in JavaScript (like Capacitor), but there would be some significant benefit (startup times, no JIT compilation, hopefully performance) to being able to run natively compiled code instead. And here you certainly have a very large ecosystem.


I have been working a bit with Python in scientific computing. I really miss Scala’s type system, and Python’s syntax often results in deep nesting of loops and if/else. I think if Scala could wrap those same native libraries, we could make really nice, fluent APIs and DSLs.


Thanks for running this small experiment Johannes, I would be interested in seeing if there are important differences in memory consumption as well. Would you be able to provide such numbers?

(Peak) memory consumption is not really a scalar value with a GC, because there’s usually a trade-off between GC (pause) times and peak memory usage. The other problem is that, unlike for the JVM, I don’t really know how to reliably get GC stats (like retained size) for native-image and scala-native. Using the very blunt tool of /usr/bin/time -v and reporting the “Maximum resident set size”, I see approximately this:

  • java -jar -Xmx250m … : 3s wall clock run time, max RSS: 229MB
  • native-image with -Xmx250m: 3s wall clock run time, max RSS: 89MB (though there’s something weird going on, as max RSS goes up when I decrease -Xmx)
  • env SCALANATIVE_MAX_SIZE=250m ./smaps-reader-out-release-full-lto-thin: 22s wall clock run time, max RSS: 3.2GB (as reported before, with different configurations I could get it to eat so much memory that it killed my desktop environment because of triggering Linux OOM handling)

This again may or may not just point to regexp being currently broken in scala-native. More rigorous benchmarking would be necessary to give more relevant results.


As a bit of background: for the given timings, the smaps reader parses all /proc/<pid>/smaps files, which during testing amounted to about 40MB. A more reasonable approach with more recent kernels is to parse smaps_rollup instead, which already aggregates all entries into one per process. With that change the task is much smaller, and I get these numbers:

  • java -Xmx25m -jar: 0.8s wall clock (1.66s CPU) time, 65 MB peak RSS
  • native-image with -Xmx25m: 0.32s wall clock (0.3s CPU) time, 13MB peak RSS
  • Scala Native 0.4.0-M2, release-full, thin lto: 0.5s wall clock (0.49s CPU) time, 25MB peak RSS
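
For context, the kind of parsing involved can be sketched in a few lines of plain Scala (this is my own illustrative example, not the actual benchmark code; each smaps_rollup line looks like “Rss: 1234 kB”):

```scala
// Hypothetical sketch: extract one field (in kB) from
// smaps_rollup-style content.
object SmapsRollup {
  def fieldKb(content: String, field: String): Option[Long] =
    content
      .split("\n")
      .find(_.startsWith(field + ":"))
      .map(_.split("\\s+")(1).toLong)
}

object Demo {
  def main(args: Array[String]): Unit = {
    val sample =
      """Rss:                5732 kB
        |Pss:                1342 kB
        |Shared_Clean:       4412 kB""".stripMargin
    println(SmapsRollup.fieldKb(sample, "Rss")) // Some(5732)
  }
}
```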

Small update: JUnit support was merged into Scala Native master.

This will be important for anyone wanting to port a library, which already has JUnit-based tests, to Scala Native.

It will also be a productivity boost for contributing to Scala Native itself, since we can now e.g. port features from Scala.js using the exact same tests, without adaptation.

The implementation was ported verbatim from Scala.js, with some adaptations for Scala Native, so the same features and limitations apply.


I think Martin’s idea of targeting the Python ecosystem, if possible, would be an amazing game-changer. That would be why you use Scala-Native.

I have yet to hear any other reason that would be very compelling. Huge complex programs that really benefit from Scala’s abstraction capabilities tend to tolerate JVM startup latencies pretty well. If it’s simple enough to run in 0.2s, you can almost always get it done just fine in Go or somesuch.

“I already know Scala and don’t want to learn another language” also isn’t all that compelling unless Scala can actually solve problems in that domain better than the other language, or cross-compiling Scala libraries is really easy (which isn’t entirely true so far).

“I want it as fast as possible” almost always will end in “so use Rust or C++”, because they routinely beat all sorts of other languages that are native but use GC. Furthermore, Rust gives you tools to get a lot of speed safely. It’s very hard to compete with Rust here.

But easy access to Python libraries would change everything. Python’s two secret weapons are: starting out, the syntax is really easy; and it has amazing libraries for everything. I don’t know if it’s possible–and I don’t know how Swift has been doing–but I would really like to live in a world where it is possible and has already happened.


This project is getting a touch stale, but I believe it is perhaps a way to bridge the gap to Python. According to this, it looks like the plans were to support Scala Native as well (in master?).

I am bullish about Scala Native, especially if we can get from 0.4.0-M2 to M3 and get Scala 2.12/2.13 support going. Denys did a fabulous job of getting close to JVM speeds in his tests using Commix, the concurrent Immix GC. Commix needs an M3 release. Getting the Scala Center on board is big, but we also need community support.

Having JUnit support (thanks @ergys) means that Scala Native can test Java libs against both the JVM and Native. Also, all downstream Scala libraries can use the same tests on all three platforms (JVM, Scala.js and Native), which is great for cross-compiling.

Scala Native’s use of clang and LLVM, like Swift, Objective-C and a host of other languages, potentially allows it to support processors and platforms that even the JVM doesn’t support. One big benefit is that java.lang.Object doesn’t have the memory overhead it has on the JVM.

Yes, Scala Native needs some work, but I think it has a bright future. It also needs people to try it out and get involved. It took Scala a long time, lots of people and lots of work to get where it is today. Scala.js also took quite a while to get really good. The power of Scala Native is the Scala language we love, where we have complete control over the future.


Hi All,

I just want to report that I got Scala Native working on a Raspberry Pi running the 64-bit Ubuntu OS. This is the released 0.4.0-M2 version. Since this is an ARM processor and the OS supports Java, I was able to compile directly on the Raspberry Pi. Certainly this processor has lots of horsepower and memory, but compiling based on the platform’s LLVM target triple just works out of the box. I didn’t do any extensive study, just “hello world”, so some parts of the platform could have ARM-specific problems - not super likely, but I just want to mention it.
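
For anyone wanting to reproduce this, the build setup is the same as on x86; a minimal sketch, assuming sbt and Scala Native 0.4.0-M2 (which supports Scala 2.11):

```scala
// project/plugins.sbt
addSbtPlugin("org.scala-native" % "sbt-scala-native" % "0.4.0-M2")

// build.sbt
enablePlugins(ScalaNativePlugin)
scalaVersion := "2.11.12"
```

Then `sbt run` compiles and links against whatever LLVM target triple the local clang reports.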

More details in the tweet and responses.



Wow that’s amazing. I will try and see if something similar is possible for an even smaller instruction set (like 30-40 instructions in AVR microcontrollers)


After the first Scala Native release under the Scala Center’s care, it’s time to check on the progress of our goals and step into a new cycle of planning.

Short and mid-term goals

Our major goal in this category was supporting Scala 2.12 and 2.13. It was fully accomplished with complete support for their latest releases - 2.12.13 and 2.13.4.

The effort invested in building solid foundations for reflection support enabled us to provide a more robust testing interface. Scala Native and Scala.js now share a standardized API for it, which we believe will help developers cross-publish for both platforms.

Scala Native now provides a native runtime for the JUnit testing framework. Users can use it to integrate with their existing JVM test bases, as well as to cross-test across the 3 execution platforms: JVM, Scala Native, and Scala.js. This also improved our quality assurance: we can now reuse the rich test base defined in Scala.js, and in the future we will be able to run tests against the JVM implementations to provide the highest possible compliance in our Java standard library implementation.

Long-term goals

The Scala Center committed to improving and enhancing native interoperability, and some steps have been taken towards fulfilling this goal. We have improved interop for C function pointers: they can now be created via implicit conversion from an ordinary scala.Function, as well as created from / cast to an arbitrary pointer type. Also, the type safety of some native methods was improved. We are currently working on other interop features that might be introduced in future releases, e.g. easier definition of fixed-size arrays and structural types, or linking Scala code as native libraries.
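
As a rough illustration of the function-pointer interop described above (a sketch based on my reading of the Scala Native 0.4 `unsafe` API; names may differ slightly between releases):

```scala
import scala.scalanative.unsafe._

object Callbacks {
  // An ordinary Scala function lifted to a C function pointer.
  val double: CFuncPtr1[CInt, CInt] =
    CFuncPtr1.fromScalaFunction((x: CInt) => x * 2)

  // A C function pointer can also be converted to and from an
  // arbitrary raw pointer, e.g. to hand it to a C API.
  val asRaw: Ptr[Byte] = CFuncPtr.toPtr(double)
  val back: CFuncPtr1[CInt, CInt] =
    CFuncPtr.fromPtr[CFuncPtr1[CInt, CInt]](asRaw)
}
```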

With the recent support for Scala 2.12 and 2.13, we’re keeping up with the latest Scala 2 releases. We also realize the need for future support of Scala 3, and we are going to provide it as soon as other, more urgent issues are addressed and polished. Among these we can mention support for multithreading, compliance with various processor architectures, and providing implementations of missing methods in the Java standard library.

Other activity

Besides the main goals, the Scala Center has provided multiple bug fixes addressing various aspects of Scala Native; their list can be found in the latest changelog. We have also improved the Scala Native configuration interface in the sbt plugin to be more compact and easier to use and maintain.

Finally, it is important to acknowledge and celebrate contributions from our community, including multiple bug fixes as well as new features like cross-compilation, support for Java default methods in Scala 2.11, bindings for native libraries, and pending work on 32-bit architecture support.

Check out our release 0.4.0 blog post describing the most user-impacting changes introduced and stay tuned for the next updates.

– The Scala Center Team


This is wonderful work! A big “Thank You!” to everyone who is pushing Scala Native forward.


Improve and enhance native interoperability, so that Scala Native can support as many libraries as possible.

The #1 obstacle to pursuing this goal in earnest on, say, Apple OSes (iOS/macOS frameworks) and Microsoft OSes (UWP) is the official lack of any multithreading support whatsoever in Scala Native. Single-threaded usage of Apple frameworks restricts them to trivial use, not suitable for App Store development at all (even if I were to contribute a Scala Native language projection mechanism, to borrow Microsoft Xlang’s terminology).

How big an obstacle is multithreading support in Scala Native foreseen to be, say, via Scala’s Thread? What I am effectively asking is:

  1. Does the Scala-Native community know the remaining work toward any degree of official multithreaded support in Scala Native? or
  2. Is it a case of not yet even knowing what the unknowns are?

When reading this topic, it strikes me that many of the arguments for why Scala Native “might be not so useful” seem to perceive native programming as a hostile, cumbersome, somewhat painful and outdated environment.

However, the world of native programming has also evolved considerably over the last two decades. There are mature libraries and frameworks, and people are using likewise modern concepts, techniques and abstractions. Admittedly, the learning curve can be steep at times.

And while it might indeed not be so compelling to port an application designed for the JVM ecosystem into a native environment (instead of just using a clever wrapper or runtime), I do see a promising perspective for a certain kind of application actually rooted within a modern native programming style. Because what is often quite expensive and cumbersome in the native frameworks is the implementation of advanced control logic and the handling of metadata.

Consider e.g. some 3D graphics, model generation or simulation stuff. You’d certainly not need to implement the I/O for raw data in Scala. This would likely be some kind of engine, receiving commands via a pipe or a network connection, and you wouldn’t need Scala for any of this either. You’d certainly use completely deterministic memory management for the “heavy lifting” anyway, cleanly built with the techniques of the native world. However, it would be very compelling to embed an advanced language like Scala to control how the processing proceeds, and to build rule-based systems, constraint solving, an advanced description language, or selection of heuristics based on metadata.

Based on such a vision of a hybrid system, the most important missing part right now would be a good ability to interface with concurrent processing done within the native part. The computation part would e.g. use a thread/worker pool and lock-free data structures from the native world - futures, coroutines, async I/O, you name it. However, a “command and control” part written in Scala Native would then face the challenge of receiving callbacks from, and dispatching instructions into, multiple threads or workers.

With the current state of affairs, you’d probably do all of the concurrency stuff entirely within the native framework and run the “command and control” part (Scala) within a single thread and attached through a dispatcher queue, similar to the way UI toolkits typically deal with user interactions.
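
That single-threaded attachment can be sketched with a plain dispatcher queue (ordinary Scala on the JVM, nothing Scala Native specific; a toy example of the pattern, not production code): worker threads post callbacks, and the “command and control” loop drains them on its one thread.

```scala
import java.util.concurrent.LinkedBlockingQueue

// Hypothetical sketch: callbacks from (native) workers are enqueued
// and executed on a single control thread, UI-toolkit style.
final class Dispatcher {
  private val queue = new LinkedBlockingQueue[Runnable]()

  // May be called from any thread.
  def post(task: Runnable): Unit = queue.put(task)

  // Runs on the single control thread; drains until `stop` is seen.
  def runLoop(stop: Runnable): Unit = {
    var task = queue.take()
    while (task ne stop) {
      task.run()
      task = queue.take()
    }
  }
}

object Demo {
  def main(args: Array[String]): Unit = {
    val d = new Dispatcher
    val stop: Runnable = () => ()
    var sum = 0
    // Simulated "native" workers posting results back.
    d.post(() => sum += 1)
    d.post(() => sum += 2)
    d.post(stop)
    d.runLoop(stop)
    println(sum) // 3
  }
}
```

Because every callback runs on the one control thread, the Scala side needs no locking of its own; all cross-thread coordination lives in the queue.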