Alternative scalajs/scala-native distribution mechanisms

It is my concern, greatly informed by discussions at Scala World with other FLOSS contributors, that many of the “taxes” that maintainers have to pay are becoming so burdensome that many are choosing not to publish their work for fear of the extra work (mostly admin) that publishing demands.

In particular, Scala.js and scala-native both use custom suffixes on Maven Central for their primary distribution mechanism and require library authors (FLOSS volunteers) to bear the burden of publishing, making modifications to their build that are very complex (the typical Scala developer is terrified to look in build.sbt, never mind having to deal with multiple projects). I propose that the burden of maintaining and publishing for scalajs and scala-native is too high to be sustainable.

Although it is all too easy to say that libraries do not need to publish for scalajs and scala-native, the truth of the matter is that library authors are under huge community pressure to support all valid targets. Right now, this typically includes 2.10, 2.11, 2.12, 2.13-MX with cross builds for scalajs and scala-native, handling missing elements in this matrix.

I would like to discuss alternative distribution mechanisms for downstream scalajs/scala-native users to be able to depend on community libraries, that do not need to build and distribute for scalajs or scala-native. Effectively moving the maintenance burden (compilation, testing, additional build time, dep availability, etc) onto the users of the library rather than the maintainers of the library. @olafurpg suggested that source distributions may be a viable option, with local changes. I’ve already floated this by @sjrd and @ochrons and I’m interested to flesh out the options. Another option is for the scalajs / scala-native user community to maintain mirrors of core libraries, and issue their own builds with platform-specific patches.

making modifications to their build that are very complex

I have two responses to this:

  • If your library trivially cross-compiles, you can use crossproject.pure, which is basically a 1-line change to your build file. Not exactly the hardest thing in the world

  • If your project doesn’t trivially cross-compile, re-organizing it to split out platform-specific bits into separate folders is a lot more involved than the changes to the build.sbt! And that’s unavoidable complexity, since often you just need different code to run on each platform, so no amount of packaging-improvements will make it go away
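For reference, the conventional crossProject source layout described above splits code into per-platform folders roughly like this (foo is a placeholder project name):

```
foo/
  shared/src/main/scala/   cross-platform sources, compiled for both targets
  jvm/src/main/scala/      JVM-only sources (e.g. code using classloaders or java.io)
  js/src/main/scala/       JS-only sources (e.g. code using the DOM)
```

The re-organization work is moving code out of a single src/main/scala into shared/ plus the platform folders; that refactoring, not the build file, is usually the bulk of the effort.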

Honestly, from my experience interacting with people in the community, problems with the build.sbt or hitting fooJs/publish fooJvm/publish is a non-issue. The trouble of doing so is drowned out by the unavoidable cost of separating out platform-specific code, and refactoring to remove dependencies on JVM APIs that do not exist in JS (e.g. reflection, classloaders, filesystem), or re-implementing missing parts of the Java std lib that your project uses.

Although it is all too easy to say that libraries do not need to publish for scalajs and scala-native, the truth of the matter is that library authors are under huge community pressure to support all valid targets.

Another option is for the scalajs / scala-native user community to maintain mirrors of core libraries, and issue their own builds with platform-specific patches.

This already can happen, and has happened: e.g. David Barri maintaining forks of Scalaz, and others forking Shapeless, Parboiled2, Scalacheck, and other projects long before the changes were upstreamed into master. And Akka.js remains a fork.

It seems silly to me to say people are forced to integrate Scala.js. If you don’t want to integrate it, don’t, and if someone wants to enough, they can publish and maintain a fork. This has happened many times.

If someone wants to upstream it and publish a crossproject, that’s entirely their choice. People do it, because it’s not that hard, and in many ways easier than managing multiple forks of the same repo.

Now, I’m sure there are ways of making things easier, but I don’t think the above post accurately reflects the current difficulties with cross-publishing libraries. e.g. nobody cares that you use custom suffixes on maven central, so changing things to avoid that won’t actually make life better for anybody

I wouldn’t be surprised if, after the problem is properly discussed and defined, we discover that a 10-page “migrate your library to Scala.js!” tutorial and a few tweaks to SBT would yield more benefit than revamping the entire Scala ecosystem and toolchain to use source dependencies

3 Likes

I’ll try to start this discussion with a few general remarks. And even before that, I’d like to recap how it currently works, so that everyone reading this thread is on the same page. In order to simplify my prose, I will typically refer to Scala/JVM and Scala.js (because I’m used to that), but as far as I know, everything I say should apply equally to Scala Native.

The current scheme

Compilation pipeline

Independently of the build tool or even the distribution strategy, the compilation pipeline of Scala follows these steps. For the JVM:

  1. scalac compiles .scala files into .class files
  2. the JVM reads .class files, links them on the fly and executes them

For Scala.js:

  1. scalac + scalajs-compiler compiles .scala files into .sjsir files
  2. the Scala.js linker reads .sjsir files, links them and produces a .js file
  3. a JS engine reads the .js file and executes it

It is important to note that Scala.js does not directly interact with .class files. It does not convert .class files into .sjsir; it compiles .scala source files directly. However, for a lot of reasons, the Scala.js ecosystem still needs its .class files: for separate compilation, incremental compilation, IDEs, macro expansion, and zillions of other tools. The .class files that come out of a Scala.js compilation are not the same as those that would come out of a normal compilation, because Scala.js internally needs to manipulate the scalac Trees to get its job done. This means that those .class files are not binary compatible with the ones coming out of a regular Scala/JVM compilation (just like those coming from different major versions of Scala).

Distribution: artifact suffixes

The typical distribution format for Scala is as jars on Maven Central (or Bintray or whatever). For Scala/JVM, these jars contain .class files. For Scala.js, they contain .class files + .sjsir files.

Because of binary incompatibilities between the ecosystems of different major versions of Scala/JVM, the de facto convention is that a project foo from organization org.foobar is available under an artifact foo_2.12 for the 2.12 binary ecosystem, foo_2.11 for the 2.11 ecosystem, etc.

Following suit, and because the binary ecosystem of Scala.js is another dimension, the convention that we (@gzm0 and myself) chose back in Scala.js 0.5 was to expand on that scheme, publishing the Scala.js variant of foo for 2.12 as foo_sjs0.5_2.12. Since Scala.js itself is not binary compatible between major versions, the same library published now with Scala.js 0.6.x will have the artifact name foo_sjs0.6_2.12.
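As an illustration of how these suffixes appear in practice: in sbt, the Scala.js plugin provides the %%% operator, which appends the platform suffix in addition to the Scala binary version (the organization and artifact names here are placeholders):

```scala
// On a Scala.js 0.6.x project with scalaVersion 2.12.x,
// this resolves to the artifact foo_sjs0.6_2.12:
libraryDependencies += "org.foobar" %%% "foo" % "1.0.0"

// The plain %% operator would resolve to the JVM artifact foo_2.12:
libraryDependencies += "org.foobar" %% "foo" % "1.0.0"
```

On a JVM project, %%% simply degrades to %%, which is what lets shared build settings use %%% everywhere.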

Back when 0.6.0 was released, the Scala.js ecosystem was still tiny, and no one ever even tried to cross-publish between 0.5 and 0.6. Now, we are facing a unique period in Scala.js’ history, with Scala.js 1.x coming up (the first milestone was published some time ago). The large ecosystem of Scala.js that we have now means that library maintainers are actually cross-publishing for the 0.6 and 1 binary ecosystems. The present months are therefore additionally hard on library maintainers. However, Scala.js 1.x is supposed to last “forever”, always being backwards binary compatible (like Java). Once 1.0.0 is out, therefore, I expect intra-Scala.js cross-publishing to fade away and “never” come back.

I put “forever” in quotes because one never knows. But to give an idea of the time scales we are talking about, note that the lifetime of Scala.js 0.6.x is approaching 3 years already. I expect the lifetime of 1.x to be significantly longer than that.

Distribution: artifact metadata (and why TASTY will not help)

I’d like to point out that a Maven artifact is not just about the .class files in the .jars. Artifacts also contain metadata, the most important one being transitive dependencies.

Even for a so-called pure cross-compiling project (whose set of source files is exactly the same on all platforms), the transitive dependencies of the project are not the same per platform. Just like foo_2.12 transitively depends on, say, fizzbuzz_2.12 while foo_2.11 depends on fizzbuzz_2.11, the Scala.js version foo_sjs0.6_2.12 transitively depends on fizzbuzz_sjs0.6_2.12. And fizzbuzz could have platform-dependent code, meaning that its platform artifacts differ in more than just metadata. At the very least, some transitive dependency is eventually going to depend on scalajs-library_2.12 in the Scala.js ecosystem but not in the JVM ecosystem.

Moreover, even if fizzbuzz exposes a source compatible API across its platforms, it might do so using platform-dependent type aliases, implicits, and a bunch of other things that would make even the typechecked tree for foo different in its JVM and JS variants.

Therefore, TASTY is never going to solve this problem.

sbt integration, crossProject, and ++

Not considering cross-compilation, the way Scala.js integrates with sbt is through an AutoPlugin: ScalaJSPlugin. Applied on a project, that plugin completely switches the target of the entire project to be Scala.js instead of JVM. This has, among others, the following consequences:

  • Add the scalajs-library library to the dependencies
  • Add the scalajs-compiler compiler plugin for scalac
  • Change the crossVersion setting so that artifacts and their dependencies are suffixed with _sjs0.6
  • Add all the Scala.js-specific tasks, such as fastOptJS
  • Change run, test and friends to run with a JS engine on the output of the Scala.js linker

Clearly, the resulting project is not usable as a JVM project anymore. Therefore, as far as sbt is concerned, there is no such thing as a cross-compiling project. If we want to cross-compile a “project”, we actually need two sbt projects that share their source directories and most of their settings: one with enablePlugins(ScalaJSPlugin) and one without.

This is what crossProject gives you: it is a builder with the same surface syntax as a Project, but which at the end of the day gives you two actual Projects that sbt can use.
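A minimal sketch of what this looks like in a build.sbt, assuming the Scala.js 0.6.x crossProject (the names are illustrative):

```scala
// One "logical" cross-compiling project...
lazy val foo = crossProject
  .crossType(CrossType.Pure)     // sources are shared, no platform-specific folders
  .settings(
    name := "foo",
    scalaVersion := "2.12.2"
  )
  .jvmSettings(/* settings applied only to the JVM side */)
  .jsSettings(/* settings applied only to the JS side */)

// ...but sbt itself only ever sees these two concrete Projects:
lazy val fooJVM = foo.jvm      // a plain JVM project
lazy val fooJS  = foo.js       // has enablePlugins(ScalaJSPlugin)
```

The two trailing lazy vals are what expose the generated Projects to sbt, which is why they cannot be omitted.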

So why is this so complicated, whereas ++ is so easy to switch between versions of Scala, and does not require additional Projects for every major version of Scala? Basically because the only effect of ++2.12.2 is to set scalaVersion := "2.12.2" on every project (I may be simplifying, but that’s the gist of it). Then every other setting that depends on the Scala version is derived from that setting. However, the difference between a JVM project and a JS project is whether or not we apply enablePlugins(ScalaJSPlugin) to it. No amount of set whatever is going to get sbt to add or remove an AutoPlugin from a project.

This is why we cannot have a +++ of sorts that would dynamically switch the target of a project.

Identifying the issues

OK now that I have (hopefully) articulated the various dimensions of the issue and the current scheme, we can really talk. And as a first discussion item, I’d like to clearly understand what are (really) the issues that project maintainers face. Without understanding this, we might look for solutions to non-existent problems.

In the OP, @fommil mentions the difficulty of dealing with build.sbt and in particular the multiple projects created by crossProject. I take note of that point of view, but I have a feeling that there is more to it. After all, aren’t most sbt builds nowadays multi-project builds anyway? And are the changes to the build really that complex? I am not sure.

Let’s take as example the PR I made not long ago to scala-collection-strawman to add cross-building for Scala.js: https://github.com/scala/collection-strawman/pull/220. It was pretty much just turning a project into a crossProject. The reviewer also asked that a couple short-cut commands be added in the top-level project to make it easier for the typical contributor to run the tests on the JVM.

Of course it is some work. And being comfortable making these changes requires being a bit comfortable with writing a build.sbt to begin with. But is it so hard that it pushes library maintainers away from publishing their stuff?

The answer might just be yes, and there is no other issue. If that is the case, then let’s tackle this issue by all means necessary.

But before we do that, I would like to be sure that there are no other reasons that push people away from cross-compiling for Scala.js. It is no use to solve one aspect of the problem if we do not understand the big picture.

Here are a few things that I have heard or overheard in the past, that could be other reasons:

  • CI build times: Scala.js is slower than Scala/JVM (because, you know, JS), enough so that the CI build times can significantly increase for a library that cross-compiles. Even if locally you only ever run your tests on the JVM, your CI builds will take a hit.
  • Need to install Node.js on machines: the Scala.js tests won’t run without Node.js (or some other JS engine), so your CI infrastructure must have it installed. Locally, the developer who wants to run the JS tests also needs to install it.
  • Familiarity with Scala.js: most library maintainers don’t use Scala.js on a regular basis, if at all. Some could feel like they are taking responsibility for something that they don’t understand.

However, I have never seen the above clearly articulated first-hand by a library maintainer, so I cannot evaluate to what extent they are true, or whether they are a problem at all.

OK I’m done for this first post. To conclude: let me reiterate my Big Question:

Are there other difficulties that library maintainers face when supporting Scala.js, besides the complexity of the build.sbt?

7 Likes

A 10 page tutorial would obviously be of value to some people, especially if they want to publish these things for their own purposes, not because of community pressure. But the fact that library authors have to do anything at all is a sign that the burden is in the wrong place.

Let’s not forget that a lot of contributors may just be diving into a project, yet they need to face this complexity that they are probably not used to in their work (application) project… so the maintainer needs to explain it over and over. It would be best if none of this was there and simply didn’t need to be explained.

OK, forget source distributions… How about having a js/native community-maintained set of mirrors? Rather than this being an upstream requirement on contributors.

BTW, when writing build tooling, the problems become much more pronounced.

(incidentally, the whole implementation in terms of multiple projects really irks me. You can’t talk me out of that… it just does. I feel it makes my builds unclean. I feel the same way about any plugin that forces me to have a new Project that doesn’t map onto my mental model of a separate artifact. That said, I’ll avoid referring to this particular point because this thread is about moving around the maintenance burden not refining the current implementation)

A post was split to a new topic: Finding way to engage with more FLOSS Scala maintainers

Ah, so basically you’re not interested in reducing the maintenance burden overall? For the sake of argument, let’s say maintaining Scala.js cross-compilation was as easy as maintaining a different Scala major version (i.e., ++-like easy), would you still be fighting to get rid of it?

1 Like

We’ve had these for, literally, years by this point. If you run a library that people want to use on Scala.js, just tell people you don’t have time to support it and someone else will fork and publish it.

But if your concern is “people can’t say no when random internet strangers ask them to do more work”, this clearly won’t work at all

Do you have any scala.js/scala-native projects you maintain that you can point me at the build to show how troublesome it is? That would definitely help people understand your concern

I think the overall point is that nowadays, you can’t just build a useful Scala library for Scala and publish one major version at a time.

Instead, you’ve got to deal with supporting multiple backends and multiple major versions of Scala, and in-all, that’s a burden that falls squarely on the maintainers of libraries.

I think his point is that he worries no one is paying attention to the burden falling on maintainers of popular open source libraries as a result of this.

I would imagine that as a result of this, one could argue that it makes it more difficult for people maintaining libraries to actually maintain them, thereby potentially having a negative effect on the overall community.

I don’t think it’s a point that you can just toss aside and discount.

1 Like

My main pain point is this issue: Support multiple scala-js/scala-native versions · Issue #47 · portable-scala/sbt-crossproject · GitHub. I am attempting to release an ultra-stable library as part of the new Scala Platform (scalajson), and I found out that I am unable to build against different versions of Scala.js (and also scala-native). A big part of the Scala.js community is still on 0.6.x, and I also want to publish for 1.0.0. Same deal with scala-native.

Turns out the only way to accomplish this is with environment-variable hacks, and at this point I want to explore using CBT (although apparently I can’t, because the plugin for using the Scala Platform will only support SBT).

In my opinion, the first step to solving this problem is to stop treating different platforms (such as Scala.js or scala-native) as sbt plugins. Instead they should be first-class backends for Scala, so it’s easy to have a simple alternative to crossScalaVersions for scala-native/scala-js (ideally as a setting in sbt).
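To make the request concrete, the desired API might look something like the following, where crossPlatforms is a purely hypothetical setting that does not exist in sbt today:

```scala
// Hypothetical: cross-build over platforms the same way
// crossScalaVersions cross-builds over Scala versions.
crossScalaVersions := Seq("2.11.11", "2.12.2")
crossPlatforms     := Seq(JVMPlatform, JSPlatform, NativePlatform)
```

Under such a scheme, a hypothetical +++JSPlatform command could switch targets the way ++2.12.2 switches Scala versions.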

Another problem is this issue with standard libraries, my personal pet hate, where the Scala ecosystem appears to be porting Java libs rather than creating idiomatic Scala versions that cross-compile to all platforms (even if they just wrap existing libraries on the JVM platform). Once everyone settles on a scala._ packaged library (or wherever the library happens to live), it’s incredibly easy to deal with migrations and to cross-compile code. A very real example of this is Scala’s Future: it was supported almost instantly in Scala.js, and thanks to this basically all libraries which used Future for asynchronous/concurrency problems had no issues porting to JavaScript.

Compare this to the chaos we have with date libraries: in the Scala world, we now have to deal with 4 date libraries: JodaTime, java.time for JDK8, scala-java-time (GitHub - cquiroz/scala-java-time: Implementation of the `java.time` API in scala. Especially useful for scala.js), and nscala-time (GitHub - nscala-time/nscala-time: A new Scala wrapper for Joda Time based on scala-time). In order to cross-build stuff at work, I had to use type aliases (i.e. a DateTime alias which points to java.time for JDK8 on the JVM and points to the scala-java-time DateTime on Scala.js, because for some reason it sits in a different package (i.e. org.threeten.bp)). Is it so hard to just release a date library that is compiled on all platforms and included by default, that is also part of Scala (and not just a Java port, which usually fails for non-trivial things in areas like io and dates, because the JVM makes a lot of assumptions)? I mean, the other thing is that Scala is not Java; we don’t have to have a library frozen in stone for ages due to backwards-compatibility issues. Even if there are problems with these ports, they can be solved iteratively over time (we are also a strongly typed static language, and we have scalafix).
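The alias workaround described above might look roughly like this (a sketch; the package object name is made up, and the scala-java-time classes are assumed to live under org.threeten.bp as stated in the post):

```scala
// jvm/src/main/scala/compat/package.scala
package object compat {
  type DateTime = java.time.LocalDateTime         // JDK 8 java.time on the JVM
}

// js/src/main/scala/compat/package.scala
package object compat {
  type DateTime = org.threeten.bp.LocalDateTime   // scala-java-time port on JS
}
```

Shared code can then refer to compat.DateTime on both platforms, at the cost of only being able to use the API surface the two types have in common.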

In summary (tl;dr version)

  • Make SBT cross-publish to other platforms (scala-js/scala-native) just as easily as it cross-publishes for different Scala versions now. Scala should also stop assuming the JVM is the “default backend”. This probably means that zinc should stop relying on .class files (which are JVM-specific) and maybe work directly on trees
  • Make and provide common libraries, using the best libraries out there as inspiration, that are provided and included by default on all platforms. We saw how successful this was with Future for Scala.js; if Future wasn’t part of the Scala stdlib, the async story on JavaScript would have been a complete and utter nightmare

Also, I like the idea of source distributions, with maybe a global internet cache of binaries to speed things up. A huge amount of this complexity is usually due to having to support binary compatibility between differing versions of Java/Scala (and now Scala.js with 0.6.x vs 1.0.0.x). I envy languages like Go and Crystal which basically said “let’s forget about dynamically loadable libraries” (which require binary compatibility), which has made publishing OSS libraries so much easier. Also, source compatibility is much easier to maintain than binary compatibility. Even though we have tools to help with binary compatibility (i.e. sbt-mima), this is only for the JVM (it’s also not installed/enabled by default). Also, no such tools exist for scala-js/scala-native (should they start caring about binary compatibility).

The best solution for this is either source distributions or making it as easy as possible to publish against everything with SBT. If we choose to do the latter, then:

  1. SBT in general needs to be faster, like, a lot faster. When you start having complex builds that cross-publish a lot of things, your build tool needs to be as fast as possible. Somewhat complex builds easily take minutes just to load in SBT now
  2. SBT also needs to be a lot easier to use, and it needs to be a lot easier to cross-publish things (i.e. sbt-doge helped in certain cases, but it came a bit late and it’s still confusing to use)
  3. Stuff like MiMa needs to be enabled by default
  4. Scala/SBT/Zinc need to stop treating the JVM as the default backend, which also means that things like scala-native and scala-js need to stop being “sbt plugins”, to enable things like crossScalaVersions for different backends

On the surface, I think adding in really good support for source distributions is probably easier.

I do, see Explore CBT · Issue #34 · mdedetrich/scalajson · GitHub and Support multiple scala-js/scala-native versions · Issue #47 · portable-scala/sbt-crossproject · GitHub .

In general it’s just becoming a massive PITA, and builds are becoming really complex with in ThisBuild vs using common settings and stuff like sbt-doge. I really hope CBT can simplify things in this area, or maybe just make source distributions the default (the current dependsOn syntax for source inclusion is terrible and also, ironically, doesn’t work that well with crossProject).

3 Likes

Let me start by correcting a few misconceptions.

That is a common misconception. Source compatibility is virtually impossible to maintain; binary compatibility is therefore infinitely easier to maintain than source compatibility. I explained all of this here: A Guide on Binary Compatibility - need your input! - #2 by sjrd. Fortunately, source compatibility is less important than binary compatibility, because it does not suffer from the dependency diamond problem. That too I explain in the mentioned post.

Another misconception. sbt-mima is just as capable at checking binary compatibility for Scala.js libraries as for JVM libraries. We made sure of that when designing Scala.js.

Uh!!? Scala.js cares about binary compatibility very much! In fact, I think the 0.6.x series comes in second place for the longest streak of maintained backward binary compatibility in terms of elapsed time (2.5+ years so far, after sbt 0.13.x which is 4 years in), and actually holds the record for the longest streak in terms of number of versions (21 binary compatible releases from 0.6.0 to the currently released 0.6.20).


OK, now my actual open ended answers.

I don’t think so. Scala.js and Scala Native both emit reasonable .class files that tools such as zinc can use. See my first post in this thread. This was designed as such since Scala.js 0.1 (I’m not kidding) so that all those tools would play nicely with Scala.js out of the box.

I very much doubt that. Sources in Scala are extremely volatile, due to some of its language features, most notably implicits. As I said earlier, maintaining source compatibility in Scala is not possible. Therefore dependency graphs based on source dependencies will break in horrifying ways, and nobody will be able to fix them.

Between source deps and @fommil’s suggestion of Scala.js mirrors everywhere, at least the latter stands a chance, because it’s technically sound. The former isn’t.

This is an idea worth exploring, but it would require strong buy-in from sbt core. I won’t fight that fight, but if other people in the community are willing to, be my guest. Note however that there is a fundamental difference between switching the Scala major version versus switching the target, regardless of whether Scala.js is activated via a plugin or not. The former doesn’t change the set of defined settings and tasks at all. The latter, however, means that some settings and tasks are defined in some targets and not in others (obvious example: fastOptJS). This means that for example when you switch your project from Scala.js to Scala/JVM, if you have settings that read fastOptJS.value, your build breaks because the dependency graph cannot be built. This is fundamental.
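A concrete sketch of the kind of setting that breaks when the target switches (the bundle task here is hypothetical, invented for illustration):

```scala
// Hypothetical task: copy the fastOptJS output somewhere for deployment.
// (fastOptJS in Compile).value only resolves while ScalaJSPlugin is
// enabled; on a JVM project the fastOptJS key has no definition, so the
// setting dependency graph cannot be built and the build fails to load.
val bundle = taskKey[File]("copy the linked JS file into place")

bundle := {
  val linked = (fastOptJS in Compile).value.data
  val out    = target.value / "app.js"
  IO.copyFile(linked, out)
  out
}
```

No amount of set-style overrides at the sbt shell can paper over a missing task key, which is why the target cannot be switched dynamically.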

One of the “tweaks to SBT” I was thinking about was this boilerplate here:

lazy val effect = Project...
lazy val effectJVM = effect.jvm
lazy val effectJS  = effect.js
lazy val effectNative = effect.native

lazy val iteratee = Project...
lazy val iterateeJVM = iteratee.jvm
lazy val iterateeJS  = iteratee.js
lazy val iterateeNative = iteratee.native

Anyone know how easy it would be to let people do

lazy val (iterateeJVM, iterateeJS, iterateeNative) = Project...

?

Or even

lazy val (iteratee, iterateeJVM, iterateeJS, iterateeNative) = Project...

Where iteratee is a placeholder project that does nothing much other than aggregate the {JVM, JS, Native} projects, so you can easily e.g. run tests or compile against all of them.

Of all the things you have to do in SBT to make cross-building work, this is one of the parts which I find neither simple (e.g. converting project -> crossproject) nor unavoidable (e.g. configuring different dependencies for different platforms). It would be great if we could smooth it over

2 Likes

This is actually false; plenty of other languages deal with source distribution (the most recent mainstream example is Node.js). As you write, it does introduce the diamond dependency problem, but this is solvable (in tools like npm there is a command called shrinkwrap which generates a dependency map, see https://docs.npmjs.com/cli/shrinkwrap). Another example of such a language is https://crystal-lang.org/ (which is statically compiled and also uses source dependencies).

Granted, in Node this is slightly easier, because Node simply inlines all packages in node_modules, so it’s easily possible to have 2 packages inside a project that have different versions. This is, however, also possible in Scala (or at least on the JVM, since OSGi allows this).

Alternately, maintaining source compatibility doesn’t imply ignoring binary compatibility; it can also simply mean that you don’t have to generate binaries for your platform, you just need to make sure that the resulting binaries from your source aren’t incompatible. You can simply use a tool which (before creating a new tag) compiles the source against a matrix of targets and verifies that the produced .class files are binary compatible (with MiMa, for example) compared to the previous version. The point is that this can be done outside of SBT.

The point of making Scala work with source compatibility is that it shifts some maintenance burden from library authors to users, and if you look at languages like Node/Ruby/Python, this is helpful in creating a big ecosystem of libraries. Java managed to get away with binary distribution mainly because it’s one of the few static languages where building is very simple, and because binary compatibility is maintained very strictly.

In the end, the point is that there are other mainstream languages (both static and dynamic) that work with source distributions, and they handle it fine. It does put a bigger burden on library users, but it also makes life a lot easier for library maintainers (which is arguably what you want if we want more libraries in our ecosystem).

And scala-native?

It’s not just binary compatibility, it’s also forward compatibility. I have had to go into projects and maintain separate forks to upgrade their Scala.js versions. I mean, I am not complaining here; Scala.js has done a wonderful job when it comes to backwards compatibility. However, we now have this issue of 0.6.x vs 1.0.0.x, and these kinds of issues will continue to happen. Like I said, I have this issue right now where there isn’t a sane way to cross-compile between scala.js 0.6.x and 1.0.0.x, and it will be the same deal with scala-native.

Of course, but this is a fundamental issue with how SBT is designed. It’s based around the JVM (or assumes the JVM as the main backend) when this shouldn’t be the case. I mean, SBT works on directed acyclic graphs of settings, so this is definitely possible; it’s just that the settings have been designed in such a way that the JVM is assumed to be the main backend.

In those languages, it is actually possible to evolve a library in backward source compatible ways. As I said before, this is not the case for Scala: you can’t add a public method without breaking source compatibility. Your entire argument before that sentence is moot because of this fundamental difference between Scala and the languages you have mentioned.

Scala Native has the same design principle. However, since it is less mature, we still sometimes discover “design bugs” that cause NIR not to be binary compatible when it should. However, they are being fixed. Therefore, modulo bugs, sbt-mima also does the correct thing for Scala Native.

The problem you describe there is not forward compatibility. Between Scala.js 0.6.x and 1.x, we are actually breaking backward binary compatibility, which explains the difficulties. It has however nothing to do with forward compatibility.

Also, as I explained above, this problem will not continue to happen for Scala.js, since 1.x is supposed to be backward binary compatible forever.

As for Scala Native, it is less mature, so yes, these things will still happen for Scala Native. However, precisely because it is less mature, I would advise library authors against cross-compiling for different versions of Scala Native at the same time. This is mental. Just upgrade to the latest Scala Native version and only publish for that one. Forget about older versions. There isn’t a large ecosystem of things stuck on old versions of Scala Native that you still need to support.

Cross-compiling between Scala.js 0.6.x and 1.x is only necessary because Scala.js 0.6.x was already so mature and stable that there actually is a large ecosystem working on 0.6.x, including production applications. Scala Native does not have that problem yet.

I don’t think this is an accurate depiction of the problem. If sbt did not assume the JVM as default, you would have a ScalaJVMPlugin that would declare some JVM-specific settings. And then what? You still have exactly the same issue when switching back and forth between ScalaJVMPlugin and ScalaJSPlugin: some settings become undefined and your whole setting dependency graph can break. Anyway, there are exactly three sbt settings that do not apply to a Scala.js project: console (for which we warn), javaSource and javacOptions. Every single other sbt setting applies. So I would even question the assertion that sbt is too assuming of the JVM.

No, sbt does not assume the JVM. The problem is the dynamic nature of sbt. It is theoretically possible to create an sbt whose world is closed (global plugins are not added to the build, only a closed set of local plugins are) and that can be linked to native code or even JavaScript code. Of course, you would need to relink every time you add a local plugin, and you would need the actual codebase to cross-compile.

There are other technical solutions to the problem of plugins only targeting a concrete backend (Scala Native vs Scala.js) version. One could imagine an sbt plugin that defines a public interface called by settings and tasks, but whose implementation classloads the backend version and calls concrete methods there. This has three problems:

  • Dramatically complicates the design of the sbt plugin.
  • All the arguments of the methods invoked have to be either primitives or Java classes (e.g. String).
  • It may not actually be possible, depending on concrete technical details.
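A minimal, self-contained sketch of that facade idea. All names here are hypothetical; FakeBackend stands in for a real backend (e.g. the Scala.js linker) that would normally be classloaded from a separate classpath. The caller only ever sees java.lang.String, and the concrete method is resolved reflectively:

```scala
// Sketch of the classloading facade (hypothetical names throughout).
class FakeBackend {
  def link(mainClass: String): String = s"linked $mainClass"
}

object BackendFacade {
  // Public interface: only primitives / java.lang.String cross the boundary,
  // so the caller never links against backend classes at compile time.
  def link(backendClassName: String, mainClass: String): String = {
    val cls    = Class.forName(backendClassName)        // classload the backend
    val method = cls.getMethod("link", classOf[String]) // String-only signature
    val inst   = cls.getDeclaredConstructor().newInstance()
    method.invoke(inst, mainClass).asInstanceOf[String]
  }
}
```

Note how even this toy version makes the second bullet visible: the moment `link` needed a richer argument type, that type would have to be shared between the facade and every backend, reintroducing the coupling.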

I’m just throwing this idea out there. I make no claim that it’s feasible or worth the trouble; I assume it would require too much of @sjrd’s time to implement it this way.

I don’t believe this is true, especially for Crystal, which has features such as mixins/monkey patching that can break source compatibility just by adding public methods (and in fact everything in Crystal is public; it has no concept of private). In Ruby you can pretty much do anything you want (granted, it is a dynamic language). Adding public methods can break any existing code.

Sure, but this problem is still going to exist. It’s not just Scala.js; it will apply to other backends as well. We have companies/OSS projects still having to publish for Scala 2.10. And as mentioned before, this also forms a matrix: if we have some common library (let’s say a Date library) we end up having to build for every possible Scala/Scala.js/Scala Native configuration.
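For concreteness, a build.sbt sketch of that matrix using sbt-crossproject (the versions and library name are illustrative):

```scala
// build.sbt sketch with sbt-crossproject; versions are illustrative.
// Each (Scala version, platform) pair is a separate artifact to compile,
// test and publish.
lazy val dateLib = crossProject(JVMPlatform, JSPlatform, NativePlatform)
  .crossType(CrossType.Pure)
  .settings(
    crossScalaVersions := Seq("2.10.7", "2.11.12", "2.12.8", "2.13.0")
  )
// 4 Scala versions x 3 platforms = up to 12 artifacts, minus the cells
// that don't exist (e.g. Scala Native supporting only 2.11 for a while),
// which the build must also handle explicitly.
```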

It’s not just about Scala.js, it’s about the Scala ecosystem in general.

I am just stating that, as @fommil has said, it’s become quite difficult (for whatever reason) for people to maintain libraries, more so than what I suspect is typical of other mainstream languages (which is what Scala is competing against). I wasn’t trying to imply that making sbt agnostic to different Scala backends would be easy. In fact, as stated, it would likely be quite difficult in sbt due to the fact that all settings are global, though that is arguably a design issue. Scala Native should not even have to know about Scala.js’s fastOptJS, or be able to access it, because it’s irrelevant. (I would actually argue this is one of the problems with sbt: having all settings global makes it more confusing for users, because you have access to certain settings in contexts where you shouldn’t be able to access them, or even care about them.) Maybe the first step would be for sbt to implement a concept of private/public scoping so that not all settings are global?

I think for similar reasons (at least according to @cvogt) this is why cross-compiling against Scala backends isn’t hard to do in CBT: CBT allows you to hide implementation details in traits with private (just like in any Scala code). At least to me, these kinds of problems demonstrate design problems that sbt has, hence why they are not easy to fix.

I mean at the end of the day, especially with Dotty looming on the horizon, this problem is just going to get worse and worse.

I don’t like the fact that you’re blaming sbt’s design. In my view, it has nothing to do with this. You can effectively create private keys (tasks and settings) in sbt that are not accessible to other keys. This is why autoImport exists, and this is also why you can scope keys inside tasks. And don’t forget about local private keys, which also exist even though most people don’t know about them.
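A sketch of what that looks like in an sbt AutoPlugin (the key names are hypothetical): only keys placed in autoImport become visible to builds, while everything else stays plugin-internal.

```scala
// Sketch of an sbt AutoPlugin with one public and one internal key.
import sbt._

object MyPlugin extends AutoPlugin {
  object autoImport {
    val myPublicTask = taskKey[String]("visible to every build")
  }
  import autoImport._

  // Not in autoImport, and marked private: builds cannot reference it by name.
  private val myInternalTask = taskKey[String]("plugin-internal helper")

  override def projectSettings: Seq[Setting[_]] = Seq(
    myInternalTask := "some internal value",
    myPublicTask   := myInternalTask.value + ", exposed"
  )
}
```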

Why people don’t know about them or don’t use them correctly is another discussion.

CBT relies on source compatibility, and that’s okay. But let’s not pretend that source compatibility solves this problem, because it doesn’t. The main problem here is the tight coupling between Scala.js and the sbt plugin to use it — likewise for Scala Native.

I actually have zero experience maintaining a cross-compiled library, so that’s why I don’t go into the main discussion here. But so far we’re beating around the bush…

No, again, same problem: if you do that, then when you switch your project from JVM to JS or conversely, your .settings() don’t typecheck anymore (instead of failing to build the dependency graph). In fact, that’s worse than the previous alternatives: with those, at least, you could write your .settings() with the appropriate dynamic dependencies so that your dependency graph would be correct in all configurations. With private settings for platform-dependent things, you make this impossible.
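One way the ecosystem already sidesteps this in practice (a sketch; the keys shown are just examples): with sbt-crossproject, platform-dependent keys are confined to platform-specific settings blocks, so the shared .settings() never references a key that can become undefined when the platform changes.

```scala
// build.sbt sketch: shared settings stay platform-agnostic, while
// platform-specific keys live in jsSettings/jvmSettings blocks.
lazy val lib = crossProject(JVMPlatform, JSPlatform)
  .settings(
    // shared, platform-agnostic settings only
    scalaVersion := "2.12.8"
  )
  .jsSettings(scalaJSUseMainModuleInitializer := true) // JS-only key
  .jvmSettings(run / fork := true)                     // JVM-only concern
```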

There is a chance to break backward compatibility in the future: just align the breaking changes with Scala versions, so that, for example, Scala.js 1.0 on Scala 2.13 need not be backward-compatible with Scala.js 1.0 on Scala 2.12.

So it becomes a good opportunity to advertise Scala.js whenever someone creates a PR to add Scala.js support to a JVM library.