I’ll try to start this discussion with a few general remarks. And even before that, I’d like to recap how it currently works, so that everyone reading this thread is on the same page. To simplify my prose, I will often refer to Scala/JVM and Scala.js (because that is what I’m used to), but as far as I know, everything I say should apply equally to Scala Native.
The current scheme
Compilation pipeline
Independently of the build tool or even the distribution strategy, the compilation pipeline of Scala follows these steps. For the JVM:
- scalac compiles .scala files into .class files
- the JVM reads .class files, links them on the fly and executes them
For Scala.js:
- scalac + scalajs-compiler compiles .scala files into .sjsir files
- the Scala.js linker reads .sjsir files, links them and produces a .js file
- a JS engine reads the .js file and executes it
It is important to note that Scala.js does not directly interact with .class files. It does not convert .class files into .sjsir files; it compiles .scala files directly. However, for a lot of reasons, the Scala.js ecosystem still needs its .class files: for separate compilation, incremental compilation, IDEs, macro expansion, and zillions of other tools. The .class files that come out of a Scala.js compilation are not the same as those that would come out of a normal compilation step, because Scala.js internally needs to manipulate the scalac Trees to get its job done. This means that those .class files are not binary compatible with the ones coming out of a regular Scala/JVM compilation (just like those coming from different major versions of Scala).
Distribution: artifact suffixes
The typical distribution format for Scala is as jars on Maven Central (or Bintray or whatever). For Scala/JVM, these jars contain .class files. For Scala.js, they contain .class files + .sjsir files.
Because of binary incompatibilities between the ecosystems of different major versions of Scala/JVM, the de facto convention is that a project foo from organization org.foobar is available under the artifact foo_2.12 for the 2.12 binary ecosystem, foo_2.11 for the 2.11 ecosystem, etc.
Following suit, and because the binary ecosystem of Scala.js is another dimension, the convention that we (@gzm0 and myself) chose back in Scala.js 0.5 was to expand on that scheme, publishing the Scala.js variant of foo for 2.12 as foo_sjs0.5_2.12. Since Scala.js itself is not binary compatible between major versions, the same library published now with Scala.js 0.6.x will have the artifact name foo_sjs0.6_2.12.
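To make the suffix scheme concrete, here is a small, hypothetical Scala helper (the object and method names are mine, not part of any real tool) that computes artifact names according to the convention described above:

```scala
// Hypothetical helper illustrating the artifact-suffix convention.
object ArtifactNames {
  /** Builds the artifact name for a project, given an optional Scala.js
    * binary version and the Scala binary version.
    */
  def artifactName(
      base: String,
      sjsBinaryVersion: Option[String],
      scalaBinaryVersion: String
  ): String =
    sjsBinaryVersion match {
      // JS variant: both suffixes, Scala.js first
      case Some(sjs) => s"${base}_sjs${sjs}_${scalaBinaryVersion}"
      // JVM variant: only the Scala binary version suffix
      case None      => s"${base}_${scalaBinaryVersion}"
    }
}
```

For example, `artifactName("foo", Some("0.6"), "2.12")` yields `foo_sjs0.6_2.12`, while `artifactName("foo", None, "2.12")` yields `foo_2.12`.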
Back when 0.6.0 was released, the Scala.js ecosystem was still tiny, and no one ever even tried to cross-publish between 0.5 and 0.6. Now, we are facing a unique period in Scala.js’ history, with Scala.js 1.x coming up (the first milestone was published some time ago). The large ecosystem of Scala.js that we have now means that library maintainers are actually cross-publishing for the 0.6 and 1 binary ecosystems. The coming months are therefore particularly hard on library maintainers. However, Scala.js 1.x is supposed to last “forever”, always being backwards binary compatible (like Java). Once 1.0.0 is out, therefore, I expect intra-Scala.js cross-publishing to fade away and “never” come back.
I put “forever” in quotes because one never knows. But to give an idea of the time scales we are talking about, note that the lifetime of Scala.js 0.6.x is approaching 3 years already. I expect the lifetime of 1.x to be significantly longer than that.
Distribution: artifact metadata (and why TASTY will not help)
I’d like to point out that a Maven artifact is not just about the .class files in the .jars. Artifacts also contain metadata, the most important one being transitive dependencies.
Even for a so-called pure cross-compiling project (whose set of source files is exactly the same on all platforms), the transitive dependencies of the project are not the same per platform. Just like foo_2.12 transitively depends on, say, fizzbuzz_2.12 while foo_2.11 depends on fizzbuzz_2.11, the Scala.js version foo_sjs0.6_2.12 transitively depends on fizzbuzz_sjs0.6_2.12. And fizzbuzz could have platform-dependent code, meaning that its platform artifacts differ beyond metadata. At the very least, some transitive dependency is eventually going to depend on scalajs-library_2.12 for the Scala.js ecosystem but not for the JVM ecosystem.
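In sbt terms, this per-platform dependency resolution is what the %%% operator of the Scala.js sbt plugin provides. A hypothetical build.sbt fragment for foo, using the placeholder fizzbuzz from above (organization and version invented for illustration):

```scala
// Hypothetical build.sbt fragment for the cross-compiling project `foo`.
// On the JS side, where crossVersion adds the _sjs0.6 suffix, %%% resolves
// to fizzbuzz_sjs0.6_2.12; on the JVM side it resolves to fizzbuzz_2.12.
libraryDependencies += "org.foobar" %%% "fizzbuzz" % "1.0.0"
```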
Moreover, even if fizzbuzz exposes a source-compatible API across its platforms, it might do so using platform-dependent type aliases, implicits, and a bunch of other things that would make even the typechecked trees for foo different in its JVM and JS variants.
Therefore, TASTY is never going to solve this problem.
sbt integration, crossProject, and ++
Leaving cross-compilation aside for a moment, the way Scala.js integrates with sbt is through an AutoPlugin: ScalaJSPlugin. Applied to a project, that plugin completely switches the target of the entire project to Scala.js instead of the JVM. This has, among others, the following consequences:
- Add the scalajs-library library to the dependencies
- Add the scalajs-compiler compiler plugin for scalac
- Change the crossVersion setting so that artifacts and their dependencies are suffixed with _sjs0.6
- Add all the Scala.js-specific tasks, such as fastOptJS
- Change run, test and friends to run with a JS engine on the output of the Scala.js linker
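For reference, enabling the plugin on a project looks roughly like this (the plugin version below is only an example):

```scala
// project/plugins.sbt: bring in the Scala.js sbt plugin
// (0.6.21 is just an example version)
addSbtPlugin("org.scala-js" % "sbt-scalajs" % "0.6.21")

// build.sbt: this single line switches the whole project to the JS target,
// with all the consequences listed above
lazy val foo = project
  .enablePlugins(ScalaJSPlugin)
```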
Clearly, the resulting project is not usable as a JVM project anymore. Therefore, as far as sbt is concerned, there is no such thing as a cross-compiling project. If we want to cross-compile a “project”, we actually need two sbt projects that share their source directories and most of their settings: one with enablePlugins(ScalaJSPlugin) and one without.
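Spelled out by hand, that could look something like the following sketch (this is my own illustration, assuming a shared/ directory layout; it is not what crossProject actually generates):

```scala
// Sketch: two sbt projects sharing a common source directory,
// one targeting the JVM and one targeting JS.
lazy val sharedSettings = Seq(
  // Both projects also compile the sources under shared/
  unmanagedSourceDirectories in Compile +=
    baseDirectory.value.getParentFile / "shared" / "src" / "main" / "scala"
)

lazy val fooJVM = project.in(file("jvm"))
  .settings(sharedSettings)

lazy val fooJS = project.in(file("js"))
  .settings(sharedSettings)
  .enablePlugins(ScalaJSPlugin)
```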
This is what crossProject gives you: it is a builder with the same surface syntax as a Project, but which at the end of the day gives you two actual Projects that sbt can use.
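Typical usage looks like this (Scala.js 0.6.x syntax; the settings are placeholders):

```scala
// One crossProject builder...
lazy val foo = crossProject.in(file("."))
  .settings(
    // settings shared by both platforms
    name := "foo"
  )
  .jvmSettings(/* JVM-only settings */)
  .jsSettings(/* JS-only settings */)

// ...yielding the two actual Projects that sbt works with.
lazy val fooJVM = foo.jvm
lazy val fooJS  = foo.js
```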
So why is this so complicated, when ++ makes it so easy to switch between versions of Scala, without requiring additional Projects for every major version of Scala? Basically because the only effect of ++2.12.2 is to set scalaVersion := "2.12.2" in every project (I may be simplifying, but that’s the gist of it). Every other setting that depends on the Scala version is then derived from that setting. However, the difference between a JVM project and a JS project is whether or not we apply enablePlugins(ScalaJSPlugin) to it. No amount of set whatever is going to get sbt to add or remove an AutoPlugin from a project.
This is why we cannot have a +++ of sorts that would dynamically switch the target of a project.
Identifying the issues
OK, now that I have (hopefully) articulated the various dimensions of the issue and the current scheme, we can really talk. As a first discussion item, I’d like to clearly understand which issues project maintainers really face. Without understanding this, we might look for solutions to non-existent problems.
In the OP, @fommil mentions the difficulty of dealing with build.sbt, and in particular with the multiple projects created by crossProject. I take note of that point of view, but I have a feeling that there is more to it. After all, aren’t most sbt builds nowadays multi-project builds anyway? And are the changes to the build really that complex? I am not sure.
Let’s take as an example the PR I made not long ago to scala-collection-strawman to add cross-building for Scala.js: https://github.com/scala/collection-strawman/pull/220. It was pretty much just turning a project into a crossProject. The reviewer also asked that a couple of shortcut commands be added to the top-level project to make it easier for the typical contributor to run the tests on the JVM.
Of course it is some work. And being comfortable making these changes requires being somewhat comfortable with writing a build.sbt to begin with. But is it so hard that it pushes library maintainers away from publishing their stuff?
The answer might just be yes, and there is no other issue. If that is the case, then let’s tackle this issue by all means necessary.
But before we do that, I would like to be sure that there are no other reasons that push people away from cross-compiling for Scala.js. It is no use to solve one aspect of the problem if we do not understand the big picture.
Here are a few things that I have heard or overheard in the past, that could be other reasons:
- CI build times: Scala.js is slower than Scala/JVM (because, you know, JS), enough so that CI build times can significantly increase for a library that cross-compiles. Even if locally you only ever run your tests on the JVM, your CI builds will take a hit.
- Need to install Node.js on machines: the Scala.js tests won’t run without Node.js (or some other JS engine), so your CI infrastructure must have it installed. Locally, the developer who wants to run the JS tests also needs to install it.
- Familiarity with Scala.js: most library maintainers don’t use Scala.js on a regular basis, if at all. Some could feel like they are taking responsibility for something that they don’t understand.
However, I have never seen the above clearly articulated first-hand by a library maintainer, so I cannot evaluate to what extent they are true, or whether they are a problem at all.
OK, I’m done with this first post. To conclude, let me reiterate my Big Question:
Are there other difficulties that library maintainers face when supporting Scala.js, besides the complexity of the build.sbt?