Discussion on the major release cycle of the Scala Platform

The full details of the Scala Platform release process can be found here.

To align with one of the major goals of the Platform (stability), we propose a conservative release cycle that allows libraries to ship breaking changes every 18 months (1 year and 6 months). The rationale behind this decision is that prospective modules should be widely used, and therefore well-tested and stable. If that is not yet the case, maintainers can make their changes during the incubation period, before a stable release.

However, I’m not sure everyone is in favor of this policy, and I would like to hear other people’s opinions. It’s true that the breaking-release cycle is long, so maybe reducing it to 12 months would work. However, that would mean users go through a (hopefully not) painful migration process more often, which is unfortunately not very user-friendly.

If this ends up being the case, however, the Scala Platform infrastructure could help by providing tools that migrate (some, potentially all) breaking changes in libraries’ APIs. I believe such tooling would be key for the Community, whether or not it is used in this particular situation, and other platforms could benefit from it too. However, it requires time and it’s not easy to get right. It might be better to allow breaking changes every 18 months, if that is not a major issue for library authors.

What is reasonable for you? Would this be an impediment for you and other library authors?

What counts as a breaking change? Is it any change after which existing code no longer compiles, or any change for which there is no automated upgrade path? I could see being more aggressive with updates if migrators were supplied along with the updates, using something like Scalafix.

From the process:

Modules can only break backwards compatibility in major releases, which happen (a) every 18 months (1 year and 6 months), or (b) when new Scala versions are released.

The automatic library migration tool is a neat idea to consider, but I’m not clear how viable it is or whether it’s worth the effort. We cannot rely on its existence at this moment, IMO.

The Scala Platform process uses semantic versioning. Semantic versioning has you bump the major version number whenever you make incompatible changes, at either the source or the binary level (and whether backwards or forwards). There would be no distinction between forwards and backwards compatibility as there is in the Scala compiler release process (I don’t see a lot of value in distinguishing them, at least; does anyone have a good example where this would be meaningful for users of the Platform?).

It’s unclear at this moment what Scalafix will be able to do, but from the official proposal submitted to the Advisory Board it’s meant to focus on Scala → Dotty migration.

Feels a tad coarse-grained to me – for a Platform-wide clock with so many inputs I’d probably say 12 months instead of 18, aligned to some agreed month. That’s still slow enough to not be a huge deal for us application developers, while not putting too much sand in the gears of the library authors. 12 months is a long time in modern software development, especially for libraries; I don’t see a compelling need to push it all the way out to 18.

I think the per-platform frequency should be allowed to be relatively high (12 months seems fine to me), but the per-module frequency should not be, especially if there are many breaking changes. Adopting the Scala Platform shouldn’t commit you to a serious rewrite of your code every 12 months; that’s just too much work. In practice few libraries have that much churn (especially widely used ones), but I think it is a good idea to make the expectation explicit: breaking changes are aligned on a single release cycle that occurs roughly once every 12 months (maybe we could just say 1 in 4 releases, if we have a 12-week release cycle), but a single module will NOT break more than a very minimal amount of stuff repeatedly, year after year.

I understand this the other way around. To me, the Scala compiler/stdlib release process fuses backward and forward binary compat, since a breakage in either direction is considered a breakage. I think libraries should distinguish those two things: breaking forward binary compat shouldn’t be a concern for libraries; it’s breaking backward source and/or binary compatibility that is bad.

My take on this is that a major version bump is required iff backward source and/or binary compatibility is broken. Concretely this means adding a method is fine in a minor release; removing a method requires a major release.
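To make the rule concrete, here is a minimal sketch; the `Cache` API and the version numbers are hypothetical:

```scala
// Each hypothetical release lives in its own object so the file compiles
// as a whole.
object V1_0 { // 1.0.0
  class Cache {
    def get(key: String): Option[String] = None
  }
}

object V1_1 { // 1.1.0: adding a method keeps existing callers compiling
  // and linking, so a minor bump suffices.
  class Cache {
    def get(key: String): Option[String] = None
    def getOrElse(key: String, default: String): String =
      get(key).getOrElse(default)
  }
}

object V2_0 { // 2.0.0: `get` was removed; existing callers break at both
  // the source and the binary level, so a major bump is required.
  class Cache {
    def fetch(key: String): Option[String] = None
  }
}
```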

There are a lot of details to consider beyond this simple distinction, though:

  • When we deprecate a method, we can break backward source compatibility for codebases compiled with -Xfatal-warnings. IMO this should not force a semver major bump; we could demand a semver minor bump for this case.
  • In theory, in Scala, even adding a method to a final class can be backward source incompatible, for codebases that previously pimped that method onto the class via an implicit (with a potentially different contract). It cannot break backward binary compatibility, however. I would recommend that this scenario also require a semver minor bump (see the sketch after this list).
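A minimal sketch of that second corner case, with a hypothetical `Box` API:

```scala
object V1_0 {
  final class Box(val value: Int) // v1.0: Box has no `describe` method
}

object Client {
  import V1_0.Box

  // The client pimps its own `describe` onto Box, with its own contract.
  implicit class RichBox(private val b: Box) extends AnyVal {
    def describe: String = s"client view of ${b.value}"
  }

  def report(b: Box): String = b.describe // resolves to RichBox.describe
}

// Suppose v1.1 adds the method directly to the class:
//   final class Box(val value: Int) {
//     def describe: String = s"Box($value)"
//   }
// Recompiling Client against v1.1 silently reroutes `b.describe` to the
// new member (real members win over implicit conversions), possibly with
// a different contract: a backward *source* incompatibility. Binary
// compatibility is unaffected, because already-compiled client code still
// calls RichBox.describe.
```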

IMO a bump in the semver patch number should guarantee full backward source and binary compat. A semver minor update should guarantee 100% backward binary compat, and reasonable source compat (by “reasonable”, I mean excluding corner cases such as those identified above, and potentially others I have not thought of right now).
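For what it’s worth, the binary half of these guarantees can be checked mechanically with MiMa (the Migration Manager for Scala, via sbt-mima-plugin). A minimal sketch, assuming a recent version of the plugin and hypothetical module coordinates:

```scala
// build.sbt -- assumes sbt-mima-plugin is on the build classpath, e.g.
//   addSbtPlugin("com.typesafe" % "sbt-mima-plugin" % "<version>")
// in project/plugins.sbt. The coordinates below are hypothetical.
mimaPreviousArtifacts := Set("com.example" %% "platform-module" % "1.2.0")
```

Running `sbt mimaReportBinaryIssues` then fails the build when the current sources break binary compatibility against that released artifact. Source compatibility has no equally mechanical check, which is one more reason to keep its guarantee “reasonable” rather than absolute.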


We are slowly working on an automatic library update process in IntelliJ IDEA. It’s already usable (we also implemented SAM migration for Scala 2.11 -> 2.12), but it’s still quite experimental; I think it will be completely finished during 2017.

I like that you’ve started listing the details.

Concretely this means adding a method is fine in a minor release; removing a method requires a major release.

In theory, in Scala, even adding a method to a final class can be backward source incompatible, for codebases which previously pimped that method via an implicit (with potentially a different contract).

Why just final classes? The same applies to non-final ones, so the impact seems wider.

Also, why only “in theory”? If perfect source compatibility isn’t to be expected (which is probably correct), for instance because it isn’t enforced, then that should be made explicit.

Skimming both the Haskell PVP (http://pvp.haskell.org/, for source compatibility) and what the JLS says on binary compatibility:

Chapter 13. Binary Compatibility

a full discussion of compatible changes would probably be considerably longer than what we have sketched here.

We probably don’t need source compatibility to be as strict as the Haskell PVP demands (in the Haskell PVP world, supposedly source-compatible dependency versions can be picked when packages are installed, so it’s users who see the build failures), but if the guarantees are best-effort, that should be explicit.

Can we clarify if we are talking about source compatibility or binary compatibility? There is a world of difference between the two (and I would rather opt for source compatibility for a 12 month duration, if possible).

Source compatibility is not really helpful for people building libraries on top of your libraries, though.
Since the Platform is meant to be “core” and “used by many libraries / projects”, it should aim for binary compatibility.

It’s possible to pull off: you simply deprecate old APIs and add new ones. We’ve been keeping binary stability in Akka since 2.3 (March 2014!), and being stable and remaining so is a feature for libraries as core as these.
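In Scala the pattern looks roughly like this (the `Connector` API is hypothetical):

```scala
// Deprecate-and-add: the old entry point stays and forwards to the new
// one, so code compiled against the previous release keeps linking.
class Connector {
  @deprecated("Use connect(host, port, tls) instead", "2.4.0")
  def connect(host: String, port: Int): Unit =
    connect(host, port, tls = false)

  // Added in a minor release; nothing was removed or changed in place.
  def connect(host: String, port: Int, tls: Boolean): Unit = {
    // real implementation elided
  }
}
```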

Absolutely! It’s just that in a non-final class it’s less surprising, because a def in a subclass may already need an override. Of course, any problem in final classes is also a problem in non-final classes.
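For completeness, a tiny sketch of the non-final case (hypothetical names):

```scala
// v1.0 of the library:
class Animal

// A downstream codebase subclasses it:
class Dog extends Animal {
  def speak: String = "woof"
}

// If v1.1 adds `def speak: String = ...` to Animal, Dog stops compiling
// until an `override` modifier is added: a backward source break, though
// a less surprising one than the enrichment case on a final class.
```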
