Behaviour of `@experimental` in Scala 3

I won’t go into the details of the other things stated, as that would just be repetition, but I strongly disagree with:

The Python 2 => 3 transition could not be more different to the situation in Scala!

It took almost 1.5 decades for completely different reasons.

Python did not have a proper migration story, they did not have working rewriting tools, and above all they did not have a type system that could catch semantic drift in your code.

Python code would just silently change meaning after switching the version. That meant that you would need to retest every line of your code again, likely by hand.

At the same time there was a strong movement in the Python community to keep Python 2 alive, instead of letting it die quickly and with dignity. This was part of the problem, not the solution!

The point here is purely about stability guarantees and “official commitment”. This is an issue of setting expectations right. It’s not a technical issue.

Of course you can deploy to production whatever you wrote last night. That’s “normal”, in some sense…

But you likely won’t sign an SLA for that code… That’s the point.

There can’t be any “pragmatic” definition of “experimental language feature”. This needs to be strict, otherwise it’s useless. And strict means: no exceptions, even if they would sometimes be convenient.

And Scala 3 also has its own holes/problems when it comes to migration, and what’s being discussed in this thread is one of them, otherwise this thread wouldn’t exist.

As pointed out, there is a lot of code stuck on Scala 2 precisely because of the issues being discussed here, so if you are implying that Scala 3 doesn’t have these migration problems you are wrong.

This kind of black-and-white thinking is not fruitful, and if this is where it ends, it will hurt Scala 3 adoption and/or create the workarounds/hacks I pointed out before, which will cause more problems down the line (this is already happening in izumi-reflect, btw).

Be extremely careful where you draw those lines, otherwise they can bite you in the future.

I’m against painting things black or white in general.

But there are some things that are binary by nature.

I don’t think there is an issue with “hacks”. That may imho even be a legitimate solution you could employ in your org. There were things like this in Rust:

The point being: If you use that, it’s very, very clear you’re on your own. You’ve been warned: “Please do not use this”… :smile:

There is nothing wrong with “I know what I’m doing”. We’re grown-up people in the end.

The question that matters is: Who’s responsible for the outcomes?

In case you knew what you were doing it’s clear: If something goes wrong the only dude you can blame is the one looking at you from the mirror.

That’s not the case if someone promised you some guarantees.

That’s the distinction which matters here, imho.

The only reason they are binary is process/rule reasons, nothing technical. I have already clearly demonstrated that this is the case; there is no point in repeating myself here.

The point is that the current Scala 3 implementation doesn’t even really allow “hacks”. As was pointed out earlier, other languages don’t even take it to this level, i.e. C# has its own version of @experimental, but it can be worked around with pragmas/compiler flags. Scala 3 didn’t just implement what is considered a “normal” implementation of “experimental”; it went an extra 20 miles, which myself and others argue is too far.

My suggestion is already pretty clear on this. Users of macro annotations: no; authors of macro annotations: yes. And this kind of distinction is unique to macro annotations specifically, not to other experimental features, and if experimental doesn’t allow this nuance then I would argue the problem is with experimental/the rules, not with macro annotations.

Also in the real world, exceptions exist :wink:

It seems to boil down to me not buying this argument.

As I see it you could have made the exact same argument for Scala 2 macros:

“The users were completely innocent, it was just the bad macro authors who blocked the way.”

But that’s imho not how it works! What happened:

The users took macros for granted, no matter the underlying implementation. And this caused the whole issue.

So the solution is to make sure that users never take experimental features for granted again. They have to opt in to those features in a very explicit way. (Imho it’s not even explicit enough; `@experimental` is already a kind of escape hatch, and this discussion would not be happening if that escape hatch did not exist in the first place.)
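
For illustration, a minimal sketch of what that explicit opt-in looks like today (self-contained; the names are made up):

```scala
import scala.annotation.experimental

// A library author marks an API as experimental (hypothetical example).
@experimental
object Unstable:
  def feature(): Int = 42

// A user must opt in just as explicitly: the enclosing definition carries
// the annotation itself, and this experimental status propagates in turn
// to everything that calls `useIt`.
@experimental
def useIt(): Int = Unstable.feature()
```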

1 Like

Only if you ignore all of the nuance of Scala 2 macros that was pointed out before. Scala 2 macros are not Scala 3 macros, and aside from the “experimental” label, almost every single problem with Scala 2 macros that caused the issues you are pointing out is solved with Scala 3 macros, because people learned from that experience.

The problem with Scala 2 macros is not that they existed and that people didn’t treat experimental seriously; it’s that, due to the design of Scala 2 macros, the compiler got shoehorned into a specific design that it couldn’t get out of. As someone who is now writing/learning Scala 3 macros, this is not the case, irrespective of an experimental label or not. Scala 3 macros are properly encapsulated, so even in catastrophic circumstances they are not going to have the same impact Scala 2 macros did, by design.

If these technical nuances are going to be continuously ignored, we should leave the discussion here, especially judging by the fact that you didn’t even respond to it when I brought it up before.

I have found a simple solution to the complications of using experimental code. The idea is simple: we can add an -experimental compiler flag allowing users to use any experimental features. At the same time, this flag would not break the transitive properties of experimental code (i.e. any code that depends on experimental code is itself experimental).

This implies that an application that wants to use experimental features only needs to add the -experimental flag, on any compiler version. Libraries can opt in as purely experimental using -experimental.

To achieve this, the -experimental flag behaves as follows (see the sketch after this list):

  • Annotate all top-level classes with @experimental (possibly as a desugaring)
  • Build tools / incremental compilation: toggling the -experimental flag on or off will invalidate all compiled classes, to force them to be recompiled with or without the added @experimental annotations
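
A rough sketch of how an application might opt in, assuming the proposed flag ships under that name (hypothetical sbt fragment; illustrative only):

```scala
// An application that accepts being experimental as a whole: with the
// proposed -experimental flag, all of its top-level classes would be
// treated as if annotated with @experimental.
lazy val app = project
  .settings(
    scalacOptions += "-experimental"
  )
```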
2 Likes

There are two issues with that line of reasoning:

  1. Users in 3.x expect to be able to upgrade the compiler to a newer version, and still use all the libraries they used before. This is broken if macro annotations change in a breaking way while still experimental.
  2. Even if we tell users that they might have to upgrade the macro library, if the changes to macro annotations make something impossible that was exploited before, then no matter what the library authors do, the users are still stuck.

So users are absolutely impacted by changes to the experimental macro annotations.

That’s why the rules are the way they are. This is the reason. And it is very, very unlikely that we will move from that position.

2 Likes

This would be perfect!

I had presumed that this was already implied, i.e. that if a user is using a macro annotation on 3.3.x and upgrades to 3.5.x, they would also have to resolve a newer artifact for 3.5.x, but point taken.

Definitely true. I mean, some of my reasoning here is that, since macros already existed in Scala 2, I assumed these impossible cases/exploits/mistakes were already (largely) accounted for. It’s not as if macros are an entirely new feature that Scala never had.

All I can add here is that I would be wary of the tradeoff with regard to people avoiding macro annotations completely due to the previously mentioned issues with @experimental, since it can make it harder for the Scala 3 team to get feedback on these experimental features so they can improve them (including finding these exploits). I.e. if you make @experimental features so hard for users to use in practice, then very few people will even try/test them, which is problematic for different reasons.

In the end, though, I do like @nicolasstucki’s suggestion.

In my opinion it would be better for testing if some number of people actually used a “pre-release” version of the compiler. Currently almost nobody is doing that, as far as I know.

So one would need to create some initiative for that. For example lure people with shiny new features… :grin:

Regarding the -experimental flag: I don’t understand it.

Isn’t this back to “free for all”? Everybody would just put that flag into their build, and be done. You would have “nightly crimes” built into the compiler. What’s then the point of it?

I don’t think this would help in testing. People would instead assume that everything works like before, just with the “small addition” that now they can use any “experimental” feature without any further issues. That’s back to square one, imho.

But maybe I just don’t grasp the simplicity behind it? :thinking:

1 Like

This is the natural conclusion of being so binary/black-and-white when it comes to matters like this. You make it so hard for users to test the feature that no one bothers.

That’s why it should be as simple as setting a Scala version in your build config to test “pre-release” versions of the language. Just use the “pre-release” compiler.

But this needs to be separate from some “stable” Scala edition, imho.

Mixing both testing/evaluating and regular use is the root of the issue that creates unwanted liabilities for the language, as I see it.

A different example: you cannot run Debian Stable and Debian Testing at the same time! When you mix packages from Testing into Stable, or the other way around (which “works fine” “most” of the time), nobody is going to help you with any issues you encounter. Your “testing” is completely worthless for the Testing branch, and of course there is no support from the Stable branch after you have created a so-called “Frankendebian”.

Testing is something different than running production code. You can’t do both at the same time, imho.

It is not exactly “free for all”: if you do not want to accidentally use experimental features or APIs, you simply do not add -experimental. Note that companies with large stable code bases will not want to enable this flag, which implies that libraries will tend to use stable APIs if they want to be appealing. Only early adopters of new features will live in the -experimental bubble, and once a feature becomes stable they can leave it.
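
To make that guard rail concrete, a small sketch of what happens today without any opt-in (the class name is made up and the error wording is approximate):

```scala
import scala.annotation.experimental

// A dependency's experimental definition (hypothetical name).
@experimental
class UnstableApi

// def consumer = new UnstableApi
// ^ Uncommenting this on a stable release, without @experimental on the
//   use site (or the proposed -experimental flag), is rejected at compile
//   time with an error roughly saying that UnstableApi is marked
//   @experimental and may only be used in an experimental scope.
```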

2 Likes

That is the current state. However, it seems that it is not that simple to set up pre-release versions of the compiler.

I want to use some cool library that helps me solve my problem quickly.

After I’ve added this lib to the build the compiler complains that some deep transitive dependency is “experimental” (whatever this means).

ChatGPT says I need to add the -experimental flag to the build.

After doing that my new library works fine.

No issues!

/s


I don’t mean to be snarky, but I guess this would end up like that in a lot of places.

I think this was exactly the issue with the previous approach. Things were not communicated correctly, so people used experimental features way too lightheartedly.

Exactly! This is an issue.

This should be simple. (And first of all, there needs to be some defined notion of a “pre-release” compiler anyway, which there currently isn’t.)

The current approach, with the @experimental annotation, actually leads to a mix of “testing” features with “stable” code. As stated, I don’t think this is good in general.

Having proper “testing” versions of the compiler (released on a tight schedule), with some basic guarantees (like that it won’t explode instantly when touched :smile:), would be the better approach. Technically much simpler, too. Managing the whole @experimental thingy in the compiler alone is non-trivial afaik. It even creates extra bugs sometimes… Imho this kind of feature flag carries (in this case) unnecessary complexity. A simple branch-based workflow would make things much more straightforward, saving bandwidth for proper issues.

That is precisely what we have now. We release nightly versions that have passed all the tests in the compiler. You can use any nightly version of the compiler to get access to experimental features. The limitation is that libraries are not meant to publish on those versions.

3 Likes

In case you know the Debian terms: “nightly” is more like “Unstable”. I was thinking more in the direction of “Testing”.

I had something a little bit less current in mind, something that does not move faster than you can update things. :sweat_smile:

Something that you could actually use on a day to day basis while testing stuff for a little bit longer than half a day…

Maybe something in the ballpark of 1 to 2 releases per month?

Eh? Scala 3 nightlies aren’t mutable snapshots. Each one is published to Maven Central with a permanent version number (with a SHA in it).
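
For example, pinning a specific nightly in sbt looks roughly like this (the version string below is made up; real ones follow this pattern and can be looked up on Maven Central):

```scala
// Each nightly is an immutable release with the build date and a commit
// SHA baked into the version, so builds against it stay reproducible.
ThisBuild / scalaVersion := "3.4.0-RC1-bin-20230903-a1b2c3d-NIGHTLY"
```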

Maybe it would help if there was a web page on the Scala website that laid all this out. There’s https://docs.scala-lang.org/scala3/reference/experimental/index.html and Experimental Definitions, but they don’t contain this information, currently.

I understand this.

But when you test against one of these, your testing results may be invalidated by the next version the next day. It’s a very fast-moving target.

Of course it’s good to have nightlies. But imho that’s not the same as testing versions.

Just my 2ct here. I’m not sure anybody would like to put effort into testing versions. The idea presented before was more of a substitute for @experimental. Not sure this could be sold. So I don’t push it too hard. It’s just an idea. Maybe someone would get inspired by it. But it’s nothing I would try to talk anybody into.

In the end it’s all about what the compiler team likes to do.

Most likely everybody knows by now, but I’m a big Debian fan. I think they do a lot of things right. I like their approach of having three “release channels”: You have Unstable for the bleeding-edge stuff. Whatever gets committed and uploaded lands there. No guarantees. Not even that the system doesn’t explode on an update. Then there is Testing. Stuff from Unstable migrates to Testing when it has survived some grace period in Unstable without severe bug reports. And of course it all leads to Stable, which gets released from a frozen Testing at some point in time, when it’s ready.

The point is: you can run Testing as a more or less regular system. It’s kind of a rolling distro. You’re not much behind Arch, but it’s quite safe to use as it has some basic guarantees (it won’t explode on updates, usually, if nothing goes very, very wrong).

Other big and complex projects (like the mentioned web browsers) have this “release channels” approach. I think it gives you the best of all worlds: people can select their preferred level of “stability”. Not everybody is on the same level. There are more than enough people who use “pre-release” versions, so you get a lot of early feedback as a project. At the same time you can do all the continuous-integration stuff. No big-bang releases, as things just trickle down the release channels, and when stuff hits the stable channel it’s actually pretty well tested. Such an approach also matches the idea of incremental improvements quite well.

Scala doesn’t have any kind of in-between “pre-release” version. It’s: you go all-in on nightly, or you stay on official releases. No middle ground like the mentioned Debian Testing branch.