Behaviour of `@experimental` in Scala 3

There is currently a discussion happening on GitHub, which was requested to be moved to Contributors, regarding the new behaviour of experimental language features.

The intended behaviour under discussion is that if a new language feature in Scala 3 is considered experimental, there is no easy way to opt out of the restrictions that come with that. In other words, when a feature is marked experimental with the @experimental annotation, the Scala 3 compiler enforces that all transitive call sites of that initial @experimental definition also need to add @experimental.
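To illustrate (a minimal sketch of my own, not from the original discussion), the enforcement for an ordinary @experimental definition looks like this:

import scala.annotation.experimental

@experimental def newFeature(): Unit = ()

def plainCaller(): Unit = newFeature()
// does not compile: callers of newFeature must themselves be @experimental

@experimental def okCaller(): Unit = newFeature() // compiles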

Specifically, in the context of macro annotations: if you define a macro annotation, e.g.

import scala.annotation.{experimental, MacroAnnotation}
import scala.quoted.*

// We are forced to put @experimental here because MacroAnnotation itself has it
@experimental class myAnnotation extends MacroAnnotation:
  def transform(using Quotes)(tree: quotes.reflect.Definition): List[quotes.reflect.Definition] =
    List(tree) // identity transform (MacroAnnotation's single abstract method, API at the time of writing)

then we also have to add @experimental to the user code that happens to use myAnnotation, i.e.

@myAnnotation def something: Unit = ()

will not compile; you have to write

@experimental @myAnnotation def something: Unit = ()

instead. (It is important to note that, to reduce the noise, you can add the @experimental annotation to a top-level object/class, although one can then counter-argue that you are marking your entire object as @experimental when in fact only a few places inside it use experimental features.)
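A minimal sketch of that workaround (object and member names are made up):

import scala.annotation.experimental

// The whole object is now flagged experimental, even though only
// `something` actually uses an experimental feature.
@experimental object MyService:
  @myAnnotation def something: Unit = ()
  def unrelated: Int = 42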

In case it's not clear: it is also entirely intended that you cannot get around this. Unlike in Scala 2, where you could add scalac flags and/or experimental features were only reported as warnings, in Scala 3 you have to add the annotation unless you use a snapshot/non-mainline compiler.

This design has surfaced some consequences which have been raised in the earlier-mentioned thread. The first one, which is specific to macro annotations, is that the forced transitive application of @experimental can be considered excessive, because for the caller (i.e. the “user”) of the macro annotation, the fact that it's @experimental is unnecessary information. While for language features like direct style this behaviour is understandable, for macro annotations the user doesn't really care how a macro annotation is implemented, so even if the entire design of macro annotations is changed it's irrelevant to them (although it is to be noted that, hypothetically speaking, macro annotations could even be removed at some point).

The other problem is specific to library authors when it comes to writing code that uses macro annotations and is cross-compiled to both Scala 2 and Scala 3. Often the goal of such libraries is that the user can write cross-compatible Scala code (i.e. code that compiles under both Scala 2 and Scala 3), and with the current design this isn't possible because of that forced @experimental annotation. This is because there is no such thing as an @experimental annotation in Scala 2, and although this can be worked around by creating a stub scala.annotation.experimental just for Scala 2, I have reservations about whether that is expected/intended usage.
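To make the workaround concrete, here is a minimal sketch of such a stub (entirely my own, and I am not claiming this is sanctioned usage), placed in a Scala 2 only source directory such as src/main/scala-2:

package scala.annotation

// Stub mirroring Scala 3's scala.annotation.experimental so that shared
// sources carrying @experimental still compile under Scala 2.
class experimental extends StaticAnnotation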

The authors of izumi-reflect and airframe, which are libraries designed to support multiple Scala versions this way, have also stated the same concerns.

My personal 2 cents: specifically in the case of macros/macro annotations, the transitive forcing of the @experimental annotation is excessive and has unintended consequences. While I understand that macros/macro annotations could even be removed or significantly changed at some point, I also find this highly unlikely, considering that a lot of what can be considered critical Scala libraries actually rely on these features.

The entire point of experimental is to incentivise using it with the nightly compiler, because that specifically produces experimental TASTy output, which forces users to also use a nightly compiler.

I’m afraid the consequences here are very much intended. In fact, the system has been designed precisely in order for those exact consequences to surface.

We don’t want stable libraries to be built on experimental features.

We allowed that in Scala 2, and it resulted in people taking macros too far, too fast, and then we were stuck with a pretty bad design. Even though they were “experimental”, we could not in fact change things in there because it would break people’s code.

You are already trying to make a similar argument:

You’re already saying: “we have those critical libraries that rely on that experimental stuff, so you cannot possibly take it away from us!” Well, yes we can, and we probably will, in one way or another. The only protection we still have against such arguments is precisely to prevent your library from being used by non-experimental code. If it requires experimental code to be used, your library cannot possibly be critical.

Macro annotations have not even been submitted to the SIP process yet. There’s no way they are going to be made stable before that.

To be a bit blunt, I think this is going to have an impact on Scala 3 adoption, especially for libraries that have to cross-compile between Scala 2 and Scala 3. I will let the authors of the other libraries speak for themselves, but at least for Pekko (which I help maintain) we would basically have to either make @experimental stub annotations (which, as I said before, is probably not what is desired) or duplicate entire swaths of code just for Scala 3, undoing all of the effort that went into making cross-compiling between Scala 2 and Scala 3 possible.

Another example: I have unpublished work to replace Scala reflection with izumi-reflect (mentioned before) in tminglei/slick-pg, with the underlying intention of adding Scala 3 support. Since zio/izumi-reflect is experiencing the same problems, this work would in some part also be in vain, because for Scala 3 users it would again reveal the same issues with @experimental. While it may be possible to solve this specific issue another way without using izumi-reflect (i.e. maybe using Scala 3's new generic programming with tuples?), I personally don't have the bandwidth for this, as it would mean completely separate implementations for Scala 2 versus Scala 3 (and there are other concerns here, e.g. getting the work accepted by the library author would be harder).

So while I understand the reasoning behind this, my impression is that Scala 2 to Scala 3 adoption, especially for existing maintainers of Scala 2 libraries, hasn't been considered as strongly as it should have been.

And also, just to re-iterate: as a library author writing macros/macro annotations, I really don't care if the entire design/implementation of macros changes. It's the users that I care about. I completely understand the consequences/burdens of using experimental features, but users shouldn't have to care about this, and that's where I think it's gone too far.

If this is the case, then why does the @experimental annotation even exist in the stable LTS Scala, i.e. Scala 3.3.x?

Also, I am not sure why the TASTy output would be relevant here, because at least specifically with macros, the macro's job is to generate/rewrite code at compile time, and the TASTy output for that generated code would match the TASTy output of whatever Scala version the user of the macro is on. The macro annotation definition itself may produce experimental TASTy output, but from what I can tell this appears to be by design rather than a technical limitation?

I think it maybe didn't even go far enough?

The whole point is to set expectations right. And the expectation should be “If I’m using experimental language features or experimental libraries (libraries which use experimental language features) I may run into severe issues as I’m playing beta tester”.

People, especially the end users, need to know that they’re running dev versions of their software stack.

So if there is still some impression that “one could create ‘stable’ libs with experimental features”, the message wasn't brought across clearly enough, I guess.

This is actually a very good question.

Maybe a model closer to what Rust does would be better?

They have experimental stuff only in the nightlies.

This would have two interesting consequences:

First of all, the expectations would be set very, very straight: stable Scala would be rock stable. If you want the shiny new stuff, you need to give up on this strong stability promise. You need to do this very consciously.

At the same time, it would likely increase the number of people willing to take the risk of using a “pre-release” compiler and become proper testers of new features or changes. Which would be good for compiler development, as there would be more feedback, and likely more people caring to get things fixed in case they are suboptimal. So faster and better turnarounds.

BTW: in Rust, people have been running nightly compiler versions in production for a long time.

It’s not like a testing version needs to be somehow unstable! It’s more about the guarantees you give.

I, for example, run Debian Testing as my main OS on all private machines. It’s rock solid, imho the most stable rolling distro under the sun (and actually more stable than some “LTS” versions of other distros). But I know: there are no guarantees. Things may break, even if it’s very unlikely, as stuff in Testing must first survive Unstable without major issues for at least a few days.

The whole point is: you have “release trains” with different guarantees. You have “old and boring, but it doesn’t change much”, or you have “new shiny stuff, but it may include bad surprises”. (Or something in between, if you have the capacity for more “shades of stability”; which Scala likely doesn’t have.)

What I personally would prefer for Scala would be a quite fast-moving “testing branch”, maybe released once per month (not “nightlies”, as this is too fast even for people who are willing to be beta testers), and some LTS version for the conservative people.

So whoever likes it can live a “move fast and break things” culture, where you get all the newest shiny stuff early, but you need to keep up with the development around you, and it’s also up to you to report and maybe even try to fix surfacing issues. OTOH, the people who only like to do a big update every few years can have that too. (Maybe comparable to people who move only between Editions of Rust¹.)

Of course, this depends on the compiler team and whether they want to move to a “release fast, release often” model. (Imho such a model has proved quite successful when looking at other open-source projects.)


¹ Please take this with a grain of salt, as I’m depicting a theoretical world there. There are (small) issues with Rust editions. Nothing is perfect. But the basic idea is not bad!

I think the issue with comparing the current problem to Rust is that Rust is not dealing with the problems that Scala is, specifically the fact that we have this in-between transition period between Scala 2 and Scala 3.

If Scala 2 didn’t exist, we only had one Scala, and codebases didn’t need to cross-compile between Scala versions, then this wouldn’t even be a problem (or would be much less of one).

This is why I think we should be wary of just blindly copying/comparing what other languages are doing while ignoring the context. In this specific case, Rust is not dealing with the same problems Scala is, i.e. it hasn’t made an epic/major bump in language evolution (and it may never do that, but that’s a tangent).

To me, this point of “are we making it harder than it needs to be for libraries to support Scala 2 and Scala 3” should be a central consideration for a lot of the decisions being made. Scala is already on the hard end of the spectrum for library maintainers when compared with other languages; we shouldn’t make the job even harder.

If Scala 2 was well and truly “dead” then I would understand this, but the current situation is far from that (in fact I wouldn’t be surprised if the vast majority of code out there is still Scala 2, something that will likely take around a decade to transition if we are to take Python 2-3 as an example).

Yes this is indeed important.

Scala already makes a few things more difficult in this area than imho needed.

But I think the most important thing now is to try to push people to update. Hanging in some in-between limbo for the next decade is not good. This would make progress really really hard.

Progress always means breaking something. One way or the other. That’s the nature of progress: change.

So the most important thing is to make it easy to upgrade Scala, and not to really try to keep the old version of Scala alive. The result of the latter would be that nobody moves. People never move when they don’t strictly need to… :smile:

So there need to be incentives for people so they want to move (for the features) and at the same time feel some urge to do it (because things quite quickly become awkward in the “old world”).

Of course, blindly copying other languages is not a good idea. That’s why I actually proposed a different model: the same in spirit, but different in the details.

I think forcing people to use some “testing version” in order to use “experimental” features would make sense. Then all the technical difficulties with the @experimental annotation (the topic here) would actually just disappear.

Your lib with macro annotations would need to be marked as “testing”, and it would only run on a “testing” Scala version.

If the lib is important enough, people would still use it. People run even nightly compiler versions in production in Rust… The benefits you get for that just need to look sexy enough.

The compiler would get more and better testing, and maybe even more contributors, as people would have a real incentive to invest in the issues they have. Trying to switch back to a “stable” version wouldn’t always make sense.

I see no big issue. “Release early, release often” works imho best anyway. Look, for example, at what browsers do: they keep making the cycles shorter and shorter. Or consider the whole idea of continuous integration. This makes things actually simpler for everyone. Big-bang releases/updates fail miserably quite often. (This was actually also the critique of the long Scala 3 development. People were asking why not take a more incremental approach. Of course, it’s hard to break bigger things with an incremental approach, so you need to change the epoch once an epoch… :smile:)

To me it seems like your issue is not that @experimental should behave differently, but that macro annotations should become non-experimental as soon as possible.

If you make @experimental stuff easier to use in the general case, then you’re back in the Scala 2 situation where everybody just enables all the flags in the compiler and experimental loses all meaning.

IIRC that was how it worked initially in Scala 3 as well.
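For reference, the Scala 2 escape hatch being alluded to was, to the best of my recollection, just a one-line opt-in:

// Scala 2: enable macro definitions per file...
import scala.language.experimental.macros

// ...or globally for the whole build (sbt):
// scalacOptions += "-language:experimental.macros"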

Right, and if this is the case, then the entire premise of the library would need to be thrown down the toilet, because Pekko would never accept requiring a snapshot/nightly compiler version, and as a consequence we would be forced to duplicate source code in multiple places or resort to ugly hacks like the one in scalatest, where there is a custom comment-based preprocessor for different Scala versions (these are the unintended consequences I am talking about).

And to me this kind of stance was acceptable for Scala 3 versions up until the LTS, but now that we have an LTS, this is the first Scala 3 version that libraries are strongly recommended to build against.

The natural conclusion of having @experimental only in non-LTS versions would be that Scala LTS 3.3.x is yet another Scala version that is not suitable for certain cross Scala 2/Scala 3 projects, but that’s also not a place I want to go, because there is already enough on the Scala 3 team’s plate when it comes to work (or, put differently, I don’t think demanding that Scala 3 support every feature that Scala 2 had as “stable” is very mature).

I also don’t know if it’s intentional, but I am also getting the feeling that commenters are handwaving away the “user” vs “library author” distinction, particularly when it comes to macro annotations. To me this is actually extremely critical. Again, I totally 100% understand why other experimental language features like direct syntax need to transitively pollute user code with @experimental, because even the syntax/language structure can still change wildly, but with macro annotations the syntax is already set in stone (it’s just a standard annotation, which exists in both Java and Scala).

Macro annotations don’t exist yet in Scala 3. What exists is just something for testing the concept. Anything beyond that is at the risk of the tester. That’s what the expectation is supposed to be. If you want to cross-compile with Scala 2, wait until macro annotations will be officially part of the language.

C# has the similarly infectious RequiresPreviewFeaturesAttribute, which is required on APIs using preview features, as well as APIs calling those APIs.

What is not documented in this case is that the compilation error generated by its omission is technically a warning, and it can be suppressed using #pragmas and compiler options. This prevents your assembly from being flagged as using preview features, and gives you an escape hatch in the “this API is subject to change but unlikely to be removed” case. This makes it tricky for novices to blunder into shipping preview features, but also avoids forcing everyone onto preview builds.

Doing this before the rough edges are filed off Scala 3 would, speaking frankly, be language suicide. The unfortunate reality is that Scala 3 is not yet on par with Scala 2, so until it is, making it easier to support both will reduce the number of libraries that opt out of the upgrade and block downstream libraries.

Actually, that’s not what I am suggesting. I don’t have a problem with @experimental being transitive and not having an escape hatch, just that for macro annotations specifically this should only apply to the people defining a MacroAnnotation (and also people extending MacroAnnotation), but not to the users (i.e. the callers of the macro annotation).

This is different to Scala 2, where even library authors had those escape-hatch mechanisms.

But how do you then communicate to those users that they’re using something that may break in all kinds of funny ways at any time?

Exactly this was / is the big problem with Scala 2 macros.

The problem was never that someone decided to build macro annotations in the first place. The problem was / is that end-users started incorporating macro-based libs and depending on them for their production systems without thinking hard enough about the long-term consequences.

You basically want that state of affairs back.

I’m not sure this is the right way forward…

What’s your definition of “break” w.r.t. macro annotations specifically? The syntax of a macro annotation is already set in stone; it’s an annotation, after all. Even if the entire design of macro annotations changes, for the user it’s always going to be

@myMacroAnnotation def something = ...

Of course, if macro annotations were entirely redesigned, then it would affect library authors, but not users.

I disagree. I think it would be more accurate to state that the authors of macros had an aversion to the macros being changed because they had already written so much code against the current definition of macros. We also have to take into account how Scala 2 macros were created: it was from the get-go an extremely experimental design, because it just exposed the internals of the Scala compiler.

As far as I recall (because this is already some time ago), if end users were complaining, it’s not because they were complaining about hypothetical breaking changes to macros (which would largely be a macro-author issue), but rather about not being able to use the library at all because Scala 2 might hypothetically remove macros. The whole whitebox vs blackbox macros issue also comes into play here, but regardless of @experimental, Scala 3 has already learnt the lesson here, which is to be conservative in what features are added, i.e. start with smaller subsets first and gradually increase support, whereas Scala 2 macros were a wild west that supported so many things pretty much right from the start.

Macro annotations have not even been submitted to the SIP process yet. There’s no way they are going to be made stable before that.

Is there any chance these SIPs could be smaller and move faster?

Some SIPs can be split into multiple smaller ones, but macro annotations, not really(?):
they are a wholly new construct, which will be extremely powerful.
Therefore, every part of their design needs to be well suited to the other parts.

My argument is that you’re creating a problem where one doesn’t exist. The issue was not users using macros; the issue was macro authors not wanting to rewrite their existing macro codebases.

I think the bigger point is that an entire history and context is being ignored, i.e. at the time of Scala 2 macros:

  • Cross-compiling Scala code for multiple versions was much more painful then than now (so any hypothetical breakage of Scala 2 macros would have been harder to deal with)
  • Scala 2 macros exposed the internals of the Scala compiler, which is one of the main reasons why they became so problematic. This is not the case with Scala 3 macros, where even the current non-SIP version has been deliberately designed so that it doesn’t leak any dotty internals
  • Due to exposing compiler internals, Scala 2 macros had a massive scope almost entirely on release (whitebox/blackbox), which then of course meant that libraries started relying on all of these features. This isn’t the case with Scala 3 macros, which quite deliberately are starting small.

The issues with Scala 2 macros are understandable, but we have to be wary of sensationalizing them and going from one extreme (which was Scala 2’s wild-west macros) to the complete other extreme (which is where I would argue we are now).

Even with the current macro annotations not being part of a SIP, the situation is already orders of magnitude better than what Scala 2 macros are, for the reasons I stated before. Unless macro annotations are decided to be entirely removed, even if we removed the whole @experimental annotation from them right now, the ramifications would be nowhere near as bad as with Scala 2 macros.

And just to drive the point further home about how much of a big deal Scala 2 revealing compiler internals was: it also meant it was really hard to evolve Scala 2 in fundamental ways, because those internals were leaked into the macro API (even though it was considered experimental), so such changes could have broken macros for the authors writing them. We don’t have these problems with the current design.

To put it differently, Scala 2 macros were experimental precisely because they were doing things like exposing compiler internals (which at the time everyone pretty much knew wasn’t the best idea, but it was better than nothing), whereas Scala 3 macros are experimental because they are new. So to me, claiming that Scala 3 macros without the @experimental annotation would pose the same problems as Scala 2 macros is a reductionist line of thinking that misses all nuance.

I can understand using the @experimental annotation because there is a rule with SIP/new features, in which case, so be it: rules are rules. But let’s not conflate this with actual Scala 2 macro community experiences and, most importantly, the reasons behind them.

I hear you. There does indeed seem to be some sort of general pattern here in Scala 3.

I’ve called it “overreaction” a few times in different contexts.

In fact the overreaction went so far that a few months ago macro annotations were considered completely out of scope for the new macro system. Because of some quite specific issues with the old macro annotations…

In general macros in Scala 3 are still not there. They’re still not powerful enough. Besides some code transformation features they still don’t allow proper code generation (imho the more interesting part of macros).

But I think there is a difference here from real overreactions: whereas “We won’t do that, period” reactions are clearly overreactions imho, the new, strict definition of “experimental” is not an overreaction per se. It’s just a very strict (and imho the only valid) definition of “experimental”.

Experimental means experimental. No escape hatch here.

Because if you water down the definition it becomes useless.

How would you prevent this from happening again?

Imagine that macros don’t go away, but a major paradigm shift happens that requires all macros to be rewritten from scratch. (I’m not saying this is likely, but in theory it could happen. The whole point of experimentation is that you don’t know the outcome in advance. Surprising things may happen.)

But if such macros were to be widely deployed in production systems, forcing large orgs to make large investments to change this again, what do you think some people here might expect to see in their (private) inboxes?

Why do you think the macro authors would be willing to rewrite their macros (which “work fine”) again this time? Why do you think end-users would be willing to put new untested code into production?

I really don’t think this problem is made up!

The new macro annotations don’t even fully exist yet.

How do you know there are no issues with them? We haven’t even tried them yet and you say “they work fine”. Isn’t that a bit of a premature conclusion?

First of all, I think that following rules for the sake of following rules is just plain bullshit.

The whole point of rules is that they’re there for a reason.

What you’re asking for now is actually to bend the definition of “experimental feature”. But this definition needs to be strict, by design! If that definition were not strict and did not communicate its intent very clearly to the outside world, it would become useless in general. There is simply no purpose for a willy-nilly definition of “not officially supported”. This is a purely binary distinction: either it’s fully supported, or it just isn’t officially supported. There is nothing sane in between.

The latter case does not mean you cannot or should not use it. But you need to be fully aware that you’re completely on your own. No commitment from upstream whatsoever.

One last thing: in case you accept that the definition of “experimental” needs to be strict, I guess you would probably (indirectly) ask for macro annotations to be fast-tracked and made a stable feature ASAP.

I beg you not to push this too much! Please don’t create a sense of urgency here.

Good design takes time. It needs the freedom to scrap some parts and start over if some direction turns out to be sub-optimal. But having someone breathing down your neck constantly asking you to deliver “something” doesn’t create a sense of freedom to actually experiment; quite the opposite.

I want to point once more to Rust. They had a quite similar situation with their async feature.

It took many years for Rust’s async to become stable. This meant that a lot of people could not use it because they were tied to “stable” Rust, or they had to resort to nightlies for all their purposes, including running in production. Of course, this also affected the whole library ecosystem: anything that used experimental async features needed to depend on a nightly compiler…

The constant push (for years!) to finally deliver something burned out some of the key players there. And in the end, what Rust delivered is actually quite mediocre. They probably needed more time to fix all the known problems. But given the outside pressure they were under, they finally delivered a sort of 80% solution. Now they’re stuck with a suboptimal design more or less forever…

I don’t want to see this result in Scala! Things should be shipped when they’re done, not before!

It’s much more important that the result is really good than that “something” is out there fast.

And languages are even more special in this regard, as it’s really very, very hard¹ to change something after the fact. With a language, you stick indefinitely with what you have designed and released as “stable”. You might say that you only have one shot at hitting the target. Aim well, and don’t shoot unprepared!


¹ Just look at what kind of woes it will likely take to merely swap around two function parameters to repair a weird design decision. Changing things once committed is really no fun exercise.

The point I am making is that Scala 2 macros were considered experimental because of the way they were designed, i.e. they were exposing Scala 2 compiler internals, and they implemented concepts which in retrospect they shouldn’t have (i.e. whitebox macros). IIRC even the original author of Scala 2 macros, Eugene Burmako, stated this (although that was a long time ago; we are talking a decade ago).

On the other hand, Scala 3 macro annotations are considered experimental just because they are new; that’s it. They are pre-SIP and they haven’t been used yet.

And this distinction is critical, because plenty of new stuff was added in Scala 2 without a formal SIP process and it didn’t create the same problems that macros did. There were clearly other things at play beyond just marking something @experimental or not.

Unless the large orgs were themselves writing macros, it wouldn’t affect them at all. It would affect the authors of the macro libraries themselves.

I’m not sure what you are implying with “untested”, but users put untested code into production all of the time. And regardless of the design of macros, even if they are “experimental” it’s always possible to test that they are doing what you expect, because fundamentally you test that the AST the macro produces is what you expect.
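For illustration, a sketch of what I mean, assuming a hypothetical @memoize macro annotation that rewrites a def to cache its results (the annotation, all names, and the munit dependency are invented for the example); the assertions pin down the behaviour of the generated code:

import scala.annotation.experimental

@experimental // required transitively, as discussed above
class MemoizeSpec extends munit.FunSuite:
  var evaluations = 0

  @memoize def slowSquare(n: Int): Int =
    evaluations += 1
    n * n

  test("the expansion caches repeated calls") {
    assertEquals(slowSquare(4), 16)
    assertEquals(slowSquare(4), 16)
    assertEquals(evaluations, 1) // second call hit the macro-generated cache
  }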

That’s simple: the design of Scala 3 macros avoided almost every single pitfall of Scala 2 macros. That is a fact; I already stated how previously.

I am not saying that Scala 3 macros will never have issues. What I am saying is that a lot of the issues that users experienced with Scala 2 macros are unique to Scala 2 and won’t happen here. I also don’t think we should entertain the “how do we know there won’t be issues” argument, because it is a fallacious, reductionist way of thinking: with that premise you can argue whatever you want.

Almost anything is by definition new, and whether something requires a SIP or not comes down to arbitrary distinctions which get codified in rules.

Yes, and the reason(s) are being questioned here right now. Rules are a tool, and if they are not fulfilling their purpose (or are causing more problems than they solve), it’s entirely reasonable to question the rules.

TBH, I don’t really care what Rust does. They are a new language that started completely from scratch, and they have an entirely different set of issues, users, and even language design that doesn’t apply to Scala. So I am saying we should stop comparing ourselves to Rust, because Rust is not dealing with the issues that Scala has, namely that Scala is in this in-between transition between Scala 2 and Scala 3, which is something that you entirely dismissed earlier. We can take inspiration from Rust if we want, but we have gotten to the point where any alternative ways of thinking or criticisms of the current process are handwaved away because “since Rust did it, it must be right!”.

If you want a more legitimate example, then you should be comparing us to Python (specifically Python 2 to Python 3); that transition took a decade (if not longer) because of the same kinds of problems I am pointing out now.

You cannot look at Scala 3 development in a vacuum while ignoring Scala 2. The vast majority of code (both open source and in companies) is still on Scala 2, and as @morgen-peschke pointed out earlier, if you ignore this then you will neuter adoption of Scala 3, hurting everyone. It really is that simple.