I hear you. There does indeed seem to be some sort of general pattern here in Scala 3.
I’ve called it “overreaction” a few times in different contexts.
In fact, the overreaction went so far that a few months ago macro annotations were considered completely out of scope for the new macro system, all because of some quite specific issues with the old macro annotations…
In general, macros in Scala 3 are still not there; they’re still not powerful enough. Beyond some code transformation features, they still don’t allow proper code generation (imho the more interesting part of macros).
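(For context, here’s roughly what I mean by “code transformation”. A minimal sketch with names of my own choosing; even plain inline, without the quotes-and-splices API, already rewrites call sites at compile time. That kind of thing works today; generating entirely new definitions is the part that’s still missing.)

```scala
// A tiny compile-time transformation: the compiler rewrites the call site,
// no runtime function call involved.
inline def twice(inline x: Int): Int = x * 2

@main def demoInline(): Unit =
  println(twice(21)) // the call site is inlined to `21 * 2`; prints 42
```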
But I think there is a difference here from real overreactions: whereas “We won’t do that, period” reactions are clearly overreactions imho, the new, strict definition of “experimental” is not an overreaction per se. It’s just a very strict, and imho the only valid, definition of “experimental”.
Experimental means experimental. No escape hatch here.
Because if you water down the definition it becomes useless.
How would you prevent this from happening again?
Imagine that macros don’t go away, but a major paradigm shift happens that requires all macros to be rewritten from scratch. (I’m not saying this is likely, but in theory it could happen. The whole point of experimentation is that you don’t know the outcome in advance. Surprising things may happen.)
But if such macros were to be widely deployed in production systems, forcing large orgs to make large investments to change this again, what do you think some people here might expect to see in their (private) inboxes?
Why do you think the macro authors would be willing to rewrite their macros (which “work fine”) again this time? Why do you think end-users would be willing to put new untested code into production?
I really don’t think this problem is made up!
The new macro annotations don’t even fully exist yet.
How do you know there are no issues with them? We haven’t even tried them yet and you say “they work fine”. Isn’t that a bit of a premature conclusion?
First of all, I think that following rules for the sake of following rules is just plain bullshit.
The whole point of rules is that they’re there for a reason.
What you’re asking for now is actually to bend the definition of “experimental feature”. But this definition needs to be strict by its very purpose! If it were not strict, and did not communicate its intent very clearly to the outside world, it would become useless in general. There is simply no point in a wishy-washy definition of “not officially supported”. The distinction is purely binary: either something is fully supported, or it just isn’t officially supported. There is nothing sane in between.
The latter case does not mean you cannot or should not use it. But you need to be fully aware that you’re completely on your own. No commitment from upstream whatsoever.
One last thing: if you accept that the definition of “experimental” needs to be strict, I guess you would probably (indirectly) ask for macro annotations to be fast-tracked and made a stable feature ASAP.
I beg you not to push this too much! Please don’t create a sense of urgency here.
Good design takes time. It needs the freedom to scrap some parts and start over if some direction turns out to be sub-optimal. But having someone breathing down your neck constantly asking you to deliver “something” doesn’t create a sense of freedom to actually experiment; quite the opposite.
I want to point once more to Rust. They had a quite similar situation with their async feature.
It took many years for Rust’s async to become stable. This meant that a lot of people could not use it because they were tied to “stable” Rust, or had to resort to nightlies for all their purposes, including running in production. Of course, this also affected the whole library ecosystem: anything that used experimental async features needed to depend on a nightly compiler…
The constant push (for years!) to finally deliver something burned out some of the key players there. And in the end, what Rust delivered is actually quite mediocre. They probably needed more time to fix all the known problems. But given the outside pressure they were under, they finally delivered a sort of 80% solution. Now they’re stuck with a suboptimal design more or less forever…
I don’t want to see this result in Scala! Things should be shipped when they’re done, not before!
It’s much more important that the result is really good than that “something” is out there fast.
And languages are even more special in this regard, as it’s really very, very hard¹ to change something after the fact. With a language, you stick indefinitely with what you have designed and released as “stable”. You might say that you only have one shot at hitting the target. Aim well, and don’t shoot unprepared!
¹ Just look at what kind of woes it will likely take just to swap around two function parameters to repair a weird design decision. Changing things once committed is really not a fun exercise.
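(To make that hazard concrete, here’s a generic sketch, with made-up names, not the actual Scala API the footnote alludes to: when two parameters share a type, swapping them breaks nothing at compile time, which is exactly the problem.)

```scala
// Original signature with an unfortunate parameter order:
def transfer(from: String, to: String): String = s"$from -> $to"

// The "repaired" signature someone might want instead:
def transferFixed(to: String, from: String): String = s"$from -> $to"

@main def demoSwap(): Unit =
  // A caller written against the old order still compiles against the new
  // one, but its meaning is silently reversed, with no warning whatsoever.
  println(transferFixed("wallet", "savings")) // prints "savings -> wallet"
```

Every existing call site keeps compiling and silently does the opposite of what it used to, which is why such a “simple” fix needs deprecation cycles, renames, or migration tooling.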