I believe it is a good idea to have clear guidelines about what annotations can and cannot do, and to keep to those guidelines in our own distribution. Otherwise we risk a “broken window” effect where, soon enough, annotations are free to change or influence anything, so that no categorization is possible anymore. This will be particularly important for setting expectations about what an eventual annotation macro design should accomplish.
With that in mind there’s a good case to be made to make infix and mixin soft modifiers instead. I can’t really see a downside to doing this. But we have to do it quickly, before we release Scala 3.
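For illustration, here is a sketch of what the soft-modifier form could look like (hypothetical: it assumes the proposal is adopted and uses Scala 3 syntax; `Vec` and `dot` are made-up names):

```scala
trait Vec {
  // `infix` as a soft modifier (sketch of the proposal): it documents
  // that this method is intended to be applied in infix form.
  infix def dot(that: Vec): Int = 42
}

val v = new Vec {}
val d = v dot v   // infix application, as advertised by the modifier
```

Because `infix` would be soft, existing code that uses `infix` as an ordinary identifier would keep compiling.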
Isn’t this category 1, “tighten or relax error checking or warnings, or customize error messages”? In particular, you placed @uncheckedStable in this category, which can be used in Scala 2 to make any parameterless def stable, thereby allowing it to be used in a path (this annotation isn’t implemented at all in Scala 3 currently).
Yes, but I believe unchecked... annotations are different in a way. They assert a semantic property (in this case, being stable), that cannot be checked by the compiler. It feels different in quality from @infix.
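A small sketch of the semantic property @uncheckedStable asserts, as it behaves in Scala 2 (`Config`, `Settings`, and `show` are made-up names):

```scala
import scala.annotation.unchecked.uncheckedStable

trait Config { type Repr }

object Settings {
  private val underlying: Config { type Repr = String } =
    new Config { type Repr = String }

  // Asserted, not checked: the compiler trusts that this parameterless
  // def always returns the same value, which makes it a stable prefix.
  @uncheckedStable def config: Config { type Repr = String } = underlying
}

// `Settings.config.Repr` is now a legal path-dependent type:
def show(x: Settings.config.Repr): String = "repr: " + x
```

If the def actually returned different values on different calls, the type system would be unsound, which is exactly the unchecked part.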
At some point in the future we might have a way to check that a function is pure. Then a case could be made for a modifier stable or pure that would influence the way a function can be used. And at that point I believe it should be a modifier.
So I believe the main reason why @uncheckedStable is an annotation and not a modifier is the unchecked part.
I like the increased regularity we get from making infix and mixin be soft modifiers. As long as the context isn’t ambiguous, I think this is a win.
Is there any difference to how modifiable these things are considered to be? Could we, for instance, decide that @main is not how we want to do things and remove it in a point release, whereas we would be stuck with infix if we decided we didn’t like it? Or is this all stable on the same timescale?
What is, or what should be, the distinction between a modifier and an annotation? If I remove an annotation and some code suddenly becomes invalid, or tests start failing, then it was a modifier after all, not a mere annotation.
IMHO Scala should either keep a clear visual distinction between the two, or else treat every modifier as an annotation and change to @private, @protected, @abstract, etc. for the sake of regularity.
I have several issues with the methodology used here. It is looking at the historical status quo of things that are annotations, and trying to categorize them.
First, it should look at the existing modifiers to try and categorize them at the same time. A cursory glance at the keyword list suggests that several modifiers would fit in the mentioned categories. I would highlight:
override fits the definition of category 1 to some extent (not less than @uncheckedStable and @uncheckedVariance at least, IMO). It’s probably justified as a keyword because it is overwhelmingly widespread, but it could have been @override def
lazy perfectly fits the category 2. It could very well have been @lazy def or @memoized def.
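A minimal illustration of why lazy is purely behavioral (`LazyDemo` is a made-up name):

```scala
object LazyDemo {
  var evaluations = 0
  // `lazy` changes only when the right-hand side is evaluated and
  // memoizes the result; it has no effect on typing whatsoever.
  lazy val answer: Int = { evaluations += 1; 42 }
}

assert(LazyDemo.evaluations == 0)  // not yet forced
assert(LazyDemo.answer == 42)      // forced on first access
assert(LazyDemo.answer == 42)      // cached, no re-evaluation
assert(LazyDemo.evaluations == 1)
```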
Second, the following annotations are not categorized:
@deprecatedOverriding, @deprecatedInheritance, @migration: fit in category 1.
@deprecatedName: you’d think it would fit in category 1, but it has a much stronger impact as it allows some calls that would otherwise not even pass typer, let alone refchecks. It does not fit category 2 either. (note that its effects are similar in scope to what @infix does)
@BeanProperty and @BooleanBeanProperty: they would fit in category 2, except that they also allow some calls that would otherwise not typecheck
The meta-annotations: they would have to be a third category.
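To make the two “allows calls that would otherwise not typecheck” observations concrete, here is a sketch (`Api` and `Person` are made-up names; behavior as in Scala 2.13/3):

```scala
import scala.annotation.deprecatedName
import scala.beans.BeanProperty

object Api {
  // The old parameter name `n` still typechecks at call sites (with a
  // deprecation warning): the annotation changes what passes typer,
  // not merely which diagnostics are emitted.
  def greet(@deprecatedName("n") name: String): String = "hi " + name
}

class Person(@BeanProperty var name: String)

val r = Api.greet(n = "Ada")   // accepted only because of @deprecatedName
val p = new Person("Ada")
p.setName("Grace")             // generated accessor, visible to Scala code too
```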
Third, my two points above show there already exist inconsistencies with the proposed model. The after-the-fact categorization is therefore at best a rationalization of the status quo, and definitely not making explicit a sort of hidden policy/guidelines used to get to the status quo in the first place.
Last, but not least, one of the goals listed for the categorization above is:
This will be particularly important for setting expectations about what an eventual annotation macro design should accomplish.
When I look at the categories (even the more precise ones added just above my post), I don’t see which ones would be fine for macro annotations and which ones wouldn’t. Within a single category I find both: @threadUnsafe would probably be implemented as a macro annotation, although @strictfp wouldn’t, to use a stark example.
Given all the above issues with the methodology, I think we should follow an entirely different one that is not based a priori on rationalizing the status quo. Instead, we should first define what we want the categorization to achieve, and work from there. If it is about deciding what macro annotations will and will not be able to do, we need to find categories that allow us to define precisely that. If it is about deciding what the “core language” is, in terms of type checking (is it valid TASTy?) / type inference / run-time behavior, then we should define the categories accordingly. Currently, the proposed categories do not achieve any stated goal.
It’s fine to try to come up with different categorizations. I was only trying to make some sense of what we have.
What I want to avoid is the conclusion: “there is no valid categorization, so it’s a free for all”. Unfortunately, the discussion here veers into that general direction.
Of course, some of it is historical. I would argue that override should never be an annotation. It fits in none of the categories I have outlined. lazy could be an annotation, indeed. So we might have to add as a further criterion
Prefer a modifier over an annotation if it is used often, not just in some specific use cases.
That would justify keeping @strictfp, @volatile, @threadUnsafe and @elidable as annotations, but lazy as a modifier. But I agree this one is a borderline case. Maybe they should all have been defined as modifiers (but it’s too late to change that now).
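For instance, @threadUnsafe (Scala 3 only) merely swaps the run-time initialization scheme of a lazy val, leaving typing untouched; a minimal sketch (`Cache` is a made-up name):

```scala
import scala.annotation.threadUnsafe

class Cache {
  // No double-checked locking is generated for this lazy val; only the
  // run-time initialization strategy changes, not the typing, which
  // matches the "rarely used, keep as annotation" criterion above.
  @threadUnsafe lazy val table: Map[String, Int] = Map("a" -> 1)
}

val hit = new Cache().table("a")
```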
One could argue that BeanProperty and BooleanBeanProperty are currently implemented in the wrong way (in Scala 2; they are currently not implemented at all in Scala 3). They should be interop only, which means the definitions they generate should not be visible (in the same compilation run) to surrounding code.
My own categorization would start from the following goal: to specify TASTy as nicely as possible, and therefore the validity of elaborated Scala programs. I will use “type checking” to mean the ability to check that a TASTy program is well formed; and “elaboration” as the process to create TASTy from .scala files. That means that type inference, implicit resolution, etc. are not part of what I mean by “type checking” in this message.
With that goal in mind, I see the following categories, in increasing order of “core-ness”:
Optimization/hints only, i.e., not (supposed to be) observable: @inline, @specialized, etc.
Error/warning control only, i.e., can only change what errors or warnings are emitted, but is otherwise completely unobservable: open, @deprecated, @implicitNotFound, @nowarn, etc. In particular, this does not include the things that make the type system check or not (@uncheckedStable, @uncheckedVariance)
Behavior only, i.e., does not impact elaboration nor type checking, nor even interop, but can affect run-time behavior: @strictfp, @volatile, lazy
Interop, i.e., does not impact elaboration nor type checking within Scala itself, but does export a different API in host languages: @throws, @static, your desired @BeanProperty (maybe @varargs? not sure; I think it interacts with override checks), most @JS... annotations.
Elaboration, i.e., everything that influences elaboration, but leaves no defining traces afterwards: @mixin, @deprecatedName, @infix
Type checking, i.e., everything that impacts the TASTy type system and whether a program is well-typed under that type system: abstract (prevents new), @uncheckedStable, @uncheckedVariance, @targetName. There is some gray area for the things that fall under “ref checking”: override, visibility modifiers (except private), @native, sealed, abstract, final.
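To illustrate the “type checking” category, a sketch with @uncheckedVariance (`Source` and `Once` are made-up names):

```scala
import scala.annotation.unchecked.uncheckedVariance

// @uncheckedVariance suppresses a variance error, i.e. it changes
// whether the program is well-typed at all, which is what places it in
// the "type checking" category rather than mere warning control.
trait Source[+A] {
  // Without the annotation this does not compile: a covariant type
  // parameter would occur in contravariant (parameter) position.
  def restartWith(initial: A @uncheckedVariance): Source[A]
}

class Once[+A](val a: A) extends Source[A] {
  def restartWith(initial: A @uncheckedVariance): Once[A] = new Once(initial)
}
```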
I am not sure where to put @compileTimeOnly. I think it fits in error/warning control only.
IMO, only the “type checking” section (and its gray area) deserve keywords (sort or hard). Everything else can be in annotations, as they’re not core to TASTy’s type system.
Under that categorization and goal, @mixin and @infix should stay as annotations, together with @deprecatedName. I would like @uncheckedStable and @uncheckedVariance to become keywords; and I would like lazy to become an annotation.
Aside: as an alternative to @uncheckedStable, it’s worth noting that it is very compiler-oriented. It means “set the Stable flag on that symbol, and roll with it”. As users, we use that annotation on a def to make it a val from the point of view of the type system. Why are we not defining them as vals, then? Well, because we want them to be evaluated as defs. We could reverse the meaning, and introduce @noMemoize/@noField, to be added on a val to make it evaluated as a def. That would be a true behavior-only annotation in my categorization, whereas @uncheckedStable is supposed to be a keyword/modifier.
As an application developer I can’t say much about what categorization would be best, since I have never worked with most of these annotations before. However, I think lazy is way too common to make an annotation, whereas I haven’t encountered uncheckedStable even once. So when flipping keywords/annotations I would also take into account the “average” Scala developer and how they perceive the language. IMO lazy should stay a modifier, since it is too common and important to be an annotation.
I don’t think that Tasty is the right level to discuss the categorization. If we follow this to the end, then something like implicit or sealed should be an annotation, since neither has a bearing on Tasty’s type system. But that’s clearly taking it too far.
I also think that making uncheckedVariance, uncheckedStable, targetName, or deprecatedName modifiers would take things too far.
So I could agree that anything that affects typing before and up to Tasty is a candidate for a modifier. But I believe one should make exceptions for the unchecked and deprecated categories. These somehow do not look like they are on the same level as the others.
sealed is clearly part of my “gray area” that I think belongs to the TASTy type system. Extending a sealed class A from class B when B is not listed in the child classes of A breaks the type system, in a way. It’s like final.
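A minimal example of how sealed feeds the type system (made-up `Shape` hierarchy):

```scala
// `sealed` is type-system-relevant: the compiler knows the full set of
// children, so pattern matches can be checked for exhaustiveness, and
// extending Shape outside this file is rejected, like with `final`.
sealed trait Shape
case class Circle(r: Double) extends Shape
case class Square(side: Double) extends Shape

def area(s: Shape): Double = s match {
  case Circle(r)    => math.Pi * r * r
  case Square(side) => side * side   // no default case needed: exhaustive
}
```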
implicit is an interesting observation. It’s true that I find it hard to justify as a keyword using my categorization, yet it’s clearly a keyword. Then again, it’s going away, so … given and using don’t have issue, because given is a kind of definition (it does not even live in this entire categorization effort, not any more than if), and using is important at the type checking level IMO: it is not valid to provide using arguments to non-using parameters and vice versa.
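A sketch of that last point about using, in Scala 3 syntax (`smallest` is a made-up name):

```scala
// `using` matters at the type-checking level: explicit arguments for a
// `using` clause must themselves be written with `using`.
def smallest(xs: List[Int])(using ord: Ordering[Int]): Int = xs.min(using ord)

val a = smallest(List(3, 1, 2))                       // resolved from scope
val b = smallest(List(3, 1, 2))(using Ordering.Int)   // explicit, marked `using`
// smallest(List(3, 1, 2))(Ordering.Int)              // error: not marked `using`
```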
(Note that @deprecatedName should stay an annotation according to my categorization. I’m answering for the other 3, here:) Perhaps, but then the question is: what makes them fundamentally different from the stuff that clearly are keywords?
There is an interesting advantage to annotations, which may come into play in the decision: when viewed inside an IDE, annotations allow hovering or control-clicking to see their associated documentation.
It’s a nice way to quickly obtain information about obscure, little-known annotations, like @compileTimeOnly, without having to google them.
Based on this, my criterion would be: for common core language features everyone is expected to know, use keywords. For more specialized features which are only used in special cases, use annotations, which are visually more intrusive but are also more informative.
Regarding @mixin, I wonder if it’s possible we’re barking up the wrong tree. (Sorry, I wasn’t sure which thread to put this in, but it does affect the decision of whether mixin should be a keyword or annotation — maybe it shouldn’t be either…)
Is it really something we want to mark traits with?
The main use case that I know of is Product and Serializable. Both come up because they’re added by the compiler.
Maybe the reason I don’t want to see them is not that they’re special, i.e. that nothing involving Product should ever have Product inferred in the LUB.
Maybe it’s because I never expressed interest in Product. It’s just something the compiler does behind the scenes.
Another reason why I don’t want the compiler to infer Color with Product with Serializable is that all Colors extend Product with Serializable, and therefore it’s not adding useful information.
Then again, it kind of is… what if I want to call .productIterator on it?
So in short, can we take a step back and clarify what are the circumstances under which we want to exclude something from a LUB / inferred union, and what are the reasons to not infer it?
I can think of 3 kinds of factors:
Trait A vs. trait B – it depends on how the user “classified” a given trait, as a “super trait / impl trait / mixin trait / whatever terminology”
How it came to be inferred (this is important in any case) – by being a common supertype among other common supertypes in an expression with multiple branches (if, match); by transfer from another type (def x: Product = ???; val y = x); other ways
What makes it a supertype – explicit extends keyword; generated, such as for a case class; inherited indirectly

#3 makes me think that maybe instead of modifying the trait, we should be able to modify the extends clause, like class A extends B & C & D
Not sure what the syntax could be but here are some random ideas:
Something like extends vs. implements, where there is a separate clause
Somehow do something with self types.*
with vs. &
An annotation on the type within the extends clause (does this parse? class A extends B with C @mixin with D)
*Actually self types are very relevant. They are similar in that they express inheritance and subtyping in a way that’s hidden from other parts of the code. The difference is that self types only declare the inheritance and don’t actually “provide” it, it needs an extends clause somewhere else. Here we might have a “hidden extends” concept where something is fully a subtype of something else but that fact is suppressed in the context of type inference. Together that would make 3 different ways of expressing inheritance with differences between each.
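A minimal sketch of that difference (`Logger`, `Service`, `App` are made-up names):

```scala
// Self types declare a dependency without providing the inheritance:
// Service can call Logger's members, yet Service is not a subtype of Logger.
trait Logger { def log(msg: String): String = "LOG: " + msg }

trait Service { self: Logger =>
  def run(): String = log("starting")  // allowed via the self type
}

// An extends clause elsewhere must actually provide Logger:
object App extends Service with Logger
```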
I considered changing the extends clause instead. Something like private inheritance in C++. What convinced me otherwise was the paper by Ross Tate (referenced in the original PR for super traits) that showed empirically that there is a sharp distinction between traits that should be used in types and traits that should not. He calls the latter “shape traits”. So pushing this into the extends clauses is counter-productive. We should know when we define a trait what kind it is.
And the issue is far bigger than just Product and Serializable. For now there’s also Comparable and 9 traits in the standard library collections that are treated that way. And that was just a rough first sweep; I am sure that if we look carefully there are many more.
Actually, I don’t think the issue is much bigger than Product and Serializable. These two show up a lot because of case classes. Others are extremely rare, except where they are intentional.
For example, Comparable[A] is invariant in A, so for Comparable[A] to make it into the LUB, it would have to be the same A, and that is unlikely to be by accident.
If the concern is that people are annoyed or confused when mysteriously Product or Serializable show up because they used case classes, there is a simple solution: have every case class extend a trait CaseClass, which in turn extends Product and Serializable, then the LUB will be CaseClass instead of Product with Serializable and no one would be surprised.
Maybe even have traits CaseClass1, CaseClass2, etc., which extend Product1, Product2, etc.
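A sketch of this idea (hypothetical: `CaseClass` is an assumed marker trait, not part of the standard library, and today case classes would have to extend it by hand):

```scala
// Assumed marker trait: absorbs Product and Serializable so the
// inferred upper bound mentions CaseClass instead of spelling them out.
trait CaseClass extends Product with Serializable

sealed trait Color
case class Red() extends Color with CaseClass
case class Blue() extends Color with CaseClass

// The widened LUB of the two branches would then go through CaseClass
// rather than surfacing Product with Serializable directly.
val c = if (scala.util.Random.nextBoolean()) Red() else Blue()
```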
I have proposed to go with transparent instead of @mixin, as argued in the other thread.
If we follow that recommendation, there’s just @infix as an annotation that is different from the others. No matter how we decide to categorize annotations in the end (and we have time for this), it’s a bad idea to introduce a precedent now with @infix that is used quite differently from any other annotation. So I propose to make it a soft modifier instead.