@bytecodeName, @internalName, @platformName
Exactly this, yes! Having both names defined at the same time enables using operators without as much pain. It makes it clear that you can't override just one or the other, or override the two names separately. Both names are wanted for calling conventions, but two separate and separable definitions are undesirable.
Having the declaration force the operator to be used and make the linked name become unavailable will absolutely make people avoid this feature and continue the old behavior of defining one name in terms of the other instead.
Relevant open ticket on the Dotty issue tracker: @alpha conflicts with user-defined aliases, which contains a link to some gitter discussion as well.
What about simply @aka("plus")? Too cute maybe?
If non-mandatory, I like @aka very much: short and with a promise of being known. But then I definitely think it also should be known from the Scala side. I concur with @hypatian 's and @kai 's and others' views that if the annotation generates an automatic Scala method with nice doc from scaladoc, then there is a big benefit of using it: you get something for the extra code spent on annotation and import. I don't find it critical to beginners (especially as the binary compatibility reasoning is probably very abstract to them), but it sure is a bonus if you can google + using plus (even if the unrelated social network Google Plus is dead these days), and searching the net for docs is what we do all the time, as @deusaquilus points out. And that is easy to explain to a beginner… If you are rewarded with generated stuff on the Scala side (method + docs), then library authors might be more inclined to use it consistently.
It would be good if other members of the SIP committee would weigh in here since the opposition to providing dual names was quite strong.
I can recall two reasons:
First, it leads to more ways to do things. A library writer might use an alpha annotation for a symbolic name, since that's the recommended way to treat symbolic names. But that would open the possibility for everyone to call the method with either the symbolic or the alphanumeric name. We have learned by now that this is a bad idea. We just got rid of the ability to call normal methods infix, so we do not want to open another way to make one use of a library look different from the next one. One could say that Scala already provides more than one way to do things in many instances, so how is this different? I would argue this is worse since it's more indirect. When Scala provides different ways to do things, we usually have a very detailed reasoning why this is so, and we avoid the different ways unless there is indeed a good reason why both ways should be supported. This is more indirect: no matter what library A does, its clients can call it in different ways, and there's not even internal consistency enforced.
The other reason is that I believe allowing both names to be called would be an encouragement to define more symbolic methods. By now, there's strong push-back against symbolic names because they are harder to google. But if this argument goes away, I see many people using some symbolic names they fancy, with the argument that one can still use the alpha name if one does not like it. Seen over the whole ecosystem, this will likely lead to a worse codebase than otherwise.
This reasoning might look strange to some people. We all just want the flexibility and argue that we should be trusted to use it wisely. The one thing I learned the hard way is that this is just not true. People will generally not use it wisely, or not agree what wise usage is. The only way to restrain this is if the language is more opinionated, and this matters most at the interface between a library and its clients.
But yes, @aka would imply that both names are usable, so @encodeAs or @platformName are better to express this.
It does, and potentially some names could even be inferred automatically. Only if you want to develop a library, or in any case need something as advanced as Scala-to-Java interop, do you have to deal with the other name, and at that point you can easily learn it.
I'm still not clear how @alpha can possibly work with mixed-language type hierarchies. Consider this scenario:
- Interface AJ, written in Java, declares a method plus
- Trait AS, written in Scala, declares method + with @alpha("plus")
- Interface BJ, written in Java, extends both AJ and AS, and declares method plus
- Trait CS, written in Scala, extends BJ
We assume all method signatures are identical.
As seen from BJ, there is an AJ.plus and an AS.plus, so we expect BJ.plus to override both of them.
But what does CS see now? Clearly, it should see BJ.plus. Does it also still see AS.+? But how is that possible if BJ.plus overrides AS.+?
Yes, thanks for the clarification; I can definitely accept that argumentation, although I think it may differ depending on whether we talk about simple symbolic plus/minus with the intuitive semantics of adding and removing etc., or other esoteric ones not taught since primary school… So one opportunity is maybe to have special rules for a selection of "well known" symbolic operators and require hoops to jump through for more esoteric symbolic methods. But then again we want to avoid special cases and make things general… I'm hesitating on what is best in general and trust you experienced language designers on the semantics of non-mandatory @alpha or whatever it will be called (preferably something short). But I'm still wholeheartedly convinced that a type with a simple + method or similar should not require "strange" complication, as said already. (And I can without problems live with defining alpha-named method aliases to symbolic operators myself in my libraries when needed, and pledge not to introduce def !:#!//!:==>> )
But what does CS see now? Clearly, it should see BJ.plus. Does it also still see AS.+? But how is that possible if BJ.plus overrides AS.+?
There's no need to guess, one can try it out. CS sees both + and plus. They both mean the implementation in CS, no matter whether the implementation in AS is concrete or abstract.
How about we declare the set of "standard" operators exempt from the annotation requirement? Everyone knows how +, *, etc. should be read out loud; we don't need an annotation there (we would ideally use clear internal names for them, like plus and times instead of $plus and $times, but that would probably cause migration problems). The real place we need to enforce this annotation is for obscure names like ^?#! …
Many languages, like C++, Python, and Rust, already allow "overloading" standard operator names, so this would be quite natural for beginners to understand.
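For illustration, here is what such an exempted operator would look like in practice; the Money type and its methods are invented for this example, not taken from the thread:

```scala
// A minimal sketch: user-defined + and * with their conventional meanings,
// exactly the case the proposed "standard operator" exemption would cover.
case class Money(cents: Long) {
  def +(other: Money): Money = Money(cents + other.cents)
  def *(factor: Long): Money = Money(cents * factor)
}

object MoneyDemo extends App {
  assert(Money(100) + Money(250) == Money(350)) // reads naturally as "plus"
  assert(Money(100) * 3 == Money(300))          // reads naturally as "times"
}
```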
I'd read ^?#! as what it does (e.g. "A could be a B" if that's what it does), but it'd be the same as the alpha name, I guess.
I suppose common names would be "by" for / and "modulo" for %. But % is also used in SBT files to construct artefact definitions, and / is used to construct paths in various libraries, including Better Files and Akka Http.
Nothing prevents you from annotating these operators, if they have non-standard meanings.
Yes, I agree that "simple" symbolic methods such as + and - are far less problematic, and a special rule for them might be good, but on the other hand, every new rule is a potential burden. I'm not sure what's best here, but I'm sure plausible code should be as easy to read and understand as possible.
I want to mention the solution of adding an optional boolean parameter to the annotation to expose the name, not so much because I think that's the right solution but to have it out there.
@encodeAs("combine", visible=true)
def |+|(a: A)
The longer I look at it the less I like it (and the less I like that this is done in an annotation rather than dedicated syntax), but at least it's out there now.
In FP there are some established symbolic combinators. It would be great if they could be conveniently offered by libraries.
It'd be nice if reflection weren't used when you call a method of a structural type from an inline def. Then you might be able to provide a + method for all classes that define a plus method, or something like that.
extension [A <: {def plus(o: B): C}, B, C](a: A)
  inline def +(o: B): C = a.plus(o)
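Until something like that exists, a reflection-free approximation can be written today with a type class instead of a structural type; this is only a sketch, and all of the names (HasPlus, Meters) are mine:

```scala
// Sketch of a reflection-free alternative: a type class records that A has
// a plus-like operation, and a generic extension method exposes it as +.
trait HasPlus[A, B, C] {
  def plus(a: A, b: B): C
}

extension [A, B, C](a: A)(using hp: HasPlus[A, B, C])
  def +(b: B): C = hp.plus(a, b)

// Example type that defines only an alphabetic plus.
case class Meters(n: Int) {
  def plus(o: Meters): Meters = Meters(n + o.n)
}

given HasPlus[Meters, Meters, Meters] = (a, b) => a.plus(b)

@main def hasPlusDemo(): Unit =
  assert(Meters(2) + Meters(3) == Meters(5))
```

The price is that each participating type needs an explicit given instance, whereas the structural-type version would work for any class with a matching plus.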
Why are we mainly discussing an option where the symbolic name is primary and the alphabetic name secondary? An ideal solution in my book would designate the alphabetic name as THE primary one: it would be the name shown on hover by the IDE, featured in documentation, etc., and the name of the method to override when implementing the class:
trait Monoid[A] {
  extension (self: A)
    @aka("++")
    def combine(that: A): A
  def empty: A
}
The bytecodeName concern can be handled with a separate annotation, and would indeed be a great way of resolving overloads instead of DummyImplicit:
def fn(a: Any)
@bytecodeName("fnT")
def fn[T](t: T)
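For comparison, the DummyImplicit workaround this would replace looks roughly like this; the overloads below are my own illustrative ones, not the fn signatures from the post:

```scala
// Today, two overloads that erase to the same JVM signature are commonly
// disambiguated by padding one of them with a DummyImplicit parameter,
// which changes the erased signature without changing call sites.
object Overloads {
  def fn(xs: List[Int]): String = "ints"
  def fn(xs: List[String])(implicit d: DummyImplicit): String = "strings"
}

object OverloadsDemo extends App {
  assert(Overloads.fn(List(1, 2)) == "ints")
  assert(Overloads.fn(List("a")) == "strings")
}
```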
The operator's bytecode name may be handled in the same aka annotation:
@aka("*>", bytecodeName = "starRight")
def productR[F[_], A, B](fa: F[A], fb: F[B]): F[B]
I understand the appeal of having both names accessible from Scala, but I still can't decide between ++= and addAll, especially because of the risk of accidentally incurring "infix assignment expansion."
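To spell out the expansion being referred to: when the left-hand side is a var whose type has no ++= member, x ++= y is rewritten by the compiler to x = x ++ y:

```scala
// Immutable List has no ++= method, so on a var the compiler expands
// `xs ++= ys` into `xs = xs ++ ys`.
object ExpansionDemo extends App {
  var xs = List(1, 2)
  xs ++= List(3) // expands to: xs = xs ++ List(3)
  assert(xs == List(1, 2, 3))
}
```

The worry above is that with both names available, ++= might sometimes resolve to a real method and sometimes to this silent expansion, depending on which name the library exposed.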
Duelling dual names could be resolved by using export with rename:
class C { def addAll() = () ; export this.{addAll => ++=} }
and using an annotation for the platform encoding or interop name, with a language feature to enable use of encoded names (and perhaps disable use of the symbolic name, to avoid mixed usages).
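A sketch of how that could look with today's Scala 3 export, exporting from an inner object rather than from this as the one-liner above speculates; the Buf type and its members are mine:

```scala
import scala.collection.mutable.ListBuffer

// Hedged sketch: export-with-rename makes ++= an alias of addAll,
// so both names denote the same single definition.
class Buf {
  private val items = ListBuffer.empty[Int]
  object ops {
    def addAll(xs: Iterable[Int]): Unit = items ++= xs
  }
  export ops.{addAll as ++=}
  def toList: List[Int] = items.toList
}

@main def exportDemo(): Unit = {
  val b = Buf()
  b ++= List(1, 2)
  assert(b.toList == List(1, 2))
}
```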
I was going to joke about how I avoid confusion by using 42.$plus(1), but just realized:
scala> locally { var x = 42 ; x.$plus$eq(1) ; x }
val res3: Int = 43
Whew, "fixed in dotty":
scala> locally { var x = 42 ; x.$plus$eq(1) ; x }
1 |locally { var x = 42 ; x.$plus$eq(1) ; x }
| ^^^^^^^^^^
| value $plus$eq is not a member of Int
So under the language feature scheme, import language.alpha would enable selecting a member $plus or plus introduced by an explicit @alpha("plus"), but not the arbitrary use of $plus$eq, and to enforce that style perhaps also disable use of +.