Implementation of extensions doesn't match expectation or documentation

I have previously posted on Stack Overflow and created a bug on this topic.

As I mentioned in both of those places, I do understand how extensions are implemented and why that implementation impacts right-associative methods the way that it does. However, I believe that the implementation we have is confusing at best (violating the principle of least astonishment, POLA) and conflicts with several significant statements in the Scala 3 Book and the Reference Docs (I expand on this assertion in far more detail in my comment on the bug report linked above).

Specifically, the translation of extensions into methods with multiple parameter lists and no target (or “receiver” if you prefer) breaks expectations specifically with right-associative methods.
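
For reference, the translation looks roughly like this for an ordinary (left-associative) extension method. This is only a sketch of the general scheme, not the compiler’s exact output, and Meters, Feet, and plusFeet are names I’m making up for illustration:

case class Meters(value: Double)
case class Feet(value: Double)

extension (m: Meters)
  infix def plusFeet(f: Feet): Meters = Meters(m.value + f.value * 0.3048)

// The extension behaves roughly like a plain method with two parameter lists
// and no receiver, where the first list carries what used to be the target:
def plusFeetExpanded(m: Meters)(f: Feet): Meters = Meters(m.value + f.value * 0.3048)

@main def expansionDemo(): Unit =
  println(Meters(1.0) plusFeet Feet(3.0))            // infix call through the extension
  println(plusFeetExpanded(Meters(1.0))(Feet(3.0)))  // the roughly equivalent direct call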

This design happens not to be a problem with left-associative methods, since everything aligns naturally, but for right-associative methods, developer expectations are broken. This can be seen in all of the posts, bugs, and questions about this issue (the links here are just a sampling).

I’d add that it also makes reading and maintenance significantly more difficult. Prior to Scala 3’s extension construct, every sufficiently experienced Scala developer would have immediately known what this was and how to parse it mentally when reading:

a %: b

While we may not immediately know what the %: operator does, we would know where to look for it: namely, on b’s type.

In Scala 3, however, it is impossible to know what is happening on that line of code without going and finding the implementation of %: and determining whether it is on b’s type or on an extension of a’s type.
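
To make that concrete, here is a minimal sketch with made-up types A and B. Both definitions below let the exact same expression a %: b compile, but with opposite notions of who the receiver is:

class A
class B:
  // Scala 2 style member: a %: b desugars to b.%:(a), so b receives the call
  def %:(a: A): String = "member on B"

object AOps:
  extension (a: A)
    // Scala 3 extension: the written order matches the call-site order,
    // so in a %: b the extension receiver is a and b is the plain argument
    def %:(b: B): String = "extension on A"

@main def whichOne(): Unit =
  val a = A()
  val b = B()
  println(a %: b)   // resolves to the member here: desugars to b.%:(a)
  // with import AOps.* in scope and no such member on B, the very same
  // expression would instead call the extension, receiving a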

Consider also type classes. In Scala 2, a right-associative method on a type class would behave exactly as you expect. To be completely honest, given the above ambiguity around the handling of right-associative methods in the various places they can be defined, I actually have no idea what would happen in Scala 3 if I attempted to define a right-associative method in a type class. A type class is a class, so it seems as though it should behave like a right-associative method on any other class, but its methods are now added in an encapsulated extension, so maybe it will be the inverse of what I expect with regard to target and operand at the call site?
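
To make the question concrete, this is roughly the kind of definition I have in mind; the type class and its operator are made up purely for illustration:

trait Prependable[A]:
  extension (a: A)
    def %:(acc: List[A]): List[A] = a :: acc

given Prependable[Int] = new Prependable[Int] {}

// Is the call now written 1 %: List(2, 3) (declaration order, like other
// extension methods) or List(2, 3) %: 1 (receiver on the right, like a
// class method)? Nothing about the definition makes that obvious.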

While, obviously, I could go write some code to test that out, the fact that it isn’t immediately obvious is exactly the problem I’m concerned about. While it seems to me that community sentiment is opposed to right-associative methods (at least as they worked prior to Scala 3), I cannot conceive of how breaking the model and adding significant confusion to the issue actually helps resolve anything, except perhaps by abusing developers to the point that they drop the construct altogether.

I know that at this point Scala 3 is out there, being adopted, and changing this would be difficult. I’m certainly not opposed to taking on the challenge of championing a fix for this, but I acknowledge that it may be too big of an ask at this point. However, this seems like a significant enough wart in what I consider to otherwise be a truly delightful language that it bears discussing in these terms (as opposed to just asking “how do I do this now”, as in many of the other links I provided).

Am I alone in this concern or perhaps missing some significant context that would make this design make sense to me?

3 Likes

LOL, the bug report got closed as “Won’t fix” without any further justification.

Something’s very wrong here…

A “spec bug” is a bug like any other bug and needs to be fixed, of course. Just because something was written on some piece of paper with a “spec” label on it doesn’t automatically make it “right”, imho.

2 Likes

Yes, if it doesn’t match, then the spec needs an update.

The problem is: It matches the spec.

But the spec breaks assumptions…

The way to “fix” a “spec bug” is to submit a SIP, if it was indeed intended as is. It amounts to intentionally changing the design of the language (in a breaking way in this case), so that’s where it goes. Discussing it on Contributors is the correct first step of that process.

(There could be “editorial bugs”, where the written text corresponds neither to the intent nor, typically, to the implementation; those could be fixed by a PR directly on the spec text, but this is not one of those.)

4 Likes

OH! Progress.

From a “hard no”, to “let’s talk”.

That’s a good outcome for now. :rocket:

That’s why I’ve put “spec bug” in scare quotes. A definition can’t be “wrong”. But it can be weird.

What was the tooling for making quantitative explorations across the whole ScalaDex? I’m not sure a “fix” would break anything, as from the perspective of an end user this never worked as intended. So I would guess “nobody” used this feature anyway (like me) and just waited for a “fix” for the “obvious bug”.

:smile:

But I guess it needs some scripts around it, doesn’t it?

That you used tasty-query on the libs from the ScalaDex index was mentioned in the SIP.

I mean, maybe the code you used could be dumped somewhere so it could be adapted and reused for further “quantitative studies” across the Scala library ecosystem?

It looks something like this:

but it might not be the latest version. That doesn’t matter for adapting it to other analyses.

1 Like

Ok, so at least I’m not alone in thinking this is “wrong”.

Are we sure that this is currently working per spec? Just because one contributor thinks it is working as designed doesn’t necessarily mean it is. As I’ve pointed out in several places, the documentation I can find is conflicted, at best. But maybe there’s somewhere else I should be looking besides the Scala 3 book and the Scala 3 reference pages?

For the sake of argument, let’s say that is the correct place to find the “intended” design. Given that it is self-contradictory, would this still fall into the camp of needing a new SIP? Or can this be handled as a clarification / refinement of the existing SIP (which I couldn’t find documentation of ???) and updates to the compiler to align?

In the meantime, I’ll see if I can put together a reasonable search of the ScalaDex for usages of this pattern. But, given how backwards and broken it seems, I have a hard time imagining library authors adopting this. I find it much more likely that it is hiding in private code bases where the people who asked and were told to do it backwards went ahead and did it that way and moved on.

3 Likes

We discussed this in the dotty compiler meeting, which includes quite a number of people actively working on the compiler, including Martin Odersky. We collectively verified that yes, this is currently working per spec. This is not just my personal opinion on the matter.

Apologies. I meant no offense. I don’t know how decisions are made on the dotty team. I guess my main point was that the specs I have access to / know about (namely the Scala 3 reference pages) seem self-contradictory to me. Given the contradictory nature of the documentation and the stated intention, it seems to me that considering it “working as designed” is up for interpretation. I can respect that the dotty team has come to a consensus on what “as designed” means in this case, but that meaning isn’t at all clear to me as an outsider.

As I work on researching and documenting this with a potential goal of a SIP to improve the language, are there more sources I should consult in trying to understand the compiler team’s current intentions? Any pointers on where I might find the original SIP that led to the current handling of right-associative methods in extensions?

I appreciate the responses and help.

2 Likes

The couple of topics on “extension methods” show that the syntax underwent churn for a couple of years.

However, the reference doc comment on “The Swap” predates that. I assume without further evidence that this was always part of the scheme.

This comment about extension methods as sugar for application links to

The Proposal, which mentions right-associative operators a few times: in the OP about C# style and in the replies, such as from szeiger, the rassoc guy for Scala 2.

The Proposal OP also links to the PR and its comment history.

I think the spec need only specify that “extension methods” are an extension to method application syntax. It has nothing to do with “extending a type”.

The only justification required is the example of i max j vs math.max(i, j). Don’t give me machinery (value classes) when I just want to call a method. Give me a way to call the method, that also looks cool.
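
Something like this, say; maxOf is a made-up name so the sketch doesn’t collide with the Int.max member that already exists:

extension (i: Int)
  infix def maxOf(j: Int): Int = math.max(i, j)

@main def maxDemo(): Unit =
  println(3 maxOf 5)   // reads like i max j, compiles to a plain method call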

(Edit: it’s necessary to click “more” on forum search, to see all the topics.)

1 Like

I agree. Ideally, we’d have the following situation:

  • Right-associative class methods are disallowed.
  • The only way to define a right-associative method is as an extension method.
  • The order of formal parameters is the same as the order of actual arguments (which corresponds to the current rule for extension methods).

Since Scala did not have extension methods originally, we had to make them normal methods, and that meant we needed this weird swap of the argument. That was a kludge. Let’s not elevate it to a principle by making the same mistake for extension methods, where it makes even less sense.

I hope that over time we will arrive at the ideal state by deprecating right-associative member methods. E.g., List would become:

class List[+A]:
  def prepend(elem: A): List[A]
  ...
object List:
  extension [A](x: A) def :: (xs: List[A]) = xs.prepend(x)

No weird swapping of arguments; everything is obvious.
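
If it helps, here is a runnable mini-version of that sketch, with a made-up MyList so it doesn’t clash with the real List:

class MyList[+A](val elems: Vector[A]):
  def prepend[B >: A](elem: B): MyList[B] = MyList(elem +: elems)

object MyList:
  extension [A](x: A)
    def :: (xs: MyList[A]): MyList[A] = xs.prepend(x)

@main def prependDemo(): Unit =
  val xs = MyList(Vector(2, 3))
  val ys = 1 :: xs     // reads in declaration order: 1 binds to x, xs to xs
  println(ys.elems)    // Vector(1, 2, 3)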

12 Likes

This may be obvious to everyone, but for my own sake and for clarity of communication I want to point out that there are at least 3 different things at play here:

  1. right-associativity (i.e. implied parenthetical grouping of equal-precedence operators)
  2. right-receptivity (to make up a term… i.e. which operand is the functional “receiver” of the binary operation)
  3. symbolic operators as aliases (vs. independent functions/methods)
  4. Perhaps there’s also something here about infix vs prefix notation, but I can’t quite put my finger on the exact connection at the moment, so maybe I’m reading into things.

As far as I can tell, there is no suggestion of removing the concept of right-associativity, only right-receptivity. And the proposed syntax for it seems to imply a preference for symbolic infix operators to be externally defined aliases for alphanumeric prefix operations on the type.
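
To spell out that distinction with the standard library’s :: (which today is a member method, so both properties apply), here is a small runnable sketch:

@main def assocDemo(): Unit =
  // right-associativity is about grouping: 1 :: 2 :: Nil parses as 1 :: (2 :: Nil)
  // right-receptivity (my made-up term) is about which operand receives the
  // call: 1 :: (2 :: Nil) is (2 :: Nil).::(1), so the list on the right is
  // the object whose :: method actually runs
  val viaOperator = 1 :: 2 :: Nil
  val viaExplicitCalls = Nil.::(2).::(1)
  println(viaOperator == viaExplicitCalls)   // true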

To be clear on my perspective, I’m pro- keeping right-associativity as a concept. How we have it is worth discussing, I suppose. But I am also very much pro- right-receptivity. I believe it encourages a significantly better encapsulation of responsibility on the authorship side and a more elegant syntax on the calling side in many cases.

The example that actually prompted my bringing this up at all might be illustrative of my thinking (yes, I’m aware there may be better ways to do this, but keep in mind I’m simplifying quite a bit to fit into the context of a forum post):

I wanted to encapsulate the equivalent of this logic:

import scala.annotation.targetName

// Message is assumed to expose a transform method that returns a String
class Envelope(val msg: Message):
  @targetName("opTransform")
  def ?:(shouldTransform: Boolean): Option[String] =
    if shouldTransform then
      Some(msg.transform)
    else
      None

class MessageFilter(...):
  private def shouldPass: Boolean = ???

  def getMessage(envelope: Envelope): Option[String] = shouldPass ?: envelope

Maybe this example is overly simplified, but hopefully it is clear that my goal was to keep the knowledge of how to transform encapsulated in the Envelope type.

This obviously works as is, given the right-associativity behavior on classes. But imagine that Envelope comes from a library, so I have access neither to the class nor to its companion to add an extension (on Boolean?) there. But I also don’t want to add envelope transformation as a feature of all Booleans, even in the scope of my MessageFilter. I really would like to keep the transformation logic confined to Envelopes. So I attempted this:

extension (e: Envelope)
  def ?:(shouldTransform: Boolean) = ... // Same implementation as above

But now my calling syntax (shouldPass ?: envelope) doesn’t compile. I either have to write it envelope ?: shouldPass or move the implementation of ?: to an extension (b: Boolean) clause. Swapping the operands at the call site is ugly. Imagine that getting an envelope is the final result of a series of maps, flatMaps, whatever. Now the critical boolean control operator is hidden at the end, easily missed. And moving the implementation to Boolean is just plain strange. There is no reason for any Boolean instances in scope to have any concept of what an Envelope is.
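
For what it’s worth, that Boolean-side variant would look roughly like this (reusing the same hypothetical Envelope and Message from above). It does preserve my original call syntax, but only by teaching Boolean about Envelope:

extension (shouldTransform: Boolean)
  def ?:(e: Envelope): Option[String] =
    if shouldTransform then Some(e.msg.transform) else None

// call site keeps the original order: shouldPass ?: envelope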

I know that this example is weird in other ways (?: being both wrapping and transformation, etc) but those are just artifacts of me trying to minimize the example. My point is really about encapsulation of logic, organization of code, and clarity at the call site. Losing right-receptivity would harm every one of those aspects in this case, in my opinion.

Finally, to briefly touch on symbolic operators as aliases… I rather like the @targetName annotation with a symbolic method name being sufficient, but I’m not going to get too bent out of shape over a different way of handling symbolic & alphanumeric names for the same operation. I will say, though, that I really don’t like moving what seems like a legitimate member of the type out of the class and into an extension on the companion. I could maybe live with it (though not my preference) if all symbolic operators had to be there, but it would seem very odd indeed if ++ could be on the class but :: had to be in the companion.

And if we’re moving all symbolic operators to the companion, I think we need a different syntax than the current extension syntax. The way extension (t: T) looks to me is like it is extending the type with additional methods. Not that it is providing the left operand to operators specified after an intervening def. And, for what it’s worth, as I’ve pointed out elsewhere, all of the Scala 3 language docs provided to developers seem to reinforce my interpretation of what extension (t: T) means, regardless of what the original intention may have been.

Sorry for replying to myself here, but I’ve been reading more of the links provided and re-reading the ones I had found before. I cannot escape the idea that everyone seems to believe that extensions extend a type. For one, it is in the documentation:

Extension methods allow one to add methods to a type after the type is defined.

It is mentioned in the proposal from fkowal:

It builds on the knowledge/expectations of how class/trait inheritance works.
…
This trait also extends the type T with additional extension methods

and the one from odersky:

Extension methods are a more straightforward and simpler alternative to implicit classes.

In fact, as som-snytt points out, szeiger raised the exact same issue I raised here, without a resolution that I can see in the thread.

The concept is also raised in PRs:

The aim of this proposal is to

  • make typeclasses easy and pleasant to use.
  • remove the rift between standard class hierarchies and typeclass hierarchies. Both should be written in the same way and it should be possible to mix both styles and move fluidly between them.
  • in conjunction with opaque types, replace the lower-level constructs of value classes and implicit classes.

All type-based goals. These same goals are referenced from LukaCJB’s proposal which led to odersky’s PR#5114.

And throughout all of the threads on this topic, in the forums here, in PRs, and everywhere else that I can find, there is abundant discussion of the handling of this (the keyword), which seems to imply a strong belief that the type definition is the thing being extended. I realize that the final syntax dropped references to this, but what I’m trying to point out is the community understanding of what extension means.

I have seen assertions to this effect elsewhere as well. But based on what seems to me to be abundant evidence that extensions are seen as extending a type, this feels like wishful thinking, not a reflection of the reality “on the ground”, so to speak. That is exactly why this topic keeps coming up, multiple bugs are filed, and confusion rules the day with regard to what is meant by this new construct. Everyone thinks it is about types, but the implementation thinks it is about method application. And therein lies the bug.

If our goal in language design is to remove friction and enable efficiency among developers using the language, subverting expectations and insisting everyone is wrong is a bug with regard to that goal.

And as we’re discovering, the place where the “bug” is best seen is in the divergent handling of right-associative methods.

Given all this, I think it would very much be a step in the wrong direction to double down on this conflicted definition of extension, remove capabilities from classes, and force the language into a different pattern. That’s a very heavy-handed “fix” for the implementation of extension failing to match expectations.

1 Like

The defining feature of Scala is not implicits but that it subverts expectations.

That is, in its dimension as an “academic” language, it demands continual challenge to our ways of looking at computation.

A helpful example of on-the-ground accommodation is language.experimental.relaxedExtensionImports.

A counter-example would be the discussion around break:

No, I think we have to change the mindset of people instead. Breaks are aborts and there should be only one kind of abort.

That is not esoteric, but simply says all that verbiage around NonFatal was just talk. The NonFatalists must feel themselves challenged. What problem were they trying to solve?

So, however “right-associative methods” shakes out, it will not be on the basis of expectations. Maybe the way to say it is that Scala is the language of educators who are willing to change minds.

As a counterpoint, if the concept of extension methods is broadly conceptualized using a framework of types, it’s probably a good idea to have a think about why such a framing is helpful to so many people, and consider if there isn’t some underlying truth that is missed by framing it strictly in terms of the mechanics of application.

2 Likes

Ok. But a good educator doesn’t confuse and obfuscate a complex concept for the student. An effective educator clarifies and simplifies, removing obstacles to understanding. Breaking the behavior of operators ending in : when they are defined in extensions is confusing and obfuscating, not clarifying and simplifying.

I think this is quite a stretch with regard to this issue. We’re not fundamentally changing the concept of computation by deciding which operand is the recipient in a right-associative method call. In fact, in your other post on the other thread, it seems to me that you claimed the reason for changing this is that it is difficult to implement in the compiler. That’s not purity of ideological computation. That’s convenience. Seeking convenience isn’t a bad thing, by the way, but we should seek it through non-destructive means, not by breaking established patterns. Deprecate them, introduce new ones… OK. But not by just flat-out breaking them.

I surely made no such claim, but I did point out that the “expectation” or “illusion” of left-to-right evaluation (which, as I addressed in another reply, the spec leads us to expect) does not correspond to a definition (possible in Scala 2 but not in Scala 3) that requires the receiver to be evaluated first.

That would be an argument in favor of extension methods as “regular methods”; they are associated with types by the familiar implicit scope that makes them visible.

Whether “extension (x) def m(y) means x m y or m(x)(y)” is clarifying may be a matter of taste, but I think your larger concern is about breakage and change. That has been a theme in many of the discussions about Scala 3. I would encourage pursuing answers to questions such as, “How long can I keep my implicit classes?” Personally, I don’t have a global sense of the grand scheme, but I do know I can still use my beloved braces, which they have yet to pry from my hands post-rigor.