Change the shadowing mechanism of extension methods to be on par with implicit class behavior

That’s odd. It didn’t work for me before. I have a file called OverloadedExtensions.scala in one project specifically as a workaround for this. (Originally created for 3.1!)

Maybe you had nominally, but not actually, circular dependencies that RC3 realized aren’t circular, making it possible to not compile everything simultaneously, and that change triggered the behavior?

Ok, I hadn’t seen that particular error before, so I assumed it had been introduced in 3.3.0-RC3, but apparently not. The second, conflicting overload was written while on 3.3.0-RC2 and initially compiled, so your incremental-compilation explanation seems plausible.

See Support extension methods imported from different objects by odersky · Pull Request #17050 · lampepfl/dotty · GitHub for a possible solution.


There’s now a SIP for this change: SIP-54 - Multi-Source Extension Overloads. by sjrd · Pull Request #60 · scala/improvement-proposals · GitHub
If you have comments, please add them to this PR.



Update: the proposal SIP-54 has been implemented and merged into the compiler. It is now available as an experimental feature that you can use as follows:

//> using scala 3.nightly

import scala.language.experimental.relaxedExtensionImports

object A:
  extension (s: String)
    def wow: Unit = println(s)
object B:
  extension (i: Int)
    def wow: Unit = println(i)

import A._
import B._
5.wow
"five".wow

(Based on this gist)

Note that you have to use a nightly build of the compiler.

Before we make it a stable feature, we would like to hear from the community if the current design and implementation work for you. In particular, we would be interested to know if there are still use cases where you used implicit classes in Scala 2 that you cannot migrate to extension methods in Scala 3.


It solves, for me, the use-case that could not be worked around, which was that extension method names for unrelated types from unrelated code bases would collide and basically prevent the extension method mechanism from scaling to a nontrivial degree of use. This was really critical, and it’s great that it now works as one would conceptually think it should!

It does not solve the irritating (but workable-around) problem that all overloaded methods must be defined in the same source file, and that extension methods are encoded as overloaded methods.

So if you have a mathematics library with, say, several different vector and matrix classes, all in the same namespace, and you want to be able to multiply a Double by each of them on the left (that is, x * m and x * v should both work), you have to create a DoubleExtensions.scala file or some such, even though it would be much more natural to have the extension for each vector, matrix, etc. in the file that defines the data type.

This is awkward, but since there is a workaround, it just means that developing libraries is less pleasant than it could be.

As a user, one doesn’t notice that the library designer had to place the extensions all in the same file.

Note that there is no such restriction for extends AnyVal-based implicit class extensions. You can put them wherever it makes sense for them to belong, and they work.
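For contrast, here is a minimal single-file sketch of the implicit-class version (the two objects stand in for two separate source files, and wow returns a String here, rather than printing, only so the results are easy to check). Since each implicit class wraps a different receiver type, the compiler picks the right one by which conversion applies:

```scala
object A:
  implicit class StringWow(s: String):
    def wow: String = s"string: $s"

object B:
  implicit class IntWow(i: Int):
    def wow: String = s"int: $i"

// Both imports in scope, no ambiguity: resolution selects the implicit
// class whose constructor accepts the receiver's type.
import A.*
import B.*

println(5.wow)      // int: 5
println("five".wow) // string: five
```

This is exactly the cross-file freedom that the extension-method encoding lacked before SIP-54.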


I experienced no such limitation. Can you give an example?

Sure. In my personal library, in addition to foreach, which returns Unit, and tap, which is not monadic, I have a method I call use. It’s basically just x.use(f) = x.tap(_.foreach(f)).

So, in one file, Flow.scala, I add some things to Option. One would naturally define use there:

extension [A](option: Option[A])
  inline def use(inline f: A => Unit): option.type =
    (option: Option[A]) match
      case Some(a) => f(a)
      case _       =>
    option

But there’s no reason you can’t define the same thing index-by-index on an array (where you tap the whole array, not the element), except array extensions are in Data.scala:

extension (ai: Array[Int])
  inline def use(i: Int)(inline f: Int => Unit): ai.type = { f(ai(i)); ai }

These both live in the kse.flow namespace, but would be in two different files.

Except:

[error] -- [E161] Naming Error: /home/kerrr/Code/s3/kse3/flow/src/Flow.scala:607:13 ----
[error] 607 |  inline def use(inline f: A => Unit): option.type =
[error]     |  ^
[error]     |use is already defined as method use in /home/kerrr/Code/s3/kse3/flow/src/Data.scala
[error]     |
[error]     |Note that overloaded methods must all be defined in the same group of toplevel definitions

So, they all have to go into OverloadedExtensions.scala instead.

Just put the extension methods in some Ops objects in the different files and export them to your public source.
Flow.scala

package personalLib
object Flow:
  object Ops:
    extension [A](option: Option[A])
      inline def use(inline f: A => Unit): option.type =
        (option: Option[A]) match
          case Some(a) => f(a)
          case _       =>
        option

Data.scala

package personalLib
object Data:
  object Ops:
    extension (ai: Array[Int])
      inline def use(i: Int)(inline f: Int => Unit): ai.type = { f(ai(i)); ai }

PublicOps.scala

package personalLib
export Flow.Ops.*
export Data.Ops.*

Oh, somehow I missed that inline is preserved through export, meaning that as long as the original extension is inline, the export has zero overhead. And you can always create the actual method with some weird name and have the extension be an inline call to that, so it’s completely general.

Consequently, this is a better way to do it. Thanks!

(I had been avoiding export as much as possible to avoid creating overly deep call stacks that can frustrate JIT-based inlining.)

Still a little awkward and still requires the extra file, but at least the logic stays where it makes sense.
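The forwarder trick described above can be sketched like this (useImpl and the Ops nesting are illustrative names, not from the original library): the real logic lives under an internal name, and the public extension is an inline call to it, so re-exporting Ops should add no runtime indirection:

```scala
object Data:
  // Real implementation, under an internal name.
  def useImpl(ai: Array[Int], i: Int)(f: Int => Unit): ai.type =
    f(ai(i))
    ai

  object Ops:
    extension (ai: Array[Int])
      // The public extension is just an inline forwarder.
      inline def use(i: Int)(inline f: Int => Unit): ai.type =
        useImpl(ai, i)(f)

import Data.Ops.*

val arr = Array(1, 2, 3)
val r = arr.use(1)(x => println(x)) // prints 2; returns arr itself
```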


One other thing I noticed is that this case still doesn’t work:

//> using scala 3.nightly

import scala.language.experimental.relaxedExtensionImports

object A:
  extension (s: String)
    def wow(x: String): Unit = println(x)
object B:
  extension (s: String)
    def wow(x: Int): Unit = println(x)

import A._
import B._
"five".wow("seven") // error
"five".wow(7) // error

It does work if wow are real overloads, both defined in the same object.

Yes, that’s to be expected. The argument available for resolving extension overloads is the extension argument itself, which here is String in both cases.

Ok, but that’s still an unfortunate corner case that you can run into.
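One escape hatch worth noting (a sketch, with wow returning a String purely for illustration): an extension method can always be invoked as a plain method on its defining object, with the receiver as the first argument, which bypasses import-based overload resolution entirely:

```scala
object A:
  extension (s: String)
    def wow(x: String): String = s"$s and $x"

object B:
  extension (s: String)
    def wow(x: Int): String = s"$s and $x"

import A.*
import B.*

// "five".wow(...) would be ambiguous here, but each extension method
// desugars to a curried method, e.g. def wow(s: String)(x: String),
// which can be called directly on its object:
val a = A.wow("five")("seven") // "five and seven"
val b = B.wow("five")(7)       // "five and 7"
```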


Maybe I misunderstood, but this seems incorrect, as the following works:

extension (s: String)
  def wow(x: String): Unit = println(x)
  def wow(x: Int): Unit = println(x)

"five".wow("seven")
"five".wow(7)

Extension methods from imports are treated differently from those in lexical scope.

I see. My follow-up question was then “but why?”, but this is actually answered in the SIP:

“It is not a goal of this proposal to allow resolution of arbitrary overloads of regular methods coming from multiple imports. Only extension method calls are concerned by this proposal. The complexity budget of relaxing all overloads in this way is deemed too high, whereas it is acceptable for extension method calls.”

Plus I did not know the following does not work (also explained in the SIP):

class Foo
class Bar

object A { def normalMeth(foo: Foo): Foo = foo }
object B { def normalMeth(bar: Bar): Bar = bar }

import A.*
import B.*

normalMeth(Foo()) // error: ambiguous

All of this was, as I understand it, intended to limit the can of worms that the complexity of overloading risks opening…

I think this is OK from the user’s point of view, provided there are good, specific error messages that help the user understand that the ambiguity error is caused by an extension method and how to circumvent it.


Sooner or later, one will have to take the worm by the horns and re-think the whole overloading and implicit resolution process as a constraint solving problem. Otherwise, one will constantly stumble on such unexpected and seemingly arbitrary restrictions.

I am happy to let you know that the experimental implementation has been accepted by the SIP Committee to become a stable feature. It may be available in 3.3.1 or 3.4.0 depending on the assessment of the compiler team.


Since it is a source language change, it will be scheduled for the next minor release, Scala 3.4.0.