Updated Proposal: Revisiting Implicits

Or perhaps even:

extension (c: Circle) def circumference: Double

But thinking about this a bit more, I wonder how many extension methods really will be defined. Perhaps the singleton case syntax could be dropped and just the collective syntax retained?
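For reference, the two styles under discussion might look roughly like this (a sketch using a hypothetical Circle class; the exact syntax was still in flux at the time):

```scala
case class Circle(radius: Double)

// Singleton syntax: one extension method per declaration
extension (c: Circle) def circumference: Double = c.radius * math.Pi * 2

// Collective syntax: several methods sharing one receiver
extension (c: Circle)
  def diameter: Double = c.radius * 2
  def area: Double = c.radius * c.radius * math.Pi
```

Dropping the singleton form would mean even a single method is written in the collective style, at the cost of one extra line.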

Personally, I think that the fewer ways there are to write something in a language, the simpler it becomes to read.

I like the extension syntax and I think it is a good idea to separate it from the given concept.


In given stringOps: AnyRef it looks like we are defining a given instance of AnyRef. This is confusing. Why would we do that? What is the significance of AnyRef here? It looks very low-level and out of place.

Some people will write 0.75.pipe(tan), but it’s a fair point.

(On the other hand, there’s quite a bit of boilerplate extending e.g. Double to include scala.math operations as postfix operators, e.g. 2.7.abs works, and people do use that sometimes instead of abs(2.7), so I think the ship has already sailed… but maybe we want to restrict the size of the fleet.)

Anyway, you can’t call def bippy(a: Foo)(b: Bar) as bippy(foo, bar), only bippy(foo)(bar), even though in the bytecode they’re the same thing. So if the extension syntax is fixed, I recommend disallowing def (a: Foo) bippy(b: Bar) from being called as bippy(foo)(bar). Having to remember when a.quux(b) is the same as quux(a)(b) and when it’s different does not sound like fun to me.
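To spell out the asymmetry with ordinary curried methods (a small sketch with placeholder Foo/Bar classes, not from the original post):

```scala
class Foo
class Bar

// An ordinary curried method: two parameter lists
def bippy(a: Foo)(b: Bar): String = "ok"

bippy(new Foo)(new Bar)    // compiles: curried application
// bippy(new Foo, new Bar) // does not compile: arguments must be
                           // supplied one parameter list at a time
```

The worry above is that extension methods would get yet another, third calling convention layered on top of this.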


In given stringOps: AnyRef it looks like we are defining a given instance of AnyRef. This is confusing. Why would we do that? What is the significance of AnyRef here? It looks very low-level and out of place.

That’s precisely why we have extension syntax. No need to make the given equivalents nice.


Yes, that’s a possibility.


I do think we should indeed forbid calling extension methods as if they were normal methods.

The main argument (the only one?) for allowing that was that it provided for a very simple explanation of the semantics of extension methods: they are simply desugared into the normal method equivalent. We can still preserve this simple explanation; we just have to say that both the definition and the use sites have the same semantics as if they were desugared into the normal method equivalent (assuming typechecking has already agreed that the program was correct).
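That desugaring story might be sketched like this (illustrative only; the extension definition is shown in the then-current def (receiver) name syntax as a comment, with plain Scala for the result):

```scala
// The extension method
//   def (s: String) bracket: String = s"[$s]"
// is conceptually desugared into a plain method that takes the
// receiver as its first parameter:
def bracket(s: String): String = s"[$s]"

// Under the restriction proposed here, only the extension call form
//   "hello".bracket
// would be accepted at the use site, but its meaning would still be
// exactly that of bracket("hello").
```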


Probably looks pretty crazy, but how about something along these lines to unify the two ways to express extension methods?

  given Ctx[A]
    def (theA : A)
        f (otherA : A) = ???
        g (otherA : A) = ???
        [B] h (theB : B) = ???
    def (nested: Ctx[Ctx[A]]) i (otherA : A) = ???


  given Ctx[A]
    extend (theA : A) with
      def f (otherA : A) = ???
      def g (otherA : A) = ???
      def [B] h(theB : B) = ???
    extend (nested: Ctx[Ctx[A]]) with 
      def i (otherA : A) = ???

n-th edit: OK, the second one actually doesn’t even look crazy. A bit wordy if you want to have extension methods for a lot of “receivers” of different types, but I like it a lot.

One other reason for allowing this is that it provides a helpful tool for debugging why an extension method doesn’t resolve. The example below is quite trivial, but the difference between the content of the two error messages can make a world of difference when typeclasses or generalized type constraints are involved.

def (a: String) bracket: String = s"[$a]"
println("hello world".bracket)
println(bracket("hello world"))

// Error: value bracket is not a member of Int
// println(5.bracket)

// Found:    Int(5)
// Required: String  
// println(bracket(5))


Clarification: I do agree this is something that probably shouldn’t be in the code when it’s done, so a linter rule would be entirely appropriate.


There are some situations where the only way to call an extension method is as a normal method (example in #7821).


@nicolasstucki - Then we shouldn’t have those.

For instance

a org.com.edu.Ops.+ b

Yes, it’s absurdly ugly, but it’s also absurdly regular. Or

import org.com.edu.Ops.{ + => ops_+ }
a ops_+ b

I really think we need to step back and consider the overall regularity and simplicity of the resulting language.

Whenever we end up with “well, X isn’t so good, but because Y…”, we should think hard about whether Y is really worth it, and whether Y can’t be altered so that not-so-good-X is avoidable.


Let’s fix the compile-time messages, not allow multiple ways to call things and then force the user to switch between them in order to get useful messages.

If anything, this is even more of an argument to forbid going both ways: during debugging, people will randomly switch from postfix, as intended, to function-with-arguments style, and then leave it after it compiles.


That’s a nice goal, but until we get there, we should probably avoid crippling our ability to debug extensions when things go wrong :slight_smile:

Linters can help us remember to switch things back, until the error messages become clear enough to make this facility redundant

If extensions are so hard to debug that they motivate adding an irregularity to the language, we shouldn’t be using them. There shouldn’t be any “until”, except during development. If the feature goes live, it should be pleasant to work with.

Our current mechanism for extensions is worse, as the only way to debug them is to convert something like value.pure[F] to something like this:

new ApplicativeOps(value).pure[F]

So, while not ideal, the new way is considerably less painful to use than the old way - and despite this wart extension methods still provide enough benefits to justify their existence.
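For context, the pre-Scala-3 encoding behind value.pure[F] is an implicit wrapper class, roughly along these lines (a simplified sketch; the real ApplicativeOps in cats has more machinery):

```scala
trait Applicative[F[_]] {
  def pure[A](a: A): F[A]
}

// Old-style extension: an implicit wrapper class around the value
implicit class ApplicativeOps[A](private val value: A) extends AnyVal {
  def pure[F[_]](implicit F: Applicative[F]): F[A] = F.pure(value)
}

// When value.pure[F] fails to resolve, debugging means writing the
// wrapper out by hand:
//   new ApplicativeOps(value).pure[F]
```

The new extension syntax removes the wrapper class entirely, which is why the comparison favors it despite the calling-convention wart.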


Based on the feedback I got here, I have run some limited experiments and tried some alternative syntaxes. They can be summed up as follows:

  • use witness instead of given in instance definitions: PR #7928
  • use default instead of given in instance definitions: PR #7941
  • use with instead of given for parameters: PR #7973

The experiments reflect my belief that the semantics of the new contextual abstractions are sound and work well, but that there might still be room to make the syntax clearer.

Feedback on either the PRs or here is very welcome.


I think it could be a good idea to consider and decide upon some principles for the syntax before considering concrete keyword alternatives. I think it should be considered in this order:

  1. What “kind of word” should be used and how should the constructs be read in real-world language?
  2. Should the same keyword be used for definitions and parameters?
  3. Based on this, which word has the best connotations of “something that is applied implicitly”?

Some comments on the first point:

Some of the previous debate has included the question whether it should be a noun, adjective, or verb. Let’s investigate by example:

given Foo

To me, this is read as “a given instance of Foo”. As “an instance of X” could be shortened to “an X” we could read it simply as “a given Foo”. So given is here a modifier and thus an adjective. We might even say that it modifies the type Foo from a normal type to a given type.

(The verb variant, give Foo, would have a completely different meaning: the difference would be akin to the difference between imperative and declarative.)

But what does “given” mean? So far, that does not matter. If we decide that it should be a modifier, any adjective would do. It could be a word that has some of the “implicit” connotations: magic, enchanted, implied, auto, default, given, provided… or not: red, big, innocent, beautiful, strange. Yes, we could exchange given with beautiful and it would work just the same.

On the other hand:

witness of Foo has a slightly more complicated meaning. As a noun, it does not modify Foo; it is a thing in itself, which has a semantic relationship to that which it is a thing of. Thus we cannot replace it with any other noun, and the meaning of the word must be considered immediately. Previously proposed alternatives of this kind have included representative (repr) and instance (and probably others I don’t remember). We could generalize it to thing of Foo (which probably wouldn’t work that well in itself).

Actually, we could unify the two variants by saying that the general form is:

special instance of Foo

In the “given” variant, we leave out “instance of” and shorten it to special Foo and then exchange “special” with whatever adjective we like.

In the “witness of” variant, we exchange “special instance” with a word that has similar connotations to “special instance”.

After playing with these three alternatives for a while, here’s my evaluation:

There are three different “levels” of implicit definitions: instance definitions, context parameters, and context functions (i.e. implicit function types and closures). I believe it is best if each level has a different syntax. It’s less regular, but a lot easier to parse. That sentiment was also brought up in several comments on this thread.

There are several classes of implicit uses. The most important ones are

  • Context passing
  • Typeclasses
  • Proofs
  • Conversions
  • Extensions

Extensions have their own syntax now, and proofs and conversions can be seen as special cases of typeclasses. So that leaves “context passing” and “typeclasses” as the two principal flavors of implicits.
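The two principal flavors might be sketched like this, in the given/with syntax explored in the PRs above (a sketch only; these keywords were still experimental and differ from what Scala 3 eventually adopted):

```scala
// Typeclass flavor: the given instance is itself the interesting value
trait Ord[T] {
  def compare(a: T, b: T): Int
}
given intOrd as Ord[Int] {
  def compare(a: Int, b: Int) = a - b
}

// Context-passing flavor: the given is ambient data threaded implicitly
// through calls rather than passed explicitly at every level
case class Config(indent: Int)
def render(s: String) with (c: Config): String = " " * c.indent + s
```

A keyword pair that reads naturally for both flavors is what the table below tries to evaluate.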

Let’s name the three explored alternatives after the name of the instance definition, followed by the name indicating contextual parameters. So it’s witness/given, default/given, and given/with. Here’s my evaluation of how suitable these three combinations are for the two principal use classes:

                          context passing      typeclasses
witness/given             -                    +
default/given             ++                   -
given/with                +                    +

given/with has the edge in that it works for both use classes equally well, so I am pursuing this alternative further. PR 8017 is a complete implementation. In this implementation, both the previous given/given syntax and the new given/with syntax are supported, but the alternative to use => for conditional givens introduced in 0.20 has been removed. My plan is to get this merged by the next Dotty release early February, and to switch everything to the new syntax afterwards. In the PR the tests already use the new syntax but the main implementation does not.

We would then use one or two 6-week release cycles to try the new syntax in depth, and hopefully come to a final decision afterwards. I had hoped that we would be in feature freeze by now, but it’s very important to get this right, so I think we should give ourselves the time needed to reflect on this.


This prompted a question in my mind: What other use cases are there in the wild? :slight_smile:


I read through the docs and a few examples of the PR. I like the big picture, in particular

  • the separation of concerns (given instances, implicit conversions, extension methods)
  • the way extension methods are defined
  • with clauses (I could live with a different keyword, but no strong feelings)

But I still have a hard time getting used to the definition syntax of given instances. I have two concerns.

1 – The syntax for defining given instances differs from that for ordinary values or methods. In given i as T { }

  • the definition’s type is T, the new as keyword is somehow doing what : does for normal definitions
  • i doesn’t introduce a new type, but it’s defined with a block (not with =), similar to an object definition
  • as can be read the wrong way around: “foo as bar” can mean “i take foo and give it the name bar”. Here it’s the other way around, I guess the meaning is “foo is defined as bar”.

The situation also reminds me of Java annotations, where a new syntax was invented to define annotation types.

2 – One has to remember how given instances are represented / desugared (val vs lazy val vs def). I’m pretty sure advanced users (and people defining given instances are advanced users) at least need to know, and some of them will want to be able to control it easily.

So I basically prefer to use the syntax of ordinary definitions. The only thing that maybe looks less good is defining anonymous instances, but I think it’s still a better compromise. We could use a marker (_) or even leave the name out entirely.

given object intOrd extends Ord[Int] {}
given object _ extends Ord[Int] {}
given object extends Ord[Int] {}

given val intOrd: Ord[Int] = ...
given val _ : Ord[Int] = ...
given val : Ord[Int] = ...

given def intListOrd with Ord[Int]: Ord[List[Int]] = ...
given def _ with Ord[Int]: Ord[List[Int]] = ...
given def with Ord[Int]: Ord[List[Int]] = ...

given def listOrd[T] with (ord: Ord[T]): Ord[List[T]] = ...
given def _ [T] with Ord[T]: Ord[List[T]] = ...
given def [T] with Ord[T]: Ord[List[T]] = ...

Just out of curiosity, are we actually considering changing implicits again between Dotty 0.22 and Scala 3? I’m trying to gauge feature stability, and syntax is a large part of that. The way given works in 0.22 is really good and more than enough for my needs. Can I actually start writing a large framework on top of it, or is everything going to be ripped right out from under me at the next SIP meeting?