Updated Proposal: Revisiting Implicits

I guess that’s supposed to be called with either of

xs.largest(5)
xs.largest with(myOrd)(5)

?

The latter looks really weird.

What about switching the given parameters to match the collective extension order instead of the extension method order? That is, it expands to

def [T](xs: List[T]) with (Ordering[T]) largest(n: Int) = ...

This has the additional nice property of making the collective extension form a trivial mechanical rewrite, simple enough for anyone to do in their head.
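Concretely, a collective extension and its expansion would then line up like this (I’m improvising the collective spelling here, using the with clause under discussion; the draft’s exact keywords may differ):

extension on [T](xs: List[T]) with (Ordering[T]) {
  def largest(n: Int) = ...
}

// each method expands by splicing its name in after the with clause,
// keeping everything else in the same order:
def [T](xs: List[T]) with (Ordering[T]) largest(n: Int) = ...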

In this case, the allowed syntax for an explicitly given parameter would be one of

(xs with myOrd).largest(5)
(xs)(myOrd).largest(5)

either of which seem okayish to me (I’d mildly prefer the former).

Then we have to consider the within-language-consistency of xs with t. On the one hand, it is a little bit confusing because it reminds one of creating an anonymous class with a trait mixed in, but that isn’t what’s happening here. On the other hand, functionally it is quite a close analogy; given parameters are use-site mix-ins, basically, whereas implemented traits are compile-site mix-ins. So it’s kinda okay?

Anyway, I hope we can avoid def foo(x: X) with (y: Y)(z: Z = zero). The parens prevent one from becoming utterly baffled, but it’s still counterintuitive and not particularly pretty.

It’s actually

xs.largest.with(myOrd)(5)

The . is mandatory.

What about switching the given parameters to match the collective extension order instead of the extension method order? That is, it expands to

def [T](xs: List[T]) with (Ordering[T]) largest(n: Int) = …

That is currently not allowed. The leading part can only have one (non-implicit) parameter. We could potentially generalize it, but it would take work to do so.

That’s unusual syntactically (since xs.largest is not an instance), but it’s clearer than what I suggested / imagined. I’m not sure it’s better than an extra set of parens (i.e. xs.largest(with (myOrd))(5)).

I’m assuming that xs.largest(5) is also okay? So if you wrote

class C {
  def biggest with (x: X) (i: Int) = ???
}

could you call it as c.biggest(5) or not? If not, then you shouldn’t be able to call xs.largest(5) either, which kind of torpedoes the utility of extension methods. So the answer must be either "no, but this is irregular" or "yes, you can call it that way".
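The two call shapes in question, then (proposal syntax; myX is an invented explicit X value, purely for illustration):

c.biggest(5)            // the with-parameter x: X resolved implicitly
c.biggest.with(myX)(5)  // the with-parameter supplied explicitly, per the .with syntax above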

Maybe it’s not worth messing with it more. With clauses followed by normal parameter blocks are syntactically kind of a wart as it stands, but it’s probably livable-with as a somewhat awkward corner of the language that most people can stay away from.

I’m assuming that xs.largest(5) is also okay?

Yes, of course.

I’m sympathetic to this concern; however, I think it makes more sense to spend the time to get to something everyone is OK with, rather than port to the proposed syntax now and then port again after 3.1.

I really hope we have not reached a fixed point. Like many before me, I believe this proposal as it currently stands (which is not fundamentally different from how it started) has a negative net value, especially given its motivation – to simplify implicits and make them more approachable.

I started playing around with a complete (theoretical) alternative proposal, partly based on many ideas of others that were brought up in these discussions (@LPTK @lihaoyi @LukaJCB to name a few). There is a discussion about it here.

The general idea of the proposal is to completely break the abstraction of “implicit” / “given” and provide each feature – dependency injection, type classes, extension methods and type conversions – its own syntax, while also introducing a way to resolve “implicit interpretations” locally.

I would really like to hear the opinions on that proposal from people who opposed many aspects of this current proposal, as I believe my proposal answers many of their reservations. I would be interested in the opinions of FP veterans especially (@alexandru), as type classes are much more commonly used in that community.


I think this usage of as is still a bit confusing. Have you considered using extends instead?

given ord extends Ord[T] { ... }

This is similar to the existing object definitions.

By the way, this raises a question. Consider the following type class:

trait Ordering[A] {
  def compare(a1: A, a2: A): Int
}

And the following instance:

given orderingInt as Ordering[Int] {
  val foo = "bar"
  def compare(x: Int, y: Int) = x - y
}

Is it valid to write orderingInt.foo? Do we create an anonymous subclass of Ordering[Int] when we implement this instance?

The expansion of givens is explained here. So the answer is yes, you can write orderingInt.foo.
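For intuition, the named given with a body behaves roughly as if one had written an ordinary object (this is only a sketch, not the compiler’s literal expansion):

object orderingInt extends Ordering[Int] {
  val foo = "bar"
  def compare(x: Int, y: Int) = x - y
}

orderingInt.foo  // "bar": members added in the body are visible through the name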

I believe as is much better than extends. In my previous comment I explained how the as syntax literally translates into spoken language. extends is doubtful already for objects (making a value extend a class is a stretch).

This might be an ignorant question (and I suspect that it has already been debated to death), but it seems that “as” is just giving a name to a given. But in Scala, we can already name things that don’t change (i.e. val)

E.g. rather than …

given ord as Ord[T] { ... }

, is there a reason why …

val ord = given Ord[T] { ... }

… isn’t sufficient, particularly if naming givens is expected to be the exception rather than the norm?

Thanks,
Rob


While it can seem awkward, considering how rare it is to explicitly pass implicit parameters, I think the cost is low. And in my opinion, the case is quite strong for allowing normal parameter blocks to follow implicit parameter blocks. It aids usability in generic programming. Martin’s example shows the issue I’m talking about:

This kind of type dependency is extremely common in generic programming, where implicits are responsible for computing some of the types needed in the parameter list of a method. As it stands, these types need to be added as type parameters, which can lead to a blowup in type parameters and reduced readability/writability of generic functions. It forces you to reason about type inference and implicit resolution at the same time, which gets very confusing. I have run into this issue a fair bit when doing generic programming.

Interspersing implicit and explicit parameters would help a lot here, as you can make the implicit parameter precede any explicit parameters whose types depend on it. That way you know exactly when/how those types are computed. I can give you a concrete example from shapeless, with a method that updates a value in a generic record. We can start with a naive-ish implementation:

final class RecordOps[L <: HList](val l : L) extends AnyVal with Serializable {
  // ...
  def updateWith[W, V](k: Witness)(f: V => W)
    (implicit modifier: Modifier[L, k.T, V, W]): modifier.Out = modifier(l, f)
  // ...
}

Here the resolution of the implicit Modifier depends on the parameter type V of the update function f. As a result, V must be annotated at the use-site (I’ve just verified this), which makes this method totally unusable for nested records. In the actual implementation in shapeless, the issue is solved in a somewhat roundabout way:

final class RecordOps[L <: HList](val l : L) extends AnyVal with Serializable {
  // ...
  def updateWith[W](k: WitnessWith[FSL])(f: k.instance.Out => W)
    (implicit modifier: Modifier[L, k.T, k.instance.Out, W]): modifier.Out = modifier(l, f)
  type FSL[K] = Selector[L, K]
  // ...
}

Here WitnessWith, in combination with a Selector that identifies the type of value associated with a record key, is used to compute the type of f’s parameter so it doesn’t have to be annotated. Essentially, WitnessWith is used to hack implicit resolution earlier into the parameter list. It uses machinery that honestly I don’t fully understand, but it looks like it involves macros and implicit conversions and depends on having a singleton Witness to anchor onto. It seems rather complex and brittle. However, if explicit parameter lists could follow implicit ones, the solution would be clear:

final class RecordOps[L <: HList](val l : L) extends AnyVal with Serializable {
  // ...
  def updateWith[W](k: Witness) with (selector: Selector[L, k.T]) (f: selector.Out => W)
    with (modifier: Modifier[L, k.T, selector.Out, W]): modifier.Out = modifier(l, f)
  // ...
}

This is simpler than the current implementation, and I hate to imagine how the issue would have to be resolved in cases where WitnessWith doesn’t cut it.
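For illustration, here is how the call sites would compare (the record rec and the "age" key are invented; details of shapeless record keys are glossed over):

// naive signature: f's parameter type V is only fixed once the Modifier is
// resolved, so it has to be annotated by hand at the call site
rec.updateWith("age")((n: Int) => n + 1)

// proposed signature: the Selector is resolved right after k, so f's parameter
// type (selector.Out) is already known and the annotation can be dropped
rec.updateWith("age")(_ + 1)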


It seems I missed the rationale for going back to the keyword being out of the parentheses, as in: we have to write (...) with (...) instead of (...)(with ...).

To me it’s a regression, because:

  • It’s very awkward to have this keyword floating between parameter lists, and no spacing convention seems to make it look better (written (...) with(...) it also looks strange).

  • It makes the following non-implicit parameter lists look weird (are they part of the with?), and I agree with @julianmichael that those are important. The purported advantage of having the keyword out was that it made it clear the whole parameter list is modified by the keyword, but if we allow following up with non-implicit parameter lists, we end up with exactly the same ambiguity, but at the level of parameter lists.

  • It’s not symmetric with the use sites, which will apparently have to use (...).with(...), and using this syntax at the declaration site too would feel even weirder.

A solution to all these problems was proposed a long time ago by @Blaisorblade: use special brackets. Why has this not been considered seriously (as far as I can tell)?

For instance, use [[ord: Ord[Int]]] or {{ord: Ord[Int]}}. Examples:

// anonymous:
def max[A](x: A, y: A){{ Ord[A] }}: A = ...

// named:
def max[A](x: A, y: A){{ord: Ord[A]}}: A = ...

// function type:
def m[A]: (A, A) => {{ Ord[A] }} => A = max

// type class instance:
given [A]{{ Ord[A] }} as Ord[List[A]] {
  ...
}
given listOrd[A]{{ Ord[A] }} as Ord[List[A]] = ...

Of course, special brackets have been proposed repeatedly. But in my book that’s by far the worst of all possible solutions since it is so inscrutable and lexically displeasing.


I just merged the context/with syntax into master. I am very optimistic that this is fit to be the final state, but we will give ourselves some time to experiment with it to make sure.

Does it still work?
The documentation says:

  • An alias given can have type parameters and implicit parameters just like any other given, but it can only implement a single type.

It is not obvious. Will that feature be documented?

Would it be possible to implement a given over a mutable variable via something like:

var x: T = ...
given with () as T = ...

?

Does it still work?

Sure, modulo the recent syntax change:

given [ByName] as T = x

ByName is chosen for legibility; it could be any name.

The alternative that you mentioned using with () is not supported.
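For intuition, here is a sketch of why the type-parameter trick gives per-use evaluation (the Int example and the summon calls are my own illustration):

var x: Int = 0
given [ByName] as Int = x  // an alias given with a type parameter maps to a def, not a cached val

summon[Int]  // 0
x = 1
summon[Int]  // 1: the alias is re-evaluated at each use, so it sees the new value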


Are there any binary compatibility risks with using anonymous aliases?
If there are no risks, it would still be useful to document it, so that an ordinary user can implement a plain given val:

val x: T = ...
given as T = x

Using val for all givens would be misleading since some of them are defs, and some of them are lazy vals.
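A rough sketch of the distinction (the expansion shapes are approximate and the names Config, defaultConfig and makeListOrd are invented):

given as Config = defaultConfig
// roughly:  lazy val <anonymous>: Config = defaultConfig   (evaluated once, cached)

given [T] as Ord[List[T]] = makeListOrd[T]
// roughly:  def <anonymous>[T]: Ord[List[T]] = makeListOrd[T]   (re-evaluated at each use)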


I’d like to propose a variant that I think maximizes readability + regularity. It’s mostly a combination of what’s been discussed before, with one novel idea. The gist of it is this:

given [T] if Ord[T] for Ord[List[T]] { ... }

given if (outer: Context) for Context = ...

def foo(given Context) = { ... }

More examples can be found here.

There are four notable points:

(1) Keyword for precedes the type (instead of “as” or “of”).

(2) Keyword if is used for conditional instances (instead of “with” or “given”).

(3) Same keyword given for instances and parameters.

(4) Parentheses around given parameters.

I’ll comment a bit on each point.

(1) The for variant translates better into real-world language. How to read it:

Example: given for Ord[Int] { }

Two alternatives that are mostly synonymous:

a) Using “given” as adjective, i.e. “a given instance”: Let there be defined a given instance for Ord of Int with the following implementation.

b) Using “given” as verb, i.e “x is given for y”: Let there be given for Ord of Int an instance with the following implementation.

(2) I think the idea has been that conditional instances and context parameters should have the same syntax. This is a legacy from the original implicit syntax. The new, high-level abstraction is conditional given instances. There is no notion of “parameters” here, and as such it can have a completely unique syntax. with does not have any connotations of “conditionality”, hence if.

How to read it:

Example: given [T] if Ord[T] for Ord[List[T]] { }

a) Let there be defined, for any type T, if there exists a given instance for Ord of T, a given instance for Ord of List of T with the following implementation.

b) Let there, for any type T, if there exists a given instance for Ord of T, be given for Ord of List of T an instance with the following implementation.

(roughly)

(3) I think there should be some kind of symmetry between context instances and context parameters. Using the same keyword is a simple way to achieve that. The alternative with has the following problems:

  • it’s a bit overloaded
  • it does not have any connotation of “implicitness”
  • it does not have any semantic relationship with “given”
  • used without parentheses, it has a syntax collision with new Foo with Bar
  • it requires additional syntax for context function types

(4) Using parentheses around context parameters avoids the “warts” of .given and spaces around :.

conversions can be seen as special cases of typeclasses

No, unfortunately this is not the case. For example, inline conversions are NOT typeclasses. They cannot be used as values of a typeclass because they are NOT valid at runtime! Currently dotty makes you jump through some hoops to define a macro conversion – you must first define a subclass of Conversion that will error at runtime, an error situation that’s forced by the new encoding – and only then you can define an inline conversion:

trait MyInlineConversion[-A, +B] extends Conversion[A, B] {
  def apply(a: A): B = throw new Error("""Tried to call a macro conversion
at runtime! This Conversion value is invalid at runtime, it must be applied
and inlined by the compiler only, you shouldn't summon it, sorry""")
}

trait DslExpr[+A]
given [A] as MyInlineConversion[A, DslExpr[A]] {
  inline def apply(expr: A): DslExpr[A] = {
    // analyze expr and produce new DslExpr...
    ...
  }
}

Moreover, the Conversion typeclass rules out two more types of implicit conversions:

  1. Path-dependent conversions such as implicit def x(a: A): a.Out are inexpressible with Conversion; there’s no syntax to express Conversion[(a: A), a.Out].

  2. Path-dependent conversions that summon more implicit parameters that depend on the input value, such as implicit def x(a: A)(implicit t: Get[a.T]): t.Out, are inexpressible with Conversion; there’s no way to append more implicit argument lists to the def apply(a: A): B method of Conversion!
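A minimal sketch of the mismatch behind both points (the trait A and its member are invented for illustration):

trait A { type Out; def out: Out }

implicit def conv(a: A): a.Out = a.out  // Scala 2: the result type may depend on the argument value
// given Conversion[A, ???]             // no way to write a.Out here: apply's result type
//                                      // cannot refer to its own argument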

All of the above types of conversions are heavily used in mainstream Scala libraries, such as Akka, Sbt & Shapeless. I point out the specific usages in the dotty issue.

I do not believe that implicit conversions deserve to be gimped by completely ruling out at least two of their currently used forms and by making the third form - macro conversions - inconvenient to define and unsafe to use (a library user always risks summoning a dud Conversion object that will error at runtime if there are macro conversions in scope). As such I think we need to address the following issues:

  1. Create a syntax specifically for conversions that would bring back path-dependency and better express intent than the given Conversion definitions:
 conversion on (a: A) with (t: Get[a.T]): t.Out = ...
  2. Create a new type InlineConversion[A, B] - a supertype of Conversion that would be summonable ONLY in inline defs. That way, abstractions on top of summonable macros can be easily built, but at the same time the Conversion type will be unpolluted by inline conversions that are invalid at runtime.

It’s true that inline conversions require boilerplate. But they don’t need to expose safety holes since the base trait (MyInlineConversion in the example) can be sealed. On the other hand, Scala 2 does not have inline at all, so I don’t see a regression here.

Path dependent implicit conversions are indeed not supported. Maybe we can introduce a Conversion class that allows dependent typing at some point. Or maybe that does not work and path-dependent implicit conversions are dropped for good. I personally don’t lose sleep over this, since the use cases seem to be questionable designs anyway. I would probably object to inventing a special language feature for this; that would send exactly the wrong signal.