Updated Proposal: Revisiting Implicits

While it can seem awkward, explicitly passing implicit parameters is rare enough that I think the cost is low. And in my opinion, the case for allowing normal parameter blocks to follow implicit parameter blocks is quite strong: it aids usability in generic programming. Martin’s example illustrates the issue I’m talking about.

This kind of type dependency is extremely common in generic programming, where implicits are responsible for computing some of the types needed in a method’s parameter list. As it stands, these types must be added as type parameters, which can lead to a blowup in type parameters and reduced readability/writability of generic functions. It forces you to reason about type inference and implicit resolution at the same time, which gets very confusing. I have run into this issue a fair bit when doing generic programming.
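To make the blowup concrete, here is a minimal sketch of my own (hypothetical names, unrelated to the shapeless example below): the type computed by the implicit has to be surfaced as an extra type parameter O just so the rest of the signature can mention it.

trait Mapper[A] { type Out; def apply(a: A): Out }

// O exists only to capture m.Out for use elsewhere in the signature:
def mapAll[A, O](as: List[A])(implicit m: Mapper[A] { type Out = O }): List[O] =
  as.map(m.apply)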

Interspersing implicit and explicit parameters would help a lot here, as you can make the implicit parameter precede any explicit parameters whose types depend on it. That way you know exactly when/how those types are computed. I can give you a concrete example from shapeless, with a method that updates a value in a generic record. We can start with a naive-ish implementation:

final class RecordOps[L <: HList](val l : L) extends AnyVal with Serializable {
  // ...
  def updateWith[W, V](k: Witness)(f: V => W)
    (implicit modifier: Modifier[L, k.T, V, W]): modifier.Out = modifier(l, f)
  // ...
}

Here the resolution of the implicits depends on the modifier function’s parameter type V. As a result, V must be annotated at the use-site (I’ve just verified this), which makes this method totally unusable for nested records. In the actual implementation in shapeless, the issue is solved in a somewhat roundabout way:

final class RecordOps[L <: HList](val l : L) extends AnyVal with Serializable {
  // ...
  def updateWith[W](k: WitnessWith[FSL])(f: k.instance.Out => W)
    (implicit modifier: Modifier[L, k.T, k.instance.Out, W]): modifier.Out = modifier(l, f)
  type FSL[K] = Selector[L, K]
  // ...
}

Here WitnessWith, in combination with a Selector that identifies the type of value associated with a record key, is used to compute the type of f’s parameter so it doesn’t have to be annotated. Essentially, WitnessWith is used to hack implicit resolution earlier into the parameter list. It uses machinery that honestly I don’t fully understand, but it looks like it involves macros and implicit conversions and depends on having a singleton Witness to anchor onto. It seems rather complex and brittle. However, if explicit parameter lists could follow implicit ones, the solution would be clear:

final class RecordOps[L <: HList](val l : L) extends AnyVal with Serializable {
  // ...
  def updateWith[W](k: Witness) with (selector: Selector[L, k.T]) (f: selector.Out => W)
    with (modifier: Modifier[L, k.T, selector.Out, W]): modifier.Out = modifier(l, f)
  // ...
}

This is simpler than the current implementation, and I hate to imagine how the issue would have to be resolved in cases where WitnessWith doesn’t cut it.

3 Likes

It seems I missed the rationale for going back to the keyword being out of the parentheses, as in: we have to write (...) with (...) instead of (...)(with ...).

To me it’s a regression, because:

  • It’s very awkward to have this keyword floating between parameter lists, and no spacing convention seems to make it look better (written (...) with(...) it also looks strange).

  • It makes the following non-implicit parameter lists look weird (are they part of the with?), and I agree with @julianmichael that those are important. The purported advantage of having the keyword outside was that it made it clear that the whole parameter list is modified by the keyword, but if we allow following up with non-implicit parameter lists, we end up with exactly the same ambiguity, just at the level of parameter lists.

  • It’s not symmetric with the use sites, which will apparently have to use (...).with(...), and using this syntax at the declaration site too would feel even weirder.

A solution to all these problems was proposed a long time ago by @Blaisorblade: use special brackets. Why has this not been considered seriously (as far as I can tell)?

For instance, use [[ord: Ord[Int]]] or {{ord: Ord[Int]}}. Examples:

// anonymous:
def max[A](x: A, y: A){{ Ord[A] }}: A = ...

// named:
def max[A](x: A, y: A){{ord: Ord[A]}}: A = ...

// function type:
def m[A]: (A, A) => {{ Ord[A] }} => A = max

// type class instance:
given [A]{{ Ord[A] }} as Ord[List[A]] {
  ...
}
given listOrd[A]{{ Ord[A] }} as Ord[List[A]] = ...
7 Likes

Of course, special brackets have been proposed repeatedly. But in my book that’s by far the worst of all possible solutions since it is so inscrutable and lexically displeasing.

2 Likes

I just merged the context/with syntax into master. I am very optimistic that this is fit to be the final state but we will give ourselves some time to experiment with it to make sure.

Does it still work?
The documentation says:

  • An alias given can have type parameters and implicit parameters just like any other given, but it can only implement a single type.

It is not obvious. Will that feature be documented?

Would it be possible to implement a given over a mutable variable via something like:

var x: T = ...
given with () as T = ...

?

Does it still work?

Sure, modulo the recent syntax change:

given [ByName] as T = x

ByName is chosen for legibility, it could be any name.
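For intuition, here is a runnable Scala 2 analogue of mine showing the same semantics (names hypothetical): an implicit def, like an alias given with a dummy type parameter, is re-evaluated on every resolution, so it observes updates to the variable.

object ByNameGivenDemo {
  var x: Int = 1
  implicit def current: Int = x   // re-reads x on every implicit resolution

  def main(args: Array[String]): Unit = {
    println(implicitly[Int])      // prints 1
    x = 2
    println(implicitly[Int])      // prints 2
  }
}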

The alternative that you mentioned using with () is not supported.

2 Likes

Are there any binary compatibility risks with using anonymous aliases?
If there are no risks, it would still be useful to document it, so that a regular user can implement a plain given val:

val x: T = ...
given as T = x

Using val for all givens would be misleading since some of them are defs, and some of them are lazy vals.

1 Like

I’d like to propose a variant that I think maximizes readability + regularity. It’s mostly a combination of what’s been discussed before, with one novel idea. The gist of it is this:

given [T] if Ord[T] for Ord[List[T]]  { ... }

given if (outer: Context) for Context = ...

def foo(given Context) = { ... }

More examples can be found here.

There are four notable points:

(1) Keyword for precedes the type (instead of “as” or “of”).

(2) Keyword if is used for conditional instances (instead of “with” or “given”).

(3) Same keyword given for instances and parameters.

(4) Parentheses around given parameters.

I’ll comment a bit on each point.

(1) The for variant translates better into natural language. How to read it:

Example: given for Ord[Int] { }

Two alternatives that are mostly synonymous:

a) Using “given” as adjective, i.e. “a given instance”: Let there be defined a given instance for Ord of Int with the following implementation.

b) Using “given” as verb, i.e “x is given for y”: Let there be given for Ord of Int an instance with the following implementation.

(2) I think the idea has been that conditional instances and context parameters should have the same syntax. This is a legacy of the original implicit syntax. The new, high-level abstraction is conditional given instances. There is no notion of “parameters” here, and as such it can have a completely unique syntax. with does not have any connotations of “conditionality”, thus if.

How to read it:

Example: given [T] if Ord[T] for Ord[List[T]] { }

a) Let there be defined, for any type T, if there exists a given instance for Ord of T, a given instance for Ord of List of T with the following implementation.

b) Let there, for any type T, if there exists a given instance for Ord of T, be given for Ord of List of T an instance with the following implementation.

(roughly)

(3) I think there should be some kind of symmetry between context instances and context parameters. Using the same keyword is a simple way to achieve that. The alternative with has the following problems:

  • it’s a bit overloaded
  • it does not have any connotation of “implicitness”
  • it does not have any semantic relationship with “given”
  • used without parentheses, it has a syntax collision with new Foo with Bar
  • it requires additional syntax for context function types

(4) Using parentheses around context parameters avoids the “warts” of .given and spaces around :.
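To illustrate points (3) and (4) at both sites, here is my reading of the proposal (greet, Ctx and myCtx are hypothetical stand-ins, not from the proposal itself):

def greet(name: String)(given ctx: Ctx): String = ctx.prefix + " " + name

greet("Ada")               // ctx is resolved from the givens in scope
greet("Ada")(given myCtx)  // or supplied explicitly, mirroring the declaration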

conversions can be seen as special cases of typeclasses

No, unfortunately this is not the case. For example, inline conversions are NOT typeclasses. They cannot be used as values of a typeclass because they are NOT valid at runtime! Currently dotty makes you jump through some hoops to define a macro conversion – you must first define a subclass of Conversion that will error at runtime, an error situation that’s forced by the new encoding – and only then you can define an inline conversion:

trait MyInlineConversion[-A, +B] extends Conversion[A, B] {
  def apply(a: A): B = throw new Error("""Tried to call a macro conversion
at runtime! This Conversion value is invalid at runtime, it must be applied
and inlined by the compiler only, you shouldn't summon it, sorry""")
}

trait DslExpr[+A]
given MyInlineConversion[A, DslExpr[A]] {
  inline def apply(expr: A): DslExpr[A] = {
    // analyze expr and produce new DslExpr...
    ...
  }
}

What’s more, the Conversion typeclass rules out two more kinds of implicit conversions:

  1. Path-dependent conversions such as implicit def x(a: A): a.Out are inexpressible with Conversion; there’s no syntax to express Conversion[(a: A), a.Out] (see the sketch after this list).

  2. Path-dependent conversions that take additional implicit parameters depending on the input value, such as implicit def x(a: A)(implicit t: Get[a.T]): t.Out, are likewise inexpressible: there’s no way to append more implicit argument lists to Conversion’s def apply(a: A): B method!
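A runnable Scala 2 sketch of the first form (names are mine, purely illustrative):

object PathDependentDemo {
  trait Wrapper { type Out; def value: Out }

  // Legal today thanks to dependent method types, but there is no way to
  // express it as a Conversion value, since Out depends on the argument `w`:
  implicit def unwrap(w: Wrapper): w.Out = w.value
}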

All of the above kinds of conversions are heavily used in mainstream Scala libraries, such as Akka, Sbt & Shapeless. I point out the specific usages in the dotty issue.

I do not believe that implicit conversions deserve to be gimped by completely ruling out at least two of their currently used forms and by making the third form (macro conversions) inconvenient to define and unsafe to use (a library user always risks summoning a dud Conversion object that will error at runtime if there are macro conversions in scope). As such, I think we need to address the following issues:

  1. Create a syntax specifically for conversions, that will bring back path-dependency and would better express intent than the given Conversion definitions:
 conversion on (a: A) with (t: Get[a.T]): t.Out = ...
  2. Create a new type InlineConversion[A, B], a supertype of Conversion, that would be summonable ONLY in inline defs. That way, abstractions on top of summonable macros can be easily built, but at the same time the Conversion type will be unpolluted by inline conversions that are invalid at runtime.

It’s true that inline conversions require boilerplate. But they don’t need to expose safety holes since the base trait (MyInlineConversion in the example) can be sealed. On the other hand, Scala 2 does not have inline at all, so I don’t see a regression here.

Path dependent implicit conversions are indeed not supported. Maybe we can introduce a Conversion class that allows dependent typing at some point. Or maybe that does not work and path-dependent implicit conversions are dropped for good. I personally don’t lose sleep over this, since the use cases seem to be questionable designs anyway. I would probably object to inventing a special language feature for this; that would send exactly the wrong signal.
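For what it’s worth, a dependent variant seems expressible as an ordinary class; here is a hedged sketch of what such a class might look like (purely hypothetical, not a committed design):

abstract class DependentConversion[-A] {
  type Out
  def apply(a: A): Out
}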

They must inherit from Conversion, and because of that the user can summon them with an innocent

def x(a: A) with (Conversion[A, B]): B = a

sealed trait MyInlineConversion[A, B] extends Conversion[A, B]
given MyInlineConversion[A, B] { inline def apply ... }

Here the expression x(a) will blow up at runtime, because x will get MyInlineConversion from implicit search and apply it at runtime. Even if you could omit the stub definition, you can’t omit a “legal” upcast of an inline conversion to a non-inline runtime Conversion value.

Err, Scala 2 has macros, which are widely used in the exact same way:

implicit def dsl[A](expr: A): DslExpr[A] = macro DslExpr.inspectExpr[A]

Hard disagree. With weaker conversions, these “questionable designs” will turn from one-liners into multiple lines of slow, imperative and opaque inline code, which would benefit no one. At the very least, the inline/macro conversion issues must be addressed; I do not predict their usage will decline one bit.

I have expressed my concerns about the absence of abstract given definitions and about anonymous definitions, and I would like to press both points one last time.

In Scala 2, abstract implicits are occasionally defined by developers (here is one example from my codebase, here is another one from cats). The current syntax in Scala 2 is the following:

implicit def functor: Functor[F]

In Scala 3, the syntax will be:

def functor: Functor[F]
given as Functor[F] = functor

The new approach is a bit more verbose, but it also goes against the principle of “intent vs mechanism”: here, we have to understand and think in terms of the underlying mechanism because there is no language construct that matches our intent.
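For reference, a minimal self-contained Scala 2 rendering of the pattern under discussion (names are hypothetical):

trait Functor[F[_]] { def map[A, B](fa: F[A])(f: A => B): F[B] }

// The abstract implicit: concrete subclasses supply the instance, and it
// participates in implicit search inside the trait body.
trait FunctorLaws[F[_]] {
  implicit def functor: Functor[F]
  def identityLaw[A](fa: F[A]): Boolean =
    implicitly[Functor[F]].map(fa)(identity) == fa
}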

About anonymous instances, I agree that they are much more pleasant to define than the named ones, but we lack experience in using them. How are they referred to in error messages or by IDEs? I think we should address these questions before committing to supporting anonymous instances or we should make them experimental.

6 Likes

This scheme doesn’t work well with alias instances:

given ord as Ord[T] = otherOrd

I had raised the suggestion for extensions to only use the “Collective Extension” syntax (to simplify the language). Martin had countered that the collective extension syntax isn’t sufficient to implement type classes.

I have read about type classes in Scala 2 in the past, and have some use cases where they would probably have been useful, but I shied away from them because they looked too complicated to define/understand. I wasn’t convinced that a regular engineer would be able to understand the code, and perhaps even I wouldn’t understand the code after coming back to it six months later.

I wasn’t aware of Simulacrum, but from a language user perspective this is fairly close to what I would like type classes to be. I.e. something that is fairly concisely defined in the language where the construct is easily identified.

Hence, without understanding the exact mechanics of how it works (and I appreciate that the devil is in the detail), I was hoping that typeclasses in Scala 3 might look something more like this:

trait SemiGroup[T] {
  @infix def combine (x: T)(y: T): T
}

typeclass trait Monoid[T] extends SemiGroup[T] {
  def unit: T
}

typeclass instance Monoid[String] {
  def combine (x: String)(y: String): String = x.concat(y)
  def unit: String = ""
}

typeclass instance Monoid[Int] {
  def combine (x: Int)(y: Int): Int = x + y
  def unit: Int = 0
}

extension on (xs: List[T]) {
  def sum[T: Monoid]: T = xs.foldLeft(Monoid[T].unit)(_ combine _)
}


---


trait Functor[F[_]] {
  def map[A, B](x: F[A])(f: A => B): F[B]
}

typeclass trait Monad[F[_]] extends Functor[F] {
  def flatMap[A, B](x: F[A])(f: A => F[B]): F[B]
  def map[A, B](x: F[A])(f: A => B): F[B] = flatMap(x)(f.andThen(pure))

  def pure[A](x: A): F[A]
}

typeclass instance listMonad as Monad[List] {
  def flatMap[A, B](xs: List[A])(f: A => List[B]): List[B] =
    xs.flatMap(f)

  def pure[A](x: A): List[A] =
    List(x)
}

// ReaderMonad omitted; I didn't understand the =>> syntax.

Note: I’m not saying the code above can work; it’s just to give an indication of what I was hoping to see. With the currently proposed design, I think I would still either avoid defining typeclasses or add additional code comments to explain what is being done. Or perhaps wait for someone to port something like Simulacrum to Scala 3.

I think ultimately, I was hoping that “givens/implicits” are treated as low level language machinery that folks don’t generally need to understand or make use of, with cleaner high level abstractions sitting on top of them.

Parts of the Scala language are exceptionally nice to use, but it worries me that parts of the language (currently need to be used by library definitions/implementations) are complex enough that I can’t fully understand the code without investing considerable mental agility.

1 Like

I think that’s very close to how typeclasses effectively look right now:

trait SemiGroup[T] {
  def (x: T) combine (y: T): T
}

trait Monoid[T] extends SemiGroup[T] {
  def unit: T
}

given Monoid[String] {
  def (x: String) combine (y: String): String = x.concat(y)
  def unit: String = ""
}

given Monoid[Int] {
  def (x: Int) combine (y: Int): Int = x + y
  def unit: Int = 0
}

extension on [T: Monoid](xs: List[T]) {
  def sum: T = xs.foldLeft(summon[Monoid[T]].unit)(_ combine _)
}

Forgive me if I’ve missed some recent syntax change.

2 Likes

Yes, I agree the structure is similar (though it doesn’t match the syntax that Martin pointed me to), but please consider someone who is relatively unfamiliar with the nuances of the language and who tries to read this code. They will probably google both “given” and “extension”. The help page for extensions should be pretty straightforward, but I anticipate that the help pages for given will be a lot more complex, because givens can be used in several different ways.

The above example doesn’t say to me that Scala 3 naturally supports typeclasses. It says to me that typeclasses can be simulated using a mixture of givens, extensions, and context bounds. My perception is that this makes the language harder for “regular” users as opposed to “power” users.

2 Likes

I would rephrase that as

typeclasses can be naturally expressed in Scala using existing features

I actually don’t know of a programming language that has an explicit typeclass keyword. Rust has traits. Even Haskell doesn’t explicitly say that something is a “typeclass”. The syntax is

class Semigroup a where
  combine :: a -> a -> a

instance Semigroup Int where
  combine x y = x + y

Haskell simply doesn’t have OOP features. Seeing as “Scala combines object-oriented and functional programming in one concise, high-level language”, it follows quite naturally that a Scala class or trait can be an OO class, an FP (type)class, or both. I know there are people who disagree and say that everything has to be a separate, isolated feature. But Scala has always gone the other way, being a “scalable” language.

1 Like

How about something like this?

typeclass SemiGroup[T] {
  @infix def combine (x: T)(y: T): T
}

typeclass Monoid[T: SemiGroup] {
  def unit: T
}

lens Monoid {
  typeinstance StringMonoid implements SemiGroup[String], Monoid[String] {
    def combine (x: String)(y: String): String = x.concat(y)
    def unit: String = ""
  }

  typeinstance IntMonoid implements SemiGroup[Int], Monoid[Int] {
    def combine (x: Int)(y: Int): Int = x + y
    def unit: Int = 0
  }

  extension ListMonoidOps[T: Monoid] extends List[T] {
    def sum: T = this.foldLeft(Monoid.unit)(_ combine _)
  }
}

---

import Monoid

object Usage {
  List(1, 2).sum
  List("a", "b").sum
}

lens Usage includes Monoid

and this:

typeclass Functor[F[_]] {
  def map[A, B](x: F[A])(f: A => B): F[B]
}

typeclass Monad[F[_]: Functor] {
  def flatMap[A, B](x: F[A])(f: A => F[B]): F[B]
  def pure[A](x: A): F[A]
}

lens Monad {
  typeinstance MonadFunctor[F[_]: Monad] implements Functor[F] {
    def map[A, B](x: F[A])(f: A => B): F[B] = Monad.flatMap(x)(a => Monad.pure(f(a)))
  }

  typeinstance ListMonad implements Monad[List] {
    def flatMap[A, B](xs: List[A])(f: A => List[B]): List[B] =
      xs.flatMap(f)
    def pure[A](x: A): List[A] =
      List(x)
  }
}

---

import Monad

object Usage {
  List(1,2).map(_ + 1)
}

lens Usage includes Monad

But traits shouldn’t be both OO classes and type classes (which are not inherently FP). They are two different models of polymorphism, and conflating their quite different semantics into one construct is extremely confusing.

Scala has not always gone this way; for instance, def is an “isolated” feature that could be replaced with function values.
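To spell out the analogy with a toy example of my own:

object DefVsFunction {
  // A def and a function value are often interchangeable, yet `def` remains
  // its own construct rather than sugar over function values:
  def twice(x: Int): Int = x * 2
  val twiceV: Int => Int = x => x * 2

  val viaDef = List(1, 2, 3).map(twice)    // the def is eta-expanded
  val viaVal = List(1, 2, 3).map(twiceV)   // both yield List(2, 4, 6)
}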