Dotty Type classes

I like the idea, but isn’t one of the main problems that this currently doesn’t work:

def foo[F[_] : Monad : Traverse] = implicitly[Functor[F]]

If I understood correctly, this still wouldn’t work with this scheme.
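
To spell out where the ambiguity comes from: both Monad and Traverse extend Functor, so the two context-bound evidence values are competing Functor[F] candidates. A minimal sketch, assuming a cats-style hierarchy:

trait Functor[F[_]] { def map[A, B](fa: F[A])(f: A => B): F[B] }
trait Monad[F[_]] extends Functor[F] { def pure[A](a: A): F[A] }
trait Traverse[F[_]] extends Functor[F]

// Both evidence parameters are Functor[F] instances, so this line is rejected with
// "ambiguous implicit values":
//   def foo[F[_] : Monad : Traverse] = implicitly[Functor[F]]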


I believe so as well. We’d have to specify somehow that Monad and Traverse have the same Functor instance. That’s what I was referring to when I mentioned sharing constraints.

@odersky

We’d have to specify somehow that Monad and Traverse have the same Functor instance. That’s what I was referring to when I mentioned sharing constraints.

Well, we can somewhat awkwardly express it already:

def foo[F[_]]: FooApply[F] = new FooApply[F]

final class FooApply[F[_]] {
  def apply[FunctorImpl <: Functor[F], MonadImpl <: Monad[F], TraverseImpl <: Traverse[F]]()(
      implicit F: FunctorImpl, M: MonadImpl, T: TraverseImpl,
      finalImpl: HasFinal[Functor[F], FunctorImpl],
      ev1: MonadImpl <:< FunctorImpl, ev2: TraverseImpl <:< FunctorImpl) =
    ??? // implicitly[Functor[F]]
}

Where HasFinal[C, I] is a magical type class that checks whether I satisfies the canonicity requirements for C – that the methods of C are final in I and don’t call non-final members (implementable with a blackbox macro).
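
For concreteness, a rough sketch of the shape such a materializer could take (the HasFinalMacro object and its body are hypothetical; the actual finality check is only hinted at in comments):

import scala.language.experimental.macros
import scala.reflect.macros.blackbox

// evidence that I is a "canonical" implementation of the type class C
trait HasFinal[C, I]

object HasFinal {
  implicit def materialize[C, I]: HasFinal[C, I] = macro HasFinalMacro.check[C, I]
}

object HasFinalMacro {
  def check[C: c.WeakTypeTag, I: c.WeakTypeTag](c: blackbox.Context): c.Tree = {
    import c.universe._
    // A real implementation would walk the declarations of C and verify that each one
    // is final in I and calls only final members, e.g. aborting otherwise with
    //   c.abort(c.enclosingPosition, s"${weakTypeOf[I]} is not canonical for ${weakTypeOf[C]}")
    q"new ${weakTypeOf[HasFinal[C, I]]} {}"
  }
}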

Or with a few helpers:

def foo[F[_]](implicit M: CanonBy[Functor[F], Monad[F]], T: CanonBy[Functor[F], Traverse[F]]): Functor[F] =
  ??? // implicitly[Functor[F]]

class CanonBy[C, I <: C](val value: I)

object CanonBy {
  implicit def canonical[C, I <: C, CImpl <: AnyRef, IImpl]
    (implicit cImpl: Narrow.Aux[C, CImpl], iImpl: Narrow.Aux[I, IImpl],
     valid: HasFinal[C, CImpl], ev1: IImpl <:< CImpl, ev2: IImpl <:< I)
    : CanonBy[C, I] = new CanonBy(iImpl.value)
}

trait Narrow[T] {
  type Out
  def value: Out
}

object Narrow {
  type Aux[T, O] = Narrow[T] { type Out = O }
  implicit def narrowest[T, N <: T with AnyRef](implicit n: N): Narrow[T] { type Out = n.type } =
    new Narrow[T] { type Out = n.type ; val value = n }
}

The resolver would have to be extended to take into account HasFinal evidence …

Okay, so what if we reused the idea of a marker trait from Martin’s original proposal and used it to suppress the implicit ambiguity errors before specialization?

So that this:

def foo[F[_] : Monad : Traverse] = implicitly[Functor[F]]

Would work if e.g. Functor extends Typeclass.
Then, when you call foo and its type parameter gets specialized, the compiler could check whether the base instances are the same, as @kai described, and emit a compilation error if not.
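
Roughly (hypothetically, since no such check is implemented today) the idea would be:

trait Typeclass  // marker trait from Martin's original proposal

trait Functor[F[_]] extends Typeclass { def map[A, B](fa: F[A])(f: A => B): F[B] }

// With Monad[F] and Traverse[F] both extending Functor[F] as before, the ambiguity in
//   def foo[F[_] : Monad : Traverse] = implicitly[Functor[F]]
// would be suppressed rather than reported eagerly, because Functor is marked as a
// Typeclass. Only at a concrete call site such as foo[List] would the compiler check
// that both instances share the same base Functor, and report an error otherwise.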

Does that make sense?

I am not sure which marker trait you meant?

That’s exactly what we do in our plugin: https://github.com/scalaz/scalaz-plugin/blob/master/test/files/pos/ambigious_typeclass_parameters.scala

Note the Typeclass marker.

Ah I see now, you meant TypeClass as a marker.

Other possibilities would be to require that every combined context bound is locally coherent, or to introduce new syntax for coherent context bounds. See https://github.com/lampepfl/dotty/issues/4234#issuecomment-421950625

Not quite: scalaz-plugin just assumes that any marked implicit is coherent and picks an arbitrary one; it doesn’t check the implementations.

I’d personally be fine with requiring every combined context bound to be locally coherent. I’m not so sure about the proposed syntax foo[A: B & C]; it might be confusing, since & means something different when used on types.

One potential drawback I could see is that if you do end up having two different base implementations, the compile error would happen a bit removed from the actual problematic definition.
Example:

def foo[F[_]: Monad: Traverse](fa: F[Int]): F[String] = fa.map(_.toString)

class ListFunctor extends Functor[List] { def map ... }
class ListFunctor2 extends Functor[List] { def map ... }
implicit object ListFunctor extends ListFunctor
implicit object ListMonad extends ListFunctor ...
implicit object ListTraverse extends ListFunctor2 ...

foo(List(1, 2, 3)) // Error: List is not coherent in its Functor instance

Though I reckon this will pretty much never happen in real code.

[A : B & C] happens to be available as syntax, since & is normally defined only for first-order types, and B and C are higher-kinded in context bounds. But maybe that’s a bit too cute. I propose to try to require coherence for all combined bounds first and fall back to specialized syntax if that proves too restrictive.

I’d also hope that your tests would catch this.

[A : B & C] happens to be available as syntax, since & is normally defined only for first-order types, and B and C are higher-kinded in context bounds. But maybe that’s a bit too cute. I propose to try to require coherence for all combined bounds first and fall back to specialized syntax if that proves too restrictive.

Sounds good to me :+1:

I’d also hope that your tests would catch this.

Yes, that should definitely be the case.

I prepared a new proposal for extension methods which has large overlaps with LucaJCB’s. Instead of giving special significance to `this` as a parameter name, it introduces a new definition syntax that mirrors usage. Also, it ties down the schema for translating extension methods, linking it with implicit resolution. The trick of saying that extension methods are available if their enclosing object is available as an implicit is obvious in retrospect, but so far it had at least not crossed my mind to do this.
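
For illustration, a rough sketch of what such definitions look like under the proposal (the Circle example follows the one in the Dotty documentation and needs a contemporary Dotty snapshot to compile):

case class Circle(x: Double, y: Double, radius: Double)

implicit object CircleOps {
  // the definition mirrors the way the method is used: c.circumference
  def (c: Circle) circumference: Double = c.radius * math.Pi * 2
}

// circumference is applicable to Circle values because its enclosing object
// CircleOps is available as an implicit at the point of application:
//   Circle(0, 0, 1).circumference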


Reading the previously mentioned related Kotlin discussion (Type Classes for Kotlin), I found this post, which looks much closer to what I personally would expect from extension methods.
Rewriting the example from that post in a Ring-like fashion, the code may look like this (valid Kotlin code):

object Main1 {

    interface AddMonoid<T> {
        infix fun T.add(op2 : T) : T
        val zero: T
    }

    interface MullMonoid<T> {
        infix fun T.mull(op2 : T) : T
        val single: T
    }

    class SomePolynomial {

        fun <T, C> apply(a: T, b: T, c: T, d: T, explicitCtx: C) where C : AddMonoid<T>, C : MullMonoid<T> =
                // similar to Scala's {import explicitCtx._; ... }
                // FYI: `with` defined as `inline fun <T, R> with(receiver: T, block: T.() -> R): R`
                with(explicitCtx) {
                    // (a * b) + (c * d) ; `mull` and `add` are resolved as extension methods from `explicitCtx`
                    (a mull b) add (c mull d)
                }
    }

    @JvmStatic
    fun main(arg: Array<String>) {
        val pol0 = SomePolynomial()
        println("double<+: +  , *: *  >: (1 * 2) + (3 * 4): " + pol0.apply(1.0,2.0,3.0,4.0,
                object : DoubleAddMonoidImpl, DoubleMullMonoidImpl {}))
        println("double<+: max, *: min>: (1 min 2) max (3 min 4): " + pol0.apply(1.0,2.0,3.0,4.0,
                object : DoubleMaxAsAddMonoidImpl, DoubleMinAsMullMonoidImpl {}))
    }

    interface DoubleAddMonoidImpl : AddMonoid<Double> {
        override fun Double.add(op2: Double): Double = this + op2

        override val zero: Double
            get() = 0.0
    }

    interface DoubleMullMonoidImpl : MullMonoid<Double> {
        override fun Double.mull(op2: Double): Double = this * op2

        override val single: Double
            get() = 1.0

    }

    interface DoubleMaxAsAddMonoidImpl : AddMonoid<Double> {
        override fun Double.add(op2: Double): Double = Math.max(this, op2)

        override val zero: Double
            get() = Double.NEGATIVE_INFINITY
    }

    interface DoubleMinAsMullMonoidImpl : MullMonoid<Double> {
        override fun Double.mull(op2: Double): Double = Math.min(this, op2)

        override val single: Double
            get() = Double.POSITIVE_INFINITY

    }
}

Here the import of extension methods into scope happens via the `with` inline function, so if Scala allowed `import` to be used for the same purpose, I think that would be the right decision.

Also, for resolving local “coherence collisions” I would like to take the approach of “bundling as many extension implementations as possible into one parameter”. So, in contrast to pinning some implementation methods as final (as was proposed before), I would rather suggest implementing all extension classes (implementations) as traits (without abstract methods).

Translating the above example to Scala, the code may look like this:

  trait AddMonoid[T] {
    // in fact should be like `def add(@this op1: T, op2 : T) : T`
    def add(op1: T, op2 : T) : T
    val zero: T
  }

  trait MullMonoid[T] {
    // in fact should be like `def mull(@this op1: T, op2 : T) : T`
    def mull(op1: T, op2 : T) : T
    val single: T
  }

  // @ImplicitAndTargetForAutoMixing - assume to be marked by something like that
  trait DoubleAddMonoidImpl extends AddMonoid[Double] {
    override def add(op1: Double, op2: Double): Double = op1 + op2

    override val zero: Double = 0.0
  }

  // @ImplicitAndTargetForAutoMixing - assume to be marked by something like that
  trait DoubleMullMonoidImpl extends MullMonoid[Double] {
    override def mull(op1: Double, op2: Double): Double = op1 * op2

    override val single: Double = 1.0

  }

So the main point here is that I would like to mark these implementation traits for “automatic mixing”, so that the result of mixing them (like DoubleAddMonoidImpl with DoubleMullMonoidImpl) becomes a candidate for injection into an implicit argument.
More precisely, I can already do this on my own, by providing an explicit definition of that mixed impl in scope:

  trait DoubleAddMullRingImpl extends DoubleAddMonoidImpl with DoubleMullMonoidImpl
  implicit object DoubleAddMullRingImpl extends DoubleAddMullRingImpl

But I would prefer that the compiler do that job for me (and let me get rid of the boilerplate).

So in general, the main idea of this trick is that trait mixing is itself probably nearly the best way to resolve coherence issues. For example, DoubleAddMullRingImpl with DoubleMullMonoidImpl can be mixed without any clashes, since DoubleAddMullRingImpl already “absorbs” DoubleMullMonoidImpl, so no coherence issue is introduced by mixing them together. But if the “implicit impl traits” that match some implicit argument cannot be mixed automatically without errors, then the call-site code should be responsible for explicitly providing the implicit implementation for that argument type.

Rewritten in Scala, the call-site code may look like this:

  class SomePolynomial {

    def apply[T](a: T, b: T, c: T, d: T)(implicit ctx: AddMonoid[T] with MullMonoid[T]): T = {
      import ctx._
      // (a * b) + (c * d) ;
      // can not use the infix form `(a mull b) add (c mull d)` since `mull` and `add` are not extension methods ...
      // so we have to stick with the prefix form
      add(mull(a, b), mull(c, d))
    }
  }
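
For completeness, a hypothetical Demo (the object and the DoubleRingCtx name are made up, reusing SomePolynomial and the Double* traits from above) showing both resolution paths:

  object Demo {
    val pol = new SomePolynomial

    // a mixed instance in implicit scope makes the call resolve automatically
    implicit object DoubleRingCtx extends DoubleAddMonoidImpl with DoubleMullMonoidImpl

    val viaImplicit: Double = pol(1.0, 2.0, 3.0, 4.0)

    // or the caller supplies the mix explicitly when automatic mixing would be ambiguous
    val viaExplicit: Double = pol(1.0, 2.0, 3.0, 4.0)(new DoubleAddMonoidImpl with DoubleMullMonoidImpl {})
  }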

In this case the [T : AddMonoid & MullMonoid] notation looks really good to me, but I would also
reinforce its meaning by allowing implicitly[AddMonoid[T] with MullMonoid[T]] - it should actually return that “mixed implementation”.

implicitly[AddMonoid[T] with MullMonoid[T]] - it should actually return that “mixed implementation”

I don’t see how it could do that. There are two implementation objects, not one.

Yes, currently [T : AddMonoid : MullMonoid] means that two invisible implicit arguments are defined.
But I have in mind my example:

  class SomePolynomial {

    def apply[T](a: T, b: T, c: T, d: T)(implicit ctx: AddMonoid[T] with MullMonoid[T]): T = {
      import ctx._
      // (a * b) + (c * d) ;
      // can not use the infix form `(a mull b) add (c mull d)` since `mull` and `add` are not extension methods ...
      // so we have to stick with the prefix form
      add(mull(a, b), mull(c, d))
    }
  }

where that implicit method argument is defined explicitly. I just mean that it would be nice to have some construction, let’s name it [T: AddMonoid &&& MullMonoid], which would expand to that single implicit argument
(implicit ctx: AddMonoid[T] with MullMonoid[T]), so the example can be equivalently rewritten as:

  class SomePolynomial {

    def apply2[T : AddMonoid &&& MullMonoid](a: T, b: T, c: T, d: T): T  = {
      val ctx2 = implicitly[AddMonoid[T] with MullMonoid[T]]
      // `import (implicitly[AddMonoid[T] with MullMonoid[T]])._` unfortunately unavailable ...
      import ctx2._
      // (a * b) + (c * d) ;
      // can not use the infix form `(a mull b) add (c mull d)` since `mull` and `add` are not extension methods ...
      // so we have to stick with the prefix form
      add(mull(a, b), mull(c, d))
    }
  }