Updated Proposal: Revisiting Implicits

Anyway, Discourse tells me I am posting too much, which is probably true. So I’ll take a break for a while.

1 Like

How does Dotty determine purity? Is there a new rule for missing keyword inference?

A decidedly non-simple example (cats/core/src/main/scala/cats/instances/option.scala at main · typelevel/cats · GitHub):

  implicit val catsStdInstancesForOption: Traverse[Option]
    with MonadError[Option, Unit]
    with Alternative[Option]
    with CommutativeMonad[Option]
    with CoflatMap[Option]
    with Align[Option] =
    new Traverse[Option]
      with MonadError[Option, Unit]
      with Alternative[Option]
      with CommutativeMonad[Option]
      with CoflatMap[Option]
      with Align[Option] {

      def empty[A]: Option[A] = None

      def combineK[A](x: Option[A], y: Option[A]): Option[A] = x.orElse(y)

      def pure[A](x: A): Option[A] = Some(x)

      override def map[A, B](fa: Option[A])(f: A => B): Option[B] =
        fa.map(f)

      // ... remaining methods elided ...
    }

How can anyone (i.e., a human reader) tell that the right-hand side of catsStdInstancesForOption is pure?

2 Likes

This is my favorite #metoo quote.

3 Likes

The overview (in criticism 4) states:

The syntax of implicit parameters also has shortcomings. It starts with the position of implicit as a pseudo-modifier that applies to a whole parameter section instead of a single parameter. This represents an irregular case with respect to the rest of Scala’s syntax.

If this is the motivation, one would infer that this has been fixed. But apparently it is not: given also applies to the entire parameter block, not to individual parameters.
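
For concreteness, here is a minimal sketch of the irregularity the overview describes, written in Scala 2 (Context and Logger are placeholder types, not from the proposal):

class Context
class Logger

def render(x: Int)(implicit ctx: Context, log: Logger): Unit = ???

// implicit marks the whole second clause: there is no way to mark only
// log as implicit within a mixed clause, and the complaint above is that
// given clauses keep this whole-clause shape.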

There are no illustrations of this one way or the other in the examples.

The grammar does state that this is impossible (by not admitting f(x, given y)), but reading grammars is not the easiest (though it is the most precise) way to figure out what syntax is legal.

Even the grammar doesn’t say whether you can supply only part of an argument list. Under Given Clauses, in the section Multiple Given Clauses, this example is given:

f(global)(given ctx)(given sym, kind)

but it doesn’t say whether f(global)(given ctx)(given sym) with kind inferred is okay or not.

(Also, I believe that f resolving to f(global) is a mistake? If the definition were f(u: Universe = global) that would be okay, but as written there seems to be no way to resolve the first parameter block, since it isn’t given.)
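
For concreteness, a definition shape that would match the overview’s call might look like this, in the proposal’s syntax of the time (the signature and the types here are an assumption, not taken from the proposal):

class Universe; class Context; class Symbol; class Kind
val global = new Universe

def f(u: Universe)(given ctx: Context)(given sym: Symbol, kind: Kind): Unit = ()

// The overview's fully explicit call:
//   f(global)(given ctx)(given sym, kind)
// What the text leaves open:
//   f(global)(given ctx)(given sym)   // is kind then inferred, or is this an error?
//   f                                 // said to resolve to f(global), even though
//                                     // the first clause is not a given clause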

Good point. Is there any argument against this, any reason to keep nested givens? If they don’t add anything, I too think they should go.

One thing I was wondering about: will the new encoding of type-parametrized implicit conversions pose a performance issue? In Scala 2, the encoding is like this:

implicit def foo[B](list: List[B]): Option[B] = list.headOption

and so the overhead is just a method call, which is almost free. Now the encoding is something like this (forgive any syntax errors):

given [B]: Conversion[List[B], Option[B]] = _.headOption

Will this create a new allocation of a Conversion object at each conversion site? That might add up to a lot of extra allocations.
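
A rough sketch of why the worry arises: with a type parameter, the given presumably expands to something def-like, so each summon would build a fresh Conversion instance (this expansion is a guess, not the actual compiler output):

// Guessed expansion of the parameterized given above:
def listHeadConversion[B]: Conversion[List[B], Option[B]] =
  new Conversion[List[B], Option[B]] {
    def apply(list: List[B]): Option[B] = list.headOption
  }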

3 Likes

Actually, it does not prove that the suggested decision is not worse.
What can we learn from other languages?
There is a well-known dirty operator named ‘goto’.
Everybody agrees that it leads to bad code in most cases, and almost everything can be done without ‘goto’.
So what have language designers done in such cases?

  • They dropped the feature
  • They put a warning into the documentation (there are cases where it is impossible to remove goto; in C, this operator allows jumping to an absolute address)

I have seen them add a safety fuse in some cases.
But I have never seen anyone deliberately puzzle others with a language expression.
Would it be normal to see goto in an expression like:

  continue *val // where val is a void pointer to an absolute address

I think it would cause bewilderment.
Why can the following be considered good?

given [Dummy]: T = x

There is an interesting argument:

I think it is undeniable. But what does it prove?
I can say that an esoteric programming language is sufficient to implement any algorithm.
Does that prove it is a good language for programming?

I’m trying to understand what those “cute evaluation tricks” are that people have been playing with implicits but shouldn’t. Could someone give an example?

2 Likes

The mechanism/intent argument is oversimplified here. For a library author building DSLs or APIs, whether it’s given or implicit, it’s a mechanism serving their purpose.

If that is the case, how can we compare two different mechanisms/features of a formal language? I have the following criteria in mind:

  1. Syntactic simplicity
  2. Semantic simplicity
  3. Expressiveness

For syntactic simplicity, given wins a little bit for being shorter. However, it loses in that it introduces too many new syntactic forms.

For semantic simplicity, implicit is better, as it’s easy to work out its formal semantics from the existing language constructs val, lazy val and def. Whether such details matter after compiler or JVM optimization is a separate issue: programmers should always be able to reason about program behavior from the formal semantics, since that is how they learn a new language and reason about programs. A language whose formal semantics becomes obscure or complex is at a disadvantage, not an advantage.
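
A minimal sketch of that point, with placeholder types (the instances themselves are only illustrative):

class Settings; class Pool; class Codec[T]

object Semantics {
  implicit val eagerSettings: Settings = new Settings     // evaluated once, eagerly
  implicit lazy val lazyPool: Pool     = new Pool         // evaluated on first use
  implicit def freshCodec[T]: Codec[T] = new Codec[T]     // re-evaluated at every summon
}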

For expressiveness, implicit seems to be on par with given. However, implicit gives more control, and is thus slightly better.

4 Likes

Users don’t want to implicitly pass arguments; that’s just a low-level implementation detail. What users actually want are type classes.

That’s why Dotty’s givens are better: they are closer to what users actually want (type classes) and further from the implementation detail (implicit argument passing) than implicits are in Scala 2.x.

And I personally would go even further and make givens even closer to type classes. One thing that comes to mind is to force givens to always be anonymous (currently they can be named or anonymous).

I would be really hesitant to propose this until things are more stable; see this comment for more details:

I am not sure what you mean by “users”, but as for us, we rarely want to pass implicit parameters explicitly, and typeclasses aren’t the only use for implicits (they are just one use).

Implicits are often used as a better solution to things like ReaderT (mainly because implicits are commutative, unlike ReaderT, and they also don’t pollute function signatures).
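
A minimal sketch of that contrast, with illustrative names (Config and the tiny Reader below are placeholders, not a real ReaderT):

case class Config(verbose: Boolean)

// Implicit-parameter style: the context never shows up in the result type,
// and several independent contexts can be added in any order.
def log(msg: String)(implicit cfg: Config): Unit =
  if (cfg.verbose) println(msg)

// Reader style: the environment is threaded through the return type, so it
// shows up in every signature and stacked environments fix an order.
case class Reader[R, A](run: R => A) {
  def map[B](f: A => B): Reader[R, B] = Reader(r => f(run(r)))
  def flatMap[B](f: A => Reader[R, B]): Reader[R, B] = Reader(r => f(run(r)).run(r))
}

def logR(msg: String): Reader[Config, Unit] =
  Reader(cfg => if (cfg.verbose) println(msg))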

Dotty already has a proposal for typeclasses, so the logical conclusion of your argument would be to remove given completely.

1 Like

Spot the difference:

trait SemiGroup[T] with
  def (x: T) combine (y: T): T

trait Monoid[T] extends SemiGroup[T] with
  def unit: T

object Monoid with
  def apply[T](given Monoid[T]) = summon[Monoid[T]]

given Monoid[String] with
  def (x: String) combine (y: String): String = x.concat(y)
  def unit: String = ""

given Monoid[Int] with
  def (x: Int) combine (y: Int): Int = x + y
  def unit: Int = 0

def sum[T: Monoid](xs: List[T]): T =
    xs.foldLeft(Monoid[T].unit)(_.combine(_))

trait SemiGroup[T] {
  def (x: T) combine (y: T): T
}

trait Monoid[T] extends SemiGroup[T] {
  def unit: T
}

object Monoid {
  def apply[T](implicit m: Monoid[T]) = implicitly[Monoid[T]]

  implicit object String extends Monoid[String] {
      def (x: String) combine (y: String): String = x.concat(y)
      def unit: String = ""
  }

  implicit object Int extends Monoid[Int] {
    def (x: Int) combine (y: Int): Int = x + y
    def unit: Int = 0
  }
}

def sum[T: Monoid](xs: List[T]): T =
    xs.foldLeft(Monoid[T].unit)(_.combine(_))

3 Likes

I have a beef with implicit conversions. They are bad, as everyone knows. I have a library (not yet released, unfortunately) which provides principled, safe type conversions. There are a ton of missing details, but the gist of it is the following:

trait Convert[-A, +B] {
  def apply(x: A): B
}
implicit class AsMethod[A](val self: A) {
  def as[B](implicit c: Convert[A, B]): B = c(self)
}

(Please excuse my Scala 2 syntax; I’m not familiar enough with Dotty yet.)

There is a typeclass Convert[A,B] which can be used for converting expressions of type A to something of type B, but the conversion must be done explicitly with the as extension method:

val x: Int = "1729".as[Int]
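
For the call above to compile, an instance along these lines would need to be in scope (this one is just illustrative, not taken from the library):

implicit val stringAsInt: Convert[String, Int] = new Convert[String, Int] {
  def apply(s: String): Int = s.toInt
}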

I find it extremely useful, safe, and not overly onerous to write code this way.

I would like for Convert[A,B] to extend Function[A,B] (because it is clearly a function from A to B) but if I were to do this in Scala 2, every implicit instance of Convert[A,B] would become a candidate for an implicit conversion, which is exactly what I want to avoid.

At first blush, it would seem that the Conversion[A, B] class in the above proposal provides what I want, but it doesn’t, since a given instance of this class allows for implicit conversions (which are bad). However, I don’t just want to use Function[A, B] as my typeclass either, because I don’t want every given instance of a function to act as a conversion (imagine I had a given instance of type List[String] in scope; since a List[String] is itself a function from Int to String, do I really want it to provide an (explicit) conversion from Int to String? No, of course not).

So what I need is a typeclass that sits in between Function[A,B] and Conversion[A,B], which can be used for explicit conversions (via the as extension method) but not for implicit conversions. If I had my way, this class would be called Conversion[A,B], and the implicit version in the current proposal would be called ImplicitConversion[A,B] (or, better still, UnsafeConversion[A,B]). Implementors would be strongly encouraged to provide many instances of Conversion[A,B], but not of the dangerous implicit variety.

Of course, in my library the as extension method must be explicitly imported. If it were up to me, I would make this available in the Predef as an extension method available on every type. I would then banish implicit conversions to the dustbin of history.

2 Likes

I’d be a bit surprised if this wasn’t one of those things that tends to appear all over the place (we’ve got an internal version of this as well), for the simple reason that, as you pointed out, it’s really easy to tack a .as[B] on the end of an A if you need a B, and it avoids most of the issues with implicit conversions.

1 Like

I agree that a typeclass for explicit conversions via as would be useful. How about:

trait Convertible[-A, +B] extends (A => B) with
  def (x: A) as: B

I think Convertible is a good name for this, since it describes a capability: being converted by calling the as method. Analogously, if we wanted to avoid category-speak, a Functor would be called Mappable, since it provides a map method, instead of being called Map directly.

EDIT: Or maybe Converter, which emulates what we do for converting between Scala and Java collections.

4 Likes

I would just like to say that the narrative that implicit conversions come straight from the devil does not hold up. I have half a dozen projects where I need to lift literals to something that belongs in the type hierarchy of the particular project, usually Int => ConstantInt or Double => Constant. I don’t see why that has to go through an extra indirection such as IntAsConstant extends Convertible[Int, ConstantInt]; what I do and need is just implicit def intAsConstant(i: Int) = new ConstantInt(i). I would prefer not to pay for extra steps and allocations.
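
Here is a self-contained sketch of that pattern (Constant, ConstantInt and ConstantDouble are stand-ins for the project-specific types):

import scala.language.implicitConversions

sealed trait Constant
final class ConstantInt(val value: Int) extends Constant
final class ConstantDouble(val value: Double) extends Constant

implicit def intAsConstant(i: Int): ConstantInt = new ConstantInt(i)
implicit def doubleAsConstant(d: Double): ConstantDouble = new ConstantDouble(d)

// Literals lift straight into the hierarchy; the only allocation is the
// Constant itself, with no wrapper typeclass instance in between.
val constants: List[Constant] = List(1, 2.5, 3)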

4 Likes

Yes, and also, for example, in JSON libraries like lihaoyi’s ujson:

Js.Obj(
  "key1" -> 1,
  "key2" -> "hi",
  "key3" -> Js.Array(1, "str")
)

which achieves this API through implicit conversions from Int and String to JsValue.

Or even: in 2.13 the collections framework doubles down on the use of implicit conversions, for example with the IterableOnce#to method

List(1).to(List)
List(1 -> 2).to(Map)
List(1).to(SortedSet)

which is really

List(1).to(IterableFactory.toFactory(List))
List(1 -> 2).to(MapFactory.toFactory(Map))
List(1).to(EvidenceIterableFactory.toFactory(SortedSet))

3 Likes

That encoding requires the converted-to type to be inferable, or else it will result in ambiguous implicits:

given Convertible[Int, String] = ???
given Convertible[Int, List[Int]] = ???

val string: String = 1.as // ok
1.as // error

It would be nicer if we could have 1.as[Target] syntax, so that the caller can provide the target type inline.

5 Likes

In my experience these conversions are very error-prone. What is the meaning of arr in this code?

val items = Seq(Js.Num(42), Js.Str("foo"))
val arr   = Js.Arr(items)

The first answer would probably be “arr is a JSON array containing the number 42 and the string “foo””.

But this code actually builds a JSON array containing a single item, which is another JSON array containing the number 42 and the string “foo”.

This is because the type signature of Js.Arr is actually the following:

object Js {
  def Arr(items: Js*): Js = ...
}

So the above code should not type-check (we should have written new Js.Arr(items); note the usage of new), but it does, because Seqs can be implicitly converted to JSON arrays…
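
Here is a self-contained toy model of why this type-checks (this is not ujson’s actual code; the implicit below stands in for the library’s Seq-to-JSON-array conversion):

import scala.language.implicitConversions

sealed trait Js
object Js {
  final case class Num(value: Double) extends Js
  final case class Str(value: String) extends Js
  final case class Arr(items: Js*)    extends Js
}

implicit def seqToJsArr(items: Seq[Js]): Js = Js.Arr(items: _*)

val items = Seq(Js.Num(42), Js.Str("foo"))
val arr   = Js.Arr(items)  // becomes Js.Arr(seqToJsArr(items)): a one-element
                           // array whose single element is the inner array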

3 Likes