Updated Proposal: Revisiting Implicits

The mechanism/intent argument is oversimplified here. For a library author building DSLs or APIs, whether it is spelled given or implicit, it is a mechanism in service of their purpose.

If that is the case, how can we compare two different mechanisms/features in a formal language? I have the following criteria in mind:

  1. Syntactic simplicity
  2. Semantic simplicity
  3. Expressiveness

For syntactic simplicity, given wins a little by being shorter. However, it loses by introducing too many new syntactic forms.

For semantic simplicity, implicit is better, as it is easy to reason about its formal semantics in terms of the existing language constructs val, lazy val and def. Whether such details make a difference under compiler or JVM optimization is a separate issue: programmers should always be able to reason about program behavior from the formal semantics, since that is how they learn a new language and reason about programs. A language whose formal semantics becomes obscure or complex has a disadvantage, not a merit.

For expressiveness, implicit seems on par with given. However, implicit gives the programmer more control, so it is slightly better.

4 Likes

Users don’t want to pass arguments implicitly; that’s just a low-level implementation detail. What users actually want are Type classes.

That’s why Dotty’s givens are better: they are closer to what users actually want (Type classes) and further from the implementation detail (implicit argument passing) than implicits are in Scala 2.x.

And I personally would go even further and make givens even closer to Type classes. One thing that comes to mind is to force givens to always be anonymous (currently they can be named or anonymous).

I would be really hesitant to propose this until things are more stable; see this comment for more details:

I am not sure what you mean by “users”, but we do not often want to pass implicit parameters explicitly, and typeclasses aren’t the only use for implicits (they are one use among several).

Implicits are often used as a better solution to things like ReaderT, mainly because implicits are commutative, unlike ReaderT, and because they don’t pollute function signatures. A sketch of this is below.
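
A minimal sketch of the ReaderT point, assuming hypothetical Config and Logger context types (not from any library): both contexts are threaded implicitly, can be declared and supplied in any order, and never show up in the result type the way a ReaderT stack would.

object ReaderAlternative {
  case class Config(appName: String)
  case class Logger(prefix: String)

  // The result type stays String; a ReaderT encoding would bake both
  // contexts into the type, in a fixed stacking order.
  def greet(user: String)(implicit cfg: Config, log: Logger): String =
    s"${log.prefix} ${cfg.appName}: hello, $user"

  implicit val cfg: Config = Config("demo")
  implicit val log: Logger = Logger("[info]")

  def main(args: Array[String]): Unit =
    println(greet("alice")) // prints: [info] demo: hello, alice
}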

Dotty already has a proposal for typeclasses, so the logical conclusion of your argument would be to remove given completely.

1 Like

Spot the difference:

trait SemiGroup[T] with
  def (x: T) combine (y: T): T

trait Monoid[T] extends SemiGroup[T] with
  def unit: T

object Monoid with
  def apply[T](given Monoid[T]) = summon[Monoid[T]]

given Monoid[String] with
  def (x: String) combine (y: String): String = x.concat(y)
  def unit: String = ""

given Monoid[Int] with
  def (x: Int) combine (y: Int): Int = x + y
  def unit: Int = 0

def sum[T: Monoid](xs: List[T]): T =
  xs.foldLeft(Monoid[T].unit)(_.combine(_))

versus the Scala 2 encoding:

trait SemiGroup[T] {
  def (x: T) combine (y: T): T
}

trait Monoid[T] extends SemiGroup[T] {
  def unit: T
}

object Monoid {
  def apply[T](implicit m: Monoid[T]): Monoid[T] = m

  implicit object String extends Monoid[String] {
    def (x: String) combine (y: String): String = x.concat(y)
    def unit: String = ""
  }

  implicit object Int extends Monoid[Int] {
    def (x: Int) combine (y: Int): Int = x + y
    def unit: Int = 0
  }
}

def sum[T: Monoid](xs: List[T]): T =
  xs.foldLeft(Monoid[T].unit)(_.combine(_))
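
Either encoding is used the same way at call sites, for instance:

sum(List("a", "bc", "d")) // "abcd", via the String instance
sum(List(1, 2, 3))        // 6, via the Int instance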

3 Likes

I have a beef with implicit conversions. These are bad, as everyone knows. I have a library (not yet released, unfortunately) which provides principled, safe type conversions. There are a ton of missing details, but the gist of it is the following:

// The conversion typeclass: a function-like converter that is never
// applied implicitly by the compiler.
trait Convert[-A, +B] {
  def apply(x: A): B
}

// Extension method: x.as[B] looks up a Convert[A, B] instance and applies it.
implicit class AsMethod[A](val self: A) extends AnyVal {
  def as[B](implicit c: Convert[A, B]): B = c(self)
}

(Please excuse my Scala 2 syntax; I’m not familiar enough with Dotty yet.)

There is a typeclass Convert[A,B] which can be used to convert an expression of type A to something of type B, but the conversion must be requested explicitly with the as extension method:

val x: Int = "1729".as[Int]
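
For this to compile there must be an instance in scope; a hypothetical one (not from the library) could look like this:

// Hypothetical instance: parse a String into an Int. It is only ever
// applied when explicitly requested via .as[Int], never by the compiler.
implicit val stringToInt: Convert[String, Int] =
  new Convert[String, Int] {
    def apply(s: String): Int = s.toInt // throws on malformed input
  }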

I find it extremely useful, safe, and not overly onerous to write code this way.

I would like Convert[A,B] to extend Function[A,B] (because it clearly is a function from A to B), but if I did this in Scala 2, every implicit instance of Convert[A,B] would become a candidate for implicit conversion, which is exactly what I want to avoid.

At first blush, it would seem that the Conversion[A, B] class in the above proposal provides what I want, but it doesn’t, since a given instance of that class enables implicit conversions (which are bad). However, I don’t want to simply use Function[A, B] as my typeclass either, because I don’t want every given instance of a function to act as a conversion. (Imagine I had a given instance of type List[String] in scope. Do I really want that to provide an (explicit) conversion from Int to String? No, of course not.)

So what I need is a typeclass that sits between Function[A,B] and Conversion[A,B], one that can be used for explicit conversions (via the as extension method) but not for implicit conversions. If I had my way, this class would be called Conversion[A,B], and the implicit version in the current proposal would be called ImplicitConversion[A,B] (or, better still, UnsafeConversion[A,B]). Implementors would be strongly encouraged to provide many instances of Conversion[A,B], but not of the dangerous implicit variety.
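
In Scala 2 terms, the proposed layering might look like this sketch (using this post’s names, which are not the names in the actual Dotty proposal):

// Safe: only usable explicitly, e.g. through an .as extension method.
trait Conversion[-A, +B] extends Function[A, B]

// Dangerous: additionally eligible for implicit application by the compiler.
trait UnsafeConversion[-A, +B] extends Conversion[A, B]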

Of course, in my library the as extension method must be explicitly imported. If it were up to me, I would make it available in Predef as an extension method on every type. I would then banish implicit conversions to the dustbin of history.

2 Likes

I’d be a bit surprised if this wasn’t one of those things that gets reinvented all over the place (we’ve got an internal version as well), for the simple reason that, as you pointed out, it’s really easy to tack a .as[B] onto the end of an A when you need a B, and it avoids most of the issues with implicit conversions.

1 Like

I agree that a typeclass for explicit conversions via as would be useful. How about

trait Convertible[-A, +B] extends (A => B) with
  def (x: A) as: B
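
A hypothetical instance in the same draft syntax, for illustration (the apply inherited from A => B also needs a definition, unless the trait provides a default such as def apply(x: A): B = x.as):

given Convertible[Int, String] with
  def (x: Int) as: String = x.toString
  def apply(x: Int): String = x.as

val s: String = 1.as // "1"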

I think Convertible is a good name for this, since it describes a capability: being convertible by calling the as method. Analogously, if we wanted to avoid category-speak, a Functor would be called Mappable, since it provides a map method, instead of being called Map directly.

EDIT: Or maybe Converter, which emulates what we do for converting between Scala and Java collections.

4 Likes

I would just like to say that the narrative that implicit conversions come straight from the devil does not hold up. I have half a dozen projects where I need to lift literals into the type hierarchy of the particular project, usually Int => ConstantInt or Double => Constant. I don’t see why that has to go through an extra indirection like IntAsConstant extends Convertible[Int, ConstantInt]; what I use and need is just implicit def intAsConstant(i: Int) = new ConstantInt(i). I would prefer not to pay for extra steps and allocations.
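
For concreteness, the direct version being defended, wrapped so it compiles standalone (Constant and ConstantInt are this post’s hypothetical classes):

import scala.language.implicitConversions

object Constants {
  class Constant
  class ConstantInt(val i: Int) extends Constant

  // One-step lifting: no typeclass instance and no extra indirection,
  // and Int literals are accepted wherever a Constant is expected.
  implicit def intAsConstant(i: Int): ConstantInt = new ConstantInt(i)

  def use(c: Constant): Constant = c
  val ok = use(42) // the literal 42 is lifted implicitly
}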

4 Likes

Yes. Other examples are JSON libraries like lihaoyi’s ujson:

Js.Obj(
  "key1" -> 1,
  "key2" -> "hi",
  "key3" -> Js.Array(1, "str")
)

which achieves this API through implicit conversions of Int and String to JsValue.
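
Roughly how such a DSL is wired up (a simplified sketch, not ujson’s actual source):

import scala.language.implicitConversions

object MiniJson {
  sealed trait JsValue
  case class JsNum(value: Double) extends JsValue
  case class JsStr(value: String) extends JsValue
  case class JsArr(items: JsValue*) extends JsValue

  implicit def intToJs(i: Int): JsValue = JsNum(i)
  implicit def strToJs(s: String): JsValue = JsStr(s)

  // Heterogeneous-looking literals now typecheck as JsValue arguments:
  val arr = JsArr(1, "str") // JsArr(JsNum(1.0), JsStr("str"))
}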

Or even: in 2.13 the collections framework doubles down on implicit conversions, for example with the IterableOnce#to method

List(1).to(List)
List(1 -> 2).to(Map)
List(1).to(SortedSet)

which is really

List(1).to(IterableFactory.toFactory(List))
List(1 -> 2).to(MapFactory.toFactory(Map))
List(1).to(EvidenceIterableFactory.toFactory(SortedSet))

3 Likes

That encoding requires the converted-to type to be inferable, or else it results in ambiguous implicits:

given Convertible[Int, String] = ???
given Convertible[Int, List[Int]] = ???

val string: String = 1.as // ok
1.as // error

It would be nicer if we could have 1.as[Target] syntax, so that the caller can provide the target type inline. A sketch of one way to do that follows.
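
For example, with a Scala 2-style extension class like the AsMethod earlier in the thread, the target type is supplied explicitly and the ambiguity disappears:

implicit class AsSyntax[A](private val self: A) extends AnyVal {
  def as[B](implicit c: Convertible[A, B]): B = c(self)
}

val string = 1.as[String]    // selects Convertible[Int, String]
val list   = 1.as[List[Int]] // selects Convertible[Int, List[Int]]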

5 Likes

In my experience these conversions are very error-prone. What is the meaning of arr in this code?

val items = Seq(Js.Num(42), Js.Str("foo"))
val arr   = Js.Arr(items)

The first answer would probably be “arr is a JSON array containing the number 42 and the string ‘foo’”.

But this code actually builds a JSON array containing a single item, which is another JSON array containing the number 42 and the string “foo”.

This is because the type signature of Js.Arr is actually the following:

object Js {
  def Arr(items: Js*): Js = ...
}

So, the above code should not type check (we should have written new Js.Arr(items) — note the usage of new), but it does because Seqs can be implicitly converted to JSON arrays…
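
Assuming the Js* varargs signature above, the two readings can be made explicit at the call site:

val nested = Js.Arr(items)     // one element: items implicitly converted to a JSON array
val flat   = Js.Arr(items: _*) // splice: the items become the array’s elements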

3 Likes

I think either Convertible or Converter would work fine, although I think the latter probably reads better. I also considered Cast because it’s nice and short, but it carries connotations that I’m not entirely comfortable with. My first choice would be Conversion, but I’m not married to the name by any means. The most important thing is for it to be in the standard library sitting between the existing Conversion[A,B] and Function[A,B] types. Of course, I’d like to have an as extension method in the standard library as well, but this is much less important. It’s easy enough to put in a library and import it. At least with the typeclass included, people will begin to provide implementations.

Just to be clear: my comments regarding the evil of implicit conversions were made half in jest. Implicit conversions do have their place, and I am not seriously advocating for their wholesale removal. However, like mutability, and other constructions which are frequently and easily abused, their use should be strongly discouraged. I believe that a conversion typeclass along with an as method goes a long way to eliminating the desire and need to use implicit conversions.

1 Like

I’m fond of “Convertible”, for the simple reason that I would only have to change the import statements, as that’s what I named our version of this 🙂

I agree, I’ve found that even in the places where the target type could be inferred by the compiler, it’s much harder for humans to follow without some indication what you’re intending to convert to.


These are the proposed signatures so far: Convert[-A, +B] and Convertible[-A, +B], both quoted earlier in the thread.

Our internal one looks like this:

trait Convertible[A, B] extends (A => B)

I’ve not really felt any pain points from having invariant type parameters, but as the other two seem to be converging on contravariant input and covariant output, I’m wondering if there’s something I’m missing?
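
A small example of what the variance buys (hypothetical instance, using the -A/+B signature from the other two proposals):

trait Convertible[-A, +B] extends (A => B)

// One general instance...
val anyToString: Convertible[Any, String] =
  new Convertible[Any, String] { def apply(a: Any): String = a.toString }

// ...also satisfies more specific demands, which invariant parameters
// would reject:
val intToCharSeq: Convertible[Int, CharSequence] = anyToString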

Instead of enabling x.as[A] to implicitly apply some function f, why don’t we just write f(x)?

Generally, it’s more convenient when there’s a canonical mapping from A to B.

Sometimes it doesn’t matter:

a.map(_.as[B])
// vs
a.map(convertAToB(_))

Other times, it absolutely does:

seq.map(foo).map(bar).map(baz).as[JObject]
// vs
convertSeqToJObject(seq.map(foo).map(bar).map(baz))

Mostly, having something like this really helps when there’s a canonical conversion whose location you’d rather not have to remember, or when conversions seem trivial but are really easy to get wrong by accident (Joda to Java 8 time classes are a good example).
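
For instance, a hypothetical instance for the Joda case just mentioned (assuming joda-time on the classpath and the plain Convertible[A, B] extends (A => B) trait shown above):

implicit val jodaInstantToJava: Convertible[org.joda.time.Instant, java.time.Instant] =
  new Convertible[org.joda.time.Instant, java.time.Instant] {
    // Easy to get wrong by hand (millis vs. seconds, zone handling for
    // date-times); encoding the canonical conversion once avoids that.
    def apply(i: org.joda.time.Instant): java.time.Instant =
      java.time.Instant.ofEpochMilli(i.getMillis)
  }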

1 Like

Why? This goes against every convention we have in the collections library. Putting a Seq inside Seq.apply gives a nested Seq. Putting a Set inside Set.apply results in a nested Set. Putting any collection inside another collection’s apply method results in a nested collection. Why would putting a collection inside a JSON collection’s apply method flatten it out?

If you want a conversion, instead of a wrapping, you use .from. Just like any other collection.
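
The same convention in the 2.13 collections, for comparison:

Seq(Set(1, 2))      // wrapping: Seq(Set(1, 2)), a nested collection
Seq.from(Set(1, 2)) // conversion: Seq(1, 2)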

There’s nothing “incorrect” about having a nested JSON array, just as there is nothing incorrect about nested Seqs. The fact that the nesting is not part of the type signature is just a fact of life when dealing with JSON, and not sufficient reason to throw out all our conventions for new ones.

3 Likes

The difference is that we expect the apply methods of the collections to create new collections, primarily because collections are, of course, about collections. In contrast, we expect a JSON library to be primarily about converting things to JSON and back. Just as Js.Str(String) converts a String to a JSON string, it is completely natural to expect Js.Arr(Seq) to convert a Seq to a JSON array. Why isn’t it Js.Str.from(String), then?

I guess you can argue that the current API is better, but it is certainly not immediately obvious, so there is potential for error. On the other hand, is that error really due to implicits, or could you just as easily have the same problem without them?

1 Like

I’d say it probably isn’t directly related to implicits, as you could have the same issue without them. A more implicit-centric concern is that methods like Json.obj and Json.arr provide cut points for the parser, so if you have a typo, they narrow down the search space a bit.

On the other hand, Json4s has a DSL which is completely driven by implicit conversions, and if you misplace a comma somewhere in the middle the whole thing fails to resolve and you get very little indication where the error is. It’s really unpleasant to debug.

JSON libraries do multiple things: conversion, construction, serialization, parsing, and much more. You cannot call the wrong method in the wrong part of a library and expect to get the right output. ujson.Arr and friends are there for you to conveniently construct JSON fragments, not as a way to convert Scala datatypes to JSON.

All this is documented thoroughly, with reference docs, blog posts, and lots of online examples, all following existing standard-library conventions down to the exact same method names and signatures. If that isn’t enough, there’s literally nothing else I can give.

You are right, though, that the debate over apply vs from has nothing to do with implicit conversions.

3 Likes