Updated Proposal: Revisiting Implicits

I would just like to say that the narrative that implicit conversions come straight from the devil doesn’t hold up. I have half a dozen projects where I need to lift literals to something that belongs in the type hierarchy of the particular project, usually Int => ConstantInt or Double => Constant. I don’t see why that has to go through an extra indirection like IntAsConstant extends Convertible[Int, ConstantInt]; what I do, and what I need, is just implicit def intAsConstant(i: Int) = new ConstantInt(i). I would prefer not to pay for extra steps and allocations.
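For concreteness, the pattern being described looks roughly like this (ConstantInt, Add, and the expression hierarchy are illustrative names, not from any real codebase):

```scala
import scala.language.implicitConversions

// Hypothetical AST for a project that lifts literals into its own hierarchy
sealed trait Expr
final case class ConstantInt(value: Int) extends Expr
final case class Add(lhs: Expr, rhs: Expr) extends Expr

// The direct conversion the post describes: one allocation, no typeclass indirection
implicit def intAsConstant(i: Int): ConstantInt = ConstantInt(i)

// Both literals are lifted to ConstantInt automatically at the call site
val e: Expr = Add(1, 2)
```

The appeal is exactly what the post says: the conversion is a single method call, with no intermediate Convertible instance to allocate or resolve.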

4 Likes

Yes. Also, for example, in JSON libraries like lihaoyi’s ujson:

Js.Obj(
  "key1" -> 1,
  "key2" -> "hi",
  "key3" -> Js.Array(1, "str")
)

which achieves this API through implicit conversions of Int and String to JsValue.

Or even: in 2.13 the collections framework doubles down on implicit conversions, for example with the IterableOnce#to method

List(1).to(List)
List(1 -> 2).to(Map)
List(1).to(SortedSet)

which is really

List(1).to(IterableFactory.toFactory(List))
List(1 -> 2).to(MapFactory.toFactory(Map))
List(1).to(EvidenceIterableFactory.toFactory(SortedSet))
3 Likes

That encoding requires the converted-to type to be inferable, or else it results in ambiguous implicits:


given Convertible[Int, String] = ???
given Convertible[Int, List[Int]] = ???

val string: String = 1.as // ok
1.as // error

It would be nicer if we could have 1.as[Target] syntax, so that the caller can provide the target type inline.
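One possible encoding of that 1.as[Target] syntax in current Scala 3, assuming the Convertible typeclass shape sketched above (the names are the ones floated in this thread, not anything in the stdlib):

```scala
// Sketch: a conversion typeclass with an explicit-target `as` extension method
trait Convertible[-A, +B]:
  def convert(a: A): B

extension [A](a: A)
  def as[B](using c: Convertible[A, B]): B = c.convert(a)

given Convertible[Int, String]    = _.toString
given Convertible[Int, List[Int]] = List(_)

val s = 1.as[String]    // target given explicitly: no ambiguity
val l = 1.as[List[Int]] // likewise
```

With this encoding, a bare `1.as` still fails to compile when multiple instances apply, but the caller can always disambiguate inline with the type argument.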

5 Likes

In my experience these conversions are very error-prone. What is the meaning of arr in this code?

val items = Seq(Js.Num(42), Js.Str("foo"))
val arr   = Js.Arr(items)

The first answer would probably be “arr is a JSON array containing the number 42 and the string ‘foo’”.

But this code actually builds a JSON array containing a single item, which is another JSON array containing the number 42 and the string “foo”.

This is because the type signature of Js.Arr is actually the following:

object Js {
  def Arr(items: Js*): Js = ...
}

So, the above code should not type-check (we should have written new Js.Arr(items); note the use of new), but it does, because Seqs can be implicitly converted to JSON arrays…
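A minimal self-contained model of this behavior (a sketch, not ujson’s actual source) makes the surprise reproducible:

```scala
import scala.language.implicitConversions

// Illustrative stand-in for the ujson-style API described above
sealed trait Js
object Js:
  final case class Num(value: Double) extends Js
  final case class Str(value: String) extends Js
  final class Arr(val items: Seq[Js]) extends Js
  object Arr:
    def apply(items: Js*): Arr = new Arr(items) // varargs factory, as in the signature above

  // The conversion that makes the surprising call compile
  implicit def seqToArr(items: Seq[Js]): Arr = new Arr(items)

val items  = Seq(Js.Num(42), Js.Str("foo"))
val nested = Js.Arr(items)     // the Seq becomes ONE element: an array inside an array
val flat   = new Js.Arr(items) // what the author actually intended: two elements
```

The varargs apply has no overload taking a Seq, so the compiler silently converts the whole Seq into a single Js element, producing the nested array.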

3 Likes

I think either Convertible or Converter would work fine, although the latter probably reads better. I also considered Cast because it’s nice and short, but it carries connotations I’m not entirely comfortable with. My first choice would be Conversion, but I’m not married to the name by any means. The most important thing is for it to be in the standard library, sitting between the existing Conversion[A, B] and Function[A, B] types. Of course, I’d like to have an as extension method in the standard library as well, but that is much less important: it’s easy enough to put in a library and import. At least with the typeclass included, people will begin to provide implementations.

Just to be clear: my comments regarding the evil of implicit conversions were made half in jest. Implicit conversions do have their place, and I am not seriously advocating for their wholesale removal. However, like mutability, and other constructions which are frequently and easily abused, their use should be strongly discouraged. I believe that a conversion typeclass along with an as method goes a long way to eliminating the desire and need to use implicit conversions.

1 Like

I’m fond of “Convertible”, for the simple reason that I would only have to change the import statements, as that’s what I named our version of this :)

I agree. I’ve found that even in places where the target type could be inferred by the compiler, the code is much harder for humans to follow without some indication of what you’re converting to.


These are the proposed signatures so far:

Our internal one looks like this:

trait Convertible[A, B] extends (A => B)

I’ve not really felt any pain points from having invariant type parameters, but as the other two seem to be converging on a contravariant input and covariant output, I’m wondering if there’s something I’m missing?
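For what it’s worth, here is a sketch of what those variance annotations buy, assuming the Convertible[-A, +B] signature the other proposals use:

```scala
// With a contravariant input and covariant output, an instance defined for a
// broad input and narrow output substitutes for a more specific request.
trait Convertible[-A, +B] extends (A => B)

val anyToInt: Convertible[Any, Int] = _.hashCode

// Compiles only thanks to variance: Any >: String (contravariant input)
// and Int <: AnyVal (covariant output)
val strToNum: Convertible[String, AnyVal] = anyToInt
```

The practical effect is the same as for Function1: a single general instance can satisfy many more specific implicit searches, whereas invariant parameters would require one instance per exact type pair.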

Instead of enabling x.as[A] to implicitly apply some function f, why don’t we just write f(x)?

Generally, it’s more convenient if there’s a canonical mapping from A to B

Sometimes it doesn’t matter:

a.map(_.as[B])
// vs
a.map(convertAToB(_))

Other times, it absolutely does:

seq.map(foo).map(bar).map(baz).as[JObject]
convertSeqToJObject(seq.map(foo).map(bar).map(baz))

Mostly, having something like this really helps when there’s a canonical conversion whose location you’d rather not have to remember, or one that seems trivial but is really easy to accidentally get wrong (Joda to Java 8 time classes are a good example).

1 Like

Why? This goes against every convention we have in the collections library. Putting a Seq inside Seq.apply gives a nested Seq. Putting a Set inside Set.apply results in a nested Set. Putting any collection inside another collection’s apply method results in a nested collection. Why would putting a collection inside a JSON collection’s apply method flatten it out?

If you want a conversion, instead of a wrapping, you use .from. Just like any other collection.

There’s nothing “incorrect” about having a nested JSON array, just as there is nothing incorrect about nested Seqs. The fact that the nesting is not part of the type signature is just a fact of life when dealing with JSON, and not sufficient reason to throw out all our conventions for new ones.
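The convention in question is easy to demonstrate with the standard library itself:

```scala
val xs = Seq(1, 2)

val wrapped   = Seq(xs)      // Seq(Seq(1, 2)): one element, a nested Seq
val converted = Seq.from(xs) // Seq(1, 2): a conversion, not a wrapping
```

apply wraps its arguments as elements; from converts an existing collection. The argument above is that a JSON library’s Arr should follow the same split.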

3 Likes

The difference is that we expect the apply methods of the collections to create new collections, primarily because collections are, of course, about collections. In contrast, we expect a JSON library to be primarily about converting things to JSON and back. Just as Js.Str(String) converts a String to a JSON string, it is completely natural to expect Js.Arr(Seq) to convert a Seq to a JSON array. Why isn’t it Js.Str.from(String)?

I guess you can argue that the current API is better, but it is certainly not immediately obvious, so there is potential for error. On the other hand, is that error really due to implicits, or could you just as easily have the same problem without them?

1 Like

I’d say it probably isn’t directly related to implicits, as you could have the same issue without them. A more implicit-centric observation is that methods like Json.obj and Json.arr provide cut points for the parser, so if you have a typo, they narrow down the search space a bit.

On the other hand, Json4s has a DSL which is completely driven by implicit conversions, and if you misplace a comma somewhere in the middle the whole thing fails to resolve and you get very little indication where the error is. It’s really unpleasant to debug.

JSON libraries do multiple things: conversion, construction, serialization, parsing, and much more. You cannot call the wrong method in the wrong part of a library and expect to get the right output. ujson.Arr and friends are for conveniently constructing JSON fragments, not a way to convert Scala datatypes to JSON.

All this is documented thoroughly, with reference docs, blog posts, and lots of online examples, all following existing standard-library conventions down to the exact same method names and signatures. If that isn’t enough, there’s literally nothing else I can give.

You are right, though, that the debate over apply vs. from has nothing to do with implicit conversions.

3 Likes

What’s wrong with 1.as: Target?

Other than being a visually unusual way to specify types, nothing.

That’s a big caveat, though: I can’t remember the last time I saw code that specified the type for a method call using that idiom, but that may simply be an artifact of the style of the code I generally work with.

At the risk of going down a tangent, I’d expect Js.Arr(items) to take a Seq[Js] and return the flat JSON array, and Js.arr(items) to take varargs (Js*) and return the nested array. The first looks like a companion-object shorthand for a call to new, the second looks like a DSL-style helper, so their behavior is surprisingly counter to the intuition I’ve built up about Scala conventions.

1 Like

Compare

foo.as[Bar]
  .baz(1.as[Baz], "hi")
  .bumble()
  .as[String]
  .split(",")


((foo.as: Bar)
  .baz((1.as: Baz), "hi")
  .bumble()
  .as: String)
  .split(",")
1 Like

I see your point, but you don’t need the parens inside the argument to baz. Also, you could use pipe:

foo.pipe[Bar](_.as)
  .baz(1.as: Baz, "hi")
  .bumble()
  .pipe[String](_.as)
  .split(",")

Ok, so you’ve come up with a workaround that somewhat mitigates the deficiencies of the API. Though not fully, because .pipe[Bar](_.as) is still much worse than .as[Bar], both ergonomically and performance-wise. What is the downside of as[T] that makes it worth reaching for a workaround as heavy and complex as piping?

5 Likes

I did not say there was a downside to it. In fact, it’s pretty easy to achieve that syntax:

trait Convertible[-A, +B] extends (A => B):
  extension (x: A) def as[B0 >: B]: B0 = apply(x)

used as:

given Convertible[Int, String] = _.toString

println(1.as[String])

(It’s irrelevant to the discussion, but calling piping a heavy, complex mechanism is a stretch, don’t you think? It’s just postfix function application.)

I actually do think that it is a very heavy mechanism to pull in for what we’re trying to achieve. And the whole chain of features and logic that goes into arriving at the expression 1.pipe[String](_.as) is very kludgy, IMO.

We start with a task: “I want to convert 1 to a String.”

-> I then use the fact that conversions in Scala 3 are done via x.as, so I use x.as.
-> I then see an ambiguous-implicit error saying there are multiple implicits that provide an as method.
-> If I am very familiar with the language already, I correctly diagnose the problem: implicit resolution works in such a way that I need to ascribe the right type in order to disambiguate which implicit applies. If I am not already familiar with the language, I am instead frustrated, and this roadblock tarnishes my impression of Scala.

-> If I am more experienced, I decide to work around this by ascribing the target type with : String, and continue.
-> If I am an extreme keener, probably one of the biggest Scala nerds at my organization, I will know about obscure corners of the stdlib and the pipe higher-order method, which lets me refactor 1.as into postfix position as 1.pipe(_.as); this is more convenient because pipe lets me ascribe a type to the result of the lambda, so I ascribe String there to get 1.pipe[String](_.as).

-> After this point, any colleagues of mine who are coming from Java/JS/Kotlin/Go/Python/PHP are irritated that for basic things like converting Int to String, they have to learn about higher-order combinators like pipe, anonymous function syntax with _, decorators, ambiguous-implicit errors, etc. The colleagues who are very familiar with Scala will debate with me in PRs about the point of .pipe[String](_.as), and I have to explain the logic. We continue to have differences of opinion about whether 1.pipe[String](_.as) is better or (1.as: String) is better. As a general rule, one can tell who wrote which parts of the code by their use of 1.as: String vs 1.pipe[String](_.as).

5 Likes

I’m not sure that’s irrelevant, as pipe would need to be used extensively to make x.as: B work without adding a bunch of parens.

Also, isn’t pipe hidden behind an import of scala.util.chaining._?
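It is: pipe (and its sibling tap) come from scala.util.chaining, added in 2.13:

```scala
import scala.util.chaining._ // brings `pipe` and `tap` into scope

// postfix function application, as discussed above
val s = 1.pipe(_.toString)
```

So even the pipe-based workaround carries an import cost that the proposed stdlib as method would not.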