I’ll argue three points: first, that implicit conversions are evil; second, that Scala 3 might not need them anymore, since in many cases there are better alternatives; third, that there might be a migration path by which we could limit their scope and eventually drop them altogether.
## Implicit conversions are Evil
Implicit conversions are evil, for several reasons.
- They make it hard to see what goes on in code. For instance, they might hide bad surprises like side effects or complex computations without any trace in the source code.
- They usually give bad error diagnostics when they are not found, so to developers not intimately familiar with the codebase, code using them feels brittle and hard to change.
- Type inference could be much better if there were no implicit conversions. Better means: faster, more powerful, and more predictable.
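To make the first of these points concrete, here is a hypothetical sketch (`UserId`, `greet`, and the lookup counter are all made-up names) of a conversion hiding a side effect:

```scala
import scala.language.implicitConversions

case class UserId(value: Int)

// Stand-in for a hidden side effect such as a database lookup or logging
var lookups = 0

implicit def intToUserId(n: Int): UserId =
  lookups += 1 // side effect, invisible at the call site
  UserId(n)

def greet(id: UserId): String = s"user ${id.value}"

@main def demoHiddenEffect(): Unit =
  // Reads like a plain call; the conversion and its side effect leave no trace here.
  println(greet(42))
```

Nothing at `greet(42)` hints that a conversion, let alone extra work, happens.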
This last point was driven home to me when I worked on #9255. Quoting from the comment:
> Now this PR also fixes #8311. It turns out the problem is not solvable in general, as long as we have implicit conversions, but we can solve it for the specific case of extension methods. The problem appears in the following situation: There is a call `f.m(s)` where `f` takes context parameters that need to be inferred. Let's say `f` has type `(using T): R` and `s` has type `S`. When inferring an argument for `f`, can we make use of the knowledge that `S <: T`?
>
> In general the answer is no, since we might later add an implicit conversion between `S` and `T`. So `S` is not necessarily a subtype of `T`. But if `m` is an extension method we can do it. In this case, the call was rewritten from `m.f(s)` and we have already resolved `m.f`, so no implicit conversion can be inserted around the `m` anymore.
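As a hedged illustration of the extension-method case (all names here are hypothetical, not from the PR): once the extension method is resolved on the receiver, no conversion can be inserted around it, so the receiver's type can feed directly into the search for the context argument.

```scala
// A contravariant type class, so one instance covers all subtypes.
trait Show[-A]:
  def show(a: A): String

given Show[Any] with
  def show(a: Any): String = a.toString

extension [T](x: T)
  // `described` is resolved before the `using` argument is inferred, so the
  // constraint on T from the receiver can be used directly in the given search.
  def described(using s: Show[T]): String = s.show(x)

@main def demoExtension(): Unit =
  println(42.described)
```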
I then realized that the situation described is just one example of a pattern that appears over and over in the type inferencer. Ideally, the kind of local type inference we do works as follows:
- we gather what we know from the context, in the form of subtype constraints
- at certain points we solve for a particular type variable in the context, typically as late as possible.
So, a more precise context means type inference, overloading resolution, and implicit search know more and can do a better job.
Implicit conversions cripple this scheme. With implicit conversions there’s much less we can tell about the context, since implicit conversions might end up being inserted anywhere. So what we actually do is drop information we know from the context, in the form of temporarily forgetting parts of the expected type. Then, if there is a problem such as an ambiguous implicit or an ambiguous overload, we iteratively “re-discover” some parts of the expected type and try again. Every part we re-discover in this way means we make a decision that a subtype relationship holds and that therefore an implicit conversion should not be inserted. This is a complicated dance. It’s very ad hoc and can heal only some errors but not others. For instance, in #8311 an implicit was already inferred, but it was the wrong one. That’s a situation where no healing is possible (in the general case; the extension method case has a solution).
So, without implicit conversions, we’d have better context information everywhere, we would avoid a lot of special cases and would avoid trying inference steps several times.
## Scala 3 might not need implicit conversions
Scala 3 needs them much less than Scala 2, since in many cases there are better ways to do things. Many implicit conversions essentially add new members to a type, which is now done with extension methods. Other conversions map to an expected argument type (for instance, using the “magnet pattern”). In that case the conversion can be passed as a type class and invoked explicitly at the call site.
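For the member-adding case, a quick sketch (`shout` is a made-up example method): what Scala 2 would express as an implicit class wrapping `String` becomes a direct extension method, with no conversion and no wrapper object.

```scala
// Scala 2 equivalent would be:
//   implicit class StringOps(s: String) { def shout: String = s.toUpperCase + "!" }
extension (s: String)
  def shout: String = s.toUpperCase + "!"

@main def demoShout(): Unit =
  println("hello".shout)
```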
As an example, consider this definition from the doc pages:
```scala
object Completions {

  // The argument "magnet" type
  enum CompletionArg {
    case Error(s: String)
    case Response(f: Future[HttpResponse])
    case Status(code: Future[StatusCode])
  }
  object CompletionArg {
    given fromString: Conversion[String, CompletionArg] = Error(_)
    given fromFuture: Conversion[Future[HttpResponse], CompletionArg] = Response(_)
    given fromStatusCode: Conversion[Future[StatusCode], CompletionArg] = Status(_)
  }
  import CompletionArg._

  def complete(arg: CompletionArg) = arg match {
    case Error(s) => ...
    case Response(f) => ...
    case Status(code) => ...
  }
}
```
We can reformulate `complete` as follows:
```scala
def complete[T](arg: T)(using c: Conversion[T, CompletionArg]) = c(arg) match {
  case Error(s) => ...
  case Response(f) => ...
  case Status(code) => ...
}
```
This still uses the concept of `Conversion`, but it’s no longer an implicit conversion. The conversion is applied explicitly wherever it is needed. The idea is that, with the help of using clauses, we can “push” the applications of conversions from user code to a few critical points in the libraries.
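A self-contained sketch of the reformulated `complete` (here `HttpResponse` is a stand-in case class, the `Status` case is omitted, and the match arms return strings just so something concrete happens):

```scala
import scala.concurrent.Future

case class HttpResponse(body: String)

enum CompletionArg:
  case Error(s: String)
  case Response(f: Future[HttpResponse])

object CompletionArg:
  // Givens in the companion are found via the implicit scope of Conversion.
  given fromString: Conversion[String, CompletionArg] = Error(_)
  given fromFuture: Conversion[Future[HttpResponse], CompletionArg] = Response(_)

import CompletionArg.*

def complete[T](arg: T)(using c: Conversion[T, CompletionArg]): String =
  c(arg) match
    case Error(s)    => s"error: $s"
    case Response(_) => "response pending"

@main def demoComplete(): Unit =
  // Call sites look the same as with implicit conversions; the conversion is
  // now applied explicitly inside `complete` rather than inserted by the compiler.
  println(complete("boom"))
  println(complete(Future.successful(HttpResponse("ok"))))
```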
## There might be a migration path
In Scala 2, defining an implicit conversion requires a language import:

```scala
import scala.language.implicitConversions
```
Arguably, the language import should be at the use site instead. That’s where the surprises happen, and that’s where a language import could be a useful hint that something hidden goes on. So in Scala 3.0 we also flag the use of implicit conversions with a feature warning if no `implicitConversions` language import is given. Some common conversions coming from the Scala 2 library are on a special allow list and do not lead to warnings. This is what’s currently implemented.
Missing language imports give warnings, but in this case we could tighten the rule and make it an error if an implicit conversion is inserted in code that is not under the `implicitConversions` language import. This could be done for 3.1. The handful of common Scala standard library conversions would stay exempted.
So in 3.1 we’d have a situation where insertions of implicit conversions are errors unless there is a language import of `implicitConversions`.
Then in 3.2 we could turn things around and not even look for implicit conversions unless there’s the language import. This could inform type inference: we’d have simpler and stronger type inference algorithms if the language import is not given. At this point we would also need to rewrite the standard library to drop any of the conversions that were previously exempted.
That’s about as far as we need to plan ahead. Over time, implicit conversions might become a curious dialect feature, a bit like XML literals are now. And maybe one day the community will feel that their usefulness no longer warrants the maintenance cost. Or not. It does not really matter. The important point would be that mainline code without the language import does not use implicit conversions and gets better type inference in return.
## Discussion
What do you think? The contentious issue is clearly the second one: Are there good alternatives for implicit conversions in all cases? We have to go over the standard library and dotty compiler to see whether that’s the case at least for these. It’s clear that the proposal will not fly if code using common standard library functions needs a language import to work.
One tricky point that’s already apparent concerns conversions such as `Predef.augmentString` that add a whole lot of methods from some trait to a type. E.g. `augmentString` adds all `Seq` ops to `String`. There are over 100 such operations, and it’s a pain to repeat them for every type that gets them via a decorator conversion. We can avoid multiple forwarders by using the “push conversions into library code” trick. I.e. there could be an extension that subsumes `augmentString` and `arrayOps` and other conversions like them. Roughly like this:
```scala
extension [Coll[_], T](xs: Coll[T])(using toOps: Conversion[Coll[T], IterableOps[T]]):
  def head: T = toOps(xs).head
  def tail: Coll[T] = toOps(xs).tail
  ...
```
That means we’d have to write all forwarders only once, which might be good enough. Maybe we could even go further if we changed the language to allow exports in extensions. Something like this:
```scala
extension [Coll[_], T](xs: Coll[T])(using ops: Conversion[Coll[T], IterableOps[T]]):
  export ops(xs)._
```
Note that at present this part is pure speculation, and should not be taken as a proposal. I just wanted to bring it up to illustrate that if we identify a common use pattern of implicit conversions that’s not well covered, we could also think of new language ideas to ameliorate the situation. As long as the new features are more predictable and modular than the implicit conversions they replace, it would be a net win.
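For comparison, the write-the-forwarders-once idea can be approximated today with a small type class in place of the `Conversion` into `IterableOps` (all names here are hypothetical, and the real `IterableOps` signature is richer than this sketch):

```scala
// A minimal stand-in for "the trait whose methods we want to forward".
trait SeqLikeOps[Coll[_]]:
  def head[T](xs: Coll[T]): T
  def tail[T](xs: Coll[T]): Coll[T]

// The forwarders are written once, against the type class.
extension [Coll[_], T](xs: Coll[T])(using ops: SeqLikeOps[Coll])
  def first: T = ops.head(xs)
  def rest: Coll[T] = ops.tail(xs)

// One instance per collection type equips it with every forwarder at once.
given SeqLikeOps[List] with
  def head[T](xs: List[T]): T = xs.head
  def tail[T](xs: List[T]): List[T] = xs.tail

@main def demoForwarders(): Unit =
  println(List(1, 2, 3).first)
  println(List(1, 2, 3).rest)
```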
EDIT: I was too optimistic about the timeline. As of 3.0 we still allow implicit conversions without feature warning where the conversion is in the companion object of its target type. That covers all implicit classes, and a large part of implicit constructors, as @lihaoyi defines them. This is necessary for cross building between 2.13 and 3.0. So the fastest possible migration scheme would look like this:
- **3.1**: Flag all implicit conversions with a feature warning unless a language import is present (with the exception of some conversions in the stdlib that will go away in 3.2).
- **3.2**: Error on all implicit conversions without language imports; rewrite the stdlib.
- **3.3**: Turn on better inference where no language import is given.