Can We Wean Scala Off Implicit Conversions?

I’ll argue three points: first, that implicit conversions are evil; second, that Scala 3 might not need them anymore, since there are better alternatives in many cases; and third, that there might be a migration path by which we could limit their scope and eventually drop them altogether.

Implicit conversions are evil

Implicit conversions are evil, for several reasons.

  • They make it hard to see what goes on in code. For instance, they might hide bad surprises like side effects or complex computations without any trace in the source code (see the sketch after this list).
  • They usually give poor error diagnostics when they are not found, so code using them feels brittle and hard to change for developers not intimately familiar with the codebase.
  • Type inference could be much better if there were no implicit conversions. Better means: faster, more powerful, and more predictable.
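
To make the first point concrete, here is a small hypothetical sketch (UserId, greet, and the parsing given are made up, not from the post): nothing at the call site hints that a conversion, together with its side effect, is being inserted.

import scala.language.implicitConversions

final case class UserId(value: Int)

// A given Conversion is eligible to be inserted automatically by the compiler.
given Conversion[String, UserId] with
  def apply(s: String): UserId =
    println(s"parsing $s")   // side effect, invisible at the call site
    UserId(s.toInt)

def greet(id: UserId): String = s"hello #${id.value}"

@main def demo(): Unit =
  println(greet("42"))       // the conversion (and its println) happens here without any trace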

The last point, about type inference, was driven home to me when I worked on #9255. Quoting from the comment:

Now this PR also fixes #8311. It turns out the problem is not solvable in general, as long as we have implicit conversions, but we can solve it for the specific case of extension methods. The problem appears in the following situation:

There is a call f.m(s) where f takes context parameters that need to be inferred. Let’s say f has type (using T): R and s has type S. When inferring an argument for f, can we make use of the knowledge that S <: T?

In general the answer is no, since we might later add an implicit conversion between S and T. So S is not necessarily a subtype of T. But if m is an extension method we can do it. In this case, the call was rewritten from m.f(s) and we have already resolved m.f, so no implicit conversion can be inserted around the m anymore.

I then realized that the situation described is just one example of a pattern that appears over and over in the type inferencer. Ideally, the kind of local type inference we do works as follows:

  • we gather what we know from the context, in the form of subtype constraints
  • at certain points we solve for a particular type variable in the context, typically as late as possible.

So, a more precise context means we know more for type inference, overloading resolution, and implicit search and can do a better job.

Implicit conversions cripple this scheme. With implicit conversions there’s much less we can tell about the context, since implicit conversions might end up being inserted anywhere. So what we actually do is drop information we know from the context, in the form of temporarily forgetting parts of the expected type. Then, if there is a problem such as an ambiguous implicit or an ambiguous overload, we iteratively “re-discover” some parts of the expected type and try again. Every part we re-discover in this way means we make a decision that a subtype relationship holds and therefore an implicit conversion should not be inserted. This is a complicated dance. It’s very ad-hoc and can heal only some errors but not others. For instance, in #8311 an implicit was already inferred but it was the wrong one. That’s a situation where no healing is possible (in the general case; the extension method case has a solution).

So, without implicit conversions, we’d have better context information everywhere, we’d avoid a lot of special cases, and we’d avoid trying inference steps several times.

Scala 3 might not need implicit conversions

Scala 3 needs them much less than Scala 2 since in many cases there are better ways to do things. Many implicit conversions essentially add new members, which is now done with extension methods. Other conversions map to an expected argument type (for instance, using the “magnet pattern”). In that case the conversion can be passed as a type class and invoked explicitly at the call site.
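
For the member-adding case, a minimal before/after sketch (RichString and words are made-up names) shows the shape of the migration:

// Scala 2 style: an implicit class whose only purpose is to add `words` to String
// implicit class RichString(private val s: String) extends AnyVal {
//   def words: Array[String] = s.split("\\s+")
// }

// Scala 3 style: an extension method, with no conversion involved at all
extension (s: String)
  def words: Array[String] = s.split("\\s+")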

As an example of the second case, consider this code from the doc pages:

// Future comes from the standard library; HttpResponse and StatusCode are assumed to be defined elsewhere
import scala.concurrent.Future

object Completions {

  // The argument "magnet" type
  enum CompletionArg {
    case Error(s: String)
    case Response(f: Future[HttpResponse])
    case Status(code: Future[StatusCode])
  }
  object CompletionArg {
    given fromString:     Conversion[String, CompletionArg]               = Error(_)
    given fromFuture:     Conversion[Future[HttpResponse], CompletionArg] = Response(_)
    given fromStatusCode: Conversion[Future[StatusCode], CompletionArg]   = Status(_)
  }
  import CompletionArg._

  def complete(arg: CompletionArg) = arg match {
    case Error(s) => ...
    case Response(f) => ...
    case Status(code) => ...
  }
}

We can re-formulate complete as follows:

  def complete[T](arg: T)(using c: Conversion[T, CompletionArg]) = c(arg) match {
    case Error(s) => ...
    case Response(f) => ...
    case Status(code) => ...
  }

This still uses the concept of Conversion, but it’s no longer an implicit conversion. The conversion is applied explicitly wherever it is needed. The idea is that with the help of using clauses we can “push” the applications of conversions from user code to a few critical points in the libraries.
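
To see what this buys us at the call site, here is a small self-contained sketch along the same lines (Arg and describe are made-up stand-ins for CompletionArg and complete): the given conversion is resolved through the using clause and applied explicitly inside the method, not inserted silently at the call site.

enum Arg:
  case Text(s: String)
  case Num(n: Int)

object Arg:
  given Conversion[String, Arg] = Text(_)
  given Conversion[Int, Arg]    = Num(_)

def describe[T](arg: T)(using c: Conversion[T, Arg]): String =
  c(arg) match
    case Arg.Text(s) => s"text: $s"
    case Arg.Num(n)  => s"number: $n"

@main def runDescribe(): Unit =
  println(describe("not found"))   // Conversion[String, Arg] is passed and applied explicitly
  println(describe(404))           // Conversion[Int, Arg]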

There might be a migration path

In Scala 2, defining an implicit conversion requires a language import:

import scala.language.implicitConversions

Arguably, the language import should be at the use site instead. That’s where the surprises happen, and that’s where a language import could be a useful hint that something hidden goes on. So in Scala 3.0 we also flag the use of implicit conversions as a feature warning if no implicitConversions language import is given. Some common conversions coming from the Scala 2 library are in a special allow list and do not lead to warnings. This is what’s currently implemented.

Missing language imports give warnings, but in this case we could tighten the rule and make it an error if an implicit conversion is inserted in code that is not under the implicitConversions language import. This could be done for 3.1. The handful of common Scala standard library conversions would stay exempted.

So in 3.1 we’d have a situation where insertions of implicit conversions are errors unless there is a language import of implicitConversions. Then in 3.2 we could turn things around and not even look for implicit conversions unless there’s the language import. This could inform type inference: we’d have simpler and stronger type inference algorithms if the language import is not given. At this point we also need to rewrite the standard library to drop any of the conversions that were previously exempted.

That’s about as far as we need to plan ahead. Over time, implicit conversions might become a curious dialect feature, a bit like XML literals are now. And maybe one day the community will feel that their usefulness no longer warrants the maintenance cost. Or not. It does not really matter. The important point would be that mainline code without the language import does not use implicit conversions and gets better type inference in return.

Discussion

What do you think? The contentious issue is clearly the second one: are there good alternatives for implicit conversions in all cases? We have to go over the standard library and the dotty compiler to see whether that’s the case at least for those. It’s clear that the proposal will not fly if code using common standard library functions needs a language import to work.

One tricky point that’s already apparent is conversions such as Predef.augmentString that add a whole lot of methods from some trait to a type. E.g. augmentString adds all Seq ops to String. There are over 100 such operations and it’s a pain to repeat them for every type that gets them by a decorator conversion. We can avoid multiple forwarders by using the “push conversions into library code” trick. I.e. there could be an extension that subsumes augmentString and arrayOps and other conversions like them. Roughly like this:

extension [Coll[_], T](xs: Coll[T])(using toOps: Conversion[Coll[T], IterableOps[T]])
  def head: T = toOps(xs).head
  def tail: Coll[T] = toOps(xs).tail
  ...

So that means we have to write all forwarders only once, which might be good enough. Maybe we could even go further if we changed the language to allow exports in extensions. Something like this:

extension [Coll[_], T](xs: Coll[T])(using ops: Conversion[Coll[T], IterableOps[T]])
  export ops(xs)._

Note that at present this part is pure speculation, and should not be taken as a proposal. I just wanted to bring it up to illustrate that if we identify a common use pattern of implicit conversions that’s not well covered, we could also think of new language ideas to ameliorate the situation. As long as the new features are more predictable and modular than the implicit conversions they replace, it would be a net win.

EDIT: I was too optimistic about the timeline. As of 3.0 we still allow implicit conversions without feature warning where the conversion is in the companion object of its target type. That covers all implicit classes, and a large part of implicit constructors, as @lihaoyi defines them. This is necessary for cross building between 2.13 and 3.0. So the fastest possible migration scheme would look like this:

3.1 Flag all implicit conversions with a feature warning unless a language import is present (with the exception of some conversions in the stdlib that will go away in 3.2)
3.2 Error on all implicit conversions without language imports; rewrite stdlib
3.3 Turn on better inference where no language import is given.

19 Likes

If we can get rid of implicit conversions without too much pain, it would be great if we did. However, should implicit conversions from the stdlib (or elsewhere) give warnings or become unavailable depending on compiler flags? What’s sauce for the goose might have to be sauce for the gander. If the arguments that implicit conversions are evil because they make things less transparent and inference worse are true – and they are – then keeping them only for some whitelisted set of stdlib conversions doesn’t exactly improve transparency.

Is there a solution yet for type parameters on generic extensions? i.e. can the example of augmentString include

extension [Coll[_], T](xs: Coll[T])(using toOps: Conversion[Coll[T], IterableOps[T]])
  def head: T = toOps(xs).head
  def tail: Coll[T] = toOps(xs).tail
  def map[B](f: T => B): Coll[B] = toOps(xs).map(f)

or are other workarounds needed?

3 Likes

There is a quote that I like from RFC 1925 that can probably be applied to a lot of software design:

(12) In protocol design, perfection has been reached not when there is nothing left to add, but when there is nothing left to take away.

Since implicit conversions cause problems, and since removing them would make the language simpler (both in the implementation and for programmers), I support a migration path aimed at removing them, particularly if the common use cases can be reasonably expressed in other ways.

4 Likes

Actually, the most important conversions to StringOps and ArrayOps should already map 1-1 to extension methods, because those are pure value classes. Though, again, the limitation of type parameters on generic extensions will cause problems for ArrayOps. It seems that getting rid of that limitation should be a big priority.

There are more problematic conversions to WrappedString and ArraySeq. I always forget why they’re necessary.

To illustrate this, here’s an example where dotty and scalac handle implicit conversions differently due to implementation details of type inference: Implicit conversion is not applied when the compiler has to also infer a type parameter · Issue #8803 · lampepfl/dotty · GitHub; we can’t imitate scalac here without worsening our type inference in general.

I’ve also been experimenting with better type inference for lambdas: [Prototype] Better type inference for lambdas (e.g., as used in folds) by smarter · Pull Request #9076 · lampepfl/dotty · GitHub (this would for example allow expressions like xs.foldLeft(Nil)((acc, x) => x :: acc) to typecheck without having to ascribe the type of Nil). I haven’t yet thought about how this would interact with implicit conversions, but it’s likely to significantly complicate the implementation or even end up being a showstopper for this feature.

6 Likes

Indeed, right now dotty itself is compiled with -language:implicitConversions; if I turn that option off (as well as -Xfatal-warnings), I get 4876 warnings in dotty-compiler itself! (dotty warns at every usage of an implicit conversion). Most of those are things like int2Char and implicit conversions defined in dotty itself, but I think that shows clearly that getting rid of implicit conversions isn’t going to be easy.

As just one example: option2Iterable in the standard library is really convenient; are we prepared to lose that?

3 Likes

The problem is that people react to this sort of thing by globally enabling implicitConversions in their build (even dotty does it, as mentioned above, and it’s also neatly packaged in the often-recommended sbt plugin GitHub - typelevel/sbt-tpolecat: scalac options for the enlightened) and then not worrying about it ever again, just like macros.

4 Likes
 def complete[T](arg: T)(using c: Conversion[T, CompletionArg]) = c(arg) match {
    case Error(s) => ...
    case Response(f) => ...
    case Status(code) => ...
  }

This is not a general replacement for the previous example as long as type parameters cannot be partially applied: you can no longer provide the other type parameters explicitly without also providing the type parameter meant for the implicit Conversion[T, V], which is useless busywork (see the sketch below).

If we can partially apply type parameters, it’s still a bit odd that there’s a type parameter there that we don’t ever really expect people to pass in, but at least people will be able to ignore it and go about their lives pretending it doesn’t exist and things will “just work”.
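
A small hypothetical sketch of that busywork (Wrapped and tag are made-up names): with a single type parameter list, any explicit type application must also spell out the parameter that exists only to drive the Conversion search.

final case class Wrapped(s: String)
given Conversion[String, Wrapped] = Wrapped(_)

def tag[T, B](arg: T, label: B)(using c: Conversion[T, Wrapped]): (Wrapped, B) =
  (c(arg), label)

val ok = tag[String, Int]("hello", 1)   // must write String even though it exists only for the conversion
// val bad = tag[Int]("hello", 1)       // does not compile: type parameters cannot be partially applied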

1 Like

That’s a good use case to ponder. Option is already a subtype of IterableOnce, so we can pass an option as an argument to all functions that accept iterables or iterators. I believe option2Iterable exists mainly to support all Iterable methods on Option. So that could be done with extension methods. Other conversions to Iterable could use toSeq.

So yes, it’s possible. The downside is we need to define the extension methods, which can be a pain.
The upside is, we can tailor the extension methods to only those that make sense. For instance, I would argue that Some(1) ++ List(2, 3) does not make sense; we should write Some(1).toSeq ++ List(2, 3) for that. On the other hand, Some(1).map(f), Some(1).filter(f), Some(1).toSet all do make sense.
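
As a rough sketch of what that could look like (the extension below is illustrative, not a stdlib proposal), an extension on Option could provide just the operations we deem sensible; mkString is one method Option currently gets only through option2Iterable:

extension [A](opt: Option[A])
  def mkString(sep: String): String = opt.iterator.mkString(sep)

// Some(1).mkString(",") would then still work without any conversion,
// while Some(1) ++ List(2, 3) would no longer typecheck once option2Iterable is gone.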

2 Likes

Apart from the issue above, I do believe that implicit conversions do have some pretty good use cases:

  1. Implicit constructors, where we do not want any extension methods. For these, I place the implicits in the companion object. This is a common pattern throughout ~all my libraries (fastparse, os-lib, requests-scala, ammonite, mill, sourcecode, …)

  2. Implicit constructors + extension methods: in many cases that I do want extension methods, I also want an implicit constructor as above. e.g. I want java.lang.String to have the Seq[Char] methods on it, but I also want to be able to pass in a String where Seq[Char] is expected. Ditto for String and fastparse.P[_], org.scalajs.js.Array[T] => Seq[T], etc.

One downside of implicit conversions is that it is too easy to make use of (2.) above accidentally. e.g. in Scalatags, I want an implicit constructor String => Frag, but because of how the imports are set up I end up with a bunch of extension methods on String that I never really wanted. Similarly, there are cases where people want extension methods alone, and the implicit conversion mechanism gives them an implicit constructor they never want to use directly.

These are definitely pain points with implicit conversions, but I think we have to acknowledge that there are some use cases for (2.) that we do want, and (1.) I think is a relatively problem-free usage pattern that I have not heard any complaints about despite very heavy usage throughout all my libraries.

One option is we keep implicit conversions around and just make them more inconvenient to use, while providing special syntax for implicit constructors and extension methods, to encourage people to use those weaker features directly rather than reaching for the more powerful implicit conversions.

Another option is we simply remove implicit conversions, and tell people who want both an implicit constructor and a set of extension methods to define both explicitly. It’s possible that the use cases where people want both are uncommon enough that the added boilerplate would cause no great hardship.
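
As a rough sketch of that second option (Frag here is a stand-in for a Scalatags-like type, and the object names are made up): the implicit constructor and the extension methods are declared separately, so importing one no longer silently brings along the other.

final case class Frag(render: String)

object FragConversions:
  // the "implicit constructor": lets a String be passed where a Frag is expected
  given Conversion[String, Frag] = Frag(_)

object FragOps:
  // the extension methods, defined independently of the conversion
  extension (s: String)
    def toFrag: Frag = Frag(s)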

5 Likes

The problem is that people react to this sort of thing by globally enabling implicitConversions in their build (even dotty does it, as mentioned above, and it’s also neatly packaged in the often-recommended sbt plugin GitHub - typelevel/sbt-tpolecat: scalac options for the enlightened) and then not worrying about it ever again, just like macros.

Yes, I guess we cannot solve that problem with technical means alone. But if type inference really gets noticeably better without implicit conversions people will have an incentive to stop doing that.

6 Likes

Could this use-case not be better handled with a String | Seq[Char] union type? We could potentially go even further by allowing extension methods against such unions.

Alternatively, something like a CharSeqRepr typeclass might be used.

The standard library is special since it can co-evolve with the compiler. I.e. the exempt conversions in the library are precisely those we need to drop once missing language imports become hard errors. There’s no point in warning about these in user programs before that because we have already committed ourselves that they will go away (transparently, for user source code). Now there are a couple of tricky aspects to this:

  • It would be a binary breaking change that demands a recompile. Tasty could not help in that case.
  • We’d have to make sure that the redesigned standard library remains usable from Scala 2.

That’s why I timed it to come for 3.2, since that’s the current provisional timeframe when we would re-organize the standard library and ship it in Tasty as standard format.

Is there a solution yet for type parameters on generic extensions?

Not for 3.0, but we would like to have this at some later point. But in any case, you can work around the problem by splitting the extension into several, each with different type parameters.
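
A minimal sketch of that workaround (using List instead of the speculative IterableOps conversion above, with made-up method names): a method that needs its own type parameter moves into a separate extension whose clause carries it.

extension [T](xs: List[T])
  def second: T = xs.tail.head            // needs only the extension's own type parameter

extension [T, B](xs: List[T])             // the extra parameter B lives on this separate extension
  def mapTwice(f: T => B, g: B => B): List[B] = xs.map(f).map(g)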

1 Like

An implicit conversion is applied automatically by the compiler in three situations (the first two are sketched after this list):

  1. If an expression e has type T, and T does not conform to the expression’s expected type S.
  2. In a selection e.m with e of type T, but T defines no member m.
  3. In an application e.m(args) with e of type T, if T does define some member(s) named m, but none of these members can be applied to the arguments args.
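
A tiny hypothetical illustration of the first two situations (Meters is made up):

import scala.language.implicitConversions

final case class Meters(value: Double)
given Conversion[Int, Meters] = n => Meters(n.toDouble)

// 1. the expression's type does not conform to the expected type
val distance: Meters = 3          // 3 is converted to Meters(3.0)

// 2. a selection e.m where the type of e has no member m
val metres = 3.value              // Int has no `value` member, so 3 is converted first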

Actually, we broadly use the first case (I assume for implicit constructors).
Is there any sense in getting rid of only cases 2 and 3?

2 Likes

There’s also the use case of extension methods with explicitly provided type params, like the following from scodec which had to be encoded as an implicit class:

implicit class AsSyntax[A](private val self: Decoder[A]) extends AnyVal {
  def as[B](using iso: Iso[A, B]): Decoder[B] = self.map(iso.to)
}

val d: Decoder[(Int, String)] = ???
val e: Decoder[Foo] = d.as[Foo]

2 Likes

That’s basically what I propose: “make them more inconvenient to use” = “require a language import for all code that uses them”.

Partial application of type parameters will hopefully come soon; it is very useful in its own right anyway. Also, agreed, we should think about good ways to express the use cases you mention. They are very common in my experience also. For implicit constructors which simulate closed sums, I think union types are an interesting alternative. Taking a lexer as an example, we have a type Token and would like to be able to write string literals as tokens. We could set it up like this:

type PreToken = Token | String
extension (pt: PreToken)
  def toToken: Token = pt match
    case pt: String => Str(pt)
    case pt: Token => pt

Then use PreToken wherever we also want to accept string literals.
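
A self-contained sketch filling in the assumed pieces (Token, Str, and keyword are hypothetical names):

enum Token:
  case Str(text: String)
  case Kw(name: String)
import Token.*

type PreToken = Token | String

extension (pt: PreToken)
  def toToken: Token = pt match
    case s: String => Str(s)
    case t: Token  => t

def keyword(pt: PreToken): Token = pt.toToken

@main def demoTokens(): Unit =
  println(keyword("if"))         // a plain string literal is accepted
  println(keyword(Kw("then")))   // so is an existing Token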

5 Likes

Yes, good point. Another reason why we want to have multiple type parameter sections that can take explicit type parameters independently of one another. Everybody wants to have this. It’s just that the implementation effort required means it will have to be shipped after 3.0.

3 Likes

That’s basically what I propose: “make them more inconvenient to use” = “require a language import for all code that uses them”.

I would argue that language imports are a failed experiment: I’m not aware of anyone who has ever hit a language import warning or error and done anything other than add the language import and forget about it, and they’re usually at the top of the file in a list of 100 other imports that nobody will ever go back and look at.

Furthermore, at the usage-site an implicit conversion is nothing more than an implicit constructor + a set of extension methods, so why should one require a usage-site language import while the other doesn’t? If anything, making them inconvenient at the definition site is the right thing to do: it’s just the language imports aren’t really sufficiently inconvenient a mechanism to really affect user behavior.

If we wanted to make them inconvenient at the definition site, simply removing “implicit conversions” as a feature and making people do double work writing both an implicit constructor + a set of extension methods may be a sufficient inconvenience to make people think twice.

For implicit constructors which simulate closed sums I think union types are an interesting alternative.

Unions work for closed sum types, but then you lose the extensibility that implicit constructors provide. e.g. it is very common for my code to define additional implicit constructor T => Frag to make something compatible with Scalatags’ Frags, or T => geny.Readable or T => geny.Writable to make things compatible with those types.

10 Likes

I would be in favor of removing implicit conversions if the removal does not cause major disruptions in the library ecosystem, meaning we have indeed good alternatives for the most common use cases.

Speaking for myself, as the author of ~50K lines of library code including some DSL-style APIs, I already managed to get rid of them 100% some time ago. So even with code targeting Scala 2.12/2.13 there were no major hurdles.

Reducing language complexity and improving type inference would be a compelling combination of improvements.

5 Likes

I currently use implicit conversions as a workaround for Scala’s eager widening in covariant types (so I use invariance with implicit conversion for widening). Assuming we can fix that, I see no problem in getting rid of them.