Proposed Changes and Restrictions For Implicit Conversions

This Pre-SIP proposes to go ahead with restricting implicit conversions. The topic has already been extensively discussed in Can We Wean Scala Off Implicit Conversions?. That thread was already very long, so I am opening a new thread here.

Why Drop Unrestricted Implicit Conversions?

They are simply too dangerous. They might kick in at unforeseen points, and have unforeseen effects.

Compare with normal implicit parameters: Here we know a method expects an argument to be inferred and we know its type. So we know what to watch out for and we have a good idea what the shape of the inferred term will be.

But implicit conversions can kick in anywhere two types don’t match. They might hide type errors. Even if they are used as intended, they make it hard for a reader of the code to figure out what is going on.
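For illustration, here is a small, contrived example of the kind of error hiding meant here (the conversion and the names are made up for the demonstration; nobody should define a conversion like this):

import scala.language.implicitConversions

// A seemingly harmless conversion.
given Conversion[Int, String] = _.toString

def greet(name: String): String = s"Hello, $name!"

@main def demo() =
  // Passing an Int here is almost certainly a bug, but instead of
  // reporting a type error the compiler silently inserts the
  // conversion and prints "Hello, 42!".
  println(greet(42))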

Another important reason is that unrestricted implicit conversions make type inference less precise and less efficient. The problem is that we can never be sure whether some type in the context constrains a type variable: an implicit conversion might be inserted that makes the two types unrelated. This means we cannot propagate information as thoroughly as we would like. Also, where we do propagate, we need to be prepared to backtrack and try different propagation strategies, which means duplicated work and slower type inference. The price of lower-quality type inference is incurred even if no implicit conversions are inserted at all, since the compiler cannot currently know that beforehand.

The previous thread discusses this in more detail.

Where Are Implicit Conversions Hard to Replace?

From the discussion on the previous thread emerged three areas where implicit conversions are currently difficult to replace. These are:

  1. When they are used as argument conversions to support variation in possible argument types. Such situations are pervasive in the standard library. For example:

    class Iterable[+A]:
      def concat(xs: IterableOnce[A]): Iterable[A] = ...
    

    Here, we also want to be able to pass an array to concat, or pass a string if the Iterable's element type is Char. These generalizations are achieved by defining implicit conversions from arrays and strings to sequences. A similar idea is the essence of the magnet pattern in libraries such as Akka Http; a minimal sketch of that pattern follows this list.

  2. When they support bulk extensions. The implicit conversion from array to sequence has the welcome side-effect that it makes all sequence operations available on arrays. Without it, we’d have to define forwarders for all sequence operations manually as extension methods. Since there are over a hundred such methods, this is very tedious and hard to maintain. By contrast, the implicit conversion target can inherit most of these operations from base traits, so the effort to set this up is much lower.

  3. When doing design exploration. In an exploration phase, we might welcome the looser typing provided by implicit conversions. They help keep the code short and adapt easily to changes in types. For instance, a result of some method might be a String or a Text object. If there is an implicit conversion from String to Text, we can change the result type without having to change the body of the method.
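To make point 1 above concrete, here is a minimal, self-contained version of the magnet pattern (all names are illustrative; this is not Akka Http’s actual API):

import scala.language.implicitConversions

// A "magnet" type that several argument shapes convert into.
trait ResponseMagnet:
  def body: String

object ResponseMagnet:
  given Conversion[String, ResponseMagnet] =
    s => new ResponseMagnet { def body = s }
  given Conversion[Int, ResponseMagnet] =
    code => new ResponseMagnet { def body = s"status: $code" }

// A single, overload-free method accepts every argument type
// for which a conversion into the magnet exists.
def complete(magnet: ResponseMagnet): String = magnet.body

@main def magnetDemo() =
  println(complete("OK"))  // conversion String => ResponseMagnet inserted
  println(complete(404))   // conversion Int => ResponseMagnet inserted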

How Can We Address This?

Old-style implicit conversions will be deprecated over the next versions of the language. New-style Conversion instances already require a language import, and that import needs to be given at the point where they are inserted. (Old-style conversions needed a language import at the point where they were defined, which turned out to be useless.) If no language import is given, the compiler emits a feature warning.

I propose to tighten the rules in some future Scala 3.x version so that the language import for new-style Conversion instances becomes mandatory. If none is given at the point where they are inserted, the compiler will then emit not just a feature warning but an error. This means that code without the language import cannot have implicit conversions inserted in arbitrary places. Therefore, we can use improved type-inference algorithms for such code that lead to better inferred types and better compile times.

To be able to do this, we need to address the three areas where implicit conversions were hard to replace. I propose to do this with a mixture of some smallish language extensions, library extensions, and tooling.

1. Argument Conversions

Argument conversions could in principle be replaced with typeclasses, but the change would be very disruptive to library designs and the resulting method signatures would become more complicated. Instead of going down that path, I propose to annotate parameters where argument conversions are allowed. Example:

class Iterable[+A]:
  def concat(xs: ~IterableOnce[A]): Iterable[A] = ...

Here, the ~ in front of the IterableOnce type indicates that we accept not only instances of IterableOnce but also values that can be implicitly converted to it. Implicit conversions can then be inserted for arguments of concat without a language import. This annotation makes argument conversions as predictable as other implicit arguments: the method definition tells us where they are allowed. The ~ syntax is not definite; it should be seen more as an example of how we might express this.
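For contrast, the typeclass route dismissed above might look like the following sketch (the typeclass and its instances are illustrative, not an actual library signature). Note how the method acquires an extra type parameter and a context parameter:

// A typeclass capturing "convertible to IterableOnce[A]".
trait AsIterableOnce[-S, +A]:
  def apply(s: S): IterableOnce[A]

object AsIterableOnce:
  given id[A]: AsIterableOnce[IterableOnce[A], A] = s => s
  given fromArray[A]: AsIterableOnce[Array[A], A] = _.iterator
  given fromString: AsIterableOnce[String, Char] = _.iterator

// The signature is noticeably heavier than `concat(xs: ~IterableOnce[A])`.
def concat[A, S](xs: List[A], ys: S)(using conv: AsIterableOnce[S, A]): List[A] =
  xs ++ conv(ys)

@main def typeclassDemo() =
  println(concat(List(1, 2), Array(3, 4)))  // List(1, 2, 3, 4)
  println(concat(List('a'), "bc"))          // List(a, b, c)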

2. Bulk Extensions

In Scala 3, extension methods are intended to replace implicit classes and other uses of implicit conversions. But implicit classes and conversions can support bulk extension through inheritance, whereas current extension methods cannot. I propose to change this with two language tweaks.

  1. Allow export clauses in collective extensions.
  2. Allow the qualifier of an export clause to be an expression instead of a stable identifier.

This would let us write code like:

extension [A](xs: Array[A])
  export arrayOps(xs).*

This extension makes every method of arrayOps(xs) available as an extension method on arrays. So where implicit classes use inheritance plus implicit conversions to achieve bulk extension, this new mechanism would use just aggregation, in the form provided by exports.
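For a rough idea of what this buys us, here is what the expansion might look like, written out by hand for two representative methods (the assumption being that arrayOps wraps an array in the standard scala.collection.ArrayOps decorator; the real export would generate one forwarder per member):

import scala.collection.ArrayOps

// Stand-in for the `arrayOps` used above.
def arrayOps[A](xs: Array[A]): ArrayOps[A] = new ArrayOps(xs)

// Hand-written forwarders of the kind `export arrayOps(xs).*` would generate:
extension [A](xs: Array[A])
  def headOption: Option[A] = arrayOps(xs).headOption
  def sliding(size: Int): Iterator[Array[A]] = arrayOps(xs).sliding(size)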

3. Explorative Programming

Explorative programming can still be supported by importing language.implicitConversions. To switch from exploration to stable code, one could have a tool or compiler setting that makes all implicitly inserted conversions explicit so that the language import can be dropped. To make explicit conversions nicer to use, we should offer an extension method inject on the Conversion companion object, like this:

abstract class Conversion[-T, +U] extends Function1[T, U]:
  def apply(x: T): U
object Conversion:
  // summons the given Conversion instance and applies it explicitly
  extension [T](x: T) def inject[U](using c: Conversion[T, U]): U = c(x)

Then an explicit conversion of a value a to a type B can be written a.inject[B]. Or, if B can be inferred, it’s just a.inject.
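As a self-contained illustration (Text and the given conversion are made up, and inject is defined locally here since it does not exist in the library yet):

import scala.language.implicitConversions

case class Text(s: String)

given Conversion[String, Text] = Text(_)

// Local stand-in for the proposed Conversion.inject.
extension [T](x: T) def inject[U](using c: Conversion[T, U]): U = c(x)

@main def injectDemo() =
  val t1: Text = "exploring"        // exploration: conversion inserted implicitly
  val t2 = "stable".inject[Text]    // stable code: conversion applied explicitly
  val t3: Text = "inferred".inject  // target type inferred from the expected type
  println((t1, t2, t3))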

Timeline

As a first step, we should add the necessary language and library extensions so that we can experiment with them. This could happen already in 3.1.

If things work out well, we could start making use-site language imports mandatory in 3.2.

The type inferencer could be upgraded one version later. The reason we cannot upgrade the type inferencer at the same time as the mandatory language import is that it would obscure error messages where implicit conversions were expected to be inserted.

15 Likes

This looks really great!

I think Bulk Extensions would solve my use-case of having to convert Quoted[Query[T]] into Query[T] (i.e. by tacking on all operations of Query[T] onto Quoted[Query[T]]), and Argument Conversions would solve my use-case of having to convert run(Query[T]) into run(Quoted[Query[T]]). I think Quill wouldn’t need to have implicit conversions with these two features!

My next question would be: what will this look like in the Scala AST, so that I can parse it?

This looks really nice. The export feature in particular looks really useful, even outside the scope of this proposal.

Are there any downsides to it, that I’m not seeing right now, that led to the original design where the exported members had to be stable identifiers?

1 Like

How will we be able to use an argument conversion with operators (if, while)? It is a killer feature for us. There is just no way to redefine “if” without losing readability.

I appreciate the thoughtfulness you put into identifying different situations in which implicit conversions are legitimately useful (or at least, otherwise common).

But, one criticism (in the spirit of candor) is that I’m not sure the proposed solution follows from the examples. The proposed solution is to tighten restrictions on the Conversion construct, but what about instead making it unnecessary for the legitimate use cases among those you identified?

At least #2 (bulk extensions) seems like a legitimate use case (at least in Scala 2) which ought to be replaceable by normal extension methods (though maybe the new import requirements interfere with that?).

In cases where I’ve used #1, it’s usually newtype-related; e.g. the method takes a TinyList and it’s annoying to have to say TinyList(List(foo)) everywhere, when List(foo) is obviously tiny (here meaning <= 256 elements). I think that’s probably solvable with inline in Scala 3 at the argument level, though it would probably be less clear (but safer), and a little more work at the end of the day.

Another notion to put out there is that I can’t recall ever having legitimate use cases for implicit conversions that escape a project/package (at least, which aren’t addressed in Scala 3 already). Maybe forcing implicit conversions to be package-private to some parent package would help in restricting them without any new semantic rules?

I am not convinced that we’re addressing the problem at the right point here.

Even if they are used as intended they make it hard for a reader of the code to figure out what goes on.

Let’s solve that directly. @explicit { ... } produces warnings on everything inside that isn’t explicit, including implicit conversions. Now if you get into more than trivial difficulty, you can fix it with a recompile. Furthermore, IDEs can already show what’s going on AFAIK, but if there are any pieces missing, let’s fix them.

Another important reason is that unrestricted implicit conversions make type inference less precise and less efficient. The problem is that we can never be sure whether some type in the context constrains a type variable.

Normally this isn’t an issue; the type inference works fine. If there are assumptions that are needed to make it work better in some cases, it would be better to be able to (1) have the compiler tell you, if you supply a flag, that it’s struggling here (by instrumenting the depth and breadth of the search); and (2) be able to manipulate the assumptions.

One way to do this locally is to utilize shadowing. If you import language.enableFeature.implicitConversions, you turn on the feature; import language.disableFeature.implicitConversions turns it off. You can always get the behavior you want within your scope.


I think the suggestions are fine. I’m just not convinced that they really tackle the problem. Point (3) is basically just, “But sometimes you actually do want them”, and then we’re in the same position as before. Unless we get to the point where we say, “No, Scala doesn’t support that, period,” I think it’s better to solve the pain points than to guide people away from a feature that is still there but has pain points.

1 Like

Is this something that can be generally allowed (not just inside extensions) and also for imports?

1 Like

It can be generally allowed for exports but not for imports. It’s a question of evaluation order. Say you have

import f(x).a
...
a + a

when is f(x) evaluated? I believe it would have to be evaluated at the point where the import appears, so the code would rewrite to

val $temp = f(x)
...
$temp.a + $temp.a

The alternative semantics would rewrite at the point of access, i.e.:

...
f(x).a + f(x).a

That would not only be surprising and potentially costly; it would not work at all if we import a type, since type prefixes may not be expressions. On the other hand, evaluating at the point of import does not work either, since we get into a mess for toplevel imports outside a class. So it was a wise decision to restrict import prefixes to stable identifiers, and we should keep that restriction.

For exports, we have the same question.

export f(x).{a, b}

must mean

private val $temp = f(x)
def a = $temp.a
def b = $temp.b

since otherwise we could not export types. This time we do not have a problem with that interpretation, since vals are allowed wherever exports are allowed. Note that for extension methods the two evaluation strategies lead to the same result.

extension (x: T)
  export f(x).a

rewrites to either

def a(x: T) = { val $temp = f(x); $temp.a }

or

def a(x: T) = f(x).a

but those two are equivalent.

Are there any downsides to it, that I’m not seeing right now, that led to the original design where the exported members had to be stable identifiers?

Just that we modeled exports after imports, and this would introduce a deviation between the two.

4 Likes

You can already do shadowing with language imports.

import language.implicitConversions

turns on implicit conversions and a nested

import language.implicitConversions as _

turns them off again.

2 Likes

Oh, that’s handy! I didn’t realize it would work like that.

I’m not 100% sure it should work like that. This compiles instead of complaining about no HashSet:

object Main {
  import collection.mutable.HashSet
  def main(args: Array[String]): Unit = {
    import collection.mutable.{HashSet => _}
    val a = HashSet.empty[String]
    a += "salmon"
    println(s"Hello, $a world!")    
  }
}

But, anyway, it’s good to have the capability!

It’s currently special-cased for language features and for shadowing of root imports. But yes, we might want to generalize that to all imports, i.e. a suppressing import also suppresses all imports further out.

This is awesome. And I was completely unaware this was possible. Is this explicitly documented somewhere?

This is the first time I have ever seen this. And a quick Google search on “scala implicit conversions turning them off and then back on” didn’t show anything.

1 Like

What about annotating arguments at call site instead? This way IDE could highlight the conversion annotation differently, depending on whether it fires or not. Example:

someIterable.concat(~Array(a, b, c))

inject requires a conversion, while the conversion annotation declares an optional conversion, i.e. it’s inserted by the compiler when it’s needed. That would also enable implicit conversions in ifs, while loops, calls to external libraries, etc.

5 Likes

Thinking about this more, I don’t think I’d know, when I write this, that I also want to accept things that can be implicitly converted to IterableOnce.

Scala 3 already took the more complicated type class approach with Conversion. Maybe we should just take an implicit Conversion where there are now magnets, since those signatures are already complex. They don’t really become all that more complex, in the sense that in the case of magnets you’re already going to read the documentation to understand how you’re supposed to call the method. The typeclass may make that clearer instead of less clear.

For the case where you have a non-magnet conversion like the example, maybe the call site annotation makes sense.
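Concretely, the kind of signature I mean would look something like this sketch (Magnet and route are made-up names):

trait Magnet:
  def result: String

// Make the conversion a visible context parameter instead of an
// implicitly inserted one.
def route[S](arg: S)(using conv: Conversion[S, Magnet]): String =
  conv(arg).result

given Conversion[String, Magnet] =
  s => new Magnet { def result = s }

@main def routeDemo() = println(route("GET /"))  // no language import needed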

2 Likes

What about annotating arguments at call site instead? This way IDE could highlight the conversion annotation differently, depending on whether it fires or not. Example:

Big no to this. Having to know that a conversion is expected is the same as writing it yourself. Furthermore, it’ll lead to people randomly trying to prefix things with ~ to see if it works (or whatever symbol/notation is chosen). Pretty much how I randomly throw & and * around (figuratively speaking) in Rust to try to satisfy the compiler, because I refuse to track silly colored pointer types in my head.

3 Likes

The proposed use-site ~ is exactly the same as .into() in Rust, just spelled differently.

I’m not up to date with Rust, nor do I use it regularly; I just did the expedited introduction by reading the book twice, once in 2016 and once in 2020, and then writing a small project. I think my point still stands. Anything that fosters “casting” habits is a big no for me. (I call “casting habits” the act of trying several cursory casting mechanisms to see if one makes the compiler shut up, such as as[Thing], ~thing, &thing, *thing, or .into().)

A use site annotation is indeed redundant. One can just write argument.inject or argument.into or whatever we choose to call the conversion method.

But I believe a declaration site annotation does make sense. For instance in the concat method it’s very much by design that conversions should be admitted. Those conversions exist in the same library and the otherwise necessary overloads of concat are omitted since the designers knew that a conversion was available. The same holds for the magnet pattern. The magnet type is co-designed with the conversions into it.

Though it’s not so much concat that is designed to accept a converted argument. It’s actually IterableOnce that is designed to be converted to. So maybe it makes more sense to annotate the type/class declaration rather than every individual method declaration.

1 Like

I think it’s a matter of style.

I prefer a style where a language enables you more than it restricts you. So I am in favor of allowing implicit conversions, symbolic operators, infix methods, optional braces, and so on. This is very helpful for rapidly creating code that is readable and maintainable for someone who has similar preferences.

However, when one is primarily concerned with having to figure out, modify, and clean up code created by others who may have different preferences, then I can understand preferring more restrictions. I don’t fully understand why this cannot be adequately handled by linters and code-rewriters (not to mention IDEs). But I accept that it is an important consideration for some people despite having these other tools that can help. So perhaps it does make sense for this style. It sounds reasonably consistent with this sort of concern.

However, with all of these things, I think it is a good idea to ask: is this restriction, which makes the language more complicated, necessary even in light of other tools that we have or can build (with comparable amounts of work) to solve the same problem? To me it seems like the answer would be “it’s not worth it” in this case, but then I’m not a very good judge, because my stylistic preference isn’t aligned with the goal here.

Still, I think it’s good to ask the question.

8 Likes