Pre-SIP: Allow fully implicit conversions in Scala 3 with `into`

I think the problem here is that this type-alias approach forces a bad dilemma on libraries that want to adopt it:

  1. If you keep the class name the same and add a type alias with a new name, you keep binary compatibility but break source compatibility: all downstream code now has to change to use the type alias instead of the original class name

  2. If you change the class name but add a type alias with the old class name, you break both binary compatibility and source compatibility: all downstream code previously compiled against the old class name will stop working unless recompiled, and although method signatures won’t need to change, any calls to new MyCls or extends MyCls will be broken as well.

“Forcing all libraries using implicit conversions to change, breaking all downstream code, which will all need to be updated and/or recompiled” is simply not an acceptable migration plan. So we need to allow into as a modifier on existing types, traits, and classes, and given that there isn’t any fundamental reason not to have it, that seems like the right way out of this dilemma.
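The two migration options above can be sketched like this. This is a sketch assuming the proposal's into wrapper type; MyCls, MyClsImpl, and use are hypothetical names, not from any real library:

```scala
// Option 1: keep the class name, add an `into` alias under a new name.
// Binary compatibility is preserved, but downstream sources must be
// edited to use the new alias in their signatures:
class MyCls(val n: Int)
type MyClsArg = into[MyCls]
def use(x: MyClsArg): Unit = ()   // callers' signatures must switch to MyClsArg

// Option 2: rename the class and alias the old name.
// Old signatures keep compiling, but binary compatibility is broken, and
// `new MyCls` / `extends MyCls` no longer work since MyCls is now an alias:
class MyClsImpl(val n: Int)
type MyCls = into[MyClsImpl]
```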

:point_up:

Anything that could help with that will be a very welcome addition.

What feels a bit strange to me is that, on the one hand, into is “just” an opaque type, but if it’s the type of a method parameter it magically stops behaving like an opaque type.

3 Likes

Here’s a use case that’s important to me:

sealed abstract class Expr
case class Const(i: Int) extends Expr
case class Add(e1: Expr, e2: Expr) extends Expr

implicit def const(i: Int): Expr = Const(i)

val e = Add(1,Add(2,3))

Can something like this be supported somehow, either with this proposal or otherwise?

Ideally, my intent is to make scala.Int extend Expr, but of course I can’t change scala.Int.

I need it to be general even for non-primitive types, i.e. if Int in the example above were replaced by some arbitrary class. If I don’t control the source of that class, I can’t make it extend Expr.

There might be some relationship between this use case and the other discussion thread recently about sequence literals.

For this trivial version of the example it may be sufficient to declare

case class Add(e1: into[Expr], e2: into[Expr]) extends Expr

More generally I also want to be able to write:

val s = Seq[Expr](Add(1,2), 3, 4, 5)

To do so, would I need to change the signature of Seq.apply (which I do not control)? Or would I be able to write the following?

val s = Seq[into[Expr]](Add(1,2), 3, 4, 5)
1 Like

Yes, what you suggest would all work.

val s = Seq[into[Expr]](Add(1,2), 3, 4, 5)

would do the trick.

I see why typeclasses can be an issue, but I think you can do this with union types.
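For the Expr example upthread, a union-type version might look like the sketch below; add and wrap are hypothetical helpers introduced here for illustration, not part of any proposal:

```scala
// Union types instead of an implicit conversion: a constructor-like
// helper accepts Expr | Int and normalizes Ints to Const in one place.
sealed abstract class Expr
case class Const(i: Int) extends Expr
case class Add(e1: Expr, e2: Expr) extends Expr

def add(e1: Expr | Int, e2: Expr | Int): Expr =
  def wrap(e: Expr | Int): Expr = e match
    case i: Int  => Const(i)
    case e: Expr => e
  Add(wrap(e1), wrap(e2))

val e = add(1, add(2, 3))   // Add(Const(1), Add(Const(2), Const(3)))
```

The trade-off is that the union type must appear on every parameter that should accept both forms, much like into in the proposal.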

In a little while, history will (almost) come full circle and we will write :sweat_smile:

def foo(x: <%[A]): Unit = ???
7 Likes

Starting from an implementation and language-spec changes gives us no insight whatsoever into the impact of the language change, which is what we actually care about. We end up just talking in circles about hypotheticals, with nothing anchoring us to the reality of the ecosystem except vague hand-waving. And there is no way at all to judge whose hand-waving is more correct!

To solve that, we need to flip the process on its head, and start by studying the various libraries in the ecosystem to really understand what implicit conversions are currently being used for. That is the only way we can find a reasonable path forward that does not involve breaking the entire Scala ecosystem that relies on this feature.

Of course, we should take care not to break much of the code that relies on implicit conversions. But I’m not confident that redesigning the feature to maximize the number of previous patterns supported without changes is the right way to go either.

It is quite likely that much existing code was written a certain way simply because that is what worked with the design of implicits at the time. That does not imply it was the ideal design (nor that it was poorly designed; some awkward patterns are simply the result of awkward choices in the compiler).

Even if we spend significant effort classifying the problematic and unproblematic cases, I fear it could lead to tweaking the feature to minimize the number of previous patterns requiring changes. These kinds of arbitrary tweaks are precisely what can make a feature more confusing to work with, whereas requiring a few more migration adaptations could lead to simpler code in the long run.

That being said, the “renaming the class vs. renaming usages” dilemma does appear to be a real issue meriting further investigation (arguably suboptimal even for new code, not just for backwards compatibility).

1 Like

I am wondering about some interactions with subtyping.

given Conversion[Int, A] = ???
type X >: into[A]
val x1: X = 1: into[A] // clearly ok
val x2: X = 1 // ok?

In this snippet, for example, it is unclear to me whether a conversion should get inserted for x2 without the ascription, since the expected type X is not, strictly speaking, of the shape into[_]. I suppose this is also non-trivial from an implementation perspective.

def foo[T <: into[Int]](x: T) =
  x + 1 // ok?, or is explicit unwrapping needed?

Similarly, should we also be dropping into in method bodies for parameters like x: T?

On a minor note, I also think we should make the into opaque type covariant, unless there is a reason not to?

1 Like

val s = Seq[into[Expr]](Add(1,2), 3, 4, 5)

It’s pretty nice that the flexibility of into as a type expression lets us use it here without being the author of the Seq.apply method.

At the same time, this also raises the question of where exactly applications of into can get inferred. Suppose we write Seq(Add(1,2), 3, 4, 5); the application could still be valid by inferring into[Expr] as the type argument, but this is probably undesirable given there was no opt-in to conversions at either the definition site or the use site.
It seems unlikely in practice, but still, we might want some sort of guarantee that into won’t be inferred anywhere, or that would defeat the original purpose.

I have added the following detailed description to the doc page. I think this answers some of your questions.

Details: Valid Conversion Target Types

To make the preceding descriptions more precise: An implicit conversion is permitted without an implicitConversions language import if the target type is a valid conversion target type. A valid conversion target type is one of the following:

  • a type of the form into[T],
  • a reference p.C to a class or trait C that is declared with an into modifier,
    which can also be followed by type arguments,
  • a type alias of a valid conversion target type,
  • a match type that reduces to a valid conversion target type,
  • an annotated type T @ann where T is a valid conversion target type,
  • a refined type T {...} where T is a valid conversion target type,
  • a union T | U of two valid conversion target types T and U,
  • an intersection T & U of two valid conversion target types T and U,
  • an instance of a type parameter that is explicitly instantiated to a valid conversion target type.

Inferred type parameters do not count as valid conversion target types. For instance, consider:

  trait Token
  class Keyword(str: String)
  given Conversion[String, Keyword] = Keyword(_)

  List[into[Keyword]]("if", "then", "else")

This type-checks since the target type of the list elements is the type parameter of the List.apply method which is explicitly instantiated to into[Keyword]. On the other hand, if we continue the example as follows we get an error:

  val ifKW: into[Keyword] = "if"
  List(ifKW, "then", "else")         // error

Here, the type variable of List.apply is not explicitly instantiated, but is inferred to have type into[Keyword]. This is not enough to allow
implicit conversions on the second and third arguments.

Subclasses of into classes or traits do not count as valid conversion target types. For instance, consider:

into trait T
class C(x: Int) extends T
given Conversion[Int, C] = C(_)

def f(x: T) = ()
def g(x: C) = ()
f(1)      // ok
g(1)      // error

The call f("abc") type-checks since f’s parameter type T is into.
But the call g("abc") does not type-check since g’s parameter type C is not into. It does not matter that C extends a trait T that is into.

3 Likes

As a trial, following @lihaoyi’s suggestion, I have also added the following section to allow into as a modifier. The latest implementation PR supports this extension. It would be good to get your opinions on this part as well.

Alternative: into as a Modifier

The into scheme discussed so far strikes a nice balance between explicitness and convenience. But migrating to it from Scala 2 implicits does require major changes, since a possibly large number of function signatures have to be changed to allow conversions on the arguments. This might ultimately hold back migration to Scala 3 implicits.

To facilitate migration, we also introduce an alternative way to specify target types of implicit conversions. We allow into as a soft modifier on classes and traits. If a class or trait is declared with into, then implicit conversions into that class or trait don’t need a language import.

Example:

into class Keyword(str: String)
given stringToKeyword: Conversion[String, Keyword] = Keyword(_)

val dclKeywords = List("def", "val")
val xs: List[Keyword] = dclKeywords ++ List("if", "then", "else")

Here, the strings "if", "then", and "else" are converted to Keyword using the given conversion stringToKeyword. No feature warning or error is issued since Keyword is declared as into.

The into-as-a-modifier scheme is handy in codebases that have a small set of specific types that are intended to be the targets of implicit conversions defined in the same codebase. But it can be easily abused.

One should restrict the number of into-declared types to the absolute minimum. In particular, never make a type into to just cater for the possibility that someone might want to add an implicit conversion to it.

6 Likes

Is there any possibility that language keywords like if and while will get a type of the form into[T]?

It sounds quite complicated and unintuitive. What about instead refactoring implicit conversions so that:

def ++ (elems: into[IterableOnce[A]]): List[A]

is the exact equivalent of:

def ++[U](using Conversion[U, IterableOnce[A]])(elems: U): List[A]

with U a synthetic type variable. That is, a syntactic sugar.
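Writing the desugared form by hand is already possible today. Here is a minimal sketch with a hypothetical Wrapper type (not the real List.++ signature), showing the conversion evidence as an explicit using parameter:

```scala
// Hand-written version of the proposed desugaring: the conversion is a
// `using` parameter applied inside the method, not at the call site.
class Wrapper(val xs: List[Int]):
  def ++[U](elems: U)(using conv: Conversion[U, List[Int]]): Wrapper =
    Wrapper(xs ++ conv(elems))

given Conversion[Range, List[Int]] = _.toList
given Conversion[List[Int], List[Int]] = xs => xs   // identity case must be supplied

val w = Wrapper(List(1, 2)) ++ (3 to 5)             // w.xs == List(1, 2, 3, 4, 5)
```

Note that even passing a List[Int] directly now requires an identity Conversion in scope, which hints at the migration quirks listed under the cons.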

Pros:

  • easy to reason about
  • easy to implement (I guess)
  • allows to decommission the old scheme entirely

Cons:

  • in addition to requiring into in all relevant places, migration would in a number of cases not behave exactly the same and mandate further manual tweaks.
  • sequences require a Fragable-like mechanism
2 Likes

Note that this would not be binary compatible; perhaps it would be with erased parameters?

Kyo uses an implicit conversion to automatically lift plain values into computations. It’s an important usability improvement for its monadic APIs so users don’t need to distinguish between map and flatMap, which is a common point of friction when people are learning to use monadic APIs. Let me elaborate on how we got to the current design since it might be relevant for this discussion.

In earlier versions, the conversion used to be provided by the compiler via a type bound. Similarly to the into opaque type proposal, the compiler itself would consider a type A as a subtype of A < Any:

// note `>: A`
opaque type <[+A, -S] >: A = A | Kyo[A, S]

We eventually migrated to using an implicit def conversion to handle a problematic edge case. With the type bound, the compiler automatically nests computations when there’s a mismatch in pending effects:

// given a type that restricts the pending effects 
// (IO in this example)
def test1[A](v: A < IO): A < IO = v

// if a computation with a different pending effect set is passed, 
// the compiler automatically lifts `B < Choice` to `B < Choice < Any` 
// and then widens the computation to `B < Choice < IO` to match 
// the expected type given that `S` is contravariant.
def test2[B](v: B < Choice): B < Choice < IO = 
   test1(v)

To resolve this issue, the type bound was removed and replaced by an implicit conversion. The conversion requires an evidence WeakFlat that functions similarly to NotGiven but is defined as an alias to null and uses inline to reduce overhead. Ideally, we could eventually migrate to an erased NotGiven evidence when the feature is marked as stable.
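As a rough illustration of the pattern described above (not Kyo's actual code), a NotGiven-guarded lifting conversion can be sketched like this, with Comp and Pure as hypothetical stand-ins for the computation type:

```scala
import scala.language.implicitConversions
import scala.util.NotGiven

// Lift a plain value into a computation, but only when the value is not
// already a computation itself, so lifts are not silently nested.
sealed trait Comp[+A]
final case class Pure[+A](a: A) extends Comp[A]

implicit def lift[A](a: A)(using NotGiven[A <:< Comp[?]]): Comp[A] = Pure(a)

val c: Comp[Int] = 42   // expands to lift(42), i.e. Pure(42)
```

The NotGiven guard plays the role the post ascribes to WeakFlat: it blocks the conversion when the source type is itself a computation.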

At the time, we didn’t use Conversion because its implementation can’t be marked as inline, which introduces overhead, especially given the lack of specialization in the compiler. The other reason is the requirement that users enable conversions explicitly.

Fast forwarding, we recently introduced first-class support for computation nesting. Previously, the library had another evidence called Flat functioning as a hard requirement enforced via a macro that checked that a type was a concrete class type, which avoided soundness issues with nesting during effect handling.

With the recent changes, the library now handles nesting via internal boxing. The WeakFlat requirement is still present to prevent the edge case with pending effect set mismatches, but effect handling doesn’t require a Flat evidence anymore. Instead, when a generic type A is lifted by the implicit conversion, the library reflects on the runtime type of the value in order to introduce a boxing wrapper class if necessary, which provides proper support for nested computations without the need for Flat.

This is a central aspect of Kyo’s design and, if we’re not able to have a similar encoding once implicit def support gets removed, it would be a major issue for the project.

Looking at the proposals so far, the opaque type into approach seems to retain the properties we need but it would require changes across the codebase to mark all method parameters as convertible, which isn’t ideal.

The initial proposal of into as a modifier wouldn’t be viable for the project given that we use an opaque type for <, but the direction is more promising. The ability to trigger implicit conversion seems more of a property of the “destination” type and the behavior seems more predictable that way. Avoiding more complexity in method signatures would also be beneficial.

2 Likes

The into modifier on definitions resolves most of my major issues. It should be easy to adopt that in existing codebases by adding the modifier on the small subset of definitions which are intended to be implicit conversion targets. All of the com-lihaoyi libraries should be easy to migrate when the time comes (considering how long we’ve been maintaining backwards compat, maybe 2035???)

What concerns remain for me are largely bikeshedding:

  1. We should encourage people to use the definition-site modifier over the use-site wrapper type. Everyone probably agrees that it is bad to add implicit conversions between random unrelated types you don’t control, and that is what into[T] does. But implicit constructors for a type you do control are an exceedingly common and unproblematic pattern, which is what into trait provides. We should encourage into trait and into class and discourage into[T] unless it is really-truly necessary

  2. into is a terrible choice of keyword. We already have a name for this stuff: implicit conversion, scala.Conversion, JavaConversions, JavaConverters, scala.jdk.CollectionConverters, etc… We should use that terminology rather than choosing a new meaningless name: e.g. ConvertTo[T] as the wrapper type, and Convertable trait Foo as the definition-site modifier. The exact name can be debated, but both names should have the word convert in there somewhere for sure!

  3. One thing I discussed briefly with @odersky in person is that for many (most?) implicit conversions, the user explicitly does not want them to add extension methods to the type. e.g. Just because I let you implicitly convert String => os.Path does not mean I want you to write "/tmp".segments! I believe this kind of incidental-extension-method-adding was the cause of a lot of the confusion in e.g. JavaConversions. In the most common cases implicit conversions can live in the companion object, which makes this problem go away, but for “orphan” conversions it could be worth cracking down on this behavior. There are still cases where you want the implicit conversion to also add extension methods (e.g. Array => Seq, String => fastparse.Parser, etc.), but those are the minority of use cases and maybe they could have an explicit opt-in for what is an uncommon and particularly sharp feature, especially since we now have first-class extension methods as a replacement for the bulk of extension-method use cases
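The member-access side effect described in point 3 can be demonstrated in current Scala 3; Path and segments here are hypothetical stand-ins, not os.Path's real API:

```scala
import scala.language.implicitConversions

// A conversion intended only for convenient argument passing...
class Path(val segments: List[String])
given Conversion[String, Path] =
  s => Path(s.split('/').toList.filter(_.nonEmpty))

def open(p: Path): Unit = ()

open("/tmp/x")               // intended use: conversion at the call site
val segs = "/tmp".segments   // also compiles: the conversion leaks members too
```

The second line is legal because implicit conversions also apply when selecting a member that the original type lacks, which is exactly the behavior being questioned.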

If we’re bikeshedding, then I have the exactly opposite view. into is descriptive, unambiguous, and matches (implicitly) the use in Rust (so there’s prior art). ConvertTo works as a wrapper type because of the To but otherwise the directionality is unclear. into is obvious: you go into it, not from it out to something else. (Convertible especially seems to suggest more strongly that you turn into something else, not that something else turns into you.)

I do agree that it’s often undesirable to have conversions automatically enable methods. Having a way to pick which behavior is desired would be really nice.

I am uncertain about into MyTrait vs def foo(x: into[YourTrait]). I think the use cases are so different that I’m uneasy about recommending one over the other; rather, if both are offered, we should just clearly explain what they’re for. The first one is for me as a library creator who decides that the ergonomics of trait identity work better than typeclasses, and invites you to become MyTrait. The second one is for me as a library consumer who needs YourTrait to do stuff but, again, decides that the ergonomics of direct conversion are nicer than typeclasses, and wants to write methods that just work with less boilerplate.

6 Likes

Thanks for the detailed report. Would the second alternative, i.e. into as a modifier on the boxing wrapper class, work better than into constructors in your case?

Thank you all for your helpful comments. I see that the overall reception is positive, so I made a formal SIP proposal:

I have incorporated many of your suggestions in that document.

4 Likes