Proposal: Changes to Implicit Conversions

Hello Scala Community!

This thread is the SIP Committee’s request for comments on a proposal to change how implicit conversions work in the language.

Summary of the proposal

Scala 2 already has implicit conversions, which are implicitly available instances of scala.Function1 (or “[methods] convertible to a value of that type” SLS 7.3). In short, this proposal adds a class scala.Conversion to the standard library, as a subtype of Function1. Implicit conversions are changed to be given instances of the new type. No new syntax is required. Old-style conversions will be phased out. A language flag will be required in order to either define or use “undisciplined” conversions, that is, conversions between types defined elsewhere.
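Concretely, a new-style conversion under the proposal looks like the following (a minimal sketch based on the String-to-Token example in the Dotty docs; Token and KeyWord are illustrative stand-ins):

```scala
import scala.language.implicitConversions

// illustrative stand-in types, following the Dotty docs example
class Token(val str: String)
class KeyWord(str: String) extends Token(str)

// new style: a conversion is a given instance of scala.Conversion,
// a subtype of Function1 -- no new syntax is involved
given Conversion[String, Token] = (str: String) => new KeyWord(str)

// the compiler inserts the conversion where a Token is expected
val t: Token = "if"
```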

This is one of a set of proposals that are collected under “Contextual Abstractions” in the Dotty docs.

This proposal and the others are motivated, together, in this section:

And then further material specific to conversions only is here:

Related discussions

Note: To channel discussions, we are dealing with the individual “Contextual Abstractions” proposals in separate threads. But it would be good to read and absorb all proposals there together before discussing individual parts.

The most relevant other discussions in this area are:

Note that both of these topics have now grown very long (632 posts and counting). But also, “I believe we are on the home stretch”, says Martin, so it’s appropriate to now also consider side topics such as conversions.

Because the various “Contextual Abstractions” proposals aren’t entirely independent of each other, the topic of conversions has already been touched upon repeatedly in the above threads. The following sections summarize and link to those sub-discussions.

Should conversions always be explicit?

This suggestion has been made at least twice, by @jdegoes and @jdolson:

  • Could/should we “Remove Implicit Conversions” entirely?

A very similar suggestion was also made by @jdolson here:

Martin has responded as follows:

  • Updated Proposal: Revisiting Implicits
  • He writes: “I agree that a typeclass for explicit conversions via as would be useful”, but it’s unclear if he believes this typeclass (he suggests calling it Convertible) would exist in addition to, or instead of, Conversion.

@morgen-peschke notes that he has already made use of such a typeclass; see Updated Proposal: Revisiting Implicits

Followup posts on this subject continue through December 10th.
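To make the explicit-only alternative concrete, here is a rough sketch of what such a typeclass might look like; the names Convertible and as follow Martin’s suggestion above, but the exact shape is my assumption, not a settled design:

```scala
// Sketch of the Convertible idea: a typeclass for conversions that are
// only ever invoked explicitly, via an `as` extension method.
trait Convertible[-A, +B]:
  def convert(a: A): B

extension [A](a: A)
  def as[B](using c: Convertible[A, B]): B = c.convert(a)

given Convertible[Int, String] with
  def convert(a: Int): String = a.toString

// nothing happens implicitly; the conversion must be requested by name
val s: String = 42.as[String]
```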

How are conversions related to extensions?

This was raised by @Ichoran at Updated Proposal: Revisiting Implicits

“I don’t feel that extensions are sufficiently unified with conversions, which can do exactly the same thing according to the docs. Either conversions shouldn’t allow you to call methods (i.e. a method call would not be a request to convert the type), or the unification should be clearer. In particular, all the extensions should be instances of Conversion…”

Martin was dismissive (“I think we are pretty much settled on the current design” of extensions). But there was some followup discussion about extension method syntax, which seemed to be shut down by Seb’s conclusion: “We should indeed forbid to call extension methods as if they were normal methods” (Updated Proposal: Revisiting Implicits). A few further posts followed, ending on January 8.

This thread of discussion was somewhat revived by @morgen-peschke in early February, in messages such as Updated Proposal: Revisiting Implicits – it wasn’t clear to me whether this was directly applicable to the other questions about the design of conversions, or whether the Convertible typeclass was just being used as an example to bring out broader issues around the overall implicits design.
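For readers following along, the overlap @Ichoran describes can be made concrete with a small sketch (the Name/Greeter names are purely illustrative): both features make a member call compile on a type that doesn’t itself define that member:

```scala
import scala.language.implicitConversions

final case class Name(value: String)

// 1. adding a member via an extension method
extension (n: Name) def greet: String = s"Hello, ${n.value}"

// 2. adding a member via a conversion to a class that carries it
class Greeter(n: Name):
  def shout: String = s"HELLO, ${n.value.toUpperCase}!"

given Conversion[Name, Greeter] = new Greeter(_)

val a = Name("Ann").greet  // resolved as an extension method
val b = Name("Ann").shout  // resolved by inserting the conversion
```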

Other conversions questions so far

Alternative implicits proposal

There is also an alternative implicits proposal submitted by Eyal Roth; the overall discussion thread on that is here:

And the section of the proposal specific to conversions is here:

For discussion

  • Is the proposal clear?
  • Should this proposal be accepted by the committee for Scala 3?
  • Should the proposal be modified before acceptance?
  • Naming. Are we to call these conversions “given conversions” now? (Currently the doc still says “implicit conversions”.)

Time frame

This topic will remain open for at least one month, to allow sufficient time to gather feedback.

6 Likes

DSLs use implicit conversions and I don’t think it’s likely to change.

3 Likes

Thanks for the comprehensive review of where things stand!

I continue to think that conversions and extension methods should either be simply syntactic sugar for each other, or should enable non-overlapping things. That is, either

  • Conversions automatically call code based on type inference or type ascription but not method names.
  • Extension methods do care about method names

or

  • Conversions automatically call code based on type inference, type ascription, and method names
  • Extension methods are syntactic sugar for creation of an anonymous conversion to a class with the methods being extended.

Everything else seems to muddle the two features together.

(For the record, I think I mildly prefer the first option–they are separate features that do separate things. But I could go either way.)

3 Likes

When dealing with DSLs, whoever defines the conversion has probably also defined the use site where the conversion is “intended” to be used. Those cases can also be encoded with a typeclass.

Despite that, I’m still in favor of keeping conversions as powerful as they are in Scala 2. The new warnings are great, but I’m a bit worried about performance, and about the loss of path-dependent conversions.

1 Like

Quill’s DSL uses implicit conversions to convert T => Quoted[T] (as well as Quoted[T] => T) so that things like this are possible:

val q = quoted { query[Person] }
run(q.map(p => p.name))

If you fully expand this, there are quote and unquote auto conversions:

val q = quoted { query[Person] }
run( quote( unquote(q).map(p => p.name) ) )

If not for auto conversions, these things would need to be done explicitly (at least the unquote part).

6 Likes

Don’t forget that these are macro conversions, which are specifically being gutted - Updated Proposal: Revisiting Implicits. For example, you’ll have to define a parent trait that throws a runtime error, and users will always be able to summon both Conversion[T, Quoted[T]] and Conversion[Quoted[T], T] as runtime values - and both of these will blow up at runtime.

There was a brief preliminary discussion of this at the SIP retreat today. Summary:

  • All committee members who expressed opinions were either strongly or weakly in favor of the proposal in general.
  • There doesn’t seem to be any committee interest in the all-conversions-must-be-explicit option.
  • Nor any interest in bringing back implicit def.
  • Martin and several others were sympathetic to the concerns of @Ichoran and others about overlap with extension methods, but no one on the committee sees a good way to address them, either.
  • @nicolasstucki thinks we should consider somehow supporting path-dependent conversions. I invite him to post here about it.

Other questions and points didn’t come up. We’ll re-discuss after this thread’s one-month period has passed.

1 Like

I was also thinking about this. One way to support them would be to change the type that marks implicit conversions from Conversion[A, B] to just Conversion, which would be a marker type that can be combined with any function type of arity 1. With this change, the example given in the proposal would look like the following:

given (Conversion & (String => Token)) {
  def apply(str: String): Token = new KeyWord(str)
}

An example with a dependent function type would look like so:

given (Conversion & ((e: Entry) => e.Key)) {
  def apply(e: Entry): e.Key = ...
}

An alternative solution could be to have a dedicated arrow syntax for Conversion. For instance, ~>. Then, in the same way that we treat the type:

(e: Entry) => e.Key

as a syntax for the type

Function1[Entry, Entry#Key] {
  def apply(e: Entry): e.Key
}

We could treat the type:

(e: Entry) ~> e.Key

as an alias to:

Conversion[Entry, Entry#Key] {
  def apply(e: Entry): e.Key
}
1 Like

The review is really helpful! Thank you; that must have taken a long time to assemble.

For the time being, I strongly object to this proposal, and I would like to attempt to restate my objections concisely:

  1. I have the feeling that this proposal will create a point of no return for givens, which are in my opinion extremely experimental and controversial, and I have many reservations about them (expressed in other discussions).

  2. Even if we accept givens, this proposal introduces a new Conversion trait / class with its own unique semantics. The introduction of “special traits” seems to me a step towards irregularity in the language, and the more special the semantics of such traits are, the more irregular the language becomes. For further discussion on this – which started as a response to this comment – see this thread.

  3. IIUC, the unique semantics of Conversion are extremely powerful and significantly differ from the rest of given instances; that is, it is (a) applied without bounds, and (b) applied without an explicit method invocation. I believe this will only introduce further confusion into the semantics of implicits / givens.

For these reasons, I believe conversions deserve their own distinct construct.


I like this idea.

I would also like to have an option for extensions to be expressed as a single stateless instance, which is somewhat equivalent to having an object with the same extension methods, but with the specific instance given as a special parameter (like self in Python). If I’m not mistaken, opaques are planned to be implemented with such “singleton extensions”?

I’m not sure this could be expressed with conversions though.
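If I understand the “singleton extensions” idea correctly, it would be something like the following sketch, where an opaque type’s whole API is a set of stateless extension methods and the wrapped value plays the role of the self parameter (the Meters example is mine, purely for illustration):

```scala
// Sketch only: an opaque type whose API is stateless extension methods,
// roughly the "singleton extensions" encoding mentioned above.
object Units:
  opaque type Meters = Double
  object Meters:
    def apply(d: Double): Meters = d
  extension (m: Meters)
    def toDouble: Double = m
    def +(other: Meters): Meters = m + other

// outside Units, a Meters value is only usable through the extensions
import Units.*
val total = (Meters(2.0) + Meters(3.0)).toDouble
```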

@SethTisue - I suggested two ways in which overlap with extension methods could be addressed. Were the drawbacks of either discussed? If yes, what was/were the sticking point(s)?

2 posts were split to a new topic: DelayedInit and StringContext

Sorry, I’m unable to improve on what I already wrote above. Is there anything else in the minutes (https://docs.scala-lang.org/sips/minutes/2020-03-11-minutes.html)? And/or, perhaps someone else from the committee will chime in.

As best I can recall, that section of the discussion didn’t last very long and didn’t get very specific.

2 posts were merged into an existing topic: DelayedInit and StringContext

Note that neither this change, nor the PR by Nicolas Stucki (https://github.com/lampepfl/dotty/pull/8523) would support implicit conversions with dependencies in the implicit parameter list:

implicit def x(a: A)(implicit tc: TC[a.T]): tc.Out

The signature of Conversion cannot support such conversions at all. There are quite a few such conversions in akka-http and in other libraries I listed in the Dotty issue.

Shouldn’t ((a: A) => (tc: TC[a.T]) ?=> tc.Out) & Conversion work? At least if dependent function types, context queries and conversions all work together correctly.

2 Likes

This topic has now been open for over a month. Last call for posts, before I close it on Apr 22.

I think it would be premature to close this thread, as the discussion hasn’t reached a conclusion yet.
I asked Martin a question about the future of path-dependent and implicitly path-dependent conversions at the recent Scala Love conference. From his answer I optimistically conclude that there is an understanding that removing implicit def without a replacement for the use cases it can cover now is hopefully not on the table, and that Scala 3 may yet experiment with encodings of Conversion between 3.0 and 3.1 until we find one that covers those use cases. In that question I asked about @nicolasstucki’s encoding specifically, but upon further thought I’ve unfortunately found that all the proposals in this thread so far are unable to replace the features of implicit def with respect to path-dependency and macro support. I’ll outline the reasons for each of them:

  • @nicolasstucki’s proposal is to encode conversions as values of opaque type Conversion[Func <: Nothing => Any] = Func.

    Pros:

    1. Supports path-dependency with conversions of type Conversion[(x: X) => x.Out]

    2. Might support implicit path-dependency with types such as Conversion[(x: X) => (tc: TC[x.type]) ?=> tc.Out]

    Cons:

    1. Does not support inheritance and given instances: the opaque type cannot be mixed in together with another typeclass in one given instance, e.g. given as MyClassX[A] with Conversion[A => B]

    2. Macro conversions are not supported at all by this encoding, partially as a result of a lack of inheritance. A macro conversion is an inline function that needs access to the tree of the argument that’s under conversion to be able to convert it. Under the original Conversion proposal, macro conversions can still be defined, albeit clumsily, as follows:

      sealed trait ConversionWorkaround[A, B] extends Conversion[A, B] {
        // workaround for error "method apply of type (i: Int @InlineParam): (0 : Int) is an inline method, must override at least one concrete method"
        override def apply(a: A): B = throw new RuntimeException("Inline method called at runtime")
      }
      
      final class ZeroIntConv extends ConversionWorkaround[Int, 0] {
        inline override def apply(inline i: Int): 0 = inline i match {
          case 0 => 0
          case i => scala.compiletime.error(s"Bad number $i")
        }
      }
      
      given as ZeroIntConv = new ZeroIntConv
      

      Notice that we can’t define a macro with inline on the outside of given, like this: inline given Conversion[Int, 0] = ..., because that defines a macro that returns a Conversion value, and that macro will not have compile-time access to the inline i: Int parameter. Therefore, if we make Conversion an opaque type, it can no longer be inherited, and then there’s no way to place an inline modifier inside the class body, on the apply method; as such, this encoding rules out macro conversions and would in fact be a step back from the current proposal.

    3. It is not clear what the final result type of a conversion with implicit parameter lists is. In Conversion[X => (TC[X] ?=> Y)], is X converted to Y, or to (TC[X] ?=> Y), or to both? How do we find out which? What about Conversion[(c: Context) ?=> c.In => (tc: TC[c.In]) ?=> (tc2: TC2[tc.Out]) ?=> tc2.Out]? Considering multiple types would likely slow down the type checker, so we probably want an encoding that is unambiguous about what part of the type is the final result type of the conversion.

  • @julienrf’s special-arrow proposal fares a bit better: it can encode path-dependent and implicitly path-dependent conversions; it supports inheritance (unlike Nicolas’ proposal) and, by extension, inline definitions; and the special arrow can mark the intended final result type when mixed among non-special arrows. E.g. in X => (TC[X] ?~> Y) and in (c: Context) ?=> c.In => (tc: TC[c.In]) ?=> (tc2: TC2[tc.Out]) ?~> tc2.Out, the last ?~> arrow marks the implicit argument on the left and the final result type on the right, with the initial type being the sole non-implicit parameter in the chain of arrows. However, it does come with cons:

    Cons:

    1. Very hard to detect which given value is a Conversion. Supporting multiple parameter lists means that not only values of ~> type are eligible, but also regular functions and implicit functions that return such values, such as the above X => (TC[X] ?~> Y) and (c: Context) ?=> c.In => (tc: TC[c.In]) ?=> (tc2: TC2[tc.Out]) ?~> tc2.Out. This may be hard to understand and hard to implement.

    2. No, or extremely clumsy access to trees of the implicit parameters in macro conversions. Consider how you would implement a macro conversion with implicit parameters in this encoding:

      given (X => TC[X] ?~> Y) {
        inline def apply(inline x: X): TC[X] ?~> Y = ...
      }
      

      Oops. We’ve just defined a macro that must return an implementation of a function TC[X] ?~> Y, but the macro itself does not have access to the tree of TC[X]; it has no parameter inline tc: TC[X]. Can we chain macros and splice an instance of ?~> with another inline apply method? Maybe. Maybe we can even pass the x parameter forward. But even if all this trickery works, it will be very slow, because we’ll be chaining macros that return more macros and cause more and more retyping cycles, and it would look completely awful for other people to read.

  • Last is @julienrf’s marker-trait proposal. It shares characteristics with the special-arrow proposal, but it fixes the issue of hard-to-detect Conversions, because the Conversion marker must be on the outside of the function type, as in Conversion & (X => (TC[X] ?=> Y)). Its other cons are:

    Cons:

    1. The final result type is ambiguous with multiple parameter lists, same as in the opaque-type proposal.

    2. It’s still just as hard, or even impossible, depending on the exact capabilities of dotc, to access the trees of the implicit arguments.

So, none of the above proposals succeeds in neatly replacing Scala 2’s implicit def; of them, the marker trait is, I think, the least problematic option. Even so, all of the above proposals will still require workarounds for the “X is an inline method, must override at least one concrete method” limitation when defining an inline apply method, because they’re all based on inheritance from a runtime function class and always require a materialized given object – there’s no way to define a macro-only conversion.

I deem support for inline conversions much more urgent than path-dependency support, because the Conversion + Macro pattern is very popular: it is used in quill, and it is the basis of the sbt and refined libraries and will continue to be, because refined needs to execute arbitrary predicates on literal trees at compile time and convert the literals that pass into refinement-typed values, transparently to the user – one of the poster use cases for the Conversion + Macro pattern. This pattern is also the basis of my company’s libraries, distage and LogStage.

Lastly, here is my own proposal: add a method-like syntax for defining conversions, as well as a special arrow (or just allow path-dependency in the infix form (a: A) Conversion a.Out, without the arrow), such as:

// complex conversion
inline conversion on (inline x: X)(using inline tc: TC[x.type]) as tc.Out = ...
// simple runtime conversion, without implicit arguments,
// multiple parameter lists _MUST_ be written as methods instead.
given MyTypeClass[A] with ((a: A) Conversion B[a.type]) 

This avoids the remaining cons of the other proposals: there is now a dedicated final result type in the conversion on syntax; trees of implicit arguments are accessible for macro conversions; Conversion can be mixed in; and complex non-inline conversions with implicit parameter lists can be converted to simpler Conversion objects by applying them and discharging the implicit parameters, as in conv(_): Conversion[A, B].

That’s all I wanted to say. I do think this discussion is far from over; at the least, I hope all the arguments will be considered before the eventual replacement of implicit def, if any.

4 Likes

Thanks for the detailed summary, that’s very helpful. Let’s push the deadline out two weeks, to May 5, and see if further responses come in.