I think it would be premature to close this thread, as the discussion hasn’t reached a conclusion yet.
I asked Martin a question about the future of path-dependent and implicitly path-dependent conversions at the recent Scala Love conference. From his answer I optimistically conclude that there is an understanding that removing `implicit def` without a replacement for the use cases it can cover now is hopefully not on the table, and that Scala 3 may yet experiment with encodings of `Conversion` between 3.0 and 3.1 until we find one that covers those use cases. In that question I asked about @nicolasstucki’s encoding specifically, but upon further thought I have unfortunately found that none of the proposals in this thread so far can replace `implicit def` with respect to path-dependency and macro support. I’ll outline the reasons for each of them:
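For concreteness, here is the kind of path-dependent conversion that `implicit def` can express today and that any replacement would need to cover (a minimal sketch of my own; the trait and method names are illustrative, not from any proposal):

```scala
trait X {
  type Out        // the result type depends on the concrete X instance
  def out: Out
}

// A Scala 2-style path-dependent implicit conversion: the result type
// `x.Out` refers to the parameter `x` itself, which the binary class
// `Conversion[A, B]` cannot express directly.
implicit def convertX(x: X): x.Out = x.out
```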
- @nicolasstucki’s proposal is to encode conversions as values of `opaque type Conversion[Func <: Nothing => Any] = Func`.

Pros:

- Supports path-dependency with conversions of type `Conversion[(x: X) => x.Out]`
- Might support implicit path-dependency with types such as `Conversion[(x: X) => (tc: TC[x.type]) ?=> tc.Out]`
Cons:

- Does not support inheritance and given instances: the opaque type cannot be mixed in together with another typeclass in one given instance, e.g. `given as MyClassX[A] with Conversion[A => B]`.
- Macro conversions are not supported at all by this encoding, partly as a result of the lack of inheritance. A macro conversion is an inline function that needs access to the tree of the argument under conversion in order to convert it. Under the original `Conversion` proposal, macro conversions can still be defined, albeit clumsily, as follows:

```scala
// workaround for error "method apply of type (i: Int @InlineParam): (0 : Int)
// is an inline method, must override at least one concrete method"
sealed trait ConversionWorkaround[A, B] extends Conversion[A, B] {
  override def apply(a: A): B = throw new RuntimeException("Inline method called at runtime")
}

final class ZeroIntConv extends ConversionWorkaround[Int, 0] {
  inline override def apply(inline i: Int): 0 = inline i match {
    case 0 => 0
    case i => scala.compiletime.error(s"Bad number $i")
  }
}

given as ZeroIntConv = new ZeroIntConv
```
Notice that we can’t define a macro with `inline` on the outside of the `given`, like `inline given Conversion[Int, 0] = ...`, because that defines a macro which returns a `Conversion` value, but the macro will not have compile-time access to the `inline i: Int` parameter. Therefore, if we make `Conversion` an opaque type, it can no longer be inherited, and then there is no way to place an `inline` modifier inside the class body on the `apply` method. As such, this encoding rules out macro conversions and would in fact be a step back from the current proposal.
- It is not clear what the result type of a conversion with implicit parameter lists is. In `Conversion[X => (TC[X] ?=> Y)]`, is `X` converted to `Y`, or to `(TC[X] ?=> Y)`, or to both? How do we find out which? What about `Conversion[(c: Context) ?=> c.In => (tc: TC[c.In]) ?=> (tc2: TC2[tc.Out]) ?=> tc2.Out]`? Considering multiple candidate types would likely slow down the type checker, so we probably want an encoding that is unambiguous about which part of the type is the final result type of the conversion.
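To spell out the ambiguity (my own hypothetical sketch in terms of the proposed opaque-type encoding; this is not compiling code):

```scala
// Hypothetical: one given under the opaque-type encoding
//   given conv: Conversion[X => (TC[X] ?=> Y)] = ...
//
// Reading 1: X is converted to Y, with the TC[X] resolved implicitly:
//   val y: Y = conv(x)(using summon[TC[X]])
// Reading 2: X is converted to the implicit function value itself:
//   val f: TC[X] ?=> Y = conv(x)
//
// Both readings are type-correct, so implicit search would have to
// consider both at every candidate call site.
```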
- @julienrf’s special-arrow proposal fares a bit better: it can encode path-dependent and implicitly path-dependent conversions, and it supports inheritance, unlike Nicolas’ proposal, and by extension inline definitions. The special arrow can mark the intended final result type when mixed among non-special arrows, e.g. in `X => (TC[X] ?~> Y)` and in `(c: Context) ?=> c.In => (tc: TC[c.In]) ?=> (tc2: TC2[tc.Out]) ?~> tc2.Out` the last `?~>` arrow marks the implicit argument on the left and the final result type on the right, with the initial type being the sole non-implicit parameter in the chain of arrows. However, it does come with drawbacks.

Cons:
- It is very hard to detect which given value is a `Conversion`. Supporting multiple parameter lists means that not only values of `~>` type are eligible, but also regular functions and implicit functions that return such values, such as the above `X => (TC[X] ?~> Y)` and `(c: Context) ?=> c.In => (tc: TC[c.In]) ?=> (tc2: TC2[tc.Out]) ?~> tc2.Out`. This may be hard to understand and hard to implement.
- No, or only extremely clumsy, access to the trees of the implicit parameters in macro conversions. Consider how you would implement a macro conversion with implicit parameters in this encoding:

```scala
given (X => TC[X] ?~> Y) {
  inline def apply(inline x: X): TC[X] ?~> Y = ...
}
```

Oops. We’ve just defined a macro that must return an implementation of a function `TC[X] ?~> Y`, but the macro itself does not have access to the tree of `TC[X]`; it has no parameter `inline tc: TC[X]`. Can we chain macros and splice an instance of `?~>` with another `inline apply` method? Maybe. Maybe we can even pass the `x` parameter forward. But even if all this trickery works, it will be very slow, because we’ll be chaining macros that return more macros, causing more and more retyping cycles, and it would look completely awful for other people to read.
- Last is @julienrf’s marker-trait proposal. It shares traits with the special-arrow proposal, and it fixes the issue of hard-to-detect `Conversion`s, because the `Conversion` marker must be on the outside of the function type, as in `Conversion & (X => (TC[X] ?=> Y))`. Its other cons are:

Cons:
- The final result type is ambiguous with multiple parameter lists, the same as in the opaque-type proposal.
- It is still just as hard, or even impossible, depending on the exact capabilities of dotc, to access the trees of the implicit arguments.
So, none of the above proposals succeed in neatly replacing Scala 2’s `implicit def`; of them, the marker trait is, I think, the least problematic option. Even so, all of the above proposals will still require workarounds for the `X is an inline method, must override at least one concrete method` limitation when defining an inline `apply` method, because they are all based on inheritance from a runtime function class and always require a materialized given object — so there’s no way to define a macro-only conversion.
I deem support for inline conversions to be much more urgently important than path-dependency support, because the Conversion + Macro pattern is very popular: it is used in `quill`, and it is the basis of the `sbt` and `refined` libraries and will continue to be, because `refined` needs to execute arbitrary predicates on literal trees at compile time and convert the literals that pass into refinement-typed values, transparently to the user — one of the poster use cases for the Conversion + Macro pattern. This pattern is also the basis of my company’s libraries, `distage` and `LogStage`.
Lastly, my own proposal is to add a method-like syntax for defining conversions, as well as a special arrow (or just allow path-dependency in infix `(a: A) Conversion a.Out` without the arrow), such as:

```scala
// complex conversion
inline conversion on (inline x: X)(using inline tc: TC[x.type]) as tc.Out = ...

// simple runtime conversion, without implicit arguments;
// multiple parameter lists _MUST_ be written as methods instead.
given MyTypeClass[A] with ((a: A) Conversion B[a.type])
```
This avoids the remaining cons of the other proposals: there is now a dedicated final result type in the `conversion on` syntax, the trees of implicit arguments are accessible to macro conversions, the `Conversion` can be mixed in, and complex non-inline conversions with implicit argument lists can be converted to simpler `Conversion` objects by applying them and discharging the implicit parameters: `conv(_): Conversion[A, B]`.
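To spell out that last point, here is my own sketch of the intended desugaring, in terms of the hypothetical `conversion on` syntax above (not compiling code):

```scala
// Hypothetical: a non-inline complex conversion
//   conversion on (a: A)(using tc: TC[A]) as B = ...
// would be eta-expanded into a plain binary Conversion whenever a
// TC[A] instance is available at the summoning site, roughly:
//   given Conversion[A, B] = a => conv(a)(using summon[TC[A]])
// so downstream code only ever sees the simple Conversion[A, B] shape.
```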
That’s all I wanted to say. I do think this discussion is far from over; at the very least, I hope all these arguments will be considered before the eventual replacement of `implicit def`, if any.