The proposed `~` syntax avoids introducing a fresh type variable and an explicit conversion on the right-hand side. I’m all for it.
Sorry, but no. This is simply wrong.
It doesn’t work the same way as `~`
but sorry, it does the same as what I quoted.
Programming in Scala, first edition:
You can think of “T <% Ordered[T]” as saying, “I can use any T, so long as T can be treated as an Ordered[T].” […] For example, even though class Int is not a subtype of Ordered[Int], you could still pass a List[Int] to maxList so long as an implicit conversion from Int to Ordered[Int] is available. Moreover, if type T happens to already be an Ordered[T], you can still pass a List[T] to maxList.
`<%` did the same as the typeclass-based approaches that were proposed. There’s a crucial difference between a call-site conversion (which is implied by `~`) and a definition-site conversion taking an extra parameter (which is what `<%` expanded to).
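For readers who never used view bounds, the crucial point is that `<%` desugared into a definition-site implicit parameter of function type, applied inside the body. A minimal sketch of that desugaring, using the book’s `maxList` example; the explicit evidence value here is my own addition, to avoid relying on Predef’s built-in views:

```scala
// What `def maxList[T <% Ordered[T]](xs: List[T])` expanded to: an extra
// implicit parameter `ev` of conversion type, applied explicitly in the body.
def maxList[T](xs: List[T])(implicit ev: T => Ordered[T]): T =
  xs.reduceLeft((a, b) => if (ev(a) < b) b else a)

// Supplying the evidence explicitly rather than relying on Predef's views:
implicit val intOrdered: Int => Ordered[Int] = n =>
  new Ordered[Int] { def compare(that: Int): Int = Integer.compare(n, that) }

println(maxList(List(1, 3, 2))) // 3
```

The conversion is resolved once, at the definition site’s parameter list, which is exactly the shape the typeclass-based proposals reproduce.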
I have two big concerns, and a small one.
The first one is that the conversion must be declared at the definition site, which is a radical change compared to implicit conversions: it is no longer actionable by me. If I’m using a third-party library that is not maintained anymore, or that lags behind, or whose maintainer simply doesn’t want to use that annotation, it seems I’m a bit out of luck. This is particularly true for “intra-company libraries”, where the motivation to make a good, generic API is often crushed by the fact that it needs time and resources. So, if you want to follow Scala 3 best practices, that gives two solutions for library maintainers (one for those who don’t want to change their source too much, with `~`, and one for those who want to follow what seems to be “the Scala 3 way”, with typeclasses), and zero for users (well, perhaps one, involving significant changes to the code base with `.inject[A]` everywhere it is needed, i.e. what I would do today with implicit classes).
Perhaps it is by design, and we prefer to have an easy path for library designers rather than for users, especially for users who were heavily relying on implicit conversions and who, in all likelihood, will just continue to add the correct imports and keep using implicit conversions.
We have a related use case, and I’m not exactly sure whether it will still be possible in the proposed future: we use implicit conversions only in tests, to make them more understandable and to limit boilerplate, i.e. within the scope of a file, to take care of construction, default values, etc. Typically: using a string where a `UserId(value: String)` is expected. In that case, it would make zero sense to accept an implicit conversion at the definition site (we really, really want to enforce `UserId` as the only possible parameter), and adding an explicit marker after the string, like `"2435-343sksre..".inject`, defeats a bit the purpose of the conversion. So here: add the correct import and keep it forever?
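A minimal sketch of that test-only pattern; `findUser` and the helper object are hypothetical, standing in for production code and a test-scoped import:

```scala
import scala.language.implicitConversions

final case class UserId(value: String)

// Production code keeps the strict signature: only UserId is accepted.
def findUser(id: UserId): String = s"user ${id.value}"

// Test-scoped helper (hypothetical): imported only in test files that
// want the shorthand, so production call sites stay strict.
object TestConversions:
  implicit def stringToUserId(s: String): UserId = UserId(s)

import TestConversions.*
val found = findUser("2435-343") // the String is converted to UserId here
println(found)
```

The point of the pattern is that the conversion’s reach is bounded by the import, which a definition-site marker on `findUser` could not express.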
The second is the searchability of `~`, which is very bad. Even in source code, it’s a very common DSL element (off the top of my head: fastparse, json4s, and lift all use it).
The last, minor, is that I’m not sure how it works with variance annotations.
Bulk extensions seem really interesting.
The talk of typeclasses is a bit of a red herring. Typeclasses cannot satisfy all use cases of implicit conversions. Martin hasn’t explained it clearly, but here’s a minimal example:
def flattenChars(x: Seq[Seq[Char]]) = ???
flattenChars(Seq("hello", Seq('w', 'o', 'r', 'l', 'd'), Array('!')))
You could certainly make this work with most of the logic in typeclasses, but at the end of the day you end up needing at least one implicit conversion to pull in the typeclass and use it.
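For concreteness, here is a runnable version of the example; the body is a plausible implementation, not from the original post. Each argument is adapted by a different standard-library implicit conversion at the call site:

```scala
def flattenChars(x: Seq[Seq[Char]]): String = x.flatten.mkString

// "hello"    is adapted via Predef.wrapString (String => WrappedString)
// Seq(...)   needs no adaptation
// Array('!') is adapted via Predef's array-to-Seq conversion
val flat = flattenChars(Seq("hello", Seq('w', 'o', 'r', 'l', 'd'), Array('!')))
println(flat) // helloworld!
```

No single typeclass instance can cover all three elements at once, because each element of the varargs needs its own conversion.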
Typeclasses work great for heterogeneous fixed-arity things like single parameters or tuples; they don’t work well with heterogeneous variable-arity things like `Seq[T]`s or `T*` varargs, which involve implicit conversions. This is also why the proposed `~` syntax, while superficially similar to `<%` view bounds, is not equivalent.
The above example isn’t hypothetical: Scalatags uses this in its most core user-facing API, as do all the other HTML generation libraries that followed its style. Ammonite uses it when calling the REPL programmatically with bound variables. uPickle uses it in its JSON-construction API. All of these involve heterogeneous implicit constructors being invoked in variadic method calls, which simply cannot be supported without implicit conversions.
It may be that new variable-arity tuple features in Scala 3 make this kind of API possible in certain cases, but I doubt they’ll be able to totally cover the wide range of use cases that implicit constructors have today.
I think even with the proposed `~` syntax you will still have to resort to the `Fragable` approach: Can We Wean Scala Off Implicit Conversions? - #37 by odersky
Yeah, I saw that. I’m sure the new generic tuple feature can satisfy some use cases, but generic tuples are a new and very novel feature that has not been used heavily. In contrast, implicit constructors are the devil we know: I’ve been living with them for a decade now. It’ll take a lot of usage and porting before I can be truly confident that we don’t need implicit conversions for heterogeneous variadic use cases, so I don’t expect that to happen until a while after Scala 3 is released (and adopted!).
This would be worthwhile by itself when used in conjunction with opaque types.
This is unlikely to do what you want with opaque types. For example, `map` on `IArray` should return an `IArray`, but if you export the `map` of the underlying `Array`, it will return an `Array`. And you can’t have the compiler blindly substitute the opaque type for the underlying type in exported methods either, since whether that is meaningful depends on what the method does.
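A minimal sketch (all names hypothetical) of that asymmetry: inside the opaque type’s scope, a hand-written extension can rewrap the result where that is meaningful, while another operation should keep the underlying result type, so no blind rewrapping rule fits both:

```scala
object Vecs:
  opaque type Vec = Array[Int]
  object Vec:
    def apply(xs: Int*): Vec = xs.toArray
  extension (v: Vec)
    // Hand-written: here rewrapping the Array result as Vec is meaningful.
    def mapV(f: Int => Int): Vec = v.map(f)
    // Here the underlying result type (Int) is exactly what we want.
    def sumV: Int = v.sum

import Vecs.*
val v2: Vec = Vec(1, 2, 3).mapV(_ + 1) // stays a Vec
println(v2.sumV) // 9
```

A bulk `export` of `Array`’s methods could not decide, per method, whether the result should be `Vec` or the raw underlying type.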
Implicit conversion enables one type to be morphed into another. It is a kind of syntax-free ad-hoc polymorphism, a feature, not a bug. A very powerful feature indeed, one that makes Scala such a fascinating language. I would argue for thinking again about how to preserve this power without uncontrolled blow-ups.
I’m tentatively against removing implicit conversions or enforcing prefixes like `~` to enable them. The reason is that benign conversions like `Int => Long` are mostly expected, and implicit conversions fill this space perfectly.
I’ve always found Scala honest in this sense: where other languages silently convert some things and not others, Scala exposes the phenomenon both by name and by functionality. The fact that `int2Integer` is an implicit conversion rather than compiler magic is refreshing! (Although, to be fair, I think it relies on Java autoboxing…? Ironic, if so.)
For this reason I don’t like the idea of limiting the mechanism by enforcing prefixes or magical language imports; those are OK for experimental features, but stable features shouldn’t need import flags. I also don’t like the direction taken with the `Conversion` trait: in my mind it’s easier to explain and reason about a feature of the language than a special case in the compiler.
I do remember there being conflicts where implicit conversions meet union types and type inference in general. Perhaps the rules for implicit conversions could be made more strict in these cases without making the mechanism itself less desirable by means of uglification?
One problem I see with the `~` approach (compared to typeclasses) is that it might not allow chaining conversions, as is currently possible with `Conversion`:
import scala.language.implicitConversions
class BigRational(x: BigInt)
given int2bigInt: Conversion[Int, BigInt] = BigInt(_)
given bigInt2rational[U](using Conversion[U, BigInt]): Conversion[U, BigRational] = BigRational(_)
val res: BigRational = 1
That would imply having `~` in the signature, which I don’t think is feasible:
given bigInt2rational: Conversion[~BigInt, BigRational] with {
  def apply(x: ~BigInt) = BigRational(x)
}
But the approach with `Conversion` will stay valid, of course. The using clause of `bigInt2rational` requires an implicit parameter of conversion type, not an implicit conversion to be inserted.
Ok, I get it: I can encode my conversions like I used to, and still have the benefits of the new scheme. Thanks.
String interpolation presents a case where implicit conversions are difficult to replace. Consider:
extension(sc: StringContext) def sql(params: Param*) = ...
sql"select a from t where b = $b and c > $c"
Supposing `b: String` and `c: Int`, what type is `Param` here? `Any` is not suitable. It would usually be a magnet type with implicit conversions from types with `given` evidence that they can be SQL values.
In any other case I would make the conversion explicit. But for string interpolation that defeats the purpose because you get something hard to read like:
sql"select a from t where b = ${Param(b)} and c > ${Param(c)}"
Maybe StringContext needs some change?
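For concreteness, a minimal sketch of the magnet-type encoding as it works today (all names hypothetical; the interpolator just returns the collected params so that the conversions are visible):

```scala
import scala.language.implicitConversions

// The magnet type: each SQL-compatible value is wrapped into a Param.
sealed trait Param
final case class StrParam(s: String) extends Param
final case class IntParam(i: Int) extends Param
object Param:
  // Implicit conversions in the companion are found at every call site.
  implicit def fromString(s: String): Param = StrParam(s)
  implicit def fromInt(i: Int): Param = IntParam(i)

extension (sc: StringContext)
  def sql(params: Param*): List[Param] = params.toList

val b = "x"
val c = 3
// Each interpolated value is implicitly converted into the Param magnet:
val ps = sql"select a from t where b = $b and c > $c"
println(ps)
```

Every `$`-hole is adapted independently, which is exactly the heterogeneous-varargs situation where a single typeclass constraint on the parameter does not suffice.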
A definition-site “up-to-conversion” marker solves this quite elegantly:
extension(sc: StringContext) def sql(params: ~Param*) = ...
sql"select a from t where b = $b and c > $c"
This indicates that implicit conversions into the magnet type are also considered. That’s actually more informative than the original definition.
Seconding this.
A bit off topic, but custom interpolation where the values require a constraint (e.g. `cats.Show` or `anorm.ToParameter`) has been a real pain point in Scala 2.
IntelliJ consistently misses them, and even the compiler can’t always pick them up. It probably goes back to the way varargs are encoded, but this is something I had to deal with this week, and it really shouldn’t have been an issue.
This did not compile:
show"Foo the $bar"
This did compile:
s"Foo the ${bar.show}"
No other changes were needed, everything was in scope; it just couldn’t find the `Show` instance that enabled the implicit conversion to `Shown` required by the `show` interpolator.
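A stripped-down sketch of the pattern behind such interpolators (hypothetical names, standing in for cats’ `Show`/`Shown`): the typeclass constraint sits on an implicit conversion into a magnet type accepted by the varargs:

```scala
import scala.language.implicitConversions

trait Show[A]:
  def show(a: A): String

// The magnet: anything with a Show instance converts into it.
final case class Shown(s: String)
object Shown:
  implicit def fromShow[A](a: A)(implicit sh: Show[A]): Shown = Shown(sh.show(a))

extension (sc: StringContext)
  def show2(args: Shown*): String =
    sc.parts.zipAll(args.map(_.s), "", "").map((a, b) => a + b).mkString

implicit val showInt: Show[Int] = (a: Int) => s"#$a"
val bar = 7
val out = show2"Foo the $bar"
println(out)
```

Resolving `fromShow` requires the compiler to run an implicit search (for `Show[A]`) inside an implicit conversion at each interpolation hole, which is the step tooling tends to miss.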
@odersky, I managed to create a close emulation of the `~` functionality using an inline implicit conversion.
The implementation does two more things:
- Uses the value before it is widened (something that a typeclass currently can’t give us without explicitly using a `Singleton` upper bound).
- Propagates the error message from a constrained conversion, so we are not just left with a type mismatch.
Scala version
The implementation assumes an updated `summonInline`, which is available in 3.0.1-nightly.
Implementation
import scala.language.implicitConversions

trait ~[Of]:
  type To <: Of
  val value: To

object ~ :
  trait Conversion[Of, From]:
    type To <: Of
    def apply(from: From): To

  ///////////////////////////////////////////////////////////////////////////////////////////
  // Reasons for workaround:
  // 1. We cannot use a transparent inline implicit conversion due to
  //    https://github.com/lampepfl/dotty/issues/12429
  // 2. We cannot directly summonInline the conversion due to
  //    https://github.com/lampepfl/dotty/issues/12415
  ///////////////////////////////////////////////////////////////////////////////////////////
  protected type Aux[Of, To0 <: Of] = ~[Of] { type To = To0 }

  protected trait WorkAround[Of, From0]:
    type To <: Of
    type From = From0

  protected trait LowPriority:
    transparent inline given [Of, From]: WorkAround[Of, From] =
      new WorkAround[Of, From] { type To = Nothing }

  protected object WorkAround extends LowPriority:
    transparent inline given [Of, From](using c: Conversion[Of, From]): WorkAround[Of, From] =
      new WorkAround[Of, From] { type To = c.To }
  ///////////////////////////////////////////////////////////////////////////////////////////

  inline implicit def conv[Of, From](inline from: From)(using wa: WorkAround[Of, from.type]): Aux[Of, wa.To] =
    val c = compiletime.summonInline[Conversion[Of, wa.From]]
    new ~[Of]:
      type To = wa.To
      val value: To = c(from).asInstanceOf[To]
Usage
trait Positive[T <: Int]
object Positive:
  inline given [T <: Int]: Positive[T] =
    inline val t = compiletime.constValue[T]
    inline if (t > 0) new Positive[T] {}
    else compiletime.error("Expected positive argument")

trait Even[T <: Int]
object Even:
  inline given [T <: Int]: Even[T] =
    inline val t = compiletime.constValue[T]
    inline if (t % 2 == 0) new Even[T] {}
    else compiletime.error("Expected even argument")

class Foo[T](value: T)

// Creating a `Foo` from an Int, only if the integer is both positive and even
transparent inline given [F <: Int](using Positive[F], Even[F]): ~.Conversion[Foo[_], F] =
  new ~.Conversion[Foo[_], F] {
    type To = Foo[F]
    def apply(from: F): Foo[F] = new Foo(from)
  }

def foo(x: ~[Foo[_]]) = x.value
Testing
val fOK: Foo[4] = foo(4)
val fBadOdd = foo(3)   //error: Expected even argument
val fBadNeg = foo(-1)  //error: Expected positive argument
Implicit conversions are still used with opaque types in the base Scala 3 libraries:
https://dotty.epfl.ch/api/scala/IArray$.html
implicit def genericWrapArray[T](arr: IArray[T]): ArraySeq[T]
Conversion from IArray to immutable.ArraySeq
Is that bad, and should it be avoided?